repo_name stringlengths 8–38 | pr_number int64 3–47.1k | pr_title stringlengths 8–175 | pr_description stringlengths 2–19.8k ⌀ | author null | date_created stringlengths 25–25 | date_merged stringlengths 25–25 | filepath stringlengths 6–136 | before_content stringlengths 54–884k ⌀ | after_content stringlengths 56–884k | pr_author stringlengths 3–21 | previous_commit stringlengths 40–40 | pr_commit stringlengths 40–40 | comment stringlengths 2–25.4k | comment_author stringlengths 3–29 | __index_level_0__ int64 0–5.1k |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
scikit-learn-contrib/category_encoders | 320 | Check array index fix | Closes #280.
Fixes #272, probably also #290, and supersedes #304.
## Proposed Changes
Replaces consecutive calls to `convert_input` (on `X`) and `convert_input_vector` (on `y`) with a single `convert_inputs`, to ensure that the indexes of the results match. This is necessary for the proper functioning of encoders that group `y` by values of `X`, and convenient otherwise.
I don't like that `convert_inputs` is one character away from `convert_input`; other suggestions welcome. One _could_ convert all remaining `convert_input` calls to `convert_inputs` with the default `y=None`, so that `convert_input` would join `convert_input_vector` in being used only inside `convert_inputs`.
I've also reduced the places where `y` gets cast to float, performing that cast only when needed (in glmm, where `statsmodels` would complain otherwise, and quantile, where `numpy.quantile` would complain otherwise).
And since `convert_input` has a deep-copy option, I've consolidated a few of the copies into `convert_inputs`; there are others that I've not consolidated, mostly because the copy happens further away in the code.
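For reference, a minimal sketch of the behavior `convert_inputs` is meant to have, reconstructed from the new unit tests below; parameter names, defaults, and error messages here are assumptions, not necessarily the shipped implementation:

```python
import numpy as np
import pandas as pd

def convert_inputs(X, y, columns=None, index=None, deep=False):
    """Convert X to a DataFrame and y to a Series with matching indexes.

    Sketch only: behavior reconstructed from the unit tests in this PR.
    """
    # Prefer an index that X or y already carries; fall back to `index`.
    if isinstance(X, (pd.DataFrame, pd.Series)):
        if isinstance(y, (pd.DataFrame, pd.Series)) and not X.index.equals(y.index):
            raise ValueError("`X` and `y` have conflicting indexes.")
        index = X.index
    elif isinstance(y, (pd.DataFrame, pd.Series)):
        index = y.index
    X = pd.DataFrame(np.asarray(X), columns=columns, index=index)
    if deep:
        X = X.copy(deep=True)
    if y is not None:
        y = np.asarray(y).ravel()
        if len(y) != len(X):
            raise ValueError("`X` and `y` must have the same number of rows.")
        # Reuse X's index so that later group-bys of y by columns of X line up.
        y = pd.Series(y, index=X.index, name="target")
    return X, y
```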
I'm not sure what needs to be done for a repository to "participate" in [Hacktoberfest](https://hacktoberfest.digitalocean.com/), but if it's as simple as a maintainer adding a label `hacktoberfest-approved` to the PR, I'd appreciate that. | null | 2021-10-24 21:33:05+00:00 | 2021-10-29 15:40:38+00:00 | tests/test_utils.py | from unittest import TestCase # or `from unittest import ...` if on Python 3.4+
from category_encoders.utils import convert_input_vector
import pandas as pd
import numpy as np
class TestUtils(TestCase):
def test_convert_input_vector(self):
index = [2, 3, 4]
result = convert_input_vector([0, 1, 0], index) # list
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector([[0, 1, 0]], index) # list of lists (row)
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector([[0], [1], [0]], index) # list of lists (column)
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector(np.array([1, 0, 1]), index) # np vector
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector(np.array([[1, 0, 1]]), index) # np matrix row
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector(np.array([[1], [0], [1]]), index) # np matrix column
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector(pd.Series([0, 1, 0], index=[4, 5, 6]), index) # series
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [4, 5, 6], 'We want to preserve the original index')
result = convert_input_vector(pd.DataFrame({'y': [0, 1, 0]}, index=[4, 5, 6]), index) # dataFrame
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [4, 5, 6], 'We want to preserve the original index')
result = convert_input_vector((0, 1, 0), index) # tuple
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector(0, [2]) # scalar
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(1, len(result))
self.assertTrue(result.index == [2])
result = convert_input_vector('a', [2]) # scalar
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(1, len(result))
self.assertTrue(result.index == [2])
# multiple columns and rows should cause an error because it is unclear which column/row to use as the target
self.assertRaises(ValueError, convert_input_vector, (pd.DataFrame({'col1': [0, 1, 0], 'col2': [1, 0, 1]})), index)
self.assertRaises(ValueError, convert_input_vector, (np.array([[0, 1], [1, 0], [0, 1]])), index)
self.assertRaises(ValueError, convert_input_vector, ([[0, 1], [1, 0], [0, 1]]), index)
# edge scenarios (it is ok to raise an exception but please, provide then a helpful exception text)
_ = convert_input_vector(pd.Series(dtype=float), [])
_ = convert_input_vector([], [])
_ = convert_input_vector([[]], [])
_ = convert_input_vector(pd.DataFrame(), [])
| from unittest import TestCase # or `from unittest import ...` if on Python 3.4+
from category_encoders.utils import convert_input_vector, convert_inputs
import pandas as pd
import numpy as np
class TestUtils(TestCase):
def test_convert_input_vector(self):
index = [2, 3, 4]
result = convert_input_vector([0, 1, 0], index) # list
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector([[0, 1, 0]], index) # list of lists (row)
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector([[0], [1], [0]], index) # list of lists (column)
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector(np.array([1, 0, 1]), index) # np vector
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector(np.array([[1, 0, 1]]), index) # np matrix row
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector(np.array([[1], [0], [1]]), index) # np matrix column
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector(pd.Series([0, 1, 0], index=[4, 5, 6]), index) # series
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [4, 5, 6], 'We want to preserve the original index')
result = convert_input_vector(pd.DataFrame({'y': [0, 1, 0]}, index=[4, 5, 6]), index) # dataFrame
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [4, 5, 6], 'We want to preserve the original index')
result = convert_input_vector((0, 1, 0), index) # tuple
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(3, len(result))
np.testing.assert_array_equal(result.index, [2, 3, 4])
result = convert_input_vector(0, [2]) # scalar
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(1, len(result))
self.assertTrue(result.index == [2])
result = convert_input_vector('a', [2]) # scalar
self.assertTrue(isinstance(result, pd.Series))
self.assertEqual(1, len(result))
self.assertTrue(result.index == [2])
# multiple columns and rows should cause an error because it is unclear which column/row to use as the target
self.assertRaises(ValueError, convert_input_vector, (pd.DataFrame({'col1': [0, 1, 0], 'col2': [1, 0, 1]})), index)
self.assertRaises(ValueError, convert_input_vector, (np.array([[0, 1], [1, 0], [0, 1]])), index)
self.assertRaises(ValueError, convert_input_vector, ([[0, 1], [1, 0], [0, 1]]), index)
# edge scenarios (it is ok to raise an exception but please, provide then a helpful exception text)
_ = convert_input_vector(pd.Series(dtype=float), [])
_ = convert_input_vector([], [])
_ = convert_input_vector([[]], [])
_ = convert_input_vector(pd.DataFrame(), [])
def test_convert_inputs(self):
aindex = [2, 4, 5]
bindex = [1, 3, 4]
alist = [5, 3, 6]
aseries = pd.Series(alist, aindex)
barray = np.array([[7, 9], [4, 3], [0, 1]])
bframe = pd.DataFrame(barray, bindex)
X, y = convert_inputs(barray, alist)
self.assertTrue(isinstance(X, pd.DataFrame))
self.assertTrue(isinstance(y, pd.Series))
self.assertEqual((3, 2), X.shape)
self.assertEqual(3, len(y))
self.assertTrue(list(X.index) == list(y.index) == [0, 1, 2])
X, y = convert_inputs(barray, alist, index=aindex)
self.assertTrue(isinstance(X, pd.DataFrame))
self.assertTrue(isinstance(y, pd.Series))
self.assertEqual((3, 2), X.shape)
self.assertEqual(3, len(y))
self.assertTrue(list(X.index) == list(y.index) == aindex)
X, y = convert_inputs(barray, aseries, index=bindex)
self.assertTrue(isinstance(X, pd.DataFrame))
self.assertTrue(isinstance(y, pd.Series))
self.assertEqual((3, 2), X.shape)
self.assertEqual(3, len(y))
self.assertTrue(list(X.index) == list(y.index) == aindex)
X, y = convert_inputs(bframe, alist, index=[3, 1, 4])
self.assertTrue(isinstance(X, pd.DataFrame))
self.assertTrue(isinstance(y, pd.Series))
self.assertEqual((3, 2), X.shape)
self.assertEqual(3, len(y))
self.assertTrue(list(X.index) == list(y.index) == bindex)
self.assertRaises(ValueError, convert_inputs, bframe, aseries)
# shape mismatch
self.assertRaises(ValueError, convert_inputs, barray, [1, 2, 3, 4])
| bmreiniger | 866bf143fb71db0de60d32e608393c1a3b8a71a7 | cc0c4b9ab66a52979b37f791836bea1241046b8c | why are you testing `convert_input_vector` here? shouldn't you rather test if an error is thrown by `convert_inputs` if indices of `X` and `y` differ? | PaulWestenthanner | 136 |
scikit-learn-contrib/category_encoders | 320 | Check array index fix | (same PR description as the previous row) | null | 2021-10-24 21:33:05+00:00 | 2021-10-29 15:40:38+00:00 | tests/test_utils.py | (same before_content as the previous row) | (same after_content as the previous row) | bmreiniger | 866bf143fb71db0de60d32e608393c1a3b8a71a7 | cc0c4b9ab66a52979b37f791836bea1241046b8c | oh, oops, absolutely | bmreiniger | 137
scikit-learn-contrib/category_encoders | 303 | Quantile encoder | This PR (#302) implements two methods from a recently published paper at a conference (MDAI 2021).
> [Quantile Encoder: Tackling High Cardinality Categorical Features in Regression Problems (Carlos Mougan, David Masip, Jordi Nin, Oriol Pujol)](https://arxiv.org/abs/2105.13783)
Encoding methods (the full technical development can be followed in the paper):
- Quantile Encoder (see the sketch below)
Tests are implemented and pass.
Scikit-learn API semantics.
Docs are extended.
If I missed something or you have any comments, please let me know :)
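For intuition, a toy sketch of the core idea: replace each category with a smoothed quantile of the target within that category. The function name, the smoothing parameter `m`, and the blending formula follow my reading of the paper's m-estimate-style regularization; the actual encoder in this PR adds the usual fit/transform plumbing and handling of unseen categories.

```python
import pandas as pd

def quantile_encode(x: pd.Series, y: pd.Series, quantile: float = 0.5, m: float = 1.0) -> pd.Series:
    """Replace each category in x with a smoothed quantile of y (toy sketch)."""
    global_q = y.quantile(quantile)
    stats = y.groupby(x).agg(n="count", q=lambda s: s.quantile(quantile))
    # Blend the per-category quantile with the global one, weighted by
    # the category count n and the regularization strength m.
    encoding = (stats["q"] * stats["n"] + global_q * m) / (stats["n"] + m)
    return x.map(encoding)

x = pd.Series(["a", "a", "b", "b", "b", "c"])
y = pd.Series([1.0, 3.0, 10.0, 20.0, 30.0, 5.0])
print(quantile_encode(x, y))  # the rare category "c" is pulled toward the global median
```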
| null | 2021-05-31 06:10:53+00:00 | 2021-10-20 06:44:04+00:00 | docs/source/index.rst | .. Category Encoders documentation master file, created by
sphinx-quickstart on Sat Jan 16 13:08:19 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Category Encoders
=================
A set of scikit-learn-style transformers for encoding categorical variables into numeric with different
techniques. While ordinal, one-hot, and hashing encoders have similar equivalents in the existing scikit-learn version, the
transformers in this library all share a few useful properties:
* First-class support for pandas dataframes as an input (and optionally as output)
* Can explicitly configure which columns in the data are encoded by name or index, or infer non-numeric columns regardless of input type
* Can optionally drop any columns with very low variance based on the training set
* Portability: train a transformer on data, pickle it, reuse it later and get the same thing out.
* Full compatibility with sklearn pipelines, input an array-like dataset like any other transformer
Usage
-----
install as:
.. code-block:: python
pip install category_encoders
or
.. code-block:: python
conda install -c conda-forge category_encoders
To use:
.. code-block:: python
import category_encoders as ce
encoder = ce.BackwardDifferenceEncoder(cols=[...])
encoder = ce.BaseNEncoder(cols=[...])
encoder = ce.BinaryEncoder(cols=[...])
encoder = ce.CatBoostEncoder(cols=[...])
encoder = ce.CountEncoder(cols=[...])
encoder = ce.GLMMEncoder(cols=[...])
encoder = ce.HashingEncoder(cols=[...])
encoder = ce.HelmertEncoder(cols=[...])
encoder = ce.JamesSteinEncoder(cols=[...])
encoder = ce.LeaveOneOutEncoder(cols=[...])
encoder = ce.MEstimateEncoder(cols=[...])
encoder = ce.OneHotEncoder(cols=[...])
encoder = ce.OrdinalEncoder(cols=[...])
encoder = ce.SumEncoder(cols=[...])
encoder = ce.PolynomialEncoder(cols=[...])
encoder = ce.TargetEncoder(cols=[...])
encoder = ce.WOEEncoder(cols=[...])
encoder.fit(X, y)
X_cleaned = encoder.transform(X_dirty)
All of these are fully compatible sklearn transformers, so they can be used in pipelines or in your existing scripts. If
the cols parameter isn't passed, every non-numeric column will be converted. See below for detailed documentation
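A sketch of the pipeline usage mentioned above (the estimator and column name are illustrative, and `X`, `y`, and `X_new` are assumed to be defined as in the snippet before):

.. code-block:: python

    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    import category_encoders as ce

    pipe = Pipeline([
        ('encode', ce.TargetEncoder(cols=['color'])),
        ('classify', LogisticRegression()),
    ])
    pipe.fit(X, y)                     # the encoder is fit together with the model
    predictions = pipe.predict(X_new)  # X_new is encoded on the fly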
Contents:
.. toctree::
:maxdepth: 3
backward_difference
basen
binary
catboost
count
glmm
hashing
helmert
jamesstein
leaveoneout
mestimate
onehot
ordinal
polynomial
sum
targetencoder
woe
wrapper
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| .. Category Encoders documentation master file, created by
sphinx-quickstart on Sat Jan 16 13:08:19 2016.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Category Encoders
=================
A set of scikit-learn-style transformers for encoding categorical variables into numeric with different
techniques. While ordinal, one-hot, and hashing encoders have similar equivalents in the existing scikit-learn version, the
transformers in this library all share a few useful properties:
* First-class support for pandas dataframes as an input (and optionally as output)
* Can explicitly configure which columns in the data are encoded by name or index, or infer non-numeric columns regardless of input type
* Can optionally drop any columns with very low variance based on the training set
* Portability: train a transformer on data, pickle it, reuse it later and get the same thing out.
* Full compatibility with sklearn pipelines, input an array-like dataset like any other transformer
Usage
-----
install as:
.. code-block:: python
pip install category_encoders
or
.. code-block:: python
conda install -c conda-forge category_encoders
To use:
.. code-block:: python
import category_encoders as ce
encoder = ce.BackwardDifferenceEncoder(cols=[...])
encoder = ce.BaseNEncoder(cols=[...])
encoder = ce.BinaryEncoder(cols=[...])
encoder = ce.CatBoostEncoder(cols=[...])
encoder = ce.CountEncoder(cols=[...])
encoder = ce.GLMMEncoder(cols=[...])
encoder = ce.HashingEncoder(cols=[...])
encoder = ce.HelmertEncoder(cols=[...])
encoder = ce.JamesSteinEncoder(cols=[...])
encoder = ce.LeaveOneOutEncoder(cols=[...])
encoder = ce.MEstimateEncoder(cols=[...])
encoder = ce.OneHotEncoder(cols=[...])
encoder = ce.OrdinalEncoder(cols=[...])
encoder = ce.SumEncoder(cols=[...])
encoder = ce.PolynomialEncoder(cols=[...])
encoder = ce.TargetEncoder(cols=[...])
encoder = ce.WOEEncoder(cols=[...])
encoder = ce.QuantileEncoder(cols=[...])
encoder.fit(X, y)
X_cleaned = encoder.transform(X_dirty)
All of these are fully compatible sklearn transformers, so they can be used in pipelines or in your existing scripts. If
the cols parameter isn't passed, every non-numeric column will be converted. See below for detailed documentation
Contents:
.. toctree::
:maxdepth: 3
backward_difference
basen
binary
catboost
count
glmm
hashing
helmert
jamesstein
leaveoneout
mestimate
onehot
ordinal
polynomial
sum
targetencoder
woe
wrapper
quantile
summary
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
| cmougan | d85c9c5fe1e68e05c92680631c31ff8cfc5505c5 | 66d89c216e14f919cec7437b2c9b0a2850f698ce | the summary encoder is only in sktools and not here | PaulWestenthanner | 138 |
mit-han-lab/bevfusion | 156 | [debug] fix bug for nuscenes dataset | [debug] fix bug for nuscenes dataset
Maybe 'sample_idx' is needed anyway, since the same error happened in #11 and #16. | null | 2022-09-29 09:41:51+00:00 | 2022-10-10 16:12:11+00:00 | mmdet3d/datasets/nuscenes_dataset.py | import tempfile
from os import path as osp
from typing import Any, Dict
import mmcv
import numpy as np
import pyquaternion
import torch
from nuscenes.utils.data_classes import Box as NuScenesBox
from pyquaternion import Quaternion
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from .custom_3d import Custom3DDataset
@DATASETS.register_module()
class NuScenesDataset(Custom3DDataset):
r"""NuScenes Dataset.
This class serves as the API for experiments on the NuScenes Dataset.
Please refer to `NuScenes Dataset <https://www.nuscenes.org/download>`_
for data downloading.
Args:
ann_file (str): Path of annotation file.
pipeline (list[dict], optional): Pipeline used for data processing.
Defaults to None.
dataset_root (str): Path of dataset root.
classes (tuple[str], optional): Classes used in the dataset.
Defaults to None.
load_interval (int, optional): Interval of loading the dataset. It is
used to uniformly sample the dataset. Defaults to 1.
with_velocity (bool, optional): Whether to include velocity prediction
into the experiments. Defaults to True.
modality (dict, optional): Modality to specify the sensor data used
as input. Defaults to None.
box_type_3d (str, optional): Type of 3D box of this dataset.
Based on the `box_type_3d`, the dataset will encapsulate the box
to its original format and then convert them to `box_type_3d`.
Defaults to 'LiDAR' in this dataset. Available options include:
- 'LiDAR': Box in LiDAR coordinates.
- 'Depth': Box in depth coordinates, usually for indoor dataset.
- 'Camera': Box in camera coordinates.
filter_empty_gt (bool, optional): Whether to filter empty GT.
Defaults to True.
test_mode (bool, optional): Whether the dataset is in test mode.
Defaults to False.
eval_version (str, optional): Configuration version of evaluation.
Defaults to 'detection_cvpr_2019'.
use_valid_flag (bool): Whether to use `use_valid_flag` key in the info
file as mask to filter gt_boxes and gt_names. Defaults to False.
"""
NameMapping = {
"movable_object.barrier": "barrier",
"vehicle.bicycle": "bicycle",
"vehicle.bus.bendy": "bus",
"vehicle.bus.rigid": "bus",
"vehicle.car": "car",
"vehicle.construction": "construction_vehicle",
"vehicle.motorcycle": "motorcycle",
"human.pedestrian.adult": "pedestrian",
"human.pedestrian.child": "pedestrian",
"human.pedestrian.construction_worker": "pedestrian",
"human.pedestrian.police_officer": "pedestrian",
"movable_object.trafficcone": "traffic_cone",
"vehicle.trailer": "trailer",
"vehicle.truck": "truck",
}
DefaultAttribute = {
"car": "vehicle.parked",
"pedestrian": "pedestrian.moving",
"trailer": "vehicle.parked",
"truck": "vehicle.parked",
"bus": "vehicle.moving",
"motorcycle": "cycle.without_rider",
"construction_vehicle": "vehicle.parked",
"bicycle": "cycle.without_rider",
"barrier": "",
"traffic_cone": "",
}
AttrMapping = {
"cycle.with_rider": 0,
"cycle.without_rider": 1,
"pedestrian.moving": 2,
"pedestrian.standing": 3,
"pedestrian.sitting_lying_down": 4,
"vehicle.moving": 5,
"vehicle.parked": 6,
"vehicle.stopped": 7,
}
AttrMapping_rev = [
"cycle.with_rider",
"cycle.without_rider",
"pedestrian.moving",
"pedestrian.standing",
"pedestrian.sitting_lying_down",
"vehicle.moving",
"vehicle.parked",
"vehicle.stopped",
]
# https://github.com/nutonomy/nuscenes-devkit/blob/57889ff20678577025326cfc24e57424a829be0a/python-sdk/nuscenes/eval/detection/evaluate.py#L222 # noqa
ErrNameMapping = {
"trans_err": "mATE",
"scale_err": "mASE",
"orient_err": "mAOE",
"vel_err": "mAVE",
"attr_err": "mAAE",
}
CLASSES = (
"car",
"truck",
"trailer",
"bus",
"construction_vehicle",
"bicycle",
"motorcycle",
"pedestrian",
"traffic_cone",
"barrier",
)
def __init__(
self,
ann_file,
pipeline=None,
dataset_root=None,
object_classes=None,
map_classes=None,
load_interval=1,
with_velocity=True,
modality=None,
box_type_3d="LiDAR",
filter_empty_gt=True,
test_mode=False,
eval_version="detection_cvpr_2019",
use_valid_flag=False,
) -> None:
self.load_interval = load_interval
self.use_valid_flag = use_valid_flag
super().__init__(
dataset_root=dataset_root,
ann_file=ann_file,
pipeline=pipeline,
classes=object_classes,
modality=modality,
box_type_3d=box_type_3d,
filter_empty_gt=filter_empty_gt,
test_mode=test_mode,
)
self.map_classes = map_classes
self.with_velocity = with_velocity
self.eval_version = eval_version
from nuscenes.eval.detection.config import config_factory
self.eval_detection_configs = config_factory(self.eval_version)
if self.modality is None:
self.modality = dict(
use_camera=False,
use_lidar=True,
use_radar=False,
use_map=False,
use_external=False,
)
def get_cat_ids(self, idx):
"""Get category distribution of single scene.
Args:
idx (int): Index of the data_info.
Returns:
list[int]: A list with the ids of the categories whose boxes
appear in the current scene.
"""
info = self.data_infos[idx]
if self.use_valid_flag:
mask = info["valid_flag"]
gt_names = set(info["gt_names"][mask])
else:
gt_names = set(info["gt_names"])
cat_ids = []
for name in gt_names:
if name in self.CLASSES:
cat_ids.append(self.cat2id[name])
return cat_ids
def load_annotations(self, ann_file):
"""Load annotations from ann_file.
Args:
ann_file (str): Path of the annotation file.
Returns:
list[dict]: List of annotations sorted by timestamps.
"""
data = mmcv.load(ann_file)
data_infos = list(sorted(data["infos"], key=lambda e: e["timestamp"]))
data_infos = data_infos[:: self.load_interval]
self.metadata = data["metadata"]
self.version = self.metadata["version"]
return data_infos
def get_data_info(self, index: int) -> Dict[str, Any]:
info = self.data_infos[index]
data = dict(
token=info["token"],
lidar_path=info["lidar_path"],
sweeps=info["sweeps"],
timestamp=info["timestamp"],
location=info["location"],
)
# ego to global transform
ego2global = np.eye(4).astype(np.float32)
ego2global[:3, :3] = Quaternion(info["ego2global_rotation"]).rotation_matrix
ego2global[:3, 3] = info["ego2global_translation"]
data["ego2global"] = ego2global
# lidar to ego transform
lidar2ego = np.eye(4).astype(np.float32)
lidar2ego[:3, :3] = Quaternion(info["lidar2ego_rotation"]).rotation_matrix
lidar2ego[:3, 3] = info["lidar2ego_translation"]
data["lidar2ego"] = lidar2ego
if self.modality["use_camera"]:
data["image_paths"] = []
data["lidar2camera"] = []
data["lidar2image"] = []
data["camera2ego"] = []
data["camera_intrinsics"] = []
data["camera2lidar"] = []
for _, camera_info in info["cams"].items():
data["image_paths"].append(camera_info["data_path"])
# lidar to camera transform
lidar2camera_r = np.linalg.inv(camera_info["sensor2lidar_rotation"])
lidar2camera_t = (
camera_info["sensor2lidar_translation"] @ lidar2camera_r.T
)
lidar2camera_rt = np.eye(4).astype(np.float32)
lidar2camera_rt[:3, :3] = lidar2camera_r.T
lidar2camera_rt[3, :3] = -lidar2camera_t
data["lidar2camera"].append(lidar2camera_rt.T)
# camera intrinsics
camera_intrinsics = np.eye(4).astype(np.float32)
camera_intrinsics[:3, :3] = camera_info["camera_intrinsics"]
data["camera_intrinsics"].append(camera_intrinsics)
# lidar to image transform
lidar2image = camera_intrinsics @ lidar2camera_rt.T
data["lidar2image"].append(lidar2image)
# camera to ego transform
camera2ego = np.eye(4).astype(np.float32)
camera2ego[:3, :3] = Quaternion(
camera_info["sensor2ego_rotation"]
).rotation_matrix
camera2ego[:3, 3] = camera_info["sensor2ego_translation"]
data["camera2ego"].append(camera2ego)
# camera to lidar transform
camera2lidar = np.eye(4).astype(np.float32)
camera2lidar[:3, :3] = camera_info["sensor2lidar_rotation"]
camera2lidar[:3, 3] = camera_info["sensor2lidar_translation"]
data["camera2lidar"].append(camera2lidar)
annos = self.get_ann_info(index)
data["ann_info"] = annos
return data
def get_ann_info(self, index):
"""Get annotation info according to the given index.
Args:
index (int): Index of the annotation data to get.
Returns:
dict: Annotation information consists of the following keys:
- gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): \
3D ground truth bboxes
- gt_labels_3d (np.ndarray): Labels of ground truths.
- gt_names (list[str]): Class names of ground truths.
"""
info = self.data_infos[index]
# filter out bbox containing no points
if self.use_valid_flag:
mask = info["valid_flag"]
else:
mask = info["num_lidar_pts"] > 0
gt_bboxes_3d = info["gt_boxes"][mask]
gt_names_3d = info["gt_names"][mask]
gt_labels_3d = []
for cat in gt_names_3d:
if cat in self.CLASSES:
gt_labels_3d.append(self.CLASSES.index(cat))
else:
gt_labels_3d.append(-1)
gt_labels_3d = np.array(gt_labels_3d)
if self.with_velocity:
gt_velocity = info["gt_velocity"][mask]
nan_mask = np.isnan(gt_velocity[:, 0])
gt_velocity[nan_mask] = [0.0, 0.0]
gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_velocity], axis=-1)
# the nuscenes box center is [0.5, 0.5, 0.5], we change it to be
# the same as KITTI (0.5, 0.5, 0)
# haotian: this is an important change: from 0.5, 0.5, 0.5 -> 0.5, 0.5, 0
gt_bboxes_3d = LiDARInstance3DBoxes(
gt_bboxes_3d, box_dim=gt_bboxes_3d.shape[-1], origin=(0.5, 0.5, 0)
).convert_to(self.box_mode_3d)
anns_results = dict(
gt_bboxes_3d=gt_bboxes_3d,
gt_labels_3d=gt_labels_3d,
gt_names=gt_names_3d,
)
return anns_results
def _format_bbox(self, results, jsonfile_prefix=None):
"""Convert the results to the standard format.
Args:
results (list[dict]): Testing results of the dataset.
jsonfile_prefix (str): The prefix of the output jsonfile.
You can specify the output directory/filename by
modifying the jsonfile_prefix. Default: None.
Returns:
str: Path of the output json file.
"""
nusc_annos = {}
mapped_class_names = self.CLASSES
print("Start to convert detection format...")
for sample_id, det in enumerate(mmcv.track_iter_progress(results)):
annos = []
boxes = output_to_nusc_box(det)
sample_token = self.data_infos[sample_id]["token"]
boxes = lidar_nusc_box_to_global(
self.data_infos[sample_id],
boxes,
mapped_class_names,
self.eval_detection_configs,
self.eval_version,
)
for i, box in enumerate(boxes):
name = mapped_class_names[box.label]
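# pick a nuScenes attribute from the predicted speed: above 0.2 m/s the
# box counts as moving (where a moving attribute exists for the class)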
if np.sqrt(box.velocity[0] ** 2 + box.velocity[1] ** 2) > 0.2:
if name in [
"car",
"construction_vehicle",
"bus",
"truck",
"trailer",
]:
attr = "vehicle.moving"
elif name in ["bicycle", "motorcycle"]:
attr = "cycle.with_rider"
else:
attr = NuScenesDataset.DefaultAttribute[name]
else:
if name in ["pedestrian"]:
attr = "pedestrian.standing"
elif name in ["bus"]:
attr = "vehicle.stopped"
else:
attr = NuScenesDataset.DefaultAttribute[name]
nusc_anno = dict(
sample_token=sample_token,
translation=box.center.tolist(),
size=box.wlh.tolist(),
rotation=box.orientation.elements.tolist(),
velocity=box.velocity[:2].tolist(),
detection_name=name,
detection_score=box.score,
attribute_name=attr,
)
annos.append(nusc_anno)
nusc_annos[sample_token] = annos
nusc_submissions = {
"meta": self.modality,
"results": nusc_annos,
}
mmcv.mkdir_or_exist(jsonfile_prefix)
res_path = osp.join(jsonfile_prefix, "results_nusc.json")
print("Results writes to", res_path)
mmcv.dump(nusc_submissions, res_path)
return res_path
def _evaluate_single(
self,
result_path,
logger=None,
metric="bbox",
result_name="pts_bbox",
):
"""Evaluation for a single model in nuScenes protocol.
Args:
result_path (str): Path of the result file.
logger (logging.Logger | str | None): Logger used for printing
related information during evaluation. Default: None.
metric (str): Metric name used for evaluation. Default: 'bbox'.
result_name (str): Result name in the metric prefix.
Default: 'pts_bbox'.
Returns:
dict: Dictionary of evaluation details.
"""
from nuscenes import NuScenes
from nuscenes.eval.detection.evaluate import DetectionEval
output_dir = osp.join(*osp.split(result_path)[:-1])
nusc = NuScenes(version=self.version, dataroot=self.dataset_root, verbose=False)
eval_set_map = {
"v1.0-mini": "mini_val",
"v1.0-trainval": "val",
}
nusc_eval = DetectionEval(
nusc,
config=self.eval_detection_configs,
result_path=result_path,
eval_set=eval_set_map[self.version],
output_dir=output_dir,
verbose=False,
)
nusc_eval.main(render_curves=False)
# record metrics
metrics = mmcv.load(osp.join(output_dir, "metrics_summary.json"))
detail = dict()
for name in self.CLASSES:
for k, v in metrics["label_aps"][name].items():
val = float("{:.4f}".format(v))
detail["object/{}_ap_dist_{}".format(name, k)] = val
for k, v in metrics["label_tp_errors"][name].items():
val = float("{:.4f}".format(v))
detail["object/{}_{}".format(name, k)] = val
for k, v in metrics["tp_errors"].items():
val = float("{:.4f}".format(v))
detail["object/{}".format(self.ErrNameMapping[k])] = val
detail["object/nds"] = metrics["nd_score"]
detail["object/map"] = metrics["mean_ap"]
return detail
def format_results(self, results, jsonfile_prefix=None):
"""Format the results to json (standard format for COCO evaluation).
Args:
results (list[dict]): Testing results of the dataset.
jsonfile_prefix (str | None): The prefix of json files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
tuple: Returns (result_files, tmp_dir), where `result_files` is a \
dict containing the json filepaths, `tmp_dir` is the temporary \
directory created for saving json files when \
`jsonfile_prefix` is not specified.
"""
assert isinstance(results, list), "results must be a list"
assert len(results) == len(
self
), "The length of results is not equal to the dataset len: {} != {}".format(
len(results), len(self)
)
if jsonfile_prefix is None:
tmp_dir = tempfile.TemporaryDirectory()
jsonfile_prefix = osp.join(tmp_dir.name, "results")
else:
tmp_dir = None
result_files = self._format_bbox(results, jsonfile_prefix)
return result_files, tmp_dir
def evaluate_map(self, results):
thresholds = torch.tensor([0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65])
num_classes = len(self.map_classes)
num_thresholds = len(thresholds)
tp = torch.zeros(num_classes, num_thresholds)
fp = torch.zeros(num_classes, num_thresholds)
fn = torch.zeros(num_classes, num_thresholds)
for result in results:
pred = result["masks_bev"]
label = result["gt_masks_bev"]
pred = pred.detach().reshape(num_classes, -1)
label = label.detach().bool().reshape(num_classes, -1)
pred = pred[:, :, None] >= thresholds
label = label[:, :, None]
tp += (pred & label).sum(dim=1)
fp += (pred & ~label).sum(dim=1)
fn += (~pred & label).sum(dim=1)
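# per-class IoU at each threshold: TP / (TP + FP + FN); the small
# epsilon below guards against classes that never appear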
ious = tp / (tp + fp + fn + 1e-7)
metrics = {}
for index, name in enumerate(self.map_classes):
metrics[f"map/{name}/iou@max"] = ious[index].max().item()
for threshold, iou in zip(thresholds, ious[index]):
metrics[f"map/{name}/iou@{threshold.item():.2f}"] = iou.item()
metrics["map/mean/iou@max"] = ious.max(dim=1).values.mean().item()
return metrics
def evaluate(
self,
results,
metric="bbox",
jsonfile_prefix=None,
result_names=["pts_bbox"],
**kwargs,
):
"""Evaluation in nuScenes protocol.
Args:
results (list[dict]): Testing results of the dataset.
metric (str | list[str]): Metrics to be evaluated.
jsonfile_prefix (str | None): The prefix of json files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
dict[str, float]: Results of each evaluation metric.
"""
metrics = {}
if "masks_bev" in results[0]:
metrics.update(self.evaluate_map(results))
if "boxes_3d" in results[0]:
result_files, tmp_dir = self.format_results(results, jsonfile_prefix)
if isinstance(result_files, dict):
for name in result_names:
print("Evaluating bboxes of {}".format(name))
ret_dict = self._evaluate_single(result_files[name])
metrics.update(ret_dict)
elif isinstance(result_files, str):
metrics.update(self._evaluate_single(result_files))
if tmp_dir is not None:
tmp_dir.cleanup()
return metrics
def output_to_nusc_box(detection):
"""Convert the output to the box class in the nuScenes.
Args:
detection (dict): Detection results.
- boxes_3d (:obj:`BaseInstance3DBoxes`): Detection bbox.
- scores_3d (torch.Tensor): Detection scores.
- labels_3d (torch.Tensor): Predicted box labels.
Returns:
list[:obj:`NuScenesBox`]: List of standard NuScenesBoxes.
"""
box3d = detection["boxes_3d"]
scores = detection["scores_3d"].numpy()
labels = detection["labels_3d"].numpy()
box_gravity_center = box3d.gravity_center.numpy()
box_dims = box3d.dims.numpy()
box_yaw = box3d.yaw.numpy()
# TODO: check whether this is necessary
# with dir_offset & dir_limit in the head
box_yaw = -box_yaw - np.pi / 2
box_list = []
for i in range(len(box3d)):
quat = pyquaternion.Quaternion(axis=[0, 0, 1], radians=box_yaw[i])
velocity = (*box3d.tensor[i, 7:9], 0.0)
# velo_val = np.linalg.norm(box3d[i, 7:9])
# velo_ori = box3d[i, 6]
# velocity = (
# velo_val * np.cos(velo_ori), velo_val * np.sin(velo_ori), 0.0)
box = NuScenesBox(
box_gravity_center[i],
box_dims[i],
quat,
label=labels[i],
score=scores[i],
velocity=velocity,
)
box_list.append(box)
return box_list
def lidar_nusc_box_to_global(
info, boxes, classes, eval_configs, eval_version="detection_cvpr_2019"
):
"""Convert the box from ego to global coordinate.
Args:
info (dict): Info for a specific sample data, including the
calibration information.
boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes.
classes (list[str]): Mapped classes in the evaluation.
eval_configs : Evaluation configuration object.
eval_version (str): Evaluation version.
Default: 'detection_cvpr_2019'
Returns:
list: List of standard NuScenesBoxes in the global
coordinate.
"""
box_list = []
for box in boxes:
# Move box to ego vehicle coord system
box.rotate(pyquaternion.Quaternion(info["lidar2ego_rotation"]))
box.translate(np.array(info["lidar2ego_translation"]))
# filter det in ego.
cls_range_map = eval_configs.class_range
radius = np.linalg.norm(box.center[:2], 2)
det_range = cls_range_map[classes[box.label]]
if radius > det_range:
continue
# Move box to global coord system
box.rotate(pyquaternion.Quaternion(info["ego2global_rotation"]))
box.translate(np.array(info["ego2global_translation"]))
box_list.append(box)
return box_list
| import tempfile
from os import path as osp
from typing import Any, Dict
import mmcv
import numpy as np
import pyquaternion
import torch
from nuscenes.utils.data_classes import Box as NuScenesBox
from pyquaternion import Quaternion
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from .custom_3d import Custom3DDataset
@DATASETS.register_module()
class NuScenesDataset(Custom3DDataset):
r"""NuScenes Dataset.
This class serves as the API for experiments on the NuScenes Dataset.
Please refer to `NuScenes Dataset <https://www.nuscenes.org/download>`_
for data downloading.
Args:
ann_file (str): Path of annotation file.
pipeline (list[dict], optional): Pipeline used for data processing.
Defaults to None.
dataset_root (str): Path of dataset root.
classes (tuple[str], optional): Classes used in the dataset.
Defaults to None.
load_interval (int, optional): Interval of loading the dataset. It is
used to uniformly sample the dataset. Defaults to 1.
with_velocity (bool, optional): Whether to include velocity prediction
into the experiments. Defaults to True.
modality (dict, optional): Modality to specify the sensor data used
as input. Defaults to None.
box_type_3d (str, optional): Type of 3D box of this dataset.
Based on the `box_type_3d`, the dataset will encapsulate the box
to its original format and then convert them to `box_type_3d`.
Defaults to 'LiDAR' in this dataset. Available options include:
- 'LiDAR': Box in LiDAR coordinates.
- 'Depth': Box in depth coordinates, usually for indoor dataset.
- 'Camera': Box in camera coordinates.
filter_empty_gt (bool, optional): Whether to filter empty GT.
Defaults to True.
test_mode (bool, optional): Whether the dataset is in test mode.
Defaults to False.
eval_version (str, optional): Configuration version of evaluation.
Defaults to 'detection_cvpr_2019'.
use_valid_flag (bool): Whether to use `use_valid_flag` key in the info
file as mask to filter gt_boxes and gt_names. Defaults to False.
"""
NameMapping = {
"movable_object.barrier": "barrier",
"vehicle.bicycle": "bicycle",
"vehicle.bus.bendy": "bus",
"vehicle.bus.rigid": "bus",
"vehicle.car": "car",
"vehicle.construction": "construction_vehicle",
"vehicle.motorcycle": "motorcycle",
"human.pedestrian.adult": "pedestrian",
"human.pedestrian.child": "pedestrian",
"human.pedestrian.construction_worker": "pedestrian",
"human.pedestrian.police_officer": "pedestrian",
"movable_object.trafficcone": "traffic_cone",
"vehicle.trailer": "trailer",
"vehicle.truck": "truck",
}
DefaultAttribute = {
"car": "vehicle.parked",
"pedestrian": "pedestrian.moving",
"trailer": "vehicle.parked",
"truck": "vehicle.parked",
"bus": "vehicle.moving",
"motorcycle": "cycle.without_rider",
"construction_vehicle": "vehicle.parked",
"bicycle": "cycle.without_rider",
"barrier": "",
"traffic_cone": "",
}
AttrMapping = {
"cycle.with_rider": 0,
"cycle.without_rider": 1,
"pedestrian.moving": 2,
"pedestrian.standing": 3,
"pedestrian.sitting_lying_down": 4,
"vehicle.moving": 5,
"vehicle.parked": 6,
"vehicle.stopped": 7,
}
AttrMapping_rev = [
"cycle.with_rider",
"cycle.without_rider",
"pedestrian.moving",
"pedestrian.standing",
"pedestrian.sitting_lying_down",
"vehicle.moving",
"vehicle.parked",
"vehicle.stopped",
]
# https://github.com/nutonomy/nuscenes-devkit/blob/57889ff20678577025326cfc24e57424a829be0a/python-sdk/nuscenes/eval/detection/evaluate.py#L222 # noqa
ErrNameMapping = {
"trans_err": "mATE",
"scale_err": "mASE",
"orient_err": "mAOE",
"vel_err": "mAVE",
"attr_err": "mAAE",
}
CLASSES = (
"car",
"truck",
"trailer",
"bus",
"construction_vehicle",
"bicycle",
"motorcycle",
"pedestrian",
"traffic_cone",
"barrier",
)
def __init__(
self,
ann_file,
pipeline=None,
dataset_root=None,
object_classes=None,
map_classes=None,
load_interval=1,
with_velocity=True,
modality=None,
box_type_3d="LiDAR",
filter_empty_gt=True,
test_mode=False,
eval_version="detection_cvpr_2019",
use_valid_flag=False,
) -> None:
self.load_interval = load_interval
self.use_valid_flag = use_valid_flag
super().__init__(
dataset_root=dataset_root,
ann_file=ann_file,
pipeline=pipeline,
classes=object_classes,
modality=modality,
box_type_3d=box_type_3d,
filter_empty_gt=filter_empty_gt,
test_mode=test_mode,
)
self.map_classes = map_classes
self.with_velocity = with_velocity
self.eval_version = eval_version
from nuscenes.eval.detection.config import config_factory
self.eval_detection_configs = config_factory(self.eval_version)
if self.modality is None:
self.modality = dict(
use_camera=False,
use_lidar=True,
use_radar=False,
use_map=False,
use_external=False,
)
def get_cat_ids(self, idx):
"""Get category distribution of single scene.
Args:
idx (int): Index of the data_info.
Returns:
list[int]: A list with the ids of the categories whose boxes
appear in the current scene.
"""
info = self.data_infos[idx]
if self.use_valid_flag:
mask = info["valid_flag"]
gt_names = set(info["gt_names"][mask])
else:
gt_names = set(info["gt_names"])
cat_ids = []
for name in gt_names:
if name in self.CLASSES:
cat_ids.append(self.cat2id[name])
return cat_ids
def load_annotations(self, ann_file):
"""Load annotations from ann_file.
Args:
ann_file (str): Path of the annotation file.
Returns:
list[dict]: List of annotations sorted by timestamps.
"""
data = mmcv.load(ann_file)
data_infos = list(sorted(data["infos"], key=lambda e: e["timestamp"]))
data_infos = data_infos[:: self.load_interval]
self.metadata = data["metadata"]
self.version = self.metadata["version"]
return data_infos
def get_data_info(self, index: int) -> Dict[str, Any]:
info = self.data_infos[index]
data = dict(
token=info["token"],
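# expose the sample token under 'sample_idx' as well: several pipeline
# steps look this key up and otherwise raise a KeyError (see #11, #16)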
sample_idx=info['token'],
lidar_path=info["lidar_path"],
sweeps=info["sweeps"],
timestamp=info["timestamp"],
location=info["location"],
)
# ego to global transform
ego2global = np.eye(4).astype(np.float32)
ego2global[:3, :3] = Quaternion(info["ego2global_rotation"]).rotation_matrix
ego2global[:3, 3] = info["ego2global_translation"]
data["ego2global"] = ego2global
# lidar to ego transform
lidar2ego = np.eye(4).astype(np.float32)
lidar2ego[:3, :3] = Quaternion(info["lidar2ego_rotation"]).rotation_matrix
lidar2ego[:3, 3] = info["lidar2ego_translation"]
data["lidar2ego"] = lidar2ego
if self.modality["use_camera"]:
data["image_paths"] = []
data["lidar2camera"] = []
data["lidar2image"] = []
data["camera2ego"] = []
data["camera_intrinsics"] = []
data["camera2lidar"] = []
for _, camera_info in info["cams"].items():
data["image_paths"].append(camera_info["data_path"])
# lidar to camera transform
lidar2camera_r = np.linalg.inv(camera_info["sensor2lidar_rotation"])
lidar2camera_t = (
camera_info["sensor2lidar_translation"] @ lidar2camera_r.T
)
lidar2camera_rt = np.eye(4).astype(np.float32)
lidar2camera_rt[:3, :3] = lidar2camera_r.T
lidar2camera_rt[3, :3] = -lidar2camera_t
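# note: the rotation/translation are stored transposed above, so the
# `.T` on the next line yields the homogeneous lidar-to-camera matrix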
data["lidar2camera"].append(lidar2camera_rt.T)
# camera intrinsics
camera_intrinsics = np.eye(4).astype(np.float32)
camera_intrinsics[:3, :3] = camera_info["camera_intrinsics"]
data["camera_intrinsics"].append(camera_intrinsics)
# lidar to image transform
lidar2image = camera_intrinsics @ lidar2camera_rt.T
data["lidar2image"].append(lidar2image)
# camera to ego transform
camera2ego = np.eye(4).astype(np.float32)
camera2ego[:3, :3] = Quaternion(
camera_info["sensor2ego_rotation"]
).rotation_matrix
camera2ego[:3, 3] = camera_info["sensor2ego_translation"]
data["camera2ego"].append(camera2ego)
# camera to lidar transform
camera2lidar = np.eye(4).astype(np.float32)
camera2lidar[:3, :3] = camera_info["sensor2lidar_rotation"]
camera2lidar[:3, 3] = camera_info["sensor2lidar_translation"]
data["camera2lidar"].append(camera2lidar)
annos = self.get_ann_info(index)
data["ann_info"] = annos
return data
def get_ann_info(self, index):
"""Get annotation info according to the given index.
Args:
index (int): Index of the annotation data to get.
Returns:
dict: Annotation information consists of the following keys:
- gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): \
3D ground truth bboxes
- gt_labels_3d (np.ndarray): Labels of ground truths.
- gt_names (list[str]): Class names of ground truths.
"""
info = self.data_infos[index]
# filter out bbox containing no points
if self.use_valid_flag:
mask = info["valid_flag"]
else:
mask = info["num_lidar_pts"] > 0
gt_bboxes_3d = info["gt_boxes"][mask]
gt_names_3d = info["gt_names"][mask]
gt_labels_3d = []
for cat in gt_names_3d:
if cat in self.CLASSES:
gt_labels_3d.append(self.CLASSES.index(cat))
else:
gt_labels_3d.append(-1)
gt_labels_3d = np.array(gt_labels_3d)
if self.with_velocity:
gt_velocity = info["gt_velocity"][mask]
nan_mask = np.isnan(gt_velocity[:, 0])
gt_velocity[nan_mask] = [0.0, 0.0]
gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_velocity], axis=-1)
# the nuscenes box center is [0.5, 0.5, 0.5], we change it to be
# the same as KITTI (0.5, 0.5, 0)
# haotian: this is an important change: from 0.5, 0.5, 0.5 -> 0.5, 0.5, 0
gt_bboxes_3d = LiDARInstance3DBoxes(
gt_bboxes_3d, box_dim=gt_bboxes_3d.shape[-1], origin=(0.5, 0.5, 0)
).convert_to(self.box_mode_3d)
anns_results = dict(
gt_bboxes_3d=gt_bboxes_3d,
gt_labels_3d=gt_labels_3d,
gt_names=gt_names_3d,
)
return anns_results
def _format_bbox(self, results, jsonfile_prefix=None):
"""Convert the results to the standard format.
Args:
results (list[dict]): Testing results of the dataset.
jsonfile_prefix (str): The prefix of the output jsonfile.
You can specify the output directory/filename by
modifying the jsonfile_prefix. Default: None.
Returns:
str: Path of the output json file.
"""
nusc_annos = {}
mapped_class_names = self.CLASSES
print("Start to convert detection format...")
for sample_id, det in enumerate(mmcv.track_iter_progress(results)):
annos = []
boxes = output_to_nusc_box(det)
sample_token = self.data_infos[sample_id]["token"]
boxes = lidar_nusc_box_to_global(
self.data_infos[sample_id],
boxes,
mapped_class_names,
self.eval_detection_configs,
self.eval_version,
)
for i, box in enumerate(boxes):
name = mapped_class_names[box.label]
if np.sqrt(box.velocity[0] ** 2 + box.velocity[1] ** 2) > 0.2:
if name in [
"car",
"construction_vehicle",
"bus",
"truck",
"trailer",
]:
attr = "vehicle.moving"
elif name in ["bicycle", "motorcycle"]:
attr = "cycle.with_rider"
else:
attr = NuScenesDataset.DefaultAttribute[name]
else:
if name in ["pedestrian"]:
attr = "pedestrian.standing"
elif name in ["bus"]:
attr = "vehicle.stopped"
else:
attr = NuScenesDataset.DefaultAttribute[name]
nusc_anno = dict(
sample_token=sample_token,
translation=box.center.tolist(),
size=box.wlh.tolist(),
rotation=box.orientation.elements.tolist(),
velocity=box.velocity[:2].tolist(),
detection_name=name,
detection_score=box.score,
attribute_name=attr,
)
annos.append(nusc_anno)
nusc_annos[sample_token] = annos
nusc_submissions = {
"meta": self.modality,
"results": nusc_annos,
}
mmcv.mkdir_or_exist(jsonfile_prefix)
res_path = osp.join(jsonfile_prefix, "results_nusc.json")
print("Results writes to", res_path)
mmcv.dump(nusc_submissions, res_path)
return res_path
def _evaluate_single(
self,
result_path,
logger=None,
metric="bbox",
result_name="pts_bbox",
):
"""Evaluation for a single model in nuScenes protocol.
Args:
result_path (str): Path of the result file.
logger (logging.Logger | str | None): Logger used for printing
related information during evaluation. Default: None.
metric (str): Metric name used for evaluation. Default: 'bbox'.
result_name (str): Result name in the metric prefix.
Default: 'pts_bbox'.
Returns:
dict: Dictionary of evaluation details.
"""
from nuscenes import NuScenes
from nuscenes.eval.detection.evaluate import DetectionEval
output_dir = osp.join(*osp.split(result_path)[:-1])
nusc = NuScenes(version=self.version, dataroot=self.dataset_root, verbose=False)
eval_set_map = {
"v1.0-mini": "mini_val",
"v1.0-trainval": "val",
}
nusc_eval = DetectionEval(
nusc,
config=self.eval_detection_configs,
result_path=result_path,
eval_set=eval_set_map[self.version],
output_dir=output_dir,
verbose=False,
)
nusc_eval.main(render_curves=False)
# record metrics
metrics = mmcv.load(osp.join(output_dir, "metrics_summary.json"))
detail = dict()
for name in self.CLASSES:
for k, v in metrics["label_aps"][name].items():
val = float("{:.4f}".format(v))
detail["object/{}_ap_dist_{}".format(name, k)] = val
for k, v in metrics["label_tp_errors"][name].items():
val = float("{:.4f}".format(v))
detail["object/{}_{}".format(name, k)] = val
for k, v in metrics["tp_errors"].items():
val = float("{:.4f}".format(v))
detail["object/{}".format(self.ErrNameMapping[k])] = val
detail["object/nds"] = metrics["nd_score"]
detail["object/map"] = metrics["mean_ap"]
return detail
def format_results(self, results, jsonfile_prefix=None):
"""Format the results to json (standard format for COCO evaluation).
Args:
results (list[dict]): Testing results of the dataset.
jsonfile_prefix (str | None): The prefix of json files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
tuple: Returns (result_files, tmp_dir), where `result_files` is a \
                dict containing the json filepaths, `tmp_dir` is the temporary \
directory created for saving json files when \
`jsonfile_prefix` is not specified.
"""
assert isinstance(results, list), "results must be a list"
assert len(results) == len(
self
), "The length of results is not equal to the dataset len: {} != {}".format(
len(results), len(self)
)
if jsonfile_prefix is None:
tmp_dir = tempfile.TemporaryDirectory()
jsonfile_prefix = osp.join(tmp_dir.name, "results")
else:
tmp_dir = None
result_files = self._format_bbox(results, jsonfile_prefix)
return result_files, tmp_dir
def evaluate_map(self, results):
thresholds = torch.tensor([0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65])
num_classes = len(self.map_classes)
num_thresholds = len(thresholds)
tp = torch.zeros(num_classes, num_thresholds)
fp = torch.zeros(num_classes, num_thresholds)
fn = torch.zeros(num_classes, num_thresholds)
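        # Broadcasting turns the (num_classes, N) score tensor into
        # (num_classes, N, T) boolean masks, one per threshold, so TP/FP/FN
        # are accumulated for every class and threshold in a single pass.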
for result in results:
pred = result["masks_bev"]
label = result["gt_masks_bev"]
pred = pred.detach().reshape(num_classes, -1)
label = label.detach().bool().reshape(num_classes, -1)
pred = pred[:, :, None] >= thresholds
label = label[:, :, None]
tp += (pred & label).sum(dim=1)
fp += (pred & ~label).sum(dim=1)
fn += (~pred & label).sum(dim=1)
ious = tp / (tp + fp + fn + 1e-7)
metrics = {}
for index, name in enumerate(self.map_classes):
metrics[f"map/{name}/iou@max"] = ious[index].max().item()
for threshold, iou in zip(thresholds, ious[index]):
metrics[f"map/{name}/iou@{threshold.item():.2f}"] = iou.item()
metrics["map/mean/iou@max"] = ious.max(dim=1).values.mean().item()
return metrics
def evaluate(
self,
results,
metric="bbox",
jsonfile_prefix=None,
result_names=["pts_bbox"],
**kwargs,
):
"""Evaluation in nuScenes protocol.
Args:
results (list[dict]): Testing results of the dataset.
metric (str | list[str]): Metrics to be evaluated.
jsonfile_prefix (str | None): The prefix of json files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
dict[str, float]: Results of each evaluation metric.
"""
metrics = {}
if "masks_bev" in results[0]:
metrics.update(self.evaluate_map(results))
if "boxes_3d" in results[0]:
result_files, tmp_dir = self.format_results(results, jsonfile_prefix)
if isinstance(result_files, dict):
for name in result_names:
print("Evaluating bboxes of {}".format(name))
ret_dict = self._evaluate_single(result_files[name])
metrics.update(ret_dict)
elif isinstance(result_files, str):
metrics.update(self._evaluate_single(result_files))
if tmp_dir is not None:
tmp_dir.cleanup()
return metrics
def output_to_nusc_box(detection):
"""Convert the output to the box class in the nuScenes.
Args:
detection (dict): Detection results.
- boxes_3d (:obj:`BaseInstance3DBoxes`): Detection bbox.
- scores_3d (torch.Tensor): Detection scores.
- labels_3d (torch.Tensor): Predicted box labels.
Returns:
list[:obj:`NuScenesBox`]: List of standard NuScenesBoxes.
"""
box3d = detection["boxes_3d"]
scores = detection["scores_3d"].numpy()
labels = detection["labels_3d"].numpy()
box_gravity_center = box3d.gravity_center.numpy()
box_dims = box3d.dims.numpy()
box_yaw = box3d.yaw.numpy()
# TODO: check whether this is necessary
# with dir_offset & dir_limit in the head
box_yaw = -box_yaw - np.pi / 2
box_list = []
for i in range(len(box3d)):
quat = pyquaternion.Quaternion(axis=[0, 0, 1], radians=box_yaw[i])
velocity = (*box3d.tensor[i, 7:9], 0.0)
# velo_val = np.linalg.norm(box3d[i, 7:9])
# velo_ori = box3d[i, 6]
# velocity = (
# velo_val * np.cos(velo_ori), velo_val * np.sin(velo_ori), 0.0)
box = NuScenesBox(
box_gravity_center[i],
box_dims[i],
quat,
label=labels[i],
score=scores[i],
velocity=velocity,
)
box_list.append(box)
return box_list
def lidar_nusc_box_to_global(
info, boxes, classes, eval_configs, eval_version="detection_cvpr_2019"
):
"""Convert the box from ego to global coordinate.
Args:
info (dict): Info for a specific sample data, including the
calibration information.
boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes.
classes (list[str]): Mapped classes in the evaluation.
eval_configs : Evaluation configuration object.
eval_version (str): Evaluation version.
Default: 'detection_cvpr_2019'
Returns:
list: List of standard NuScenesBoxes in the global
coordinate.
"""
box_list = []
for box in boxes:
# Move box to ego vehicle coord system
box.rotate(pyquaternion.Quaternion(info["lidar2ego_rotation"]))
box.translate(np.array(info["lidar2ego_translation"]))
        # filter detections in the ego frame by the class-specific eval range.
cls_range_map = eval_configs.class_range
radius = np.linalg.norm(box.center[:2], 2)
det_range = cls_range_map[classes[box.label]]
if radius > det_range:
continue
# Move box to global coord system
box.rotate(pyquaternion.Quaternion(info["ego2global_rotation"]))
box.translate(np.array(info["ego2global_translation"]))
box_list.append(box)
return box_list
| kevincao91 | 2bf96604feab90edd18591a43bee1b9c41c26002 | 0e5b9edbc135bf297f6e3323249f7165b232c925 | This change solved the KeyError: 'sample_idx' raised by image_idx = example["sample_idx"] for me. | kkangshen | 0
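For context, here is a minimal sketch of the workaround this comment appears to describe (an illustration only, assuming mmdet3d-style info dicts like those in the adjacent records): exposing the sample token under a "sample_idx" key inside get_data_info, so that downstream code reading example["sample_idx"] no longer raises a KeyError.
class PatchedDataset:
    def __init__(self, data_infos):
        self.data_infos = data_infos
    def get_data_info(self, index):
        # Hypothetical excerpt mirroring the fix referenced above.
        info = self.data_infos[index]
        return dict(
            token=info["token"],
            sample_idx=info["token"],  # added key: avoids KeyError: 'sample_idx'
            lidar_path=info["lidar_path"],
        )
demo = PatchedDataset([{"token": "abc123", "lidar_path": "/data/lidar.bin"}])
assert demo.get_data_info(0)["sample_idx"] == "abc123"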
mit-han-lab/bevfusion | 150 | Add training details | In this PR, we add training details for the following models:
- camera-only detection
- camera-only BEV map segmentation
- LiDAR-only detection
- LiDAR-only BEV map segmentation
The results are also slightly improved compared with our previous release. | null | 2022-09-26 15:30:22+00:00 | 2022-09-26 22:24:39+00:00 | mmdet3d/datasets/nuscenes_dataset.py | import tempfile
from os import path as osp
from typing import Any, Dict
import mmcv
import numpy as np
import pyquaternion
import torch
from nuscenes.utils.data_classes import Box as NuScenesBox
from pyquaternion import Quaternion
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from .custom_3d import Custom3DDataset
@DATASETS.register_module()
class NuScenesDataset(Custom3DDataset):
r"""NuScenes Dataset.
This class serves as the API for experiments on the NuScenes Dataset.
Please refer to `NuScenes Dataset <https://www.nuscenes.org/download>`_
for data downloading.
Args:
ann_file (str): Path of annotation file.
pipeline (list[dict], optional): Pipeline used for data processing.
Defaults to None.
dataset_root (str): Path of dataset root.
classes (tuple[str], optional): Classes used in the dataset.
Defaults to None.
load_interval (int, optional): Interval of loading the dataset. It is
used to uniformly sample the dataset. Defaults to 1.
        with_velocity (bool, optional): Whether to include velocity prediction
            in the experiments. Defaults to True.
        modality (dict, optional): Modality to specify the sensor data used
            as input. Defaults to None.
        box_type_3d (str, optional): Type of 3D box of this dataset.
            Based on the `box_type_3d`, the dataset will encapsulate the box
            in its original format and then convert it to `box_type_3d`.
            Defaults to 'LiDAR' in this dataset. Available options include:
- 'LiDAR': Box in LiDAR coordinates.
- 'Depth': Box in depth coordinates, usually for indoor dataset.
- 'Camera': Box in camera coordinates.
filter_empty_gt (bool, optional): Whether to filter empty GT.
Defaults to True.
test_mode (bool, optional): Whether the dataset is in test mode.
Defaults to False.
        eval_version (str, optional): Configuration version of evaluation.
            Defaults to 'detection_cvpr_2019'.
        use_valid_flag (bool): Whether to use the `use_valid_flag` key in the
            info file as a mask to filter gt_boxes and gt_names. Defaults to False.
"""
NameMapping = {
"movable_object.barrier": "barrier",
"vehicle.bicycle": "bicycle",
"vehicle.bus.bendy": "bus",
"vehicle.bus.rigid": "bus",
"vehicle.car": "car",
"vehicle.construction": "construction_vehicle",
"vehicle.motorcycle": "motorcycle",
"human.pedestrian.adult": "pedestrian",
"human.pedestrian.child": "pedestrian",
"human.pedestrian.construction_worker": "pedestrian",
"human.pedestrian.police_officer": "pedestrian",
"movable_object.trafficcone": "traffic_cone",
"vehicle.trailer": "trailer",
"vehicle.truck": "truck",
}
DefaultAttribute = {
"car": "vehicle.parked",
"pedestrian": "pedestrian.moving",
"trailer": "vehicle.parked",
"truck": "vehicle.parked",
"bus": "vehicle.moving",
"motorcycle": "cycle.without_rider",
"construction_vehicle": "vehicle.parked",
"bicycle": "cycle.without_rider",
"barrier": "",
"traffic_cone": "",
}
AttrMapping = {
"cycle.with_rider": 0,
"cycle.without_rider": 1,
"pedestrian.moving": 2,
"pedestrian.standing": 3,
"pedestrian.sitting_lying_down": 4,
"vehicle.moving": 5,
"vehicle.parked": 6,
"vehicle.stopped": 7,
}
AttrMapping_rev = [
"cycle.with_rider",
"cycle.without_rider",
"pedestrian.moving",
"pedestrian.standing",
"pedestrian.sitting_lying_down",
"vehicle.moving",
"vehicle.parked",
"vehicle.stopped",
]
# https://github.com/nutonomy/nuscenes-devkit/blob/57889ff20678577025326cfc24e57424a829be0a/python-sdk/nuscenes/eval/detection/evaluate.py#L222 # noqa
ErrNameMapping = {
"trans_err": "mATE",
"scale_err": "mASE",
"orient_err": "mAOE",
"vel_err": "mAVE",
"attr_err": "mAAE",
}
CLASSES = (
"car",
"truck",
"trailer",
"bus",
"construction_vehicle",
"bicycle",
"motorcycle",
"pedestrian",
"traffic_cone",
"barrier",
)
def __init__(
self,
ann_file,
pipeline=None,
dataset_root=None,
object_classes=None,
map_classes=None,
load_interval=1,
with_velocity=True,
modality=None,
box_type_3d="LiDAR",
filter_empty_gt=True,
test_mode=False,
eval_version="detection_cvpr_2019",
use_valid_flag=False,
) -> None:
self.load_interval = load_interval
self.use_valid_flag = use_valid_flag
super().__init__(
dataset_root=dataset_root,
ann_file=ann_file,
pipeline=pipeline,
classes=object_classes,
modality=modality,
box_type_3d=box_type_3d,
filter_empty_gt=filter_empty_gt,
test_mode=test_mode,
)
self.map_classes = map_classes
self.with_velocity = with_velocity
self.eval_version = eval_version
from nuscenes.eval.detection.config import config_factory
self.eval_detection_configs = config_factory(self.eval_version)
if self.modality is None:
self.modality = dict(
use_camera=False,
use_lidar=True,
use_radar=False,
use_map=False,
use_external=False,
)
def get_cat_ids(self, idx):
"""Get category distribution of single scene.
Args:
idx (int): Index of the data_info.
Returns:
            list[int]: Category ids of the ground-truth classes present
                in this scene.
"""
info = self.data_infos[idx]
if self.use_valid_flag:
mask = info["valid_flag"]
gt_names = set(info["gt_names"][mask])
else:
gt_names = set(info["gt_names"])
cat_ids = []
for name in gt_names:
if name in self.CLASSES:
cat_ids.append(self.cat2id[name])
return cat_ids
def load_annotations(self, ann_file):
"""Load annotations from ann_file.
Args:
ann_file (str): Path of the annotation file.
Returns:
list[dict]: List of annotations sorted by timestamps.
"""
data = mmcv.load(ann_file)
data_infos = list(sorted(data["infos"], key=lambda e: e["timestamp"]))
data_infos = data_infos[:: self.load_interval]
self.metadata = data["metadata"]
self.version = self.metadata["version"]
return data_infos
def get_data_info(self, index: int) -> Dict[str, Any]:
info = self.data_infos[index]
data = dict(
token=info["token"],
            sample_idx=info["token"],
lidar_path=info["lidar_path"],
sweeps=info["sweeps"],
timestamp=info["timestamp"],
location=info["location"],
)
# ego to global transform
ego2global = np.eye(4).astype(np.float32)
ego2global[:3, :3] = Quaternion(info["ego2global_rotation"]).rotation_matrix
ego2global[:3, 3] = info["ego2global_translation"]
data["ego2global"] = ego2global
# lidar to ego transform
lidar2ego = np.eye(4).astype(np.float32)
lidar2ego[:3, :3] = Quaternion(info["lidar2ego_rotation"]).rotation_matrix
lidar2ego[:3, 3] = info["lidar2ego_translation"]
data["lidar2ego"] = lidar2ego
if self.modality["use_camera"]:
data["image_paths"] = []
data["lidar2camera"] = []
data["lidar2image"] = []
data["camera2ego"] = []
data["camera_intrinsics"] = []
for _, camera_info in info["cams"].items():
data["image_paths"].append(camera_info["data_path"])
# lidar to camera transform
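                # (camera->lidar extrinsics are S, u; the inverse has
                # R = S^-1 and t = -R @ u, assembled in transposed form
                # below and restored by the trailing .T)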
lidar2camera_r = np.linalg.inv(camera_info["sensor2lidar_rotation"])
lidar2camera_t = (
camera_info["sensor2lidar_translation"] @ lidar2camera_r.T
)
lidar2camera_rt = np.eye(4).astype(np.float32)
lidar2camera_rt[:3, :3] = lidar2camera_r.T
lidar2camera_rt[3, :3] = -lidar2camera_t
data["lidar2camera"].append(lidar2camera_rt.T)
# camera intrinsics
camera_intrinsics = np.eye(4).astype(np.float32)
camera_intrinsics[:3, :3] = camera_info["camera_intrinsics"]
data["camera_intrinsics"].append(camera_intrinsics)
# lidar to image transform
lidar2image = camera_intrinsics @ lidar2camera_rt.T
data["lidar2image"].append(lidar2image)
# camera to ego transform
camera2ego = np.eye(4).astype(np.float32)
camera2ego[:3, :3] = Quaternion(
camera_info["sensor2ego_rotation"]
).rotation_matrix
camera2ego[:3, 3] = camera_info["sensor2ego_translation"]
data["camera2ego"].append(camera2ego)
# TODO (Haotian): test set submission.
annos = self.get_ann_info(index)
data["ann_info"] = annos
return data
def get_ann_info(self, index):
"""Get annotation info according to the given index.
Args:
index (int): Index of the annotation data to get.
Returns:
dict: Annotation information consists of the following keys:
- gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): \
3D ground truth bboxes
- gt_labels_3d (np.ndarray): Labels of ground truths.
- gt_names (list[str]): Class names of ground truths.
"""
info = self.data_infos[index]
# filter out bbox containing no points
if self.use_valid_flag:
mask = info["valid_flag"]
else:
mask = info["num_lidar_pts"] > 0
gt_bboxes_3d = info["gt_boxes"][mask]
gt_names_3d = info["gt_names"][mask]
gt_labels_3d = []
for cat in gt_names_3d:
if cat in self.CLASSES:
gt_labels_3d.append(self.CLASSES.index(cat))
else:
gt_labels_3d.append(-1)
gt_labels_3d = np.array(gt_labels_3d)
if self.with_velocity:
gt_velocity = info["gt_velocity"][mask]
nan_mask = np.isnan(gt_velocity[:, 0])
gt_velocity[nan_mask] = [0.0, 0.0]
gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_velocity], axis=-1)
# the nuscenes box center is [0.5, 0.5, 0.5], we change it to be
# the same as KITTI (0.5, 0.5, 0)
# haotian: this is an important change: from 0.5, 0.5, 0.5 -> 0.5, 0.5, 0
gt_bboxes_3d = LiDARInstance3DBoxes(
gt_bboxes_3d, box_dim=gt_bboxes_3d.shape[-1], origin=(0.5, 0.5, 0)
).convert_to(self.box_mode_3d)
anns_results = dict(
gt_bboxes_3d=gt_bboxes_3d,
gt_labels_3d=gt_labels_3d,
gt_names=gt_names_3d,
)
return anns_results
def _format_bbox(self, results, jsonfile_prefix=None):
"""Convert the results to the standard format.
Args:
results (list[dict]): Testing results of the dataset.
jsonfile_prefix (str): The prefix of the output jsonfile.
You can specify the output directory/filename by
modifying the jsonfile_prefix. Default: None.
Returns:
str: Path of the output json file.
"""
nusc_annos = {}
mapped_class_names = self.CLASSES
print("Start to convert detection format...")
for sample_id, det in enumerate(mmcv.track_iter_progress(results)):
annos = []
boxes = output_to_nusc_box(det)
sample_token = self.data_infos[sample_id]["token"]
boxes = lidar_nusc_box_to_global(
self.data_infos[sample_id],
boxes,
mapped_class_names,
self.eval_detection_configs,
self.eval_version,
)
for i, box in enumerate(boxes):
name = mapped_class_names[box.label]
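                # Attribute heuristic: planar speed above 0.2 (presumably m/s)
                # maps vehicles to "vehicle.moving" and cycles to
                # "cycle.with_rider"; slower boxes fall back to standing or
                # stopped states or the per-class default attribute.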
if np.sqrt(box.velocity[0] ** 2 + box.velocity[1] ** 2) > 0.2:
if name in [
"car",
"construction_vehicle",
"bus",
"truck",
"trailer",
]:
attr = "vehicle.moving"
elif name in ["bicycle", "motorcycle"]:
attr = "cycle.with_rider"
else:
attr = NuScenesDataset.DefaultAttribute[name]
else:
if name in ["pedestrian"]:
attr = "pedestrian.standing"
elif name in ["bus"]:
attr = "vehicle.stopped"
else:
attr = NuScenesDataset.DefaultAttribute[name]
nusc_anno = dict(
sample_token=sample_token,
translation=box.center.tolist(),
size=box.wlh.tolist(),
rotation=box.orientation.elements.tolist(),
velocity=box.velocity[:2].tolist(),
detection_name=name,
detection_score=box.score,
attribute_name=attr,
)
annos.append(nusc_anno)
nusc_annos[sample_token] = annos
nusc_submissions = {
"meta": self.modality,
"results": nusc_annos,
}
mmcv.mkdir_or_exist(jsonfile_prefix)
res_path = osp.join(jsonfile_prefix, "results_nusc.json")
print("Results writes to", res_path)
mmcv.dump(nusc_submissions, res_path)
return res_path
def _evaluate_single(
self,
result_path,
logger=None,
metric="bbox",
result_name="pts_bbox",
):
"""Evaluation for a single model in nuScenes protocol.
Args:
result_path (str): Path of the result file.
logger (logging.Logger | str | None): Logger used for printing
related information during evaluation. Default: None.
metric (str): Metric name used for evaluation. Default: 'bbox'.
result_name (str): Result name in the metric prefix.
Default: 'pts_bbox'.
Returns:
dict: Dictionary of evaluation details.
"""
from nuscenes import NuScenes
from nuscenes.eval.detection.evaluate import DetectionEval
output_dir = osp.join(*osp.split(result_path)[:-1])
nusc = NuScenes(version=self.version, dataroot=self.dataset_root, verbose=False)
eval_set_map = {
"v1.0-mini": "mini_val",
"v1.0-trainval": "val",
}
nusc_eval = DetectionEval(
nusc,
config=self.eval_detection_configs,
result_path=result_path,
eval_set=eval_set_map[self.version],
output_dir=output_dir,
verbose=False,
)
nusc_eval.main(render_curves=False)
# record metrics
metrics = mmcv.load(osp.join(output_dir, "metrics_summary.json"))
detail = dict()
for name in self.CLASSES:
for k, v in metrics["label_aps"][name].items():
val = float("{:.4f}".format(v))
detail["object/{}_ap_dist_{}".format(name, k)] = val
for k, v in metrics["label_tp_errors"][name].items():
val = float("{:.4f}".format(v))
detail["object/{}_{}".format(name, k)] = val
for k, v in metrics["tp_errors"].items():
val = float("{:.4f}".format(v))
detail["object/{}".format(self.ErrNameMapping[k])] = val
detail["object/nds"] = metrics["nd_score"]
detail["object/map"] = metrics["mean_ap"]
return detail
def format_results(self, results, jsonfile_prefix=None):
"""Format the results to json (standard format for COCO evaluation).
Args:
results (list[dict]): Testing results of the dataset.
jsonfile_prefix (str | None): The prefix of json files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
tuple: Returns (result_files, tmp_dir), where `result_files` is a \
                dict containing the json filepaths, `tmp_dir` is the temporary \
directory created for saving json files when \
`jsonfile_prefix` is not specified.
"""
assert isinstance(results, list), "results must be a list"
assert len(results) == len(
self
), "The length of results is not equal to the dataset len: {} != {}".format(
len(results), len(self)
)
if jsonfile_prefix is None:
tmp_dir = tempfile.TemporaryDirectory()
jsonfile_prefix = osp.join(tmp_dir.name, "results")
else:
tmp_dir = None
result_files = self._format_bbox(results, jsonfile_prefix)
return result_files, tmp_dir
def evaluate_map(self, results):
thresholds = torch.tensor([0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65])
num_classes = len(self.map_classes)
num_thresholds = len(thresholds)
tp = torch.zeros(num_classes, num_thresholds)
fp = torch.zeros(num_classes, num_thresholds)
fn = torch.zeros(num_classes, num_thresholds)
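        # Broadcasting turns the (num_classes, N) score tensor into
        # (num_classes, N, T) boolean masks, one per threshold, so TP/FP/FN
        # are accumulated for every class and threshold in a single pass.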
for result in results:
pred = result["masks_bev"]
label = result["gt_masks_bev"]
pred = pred.detach().reshape(num_classes, -1)
label = label.detach().bool().reshape(num_classes, -1)
pred = pred[:, :, None] >= thresholds
label = label[:, :, None]
tp += (pred & label).sum(dim=1)
fp += (pred & ~label).sum(dim=1)
fn += (~pred & label).sum(dim=1)
ious = tp / (tp + fp + fn + 1e-7)
metrics = {}
for index, name in enumerate(self.map_classes):
metrics[f"map/{name}/iou@max"] = ious[index].max().item()
for threshold, iou in zip(thresholds, ious[index]):
metrics[f"map/{name}/iou@{threshold.item():.2f}"] = iou.item()
metrics["map/mean/iou@max"] = ious.max(dim=1).values.mean().item()
return metrics
def evaluate(
self,
results,
metric="bbox",
jsonfile_prefix=None,
result_names=["pts_bbox"],
**kwargs,
):
"""Evaluation in nuScenes protocol.
Args:
results (list[dict]): Testing results of the dataset.
metric (str | list[str]): Metrics to be evaluated.
jsonfile_prefix (str | None): The prefix of json files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
dict[str, float]: Results of each evaluation metric.
"""
metrics = {}
if "masks_bev" in results[0]:
metrics.update(self.evaluate_map(results))
if "boxes_3d" in results[0]:
result_files, tmp_dir = self.format_results(results, jsonfile_prefix)
if isinstance(result_files, dict):
for name in result_names:
print("Evaluating bboxes of {}".format(name))
ret_dict = self._evaluate_single(result_files[name])
metrics.update(ret_dict)
elif isinstance(result_files, str):
metrics.update(self._evaluate_single(result_files))
if tmp_dir is not None:
tmp_dir.cleanup()
return metrics
def output_to_nusc_box(detection):
"""Convert the output to the box class in the nuScenes.
Args:
detection (dict): Detection results.
- boxes_3d (:obj:`BaseInstance3DBoxes`): Detection bbox.
- scores_3d (torch.Tensor): Detection scores.
- labels_3d (torch.Tensor): Predicted box labels.
Returns:
list[:obj:`NuScenesBox`]: List of standard NuScenesBoxes.
"""
box3d = detection["boxes_3d"]
scores = detection["scores_3d"].numpy()
labels = detection["labels_3d"].numpy()
box_gravity_center = box3d.gravity_center.numpy()
box_dims = box3d.dims.numpy()
box_yaw = box3d.yaw.numpy()
# TODO: check whether this is necessary
# with dir_offset & dir_limit in the head
box_yaw = -box_yaw - np.pi / 2
box_list = []
for i in range(len(box3d)):
quat = pyquaternion.Quaternion(axis=[0, 0, 1], radians=box_yaw[i])
velocity = (*box3d.tensor[i, 7:9], 0.0)
# velo_val = np.linalg.norm(box3d[i, 7:9])
# velo_ori = box3d[i, 6]
# velocity = (
# velo_val * np.cos(velo_ori), velo_val * np.sin(velo_ori), 0.0)
box = NuScenesBox(
box_gravity_center[i],
box_dims[i],
quat,
label=labels[i],
score=scores[i],
velocity=velocity,
)
box_list.append(box)
return box_list
def lidar_nusc_box_to_global(
info, boxes, classes, eval_configs, eval_version="detection_cvpr_2019"
):
"""Convert the box from ego to global coordinate.
Args:
info (dict): Info for a specific sample data, including the
calibration information.
boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes.
classes (list[str]): Mapped classes in the evaluation.
eval_configs : Evaluation configuration object.
eval_version (str): Evaluation version.
Default: 'detection_cvpr_2019'
Returns:
list: List of standard NuScenesBoxes in the global
coordinate.
"""
box_list = []
for box in boxes:
# Move box to ego vehicle coord system
box.rotate(pyquaternion.Quaternion(info["lidar2ego_rotation"]))
box.translate(np.array(info["lidar2ego_translation"]))
        # filter detections in the ego frame by the class-specific eval range.
cls_range_map = eval_configs.class_range
radius = np.linalg.norm(box.center[:2], 2)
det_range = cls_range_map[classes[box.label]]
if radius > det_range:
continue
# Move box to global coord system
box.rotate(pyquaternion.Quaternion(info["ego2global_rotation"]))
box.translate(np.array(info["ego2global_translation"]))
box_list.append(box)
return box_list
| import tempfile
from os import path as osp
from typing import Any, Dict
import mmcv
import numpy as np
import pyquaternion
import torch
from nuscenes.utils.data_classes import Box as NuScenesBox
from pyquaternion import Quaternion
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from .custom_3d import Custom3DDataset
@DATASETS.register_module()
class NuScenesDataset(Custom3DDataset):
r"""NuScenes Dataset.
This class serves as the API for experiments on the NuScenes Dataset.
Please refer to `NuScenes Dataset <https://www.nuscenes.org/download>`_
for data downloading.
Args:
ann_file (str): Path of annotation file.
pipeline (list[dict], optional): Pipeline used for data processing.
Defaults to None.
dataset_root (str): Path of dataset root.
classes (tuple[str], optional): Classes used in the dataset.
Defaults to None.
load_interval (int, optional): Interval of loading the dataset. It is
used to uniformly sample the dataset. Defaults to 1.
        with_velocity (bool, optional): Whether to include velocity prediction
            in the experiments. Defaults to True.
        modality (dict, optional): Modality to specify the sensor data used
            as input. Defaults to None.
        box_type_3d (str, optional): Type of 3D box of this dataset.
            Based on the `box_type_3d`, the dataset will encapsulate the box
            in its original format and then convert it to `box_type_3d`.
            Defaults to 'LiDAR' in this dataset. Available options include:
- 'LiDAR': Box in LiDAR coordinates.
- 'Depth': Box in depth coordinates, usually for indoor dataset.
- 'Camera': Box in camera coordinates.
filter_empty_gt (bool, optional): Whether to filter empty GT.
Defaults to True.
test_mode (bool, optional): Whether the dataset is in test mode.
Defaults to False.
        eval_version (str, optional): Configuration version of evaluation.
            Defaults to 'detection_cvpr_2019'.
        use_valid_flag (bool): Whether to use the `use_valid_flag` key in the
            info file as a mask to filter gt_boxes and gt_names. Defaults to False.
"""
NameMapping = {
"movable_object.barrier": "barrier",
"vehicle.bicycle": "bicycle",
"vehicle.bus.bendy": "bus",
"vehicle.bus.rigid": "bus",
"vehicle.car": "car",
"vehicle.construction": "construction_vehicle",
"vehicle.motorcycle": "motorcycle",
"human.pedestrian.adult": "pedestrian",
"human.pedestrian.child": "pedestrian",
"human.pedestrian.construction_worker": "pedestrian",
"human.pedestrian.police_officer": "pedestrian",
"movable_object.trafficcone": "traffic_cone",
"vehicle.trailer": "trailer",
"vehicle.truck": "truck",
}
DefaultAttribute = {
"car": "vehicle.parked",
"pedestrian": "pedestrian.moving",
"trailer": "vehicle.parked",
"truck": "vehicle.parked",
"bus": "vehicle.moving",
"motorcycle": "cycle.without_rider",
"construction_vehicle": "vehicle.parked",
"bicycle": "cycle.without_rider",
"barrier": "",
"traffic_cone": "",
}
AttrMapping = {
"cycle.with_rider": 0,
"cycle.without_rider": 1,
"pedestrian.moving": 2,
"pedestrian.standing": 3,
"pedestrian.sitting_lying_down": 4,
"vehicle.moving": 5,
"vehicle.parked": 6,
"vehicle.stopped": 7,
}
AttrMapping_rev = [
"cycle.with_rider",
"cycle.without_rider",
"pedestrian.moving",
"pedestrian.standing",
"pedestrian.sitting_lying_down",
"vehicle.moving",
"vehicle.parked",
"vehicle.stopped",
]
# https://github.com/nutonomy/nuscenes-devkit/blob/57889ff20678577025326cfc24e57424a829be0a/python-sdk/nuscenes/eval/detection/evaluate.py#L222 # noqa
ErrNameMapping = {
"trans_err": "mATE",
"scale_err": "mASE",
"orient_err": "mAOE",
"vel_err": "mAVE",
"attr_err": "mAAE",
}
CLASSES = (
"car",
"truck",
"trailer",
"bus",
"construction_vehicle",
"bicycle",
"motorcycle",
"pedestrian",
"traffic_cone",
"barrier",
)
def __init__(
self,
ann_file,
pipeline=None,
dataset_root=None,
object_classes=None,
map_classes=None,
load_interval=1,
with_velocity=True,
modality=None,
box_type_3d="LiDAR",
filter_empty_gt=True,
test_mode=False,
eval_version="detection_cvpr_2019",
use_valid_flag=False,
) -> None:
self.load_interval = load_interval
self.use_valid_flag = use_valid_flag
super().__init__(
dataset_root=dataset_root,
ann_file=ann_file,
pipeline=pipeline,
classes=object_classes,
modality=modality,
box_type_3d=box_type_3d,
filter_empty_gt=filter_empty_gt,
test_mode=test_mode,
)
self.map_classes = map_classes
self.with_velocity = with_velocity
self.eval_version = eval_version
from nuscenes.eval.detection.config import config_factory
self.eval_detection_configs = config_factory(self.eval_version)
if self.modality is None:
self.modality = dict(
use_camera=False,
use_lidar=True,
use_radar=False,
use_map=False,
use_external=False,
)
def get_cat_ids(self, idx):
"""Get category distribution of single scene.
Args:
idx (int): Index of the data_info.
Returns:
            list[int]: Category ids of the ground-truth classes present
                in this scene.
"""
info = self.data_infos[idx]
if self.use_valid_flag:
mask = info["valid_flag"]
gt_names = set(info["gt_names"][mask])
else:
gt_names = set(info["gt_names"])
cat_ids = []
for name in gt_names:
if name in self.CLASSES:
cat_ids.append(self.cat2id[name])
return cat_ids
def load_annotations(self, ann_file):
"""Load annotations from ann_file.
Args:
ann_file (str): Path of the annotation file.
Returns:
list[dict]: List of annotations sorted by timestamps.
"""
data = mmcv.load(ann_file)
data_infos = list(sorted(data["infos"], key=lambda e: e["timestamp"]))
data_infos = data_infos[:: self.load_interval]
self.metadata = data["metadata"]
self.version = self.metadata["version"]
return data_infos
def get_data_info(self, index: int) -> Dict[str, Any]:
info = self.data_infos[index]
data = dict(
token=info["token"],
lidar_path=info["lidar_path"],
sweeps=info["sweeps"],
timestamp=info["timestamp"],
location=info["location"],
)
# ego to global transform
ego2global = np.eye(4).astype(np.float32)
ego2global[:3, :3] = Quaternion(info["ego2global_rotation"]).rotation_matrix
ego2global[:3, 3] = info["ego2global_translation"]
data["ego2global"] = ego2global
# lidar to ego transform
lidar2ego = np.eye(4).astype(np.float32)
lidar2ego[:3, :3] = Quaternion(info["lidar2ego_rotation"]).rotation_matrix
lidar2ego[:3, 3] = info["lidar2ego_translation"]
data["lidar2ego"] = lidar2ego
if self.modality["use_camera"]:
data["image_paths"] = []
data["lidar2camera"] = []
data["lidar2image"] = []
data["camera2ego"] = []
data["camera_intrinsics"] = []
data["camera2lidar"] = []
for _, camera_info in info["cams"].items():
data["image_paths"].append(camera_info["data_path"])
# lidar to camera transform
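                # (camera->lidar extrinsics are S, u; the inverse has
                # R = S^-1 and t = -R @ u, assembled in transposed form
                # below and restored by the trailing .T)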
lidar2camera_r = np.linalg.inv(camera_info["sensor2lidar_rotation"])
lidar2camera_t = (
camera_info["sensor2lidar_translation"] @ lidar2camera_r.T
)
lidar2camera_rt = np.eye(4).astype(np.float32)
lidar2camera_rt[:3, :3] = lidar2camera_r.T
lidar2camera_rt[3, :3] = -lidar2camera_t
data["lidar2camera"].append(lidar2camera_rt.T)
# camera intrinsics
camera_intrinsics = np.eye(4).astype(np.float32)
camera_intrinsics[:3, :3] = camera_info["camera_intrinsics"]
data["camera_intrinsics"].append(camera_intrinsics)
# lidar to image transform
lidar2image = camera_intrinsics @ lidar2camera_rt.T
data["lidar2image"].append(lidar2image)
# camera to ego transform
camera2ego = np.eye(4).astype(np.float32)
camera2ego[:3, :3] = Quaternion(
camera_info["sensor2ego_rotation"]
).rotation_matrix
camera2ego[:3, 3] = camera_info["sensor2ego_translation"]
data["camera2ego"].append(camera2ego)
# camera to lidar transform
camera2lidar = np.eye(4).astype(np.float32)
camera2lidar[:3, :3] = camera_info["sensor2lidar_rotation"]
camera2lidar[:3, 3] = camera_info["sensor2lidar_translation"]
data["camera2lidar"].append(camera2lidar)
annos = self.get_ann_info(index)
data["ann_info"] = annos
return data
def get_ann_info(self, index):
"""Get annotation info according to the given index.
Args:
index (int): Index of the annotation data to get.
Returns:
dict: Annotation information consists of the following keys:
- gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): \
3D ground truth bboxes
- gt_labels_3d (np.ndarray): Labels of ground truths.
- gt_names (list[str]): Class names of ground truths.
"""
info = self.data_infos[index]
# filter out bbox containing no points
if self.use_valid_flag:
mask = info["valid_flag"]
else:
mask = info["num_lidar_pts"] > 0
gt_bboxes_3d = info["gt_boxes"][mask]
gt_names_3d = info["gt_names"][mask]
gt_labels_3d = []
for cat in gt_names_3d:
if cat in self.CLASSES:
gt_labels_3d.append(self.CLASSES.index(cat))
else:
gt_labels_3d.append(-1)
gt_labels_3d = np.array(gt_labels_3d)
if self.with_velocity:
gt_velocity = info["gt_velocity"][mask]
nan_mask = np.isnan(gt_velocity[:, 0])
gt_velocity[nan_mask] = [0.0, 0.0]
gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_velocity], axis=-1)
# the nuscenes box center is [0.5, 0.5, 0.5], we change it to be
# the same as KITTI (0.5, 0.5, 0)
# haotian: this is an important change: from 0.5, 0.5, 0.5 -> 0.5, 0.5, 0
gt_bboxes_3d = LiDARInstance3DBoxes(
gt_bboxes_3d, box_dim=gt_bboxes_3d.shape[-1], origin=(0.5, 0.5, 0)
).convert_to(self.box_mode_3d)
anns_results = dict(
gt_bboxes_3d=gt_bboxes_3d,
gt_labels_3d=gt_labels_3d,
gt_names=gt_names_3d,
)
return anns_results
def _format_bbox(self, results, jsonfile_prefix=None):
"""Convert the results to the standard format.
Args:
results (list[dict]): Testing results of the dataset.
jsonfile_prefix (str): The prefix of the output jsonfile.
You can specify the output directory/filename by
modifying the jsonfile_prefix. Default: None.
Returns:
str: Path of the output json file.
"""
nusc_annos = {}
mapped_class_names = self.CLASSES
print("Start to convert detection format...")
for sample_id, det in enumerate(mmcv.track_iter_progress(results)):
annos = []
boxes = output_to_nusc_box(det)
sample_token = self.data_infos[sample_id]["token"]
boxes = lidar_nusc_box_to_global(
self.data_infos[sample_id],
boxes,
mapped_class_names,
self.eval_detection_configs,
self.eval_version,
)
for i, box in enumerate(boxes):
name = mapped_class_names[box.label]
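                # Attribute heuristic: planar speed above 0.2 (presumably m/s)
                # maps vehicles to "vehicle.moving" and cycles to
                # "cycle.with_rider"; slower boxes fall back to standing or
                # stopped states or the per-class default attribute.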
if np.sqrt(box.velocity[0] ** 2 + box.velocity[1] ** 2) > 0.2:
if name in [
"car",
"construction_vehicle",
"bus",
"truck",
"trailer",
]:
attr = "vehicle.moving"
elif name in ["bicycle", "motorcycle"]:
attr = "cycle.with_rider"
else:
attr = NuScenesDataset.DefaultAttribute[name]
else:
if name in ["pedestrian"]:
attr = "pedestrian.standing"
elif name in ["bus"]:
attr = "vehicle.stopped"
else:
attr = NuScenesDataset.DefaultAttribute[name]
nusc_anno = dict(
sample_token=sample_token,
translation=box.center.tolist(),
size=box.wlh.tolist(),
rotation=box.orientation.elements.tolist(),
velocity=box.velocity[:2].tolist(),
detection_name=name,
detection_score=box.score,
attribute_name=attr,
)
annos.append(nusc_anno)
nusc_annos[sample_token] = annos
nusc_submissions = {
"meta": self.modality,
"results": nusc_annos,
}
mmcv.mkdir_or_exist(jsonfile_prefix)
res_path = osp.join(jsonfile_prefix, "results_nusc.json")
print("Results writes to", res_path)
mmcv.dump(nusc_submissions, res_path)
return res_path
def _evaluate_single(
self,
result_path,
logger=None,
metric="bbox",
result_name="pts_bbox",
):
"""Evaluation for a single model in nuScenes protocol.
Args:
result_path (str): Path of the result file.
logger (logging.Logger | str | None): Logger used for printing
related information during evaluation. Default: None.
metric (str): Metric name used for evaluation. Default: 'bbox'.
result_name (str): Result name in the metric prefix.
Default: 'pts_bbox'.
Returns:
dict: Dictionary of evaluation details.
"""
from nuscenes import NuScenes
from nuscenes.eval.detection.evaluate import DetectionEval
output_dir = osp.join(*osp.split(result_path)[:-1])
nusc = NuScenes(version=self.version, dataroot=self.dataset_root, verbose=False)
eval_set_map = {
"v1.0-mini": "mini_val",
"v1.0-trainval": "val",
}
nusc_eval = DetectionEval(
nusc,
config=self.eval_detection_configs,
result_path=result_path,
eval_set=eval_set_map[self.version],
output_dir=output_dir,
verbose=False,
)
nusc_eval.main(render_curves=False)
# record metrics
metrics = mmcv.load(osp.join(output_dir, "metrics_summary.json"))
detail = dict()
for name in self.CLASSES:
for k, v in metrics["label_aps"][name].items():
val = float("{:.4f}".format(v))
detail["object/{}_ap_dist_{}".format(name, k)] = val
for k, v in metrics["label_tp_errors"][name].items():
val = float("{:.4f}".format(v))
detail["object/{}_{}".format(name, k)] = val
for k, v in metrics["tp_errors"].items():
val = float("{:.4f}".format(v))
detail["object/{}".format(self.ErrNameMapping[k])] = val
detail["object/nds"] = metrics["nd_score"]
detail["object/map"] = metrics["mean_ap"]
return detail
def format_results(self, results, jsonfile_prefix=None):
"""Format the results to json (standard format for COCO evaluation).
Args:
results (list[dict]): Testing results of the dataset.
jsonfile_prefix (str | None): The prefix of json files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
tuple: Returns (result_files, tmp_dir), where `result_files` is a \
                dict containing the json filepaths, `tmp_dir` is the temporary \
directory created for saving json files when \
`jsonfile_prefix` is not specified.
"""
assert isinstance(results, list), "results must be a list"
assert len(results) == len(
self
), "The length of results is not equal to the dataset len: {} != {}".format(
len(results), len(self)
)
if jsonfile_prefix is None:
tmp_dir = tempfile.TemporaryDirectory()
jsonfile_prefix = osp.join(tmp_dir.name, "results")
else:
tmp_dir = None
result_files = self._format_bbox(results, jsonfile_prefix)
return result_files, tmp_dir
def evaluate_map(self, results):
thresholds = torch.tensor([0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65])
num_classes = len(self.map_classes)
num_thresholds = len(thresholds)
tp = torch.zeros(num_classes, num_thresholds)
fp = torch.zeros(num_classes, num_thresholds)
fn = torch.zeros(num_classes, num_thresholds)
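        # Broadcasting turns the (num_classes, N) score tensor into
        # (num_classes, N, T) boolean masks, one per threshold, so TP/FP/FN
        # are accumulated for every class and threshold in a single pass.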
for result in results:
pred = result["masks_bev"]
label = result["gt_masks_bev"]
pred = pred.detach().reshape(num_classes, -1)
label = label.detach().bool().reshape(num_classes, -1)
pred = pred[:, :, None] >= thresholds
label = label[:, :, None]
tp += (pred & label).sum(dim=1)
fp += (pred & ~label).sum(dim=1)
fn += (~pred & label).sum(dim=1)
ious = tp / (tp + fp + fn + 1e-7)
metrics = {}
for index, name in enumerate(self.map_classes):
metrics[f"map/{name}/iou@max"] = ious[index].max().item()
for threshold, iou in zip(thresholds, ious[index]):
metrics[f"map/{name}/iou@{threshold.item():.2f}"] = iou.item()
metrics["map/mean/iou@max"] = ious.max(dim=1).values.mean().item()
return metrics
def evaluate(
self,
results,
metric="bbox",
jsonfile_prefix=None,
result_names=["pts_bbox"],
**kwargs,
):
"""Evaluation in nuScenes protocol.
Args:
results (list[dict]): Testing results of the dataset.
metric (str | list[str]): Metrics to be evaluated.
jsonfile_prefix (str | None): The prefix of json files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
dict[str, float]: Results of each evaluation metric.
"""
metrics = {}
if "masks_bev" in results[0]:
metrics.update(self.evaluate_map(results))
if "boxes_3d" in results[0]:
result_files, tmp_dir = self.format_results(results, jsonfile_prefix)
if isinstance(result_files, dict):
for name in result_names:
print("Evaluating bboxes of {}".format(name))
ret_dict = self._evaluate_single(result_files[name])
metrics.update(ret_dict)
elif isinstance(result_files, str):
metrics.update(self._evaluate_single(result_files))
if tmp_dir is not None:
tmp_dir.cleanup()
return metrics
def output_to_nusc_box(detection):
"""Convert the output to the box class in the nuScenes.
Args:
detection (dict): Detection results.
- boxes_3d (:obj:`BaseInstance3DBoxes`): Detection bbox.
- scores_3d (torch.Tensor): Detection scores.
- labels_3d (torch.Tensor): Predicted box labels.
Returns:
list[:obj:`NuScenesBox`]: List of standard NuScenesBoxes.
"""
box3d = detection["boxes_3d"]
scores = detection["scores_3d"].numpy()
labels = detection["labels_3d"].numpy()
box_gravity_center = box3d.gravity_center.numpy()
box_dims = box3d.dims.numpy()
box_yaw = box3d.yaw.numpy()
# TODO: check whether this is necessary
# with dir_offset & dir_limit in the head
box_yaw = -box_yaw - np.pi / 2
box_list = []
for i in range(len(box3d)):
quat = pyquaternion.Quaternion(axis=[0, 0, 1], radians=box_yaw[i])
velocity = (*box3d.tensor[i, 7:9], 0.0)
# velo_val = np.linalg.norm(box3d[i, 7:9])
# velo_ori = box3d[i, 6]
# velocity = (
# velo_val * np.cos(velo_ori), velo_val * np.sin(velo_ori), 0.0)
box = NuScenesBox(
box_gravity_center[i],
box_dims[i],
quat,
label=labels[i],
score=scores[i],
velocity=velocity,
)
box_list.append(box)
return box_list
def lidar_nusc_box_to_global(
info, boxes, classes, eval_configs, eval_version="detection_cvpr_2019"
):
"""Convert the box from ego to global coordinate.
Args:
info (dict): Info for a specific sample data, including the
calibration information.
boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes.
classes (list[str]): Mapped classes in the evaluation.
eval_configs : Evaluation configuration object.
eval_version (str): Evaluation version.
Default: 'detection_cvpr_2019'
Returns:
list: List of standard NuScenesBoxes in the global
coordinate.
"""
box_list = []
for box in boxes:
# Move box to ego vehicle coord system
box.rotate(pyquaternion.Quaternion(info["lidar2ego_rotation"]))
box.translate(np.array(info["lidar2ego_translation"]))
        # filter detections in the ego frame by the class-specific eval range.
cls_range_map = eval_configs.class_range
radius = np.linalg.norm(box.center[:2], 2)
det_range = cls_range_map[classes[box.label]]
if radius > det_range:
continue
# Move box to global coord system
box.rotate(pyquaternion.Quaternion(info["ego2global_rotation"]))
box.translate(np.array(info["ego2global_translation"]))
box_list.append(box)
return box_list
| kentang-mit | e4d599edd51f758fdbf1f6a58732d31c6f8a56cc | f39a4a0752fabc1eb81011b0433af69a6e9ff58c | Could you double-check whether the other transformations are still actively being used? If not, we could remove them. | zhijian-liu | 1 |
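A side note on the review question above (an illustration, not code from this PR): the camera2lidar matrix introduced in the diff and the lidar2camera matrix built a few lines earlier are mutual inverses, so whichever one is not actually consumed downstream could be derived on demand rather than stored.
import numpy as np
# Toy extrinsics standing in for camera_info["sensor2lidar_rotation"] and
# camera_info["sensor2lidar_translation"]; real values come from calibration.
sensor2lidar_rotation = np.eye(3, dtype=np.float32)
sensor2lidar_translation = np.array([1.0, 0.0, 1.5], dtype=np.float32)
camera2lidar = np.eye(4, dtype=np.float32)
camera2lidar[:3, :3] = sensor2lidar_rotation
camera2lidar[:3, 3] = sensor2lidar_translation
lidar2camera = np.linalg.inv(camera2lidar)  # the other stored transform
assert np.allclose(camera2lidar @ lidar2camera, np.eye(4), atol=1e-6)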
mit-han-lab/bevfusion | 150 | Add training details | In this PR, we add training details for the following models:
- camera-only detection
- camera-only BEV map segmentation
- LiDAR-only detection
- LiDAR-only BEV map segmentation
The results are also slightly improved compared with our previous release. | null | 2022-09-26 15:30:22+00:00 | 2022-09-26 22:24:39+00:00 | mmdet3d/datasets/nuscenes_dataset.py | import tempfile
from os import path as osp
from typing import Any, Dict
import mmcv
import numpy as np
import pyquaternion
import torch
from nuscenes.utils.data_classes import Box as NuScenesBox
from pyquaternion import Quaternion
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from .custom_3d import Custom3DDataset
@DATASETS.register_module()
class NuScenesDataset(Custom3DDataset):
r"""NuScenes Dataset.
This class serves as the API for experiments on the NuScenes Dataset.
Please refer to `NuScenes Dataset <https://www.nuscenes.org/download>`_
for data downloading.
Args:
ann_file (str): Path of annotation file.
pipeline (list[dict], optional): Pipeline used for data processing.
Defaults to None.
dataset_root (str): Path of dataset root.
classes (tuple[str], optional): Classes used in the dataset.
Defaults to None.
load_interval (int, optional): Interval of loading the dataset. It is
used to uniformly sample the dataset. Defaults to 1.
        with_velocity (bool, optional): Whether to include velocity prediction
            in the experiments. Defaults to True.
        modality (dict, optional): Modality to specify the sensor data used
            as input. Defaults to None.
        box_type_3d (str, optional): Type of 3D box of this dataset.
            Based on the `box_type_3d`, the dataset will encapsulate the box
            in its original format and then convert it to `box_type_3d`.
            Defaults to 'LiDAR' in this dataset. Available options include:
- 'LiDAR': Box in LiDAR coordinates.
- 'Depth': Box in depth coordinates, usually for indoor dataset.
- 'Camera': Box in camera coordinates.
filter_empty_gt (bool, optional): Whether to filter empty GT.
Defaults to True.
test_mode (bool, optional): Whether the dataset is in test mode.
Defaults to False.
        eval_version (str, optional): Configuration version of evaluation.
            Defaults to 'detection_cvpr_2019'.
        use_valid_flag (bool): Whether to use the `use_valid_flag` key in the
            info file as a mask to filter gt_boxes and gt_names. Defaults to False.
"""
NameMapping = {
"movable_object.barrier": "barrier",
"vehicle.bicycle": "bicycle",
"vehicle.bus.bendy": "bus",
"vehicle.bus.rigid": "bus",
"vehicle.car": "car",
"vehicle.construction": "construction_vehicle",
"vehicle.motorcycle": "motorcycle",
"human.pedestrian.adult": "pedestrian",
"human.pedestrian.child": "pedestrian",
"human.pedestrian.construction_worker": "pedestrian",
"human.pedestrian.police_officer": "pedestrian",
"movable_object.trafficcone": "traffic_cone",
"vehicle.trailer": "trailer",
"vehicle.truck": "truck",
}
DefaultAttribute = {
"car": "vehicle.parked",
"pedestrian": "pedestrian.moving",
"trailer": "vehicle.parked",
"truck": "vehicle.parked",
"bus": "vehicle.moving",
"motorcycle": "cycle.without_rider",
"construction_vehicle": "vehicle.parked",
"bicycle": "cycle.without_rider",
"barrier": "",
"traffic_cone": "",
}
AttrMapping = {
"cycle.with_rider": 0,
"cycle.without_rider": 1,
"pedestrian.moving": 2,
"pedestrian.standing": 3,
"pedestrian.sitting_lying_down": 4,
"vehicle.moving": 5,
"vehicle.parked": 6,
"vehicle.stopped": 7,
}
AttrMapping_rev = [
"cycle.with_rider",
"cycle.without_rider",
"pedestrian.moving",
"pedestrian.standing",
"pedestrian.sitting_lying_down",
"vehicle.moving",
"vehicle.parked",
"vehicle.stopped",
]
# https://github.com/nutonomy/nuscenes-devkit/blob/57889ff20678577025326cfc24e57424a829be0a/python-sdk/nuscenes/eval/detection/evaluate.py#L222 # noqa
ErrNameMapping = {
"trans_err": "mATE",
"scale_err": "mASE",
"orient_err": "mAOE",
"vel_err": "mAVE",
"attr_err": "mAAE",
}
CLASSES = (
"car",
"truck",
"trailer",
"bus",
"construction_vehicle",
"bicycle",
"motorcycle",
"pedestrian",
"traffic_cone",
"barrier",
)
def __init__(
self,
ann_file,
pipeline=None,
dataset_root=None,
object_classes=None,
map_classes=None,
load_interval=1,
with_velocity=True,
modality=None,
box_type_3d="LiDAR",
filter_empty_gt=True,
test_mode=False,
eval_version="detection_cvpr_2019",
use_valid_flag=False,
) -> None:
self.load_interval = load_interval
self.use_valid_flag = use_valid_flag
super().__init__(
dataset_root=dataset_root,
ann_file=ann_file,
pipeline=pipeline,
classes=object_classes,
modality=modality,
box_type_3d=box_type_3d,
filter_empty_gt=filter_empty_gt,
test_mode=test_mode,
)
self.map_classes = map_classes
self.with_velocity = with_velocity
self.eval_version = eval_version
from nuscenes.eval.detection.config import config_factory
self.eval_detection_configs = config_factory(self.eval_version)
if self.modality is None:
self.modality = dict(
use_camera=False,
use_lidar=True,
use_radar=False,
use_map=False,
use_external=False,
)
def get_cat_ids(self, idx):
"""Get category distribution of single scene.
Args:
idx (int): Index of the data_info.
Returns:
            list[int]: Category ids of the ground-truth classes present
                in this scene.
"""
info = self.data_infos[idx]
if self.use_valid_flag:
mask = info["valid_flag"]
gt_names = set(info["gt_names"][mask])
else:
gt_names = set(info["gt_names"])
cat_ids = []
for name in gt_names:
if name in self.CLASSES:
cat_ids.append(self.cat2id[name])
return cat_ids
def load_annotations(self, ann_file):
"""Load annotations from ann_file.
Args:
ann_file (str): Path of the annotation file.
Returns:
list[dict]: List of annotations sorted by timestamps.
"""
data = mmcv.load(ann_file)
data_infos = list(sorted(data["infos"], key=lambda e: e["timestamp"]))
data_infos = data_infos[:: self.load_interval]
self.metadata = data["metadata"]
self.version = self.metadata["version"]
return data_infos
def get_data_info(self, index: int) -> Dict[str, Any]:
info = self.data_infos[index]
data = dict(
token=info["token"],
            sample_idx=info["token"],
lidar_path=info["lidar_path"],
sweeps=info["sweeps"],
timestamp=info["timestamp"],
location=info["location"],
)
# ego to global transform
ego2global = np.eye(4).astype(np.float32)
ego2global[:3, :3] = Quaternion(info["ego2global_rotation"]).rotation_matrix
ego2global[:3, 3] = info["ego2global_translation"]
data["ego2global"] = ego2global
# lidar to ego transform
lidar2ego = np.eye(4).astype(np.float32)
lidar2ego[:3, :3] = Quaternion(info["lidar2ego_rotation"]).rotation_matrix
lidar2ego[:3, 3] = info["lidar2ego_translation"]
data["lidar2ego"] = lidar2ego
if self.modality["use_camera"]:
data["image_paths"] = []
data["lidar2camera"] = []
data["lidar2image"] = []
data["camera2ego"] = []
data["camera_intrinsics"] = []
for _, camera_info in info["cams"].items():
data["image_paths"].append(camera_info["data_path"])
# lidar to camera transform
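                # (camera->lidar extrinsics are S, u; the inverse has
                # R = S^-1 and t = -R @ u, assembled in transposed form
                # below and restored by the trailing .T)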
lidar2camera_r = np.linalg.inv(camera_info["sensor2lidar_rotation"])
lidar2camera_t = (
camera_info["sensor2lidar_translation"] @ lidar2camera_r.T
)
lidar2camera_rt = np.eye(4).astype(np.float32)
lidar2camera_rt[:3, :3] = lidar2camera_r.T
lidar2camera_rt[3, :3] = -lidar2camera_t
data["lidar2camera"].append(lidar2camera_rt.T)
# camera intrinsics
camera_intrinsics = np.eye(4).astype(np.float32)
camera_intrinsics[:3, :3] = camera_info["camera_intrinsics"]
data["camera_intrinsics"].append(camera_intrinsics)
# lidar to image transform
lidar2image = camera_intrinsics @ lidar2camera_rt.T
data["lidar2image"].append(lidar2image)
# camera to ego transform
camera2ego = np.eye(4).astype(np.float32)
camera2ego[:3, :3] = Quaternion(
camera_info["sensor2ego_rotation"]
).rotation_matrix
camera2ego[:3, 3] = camera_info["sensor2ego_translation"]
data["camera2ego"].append(camera2ego)
# TODO (Haotian): test set submission.
annos = self.get_ann_info(index)
data["ann_info"] = annos
return data
def get_ann_info(self, index):
"""Get annotation info according to the given index.
Args:
index (int): Index of the annotation data to get.
Returns:
dict: Annotation information consists of the following keys:
- gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): \
3D ground truth bboxes
- gt_labels_3d (np.ndarray): Labels of ground truths.
- gt_names (list[str]): Class names of ground truths.
"""
info = self.data_infos[index]
# filter out bbox containing no points
if self.use_valid_flag:
mask = info["valid_flag"]
else:
mask = info["num_lidar_pts"] > 0
gt_bboxes_3d = info["gt_boxes"][mask]
gt_names_3d = info["gt_names"][mask]
gt_labels_3d = []
for cat in gt_names_3d:
if cat in self.CLASSES:
gt_labels_3d.append(self.CLASSES.index(cat))
else:
gt_labels_3d.append(-1)
gt_labels_3d = np.array(gt_labels_3d)
if self.with_velocity:
gt_velocity = info["gt_velocity"][mask]
nan_mask = np.isnan(gt_velocity[:, 0])
gt_velocity[nan_mask] = [0.0, 0.0]
gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_velocity], axis=-1)
# the nuscenes box center is [0.5, 0.5, 0.5], we change it to be
# the same as KITTI (0.5, 0.5, 0)
# haotian: this is an important change: from 0.5, 0.5, 0.5 -> 0.5, 0.5, 0
gt_bboxes_3d = LiDARInstance3DBoxes(
gt_bboxes_3d, box_dim=gt_bboxes_3d.shape[-1], origin=(0.5, 0.5, 0)
).convert_to(self.box_mode_3d)
anns_results = dict(
gt_bboxes_3d=gt_bboxes_3d,
gt_labels_3d=gt_labels_3d,
gt_names=gt_names_3d,
)
return anns_results
def _format_bbox(self, results, jsonfile_prefix=None):
"""Convert the results to the standard format.
Args:
results (list[dict]): Testing results of the dataset.
jsonfile_prefix (str): The prefix of the output jsonfile.
You can specify the output directory/filename by
modifying the jsonfile_prefix. Default: None.
Returns:
str: Path of the output json file.
"""
nusc_annos = {}
mapped_class_names = self.CLASSES
print("Start to convert detection format...")
for sample_id, det in enumerate(mmcv.track_iter_progress(results)):
annos = []
boxes = output_to_nusc_box(det)
sample_token = self.data_infos[sample_id]["token"]
boxes = lidar_nusc_box_to_global(
self.data_infos[sample_id],
boxes,
mapped_class_names,
self.eval_detection_configs,
self.eval_version,
)
for i, box in enumerate(boxes):
name = mapped_class_names[box.label]
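                # Attribute heuristic: planar speed above 0.2 (presumably m/s)
                # maps vehicles to "vehicle.moving" and cycles to
                # "cycle.with_rider"; slower boxes fall back to standing or
                # stopped states or the per-class default attribute.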
if np.sqrt(box.velocity[0] ** 2 + box.velocity[1] ** 2) > 0.2:
if name in [
"car",
"construction_vehicle",
"bus",
"truck",
"trailer",
]:
attr = "vehicle.moving"
elif name in ["bicycle", "motorcycle"]:
attr = "cycle.with_rider"
else:
attr = NuScenesDataset.DefaultAttribute[name]
else:
if name in ["pedestrian"]:
attr = "pedestrian.standing"
elif name in ["bus"]:
attr = "vehicle.stopped"
else:
attr = NuScenesDataset.DefaultAttribute[name]
nusc_anno = dict(
sample_token=sample_token,
translation=box.center.tolist(),
size=box.wlh.tolist(),
rotation=box.orientation.elements.tolist(),
velocity=box.velocity[:2].tolist(),
detection_name=name,
detection_score=box.score,
attribute_name=attr,
)
annos.append(nusc_anno)
nusc_annos[sample_token] = annos
nusc_submissions = {
"meta": self.modality,
"results": nusc_annos,
}
mmcv.mkdir_or_exist(jsonfile_prefix)
res_path = osp.join(jsonfile_prefix, "results_nusc.json")
print("Results writes to", res_path)
mmcv.dump(nusc_submissions, res_path)
return res_path
def _evaluate_single(
self,
result_path,
logger=None,
metric="bbox",
result_name="pts_bbox",
):
"""Evaluation for a single model in nuScenes protocol.
Args:
result_path (str): Path of the result file.
logger (logging.Logger | str | None): Logger used for printing
related information during evaluation. Default: None.
metric (str): Metric name used for evaluation. Default: 'bbox'.
result_name (str): Result name in the metric prefix.
Default: 'pts_bbox'.
Returns:
dict: Dictionary of evaluation details.
"""
from nuscenes import NuScenes
from nuscenes.eval.detection.evaluate import DetectionEval
output_dir = osp.join(*osp.split(result_path)[:-1])
nusc = NuScenes(version=self.version, dataroot=self.dataset_root, verbose=False)
eval_set_map = {
"v1.0-mini": "mini_val",
"v1.0-trainval": "val",
}
nusc_eval = DetectionEval(
nusc,
config=self.eval_detection_configs,
result_path=result_path,
eval_set=eval_set_map[self.version],
output_dir=output_dir,
verbose=False,
)
nusc_eval.main(render_curves=False)
# record metrics
metrics = mmcv.load(osp.join(output_dir, "metrics_summary.json"))
detail = dict()
for name in self.CLASSES:
for k, v in metrics["label_aps"][name].items():
val = float("{:.4f}".format(v))
detail["object/{}_ap_dist_{}".format(name, k)] = val
for k, v in metrics["label_tp_errors"][name].items():
val = float("{:.4f}".format(v))
detail["object/{}_{}".format(name, k)] = val
for k, v in metrics["tp_errors"].items():
val = float("{:.4f}".format(v))
detail["object/{}".format(self.ErrNameMapping[k])] = val
detail["object/nds"] = metrics["nd_score"]
detail["object/map"] = metrics["mean_ap"]
return detail
def format_results(self, results, jsonfile_prefix=None):
"""Format the results to json (standard format for COCO evaluation).
Args:
results (list[dict]): Testing results of the dataset.
jsonfile_prefix (str | None): The prefix of json files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
tuple: Returns (result_files, tmp_dir), where `result_files` is a \
                dict containing the json filepaths, `tmp_dir` is the temporary \
directory created for saving json files when \
`jsonfile_prefix` is not specified.
"""
assert isinstance(results, list), "results must be a list"
assert len(results) == len(
self
), "The length of results is not equal to the dataset len: {} != {}".format(
len(results), len(self)
)
if jsonfile_prefix is None:
tmp_dir = tempfile.TemporaryDirectory()
jsonfile_prefix = osp.join(tmp_dir.name, "results")
else:
tmp_dir = None
result_files = self._format_bbox(results, jsonfile_prefix)
return result_files, tmp_dir
def evaluate_map(self, results):
thresholds = torch.tensor([0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65])
num_classes = len(self.map_classes)
num_thresholds = len(thresholds)
tp = torch.zeros(num_classes, num_thresholds)
fp = torch.zeros(num_classes, num_thresholds)
fn = torch.zeros(num_classes, num_thresholds)
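        # Broadcasting turns the (num_classes, N) score tensor into
        # (num_classes, N, T) boolean masks, one per threshold, so TP/FP/FN
        # are accumulated for every class and threshold in a single pass.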
for result in results:
pred = result["masks_bev"]
label = result["gt_masks_bev"]
pred = pred.detach().reshape(num_classes, -1)
label = label.detach().bool().reshape(num_classes, -1)
pred = pred[:, :, None] >= thresholds
label = label[:, :, None]
tp += (pred & label).sum(dim=1)
fp += (pred & ~label).sum(dim=1)
fn += (~pred & label).sum(dim=1)
ious = tp / (tp + fp + fn + 1e-7)
metrics = {}
for index, name in enumerate(self.map_classes):
metrics[f"map/{name}/iou@max"] = ious[index].max().item()
for threshold, iou in zip(thresholds, ious[index]):
metrics[f"map/{name}/iou@{threshold.item():.2f}"] = iou.item()
metrics["map/mean/iou@max"] = ious.max(dim=1).values.mean().item()
return metrics
def evaluate(
self,
results,
metric="bbox",
jsonfile_prefix=None,
result_names=["pts_bbox"],
**kwargs,
):
"""Evaluation in nuScenes protocol.
Args:
results (list[dict]): Testing results of the dataset.
metric (str | list[str]): Metrics to be evaluated.
jsonfile_prefix (str | None): The prefix of json files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
dict[str, float]: Results of each evaluation metric.
"""
metrics = {}
if "masks_bev" in results[0]:
metrics.update(self.evaluate_map(results))
if "boxes_3d" in results[0]:
result_files, tmp_dir = self.format_results(results, jsonfile_prefix)
if isinstance(result_files, dict):
for name in result_names:
print("Evaluating bboxes of {}".format(name))
ret_dict = self._evaluate_single(result_files[name])
metrics.update(ret_dict)
elif isinstance(result_files, str):
metrics.update(self._evaluate_single(result_files))
if tmp_dir is not None:
tmp_dir.cleanup()
return metrics
def output_to_nusc_box(detection):
"""Convert the output to the box class in the nuScenes.
Args:
detection (dict): Detection results.
- boxes_3d (:obj:`BaseInstance3DBoxes`): Detection bbox.
- scores_3d (torch.Tensor): Detection scores.
- labels_3d (torch.Tensor): Predicted box labels.
Returns:
list[:obj:`NuScenesBox`]: List of standard NuScenesBoxes.
"""
box3d = detection["boxes_3d"]
scores = detection["scores_3d"].numpy()
labels = detection["labels_3d"].numpy()
box_gravity_center = box3d.gravity_center.numpy()
box_dims = box3d.dims.numpy()
box_yaw = box3d.yaw.numpy()
# TODO: check whether this is necessary
# with dir_offset & dir_limit in the head
box_yaw = -box_yaw - np.pi / 2
box_list = []
for i in range(len(box3d)):
quat = pyquaternion.Quaternion(axis=[0, 0, 1], radians=box_yaw[i])
velocity = (*box3d.tensor[i, 7:9], 0.0)
# velo_val = np.linalg.norm(box3d[i, 7:9])
# velo_ori = box3d[i, 6]
# velocity = (
# velo_val * np.cos(velo_ori), velo_val * np.sin(velo_ori), 0.0)
box = NuScenesBox(
box_gravity_center[i],
box_dims[i],
quat,
label=labels[i],
score=scores[i],
velocity=velocity,
)
box_list.append(box)
return box_list
def lidar_nusc_box_to_global(
info, boxes, classes, eval_configs, eval_version="detection_cvpr_2019"
):
"""Convert the box from ego to global coordinate.
Args:
info (dict): Info for a specific sample data, including the
calibration information.
boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes.
classes (list[str]): Mapped classes in the evaluation.
eval_configs : Evaluation configuration object.
eval_version (str): Evaluation version.
Default: 'detection_cvpr_2019'
Returns:
list: List of standard NuScenesBoxes in the global
            coordinate frame.
"""
box_list = []
for box in boxes:
# Move box to ego vehicle coord system
box.rotate(pyquaternion.Quaternion(info["lidar2ego_rotation"]))
box.translate(np.array(info["lidar2ego_translation"]))
# filter det in ego.
cls_range_map = eval_configs.class_range
radius = np.linalg.norm(box.center[:2], 2)
det_range = cls_range_map[classes[box.label]]
if radius > det_range:
continue
# Move box to global coord system
box.rotate(pyquaternion.Quaternion(info["ego2global_rotation"]))
box.translate(np.array(info["ego2global_translation"]))
box_list.append(box)
return box_list
| import tempfile
from os import path as osp
from typing import Any, Dict
import mmcv
import numpy as np
import pyquaternion
import torch
from nuscenes.utils.data_classes import Box as NuScenesBox
from pyquaternion import Quaternion
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from .custom_3d import Custom3DDataset
@DATASETS.register_module()
class NuScenesDataset(Custom3DDataset):
r"""NuScenes Dataset.
This class serves as the API for experiments on the NuScenes Dataset.
Please refer to `NuScenes Dataset <https://www.nuscenes.org/download>`_
for data downloading.
Args:
ann_file (str): Path of annotation file.
pipeline (list[dict], optional): Pipeline used for data processing.
Defaults to None.
dataset_root (str): Path of dataset root.
classes (tuple[str], optional): Classes used in the dataset.
Defaults to None.
load_interval (int, optional): Interval of loading the dataset. It is
used to uniformly sample the dataset. Defaults to 1.
        with_velocity (bool, optional): Whether to include velocity prediction
            in the experiments. Defaults to True.
modality (dict, optional): Modality to specify the sensor data used
as input. Defaults to None.
box_type_3d (str, optional): Type of 3D box of this dataset.
            Based on the `box_type_3d`, the dataset will encapsulate the boxes
            in their original format and then convert them to `box_type_3d`.
            Defaults to 'LiDAR' in this dataset. Available options include:
- 'LiDAR': Box in LiDAR coordinates.
- 'Depth': Box in depth coordinates, usually for indoor dataset.
- 'Camera': Box in camera coordinates.
filter_empty_gt (bool, optional): Whether to filter empty GT.
Defaults to True.
test_mode (bool, optional): Whether the dataset is in test mode.
Defaults to False.
        eval_version (str, optional): Configuration version of evaluation.
Defaults to 'detection_cvpr_2019'.
use_valid_flag (bool): Whether to use `use_valid_flag` key in the info
            file as a mask to filter `gt_boxes` and `gt_names`. Defaults to False.
"""
NameMapping = {
"movable_object.barrier": "barrier",
"vehicle.bicycle": "bicycle",
"vehicle.bus.bendy": "bus",
"vehicle.bus.rigid": "bus",
"vehicle.car": "car",
"vehicle.construction": "construction_vehicle",
"vehicle.motorcycle": "motorcycle",
"human.pedestrian.adult": "pedestrian",
"human.pedestrian.child": "pedestrian",
"human.pedestrian.construction_worker": "pedestrian",
"human.pedestrian.police_officer": "pedestrian",
"movable_object.trafficcone": "traffic_cone",
"vehicle.trailer": "trailer",
"vehicle.truck": "truck",
}
DefaultAttribute = {
"car": "vehicle.parked",
"pedestrian": "pedestrian.moving",
"trailer": "vehicle.parked",
"truck": "vehicle.parked",
"bus": "vehicle.moving",
"motorcycle": "cycle.without_rider",
"construction_vehicle": "vehicle.parked",
"bicycle": "cycle.without_rider",
"barrier": "",
"traffic_cone": "",
}
AttrMapping = {
"cycle.with_rider": 0,
"cycle.without_rider": 1,
"pedestrian.moving": 2,
"pedestrian.standing": 3,
"pedestrian.sitting_lying_down": 4,
"vehicle.moving": 5,
"vehicle.parked": 6,
"vehicle.stopped": 7,
}
AttrMapping_rev = [
"cycle.with_rider",
"cycle.without_rider",
"pedestrian.moving",
"pedestrian.standing",
"pedestrian.sitting_lying_down",
"vehicle.moving",
"vehicle.parked",
"vehicle.stopped",
]
# https://github.com/nutonomy/nuscenes-devkit/blob/57889ff20678577025326cfc24e57424a829be0a/python-sdk/nuscenes/eval/detection/evaluate.py#L222 # noqa
ErrNameMapping = {
"trans_err": "mATE",
"scale_err": "mASE",
"orient_err": "mAOE",
"vel_err": "mAVE",
"attr_err": "mAAE",
}
CLASSES = (
"car",
"truck",
"trailer",
"bus",
"construction_vehicle",
"bicycle",
"motorcycle",
"pedestrian",
"traffic_cone",
"barrier",
)
def __init__(
self,
ann_file,
pipeline=None,
dataset_root=None,
object_classes=None,
map_classes=None,
load_interval=1,
with_velocity=True,
modality=None,
box_type_3d="LiDAR",
filter_empty_gt=True,
test_mode=False,
eval_version="detection_cvpr_2019",
use_valid_flag=False,
) -> None:
self.load_interval = load_interval
self.use_valid_flag = use_valid_flag
super().__init__(
dataset_root=dataset_root,
ann_file=ann_file,
pipeline=pipeline,
classes=object_classes,
modality=modality,
box_type_3d=box_type_3d,
filter_empty_gt=filter_empty_gt,
test_mode=test_mode,
)
self.map_classes = map_classes
self.with_velocity = with_velocity
self.eval_version = eval_version
from nuscenes.eval.detection.config import config_factory
self.eval_detection_configs = config_factory(self.eval_version)
if self.modality is None:
self.modality = dict(
use_camera=False,
use_lidar=True,
use_radar=False,
use_map=False,
use_external=False,
)
def get_cat_ids(self, idx):
"""Get category distribution of single scene.
Args:
idx (int): Index of the data_info.
Returns:
            list[int]: Ids of the categories whose ground-truth
                boxes appear in the current scene.
"""
info = self.data_infos[idx]
if self.use_valid_flag:
mask = info["valid_flag"]
gt_names = set(info["gt_names"][mask])
else:
gt_names = set(info["gt_names"])
cat_ids = []
for name in gt_names:
if name in self.CLASSES:
cat_ids.append(self.cat2id[name])
return cat_ids
def load_annotations(self, ann_file):
"""Load annotations from ann_file.
Args:
ann_file (str): Path of the annotation file.
Returns:
list[dict]: List of annotations sorted by timestamps.
"""
data = mmcv.load(ann_file)
data_infos = list(sorted(data["infos"], key=lambda e: e["timestamp"]))
data_infos = data_infos[:: self.load_interval]
self.metadata = data["metadata"]
self.version = self.metadata["version"]
return data_infos
def get_data_info(self, index: int) -> Dict[str, Any]:
info = self.data_infos[index]
data = dict(
token=info["token"],
lidar_path=info["lidar_path"],
sweeps=info["sweeps"],
timestamp=info["timestamp"],
location=info["location"],
)
# ego to global transform
ego2global = np.eye(4).astype(np.float32)
ego2global[:3, :3] = Quaternion(info["ego2global_rotation"]).rotation_matrix
ego2global[:3, 3] = info["ego2global_translation"]
data["ego2global"] = ego2global
# lidar to ego transform
lidar2ego = np.eye(4).astype(np.float32)
lidar2ego[:3, :3] = Quaternion(info["lidar2ego_rotation"]).rotation_matrix
lidar2ego[:3, 3] = info["lidar2ego_translation"]
data["lidar2ego"] = lidar2ego
if self.modality["use_camera"]:
data["image_paths"] = []
data["lidar2camera"] = []
data["lidar2image"] = []
data["camera2ego"] = []
data["camera_intrinsics"] = []
data["camera2lidar"] = []
for _, camera_info in info["cams"].items():
data["image_paths"].append(camera_info["data_path"])
# lidar to camera transform
lidar2camera_r = np.linalg.inv(camera_info["sensor2lidar_rotation"])
lidar2camera_t = (
camera_info["sensor2lidar_translation"] @ lidar2camera_r.T
)
lidar2camera_rt = np.eye(4).astype(np.float32)
lidar2camera_rt[:3, :3] = lidar2camera_r.T
lidar2camera_rt[3, :3] = -lidar2camera_t
data["lidar2camera"].append(lidar2camera_rt.T)
# camera intrinsics
camera_intrinsics = np.eye(4).astype(np.float32)
camera_intrinsics[:3, :3] = camera_info["camera_intrinsics"]
data["camera_intrinsics"].append(camera_intrinsics)
# lidar to image transform
lidar2image = camera_intrinsics @ lidar2camera_rt.T
data["lidar2image"].append(lidar2image)
# camera to ego transform
camera2ego = np.eye(4).astype(np.float32)
camera2ego[:3, :3] = Quaternion(
camera_info["sensor2ego_rotation"]
).rotation_matrix
camera2ego[:3, 3] = camera_info["sensor2ego_translation"]
data["camera2ego"].append(camera2ego)
# camera to lidar transform
camera2lidar = np.eye(4).astype(np.float32)
camera2lidar[:3, :3] = camera_info["sensor2lidar_rotation"]
camera2lidar[:3, 3] = camera_info["sensor2lidar_translation"]
data["camera2lidar"].append(camera2lidar)
annos = self.get_ann_info(index)
data["ann_info"] = annos
return data
def get_ann_info(self, index):
"""Get annotation info according to the given index.
Args:
index (int): Index of the annotation data to get.
Returns:
dict: Annotation information consists of the following keys:
- gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`): \
3D ground truth bboxes
- gt_labels_3d (np.ndarray): Labels of ground truths.
- gt_names (list[str]): Class names of ground truths.
"""
info = self.data_infos[index]
# filter out bbox containing no points
if self.use_valid_flag:
mask = info["valid_flag"]
else:
mask = info["num_lidar_pts"] > 0
gt_bboxes_3d = info["gt_boxes"][mask]
gt_names_3d = info["gt_names"][mask]
gt_labels_3d = []
for cat in gt_names_3d:
if cat in self.CLASSES:
gt_labels_3d.append(self.CLASSES.index(cat))
else:
gt_labels_3d.append(-1)
gt_labels_3d = np.array(gt_labels_3d)
if self.with_velocity:
gt_velocity = info["gt_velocity"][mask]
nan_mask = np.isnan(gt_velocity[:, 0])
gt_velocity[nan_mask] = [0.0, 0.0]
gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_velocity], axis=-1)
# the nuscenes box center is [0.5, 0.5, 0.5], we change it to be
# the same as KITTI (0.5, 0.5, 0)
# haotian: this is an important change: from 0.5, 0.5, 0.5 -> 0.5, 0.5, 0
gt_bboxes_3d = LiDARInstance3DBoxes(
gt_bboxes_3d, box_dim=gt_bboxes_3d.shape[-1], origin=(0.5, 0.5, 0)
).convert_to(self.box_mode_3d)
anns_results = dict(
gt_bboxes_3d=gt_bboxes_3d,
gt_labels_3d=gt_labels_3d,
gt_names=gt_names_3d,
)
return anns_results
def _format_bbox(self, results, jsonfile_prefix=None):
"""Convert the results to the standard format.
Args:
results (list[dict]): Testing results of the dataset.
jsonfile_prefix (str): The prefix of the output jsonfile.
You can specify the output directory/filename by
modifying the jsonfile_prefix. Default: None.
Returns:
str: Path of the output json file.
"""
nusc_annos = {}
mapped_class_names = self.CLASSES
print("Start to convert detection format...")
for sample_id, det in enumerate(mmcv.track_iter_progress(results)):
annos = []
boxes = output_to_nusc_box(det)
sample_token = self.data_infos[sample_id]["token"]
boxes = lidar_nusc_box_to_global(
self.data_infos[sample_id],
boxes,
mapped_class_names,
self.eval_detection_configs,
self.eval_version,
)
for i, box in enumerate(boxes):
name = mapped_class_names[box.label]
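                # choose the attribute from speed: boxes moving faster than
                # 0.2 m/s get a "moving" / "with rider" attribute where one exists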
if np.sqrt(box.velocity[0] ** 2 + box.velocity[1] ** 2) > 0.2:
if name in [
"car",
"construction_vehicle",
"bus",
"truck",
"trailer",
]:
attr = "vehicle.moving"
elif name in ["bicycle", "motorcycle"]:
attr = "cycle.with_rider"
else:
attr = NuScenesDataset.DefaultAttribute[name]
else:
if name in ["pedestrian"]:
attr = "pedestrian.standing"
elif name in ["bus"]:
attr = "vehicle.stopped"
else:
attr = NuScenesDataset.DefaultAttribute[name]
nusc_anno = dict(
sample_token=sample_token,
translation=box.center.tolist(),
size=box.wlh.tolist(),
rotation=box.orientation.elements.tolist(),
velocity=box.velocity[:2].tolist(),
detection_name=name,
detection_score=box.score,
attribute_name=attr,
)
annos.append(nusc_anno)
nusc_annos[sample_token] = annos
nusc_submissions = {
"meta": self.modality,
"results": nusc_annos,
}
mmcv.mkdir_or_exist(jsonfile_prefix)
res_path = osp.join(jsonfile_prefix, "results_nusc.json")
print("Results writes to", res_path)
mmcv.dump(nusc_submissions, res_path)
return res_path
def _evaluate_single(
self,
result_path,
logger=None,
metric="bbox",
result_name="pts_bbox",
):
"""Evaluation for a single model in nuScenes protocol.
Args:
result_path (str): Path of the result file.
logger (logging.Logger | str | None): Logger used for printing
related information during evaluation. Default: None.
metric (str): Metric name used for evaluation. Default: 'bbox'.
result_name (str): Result name in the metric prefix.
Default: 'pts_bbox'.
Returns:
dict: Dictionary of evaluation details.
"""
from nuscenes import NuScenes
from nuscenes.eval.detection.evaluate import DetectionEval
output_dir = osp.join(*osp.split(result_path)[:-1])
nusc = NuScenes(version=self.version, dataroot=self.dataset_root, verbose=False)
eval_set_map = {
"v1.0-mini": "mini_val",
"v1.0-trainval": "val",
}
nusc_eval = DetectionEval(
nusc,
config=self.eval_detection_configs,
result_path=result_path,
eval_set=eval_set_map[self.version],
output_dir=output_dir,
verbose=False,
)
nusc_eval.main(render_curves=False)
# record metrics
metrics = mmcv.load(osp.join(output_dir, "metrics_summary.json"))
detail = dict()
for name in self.CLASSES:
for k, v in metrics["label_aps"][name].items():
val = float("{:.4f}".format(v))
detail["object/{}_ap_dist_{}".format(name, k)] = val
for k, v in metrics["label_tp_errors"][name].items():
val = float("{:.4f}".format(v))
detail["object/{}_{}".format(name, k)] = val
for k, v in metrics["tp_errors"].items():
val = float("{:.4f}".format(v))
detail["object/{}".format(self.ErrNameMapping[k])] = val
detail["object/nds"] = metrics["nd_score"]
detail["object/map"] = metrics["mean_ap"]
return detail
def format_results(self, results, jsonfile_prefix=None):
"""Format the results to json (standard format for COCO evaluation).
Args:
results (list[dict]): Testing results of the dataset.
jsonfile_prefix (str | None): The prefix of json files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
tuple: Returns (result_files, tmp_dir), where `result_files` is a \
                dict containing the json filepaths, `tmp_dir` is the temporary \
directory created for saving json files when \
`jsonfile_prefix` is not specified.
"""
assert isinstance(results, list), "results must be a list"
assert len(results) == len(
self
), "The length of results is not equal to the dataset len: {} != {}".format(
len(results), len(self)
)
if jsonfile_prefix is None:
tmp_dir = tempfile.TemporaryDirectory()
jsonfile_prefix = osp.join(tmp_dir.name, "results")
else:
tmp_dir = None
result_files = self._format_bbox(results, jsonfile_prefix)
return result_files, tmp_dir
def evaluate_map(self, results):
thresholds = torch.tensor([0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65])
num_classes = len(self.map_classes)
num_thresholds = len(thresholds)
tp = torch.zeros(num_classes, num_thresholds)
fp = torch.zeros(num_classes, num_thresholds)
fn = torch.zeros(num_classes, num_thresholds)
for result in results:
pred = result["masks_bev"]
label = result["gt_masks_bev"]
pred = pred.detach().reshape(num_classes, -1)
label = label.detach().bool().reshape(num_classes, -1)
pred = pred[:, :, None] >= thresholds
label = label[:, :, None]
tp += (pred & label).sum(dim=1)
fp += (pred & ~label).sum(dim=1)
fn += (~pred & label).sum(dim=1)
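        # dataset-level IoU per (class, threshold); the small epsilon keeps
        # the division finite for classes that never appear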
ious = tp / (tp + fp + fn + 1e-7)
metrics = {}
for index, name in enumerate(self.map_classes):
metrics[f"map/{name}/iou@max"] = ious[index].max().item()
for threshold, iou in zip(thresholds, ious[index]):
metrics[f"map/{name}/iou@{threshold.item():.2f}"] = iou.item()
metrics["map/mean/iou@max"] = ious.max(dim=1).values.mean().item()
return metrics
def evaluate(
self,
results,
metric="bbox",
jsonfile_prefix=None,
result_names=["pts_bbox"],
**kwargs,
):
"""Evaluation in nuScenes protocol.
Args:
results (list[dict]): Testing results of the dataset.
metric (str | list[str]): Metrics to be evaluated.
jsonfile_prefix (str | None): The prefix of json files. It includes
the file path and the prefix of filename, e.g., "a/b/prefix".
If not specified, a temp file will be created. Default: None.
Returns:
dict[str, float]: Results of each evaluation metric.
"""
metrics = {}
if "masks_bev" in results[0]:
metrics.update(self.evaluate_map(results))
if "boxes_3d" in results[0]:
result_files, tmp_dir = self.format_results(results, jsonfile_prefix)
if isinstance(result_files, dict):
for name in result_names:
print("Evaluating bboxes of {}".format(name))
ret_dict = self._evaluate_single(result_files[name])
metrics.update(ret_dict)
elif isinstance(result_files, str):
metrics.update(self._evaluate_single(result_files))
if tmp_dir is not None:
tmp_dir.cleanup()
return metrics
def output_to_nusc_box(detection):
"""Convert the output to the box class in the nuScenes.
Args:
detection (dict): Detection results.
- boxes_3d (:obj:`BaseInstance3DBoxes`): Detection bbox.
- scores_3d (torch.Tensor): Detection scores.
- labels_3d (torch.Tensor): Predicted box labels.
Returns:
list[:obj:`NuScenesBox`]: List of standard NuScenesBoxes.
"""
box3d = detection["boxes_3d"]
scores = detection["scores_3d"].numpy()
labels = detection["labels_3d"].numpy()
box_gravity_center = box3d.gravity_center.numpy()
box_dims = box3d.dims.numpy()
box_yaw = box3d.yaw.numpy()
# TODO: check whether this is necessary
# with dir_offset & dir_limit in the head
box_yaw = -box_yaw - np.pi / 2
box_list = []
for i in range(len(box3d)):
quat = pyquaternion.Quaternion(axis=[0, 0, 1], radians=box_yaw[i])
velocity = (*box3d.tensor[i, 7:9], 0.0)
# velo_val = np.linalg.norm(box3d[i, 7:9])
# velo_ori = box3d[i, 6]
# velocity = (
# velo_val * np.cos(velo_ori), velo_val * np.sin(velo_ori), 0.0)
box = NuScenesBox(
box_gravity_center[i],
box_dims[i],
quat,
label=labels[i],
score=scores[i],
velocity=velocity,
)
box_list.append(box)
return box_list
def lidar_nusc_box_to_global(
info, boxes, classes, eval_configs, eval_version="detection_cvpr_2019"
):
"""Convert the box from ego to global coordinate.
Args:
info (dict): Info for a specific sample data, including the
calibration information.
boxes (list[:obj:`NuScenesBox`]): List of predicted NuScenesBoxes.
classes (list[str]): Mapped classes in the evaluation.
eval_configs : Evaluation configuration object.
eval_version (str): Evaluation version.
Default: 'detection_cvpr_2019'
Returns:
list: List of standard NuScenesBoxes in the global
            coordinate frame.
"""
box_list = []
for box in boxes:
# Move box to ego vehicle coord system
box.rotate(pyquaternion.Quaternion(info["lidar2ego_rotation"]))
box.translate(np.array(info["lidar2ego_translation"]))
# filter det in ego.
cls_range_map = eval_configs.class_range
radius = np.linalg.norm(box.center[:2], 2)
det_range = cls_range_map[classes[box.label]]
if radius > det_range:
continue
# Move box to global coord system
box.rotate(pyquaternion.Quaternion(info["ego2global_rotation"]))
box.translate(np.array(info["ego2global_translation"]))
box_list.append(box)
return box_list
| kentang-mit | e4d599edd51f758fdbf1f6a58732d31c6f8a56cc | f39a4a0752fabc1eb81011b0433af69a6e9ff58c | Sure, we can remove this in a future version with coordinate system reformatting. | kentang-mit | 2 |
mit-han-lab/bevfusion | 150 | Add training details | In this PR, we add training details for the following models:
- camera-only detection
- camera-only BEV map segmentation
- LiDAR-only detection
- LiDAR-only BEV map segmentation
The results are also slightly improved compared with our previous release. | null | 2022-09-26 15:30:22+00:00 | 2022-09-26 22:24:39+00:00 | mmdet3d/models/fusion_models/bevfusion.py | from typing import Any, Dict
import torch
from mmcv.runner import auto_fp16, force_fp32
from torch import nn
from torch.nn import functional as F
from mmdet3d.models.builder import (
build_backbone,
build_fuser,
build_head,
build_neck,
build_vtransform,
)
from mmdet3d.ops import Voxelization
from mmdet3d.models import FUSIONMODELS
from .base import Base3DFusionModel
__all__ = ["BEVFusion"]
@FUSIONMODELS.register_module()
class BEVFusion(Base3DFusionModel):
def __init__(
self,
encoders: Dict[str, Any],
fuser: Dict[str, Any],
decoder: Dict[str, Any],
heads: Dict[str, Any],
**kwargs,
) -> None:
super().__init__()
self.encoders = nn.ModuleDict()
if encoders.get("camera") is not None:
self.encoders["camera"] = nn.ModuleDict(
{
"backbone": build_backbone(encoders["camera"]["backbone"]),
"neck": build_neck(encoders["camera"]["neck"]),
"vtransform": build_vtransform(encoders["camera"]["vtransform"]),
}
)
if encoders.get("lidar") is not None:
self.encoders["lidar"] = nn.ModuleDict(
{
"voxelize": Voxelization(**encoders["lidar"]["voxelize"]),
"backbone": build_backbone(encoders["lidar"]["backbone"]),
}
)
self.voxelize_reduce = encoders["lidar"].get("voxelize_reduce", True)
if fuser is not None:
self.fuser = build_fuser(fuser)
else:
self.fuser = None
self.decoder = nn.ModuleDict(
{
"backbone": build_backbone(decoder["backbone"]),
"neck": build_neck(decoder["neck"]),
}
)
self.heads = nn.ModuleDict()
for name in heads:
if heads[name] is not None:
self.heads[name] = build_head(heads[name])
if "loss_scale" in kwargs:
self.loss_scale = kwargs["loss_scale"]
else:
self.loss_scale = dict()
for name in heads:
if heads[name] is not None:
self.loss_scale[name] = 1.0
self.init_weights()
def init_weights(self) -> None:
if "camera" in self.encoders:
self.encoders["camera"]["backbone"].init_weights()
def extract_camera_features(
self,
x,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
img_aug_matrix,
lidar_aug_matrix,
img_metas,
) -> torch.Tensor:
B, N, C, H, W = x.size()
x = x.view(B * N, C, H, W)
x = self.encoders["camera"]["backbone"](x)
x = self.encoders["camera"]["neck"](x)
if not isinstance(x, torch.Tensor):
x = x[0]
BN, C, H, W = x.size()
x = x.view(B, int(BN / B), C, H, W)
x = self.encoders["camera"]["vtransform"](
x,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
img_aug_matrix,
lidar_aug_matrix,
img_metas,
)
return x
def extract_lidar_features(self, x) -> torch.Tensor:
feats, coords, sizes = self.voxelize(x)
batch_size = coords[-1, 0] + 1
x = self.encoders["lidar"]["backbone"](feats, coords, batch_size, sizes=sizes)
return x
@torch.no_grad()
@force_fp32()
def voxelize(self, points):
feats, coords, sizes = [], [], []
for k, res in enumerate(points):
f, c, n = self.encoders["lidar"]["voxelize"](res)
feats.append(f)
coords.append(F.pad(c, (1, 0), mode="constant", value=k))
sizes.append(n)
feats = torch.cat(feats, dim=0)
coords = torch.cat(coords, dim=0)
sizes = torch.cat(sizes, dim=0)
if self.voxelize_reduce:
feats = feats.sum(dim=1, keepdim=False) / sizes.type_as(feats).view(-1, 1)
feats = feats.contiguous()
return feats, coords, sizes
@auto_fp16(apply_to=("img", "points"))
def forward(
self,
img,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
img_aug_matrix,
lidar_aug_matrix,
metas,
gt_masks_bev=None,
gt_bboxes_3d=None,
gt_labels_3d=None,
**kwargs,
):
features = []
for sensor in self.encoders:
if sensor == "camera":
feature = self.extract_camera_features(
img,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
img_aug_matrix,
lidar_aug_matrix,
metas,
)
elif sensor == "lidar":
feature = self.extract_lidar_features(points)
else:
raise ValueError(f"unsupported sensor: {sensor}")
features.append(feature)
if self.fuser is not None:
x = self.fuser(features)
else:
assert len(features) == 1, features
x = features[0]
batch_size = x.shape[0]
x = self.decoder["backbone"](x)
x = self.decoder["neck"](x)
if self.training:
outputs = {}
for type, head in self.heads.items():
if type == "object":
pred_dict = head(x, metas)
losses = head.loss(gt_bboxes_3d, gt_labels_3d, pred_dict)
elif type == "map":
losses = head(x, gt_masks_bev)
else:
raise ValueError(f"unsupported head: {type}")
for name, val in losses.items():
if val.requires_grad:
outputs[f"loss/{type}/{name}"] = val * self.loss_scale[type]
else:
outputs[f"stats/{type}/{name}"] = val
return outputs
else:
outputs = [{} for _ in range(batch_size)]
for type, head in self.heads.items():
if type == "object":
pred_dict = head(x, metas)
bboxes = head.get_bboxes(pred_dict, metas)
for k, (boxes, scores, labels) in enumerate(bboxes):
outputs[k].update(
{
"boxes_3d": boxes.to("cpu"),
"scores_3d": scores.cpu(),
"labels_3d": labels.cpu(),
}
)
elif type == "map":
logits = head(x)
for k in range(batch_size):
outputs[k].update(
{
"masks_bev": logits[k].cpu(),
"gt_masks_bev": gt_masks_bev[k].cpu(),
}
)
else:
raise ValueError(f"unsupported head: {type}")
return outputs
| from typing import Any, Dict
import torch
from mmcv.runner import auto_fp16, force_fp32
from torch import nn
from torch.nn import functional as F
from mmdet3d.models.builder import (
build_backbone,
build_fuser,
build_head,
build_neck,
build_vtransform,
)
from mmdet3d.ops import Voxelization, DynamicScatter
from mmdet3d.models import FUSIONMODELS
from .base import Base3DFusionModel
__all__ = ["BEVFusion"]
@FUSIONMODELS.register_module()
class BEVFusion(Base3DFusionModel):
def __init__(
self,
encoders: Dict[str, Any],
fuser: Dict[str, Any],
decoder: Dict[str, Any],
heads: Dict[str, Any],
**kwargs,
) -> None:
super().__init__()
self.encoders = nn.ModuleDict()
if encoders.get("camera") is not None:
self.encoders["camera"] = nn.ModuleDict(
{
"backbone": build_backbone(encoders["camera"]["backbone"]),
"neck": build_neck(encoders["camera"]["neck"]),
"vtransform": build_vtransform(encoders["camera"]["vtransform"]),
}
)
if encoders.get("lidar") is not None:
if encoders["lidar"]["voxelize"].get("max_num_points", -1) > 0:
voxelize_module = Voxelization(**encoders["lidar"]["voxelize"])
else:
voxelize_module = DynamicScatter(**encoders["lidar"]["voxelize"])
self.encoders["lidar"] = nn.ModuleDict(
{
"voxelize": voxelize_module,
"backbone": build_backbone(encoders["lidar"]["backbone"]),
}
)
self.voxelize_reduce = encoders["lidar"].get("voxelize_reduce", True)
if fuser is not None:
self.fuser = build_fuser(fuser)
else:
self.fuser = None
self.decoder = nn.ModuleDict(
{
"backbone": build_backbone(decoder["backbone"]),
"neck": build_neck(decoder["neck"]),
}
)
self.heads = nn.ModuleDict()
for name in heads:
if heads[name] is not None:
self.heads[name] = build_head(heads[name])
if "loss_scale" in kwargs:
self.loss_scale = kwargs["loss_scale"]
else:
self.loss_scale = dict()
for name in heads:
if heads[name] is not None:
self.loss_scale[name] = 1.0
self.init_weights()
def init_weights(self) -> None:
if "camera" in self.encoders:
self.encoders["camera"]["backbone"].init_weights()
def extract_camera_features(
self,
x,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
camera2lidar,
img_aug_matrix,
lidar_aug_matrix,
img_metas,
) -> torch.Tensor:
B, N, C, H, W = x.size()
x = x.view(B * N, C, H, W)
x = self.encoders["camera"]["backbone"](x)
x = self.encoders["camera"]["neck"](x)
if not isinstance(x, torch.Tensor):
x = x[0]
BN, C, H, W = x.size()
x = x.view(B, int(BN / B), C, H, W)
x = self.encoders["camera"]["vtransform"](
x,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
camera2lidar,
img_aug_matrix,
lidar_aug_matrix,
img_metas,
)
return x
def extract_lidar_features(self, x) -> torch.Tensor:
feats, coords, sizes = self.voxelize(x)
batch_size = coords[-1, 0] + 1
x = self.encoders["lidar"]["backbone"](feats, coords, batch_size, sizes=sizes)
return x
@torch.no_grad()
@force_fp32()
def voxelize(self, points):
feats, coords, sizes = [], [], []
for k, res in enumerate(points):
ret = self.encoders["lidar"]["voxelize"](res)
if len(ret) == 3:
# hard voxelize
f, c, n = ret
else:
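                # dynamic voxelize: returns (features, coords) only,
                # with no per-voxel point counts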
assert len(ret) == 2
f, c = ret
n = None
feats.append(f)
coords.append(F.pad(c, (1, 0), mode="constant", value=k))
if n is not None:
sizes.append(n)
feats = torch.cat(feats, dim=0)
coords = torch.cat(coords, dim=0)
if len(sizes) > 0:
sizes = torch.cat(sizes, dim=0)
if self.voxelize_reduce:
feats = feats.sum(dim=1, keepdim=False) / sizes.type_as(feats).view(
-1, 1
)
feats = feats.contiguous()
return feats, coords, sizes
@auto_fp16(apply_to=("img", "points"))
def forward(
self,
img,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
camera2lidar,
img_aug_matrix,
lidar_aug_matrix,
metas,
gt_masks_bev=None,
gt_bboxes_3d=None,
gt_labels_3d=None,
**kwargs,
):
if isinstance(img, list):
raise NotImplementedError
else:
outputs = self.forward_single(
img,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
camera2lidar,
img_aug_matrix,
lidar_aug_matrix,
metas,
gt_masks_bev,
gt_bboxes_3d,
gt_labels_3d,
**kwargs,
)
return outputs
@auto_fp16(apply_to=("img", "points"))
def forward_single(
self,
img,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
camera2lidar,
img_aug_matrix,
lidar_aug_matrix,
metas,
gt_masks_bev=None,
gt_bboxes_3d=None,
gt_labels_3d=None,
**kwargs,
):
features = []
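        # iterate encoders in config order during training; at test time run
        # them in reverse (LiDAR first) and flip the feature list back below,
        # which lowers peak memory without changing the fusion order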
for sensor in (
self.encoders if self.training else list(self.encoders.keys())[::-1]
):
if sensor == "camera":
feature = self.extract_camera_features(
img,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
camera2lidar,
img_aug_matrix,
lidar_aug_matrix,
metas,
)
elif sensor == "lidar":
feature = self.extract_lidar_features(points)
else:
raise ValueError(f"unsupported sensor: {sensor}")
features.append(feature)
if not self.training:
# avoid OOM
features = features[::-1]
if self.fuser is not None:
x = self.fuser(features)
else:
assert len(features) == 1, features
x = features[0]
batch_size = x.shape[0]
x = self.decoder["backbone"](x)
x = self.decoder["neck"](x)
if self.training:
outputs = {}
for type, head in self.heads.items():
if type == "object":
pred_dict = head(x, metas)
losses = head.loss(gt_bboxes_3d, gt_labels_3d, pred_dict)
elif type == "map":
losses = head(x, gt_masks_bev)
else:
raise ValueError(f"unsupported head: {type}")
for name, val in losses.items():
if val.requires_grad:
outputs[f"loss/{type}/{name}"] = val * self.loss_scale[type]
else:
outputs[f"stats/{type}/{name}"] = val
return outputs
else:
outputs = [{} for _ in range(batch_size)]
for type, head in self.heads.items():
if type == "object":
pred_dict = head(x, metas)
bboxes = head.get_bboxes(pred_dict, metas)
for k, (boxes, scores, labels) in enumerate(bboxes):
outputs[k].update(
{
"boxes_3d": boxes.to("cpu"),
"scores_3d": scores.cpu(),
"labels_3d": labels.cpu(),
}
)
elif type == "map":
logits = head(x)
for k in range(batch_size):
outputs[k].update(
{
"masks_bev": logits[k].cpu(),
"gt_masks_bev": gt_masks_bev[k].cpu(),
}
)
else:
raise ValueError(f"unsupported head: {type}")
return outputs
| kentang-mit | e4d599edd51f758fdbf1f6a58732d31c6f8a56cc | f39a4a0752fabc1eb81011b0433af69a6e9ff58c | Remove the commented code. | zhijian-liu | 3 |
mit-han-lab/bevfusion | 150 | Add training details | In this PR, we add training details for the following models:
- camera-only detection
- camera-only BEV map segmentation
- LiDAR-only detection
- LiDAR-only BEV map segmentation
The results are also slightly improved compared with our previous release. | null | 2022-09-26 15:30:22+00:00 | 2022-09-26 22:24:39+00:00 | mmdet3d/models/fusion_models/bevfusion.py | from typing import Any, Dict
import torch
from mmcv.runner import auto_fp16, force_fp32
from torch import nn
from torch.nn import functional as F
from mmdet3d.models.builder import (
build_backbone,
build_fuser,
build_head,
build_neck,
build_vtransform,
)
from mmdet3d.ops import Voxelization
from mmdet3d.models import FUSIONMODELS
from .base import Base3DFusionModel
__all__ = ["BEVFusion"]
@FUSIONMODELS.register_module()
class BEVFusion(Base3DFusionModel):
def __init__(
self,
encoders: Dict[str, Any],
fuser: Dict[str, Any],
decoder: Dict[str, Any],
heads: Dict[str, Any],
**kwargs,
) -> None:
super().__init__()
self.encoders = nn.ModuleDict()
if encoders.get("camera") is not None:
self.encoders["camera"] = nn.ModuleDict(
{
"backbone": build_backbone(encoders["camera"]["backbone"]),
"neck": build_neck(encoders["camera"]["neck"]),
"vtransform": build_vtransform(encoders["camera"]["vtransform"]),
}
)
if encoders.get("lidar") is not None:
self.encoders["lidar"] = nn.ModuleDict(
{
"voxelize": Voxelization(**encoders["lidar"]["voxelize"]),
"backbone": build_backbone(encoders["lidar"]["backbone"]),
}
)
self.voxelize_reduce = encoders["lidar"].get("voxelize_reduce", True)
if fuser is not None:
self.fuser = build_fuser(fuser)
else:
self.fuser = None
self.decoder = nn.ModuleDict(
{
"backbone": build_backbone(decoder["backbone"]),
"neck": build_neck(decoder["neck"]),
}
)
self.heads = nn.ModuleDict()
for name in heads:
if heads[name] is not None:
self.heads[name] = build_head(heads[name])
if "loss_scale" in kwargs:
self.loss_scale = kwargs["loss_scale"]
else:
self.loss_scale = dict()
for name in heads:
if heads[name] is not None:
self.loss_scale[name] = 1.0
self.init_weights()
def init_weights(self) -> None:
if "camera" in self.encoders:
self.encoders["camera"]["backbone"].init_weights()
def extract_camera_features(
self,
x,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
img_aug_matrix,
lidar_aug_matrix,
img_metas,
) -> torch.Tensor:
B, N, C, H, W = x.size()
x = x.view(B * N, C, H, W)
x = self.encoders["camera"]["backbone"](x)
x = self.encoders["camera"]["neck"](x)
if not isinstance(x, torch.Tensor):
x = x[0]
BN, C, H, W = x.size()
x = x.view(B, int(BN / B), C, H, W)
x = self.encoders["camera"]["vtransform"](
x,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
img_aug_matrix,
lidar_aug_matrix,
img_metas,
)
return x
def extract_lidar_features(self, x) -> torch.Tensor:
feats, coords, sizes = self.voxelize(x)
batch_size = coords[-1, 0] + 1
x = self.encoders["lidar"]["backbone"](feats, coords, batch_size, sizes=sizes)
return x
@torch.no_grad()
@force_fp32()
def voxelize(self, points):
feats, coords, sizes = [], [], []
for k, res in enumerate(points):
f, c, n = self.encoders["lidar"]["voxelize"](res)
feats.append(f)
coords.append(F.pad(c, (1, 0), mode="constant", value=k))
sizes.append(n)
feats = torch.cat(feats, dim=0)
coords = torch.cat(coords, dim=0)
sizes = torch.cat(sizes, dim=0)
if self.voxelize_reduce:
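            # mean-pool the raw point features inside each voxel
            # (sum over points divided by the per-voxel point count)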
feats = feats.sum(dim=1, keepdim=False) / sizes.type_as(feats).view(-1, 1)
feats = feats.contiguous()
return feats, coords, sizes
@auto_fp16(apply_to=("img", "points"))
def forward(
self,
img,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
img_aug_matrix,
lidar_aug_matrix,
metas,
gt_masks_bev=None,
gt_bboxes_3d=None,
gt_labels_3d=None,
**kwargs,
):
features = []
for sensor in self.encoders:
if sensor == "camera":
feature = self.extract_camera_features(
img,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
img_aug_matrix,
lidar_aug_matrix,
metas,
)
elif sensor == "lidar":
feature = self.extract_lidar_features(points)
else:
raise ValueError(f"unsupported sensor: {sensor}")
features.append(feature)
if self.fuser is not None:
x = self.fuser(features)
else:
assert len(features) == 1, features
x = features[0]
batch_size = x.shape[0]
x = self.decoder["backbone"](x)
x = self.decoder["neck"](x)
if self.training:
outputs = {}
for type, head in self.heads.items():
if type == "object":
pred_dict = head(x, metas)
losses = head.loss(gt_bboxes_3d, gt_labels_3d, pred_dict)
elif type == "map":
losses = head(x, gt_masks_bev)
else:
raise ValueError(f"unsupported head: {type}")
for name, val in losses.items():
if val.requires_grad:
outputs[f"loss/{type}/{name}"] = val * self.loss_scale[type]
else:
outputs[f"stats/{type}/{name}"] = val
return outputs
else:
outputs = [{} for _ in range(batch_size)]
for type, head in self.heads.items():
if type == "object":
pred_dict = head(x, metas)
bboxes = head.get_bboxes(pred_dict, metas)
for k, (boxes, scores, labels) in enumerate(bboxes):
outputs[k].update(
{
"boxes_3d": boxes.to("cpu"),
"scores_3d": scores.cpu(),
"labels_3d": labels.cpu(),
}
)
elif type == "map":
logits = head(x)
for k in range(batch_size):
outputs[k].update(
{
"masks_bev": logits[k].cpu(),
"gt_masks_bev": gt_masks_bev[k].cpu(),
}
)
else:
raise ValueError(f"unsupported head: {type}")
return outputs
| from typing import Any, Dict
import torch
from mmcv.runner import auto_fp16, force_fp32
from torch import nn
from torch.nn import functional as F
from mmdet3d.models.builder import (
build_backbone,
build_fuser,
build_head,
build_neck,
build_vtransform,
)
from mmdet3d.ops import Voxelization, DynamicScatter
from mmdet3d.models import FUSIONMODELS
from .base import Base3DFusionModel
__all__ = ["BEVFusion"]
@FUSIONMODELS.register_module()
class BEVFusion(Base3DFusionModel):
def __init__(
self,
encoders: Dict[str, Any],
fuser: Dict[str, Any],
decoder: Dict[str, Any],
heads: Dict[str, Any],
**kwargs,
) -> None:
super().__init__()
self.encoders = nn.ModuleDict()
if encoders.get("camera") is not None:
self.encoders["camera"] = nn.ModuleDict(
{
"backbone": build_backbone(encoders["camera"]["backbone"]),
"neck": build_neck(encoders["camera"]["neck"]),
"vtransform": build_vtransform(encoders["camera"]["vtransform"]),
}
)
if encoders.get("lidar") is not None:
if encoders["lidar"]["voxelize"].get("max_num_points", -1) > 0:
voxelize_module = Voxelization(**encoders["lidar"]["voxelize"])
else:
voxelize_module = DynamicScatter(**encoders["lidar"]["voxelize"])
self.encoders["lidar"] = nn.ModuleDict(
{
"voxelize": voxelize_module,
"backbone": build_backbone(encoders["lidar"]["backbone"]),
}
)
self.voxelize_reduce = encoders["lidar"].get("voxelize_reduce", True)
if fuser is not None:
self.fuser = build_fuser(fuser)
else:
self.fuser = None
self.decoder = nn.ModuleDict(
{
"backbone": build_backbone(decoder["backbone"]),
"neck": build_neck(decoder["neck"]),
}
)
self.heads = nn.ModuleDict()
for name in heads:
if heads[name] is not None:
self.heads[name] = build_head(heads[name])
if "loss_scale" in kwargs:
self.loss_scale = kwargs["loss_scale"]
else:
self.loss_scale = dict()
for name in heads:
if heads[name] is not None:
self.loss_scale[name] = 1.0
self.init_weights()
def init_weights(self) -> None:
if "camera" in self.encoders:
self.encoders["camera"]["backbone"].init_weights()
def extract_camera_features(
self,
x,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
camera2lidar,
img_aug_matrix,
lidar_aug_matrix,
img_metas,
) -> torch.Tensor:
B, N, C, H, W = x.size()
x = x.view(B * N, C, H, W)
x = self.encoders["camera"]["backbone"](x)
x = self.encoders["camera"]["neck"](x)
if not isinstance(x, torch.Tensor):
x = x[0]
BN, C, H, W = x.size()
x = x.view(B, int(BN / B), C, H, W)
x = self.encoders["camera"]["vtransform"](
x,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
camera2lidar,
img_aug_matrix,
lidar_aug_matrix,
img_metas,
)
return x
def extract_lidar_features(self, x) -> torch.Tensor:
feats, coords, sizes = self.voxelize(x)
batch_size = coords[-1, 0] + 1
x = self.encoders["lidar"]["backbone"](feats, coords, batch_size, sizes=sizes)
return x
@torch.no_grad()
@force_fp32()
def voxelize(self, points):
feats, coords, sizes = [], [], []
for k, res in enumerate(points):
ret = self.encoders["lidar"]["voxelize"](res)
if len(ret) == 3:
# hard voxelize
f, c, n = ret
else:
assert len(ret) == 2
f, c = ret
n = None
feats.append(f)
coords.append(F.pad(c, (1, 0), mode="constant", value=k))
if n is not None:
sizes.append(n)
feats = torch.cat(feats, dim=0)
coords = torch.cat(coords, dim=0)
if len(sizes) > 0:
sizes = torch.cat(sizes, dim=0)
if self.voxelize_reduce:
feats = feats.sum(dim=1, keepdim=False) / sizes.type_as(feats).view(
-1, 1
)
feats = feats.contiguous()
return feats, coords, sizes
@auto_fp16(apply_to=("img", "points"))
def forward(
self,
img,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
camera2lidar,
img_aug_matrix,
lidar_aug_matrix,
metas,
gt_masks_bev=None,
gt_bboxes_3d=None,
gt_labels_3d=None,
**kwargs,
):
if isinstance(img, list):
raise NotImplementedError
else:
outputs = self.forward_single(
img,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
camera2lidar,
img_aug_matrix,
lidar_aug_matrix,
metas,
gt_masks_bev,
gt_bboxes_3d,
gt_labels_3d,
**kwargs,
)
return outputs
@auto_fp16(apply_to=("img", "points"))
def forward_single(
self,
img,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
camera2lidar,
img_aug_matrix,
lidar_aug_matrix,
metas,
gt_masks_bev=None,
gt_bboxes_3d=None,
gt_labels_3d=None,
**kwargs,
):
features = []
for sensor in (
self.encoders if self.training else list(self.encoders.keys())[::-1]
):
if sensor == "camera":
feature = self.extract_camera_features(
img,
points,
camera2ego,
lidar2ego,
lidar2camera,
lidar2image,
camera_intrinsics,
camera2lidar,
img_aug_matrix,
lidar_aug_matrix,
metas,
)
elif sensor == "lidar":
feature = self.extract_lidar_features(points)
else:
raise ValueError(f"unsupported sensor: {sensor}")
features.append(feature)
if not self.training:
# avoid OOM
features = features[::-1]
if self.fuser is not None:
x = self.fuser(features)
else:
assert len(features) == 1, features
x = features[0]
batch_size = x.shape[0]
x = self.decoder["backbone"](x)
x = self.decoder["neck"](x)
if self.training:
outputs = {}
for type, head in self.heads.items():
if type == "object":
pred_dict = head(x, metas)
losses = head.loss(gt_bboxes_3d, gt_labels_3d, pred_dict)
elif type == "map":
losses = head(x, gt_masks_bev)
else:
raise ValueError(f"unsupported head: {type}")
for name, val in losses.items():
if val.requires_grad:
outputs[f"loss/{type}/{name}"] = val * self.loss_scale[type]
else:
outputs[f"stats/{type}/{name}"] = val
return outputs
else:
outputs = [{} for _ in range(batch_size)]
for type, head in self.heads.items():
if type == "object":
pred_dict = head(x, metas)
bboxes = head.get_bboxes(pred_dict, metas)
for k, (boxes, scores, labels) in enumerate(bboxes):
outputs[k].update(
{
"boxes_3d": boxes.to("cpu"),
"scores_3d": scores.cpu(),
"labels_3d": labels.cpu(),
}
)
elif type == "map":
logits = head(x)
for k in range(batch_size):
outputs[k].update(
{
"masks_bev": logits[k].cpu(),
"gt_masks_bev": gt_masks_bev[k].cpu(),
}
)
else:
raise ValueError(f"unsupported head: {type}")
return outputs
| kentang-mit | e4d599edd51f758fdbf1f6a58732d31c6f8a56cc | f39a4a0752fabc1eb81011b0433af69a6e9ff58c | Done. | kentang-mit | 4 |
mit-han-lab/bevfusion | 145 | Add docker support | This is related to [PR](https://github.com/mit-han-lab/bevfusion/pull/144) from @bentherien.
We provide an alternative that lets users build the Docker image themselves. The required libraries and their versions are clearly listed in `docker/Dockerfile`. Hopefully this will also be helpful for people who are setting up the environment directly on their host machines.
I also plan to merge [PR](https://github.com/mit-han-lab/bevfusion/pull/144) after this one. | null | 2022-09-24 01:32:27+00:00 | 2022-09-26 22:51:16+00:00 | README.md | # BEVFusion
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/bevfusion-multi-task-multi-sensor-fusion-with/3d-object-detection-on-nuscenes)](https://paperswithcode.com/sota/3d-object-detection-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/bevfusion-multi-task-multi-sensor-fusion-with/3d-multi-object-tracking-on-nuscenes)](https://paperswithcode.com/sota/3d-multi-object-tracking-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
### [website](http://bevfusion.mit.edu/) | [paper](https://arxiv.org/abs/2205.13542) | [video](https://www.youtube.com/watch?v=uCAka90si9E)
![demo](assets/demo.gif)
## News
**If you are interested in getting updates, please sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSfkmfsX45HstL5rUQlS7xJthhS3Z_Pm2NOVstlXUqgaK4DEfQ/viewform) to get notified!**
- **(2022/8/16)** BEVFusion ranks first on the [Waymo](https://waymo.com/open/challenges/2020/3d-detection/) 3D object detection leaderboard among all solutions.
- **(2022/6/3)** BEVFusion ranks first on [nuScenes](https://nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any) among all solutions.
- **(2022/6/3)** We released the first version of BEVFusion (with pre-trained checkpoints and evaluation).
- **(2022/5/26)** BEVFusion is released on [arXiv](https://arxiv.org/abs/2205.13542).
- **(2022/5/2)** BEVFusion ranks first on [nuScenes](https://nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any) among all solutions that do not use test-time augmentation and model ensemble.
## Abstract
Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features. However, the camera-to-LiDAR projection throws away the semantic density of camera features, hindering the effectiveness of such methods, especially for semantic-oriented tasks (such as 3D scene segmentation). In this paper, we break this deeply-rooted convention with BEVFusion, an efficient and generic multi-task multi-sensor fusion framework. It unifies multi-modal features in the shared bird's-eye view (BEV) representation space, which nicely preserves both geometric and semantic information. To achieve this, we diagnose and lift key efficiency bottlenecks in the view transformation with optimized BEV pooling, reducing latency by more than **40x**. BEVFusion is fundamentally task-agnostic and seamlessly supports different 3D perception tasks with almost no architectural changes. It establishes the new state of the art on the nuScenes benchmark, achieving **1.3%** higher mAP and NDS on 3D object detection and **13.6%** higher mIoU on BEV map segmentation, with **1.9x** lower computation cost.
## Results
### 3D Object Detection (on nuScenes test)
| Model | Modality | mAP | NDS |
| :-------: | :------: | :--: | :--: |
| BEVFusion-e | C+L | 74.99 | 76.09 |
| BEVFusion | C+L | 70.23 | 72.88 |
### 3D Object Detection (on nuScenes validation)
| Model | Modality | mAP | NDS | Checkpoint |
| :------------------: | :------: | :--: | :--: | :---------: |
| [BEVFusion](configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml) | C+L | 68.52 | 71.38 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/bevfusion-det.pth) |
| [Camera-Only Baseline](configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/default.yaml) | C | 35.56 | 41.21 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/camera-only-det.pth) |
| [LiDAR-Only Baseline](configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml) | L | 64.68 | 69.28 | [Link](https://bevfusion.mit.edu/files/pretrained/lidar-only-det.pth) |
*Note*: The camera-only object detection baseline is a variant of BEVDet-Tiny with a much heavier view transformer and other differences in hyperparameters. Thanks to our [efficient BEV pooling](mmdet3d/ops/bev_pool) operator, this model runs fast and has higher mAP than BEVDet-Tiny under the same input resolution. Please refer to [BEVDet repo](https://github.com/HuangJunjie2017/BEVDet) for the original BEVDet-Tiny implementation. The LiDAR-only baseline is TransFusion-L.
### BEV Map Segmentation (on nuScenes validation)
| Model | Modality | mIoU | Checkpoint |
| :------------------: | :------: | :--: | :---------: |
| [BEVFusion](configs/nuscenes/seg/fusion-bev256d2-lss.yaml) | C+L | 62.95 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/bevfusion-seg.pth) |
| [Camera-Only Baseline](configs/nuscenes/seg/camera-bev256d2.yaml) | C | 57.09 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/camera-only-seg.pth) |
| [LiDAR-Only Baseline](configs/nuscenes/seg/lidar-centerpoint-bev128.yaml) | L | 48.56 | [Link](https://bevfusion.mit.edu/files/pretrained/lidar-only-seg.pth) |
## Usage
### Prerequisites
The code is built with the following libraries:
- Python >= 3.8, \<3.9
- OpenMPI = 4.0.4 and mpi4py = 3.0.3 (Needed for torchpack)
- Pillow = 8.4.0 (see [here](https://github.com/mit-han-lab/bevfusion/issues/63))
- [PyTorch](https://github.com/pytorch/pytorch) >= 1.9, \<= 1.10.2
- [tqdm](https://github.com/tqdm/tqdm)
- [torchpack](https://github.com/mit-han-lab/torchpack)
- [mmcv](https://github.com/open-mmlab/mmcv) = 1.4.0
- [mmdetection](http://github.com/open-mmlab/mmdetection) = 2.20.0
- [nuscenes-dev-kit](https://github.com/nutonomy/nuscenes-devkit)
After installing these dependencies, please run this command to install the codebase:
```bash
python setup.py develop
```
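If you would rather pin the Python dependencies explicitly before installing the codebase, an installation along these lines should work as a rough sketch (the exact package names, in particular `mmcv-full` as the CUDA-enabled build of mmcv, are our assumptions rather than an official requirements list):

```bash
# assumed pip equivalents of the version pins listed above
pip install Pillow==8.4.0 tqdm mpi4py==3.0.3 torchpack
pip install mmcv-full==1.4.0 mmdet==2.20.0 nuscenes-devkit
```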
### Data Preparation
#### nuScenes
Please follow the instructions from [here](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/en/datasets/nuscenes_det.md) to download and preprocess the nuScenes dataset. Please remember to download both the detection dataset and the map extension (for BEV map segmentation). After data preparation, you will be able to see the following directory structure (as indicated in mmdetection3d):
```
mmdetection3d
├── mmdet3d
├── tools
├── configs
├── data
│ ├── nuscenes
│ │ ├── maps
│ │ ├── samples
│ │ ├── sweeps
│ │ ├── v1.0-test
| | ├── v1.0-trainval
│ │ ├── nuscenes_database
│ │ ├── nuscenes_infos_train.pkl
│ │ ├── nuscenes_infos_val.pkl
│ │ ├── nuscenes_infos_test.pkl
│ │ ├── nuscenes_dbinfos_train.pkl
```
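If you need to (re-)generate the info files yourself, the command usually follows the mmdetection3d convention sketched below; the script name and flags are assumptions based on the upstream tooling, so please check `tools/` in this repository first:

```bash
# assumed mmdetection3d-style info-file generation for nuScenes
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
```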
### Evaluation
We also provide instructions for evaluating our pretrained models. Please download the checkpoints using the following script:
```bash
./tools/download_pretrained.sh
```
Then, you will be able to run:
```bash
torchpack dist-run -np 8 python tools/test.py [config file path] pretrained/[checkpoint name].pth --eval [evaluation type]
```
For example, if you want to evaluate the detection variant of BEVFusion, you can try:
```bash
torchpack dist-run -np 8 python tools/test.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml pretrained/bevfusion-det.pth --eval bbox
```
While for the segmentation variant of BEVFusion, this command will be helpful:
```bash
torchpack dist-run -np 8 python tools/test.py configs/nuscenes/seg/fusion-bev256d2-lss.yaml pretrained/bevfusion-seg.pth --eval map
```
### Training
We provide instructions to reproduce our results on nuScenes.
For example, if you want to train the camera-only variant for object detection, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/default.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth
```
For camera-only BEV segmentation model, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/camera-bev256d2.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth
```
For LiDAR-only detector, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml
```
For LiDAR-only BEV segmentation model, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/lidar-centerpoint-bev128.yaml
```
## FAQs
Q: Can we directly use the info files prepared by mmdetection3d?
A: We recommend re-generating the info files using this codebase since we forked mmdetection3d before their [coordinate system refactoring](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/en/changelog.md).
## Acknowledgements
BEVFusion is based on [mmdetection3d](https://github.com/open-mmlab/mmdetection3d). It is also greatly inspired by the following outstanding contributions to the open-source community: [LSS](https://github.com/nv-tlabs/lift-splat-shoot), [BEVDet](https://github.com/HuangJunjie2017/BEVDet), [TransFusion](https://github.com/XuyangBai/TransFusion), [CenterPoint](https://github.com/tianweiy/CenterPoint), [MVP](https://github.com/tianweiy/MVP), [FUTR3D](https://arxiv.org/abs/2203.10642), [CVT](https://github.com/bradyz/cross_view_transformers) and [DETR3D](https://github.com/WangYueFt/detr3d).
Please also check out related papers in the camera-only 3D perception community such as [BEVDet4D](https://arxiv.org/abs/2203.17054), [BEVerse](https://arxiv.org/abs/2205.09743), [BEVFormer](https://arxiv.org/abs/2203.17270), [M2BEV](https://arxiv.org/abs/2204.05088), [PETR](https://arxiv.org/abs/2203.05625) and [PETRv2](https://arxiv.org/abs/2206.01256), which might be interesting future extensions to BEVFusion.
## Citation
If BEVFusion is useful or relevant to your research, please kindly recognize our contributions by citing our paper:
```bibtex
@article{liu2022bevfusion,
title={BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation},
author={Liu, Zhijian and Tang, Haotian and Amini, Alexander and Yang, Xingyu and Mao, Huizi and Rus, Daniela and Han, Song},
journal={arXiv},
year={2022}
}
```
| # BEVFusion
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/bevfusion-multi-task-multi-sensor-fusion-with/3d-object-detection-on-nuscenes)](https://paperswithcode.com/sota/3d-object-detection-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/bevfusion-multi-task-multi-sensor-fusion-with/3d-multi-object-tracking-on-nuscenes)](https://paperswithcode.com/sota/3d-multi-object-tracking-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
### [website](http://bevfusion.mit.edu/) | [paper](https://arxiv.org/abs/2205.13542) | [video](https://www.youtube.com/watch?v=uCAka90si9E)
![demo](assets/demo.gif)
## News
**If you are interested in getting updates, please sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSfkmfsX45HstL5rUQlS7xJthhS3Z_Pm2NOVstlXUqgaK4DEfQ/viewform) to get notified!**
- **(2022/8/16)** BEVFusion ranks first on the [Waymo](https://waymo.com/open/challenges/2020/3d-detection/) 3D object detection leaderboard among all solutions.
- **(2022/6/3)** BEVFusion ranks first on [nuScenes](https://nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any) among all solutions.
- **(2022/6/3)** We released the first version of BEVFusion (with pre-trained checkpoints and evaluation).
- **(2022/5/26)** BEVFusion is released on [arXiv](https://arxiv.org/abs/2205.13542).
- **(2022/5/2)** BEVFusion ranks first on [nuScenes](https://nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any) among all solutions that do not use test-time augmentation and model ensemble.
## Abstract
Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features. However, the camera-to-LiDAR projection throws away the semantic density of camera features, hindering the effectiveness of such methods, especially for semantic-oriented tasks (such as 3D scene segmentation). In this paper, we break this deeply-rooted convention with BEVFusion, an efficient and generic multi-task multi-sensor fusion framework. It unifies multi-modal features in the shared bird's-eye view (BEV) representation space, which nicely preserves both geometric and semantic information. To achieve this, we diagnose and lift key efficiency bottlenecks in the view transformation with optimized BEV pooling, reducing latency by more than **40x**. BEVFusion is fundamentally task-agnostic and seamlessly supports different 3D perception tasks with almost no architectural changes. It establishes the new state of the art on the nuScenes benchmark, achieving **1.3%** higher mAP and NDS on 3D object detection and **13.6%** higher mIoU on BEV map segmentation, with **1.9x** lower computation cost.
## Results
### 3D Object Detection (on nuScenes test)
| Model | Modality | mAP | NDS |
| :-------: | :------: | :--: | :--: |
| BEVFusion-e | C+L | 74.99 | 76.09 |
| BEVFusion | C+L | 70.23 | 72.88 |
### 3D Object Detection (on nuScenes validation)
| Model | Modality | mAP | NDS | Checkpoint |
| :------------------: | :------: | :--: | :--: | :---------: |
| [BEVFusion](configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml) | C+L | 68.52 | 71.38 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/bevfusion-det.pth) |
| [Camera-Only Baseline](configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/default.yaml) | C | 35.56 | 41.21 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/camera-only-det.pth) |
| [LiDAR-Only Baseline](configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml) | L | 64.68 | 69.28 | [Link](https://bevfusion.mit.edu/files/pretrained/lidar-only-det.pth) |
*Note*: The camera-only object detection baseline is a variant of BEVDet-Tiny with a much heavier view transformer and other differences in hyperparameters. Thanks to our [efficient BEV pooling](mmdet3d/ops/bev_pool) operator, this model runs fast and has higher mAP than BEVDet-Tiny under the same input resolution. Please refer to [BEVDet repo](https://github.com/HuangJunjie2017/BEVDet) for the original BEVDet-Tiny implementation. The LiDAR-only baseline is TransFusion-L.
### BEV Map Segmentation (on nuScenes validation)
| Model | Modality | mIoU | Checkpoint |
| :------------------: | :------: | :--: | :---------: |
| [BEVFusion](configs/nuscenes/seg/fusion-bev256d2-lss.yaml) | C+L | 62.95 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/bevfusion-seg.pth) |
| [Camera-Only Baseline](configs/nuscenes/seg/camera-bev256d2.yaml) | C | 57.09 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/camera-only-seg.pth) |
| [LiDAR-Only Baseline](configs/nuscenes/seg/lidar-centerpoint-bev128.yaml) | L | 48.56 | [Link](https://bevfusion.mit.edu/files/pretrained/lidar-only-seg.pth) |
## Usage
### Prerequisites
The code is built with the following libraries (an example installation sketch follows the list):
- Python >= 3.8, \<3.9
- OpenMPI = 4.0.4 and mpi4py = 3.0.3 (Needed for torchpack)
- Pillow = 8.4.0 (see [here](https://github.com/mit-han-lab/bevfusion/issues/63))
- [PyTorch](https://github.com/pytorch/pytorch) >= 1.9, \<= 1.10.2
- [tqdm](https://github.com/tqdm/tqdm)
- [torchpack](https://github.com/mit-han-lab/torchpack)
- [mmcv](https://github.com/open-mmlab/mmcv) = 1.4.0
- [mmdetection](http://github.com/open-mmlab/mmdetection) = 2.20.0
- [nuscenes-dev-kit](https://github.com/nutonomy/nuscenes-devkit)
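For reference, the sketch below shows one possible way to install the pinned dependencies with pip. This is an assumption rather than an official recipe: PyTorch should be installed first following its own instructions, OpenMPI must already be present for `mpi4py`, and the mmcv find-links URL has to match your local CUDA and PyTorch versions (cu113/torch1.10.0 is only an example).
```bash
# Assumes PyTorch (>= 1.9, <= 1.10.2) and OpenMPI 4.0.4 are already installed.
pip install Pillow==8.4.0 tqdm torchpack mpi4py==3.0.3
# Pick the find-links URL that matches your CUDA/PyTorch combination.
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10.0/index.html
pip install mmdet==2.20.0 nuscenes-devkit
```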
After installing these dependencies, please run this command to install the codebase:
```bash
python setup.py develop
```
We also provide a [Dockerfile](docker/Dockerfile) to ease environment setup. To get started with docker, please make sure that `nvidia-docker` is installed on your machine. After that, please execute the following command to build the docker image:
```bash
cd docker && docker build . -t bevfusion
```
We can then run the docker with the following command:
```bash
nvidia-docker run -it -v `pwd`/../data:/dataset --shm-size 16g bevfusion /bin/bash
```
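If `nvidia-docker` is unavailable, docker >= 19.03 with the NVIDIA container toolkit exposes GPUs through the `--gpus` flag; the following sketch should be equivalent:
```bash
docker run --gpus all -it -v `pwd`/../data:/dataset --shm-size 16g bevfusion /bin/bash
```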
We recommend users run data preparation (instructions are available in the next section) outside of docker if possible. Note that the dataset directory should be an absolute path. Within the docker, please run the following command to clone our repo and install custom CUDA extensions:
```bash
cd home && git clone https://github.com/mit-han-lab/bevfusion && cd bevfusion
python setup.py develop
```
You can then create a symbolic link `data` to the `/dataset` directory in the docker.
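For example (a minimal sketch, assuming the dataset is mounted at `/dataset` as shown above and you are in the repository root):
```bash
ln -s /dataset data
```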
### Data Preparation
#### nuScenes
Please follow the instructions from [here](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/en/datasets/nuscenes_det.md) to download and preprocess the nuScenes dataset. Please remember to download both the detection dataset and the map extension (for BEV map segmentation). After data preparation, you will be able to see the following directory structure (as indicated in mmdetection3d):
```
mmdetection3d
├── mmdet3d
├── tools
├── configs
├── data
│ ├── nuscenes
│ │ ├── maps
│ │ ├── samples
│ │ ├── sweeps
│ │ ├── v1.0-test
│   │   ├── v1.0-trainval
│ │ ├── nuscenes_database
│ │ ├── nuscenes_infos_train.pkl
│ │ ├── nuscenes_infos_val.pkl
│ │ ├── nuscenes_infos_test.pkl
│ │ ├── nuscenes_dbinfos_train.pkl
```
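The info files in the tree above are produced by the mmdetection3d data converter. The usual command looks like the sketch below; the exact flags follow upstream mmdetection3d conventions and may differ slightly in this fork:
```bash
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
```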
### Evaluation
We also provide instructions for evaluating our pretrained models. Please download the checkpoints using the following script:
```bash
./tools/download_pretrained.sh
```
Then, you will be able to run:
```bash
torchpack dist-run -np 8 python tools/test.py [config file path] pretrained/[checkpoint name].pth --eval [evaluation type]
```
For example, if you want to evaluate the detection variant of BEVFusion, you can try:
```bash
torchpack dist-run -np 8 python tools/test.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml pretrained/bevfusion-det.pth --eval bbox
```
For the segmentation variant of BEVFusion, this command will be helpful:
```bash
torchpack dist-run -np 8 python tools/test.py configs/nuscenes/seg/fusion-bev256d2-lss.yaml pretrained/bevfusion-seg.pth --eval map
```
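If fewer than 8 GPUs are available, the number of worker processes can presumably be reduced via `-np`. For example, a single-GPU evaluation sketch (per-process GPU memory requirements are unchanged):
```bash
torchpack dist-run -np 1 python tools/test.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml pretrained/bevfusion-det.pth --eval bbox
```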
### Training
We provide instructions to reproduce our results on nuScenes.
For example, if you want to train the camera-only variant for object detection, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/default.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth
```
For the camera-only BEV segmentation model, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/camera-bev256d2.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth
```
For the LiDAR-only detector, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml
```
For the LiDAR-only BEV segmentation model, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/lidar-centerpoint-bev128.yaml
```
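For the camera+LiDAR fusion detector, a command along the following lines should work. This is a sketch: initializing from the LiDAR-only checkpoint via `--load_from` is an assumption based on how the other pretrained weights are wired in, so please double-check the flag against `tools/train.py`:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth --load_from pretrained/lidar-only-det.pth
```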
## FAQs
Q: Can we directly use the info files prepared by mmdetection3d?
A: We recommend re-generating the info files using this codebase since we forked mmdetection3d before their [coordinate system refactoring](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/en/changelog.md).
## Acknowledgements
BEVFusion is based on [mmdetection3d](https://github.com/open-mmlab/mmdetection3d). It is also greatly inspired by the following outstanding contributions to the open-source community: [LSS](https://github.com/nv-tlabs/lift-splat-shoot), [BEVDet](https://github.com/HuangJunjie2017/BEVDet), [TransFusion](https://github.com/XuyangBai/TransFusion), [CenterPoint](https://github.com/tianweiy/CenterPoint), [MVP](https://github.com/tianweiy/MVP), [FUTR3D](https://arxiv.org/abs/2203.10642), [CVT](https://github.com/bradyz/cross_view_transformers) and [DETR3D](https://github.com/WangYueFt/detr3d).
Please also check out related papers in the camera-only 3D perception community such as [BEVDet4D](https://arxiv.org/abs/2203.17054), [BEVerse](https://arxiv.org/abs/2205.09743), [BEVFormer](https://arxiv.org/abs/2203.17270), [M2BEV](https://arxiv.org/abs/2204.05088), [PETR](https://arxiv.org/abs/2203.05625) and [PETRv2](https://arxiv.org/abs/2206.01256), which might be interesting future extensions to BEVFusion.
## Citation
If BEVFusion is useful or relevant to your research, please kindly recognize our contributions by citing our paper:
```bibtex
@article{liu2022bevfusion,
title={BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation},
author={Liu, Zhijian and Tang, Haotian and Amini, Alexander and Yang, Xingyu and Mao, Huizi and Rus, Daniela and Han, Song},
journal={arXiv},
year={2022}
}
```
| kentang-mit | f39a4a0752fabc1eb81011b0433af69a6e9ff58c | 2bf96604feab90edd18591a43bee1b9c41c26002 | Use `docker build -t` to assign a tag to the docker image. | zhijian-liu | 5 |
mit-han-lab/bevfusion | 145 | Add docker support | This is related to [PR](https://github.com/mit-han-lab/bevfusion/pull/144) from @bentherien.
We provide an alternative that lets users build the docker image themselves. The required libraries and their versions are clearly listed in `docker/Dockerfile`. Hopefully this will also be helpful for people who are trying to set up the environment on their host machines.
I also plan to merge [PR](https://github.com/mit-han-lab/bevfusion/pull/144) after this one. | null | 2022-09-24 01:32:27+00:00 | 2022-09-26 22:51:16+00:00 | README.md
| kentang-mit | f39a4a0752fabc1eb81011b0433af69a6e9ff58c | 2bf96604feab90edd18591a43bee1b9c41c26002 | ```suggestion
nvidia-docker run -it -v [dataset directory]:/dataset --shm-size 16g /bin/bash
``` | zhijian-liu | 6 |
| kentang-mit | f39a4a0752fabc1eb81011b0433af69a6e9ff58c | 2bf96604feab90edd18591a43bee1b9c41c26002 | Fixed. | kentang-mit | 7 |
mit-han-lab/bevfusion | 145 | Add docker support | This is related to [PR](https://github.com/mit-han-lab/bevfusion/pull/144) from @bentherien.
We provide an alternative to let the users build the docker image by themselves. The required libraries and their versions are clearly listed in `docker/Dockerfile`. Hopefully this will also be helpful for people who are trying to set up the environment in the host machines by themselves.
I am also plan to merge [PR](https://github.com/mit-han-lab/bevfusion/pull/144) after this one. | null | 2022-09-24 01:32:27+00:00 | 2022-09-26 22:51:16+00:00 | README.md | # BEVFusion
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/bevfusion-multi-task-multi-sensor-fusion-with/3d-object-detection-on-nuscenes)](https://paperswithcode.com/sota/3d-object-detection-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/bevfusion-multi-task-multi-sensor-fusion-with/3d-multi-object-tracking-on-nuscenes)](https://paperswithcode.com/sota/3d-multi-object-tracking-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
### [website](http://bevfusion.mit.edu/) | [paper](https://arxiv.org/abs/2205.13542) | [video](https://www.youtube.com/watch?v=uCAka90si9E)
![demo](assets/demo.gif)
## News
**If you are interested in getting updates, please sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSfkmfsX45HstL5rUQlS7xJthhS3Z_Pm2NOVstlXUqgaK4DEfQ/viewform) to get notified!**
- **(2022/8/16)** BEVFusion ranks first on [Waymo](https://waymo.com/open/challenges/2020/3d-detection/) 3D object detection leaderboard among all solutions.
- **(2022/6/3)** BEVFusion ranks first on [nuScenes](https://nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any) among all solutions.
- **(2022/6/3)** We released the first version of BEVFusion (with pre-trained checkpoints and evaluation).
- **(2022/5/26)** BEVFusion is released on [arXiv](https://arxiv.org/abs/2205.13542).
- **(2022/5/2)** BEVFusion ranks first on [nuScenes](https://nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any) among all solutions that do not use test-time augmentation and model ensemble.
## Abstract
Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features. However, the camera-to-LiDAR projection throws away the semantic density of camera features, hindering the effectiveness of such methods, especially for semantic-oriented tasks (such as 3D scene segmentation). In this paper, we break this deeply-rooted convention with BEVFusion, an efficient and generic multi-task multi-sensor fusion framework. It unifies multi-modal features in the shared bird's-eye view (BEV) representation space, which nicely preserves both geometric and semantic information. To achieve this, we diagnose and lift key efficiency bottlenecks in the view transformation with optimized BEV pooling, reducing latency by more than **40x**. BEVFusion is fundamentally task-agnostic and seamlessly supports different 3D perception tasks with almost no architectural changes. It establishes the new state of the art on the nuScenes benchmark, achieving **1.3%** higher mAP and NDS on 3D object detection and **13.6%** higher mIoU on BEV map segmentation, with **1.9x** lower computation cost.
## Results
### 3D Object Detection (on nuScenes test)
| Model | Modality | mAP | NDS |
| :-------: | :------: | :--: | :--: |
| BEVFusion-e | C+L | 74.99 | 76.09 |
| BEVFusion | C+L | 70.23 | 72.88 |
### 3D Object Detection (on nuScenes validation)
| Model | Modality | mAP | NDS | Checkpoint |
| :------------------: | :------: | :--: | :--: | :---------: |
| [BEVFusion](configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml) | C+L | 68.52 | 71.38 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/bevfusion-det.pth) |
| [Camera-Only Baseline](configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/default.yaml) | C | 35.56 | 41.21 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/camera-only-det.pth) |
| [LiDAR-Only Baseline](configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml) | L | 64.68 | 69.28 | [Link](https://bevfusion.mit.edu/files/pretrained/lidar-only-det.pth) |
*Note*: The camera-only object detection baseline is a variant of BEVDet-Tiny with a much heavier view transformer and other differences in hyperparameters. Thanks to our [efficient BEV pooling](mmdet3d/ops/bev_pool) operator, this model runs fast and has higher mAP than BEVDet-Tiny under the same input resolution. Please refer to [BEVDet repo](https://github.com/HuangJunjie2017/BEVDet) for the original BEVDet-Tiny implementation. The LiDAR-only baseline is TransFusion-L.
### BEV Map Segmentation (on nuScenes validation)
| Model | Modality | mIoU | Checkpoint |
| :------------------: | :------: | :--: | :---------: |
| [BEVFusion](configs/nuscenes/seg/fusion-bev256d2-lss.yaml) | C+L | 62.95 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/bevfusion-seg.pth) |
| [Camera-Only Baseline](configs/nuscenes/seg/camera-bev256d2.yaml) | C | 57.09 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/camera-only-seg.pth) |
| [LiDAR-Only Baseline](configs/nuscenes/seg/lidar-centerpoint-bev128.yaml) | L | 48.56 | [Link](https://bevfusion.mit.edu/files/pretrained/lidar-only-seg.pth) |
## Usage
### Prerequisites
The code is built with the following libraries:
- Python >= 3.8, \<3.9
- OpenMPI = 4.0.4 and mpi4py = 3.0.3 (Needed for torchpack)
- Pillow = 8.4.0 (see [here](https://github.com/mit-han-lab/bevfusion/issues/63))
- [PyTorch](https://github.com/pytorch/pytorch) >= 1.9, \<= 1.10.2
- [tqdm](https://github.com/tqdm/tqdm)
- [torchpack](https://github.com/mit-han-lab/torchpack)
- [mmcv](https://github.com/open-mmlab/mmcv) = 1.4.0
- [mmdetection](http://github.com/open-mmlab/mmdetection) = 2.20.0
- [nuscenes-dev-kit](https://github.com/nutonomy/nuscenes-devkit)
After installing these dependencies, please run this command to install the codebase:
```bash
python setup.py develop
```
### Data Preparation
#### nuScenes
Please follow the instructions from [here](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/en/datasets/nuscenes_det.md) to download and preprocess the nuScenes dataset. Please remember to download both the detection dataset and the map extension (for BEV map segmentation). After data preparation, you will be able to see the following directory structure (as indicated in mmdetection3d):
```
mmdetection3d
├── mmdet3d
├── tools
├── configs
├── data
│ ├── nuscenes
│ │ ├── maps
│ │ ├── samples
│ │ ├── sweeps
│ │ ├── v1.0-test
| | ├── v1.0-trainval
│ │ ├── nuscenes_database
│ │ ├── nuscenes_infos_train.pkl
│ │ ├── nuscenes_infos_val.pkl
│ │ ├── nuscenes_infos_test.pkl
│ │ ├── nuscenes_dbinfos_train.pkl
```
### Evaluation
We also provide instructions for evaluating our pretrained models. Please download the checkpoints using the following script:
```bash
./tools/download_pretrained.sh
```
Then, you will be able to run:
```bash
torchpack dist-run -np 8 python tools/test.py [config file path] pretrained/[checkpoint name].pth --eval [evaluation type]
```
For example, if you want to evaluate the detection variant of BEVFusion, you can try:
```bash
torchpack dist-run -np 8 python tools/test.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml pretrained/bevfusion-det.pth --eval bbox
```
For the segmentation variant of BEVFusion, this command will be helpful:
```bash
torchpack dist-run -np 8 python tools/test.py configs/nuscenes/seg/fusion-bev256d2-lss.yaml pretrained/bevfusion-seg.pth --eval map
```
### Training
We provide instructions to reproduce our results on nuScenes.
For example, if you want to train the camera-only variant for object detection, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/default.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth
```
For the camera-only BEV segmentation model, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/camera-bev256d2.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth
```
For the LiDAR-only detector, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml
```
For the LiDAR-only BEV segmentation model, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/lidar-centerpoint-bev128.yaml
```
## FAQs
Q: Can we directly use the info files prepared by mmdetection3d?
A: We recommend re-generating the info files using this codebase since we forked mmdetection3d before their [coordinate system refactoring](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/en/changelog.md).
## Acknowledgements
BEVFusion is based on [mmdetection3d](https://github.com/open-mmlab/mmdetection3d). It is also greatly inspired by the following outstanding contributions to the open-source community: [LSS](https://github.com/nv-tlabs/lift-splat-shoot), [BEVDet](https://github.com/HuangJunjie2017/BEVDet), [TransFusion](https://github.com/XuyangBai/TransFusion), [CenterPoint](https://github.com/tianweiy/CenterPoint), [MVP](https://github.com/tianweiy/MVP), [FUTR3D](https://arxiv.org/abs/2203.10642), [CVT](https://github.com/bradyz/cross_view_transformers) and [DETR3D](https://github.com/WangYueFt/detr3d).
Please also check out related papers in the camera-only 3D perception community such as [BEVDet4D](https://arxiv.org/abs/2203.17054), [BEVerse](https://arxiv.org/abs/2205.09743), [BEVFormer](https://arxiv.org/abs/2203.17270), [M2BEV](https://arxiv.org/abs/2204.05088), [PETR](https://arxiv.org/abs/2203.05625) and [PETRv2](https://arxiv.org/abs/2206.01256), which might be interesting future extensions to BEVFusion.
## Citation
If BEVFusion is useful or relevant to your research, please kindly recognize our contributions by citing our paper:
```bibtex
@article{liu2022bevfusion,
title={BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation},
author={Liu, Zhijian and Tang, Haotian and Amini, Alexander and Yang, Xingyu and Mao, Huizi and Rus, Daniela and Han, Song},
journal={arXiv},
year={2022}
}
```
| # BEVFusion
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/bevfusion-multi-task-multi-sensor-fusion-with/3d-object-detection-on-nuscenes)](https://paperswithcode.com/sota/3d-object-detection-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/bevfusion-multi-task-multi-sensor-fusion-with/3d-multi-object-tracking-on-nuscenes)](https://paperswithcode.com/sota/3d-multi-object-tracking-on-nuscenes?p=bevfusion-multi-task-multi-sensor-fusion-with)
### [website](http://bevfusion.mit.edu/) | [paper](https://arxiv.org/abs/2205.13542) | [video](https://www.youtube.com/watch?v=uCAka90si9E)
![demo](assets/demo.gif)
## News
**If you are interested in getting updates, please sign up [here](https://docs.google.com/forms/d/e/1FAIpQLSfkmfsX45HstL5rUQlS7xJthhS3Z_Pm2NOVstlXUqgaK4DEfQ/viewform) to get notified!**
- **(2022/8/16)** BEVFusion ranks first on [Waymo](https://waymo.com/open/challenges/2020/3d-detection/) 3D object detection leaderboard among all solutions.
- **(2022/6/3)** BEVFusion ranks first on [nuScenes](https://nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any) among all solutions.
- **(2022/6/3)** We released the first version of BEVFusion (with pre-trained checkpoints and evaluation).
- **(2022/5/26)** BEVFusion is released on [arXiv](https://arxiv.org/abs/2205.13542).
- **(2022/5/2)** BEVFusion ranks first on [nuScenes](https://nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any) among all solutions that do not use test-time augmentation and model ensemble.
## Abstract
Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features. However, the camera-to-LiDAR projection throws away the semantic density of camera features, hindering the effectiveness of such methods, especially for semantic-oriented tasks (such as 3D scene segmentation). In this paper, we break this deeply-rooted convention with BEVFusion, an efficient and generic multi-task multi-sensor fusion framework. It unifies multi-modal features in the shared bird's-eye view (BEV) representation space, which nicely preserves both geometric and semantic information. To achieve this, we diagnose and lift key efficiency bottlenecks in the view transformation with optimized BEV pooling, reducing latency by more than **40x**. BEVFusion is fundamentally task-agnostic and seamlessly supports different 3D perception tasks with almost no architectural changes. It establishes the new state of the art on the nuScenes benchmark, achieving **1.3%** higher mAP and NDS on 3D object detection and **13.6%** higher mIoU on BEV map segmentation, with **1.9x** lower computation cost.
## Results
### 3D Object Detection (on nuScenes test)
| Model | Modality | mAP | NDS |
| :-------: | :------: | :--: | :--: |
| BEVFusion-e | C+L | 74.99 | 76.09 |
| BEVFusion | C+L | 70.23 | 72.88 |
### 3D Object Detection (on nuScenes validation)
| Model | Modality | mAP | NDS | Checkpoint |
| :------------------: | :------: | :--: | :--: | :---------: |
| [BEVFusion](configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml) | C+L | 68.52 | 71.38 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/bevfusion-det.pth) |
| [Camera-Only Baseline](configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/default.yaml) | C | 35.56 | 41.21 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/camera-only-det.pth) |
| [LiDAR-Only Baseline](configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml) | L | 64.68 | 69.28 | [Link](https://bevfusion.mit.edu/files/pretrained/lidar-only-det.pth) |
*Note*: The camera-only object detection baseline is a variant of BEVDet-Tiny with a much heavier view transformer and other differences in hyperparameters. Thanks to our [efficient BEV pooling](mmdet3d/ops/bev_pool) operator, this model runs fast and has higher mAP than BEVDet-Tiny under the same input resolution. Please refer to [BEVDet repo](https://github.com/HuangJunjie2017/BEVDet) for the original BEVDet-Tiny implementation. The LiDAR-only baseline is TransFusion-L.
### BEV Map Segmentation (on nuScenes validation)
| Model | Modality | mIoU | Checkpoint |
| :------------------: | :------: | :--: | :---------: |
| [BEVFusion](configs/nuscenes/seg/fusion-bev256d2-lss.yaml) | C+L | 62.95 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/bevfusion-seg.pth) |
| [Camera-Only Baseline](configs/nuscenes/seg/camera-bev256d2.yaml) | C | 57.09 | [Link](https://bevfusion.mit.edu/files/pretrained_updated/camera-only-seg.pth) |
| [LiDAR-Only Baseline](configs/nuscenes/seg/lidar-centerpoint-bev128.yaml) | L | 48.56 | [Link](https://bevfusion.mit.edu/files/pretrained/lidar-only-seg.pth) |
## Usage
### Prerequisites
The code is built with the following libraries:
- Python >= 3.8, \<3.9
- OpenMPI = 4.0.4 and mpi4py = 3.0.3 (Needed for torchpack)
- Pillow = 8.4.0 (see [here](https://github.com/mit-han-lab/bevfusion/issues/63))
- [PyTorch](https://github.com/pytorch/pytorch) >= 1.9, \<= 1.10.2
- [tqdm](https://github.com/tqdm/tqdm)
- [torchpack](https://github.com/mit-han-lab/torchpack)
- [mmcv](https://github.com/open-mmlab/mmcv) = 1.4.0
- [mmdetection](http://github.com/open-mmlab/mmdetection) = 2.20.0
- [nuscenes-dev-kit](https://github.com/nutonomy/nuscenes-devkit)
After installing these dependencies, please run this command to install the codebase:
```bash
python setup.py develop
```
We also provide a [Dockerfile](docker/Dockerfile) to ease environment setup. To get started with docker, please make sure that `nvidia-docker` is installed on your machine. After that, please execute the following command to build the docker image:
```bash
cd docker && docker build . -t bevfusion
```
We can then run the docker with the following command:
```bash
nvidia-docker run -it -v `pwd`/../data:/dataset --shm-size 16g bevfusion /bin/bash
```
We recommend that users run data preparation (instructions are available in the next section) outside the Docker container if possible. Note that the dataset directory should be an absolute path. Within the container, please run the following command to clone our repo and install custom CUDA extensions:
```bash
cd home && git clone https://github.com/mit-han-lab/bevfusion && cd bevfusion
python setup.py develop
```
You can then create a symbolic link `data` to the `/dataset` directory in the docker.
### Data Preparation
#### nuScenes
Please follow the instructions from [here](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/en/datasets/nuscenes_det.md) to download and preprocess the nuScenes dataset. Please remember to download both the detection dataset and the map extension (for BEV map segmentation). After data preparation, you will be able to see the following directory structure (as indicated in mmdetection3d):
```
mmdetection3d
├── mmdet3d
├── tools
├── configs
├── data
│ ├── nuscenes
│ │ ├── maps
│ │ ├── samples
│ │ ├── sweeps
│ │ ├── v1.0-test
| | ├── v1.0-trainval
│ │ ├── nuscenes_database
│ │ ├── nuscenes_infos_train.pkl
│ │ ├── nuscenes_infos_val.pkl
│ │ ├── nuscenes_infos_test.pkl
│ │ ├── nuscenes_dbinfos_train.pkl
```
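If you want to sanity-check the generated info files, a minimal sketch like the following can help (the file path follows the layout above; the exact pkl schema is an assumption and varies across mmdetection3d versions):
```python
import pickle

# Load one of the generated info files (path assumes the layout above).
with open("data/nuscenes/nuscenes_infos_train.pkl", "rb") as f:
    data = pickle.load(f)

# In pre-refactoring mmdetection3d this is typically a dict holding
# per-sample annotations under "infos" plus a "metadata" entry.
if isinstance(data, dict):
    print(sorted(data.keys()), len(data.get("infos", [])))
else:
    print(type(data), len(data))
```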
### Evaluation
We also provide instructions for evaluating our pretrained models. Please download the checkpoints using the following script:
```bash
./tools/download_pretrained.sh
```
Then, you will be able to run:
```bash
torchpack dist-run -np 8 python tools/test.py [config file path] pretrained/[checkpoint name].pth --eval [evaluation type]
```
For example, if you want to evaluate the detection variant of BEVFusion, you can try:
```bash
torchpack dist-run -np 8 python tools/test.py configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml pretrained/bevfusion-det.pth --eval bbox
```
For the segmentation variant of BEVFusion, this command will be helpful:
```bash
torchpack dist-run -np 8 python tools/test.py configs/nuscenes/seg/fusion-bev256d2-lss.yaml pretrained/bevfusion-seg.pth --eval map
```
### Training
We provide instructions to reproduce our results on nuScenes.
For example, if you want to train the camera-only variant for object detection, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/centerhead/lssfpn/camera/256x704/swint/default.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth
```
For the camera-only BEV segmentation model, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/camera-bev256d2.yaml --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth
```
For the LiDAR-only detector, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/det/transfusion/secfpn/lidar/voxelnet_0p075.yaml
```
For the LiDAR-only BEV segmentation model, please run:
```bash
torchpack dist-run -np 8 python tools/train.py configs/nuscenes/seg/lidar-centerpoint-bev128.yaml
```
## FAQs
Q: Can we directly use the info files prepared by mmdetection3d?
A: We recommend re-generating the info files using this codebase since we forked mmdetection3d before their [coordinate system refactoring](https://github.com/open-mmlab/mmdetection3d/blob/master/docs/en/changelog.md).
## Acknowledgements
BEVFusion is based on [mmdetection3d](https://github.com/open-mmlab/mmdetection3d). It is also greatly inspired by the following outstanding contributions to the open-source community: [LSS](https://github.com/nv-tlabs/lift-splat-shoot), [BEVDet](https://github.com/HuangJunjie2017/BEVDet), [TransFusion](https://github.com/XuyangBai/TransFusion), [CenterPoint](https://github.com/tianweiy/CenterPoint), [MVP](https://github.com/tianweiy/MVP), [FUTR3D](https://arxiv.org/abs/2203.10642), [CVT](https://github.com/bradyz/cross_view_transformers) and [DETR3D](https://github.com/WangYueFt/detr3d).
Please also check out related papers in the camera-only 3D perception community such as [BEVDet4D](https://arxiv.org/abs/2203.17054), [BEVerse](https://arxiv.org/abs/2205.09743), [BEVFormer](https://arxiv.org/abs/2203.17270), [M2BEV](https://arxiv.org/abs/2204.05088), [PETR](https://arxiv.org/abs/2203.05625) and [PETRv2](https://arxiv.org/abs/2206.01256), which might be interesting future extensions to BEVFusion.
## Citation
If BEVFusion is useful or relevant to your research, please kindly recognize our contributions by citing our paper:
```bibtex
@article{liu2022bevfusion,
title={BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation},
author={Liu, Zhijian and Tang, Haotian and Amini, Alexander and Yang, Xingyu and Mao, Huizi and Rus, Daniela and Han, Song},
journal={arXiv},
year={2022}
}
```
| kentang-mit | f39a4a0752fabc1eb81011b0433af69a6e9ff58c | 2bf96604feab90edd18591a43bee1b9c41c26002 | Fixed | kentang-mit | 8 |
python-eel/Eel | 577 | Added type stubs | I've added type stubs covering the vast majority of the Python side of the eel API.
I've chosen to use the legacy List, Dict, etc. from the typing package rather than parameterized builtins to maximise support for earlier Python versions (e.g. Python 3.6).
There were some occasions where the type annotation is probably too generous, but where it was either impractical to type precisely (e.g. *_ws*; and in the browser files *get_path* seems to be inconsistent) or not readily apparent what is appropriate. Apart from *get_path()*, which is clear on a module-by-module basis, I've annotated these with typing.Any for now, but of course it might be good to narrow them further going forward.
Tested and seem to work for me with pyright in vscode. Can't imagine any issues due to the relative simplicity of this, but I've not tested the stubs with other typecheckers. | null | 2022-03-21 11:44:46+00:00 | 2023-02-13 14:55:22+00:00 | .github/workflows/test.yml | name: Test Eel
on:
push:
branches: [ master ]
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
python-version: [3.6, 3.7, 3.8, 3.9, "3.10"]
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Setup test execution environment.
run: pip3 install -r requirements-meta.txt
- name: Run tox tests
run: tox -- --durations=0 --timeout=30
| name: Test Eel
on:
push:
branches: [ master ]
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
python-version: [3.6, 3.7, 3.8, 3.9, "3.10"]
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Setup test execution environment.
run: pip3 install -r requirements-meta.txt
- name: Run tox tests
run: tox -- --durations=0 --timeout=30
typecheck:
runs-on: windows-latest
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: "3.10"
- name: Setup test execution environment.
run: pip3 install -r requirements-meta.txt
- name: Run tox tests
run: tox -e typecheck
| thatfloflo | 505176162e0bc339be843e6e9a5d205fa20c0837 | cbd70642de70821b51a2304559818954c6a2c357 | Personal preference but if these could stick to running on ubuntu that'd be grand :) | samuelhwilliams | 0 |
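To illustrate the annotation style described in the PR above, here is a minimal sketch; the function names are invented for illustration and are not Eel's actual API. `typing.List`/`typing.Dict` keep the stubs importable on Python 3.6, where parameterized builtins such as `list[str]` are a syntax error:
```python
from typing import Any, Callable, Dict, List, Optional

# Hypothetical signatures, illustrative only: typing.List/typing.Dict are
# used instead of list[...]/dict[...] so the annotations also parse on 3.6.
def expose_function(func: Callable[..., Any], name: Optional[str] = None) -> None:
    """Register a function under an optional name (illustration only)."""

def list_pages(directory: str) -> List[str]:
    """Return web page paths found under a directory (illustration only)."""
    return []

def default_options() -> Dict[str, Any]:
    """Return a default options mapping (illustration only)."""
    return {"mode": "chrome", "port": 0}
```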
python-eel/Eel | 577 | Added type stubs | I've added type stubs covering the vast majority of the Python side of the eel API.
I've chosen to use the legacy List, Dict, etc. from the typing package rather than parameterized builtins to maximise support for earlier Python versions (e.g. Python 3.6).
There were some occasions where the type annotation is probably too generous, but where it was either impractical to type precisely (e.g. *_ws*; and in the browser files *get_path* seems to be inconsistent) or not readily apparent what is appropriate. Apart from *get_path()*, which is clear on a module-by-module basis, I've annotated these with typing.Any for now, but of course it might be good to narrow them further going forward.
Tested and seem to work for me with pyright in vscode. Can't imagine any issues due to the relative simplicity of this, but I've not tested the stubs with other typecheckers. | null | 2022-03-21 11:44:46+00:00 | 2023-02-13 14:55:22+00:00 | .github/workflows/test.yml | name: Test Eel
on:
push:
branches: [ master ]
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
python-version: [3.6, 3.7, 3.8, 3.9, "3.10"]
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Setup test execution environment.
run: pip3 install -r requirements-meta.txt
- name: Run tox tests
run: tox -- --durations=0 --timeout=30
| name: Test Eel
on:
push:
branches: [ master ]
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
python-version: [3.6, 3.7, 3.8, 3.9, "3.10"]
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Setup test execution environment.
run: pip3 install -r requirements-meta.txt
- name: Run tox tests
run: tox -- --durations=0 --timeout=30
typecheck:
runs-on: windows-latest
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: "3.10"
- name: Setup test execution environment.
run: pip3 install -r requirements-meta.txt
- name: Run tox tests
run: tox -e typecheck
| thatfloflo | 505176162e0bc339be843e6e9a5d205fa20c0837 | cbd70642de70821b51a2304559818954c6a2c357 | The issue I had when testing this on my fork was that it threw up type errors with chrome.py as mypy cannot properly resolve the references for the winreg module. That's why I changed it to windows-latest. Ubuntu is quicker on GitHub Actions, and perhaps preferable for many other reasons, too, but we'd need to find a solution for the code in chrome.py:_find_chrome_win() first (unless we want to just `#type: ignore` that whole function).. | thatfloflo | 1 |
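For reference, one common way to keep `winreg`-dependent code type-checkable from a non-Windows runner is a `sys.platform` guard, which mypy treats as a platform check; alternatively, mypy can be told to check against Windows with `--platform win32`. This is a generic sketch, not necessarily how Eel's `chrome.py:_find_chrome_win()` is written:
```python
import sys
from typing import Optional

def find_chrome_win() -> Optional[str]:
    # mypy narrows on sys.platform, so the winreg import is only analyzed
    # when the checker targets Windows (or is run with --platform win32).
    if sys.platform == "win32":
        import winreg
        key = r"SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\chrome.exe"
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key) as handle:
                return winreg.QueryValue(handle, None)  # default value = path
        except OSError:
            return None
    return None
```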
python-eel/Eel | 577 | Added type stubs | I've added type stubs covering the vast majority of the Python side of the eel API.
I've chosen to use the legacy List, Dict, etc. from the typing package rather than parameterized builtins to maximise support for earlier Python versions (e.g. Python 3.6).
There were some occasions where the type annotation is probably too generous, but where it was either impractical to type precisely (e.g. *_ws*; and in the browser files *get_path* seems to be inconsistent) or not readily apparent what is appropriate. Apart from *get_path()*, which is clear on a module-by-module basis, I've annotated these with typing.Any for now, but of course it might be good to narrow them further going forward.
Tested and seem to work for me with pyright in vscode. Can't imagine any issues due to the relative simplicity of this, but I've not tested the stubs with other typecheckers. | null | 2022-03-21 11:44:46+00:00 | 2023-02-13 14:55:22+00:00 | .github/workflows/test.yml | name: Test Eel
on:
push:
branches: [ master ]
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
python-version: [3.6, 3.7, 3.8, 3.9, "3.10"]
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Setup test execution environment.
run: pip3 install -r requirements-meta.txt
- name: Run tox tests
run: tox -- --durations=0 --timeout=30
| name: Test Eel
on:
push:
branches: [ master ]
pull_request:
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-20.04, windows-latest, macos-latest]
python-version: [3.6, 3.7, 3.8, 3.9, "3.10"]
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Setup test execution environment.
run: pip3 install -r requirements-meta.txt
- name: Run tox tests
run: tox -- --durations=0 --timeout=30
typecheck:
runs-on: windows-latest
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: "3.10"
- name: Setup test execution environment.
run: pip3 install -r requirements-meta.txt
- name: Run tox tests
run: tox -e typecheck
| thatfloflo | 505176162e0bc339be843e6e9a5d205fa20c0837 | cbd70642de70821b51a2304559818954c6a2c357 | Ah, hmm, ok 🤔 I will have a bit of a think, but let's keep this for now then. | samuelhwilliams | 2 |
LibreTranslate/LibreTranslate | 503 | Add option to update models rather than reinstall | This will change the behavior of `--update-models` to only update models if there's a newer version available (or install any missing models)
Adds the option `--install-models` to force (re)installation of all models
Also adds `--update` to `scripts/install_models.py` to only update models if there's a newer version available (or install any missing models)
Will solve #502 | null | 2023-09-29 20:47:34+00:00 | 2023-09-30 01:34:44+00:00 | libretranslate/init.py |
from argostranslate import package, translate
import libretranslate.language
def boot(load_only=None, update_models=False):
try:
check_and_install_models(force=update_models, load_only_lang_codes=load_only)
except Exception as e:
print("Cannot update models (normal if you're offline): %s" % str(e))
def check_and_install_models(force=False, load_only_lang_codes=None):
if len(package.get_installed_packages()) < 2 or force:
# Update package definitions from remote
print("Updating language models")
package.update_package_index()
# Load available packages from local package index
available_packages = package.get_available_packages()
print("Found %s models" % len(available_packages))
if load_only_lang_codes is not None:
# load_only_lang_codes: List[str] (codes)
# Ensure the user does not use any unavailable language code.
unavailable_lang_codes = set(load_only_lang_codes)
for pack in available_packages:
unavailable_lang_codes -= {pack.from_code, pack.to_code}
if unavailable_lang_codes:
raise ValueError(
"Unavailable language codes: %s."
% ",".join(sorted(unavailable_lang_codes))
)
# Keep only the packages that have both from_code and to_code in our list.
available_packages = [
pack
for pack in available_packages
if pack.from_code in load_only_lang_codes and pack.to_code in load_only_lang_codes
]
if not available_packages:
raise ValueError("no available package")
print("Keep %s models" % len(available_packages))
# Download and install all available packages
for available_package in available_packages:
print(
f"Downloading {available_package} ({available_package.package_version}) ..."
)
available_package.install()
# reload installed languages
libretranslate.language.languages = translate.get_installed_languages()
print(
f"Loaded support for {len(translate.get_installed_languages())} languages ({len(available_packages)} models total)!"
)
|
from argostranslate import package, translate
from packaging import version
import libretranslate.language
def boot(load_only=None, update_models=False, install_models=False):
try:
if update_models:
check_and_install_models(load_only_lang_codes=load_only, update=update_models)
else:
check_and_install_models(force=install_models, load_only_lang_codes=load_only)
except Exception as e:
print("Cannot update models (normal if you're offline): %s" % str(e))
def check_and_install_models(force=False, load_only_lang_codes=None, update=False):
if len(package.get_installed_packages()) < 2 or force or update:
# Update package definitions from remote
print("Updating language models")
package.update_package_index()
# Load available packages from local package index
available_packages = package.get_available_packages()
installed_packages = package.get_installed_packages()
print("Found %s models" % len(available_packages))
if load_only_lang_codes is not None:
# load_only_lang_codes: List[str] (codes)
# Ensure the user does not use any unavailable language code.
unavailable_lang_codes = set(load_only_lang_codes)
for pack in available_packages:
unavailable_lang_codes -= {pack.from_code, pack.to_code}
if unavailable_lang_codes:
raise ValueError(
"Unavailable language codes: %s."
% ",".join(sorted(unavailable_lang_codes))
)
# Keep only the packages that have both from_code and to_code in our list.
available_packages = [
pack
for pack in available_packages
if pack.from_code in load_only_lang_codes and pack.to_code in load_only_lang_codes
]
if not available_packages:
raise ValueError("no available package")
print("Keep %s models" % len(available_packages))
# Download and install all available packages
for available_package in available_packages:
update = False
if not force:
for pack in installed_packages:
if (
pack.from_code == available_package.from_code
and pack.to_code == available_package.to_code
):
update = True
if version.parse(pack.package_version) < version.parse(available_package.package_version):
print(
f"Updating {available_package} ({pack.package_version}->{available_package.package_version}) ..."
)
pack.update()
if not update:
print(
f"Downloading {available_package} ({available_package.package_version}) ..."
)
available_package.install()
# reload installed languages
libretranslate.language.languages = translate.get_installed_languages()
print(
f"Loaded support for {len(translate.get_installed_languages())} languages ({len(available_packages)} models total)!"
)
| rrgeorge | ce25eec7741bc61ccbd17580497d61de238dc542 | 33f12a8ebbc309aaed834d081836352064f4b8a8 | Semantic versioning cannot be compared by string comparison:
e.g. "12.0" < "2.9" --> True (should be False) | pierotofy | 0 |
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should print the detected language as well as the confidence when using "Auto Detect" as the source language.
Feel free to see this as a rough starting-point - please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/static/js/app.js | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.querySelectorAll('.sidenav');
var sidenavInstances = M.Sidenav.init(sidenavElems);
var app = new Vue({
el: '#app',
delimiters: ['[[',']]'],
data: {
BaseUrl: BaseUrl,
loading: true,
error: "",
langs: [],
settings: {},
sourceLang: "",
targetLang: "",
loadingTranslation: false,
inputText: "",
inputTextareaHeight: 250,
savedTanslatedText: "",
translatedText: "",
output: "",
charactersLimit: -1,
copyTextLabel: "Copy text",
suggestions: false,
isSuggesting: false,
supportedFilesFormat : [],
translationType: "text",
inputFile: false,
loadingFileTranslation: false,
translatedFileUrl: false,
filesTranslation: true,
frontendTimeout: 500
},
mounted: function() {
const self = this;
const settingsRequest = new XMLHttpRequest();
settingsRequest.open("GET", BaseUrl + "/frontend/settings", true);
const langsRequest = new XMLHttpRequest();
langsRequest.open("GET", BaseUrl + "/languages", true);
settingsRequest.onload = function() {
if (this.status >= 200 && this.status < 400) {
self.settings = JSON.parse(this.response);
self.sourceLang = self.settings.language.source.code;
self.targetLang = self.settings.language.target.code;
self.charactersLimit = self.settings.charLimit;
self.suggestions = self.settings.suggestions;
self.supportedFilesFormat = self.settings.supportedFilesFormat;
self.filesTranslation = self.settings.filesTranslation;
self.frontendTimeout = self.settings.frontendTimeout;
if (langsRequest.response) {
handleLangsResponse(self, langsRequest);
} else {
langsRequest.onload = function() {
handleLangsResponse(self, this);
}
}
} else {
self.error = "Cannot load /frontend/settings";
self.loading = false;
}
};
settingsRequest.onerror = function() {
self.error = "Error while calling /frontend/settings";
self.loading = false;
};
langsRequest.onerror = function() {
self.error = "Error while calling /languages";
self.loading = false;
};
settingsRequest.send();
langsRequest.send();
},
updated: function(){
M.FormSelect.init(this.$refs.sourceLangDropdown);
M.FormSelect.init(this.$refs.targetLangDropdown);
if (this.$refs.inputTextarea){
if (this.inputText === ""){
this.$refs.inputTextarea.style.height = this.inputTextareaHeight + "px";
this.$refs.translatedTextarea.style.height = this.inputTextareaHeight + "px";
} else{
this.$refs.inputTextarea.style.height = this.$refs.translatedTextarea.style.height = "1px";
this.$refs.inputTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.inputTextarea.scrollHeight + 32) + "px";
this.$refs.translatedTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.translatedTextarea.scrollHeight + 32) + "px";
}
}
if (this.charactersLimit !== -1 && this.inputText.length >= this.charactersLimit){
this.inputText = this.inputText.substring(0, this.charactersLimit);
}
// Update "selected" attribute (to overcome a vue.js limitation)
// but properly display checkmarks on supported browsers.
// Also change the <select> width value depending on the <option> length
if (this.$refs.sourceLangDropdown) {
updateSelectedAttribute(this.$refs.sourceLangDropdown, this.sourceLang);
}
if (this.$refs.targetLangDropdown) {
updateSelectedAttribute(this.$refs.targetLangDropdown, this.targetLang);
}
},
computed: {
requestCode: function(){
return ['const res = await fetch("' + this.BaseUrl + '/translate", {',
' method: "POST",',
' body: JSON.stringify({',
' q: ' + this.$options.filters.escape(this.inputText) + ',',
' source: ' + this.$options.filters.escape(this.sourceLang) + ',',
' target: ' + this.$options.filters.escape(this.targetLang) + ',',
' format: "' + (this.isHtml ? "html" : "text") + '",',
' api_key: "' + (localStorage.getItem("api_key") || "") + '"',
' }),',
' headers: { "Content-Type": "application/json" }',
'});',
'',
'console.log(await res.json());'].join("\n");
},
supportedFilesFormatFormatted: function() {
return this.supportedFilesFormat.join(', ');
},
isHtml: function(){
return htmlRegex.test(this.inputText);
},
canSendSuggestion() {
return this.translatedText.trim() !== "" && this.translatedText !== this.savedTanslatedText;
}
},
filters: {
escape: function(v){
return JSON.stringify(v);
},
highlight: function(v){
return Prism.highlight(v, Prism.languages.javascript, 'javascript');
}
},
methods: {
abortPreviousTransRequest: function(){
if (this.transRequest){
this.transRequest.abort();
this.transRequest = null;
}
},
swapLangs: function(e){
this.closeSuggestTranslation(e)
var t = this.sourceLang;
this.sourceLang = this.targetLang;
this.targetLang = t;
this.inputText = this.translatedText;
this.translatedText = "";
this.handleInput();
},
dismissError: function(){
this.error = '';
},
getQueryParam: function (key) {
const params = new URLSearchParams(window.location.search);
return params.get(key)
},
updateQueryParam: function (key, value) {
let searchParams = new URLSearchParams(window.location.search)
searchParams.set(key, value);
let newRelativePathQuery = window.location.pathname + '?' + searchParams.toString();
history.pushState(null, '', newRelativePathQuery);
},
handleInput: function(e){
this.closeSuggestTranslation(e)
this.updateQueryParam('source', this.sourceLang)
this.updateQueryParam('target', this.targetLang)
this.updateQueryParam('q', encodeURI(this.inputText))
if (this.timeout) clearTimeout(this.timeout);
this.timeout = null;
if (this.inputText === ""){
this.translatedText = "";
this.output = "";
this.abortPreviousTransRequest();
this.loadingTranslation = false;
return;
}
var self = this;
self.loadingTranslation = true;
this.timeout = setTimeout(function(){
self.abortPreviousTransRequest();
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("format", self.isHtml ? "html" : "text");
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/translate', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
// Success!
if (res.translatedText !== undefined){
self.translatedText = res.translatedText;
self.loadingTranslation = false;
self.output = JSON.stringify(res, null, 4);
} else{
throw new Error(res.error || "Unknown error");
}
} catch (e) {
self.error = e.message;
self.loadingTranslation = false;
}
};
request.onerror = function() {
self.error = "Error while calling /translate";
self.loadingTranslation = false;
};
request.send(data);
}, self.frontendTimeout);
},
copyText: function(e){
e.preventDefault();
this.$refs.translatedTextarea.select();
this.$refs.translatedTextarea.setSelectionRange(0, 9999999); /* For mobile devices */
document.execCommand("copy");
if (this.copyTextLabel === "Copy text"){
this.copyTextLabel = "Copied";
var self = this;
setTimeout(function(){
self.copyTextLabel = "Copy text";
}, 1500);
}
},
suggestTranslation: function(e) {
e.preventDefault();
this.savedTanslatedText = this.translatedText
this.isSuggesting = true;
},
closeSuggestTranslation: function(e) {
this.translatedText = this.savedTanslatedText
e.preventDefault();
this.isSuggesting = false;
},
sendSuggestion: function(e) {
e.preventDefault();
var self = this;
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("s", self.translatedText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/suggest', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
if (res.success){
M.toast({html: 'Thanks for your correction.'})
self.closeSuggestTranslation(e)
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.closeSuggestTranslation(e)
}
};
request.onerror = function() {
self.error = "Error while calling /suggest";
self.loadingTranslation = false;
};
request.send(data);
},
deleteText: function(e){
e.preventDefault();
this.inputText = this.translatedText = this.output = "";
this.$refs.inputTextarea.focus();
},
switchType: function(type) {
this.translationType = type;
},
handleInputFile: function(e) {
this.inputFile = e.target.files[0];
},
removeFile: function(e) {
e.preventDefault()
this.inputFile = false;
this.translatedFileUrl = false;
this.loadingFileTranslation = false;
},
translateFile: function(e) {
e.preventDefault();
let self = this;
let translateFileRequest = new XMLHttpRequest();
translateFileRequest.open("POST", BaseUrl + "/translate_file", true);
let data = new FormData();
data.append("file", this.inputFile);
data.append("source", this.sourceLang);
data.append("target", this.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
this.loadingFileTranslation = true
translateFileRequest.onload = function() {
if (translateFileRequest.readyState === 4 && translateFileRequest.status === 200) {
try{
self.loadingFileTranslation = false;
let res = JSON.parse(this.response);
if (res.translatedFileUrl){
self.translatedFileUrl = res.translatedFileUrl;
let link = document.createElement("a");
link.target = "_blank";
link.href = self.translatedFileUrl;
link.click();
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.loadingFileTranslation = false;
self.inputFile = false;
}
}else{
let res = JSON.parse(this.response);
self.error = res.error || "Unknown error";
self.loadingFileTranslation = false;
self.inputFile = false;
}
}
translateFileRequest.onerror = function() {
self.error = "Error while calling /translate_file";
self.loadingFileTranslation = false;
self.inputFile = false;
};
translateFileRequest.send(data);
}
}
});
});
/**
* @param {object} self
* @param {XMLHttpRequest} response
*/
function handleLangsResponse(self, response) {
if (response.status >= 200 && response.status < 400) {
self.langs = JSON.parse(response.response);
if (self.langs.length === 0){
self.loading = false;
self.error = "No languages available. Did you install the models correctly?"
return;
}
self.langs.push({ name: "Auto Detect (Experimental)", code: "auto" })
const sourceLanguage = self.langs.find(l => l.code === self.getQueryParam("source"))
const targetLanguage = self.langs.find(l => l.code === self.getQueryParam("target"))
if (sourceLanguage) {
self.sourceLang = sourceLanguage.code
}
if (targetLanguage) {
self.targetLang = targetLanguage.code
}
const defaultText = self.getQueryParam("q")
if (defaultText) {
self.inputText = decodeURI(defaultText)
}
} else {
self.error = "Cannot load /languages";
}
self.loading = false;
}
/**
* @param {object} langDropdown
* @param {string} lang
*/
function updateSelectedAttribute(langDropdown, lang) {
for (const child of langDropdown.children) {
if (child.value === lang){
child.setAttribute('selected', '');
langDropdown.style.width = getTextWidth(child.text) + 24 + 'px';
} else{
child.removeAttribute('selected');
}
}
}
function getTextWidth(text) {
var canvas = getTextWidth.canvas || (getTextWidth.canvas = document.createElement("canvas"));
var ctx = canvas.getContext("2d");
ctx.font = 'bold 16px sans-serif';
var textWidth = Math.ceil(ctx.measureText(text).width);
return textWidth;
}
function setApiKey(){
var prevKey = localStorage.getItem("api_key") || "";
var newKey = "";
var instructions = "contact the server operator.";
if (window.getApiKeyLink) instructions = "press the \"Get API Key\" link."
newKey = window.prompt("Type in your API Key. If you need an API key, " + instructions, prevKey);
if (newKey === null) newKey = "";
localStorage.setItem("api_key", newKey);
}
// @license-end
| // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.querySelectorAll('.sidenav');
var sidenavInstances = M.Sidenav.init(sidenavElems);
var app = new Vue({
el: '#app',
delimiters: ['[[',']]'],
data: {
BaseUrl: BaseUrl,
loading: true,
error: "",
langs: [],
settings: {},
sourceLang: "",
targetLang: "",
loadingTranslation: false,
inputText: "",
inputTextareaHeight: 250,
savedTanslatedText: "",
translatedText: "",
output: "",
charactersLimit: -1,
detectedLangText: "",
copyTextLabel: "Copy text",
suggestions: false,
isSuggesting: false,
supportedFilesFormat : [],
translationType: "text",
inputFile: false,
loadingFileTranslation: false,
translatedFileUrl: false,
filesTranslation: true,
frontendTimeout: 500
},
mounted: function() {
const self = this;
const settingsRequest = new XMLHttpRequest();
settingsRequest.open("GET", BaseUrl + "/frontend/settings", true);
const langsRequest = new XMLHttpRequest();
langsRequest.open("GET", BaseUrl + "/languages", true);
settingsRequest.onload = function() {
if (this.status >= 200 && this.status < 400) {
self.settings = JSON.parse(this.response);
self.sourceLang = self.settings.language.source.code;
self.targetLang = self.settings.language.target.code;
self.charactersLimit = self.settings.charLimit;
self.suggestions = self.settings.suggestions;
self.supportedFilesFormat = self.settings.supportedFilesFormat;
self.filesTranslation = self.settings.filesTranslation;
self.frontendTimeout = self.settings.frontendTimeout;
if (langsRequest.response) {
handleLangsResponse(self, langsRequest);
} else {
langsRequest.onload = function() {
handleLangsResponse(self, this);
}
}
} else {
self.error = "Cannot load /frontend/settings";
self.loading = false;
}
};
settingsRequest.onerror = function() {
self.error = "Error while calling /frontend/settings";
self.loading = false;
};
langsRequest.onerror = function() {
self.error = "Error while calling /languages";
self.loading = false;
};
settingsRequest.send();
langsRequest.send();
},
updated: function(){
M.FormSelect.init(this.$refs.sourceLangDropdown);
M.FormSelect.init(this.$refs.targetLangDropdown);
if (this.$refs.inputTextarea){
if (this.inputText === ""){
this.$refs.inputTextarea.style.height = this.inputTextareaHeight + "px";
this.$refs.translatedTextarea.style.height = this.inputTextareaHeight + "px";
} else{
this.$refs.inputTextarea.style.height = this.$refs.translatedTextarea.style.height = "1px";
this.$refs.inputTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.inputTextarea.scrollHeight + 32) + "px";
this.$refs.translatedTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.translatedTextarea.scrollHeight + 32) + "px";
}
}
if (this.charactersLimit !== -1 && this.inputText.length >= this.charactersLimit){
this.inputText = this.inputText.substring(0, this.charactersLimit);
}
// Update "selected" attribute (to overcome a vue.js limitation)
// but properly display checkmarks on supported browsers.
// Also change the <select> width value depending on the <option> length
if (this.$refs.sourceLangDropdown) {
updateSelectedAttribute(this.$refs.sourceLangDropdown, this.sourceLang);
}
if (this.$refs.targetLangDropdown) {
updateSelectedAttribute(this.$refs.targetLangDropdown, this.targetLang);
}
},
computed: {
requestCode: function(){
return ['const res = await fetch("' + this.BaseUrl + '/translate", {',
' method: "POST",',
' body: JSON.stringify({',
' q: ' + this.$options.filters.escape(this.inputText) + ',',
' source: ' + this.$options.filters.escape(this.sourceLang) + ',',
' target: ' + this.$options.filters.escape(this.targetLang) + ',',
' format: "' + (this.isHtml ? "html" : "text") + '",',
' api_key: "' + (localStorage.getItem("api_key") || "") + '"',
' }),',
' headers: { "Content-Type": "application/json" }',
'});',
'',
'console.log(await res.json());'].join("\n");
},
supportedFilesFormatFormatted: function() {
return this.supportedFilesFormat.join(', ');
},
isHtml: function(){
return htmlRegex.test(this.inputText);
},
canSendSuggestion() {
return this.translatedText.trim() !== "" && this.translatedText !== this.savedTanslatedText;
}
},
filters: {
escape: function(v){
return JSON.stringify(v);
},
highlight: function(v){
return Prism.highlight(v, Prism.languages.javascript, 'javascript');
}
},
methods: {
abortPreviousTransRequest: function(){
if (this.transRequest){
this.transRequest.abort();
this.transRequest = null;
}
},
swapLangs: function(e){
this.closeSuggestTranslation(e)
var t = this.sourceLang;
this.sourceLang = this.targetLang;
this.targetLang = t;
this.inputText = this.translatedText;
this.translatedText = "";
this.handleInput();
},
dismissError: function(){
this.error = '';
},
getQueryParam: function (key) {
const params = new URLSearchParams(window.location.search);
return params.get(key)
},
updateQueryParam: function (key, value) {
let searchParams = new URLSearchParams(window.location.search)
searchParams.set(key, value);
let newRelativePathQuery = window.location.pathname + '?' + searchParams.toString();
history.pushState(null, '', newRelativePathQuery);
},
handleInput: function(e){
this.closeSuggestTranslation(e)
this.updateQueryParam('source', this.sourceLang)
this.updateQueryParam('target', this.targetLang)
this.updateQueryParam('q', encodeURI(this.inputText))
if (this.timeout) clearTimeout(this.timeout);
this.timeout = null;
this.detectedLangText = "";
if (this.inputText === ""){
this.translatedText = "";
this.output = "";
this.abortPreviousTransRequest();
this.loadingTranslation = false;
return;
}
var self = this;
self.loadingTranslation = true;
this.timeout = setTimeout(function(){
self.abortPreviousTransRequest();
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("format", self.isHtml ? "html" : "text");
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/translate', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
// Success!
if (res.translatedText !== undefined){
self.translatedText = res.translatedText;
self.loadingTranslation = false;
self.output = JSON.stringify(res, null, 4);
if(self.sourceLang == "auto" && res.detectedLanguage.length > 0){
self.detectedLangText = res.detectedLanguage.language+" ("+res.detectedLanguage.confidence+"%)";
}
} else{
throw new Error(res.error || "Unknown error");
}
} catch (e) {
self.error = e.message;
self.loadingTranslation = false;
}
};
request.onerror = function() {
self.error = "Error while calling /translate";
self.loadingTranslation = false;
};
request.send(data);
}, self.frontendTimeout);
},
copyText: function(e){
e.preventDefault();
this.$refs.translatedTextarea.select();
this.$refs.translatedTextarea.setSelectionRange(0, 9999999); /* For mobile devices */
document.execCommand("copy");
if (this.copyTextLabel === "Copy text"){
this.copyTextLabel = "Copied";
var self = this;
setTimeout(function(){
self.copyTextLabel = "Copy text";
}, 1500);
}
},
suggestTranslation: function(e) {
e.preventDefault();
this.savedTanslatedText = this.translatedText
this.isSuggesting = true;
},
closeSuggestTranslation: function(e) {
this.translatedText = this.savedTanslatedText
e.preventDefault();
this.isSuggesting = false;
},
sendSuggestion: function(e) {
e.preventDefault();
var self = this;
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("s", self.translatedText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/suggest', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
if (res.success){
M.toast({html: 'Thanks for your correction.'})
self.closeSuggestTranslation(e)
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.closeSuggestTranslation(e)
}
};
request.onerror = function() {
self.error = "Error while calling /suggest";
self.loadingTranslation = false;
};
request.send(data);
},
deleteText: function(e){
e.preventDefault();
this.inputText = this.translatedText = this.output = "";
this.$refs.inputTextarea.focus();
},
switchType: function(type) {
this.translationType = type;
},
handleInputFile: function(e) {
this.inputFile = e.target.files[0];
},
removeFile: function(e) {
e.preventDefault()
this.inputFile = false;
this.translatedFileUrl = false;
this.loadingFileTranslation = false;
},
translateFile: function(e) {
e.preventDefault();
let self = this;
let translateFileRequest = new XMLHttpRequest();
translateFileRequest.open("POST", BaseUrl + "/translate_file", true);
let data = new FormData();
data.append("file", this.inputFile);
data.append("source", this.sourceLang);
data.append("target", this.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
this.loadingFileTranslation = true
translateFileRequest.onload = function() {
if (translateFileRequest.readyState === 4 && translateFileRequest.status === 200) {
try{
self.loadingFileTranslation = false;
let res = JSON.parse(this.response);
if (res.translatedFileUrl){
self.translatedFileUrl = res.translatedFileUrl;
let link = document.createElement("a");
link.target = "_blank";
link.href = self.translatedFileUrl;
link.click();
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.loadingFileTranslation = false;
self.inputFile = false;
}
}else{
let res = JSON.parse(this.response);
self.error = res.error || "Unknown error";
self.loadingFileTranslation = false;
self.inputFile = false;
}
}
translateFileRequest.onerror = function() {
self.error = "Error while calling /translate_file";
self.loadingFileTranslation = false;
self.inputFile = false;
};
translateFileRequest.send(data);
}
}
});
});
/**
* @param {object} self
* @param {XMLHttpRequest} response
*/
function handleLangsResponse(self, response) {
if (response.status >= 200 && response.status < 400) {
self.langs = JSON.parse(response.response);
if (self.langs.length === 0){
self.loading = false;
self.error = "No languages available. Did you install the models correctly?"
return;
}
self.langs.push({ name: "Auto Detect (Experimental)", code: "auto" })
const sourceLanguage = self.langs.find(l => l.code === self.getQueryParam("source"))
const targetLanguage = self.langs.find(l => l.code === self.getQueryParam("target"))
if (sourceLanguage) {
self.sourceLang = sourceLanguage.code
}
if (targetLanguage) {
self.targetLang = targetLanguage.code
}
const defaultText = self.getQueryParam("q")
if (defaultText) {
self.inputText = decodeURI(defaultText)
}
} else {
self.error = "Cannot load /languages";
}
self.loading = false;
}
/**
* @param {object} langDropdown
* @param {string} lang
*/
function updateSelectedAttribute(langDropdown, lang) {
for (const child of langDropdown.children) {
if (child.value === lang){
child.setAttribute('selected', '');
langDropdown.style.width = getTextWidth(child.text) + 24 + 'px';
} else{
child.removeAttribute('selected');
}
}
}
function getTextWidth(text) {
var canvas = getTextWidth.canvas || (getTextWidth.canvas = document.createElement("canvas"));
var ctx = canvas.getContext("2d");
ctx.font = 'bold 16px sans-serif';
var textWidth = Math.ceil(ctx.measureText(text).width);
return textWidth;
}
function setApiKey(){
var prevKey = localStorage.getItem("api_key") || "";
var newKey = "";
var instructions = "contact the server operator.";
if (window.getApiKeyLink) instructions = "press the \"Get API Key\" link."
newKey = window.prompt("Type in your API Key. If you need an API key, " + instructions, prevKey);
if (newKey === null) newKey = "";
localStorage.setItem("api_key", newKey);
}
// @license-end
| AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | there is a problem here, detectedLanguage is an object we cannot use .length because it returns undefined on an object
so the condition never triggers | dingedi | 1 |
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should print the detected language as well as the confidence when using "Auto Detect" as the source language.
Feel free to see this as a rough starting-point - please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/static/js/app.js | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.querySelectorAll('.sidenav');
var sidenavInstances = M.Sidenav.init(sidenavElems);
var app = new Vue({
el: '#app',
delimiters: ['[[',']]'],
data: {
BaseUrl: BaseUrl,
loading: true,
error: "",
langs: [],
settings: {},
sourceLang: "",
targetLang: "",
loadingTranslation: false,
inputText: "",
inputTextareaHeight: 250,
savedTanslatedText: "",
translatedText: "",
output: "",
charactersLimit: -1,
copyTextLabel: "Copy text",
suggestions: false,
isSuggesting: false,
supportedFilesFormat : [],
translationType: "text",
inputFile: false,
loadingFileTranslation: false,
translatedFileUrl: false,
filesTranslation: true,
frontendTimeout: 500
},
mounted: function() {
const self = this;
const settingsRequest = new XMLHttpRequest();
settingsRequest.open("GET", BaseUrl + "/frontend/settings", true);
const langsRequest = new XMLHttpRequest();
langsRequest.open("GET", BaseUrl + "/languages", true);
settingsRequest.onload = function() {
if (this.status >= 200 && this.status < 400) {
self.settings = JSON.parse(this.response);
self.sourceLang = self.settings.language.source.code;
self.targetLang = self.settings.language.target.code;
self.charactersLimit = self.settings.charLimit;
self.suggestions = self.settings.suggestions;
self.supportedFilesFormat = self.settings.supportedFilesFormat;
self.filesTranslation = self.settings.filesTranslation;
self.frontendTimeout = self.settings.frontendTimeout;
if (langsRequest.response) {
handleLangsResponse(self, langsRequest);
} else {
langsRequest.onload = function() {
handleLangsResponse(self, this);
}
}
} else {
self.error = "Cannot load /frontend/settings";
self.loading = false;
}
};
settingsRequest.onerror = function() {
self.error = "Error while calling /frontend/settings";
self.loading = false;
};
langsRequest.onerror = function() {
self.error = "Error while calling /languages";
self.loading = false;
};
settingsRequest.send();
langsRequest.send();
},
updated: function(){
M.FormSelect.init(this.$refs.sourceLangDropdown);
M.FormSelect.init(this.$refs.targetLangDropdown);
if (this.$refs.inputTextarea){
if (this.inputText === ""){
this.$refs.inputTextarea.style.height = this.inputTextareaHeight + "px";
this.$refs.translatedTextarea.style.height = this.inputTextareaHeight + "px";
} else{
this.$refs.inputTextarea.style.height = this.$refs.translatedTextarea.style.height = "1px";
this.$refs.inputTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.inputTextarea.scrollHeight + 32) + "px";
this.$refs.translatedTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.translatedTextarea.scrollHeight + 32) + "px";
}
}
if (this.charactersLimit !== -1 && this.inputText.length >= this.charactersLimit){
this.inputText = this.inputText.substring(0, this.charactersLimit);
}
// Update "selected" attribute (to overcome a vue.js limitation)
// but properly display checkmarks on supported browsers.
// Also change the <select> width value depending on the <option> length
if (this.$refs.sourceLangDropdown) {
updateSelectedAttribute(this.$refs.sourceLangDropdown, this.sourceLang);
}
if (this.$refs.targetLangDropdown) {
updateSelectedAttribute(this.$refs.targetLangDropdown, this.targetLang);
}
},
computed: {
requestCode: function(){
return ['const res = await fetch("' + this.BaseUrl + '/translate", {',
' method: "POST",',
' body: JSON.stringify({',
' q: ' + this.$options.filters.escape(this.inputText) + ',',
' source: ' + this.$options.filters.escape(this.sourceLang) + ',',
' target: ' + this.$options.filters.escape(this.targetLang) + ',',
' format: "' + (this.isHtml ? "html" : "text") + '",',
' api_key: "' + (localStorage.getItem("api_key") || "") + '"',
' }),',
' headers: { "Content-Type": "application/json" }',
'});',
'',
'console.log(await res.json());'].join("\n");
},
supportedFilesFormatFormatted: function() {
return this.supportedFilesFormat.join(', ');
},
isHtml: function(){
return htmlRegex.test(this.inputText);
},
canSendSuggestion() {
return this.translatedText.trim() !== "" && this.translatedText !== this.savedTanslatedText;
}
},
filters: {
escape: function(v){
return JSON.stringify(v);
},
highlight: function(v){
return Prism.highlight(v, Prism.languages.javascript, 'javascript');
}
},
methods: {
abortPreviousTransRequest: function(){
if (this.transRequest){
this.transRequest.abort();
this.transRequest = null;
}
},
swapLangs: function(e){
this.closeSuggestTranslation(e)
var t = this.sourceLang;
this.sourceLang = this.targetLang;
this.targetLang = t;
this.inputText = this.translatedText;
this.translatedText = "";
this.handleInput();
},
dismissError: function(){
this.error = '';
},
getQueryParam: function (key) {
const params = new URLSearchParams(window.location.search);
return params.get(key)
},
updateQueryParam: function (key, value) {
let searchParams = new URLSearchParams(window.location.search)
searchParams.set(key, value);
let newRelativePathQuery = window.location.pathname + '?' + searchParams.toString();
history.pushState(null, '', newRelativePathQuery);
},
handleInput: function(e){
this.closeSuggestTranslation(e)
this.updateQueryParam('source', this.sourceLang)
this.updateQueryParam('target', this.targetLang)
this.updateQueryParam('q', encodeURI(this.inputText))
if (this.timeout) clearTimeout(this.timeout);
this.timeout = null;
if (this.inputText === ""){
this.translatedText = "";
this.output = "";
this.abortPreviousTransRequest();
this.loadingTranslation = false;
return;
}
var self = this;
self.loadingTranslation = true;
this.timeout = setTimeout(function(){
self.abortPreviousTransRequest();
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("format", self.isHtml ? "html" : "text");
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/translate', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
// Success!
if (res.translatedText !== undefined){
self.translatedText = res.translatedText;
self.loadingTranslation = false;
self.output = JSON.stringify(res, null, 4);
} else{
throw new Error(res.error || "Unknown error");
}
} catch (e) {
self.error = e.message;
self.loadingTranslation = false;
}
};
request.onerror = function() {
self.error = "Error while calling /translate";
self.loadingTranslation = false;
};
request.send(data);
}, self.frontendTimeout);
},
copyText: function(e){
e.preventDefault();
this.$refs.translatedTextarea.select();
this.$refs.translatedTextarea.setSelectionRange(0, 9999999); /* For mobile devices */
document.execCommand("copy");
if (this.copyTextLabel === "Copy text"){
this.copyTextLabel = "Copied";
var self = this;
setTimeout(function(){
self.copyTextLabel = "Copy text";
}, 1500);
}
},
suggestTranslation: function(e) {
e.preventDefault();
this.savedTanslatedText = this.translatedText
this.isSuggesting = true;
},
closeSuggestTranslation: function(e) {
this.translatedText = this.savedTanslatedText
e.preventDefault();
this.isSuggesting = false;
},
sendSuggestion: function(e) {
e.preventDefault();
var self = this;
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("s", self.translatedText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/suggest', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
if (res.success){
M.toast({html: 'Thanks for your correction.'})
self.closeSuggestTranslation(e)
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.closeSuggestTranslation(e)
}
};
request.onerror = function() {
self.error = "Error while calling /suggest";
self.loadingTranslation = false;
};
request.send(data);
},
deleteText: function(e){
e.preventDefault();
this.inputText = this.translatedText = this.output = "";
this.$refs.inputTextarea.focus();
},
switchType: function(type) {
this.translationType = type;
},
handleInputFile: function(e) {
this.inputFile = e.target.files[0];
},
removeFile: function(e) {
e.preventDefault()
this.inputFile = false;
this.translatedFileUrl = false;
this.loadingFileTranslation = false;
},
translateFile: function(e) {
e.preventDefault();
let self = this;
let translateFileRequest = new XMLHttpRequest();
translateFileRequest.open("POST", BaseUrl + "/translate_file", true);
let data = new FormData();
data.append("file", this.inputFile);
data.append("source", this.sourceLang);
data.append("target", this.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
this.loadingFileTranslation = true
translateFileRequest.onload = function() {
if (translateFileRequest.readyState === 4 && translateFileRequest.status === 200) {
try{
self.loadingFileTranslation = false;
let res = JSON.parse(this.response);
if (res.translatedFileUrl){
self.translatedFileUrl = res.translatedFileUrl;
let link = document.createElement("a");
link.target = "_blank";
link.href = self.translatedFileUrl;
link.click();
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.loadingFileTranslation = false;
self.inputFile = false;
}
}else{
let res = JSON.parse(this.response);
self.error = res.error || "Unknown error";
self.loadingFileTranslation = false;
self.inputFile = false;
}
}
translateFileRequest.onerror = function() {
self.error = "Error while calling /translate_file";
self.loadingFileTranslation = false;
self.inputFile = false;
};
translateFileRequest.send(data);
}
}
});
});
/**
* @param {object} self
* @param {XMLHttpRequest} response
*/
function handleLangsResponse(self, response) {
if (response.status >= 200 && response.status < 400) {
self.langs = JSON.parse(response.response);
if (self.langs.length === 0){
self.loading = false;
self.error = "No languages available. Did you install the models correctly?"
return;
}
self.langs.push({ name: "Auto Detect (Experimental)", code: "auto" })
const sourceLanguage = self.langs.find(l => l.code === self.getQueryParam("source"))
const targetLanguage = self.langs.find(l => l.code === self.getQueryParam("target"))
if (sourceLanguage) {
self.sourceLang = sourceLanguage.code
}
if (targetLanguage) {
self.targetLang = targetLanguage.code
}
const defaultText = self.getQueryParam("q")
if (defaultText) {
self.inputText = decodeURI(defaultText)
}
} else {
self.error = "Cannot load /languages";
}
self.loading = false;
}
/**
* @param {object} langDropdown
* @param {string} lang
*/
function updateSelectedAttribute(langDropdown, lang) {
for (const child of langDropdown.children) {
if (child.value === lang){
child.setAttribute('selected', '');
langDropdown.style.width = getTextWidth(child.text) + 24 + 'px';
} else{
child.removeAttribute('selected');
}
}
}
function getTextWidth(text) {
var canvas = getTextWidth.canvas || (getTextWidth.canvas = document.createElement("canvas"));
var ctx = canvas.getContext("2d");
ctx.font = 'bold 16px sans-serif';
var textWidth = Math.ceil(ctx.measureText(text).width);
return textWidth;
}
function setApiKey(){
var prevKey = localStorage.getItem("api_key") || "";
var newKey = "";
var instructions = "contact the server operator.";
if (window.getApiKeyLink) instructions = "press the \"Get API Key\" link."
newKey = window.prompt("Type in your API Key. If you need an API key, " + instructions, prevKey);
if (newKey === null) newKey = "";
localStorage.setItem("api_key", newKey);
}
// @license-end
| // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.querySelectorAll('.sidenav');
var sidenavInstances = M.Sidenav.init(sidenavElems);
var app = new Vue({
el: '#app',
delimiters: ['[[',']]'],
data: {
BaseUrl: BaseUrl,
loading: true,
error: "",
langs: [],
settings: {},
sourceLang: "",
targetLang: "",
loadingTranslation: false,
inputText: "",
inputTextareaHeight: 250,
savedTanslatedText: "",
translatedText: "",
output: "",
charactersLimit: -1,
detectedLangText: "",
copyTextLabel: "Copy text",
suggestions: false,
isSuggesting: false,
supportedFilesFormat : [],
translationType: "text",
inputFile: false,
loadingFileTranslation: false,
translatedFileUrl: false,
filesTranslation: true,
frontendTimeout: 500
},
mounted: function() {
const self = this;
const settingsRequest = new XMLHttpRequest();
settingsRequest.open("GET", BaseUrl + "/frontend/settings", true);
const langsRequest = new XMLHttpRequest();
langsRequest.open("GET", BaseUrl + "/languages", true);
settingsRequest.onload = function() {
if (this.status >= 200 && this.status < 400) {
self.settings = JSON.parse(this.response);
self.sourceLang = self.settings.language.source.code;
self.targetLang = self.settings.language.target.code;
self.charactersLimit = self.settings.charLimit;
self.suggestions = self.settings.suggestions;
self.supportedFilesFormat = self.settings.supportedFilesFormat;
self.filesTranslation = self.settings.filesTranslation;
self.frontendTimeout = self.settings.frontendTimeout;
if (langsRequest.response) {
handleLangsResponse(self, langsRequest);
} else {
langsRequest.onload = function() {
handleLangsResponse(self, this);
}
}
} else {
self.error = "Cannot load /frontend/settings";
self.loading = false;
}
};
settingsRequest.onerror = function() {
self.error = "Error while calling /frontend/settings";
self.loading = false;
};
langsRequest.onerror = function() {
self.error = "Error while calling /languages";
self.loading = false;
};
settingsRequest.send();
langsRequest.send();
},
updated: function(){
M.FormSelect.init(this.$refs.sourceLangDropdown);
M.FormSelect.init(this.$refs.targetLangDropdown);
if (this.$refs.inputTextarea){
if (this.inputText === ""){
this.$refs.inputTextarea.style.height = this.inputTextareaHeight + "px";
this.$refs.translatedTextarea.style.height = this.inputTextareaHeight + "px";
} else{
this.$refs.inputTextarea.style.height = this.$refs.translatedTextarea.style.height = "1px";
this.$refs.inputTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.inputTextarea.scrollHeight + 32) + "px";
this.$refs.translatedTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.translatedTextarea.scrollHeight + 32) + "px";
}
}
if (this.charactersLimit !== -1 && this.inputText.length >= this.charactersLimit){
this.inputText = this.inputText.substring(0, this.charactersLimit);
}
// Update "selected" attribute (to overcome a vue.js limitation)
// but properly display checkmarks on supported browsers.
// Also change the <select> width value depending on the <option> length
if (this.$refs.sourceLangDropdown) {
updateSelectedAttribute(this.$refs.sourceLangDropdown, this.sourceLang);
}
if (this.$refs.targetLangDropdown) {
updateSelectedAttribute(this.$refs.targetLangDropdown, this.targetLang);
}
},
computed: {
requestCode: function(){
return ['const res = await fetch("' + this.BaseUrl + '/translate", {',
' method: "POST",',
' body: JSON.stringify({',
' q: ' + this.$options.filters.escape(this.inputText) + ',',
' source: ' + this.$options.filters.escape(this.sourceLang) + ',',
' target: ' + this.$options.filters.escape(this.targetLang) + ',',
' format: "' + (this.isHtml ? "html" : "text") + '",',
' api_key: "' + (localStorage.getItem("api_key") || "") + '"',
' }),',
' headers: { "Content-Type": "application/json" }',
'});',
'',
'console.log(await res.json());'].join("\n");
},
supportedFilesFormatFormatted: function() {
return this.supportedFilesFormat.join(', ');
},
isHtml: function(){
return htmlRegex.test(this.inputText);
},
canSendSuggestion() {
return this.translatedText.trim() !== "" && this.translatedText !== this.savedTanslatedText;
}
},
filters: {
escape: function(v){
return JSON.stringify(v);
},
highlight: function(v){
return Prism.highlight(v, Prism.languages.javascript, 'javascript');
}
},
methods: {
abortPreviousTransRequest: function(){
if (this.transRequest){
this.transRequest.abort();
this.transRequest = null;
}
},
swapLangs: function(e){
this.closeSuggestTranslation(e)
var t = this.sourceLang;
this.sourceLang = this.targetLang;
this.targetLang = t;
this.inputText = this.translatedText;
this.translatedText = "";
this.handleInput();
},
dismissError: function(){
this.error = '';
},
getQueryParam: function (key) {
const params = new URLSearchParams(window.location.search);
return params.get(key)
},
updateQueryParam: function (key, value) {
let searchParams = new URLSearchParams(window.location.search)
searchParams.set(key, value);
let newRelativePathQuery = window.location.pathname + '?' + searchParams.toString();
history.pushState(null, '', newRelativePathQuery);
},
handleInput: function(e){
this.closeSuggestTranslation(e)
this.updateQueryParam('source', this.sourceLang)
this.updateQueryParam('target', this.targetLang)
this.updateQueryParam('q', encodeURI(this.inputText))
if (this.timeout) clearTimeout(this.timeout);
this.timeout = null;
this.detectedLangText = "";
if (this.inputText === ""){
this.translatedText = "";
this.output = "";
this.abortPreviousTransRequest();
this.loadingTranslation = false;
return;
}
var self = this;
self.loadingTranslation = true;
this.timeout = setTimeout(function(){
self.abortPreviousTransRequest();
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("format", self.isHtml ? "html" : "text");
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/translate', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
// Success!
if (res.translatedText !== undefined){
self.translatedText = res.translatedText;
self.loadingTranslation = false;
self.output = JSON.stringify(res, null, 4);
if(self.sourceLang == "auto" && res.detectedLanguage.length > 0){
self.detectedLangText = res.detectedLanguage.language+" ("+res.detectedLanguage.confidence+"%)";
}
} else{
throw new Error(res.error || "Unknown error");
}
} catch (e) {
self.error = e.message;
self.loadingTranslation = false;
}
};
request.onerror = function() {
self.error = "Error while calling /translate";
self.loadingTranslation = false;
};
request.send(data);
}, self.frontendTimeout);
},
copyText: function(e){
e.preventDefault();
this.$refs.translatedTextarea.select();
this.$refs.translatedTextarea.setSelectionRange(0, 9999999); /* For mobile devices */
document.execCommand("copy");
if (this.copyTextLabel === "Copy text"){
this.copyTextLabel = "Copied";
var self = this;
setTimeout(function(){
self.copyTextLabel = "Copy text";
}, 1500);
}
},
suggestTranslation: function(e) {
e.preventDefault();
this.savedTanslatedText = this.translatedText
this.isSuggesting = true;
},
closeSuggestTranslation: function(e) {
this.translatedText = this.savedTanslatedText
e.preventDefault();
this.isSuggesting = false;
},
sendSuggestion: function(e) {
e.preventDefault();
var self = this;
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("s", self.translatedText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/suggest', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
if (res.success){
M.toast({html: 'Thanks for your correction.'})
self.closeSuggestTranslation(e)
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.closeSuggestTranslation(e)
}
};
request.onerror = function() {
self.error = "Error while calling /suggest";
self.loadingTranslation = false;
};
request.send(data);
},
deleteText: function(e){
e.preventDefault();
this.inputText = this.translatedText = this.output = "";
this.$refs.inputTextarea.focus();
},
switchType: function(type) {
this.translationType = type;
},
handleInputFile: function(e) {
this.inputFile = e.target.files[0];
},
removeFile: function(e) {
e.preventDefault()
this.inputFile = false;
this.translatedFileUrl = false;
this.loadingFileTranslation = false;
},
translateFile: function(e) {
e.preventDefault();
let self = this;
let translateFileRequest = new XMLHttpRequest();
translateFileRequest.open("POST", BaseUrl + "/translate_file", true);
let data = new FormData();
data.append("file", this.inputFile);
data.append("source", this.sourceLang);
data.append("target", this.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
this.loadingFileTranslation = true
translateFileRequest.onload = function() {
if (translateFileRequest.readyState === 4 && translateFileRequest.status === 200) {
try{
self.loadingFileTranslation = false;
let res = JSON.parse(this.response);
if (res.translatedFileUrl){
self.translatedFileUrl = res.translatedFileUrl;
let link = document.createElement("a");
link.target = "_blank";
link.href = self.translatedFileUrl;
link.click();
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.loadingFileTranslation = false;
self.inputFile = false;
}
}else{
let res = JSON.parse(this.response);
self.error = res.error || "Unknown error";
self.loadingFileTranslation = false;
self.inputFile = false;
}
}
translateFileRequest.onerror = function() {
self.error = "Error while calling /translate_file";
self.loadingFileTranslation = false;
self.inputFile = false;
};
translateFileRequest.send(data);
}
}
});
});
/**
* @param {object} self
* @param {XMLHttpRequest} response
*/
function handleLangsResponse(self, response) {
if (response.status >= 200 && response.status < 400) {
self.langs = JSON.parse(response.response);
if (self.langs.length === 0){
self.loading = false;
self.error = "No languages available. Did you install the models correctly?"
return;
}
self.langs.push({ name: "Auto Detect (Experimental)", code: "auto" })
const sourceLanguage = self.langs.find(l => l.code === self.getQueryParam("source"))
const targetLanguage = self.langs.find(l => l.code === self.getQueryParam("target"))
if (sourceLanguage) {
self.sourceLang = sourceLanguage.code
}
if (targetLanguage) {
self.targetLang = targetLanguage.code
}
const defaultText = self.getQueryParam("q")
if (defaultText) {
self.inputText = decodeURI(defaultText)
}
} else {
self.error = "Cannot load /languages";
}
self.loading = false;
}
/**
* @param {object} langDropdown
* @param {string} lang
*/
function updateSelectedAttribute(langDropdown, lang) {
for (const child of langDropdown.children) {
if (child.value === lang){
child.setAttribute('selected', '');
langDropdown.style.width = getTextWidth(child.text) + 24 + 'px';
} else{
child.removeAttribute('selected');
}
}
}
function getTextWidth(text) {
var canvas = getTextWidth.canvas || (getTextWidth.canvas = document.createElement("canvas"));
var ctx = canvas.getContext("2d");
ctx.font = 'bold 16px sans-serif';
var textWidth = Math.ceil(ctx.measureText(text).width);
return textWidth;
}
function setApiKey(){
var prevKey = localStorage.getItem("api_key") || "";
var newKey = "";
var instructions = "contact the server operator.";
if (window.getApiKeyLink) instructions = "press the \"Get API Key\" link."
newKey = window.prompt("Type in your API Key. If you need an API key, " + instructions, prevKey);
if (newKey === null) newKey = "";
localStorage.setItem("api_key", newKey);
}
// @license-end
| AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | ```js
if(self.sourceLang == "auto" && res.detectedLanguage !== undefined){
```
this solves the problem. | dingedi | 2
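As a sketch only (not the committed code), the reviewer's guard would slot into the success branch of the onload handler like this; the `self` and `res` stubs are illustrative stand-ins for the Vue instance and the parsed /translate response:

```js
// Stand-alone sketch: stub just enough state to show the guard in action.
var self = { sourceLang: "auto", translatedText: "", loadingTranslation: true, output: "", detectedLangText: "" };
var res = { translatedText: "Hello world", detectedLanguage: { language: "de", confidence: 92 } }; // illustrative values

if (res.translatedText !== undefined) {
  self.translatedText = res.translatedText;
  self.loadingTranslation = false;
  self.output = JSON.stringify(res, null, 4);
  // The reviewer's fix: test for the property instead of using .length.
  if (self.sourceLang == "auto" && res.detectedLanguage !== undefined) {
    self.detectedLangText = res.detectedLanguage.language + " (" + res.detectedLanguage.confidence + "%)";
  }
}
console.log(self.detectedLangText); // "de (92%)"
```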
 | AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | Ah, good catch @dingedi. I should have reviewed more thoroughly. Can you make a commit into main? | pierotofy | 3
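To exercise the behavior this thread discusses end to end, a sketch along these lines should work; the request shape follows the requestCode snippet in app.js, while the host and port (localhost:5000) and the exact auto-detection response layout are assumptions, not something confirmed by this record:

```js
// Hypothetical smoke test: translate with source "auto" and print the detection.
(async () => {
  const res = await fetch("http://localhost:5000/translate", {
    method: "POST",
    body: JSON.stringify({ q: "Hallo Welt", source: "auto", target: "en", format: "text", api_key: "" }),
    headers: { "Content-Type": "application/json" }
  });
  const data = await res.json();
  console.log(data.translatedText);
  if (data.detectedLanguage !== undefined) {
    console.log("Detected: " + data.detectedLanguage.language + " (" + data.detectedLanguage.confidence + "%)");
  }
})();
```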
self.frontendTimeout = self.settings.frontendTimeout;
if (langsRequest.response) {
handleLangsResponse(self, langsRequest);
} else {
langsRequest.onload = function() {
handleLangsResponse(self, this);
}
}
} else {
self.error = "Cannot load /frontend/settings";
self.loading = false;
}
};
settingsRequest.onerror = function() {
self.error = "Error while calling /frontend/settings";
self.loading = false;
};
langsRequest.onerror = function() {
self.error = "Error while calling /languages";
self.loading = false;
};
settingsRequest.send();
langsRequest.send();
},
updated: function(){
M.FormSelect.init(this.$refs.sourceLangDropdown);
M.FormSelect.init(this.$refs.targetLangDropdown);
if (this.$refs.inputTextarea){
if (this.inputText === ""){
this.$refs.inputTextarea.style.height = this.inputTextareaHeight + "px";
this.$refs.translatedTextarea.style.height = this.inputTextareaHeight + "px";
} else{
this.$refs.inputTextarea.style.height = this.$refs.translatedTextarea.style.height = "1px";
this.$refs.inputTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.inputTextarea.scrollHeight + 32) + "px";
this.$refs.translatedTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.translatedTextarea.scrollHeight + 32) + "px";
}
}
if (this.charactersLimit !== -1 && this.inputText.length >= this.charactersLimit){
this.inputText = this.inputText.substring(0, this.charactersLimit);
}
// Update "selected" attribute (to overcome a vue.js limitation)
// but properly display checkmarks on supported browsers.
// Also change the <select> width value depending on the <option> length
if (this.$refs.sourceLangDropdown) {
updateSelectedAttribute(this.$refs.sourceLangDropdown, this.sourceLang);
}
if (this.$refs.targetLangDropdown) {
updateSelectedAttribute(this.$refs.targetLangDropdown, this.targetLang);
}
},
computed: {
requestCode: function(){
return ['const res = await fetch("' + this.BaseUrl + '/translate", {',
' method: "POST",',
' body: JSON.stringify({',
' q: ' + this.$options.filters.escape(this.inputText) + ',',
' source: ' + this.$options.filters.escape(this.sourceLang) + ',',
' target: ' + this.$options.filters.escape(this.targetLang) + ',',
' format: "' + (this.isHtml ? "html" : "text") + '",',
' api_key: "' + (localStorage.getItem("api_key") || "") + '"',
' }),',
' headers: { "Content-Type": "application/json" }',
'});',
'',
'console.log(await res.json());'].join("\n");
},
supportedFilesFormatFormatted: function() {
return this.supportedFilesFormat.join(', ');
},
isHtml: function(){
return htmlRegex.test(this.inputText);
},
canSendSuggestion() {
return this.translatedText.trim() !== "" && this.translatedText !== this.savedTanslatedText;
}
},
filters: {
escape: function(v){
return JSON.stringify(v);
},
highlight: function(v){
return Prism.highlight(v, Prism.languages.javascript, 'javascript');
}
},
methods: {
abortPreviousTransRequest: function(){
if (this.transRequest){
this.transRequest.abort();
this.transRequest = null;
}
},
swapLangs: function(e){
this.closeSuggestTranslation(e)
var t = this.sourceLang;
this.sourceLang = this.targetLang;
this.targetLang = t;
this.inputText = this.translatedText;
this.translatedText = "";
this.handleInput();
},
dismissError: function(){
this.error = '';
},
getQueryParam: function (key) {
const params = new URLSearchParams(window.location.search);
return params.get(key)
},
updateQueryParam: function (key, value) {
let searchParams = new URLSearchParams(window.location.search)
searchParams.set(key, value);
let newRelativePathQuery = window.location.pathname + '?' + searchParams.toString();
history.pushState(null, '', newRelativePathQuery);
},
handleInput: function(e){
this.closeSuggestTranslation(e)
this.updateQueryParam('source', this.sourceLang)
this.updateQueryParam('target', this.targetLang)
this.updateQueryParam('q', encodeURI(this.inputText))
if (this.timeout) clearTimeout(this.timeout);
this.timeout = null;
this.detectedLangText = "";
if (this.inputText === ""){
this.translatedText = "";
this.output = "";
this.abortPreviousTransRequest();
this.loadingTranslation = false;
return;
}
var self = this;
self.loadingTranslation = true;
this.timeout = setTimeout(function(){
self.abortPreviousTransRequest();
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("format", self.isHtml ? "html" : "text");
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/translate', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
// Success!
if (res.translatedText !== undefined){
self.translatedText = res.translatedText;
self.loadingTranslation = false;
self.output = JSON.stringify(res, null, 4);
if(self.sourceLang == "auto" && res.detectedLanguage.length > 0){
self.detectedLangText = res.detectedLanguage.language+" ("+res.detectedLanguage.confidence+"%)";
}
} else{
throw new Error(res.error || "Unknown error");
}
} catch (e) {
self.error = e.message;
self.loadingTranslation = false;
}
};
request.onerror = function() {
self.error = "Error while calling /translate";
self.loadingTranslation = false;
};
request.send(data);
}, self.frontendTimeout);
},
copyText: function(e){
e.preventDefault();
this.$refs.translatedTextarea.select();
this.$refs.translatedTextarea.setSelectionRange(0, 9999999); /* For mobile devices */
document.execCommand("copy");
if (this.copyTextLabel === "Copy text"){
this.copyTextLabel = "Copied";
var self = this;
setTimeout(function(){
self.copyTextLabel = "Copy text";
}, 1500);
}
},
suggestTranslation: function(e) {
e.preventDefault();
this.savedTanslatedText = this.translatedText
this.isSuggesting = true;
},
closeSuggestTranslation: function(e) {
this.translatedText = this.savedTanslatedText
e.preventDefault();
this.isSuggesting = false;
},
sendSuggestion: function(e) {
e.preventDefault();
var self = this;
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("s", self.translatedText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/suggest', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
if (res.success){
M.toast({html: 'Thanks for your correction.'})
self.closeSuggestTranslation(e)
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.closeSuggestTranslation(e)
}
};
request.onerror = function() {
self.error = "Error while calling /suggest";
self.loadingTranslation = false;
};
request.send(data);
},
deleteText: function(e){
e.preventDefault();
this.inputText = this.translatedText = this.output = "";
this.$refs.inputTextarea.focus();
},
switchType: function(type) {
this.translationType = type;
},
handleInputFile: function(e) {
this.inputFile = e.target.files[0];
},
removeFile: function(e) {
e.preventDefault()
this.inputFile = false;
this.translatedFileUrl = false;
this.loadingFileTranslation = false;
},
translateFile: function(e) {
e.preventDefault();
let self = this;
let translateFileRequest = new XMLHttpRequest();
translateFileRequest.open("POST", BaseUrl + "/translate_file", true);
let data = new FormData();
data.append("file", this.inputFile);
data.append("source", this.sourceLang);
data.append("target", this.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
this.loadingFileTranslation = true
translateFileRequest.onload = function() {
if (translateFileRequest.readyState === 4 && translateFileRequest.status === 200) {
try{
self.loadingFileTranslation = false;
let res = JSON.parse(this.response);
if (res.translatedFileUrl){
self.translatedFileUrl = res.translatedFileUrl;
let link = document.createElement("a");
link.target = "_blank";
link.href = self.translatedFileUrl;
link.click();
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.loadingFileTranslation = false;
self.inputFile = false;
}
}else{
let res = JSON.parse(this.response);
self.error = res.error || "Unknown error";
self.loadingFileTranslation = false;
self.inputFile = false;
}
}
translateFileRequest.onerror = function() {
self.error = "Error while calling /translate_file";
self.loadingFileTranslation = false;
self.inputFile = false;
};
translateFileRequest.send(data);
}
}
});
});
/**
* @param {object} self
* @param {XMLHttpRequest} response
*/
function handleLangsResponse(self, response) {
if (response.status >= 200 && response.status < 400) {
self.langs = JSON.parse(response.response);
if (self.langs.length === 0){
self.loading = false;
self.error = "No languages available. Did you install the models correctly?"
return;
}
self.langs.push({ name: "Auto Detect (Experimental)", code: "auto" })
const sourceLanguage = self.langs.find(l => l.code === self.getQueryParam("source"))
const targetLanguage = self.langs.find(l => l.code === self.getQueryParam("target"))
if (sourceLanguage) {
self.sourceLang = sourceLanguage.code
}
if (targetLanguage) {
self.targetLang = targetLanguage.code
}
const defaultText = self.getQueryParam("q")
if (defaultText) {
self.inputText = decodeURI(defaultText)
}
} else {
self.error = "Cannot load /languages";
}
self.loading = false;
}
/**
* @param {object} langDropdown
* @param {string} lang
*/
function updateSelectedAttribute(langDropdown, lang) {
for (const child of langDropdown.children) {
if (child.value === lang){
child.setAttribute('selected', '');
langDropdown.style.width = getTextWidth(child.text) + 24 + 'px';
} else{
child.removeAttribute('selected');
}
}
}
function getTextWidth(text) {
var canvas = getTextWidth.canvas || (getTextWidth.canvas = document.createElement("canvas"));
var ctx = canvas.getContext("2d");
ctx.font = 'bold 16px sans-serif';
var textWidth = Math.ceil(ctx.measureText(text).width);
return textWidth;
}
function setApiKey(){
var prevKey = localStorage.getItem("api_key") || "";
var newKey = "";
var instructions = "contact the server operator.";
if (window.getApiKeyLink) instructions = "press the \"Get API Key\" link."
newKey = window.prompt("Type in your API Key. If you need an API key, " + instructions, prevKey);
if (newKey === null) newKey = "";
localStorage.setItem("api_key", newKey);
}
// @license-end
| AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | yes done in https://github.com/LibreTranslate/LibreTranslate/commit/5d8e513d45e4764189b3b7516e5b9f29e6a6f38e | dingedi | 4 |
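The diff in the row above wires the new detectedLanguage field through the frontend: when the source language is "auto", handleInput formats res.detectedLanguage.language and res.detectedLanguage.confidence into detectedLangText. (Note that the guard res.detectedLanguage.length > 0 is only truthy for an array-like value; a plain { language, confidence } object has no length property.) As a minimal sketch of reading the same response field from a standalone client: the response shape is assumed from the frontend code above, the server URL is a placeholder, and fetch requires a browser or Node 18+.

// Sketch only (not part of the PR): call /translate with source "auto" and
// log the detected language the way the frontend formats detectedLangText.
// Assumed response shape: { translatedText, detectedLanguage: { language, confidence } }.
const serverUrl = "http://localhost:5000"; // placeholder address

async function translateAuto(q, target) {
  const res = await fetch(serverUrl + "/translate", {
    method: "POST",
    body: JSON.stringify({ q: q, source: "auto", target: target, format: "text" }),
    headers: { "Content-Type": "application/json" }
  });
  const data = await res.json();
  if (data.error) throw new Error(data.error);
  if (data.detectedLanguage) {
    // Same formatting handleInput uses for detectedLangText
    console.log(data.detectedLanguage.language + " (" + data.detectedLanguage.confidence + "%)");
  }
  return data.translatedText;
}

translateAuto("Hola, ¿cómo estás?", "en").then(console.log).catch(console.error);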
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should print the detected language as well as the confidence when using "Auto Detect" as the source language.
Feel free to see this as a rough starting point; please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/static/js/app.js | // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.querySelectorAll('.sidenav');
var sidenavInstances = M.Sidenav.init(sidenavElems);
var app = new Vue({
el: '#app',
delimiters: ['[[',']]'],
data: {
BaseUrl: BaseUrl,
loading: true,
error: "",
langs: [],
settings: {},
sourceLang: "",
targetLang: "",
loadingTranslation: false,
inputText: "",
inputTextareaHeight: 250,
savedTanslatedText: "",
translatedText: "",
output: "",
charactersLimit: -1,
copyTextLabel: "Copy text",
suggestions: false,
isSuggesting: false,
supportedFilesFormat : [],
translationType: "text",
inputFile: false,
loadingFileTranslation: false,
translatedFileUrl: false,
filesTranslation: true,
frontendTimeout: 500
},
mounted: function() {
const self = this;
const settingsRequest = new XMLHttpRequest();
settingsRequest.open("GET", BaseUrl + "/frontend/settings", true);
const langsRequest = new XMLHttpRequest();
langsRequest.open("GET", BaseUrl + "/languages", true);
settingsRequest.onload = function() {
if (this.status >= 200 && this.status < 400) {
self.settings = JSON.parse(this.response);
self.sourceLang = self.settings.language.source.code;
self.targetLang = self.settings.language.target.code;
self.charactersLimit = self.settings.charLimit;
self.suggestions = self.settings.suggestions;
self.supportedFilesFormat = self.settings.supportedFilesFormat;
self.filesTranslation = self.settings.filesTranslation;
self.frontendTimeout = self.settings.frontendTimeout;
if (langsRequest.response) {
handleLangsResponse(self, langsRequest);
} else {
langsRequest.onload = function() {
handleLangsResponse(self, this);
}
}
} else {
self.error = "Cannot load /frontend/settings";
self.loading = false;
}
};
settingsRequest.onerror = function() {
self.error = "Error while calling /frontend/settings";
self.loading = false;
};
langsRequest.onerror = function() {
self.error = "Error while calling /languages";
self.loading = false;
};
settingsRequest.send();
langsRequest.send();
},
updated: function(){
M.FormSelect.init(this.$refs.sourceLangDropdown);
M.FormSelect.init(this.$refs.targetLangDropdown);
if (this.$refs.inputTextarea){
if (this.inputText === ""){
this.$refs.inputTextarea.style.height = this.inputTextareaHeight + "px";
this.$refs.translatedTextarea.style.height = this.inputTextareaHeight + "px";
} else{
this.$refs.inputTextarea.style.height = this.$refs.translatedTextarea.style.height = "1px";
this.$refs.inputTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.inputTextarea.scrollHeight + 32) + "px";
this.$refs.translatedTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.translatedTextarea.scrollHeight + 32) + "px";
}
}
if (this.charactersLimit !== -1 && this.inputText.length >= this.charactersLimit){
this.inputText = this.inputText.substring(0, this.charactersLimit);
}
// Update "selected" attribute (to overcome a vue.js limitation)
// but properly display checkmarks on supported browsers.
// Also change the <select> width value depending on the <option> length
if (this.$refs.sourceLangDropdown) {
updateSelectedAttribute(this.$refs.sourceLangDropdown, this.sourceLang);
}
if (this.$refs.targetLangDropdown) {
updateSelectedAttribute(this.$refs.targetLangDropdown, this.targetLang);
}
},
computed: {
requestCode: function(){
return ['const res = await fetch("' + this.BaseUrl + '/translate", {',
' method: "POST",',
' body: JSON.stringify({',
' q: ' + this.$options.filters.escape(this.inputText) + ',',
' source: ' + this.$options.filters.escape(this.sourceLang) + ',',
' target: ' + this.$options.filters.escape(this.targetLang) + ',',
' format: "' + (this.isHtml ? "html" : "text") + '",',
' api_key: "' + (localStorage.getItem("api_key") || "") + '"',
' }),',
' headers: { "Content-Type": "application/json" }',
'});',
'',
'console.log(await res.json());'].join("\n");
},
supportedFilesFormatFormatted: function() {
return this.supportedFilesFormat.join(', ');
},
isHtml: function(){
return htmlRegex.test(this.inputText);
},
canSendSuggestion() {
return this.translatedText.trim() !== "" && this.translatedText !== this.savedTanslatedText;
}
},
filters: {
escape: function(v){
return JSON.stringify(v);
},
highlight: function(v){
return Prism.highlight(v, Prism.languages.javascript, 'javascript');
}
},
methods: {
abortPreviousTransRequest: function(){
if (this.transRequest){
this.transRequest.abort();
this.transRequest = null;
}
},
swapLangs: function(e){
this.closeSuggestTranslation(e)
var t = this.sourceLang;
this.sourceLang = this.targetLang;
this.targetLang = t;
this.inputText = this.translatedText;
this.translatedText = "";
this.handleInput();
},
dismissError: function(){
this.error = '';
},
getQueryParam: function (key) {
const params = new URLSearchParams(window.location.search);
return params.get(key)
},
updateQueryParam: function (key, value) {
let searchParams = new URLSearchParams(window.location.search)
searchParams.set(key, value);
let newRelativePathQuery = window.location.pathname + '?' + searchParams.toString();
history.pushState(null, '', newRelativePathQuery);
},
handleInput: function(e){
this.closeSuggestTranslation(e)
this.updateQueryParam('source', this.sourceLang)
this.updateQueryParam('target', this.targetLang)
this.updateQueryParam('q', encodeURI(this.inputText))
if (this.timeout) clearTimeout(this.timeout);
this.timeout = null;
if (this.inputText === ""){
this.translatedText = "";
this.output = "";
this.abortPreviousTransRequest();
this.loadingTranslation = false;
return;
}
var self = this;
self.loadingTranslation = true;
this.timeout = setTimeout(function(){
self.abortPreviousTransRequest();
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("format", self.isHtml ? "html" : "text");
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/translate', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
// Success!
if (res.translatedText !== undefined){
self.translatedText = res.translatedText;
self.loadingTranslation = false;
self.output = JSON.stringify(res, null, 4);
} else{
throw new Error(res.error || "Unknown error");
}
} catch (e) {
self.error = e.message;
self.loadingTranslation = false;
}
};
request.onerror = function() {
self.error = "Error while calling /translate";
self.loadingTranslation = false;
};
request.send(data);
}, self.frontendTimeout);
},
copyText: function(e){
e.preventDefault();
this.$refs.translatedTextarea.select();
this.$refs.translatedTextarea.setSelectionRange(0, 9999999); /* For mobile devices */
document.execCommand("copy");
if (this.copyTextLabel === "Copy text"){
this.copyTextLabel = "Copied";
var self = this;
setTimeout(function(){
self.copyTextLabel = "Copy text";
}, 1500);
}
},
suggestTranslation: function(e) {
e.preventDefault();
this.savedTanslatedText = this.translatedText
this.isSuggesting = true;
},
closeSuggestTranslation: function(e) {
this.translatedText = this.savedTanslatedText
e.preventDefault();
this.isSuggesting = false;
},
sendSuggestion: function(e) {
e.preventDefault();
var self = this;
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("s", self.translatedText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/suggest', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
if (res.success){
M.toast({html: 'Thanks for your correction.'})
self.closeSuggestTranslation(e)
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.closeSuggestTranslation(e)
}
};
request.onerror = function() {
self.error = "Error while calling /suggest";
self.loadingTranslation = false;
};
request.send(data);
},
deleteText: function(e){
e.preventDefault();
this.inputText = this.translatedText = this.output = "";
this.$refs.inputTextarea.focus();
},
switchType: function(type) {
this.translationType = type;
},
handleInputFile: function(e) {
this.inputFile = e.target.files[0];
},
removeFile: function(e) {
e.preventDefault()
this.inputFile = false;
this.translatedFileUrl = false;
this.loadingFileTranslation = false;
},
translateFile: function(e) {
e.preventDefault();
let self = this;
let translateFileRequest = new XMLHttpRequest();
translateFileRequest.open("POST", BaseUrl + "/translate_file", true);
let data = new FormData();
data.append("file", this.inputFile);
data.append("source", this.sourceLang);
data.append("target", this.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
this.loadingFileTranslation = true
translateFileRequest.onload = function() {
if (translateFileRequest.readyState === 4 && translateFileRequest.status === 200) {
try{
self.loadingFileTranslation = false;
let res = JSON.parse(this.response);
if (res.translatedFileUrl){
self.translatedFileUrl = res.translatedFileUrl;
let link = document.createElement("a");
link.target = "_blank";
link.href = self.translatedFileUrl;
link.click();
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.loadingFileTranslation = false;
self.inputFile = false;
}
}else{
let res = JSON.parse(this.response);
self.error = res.error || "Unknown error";
self.loadingFileTranslation = false;
self.inputFile = false;
}
}
translateFileRequest.onerror = function() {
self.error = "Error while calling /translate_file";
self.loadingFileTranslation = false;
self.inputFile = false;
};
translateFileRequest.send(data);
}
}
});
});
/**
* @param {object} self
* @param {XMLHttpRequest} response
*/
function handleLangsResponse(self, response) {
if (response.status >= 200 && response.status < 400) {
self.langs = JSON.parse(response.response);
if (self.langs.length === 0){
self.loading = false;
self.error = "No languages available. Did you install the models correctly?"
return;
}
self.langs.push({ name: "Auto Detect (Experimental)", code: "auto" })
const sourceLanguage = self.langs.find(l => l.code === self.getQueryParam("source"))
const targetLanguage = self.langs.find(l => l.code === self.getQueryParam("target"))
if (sourceLanguage) {
self.sourceLang = sourceLanguage.code
}
if (targetLanguage) {
self.targetLang = targetLanguage.code
}
const defaultText = self.getQueryParam("q")
if (defaultText) {
self.inputText = decodeURI(defaultText)
}
} else {
self.error = "Cannot load /languages";
}
self.loading = false;
}
/**
* @param {object} langDropdown
* @param {string} lang
*/
function updateSelectedAttribute(langDropdown, lang) {
for (const child of langDropdown.children) {
if (child.value === lang){
child.setAttribute('selected', '');
langDropdown.style.width = getTextWidth(child.text) + 24 + 'px';
} else{
child.removeAttribute('selected');
}
}
}
function getTextWidth(text) {
var canvas = getTextWidth.canvas || (getTextWidth.canvas = document.createElement("canvas"));
var ctx = canvas.getContext("2d");
ctx.font = 'bold 16px sans-serif';
var textWidth = Math.ceil(ctx.measureText(text).width);
return textWidth;
}
function setApiKey(){
var prevKey = localStorage.getItem("api_key") || "";
var newKey = "";
var instructions = "contact the server operator.";
if (window.getApiKeyLink) instructions = "press the \"Get API Key\" link."
newKey = window.prompt("Type in your API Key. If you need an API key, " + instructions, prevKey);
if (newKey === null) newKey = "";
localStorage.setItem("api_key", newKey);
}
// @license-end
| // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
var htmlRegex = /<(.*)>.*?|<(.*)\/>/;
document.addEventListener('DOMContentLoaded', function(){
var sidenavElems = document.querySelectorAll('.sidenav');
var sidenavInstances = M.Sidenav.init(sidenavElems);
var app = new Vue({
el: '#app',
delimiters: ['[[',']]'],
data: {
BaseUrl: BaseUrl,
loading: true,
error: "",
langs: [],
settings: {},
sourceLang: "",
targetLang: "",
loadingTranslation: false,
inputText: "",
inputTextareaHeight: 250,
savedTanslatedText: "",
translatedText: "",
output: "",
charactersLimit: -1,
detectedLangText: "",
copyTextLabel: "Copy text",
suggestions: false,
isSuggesting: false,
supportedFilesFormat : [],
translationType: "text",
inputFile: false,
loadingFileTranslation: false,
translatedFileUrl: false,
filesTranslation: true,
frontendTimeout: 500
},
mounted: function() {
const self = this;
const settingsRequest = new XMLHttpRequest();
settingsRequest.open("GET", BaseUrl + "/frontend/settings", true);
const langsRequest = new XMLHttpRequest();
langsRequest.open("GET", BaseUrl + "/languages", true);
settingsRequest.onload = function() {
if (this.status >= 200 && this.status < 400) {
self.settings = JSON.parse(this.response);
self.sourceLang = self.settings.language.source.code;
self.targetLang = self.settings.language.target.code;
self.charactersLimit = self.settings.charLimit;
self.suggestions = self.settings.suggestions;
self.supportedFilesFormat = self.settings.supportedFilesFormat;
self.filesTranslation = self.settings.filesTranslation;
self.frontendTimeout = self.settings.frontendTimeout;
if (langsRequest.response) {
handleLangsResponse(self, langsRequest);
} else {
langsRequest.onload = function() {
handleLangsResponse(self, this);
}
}
} else {
self.error = "Cannot load /frontend/settings";
self.loading = false;
}
};
settingsRequest.onerror = function() {
self.error = "Error while calling /frontend/settings";
self.loading = false;
};
langsRequest.onerror = function() {
self.error = "Error while calling /languages";
self.loading = false;
};
settingsRequest.send();
langsRequest.send();
},
updated: function(){
M.FormSelect.init(this.$refs.sourceLangDropdown);
M.FormSelect.init(this.$refs.targetLangDropdown);
if (this.$refs.inputTextarea){
if (this.inputText === ""){
this.$refs.inputTextarea.style.height = this.inputTextareaHeight + "px";
this.$refs.translatedTextarea.style.height = this.inputTextareaHeight + "px";
} else{
this.$refs.inputTextarea.style.height = this.$refs.translatedTextarea.style.height = "1px";
this.$refs.inputTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.inputTextarea.scrollHeight + 32) + "px";
this.$refs.translatedTextarea.style.height = Math.max(this.inputTextareaHeight, this.$refs.translatedTextarea.scrollHeight + 32) + "px";
}
}
if (this.charactersLimit !== -1 && this.inputText.length >= this.charactersLimit){
this.inputText = this.inputText.substring(0, this.charactersLimit);
}
// Update "selected" attribute (to overcome a vue.js limitation)
// but properly display checkmarks on supported browsers.
// Also change the <select> width value depending on the <option> length
if (this.$refs.sourceLangDropdown) {
updateSelectedAttribute(this.$refs.sourceLangDropdown, this.sourceLang);
}
if (this.$refs.targetLangDropdown) {
updateSelectedAttribute(this.$refs.targetLangDropdown, this.targetLang);
}
},
computed: {
requestCode: function(){
return ['const res = await fetch("' + this.BaseUrl + '/translate", {',
' method: "POST",',
' body: JSON.stringify({',
' q: ' + this.$options.filters.escape(this.inputText) + ',',
' source: ' + this.$options.filters.escape(this.sourceLang) + ',',
' target: ' + this.$options.filters.escape(this.targetLang) + ',',
' format: "' + (this.isHtml ? "html" : "text") + '",',
' api_key: "' + (localStorage.getItem("api_key") || "") + '"',
' }),',
' headers: { "Content-Type": "application/json" }',
'});',
'',
'console.log(await res.json());'].join("\n");
},
supportedFilesFormatFormatted: function() {
return this.supportedFilesFormat.join(', ');
},
isHtml: function(){
return htmlRegex.test(this.inputText);
},
canSendSuggestion() {
return this.translatedText.trim() !== "" && this.translatedText !== this.savedTanslatedText;
}
},
filters: {
escape: function(v){
return JSON.stringify(v);
},
highlight: function(v){
return Prism.highlight(v, Prism.languages.javascript, 'javascript');
}
},
methods: {
abortPreviousTransRequest: function(){
if (this.transRequest){
this.transRequest.abort();
this.transRequest = null;
}
},
swapLangs: function(e){
this.closeSuggestTranslation(e)
var t = this.sourceLang;
this.sourceLang = this.targetLang;
this.targetLang = t;
this.inputText = this.translatedText;
this.translatedText = "";
this.handleInput();
},
dismissError: function(){
this.error = '';
},
getQueryParam: function (key) {
const params = new URLSearchParams(window.location.search);
return params.get(key)
},
updateQueryParam: function (key, value) {
let searchParams = new URLSearchParams(window.location.search)
searchParams.set(key, value);
let newRelativePathQuery = window.location.pathname + '?' + searchParams.toString();
history.pushState(null, '', newRelativePathQuery);
},
handleInput: function(e){
this.closeSuggestTranslation(e)
this.updateQueryParam('source', this.sourceLang)
this.updateQueryParam('target', this.targetLang)
this.updateQueryParam('q', encodeURI(this.inputText))
if (this.timeout) clearTimeout(this.timeout);
this.timeout = null;
this.detectedLangText = "";
if (this.inputText === ""){
this.translatedText = "";
this.output = "";
this.abortPreviousTransRequest();
this.loadingTranslation = false;
return;
}
var self = this;
self.loadingTranslation = true;
this.timeout = setTimeout(function(){
self.abortPreviousTransRequest();
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("format", self.isHtml ? "html" : "text");
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/translate', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
// Success!
if (res.translatedText !== undefined){
self.translatedText = res.translatedText;
self.loadingTranslation = false;
self.output = JSON.stringify(res, null, 4);
if(self.sourceLang == "auto" && res.detectedLanguage.length > 0){
self.detectedLangText = res.detectedLanguage.language+" ("+res.detectedLanguage.confidence+"%)";
}
} else{
throw new Error(res.error || "Unknown error");
}
} catch (e) {
self.error = e.message;
self.loadingTranslation = false;
}
};
request.onerror = function() {
self.error = "Error while calling /translate";
self.loadingTranslation = false;
};
request.send(data);
}, self.frontendTimeout);
},
copyText: function(e){
e.preventDefault();
this.$refs.translatedTextarea.select();
this.$refs.translatedTextarea.setSelectionRange(0, 9999999); /* For mobile devices */
document.execCommand("copy");
if (this.copyTextLabel === "Copy text"){
this.copyTextLabel = "Copied";
var self = this;
setTimeout(function(){
self.copyTextLabel = "Copy text";
}, 1500);
}
},
suggestTranslation: function(e) {
e.preventDefault();
this.savedTanslatedText = this.translatedText
this.isSuggesting = true;
},
closeSuggestTranslation: function(e) {
this.translatedText = this.savedTanslatedText
e.preventDefault();
this.isSuggesting = false;
},
sendSuggestion: function(e) {
e.preventDefault();
var self = this;
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("s", self.translatedText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
request.open('POST', BaseUrl + '/suggest', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
if (res.success){
M.toast({html: 'Thanks for your correction.'})
self.closeSuggestTranslation(e)
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.closeSuggestTranslation(e)
}
};
request.onerror = function() {
self.error = "Error while calling /suggest";
self.loadingTranslation = false;
};
request.send(data);
},
deleteText: function(e){
e.preventDefault();
this.inputText = this.translatedText = this.output = "";
this.$refs.inputTextarea.focus();
},
switchType: function(type) {
this.translationType = type;
},
handleInputFile: function(e) {
this.inputFile = e.target.files[0];
},
removeFile: function(e) {
e.preventDefault()
this.inputFile = false;
this.translatedFileUrl = false;
this.loadingFileTranslation = false;
},
translateFile: function(e) {
e.preventDefault();
let self = this;
let translateFileRequest = new XMLHttpRequest();
translateFileRequest.open("POST", BaseUrl + "/translate_file", true);
let data = new FormData();
data.append("file", this.inputFile);
data.append("source", this.sourceLang);
data.append("target", this.targetLang);
data.append("api_key", localStorage.getItem("api_key") || "");
this.loadingFileTranslation = true
translateFileRequest.onload = function() {
if (translateFileRequest.readyState === 4 && translateFileRequest.status === 200) {
try{
self.loadingFileTranslation = false;
let res = JSON.parse(this.response);
if (res.translatedFileUrl){
self.translatedFileUrl = res.translatedFileUrl;
let link = document.createElement("a");
link.target = "_blank";
link.href = self.translatedFileUrl;
link.click();
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.loadingFileTranslation = false;
self.inputFile = false;
}
}else{
let res = JSON.parse(this.response);
self.error = res.error || "Unknown error";
self.loadingFileTranslation = false;
self.inputFile = false;
}
}
translateFileRequest.onerror = function() {
self.error = "Error while calling /translate_file";
self.loadingFileTranslation = false;
self.inputFile = false;
};
translateFileRequest.send(data);
}
}
});
});
/**
* @param {object} self
* @param {XMLHttpRequest} response
*/
function handleLangsResponse(self, response) {
if (response.status >= 200 && response.status < 400) {
self.langs = JSON.parse(response.response);
if (self.langs.length === 0){
self.loading = false;
self.error = "No languages available. Did you install the models correctly?"
return;
}
self.langs.push({ name: "Auto Detect (Experimental)", code: "auto" })
const sourceLanguage = self.langs.find(l => l.code === self.getQueryParam("source"))
const targetLanguage = self.langs.find(l => l.code === self.getQueryParam("target"))
if (sourceLanguage) {
self.sourceLang = sourceLanguage.code
}
if (targetLanguage) {
self.targetLang = targetLanguage.code
}
const defaultText = self.getQueryParam("q")
if (defaultText) {
self.inputText = decodeURI(defaultText)
}
} else {
self.error = "Cannot load /languages";
}
self.loading = false;
}
/**
* @param {object} langDropdown
* @param {string} lang
*/
function updateSelectedAttribute(langDropdown, lang) {
for (const child of langDropdown.children) {
if (child.value === lang){
child.setAttribute('selected', '');
langDropdown.style.width = getTextWidth(child.text) + 24 + 'px';
} else{
child.removeAttribute('selected');
}
}
}
function getTextWidth(text) {
var canvas = getTextWidth.canvas || (getTextWidth.canvas = document.createElement("canvas"));
var ctx = canvas.getContext("2d");
ctx.font = 'bold 16px sans-serif';
var textWidth = Math.ceil(ctx.measureText(text).width);
return textWidth;
}
function setApiKey(){
var prevKey = localStorage.getItem("api_key") || "";
var newKey = "";
var instructions = "contact the server operator.";
if (window.getApiKeyLink) instructions = "press the \"Get API Key\" link."
newKey = window.prompt("Type in your API Key. If you need an API key, " + instructions, prevKey);
if (newKey === null) newKey = "";
localStorage.setItem("api_key", newKey);
}
// @license-end
| AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | Thanks for noticing & fixing @dingedi! | AnTheMaker | 5 |
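The next row shows the companion template change: index.html renders the new detectedLangText value next to the "Translate from" label. Because the page is served through Jinja, the Vue instance is configured with custom '[[' / ']]' delimiters so its bindings don't collide with Jinja's '{{ }}', which is why the template writes [[ detectedLangText ]]. Below is a reduced sketch of that binding, with hypothetical markup rather than the real template, assuming the full Vue 2 build (with template compiler) that the page already loads.

// Illustrative sketch, not the actual app.js/index.html: detectedLangText is
// plain reactive data rendered with the app's custom delimiters.
new Vue({
  el: "#demo", // hypothetical mount point, e.g. <div id="demo"></div>
  delimiters: ["[[", "]]"],
  template: "<span>Translate from <i>[[ detectedLangText ]]</i></span>",
  data: { detectedLangText: "" },
  mounted: function () {
    // Simulate a /translate response carrying a detected language
    var res = { detectedLanguage: { language: "es", confidence: 92.0 } };
    this.detectedLangText = res.detectedLanguage.language + " (" + res.detectedLanguage.confidence + "%)";
  }
});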
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should print the detected language as well as the confidence when using "Auto Detect" as the source language.
Feel free to see this as a rough starting point; please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/templates/index.html | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="description" content="Free and Open Source Machine Translation API. 100% self-hosted, offline capable and easy to setup. Run your own API server in just a few minutes.">
<meta name="keywords" content="translation,api">
<link rel="preload" href="{{ url_for('static', filename='icon.svg') }}" as="image" />
<link rel="preload" href="{{ url_for('static', filename='js/vue@2.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/materialize.min.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/prism.min.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/app.js') }}?v={{ version }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='css/materialize.min.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/material-icons.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/prism.min.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/main.css') }}?v={{ version }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/dark-theme.css') }}" as="style"/>
<meta property="og:title" content="LibreTranslate - Free and Open Source Machine Translation API" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://libretranslate.com" />
<meta property="og:image" content="https://user-images.githubusercontent.com/1951843/102724116-32a6df00-42db-11eb-8cc0-129ab39cdfb5.png" />
<meta property="og:description" name="description" class="swiftype" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes."/>
<script src="{{ url_for('static', filename='js/vue@2.js') }}"></script>
{% if gaId %}
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id={{ gaId }}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '{{ gaId }}');
</script>
{% endif %}
<!-- Compiled and minified CSS -->
<link rel="stylesheet" href="{{ url_for('static', filename='css/materialize.min.css') }}">
<link rel="stylesheet" href="{{ url_for('static', filename='css/material-icons.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/prism.min.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}?v={{ version }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/dark-theme.css') }}" />
</head>
<body class="white">
<header>
<nav class="blue darken-3" role="navigation">
<div class="nav-wrapper container">
<button data-target="nav-mobile" class="sidenav-trigger"><i class="material-icons">menu</i></button>
<a id="logo-container" href="/" class="brand-logo">
<img src="{{ url_for('static', filename='icon.svg') }}" alt="Logo for LibreTranslate" class="logo">
<span>LibreTranslate</span>
</a>
<ul class="right hide-on-med-and-down">
<li><a href="/docs">API Docs</a></li>
{% if get_api_key_link %}
<li><a href="{{ get_api_key_link }}">Get API Key</a></li>
<script>window.getApiKeyLink = "{{ get_api_key_link }}";</script>
{% endif %}
<li><a href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">GitHub</a></li>
{% if api_keys %}
<li><a href="javascript:setApiKey()" title="Set API Key"><i class="material-icons">vpn_key</i></a></li>
{% endif %}
</ul>
<ul id="nav-mobile" class="sidenav">
<li><a href="/docs">API Docs</a></li>
{% if get_api_key_link %}
<li><a href="{{ get_api_key_link }}">Get API Key</a></li>
{% endif %}
<li><a href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">GitHub</a></li>
{% if api_keys %}
<li><a href="javascript:setApiKey()" title="Set API Key"><i class="material-icons">vpn_key</i></a></li>
{% endif %}
</ul>
</div>
</nav>
</header>
<main id="app">
<div class="section no-pad-bot center" v-if="loading">
<div class="container">
<div class="row">
<div class="preloader-wrapper active">
<div class="spinner-layer spinner-blue-only">
<div class="circle-clipper left">
<div class="circle"></div>
</div><div class="gap-patch">
<div class="circle"></div>
</div><div class="circle-clipper right">
<div class="circle"></div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else-if="error">
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<div class="col s12 m7">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<i class="material-icons">warning</i><p> [[ error ]]</p>
</div>
<div class="card-action">
<a href="#" @click="dismissError">Dismiss</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else>
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<h3 class="header center">Translation API</h3>
<div id="translation-type-btns" class="s12 center" v-if="filesTranslation === true">
<button type="button" class="btn btn-switch-type" @click="switchType('text')" :class="{'active': translationType === 'text'}">
<i class="material-icons">title</i>
<span class="btn-text">Translate Text</span>
</button>
<button type="button" class="btn btn-switch-type" @click="switchType('files')" :class="{'active': translationType === 'files'}">
<i class="material-icons">description</i>
<span class="btn-text">Translate Files</span>
</button>
</div>
<form id="translation-form" class="col s12">
<div class="row mb-0">
<div class="col s6 language-select">
<span>Translate from</span>
<select class="browser-default" v-model="sourceLang" ref="sourceLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
<div class="col s6 language-select">
<a href="javascript:void(0)" @click="swapLangs" class="btn-switch-language">
<i class="material-icons">swap_horiz</i>
</a>
<span>Translate into</span>
<select class="browser-default" v-model="targetLang" ref="targetLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option v-if="option.code !== 'auto'" :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
</div>
<div class="row" v-if="translationType === 'text'">
<div class="input-field textarea-container col s6">
<label for="textarea1" class="sr-only">
Text to translate
</label>
<textarea id="textarea1" v-model="inputText" @input="handleInput" ref="inputTextarea" dir="auto"></textarea>
<button class="btn-delete-text" title="Delete text" @click="deleteText">
<i class="material-icons">close</i>
</button>
<div class="characters-limit-container" v-if="charactersLimit !== -1">
<label>[[ inputText.length ]] / [[ charactersLimit ]]</label>
</div>
</div>
<div class="input-field textarea-container col s6">
<label for="textarea2" class="sr-only">
Translated text
</label>
<textarea id="textarea2" v-model="translatedText" ref="translatedTextarea" dir="auto" v-bind:readonly="suggestions && !isSuggesting"></textarea>
<div class="actions">
<button v-if="suggestions && !loadingTranslation && inputText.length && !isSuggesting" class="btn-action" @click="suggestTranslation">
<i class="material-icons">edit</i>
</button>
<button v-if="suggestions && !loadingTranslation && inputText.length && isSuggesting" class="btn-action btn-blue" @click="closeSuggestTranslation">
<span>Cancel</span>
</button>
<button v-if="suggestions && !loadingTranslation && inputText.length && isSuggesting" :disabled="!canSendSuggestion" class="btn-action btn-blue" @click="sendSuggestion">
<span>Send</span>
</button>
<button v-if="!isSuggesting" class="btn-action btn-copy-translated" @click="copyText">
<span>[[ copyTextLabel ]]</span> <i class="material-icons">content_copy</i>
</button>
</div>
<div class="position-relative">
<div class="progress translate" v-if="loadingTranslation">
<div class="indeterminate"></div>
</div>
</div>
</div>
</div>
<div class="row" v-if="translationType === 'files'">
<div class="file-dropzone">
<div v-if="inputFile === false" class="dropzone-content">
<span>Supported file formats: [[ supportedFilesFormatFormatted ]]</span>
<form action="#">
<div class="file-field input-field">
<div class="btn">
<span>File</span>
<input type="file" :accept="supportedFilesFormatFormatted" @change="handleInputFile" ref="fileInputRef">
</div>
<div class="file-path-wrapper hidden">
<input class="file-path validate" type="text">
</div>
</div>
</form>
</div>
<div v-if="inputFile !== false" class="dropzone-content">
<div class="card">
<div class="card-content">
<div class="row mb-0">
<div class="col s12">
[[ inputFile.name ]]
<button v-if="loadingFileTranslation !== true" @click="removeFile" class="btn-flat">
<i class="material-icons">close</i>
</button>
</div>
</div>
</div>
</div>
<button @click="translateFile" v-if="translatedFileUrl === false && loadingFileTranslation === false" class="btn">Translate</button>
<a v-if="translatedFileUrl !== false" :href="translatedFileUrl" class="btn">Download</a>
<div class="progress" v-if="loadingFileTranslation">
<div class="indeterminate"></div>
</div>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
<div class="section no-pad-bot" v-if="translationType !== 'files'">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<div class="row center">
<div class="col s12 m12 l6 left-align">
<p class="mb-0">Request</p>
<pre class="code mt-0"><code class="language-javascript" v-html="$options.filters.highlight(requestCode)">
</code></pre>
</div>
<div class="col s12 m12 l6 left-align">
<p class="mb-0">Response</p>
<pre class="code mt-0"><code class="language-javascript" v-html="$options.filters.highlight(output)">
</code></pre>
</div>
</div>
</div>
</div>
</div>
</div>
{% if web_version %}
<div class="section no-pad-bot">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<h3 class="header">Open Source Machine Translation API</h3>
<h4 class="header">100% Self-Hosted. Offline Capable. Easy to Setup.</h4>
<div id="download-btn-wrapper">
<a id="download-btn" class="waves-effect waves-light btn btn-large teal darken-2" href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">
<i class="material-icons">cloud_download</i>
<span class="btn-text">Download</span>
</a>
</div>
</div>
</div>
</div>
</div>
{% endif %}
</div>
</main>
<footer class="page-footer blue darken-3">
<div class="container">
<div class="row">
<div class="col l12 s12">
<h5 class="white-text">LibreTranslate</h5>
<p class="grey-text text-lighten-4">Free and Open Source Machine Translation API</p>
<p>License: <a class="grey-text text-lighten-4" href="https://www.gnu.org/licenses/agpl-3.0.en.html" rel="noopener noreferrer">AGPLv3</a></p>
<p><a class="grey-text text-lighten-4" href="/javascript-licenses" rel="jslicense">JavaScript license information</a></p>
{% if web_version %}
<p>
This public API should be used for testing, personal or infrequent use. If you're going to run an application in production, please <a href="https://github.com/LibreTranslate/LibreTranslate" class="grey-text text-lighten-4" rel="noopener noreferrer">host your own server</a> or <a class="grey-text text-lighten-4" href="{{ get_api_key_link if get_api_key_link else 'https://github.com/LibreTranslate/LibreTranslate#mirrors' }}" rel="noopener noreferrer">get an API key</a>.
</p>
{% endif %}
</div>
</div>
</div>
<div class="footer-copyright center">
<p class="white-text">
Made with ❤ by <a class="white-text" href="https://github.com/LibreTranslate/LibreTranslate/graphs/contributors" rel="noopener noreferrer">LibreTranslate Contributors</a> and powered by <a class="white-text text-lighten-3" href="https://github.com/argosopentech/argos-translate/" rel="noopener noreferrer">Argos Translate</a>
</p>
</div>
</footer>
<script src="{{ url_for('static', filename='js/materialize.min.js') }}"></script>
<script>
// @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
window.Prism = window.Prism || {};
window.Prism.manual = true;
// @license-end
</script>
<script src="{{ url_for('static', filename='js/prism.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/app.js') }}?v={{ version }}"></script>
</body>
</html>
| <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="description" content="Free and Open Source Machine Translation API. 100% self-hosted, offline capable and easy to setup. Run your own API server in just a few minutes.">
<meta name="keywords" content="translation,api">
<link rel="preload" href="{{ url_for('static', filename='icon.svg') }}" as="image" />
<link rel="preload" href="{{ url_for('static', filename='js/vue@2.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/materialize.min.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/prism.min.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/app.js') }}?v={{ version }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='css/materialize.min.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/material-icons.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/prism.min.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/main.css') }}?v={{ version }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/dark-theme.css') }}" as="style"/>
<meta property="og:title" content="LibreTranslate - Free and Open Source Machine Translation API" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://libretranslate.com" />
<meta property="og:image" content="https://user-images.githubusercontent.com/1951843/102724116-32a6df00-42db-11eb-8cc0-129ab39cdfb5.png" />
<meta property="og:description" name="description" class="swiftype" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes."/>
<script src="{{ url_for('static', filename='js/vue@2.js') }}"></script>
{% if gaId %}
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id={{ gaId }}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '{{ gaId }}');
</script>
{% endif %}
<!-- Compiled and minified CSS -->
<link rel="stylesheet" href="{{ url_for('static', filename='css/materialize.min.css') }}">
<link rel="stylesheet" href="{{ url_for('static', filename='css/material-icons.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/prism.min.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}?v={{ version }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/dark-theme.css') }}" />
</head>
<body class="white">
<header>
<nav class="blue darken-3" role="navigation">
<div class="nav-wrapper container">
<button data-target="nav-mobile" class="sidenav-trigger"><i class="material-icons">menu</i></button>
<a id="logo-container" href="/" class="brand-logo">
<img src="{{ url_for('static', filename='icon.svg') }}" alt="Logo for LibreTranslate" class="logo">
<span>LibreTranslate</span>
</a>
<ul class="right hide-on-med-and-down">
<li><a href="/docs">API Docs</a></li>
{% if get_api_key_link %}
<li><a href="{{ get_api_key_link }}">Get API Key</a></li>
<script>window.getApiKeyLink = "{{ get_api_key_link }}";</script>
{% endif %}
<li><a href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">GitHub</a></li>
{% if api_keys %}
<li><a href="javascript:setApiKey()" title="Set API Key"><i class="material-icons">vpn_key</i></a></li>
{% endif %}
</ul>
<ul id="nav-mobile" class="sidenav">
<li><a href="/docs">API Docs</a></li>
{% if get_api_key_link %}
<li><a href="{{ get_api_key_link }}">Get API Key</a></li>
{% endif %}
<li><a href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">GitHub</a></li>
{% if api_keys %}
<li><a href="javascript:setApiKey()" title="Set API Key"><i class="material-icons">vpn_key</i></a></li>
{% endif %}
</ul>
</div>
</nav>
</header>
<main id="app">
<div class="section no-pad-bot center" v-if="loading">
<div class="container">
<div class="row">
<div class="preloader-wrapper active">
<div class="spinner-layer spinner-blue-only">
<div class="circle-clipper left">
<div class="circle"></div>
</div><div class="gap-patch">
<div class="circle"></div>
</div><div class="circle-clipper right">
<div class="circle"></div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else-if="error">
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<div class="col s12 m7">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<i class="material-icons">warning</i><p> [[ error ]]</p>
</div>
<div class="card-action">
<a href="#" @click="dismissError">Dismiss</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else>
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<h3 class="header center">Translation API</h3>
<div id="translation-type-btns" class="s12 center" v-if="filesTranslation === true">
<button type="button" class="btn btn-switch-type" @click="switchType('text')" :class="{'active': translationType === 'text'}">
<i class="material-icons">title</i>
<span class="btn-text">Translate Text</span>
</button>
<button type="button" class="btn btn-switch-type" @click="switchType('files')" :class="{'active': translationType === 'files'}">
<i class="material-icons">description</i>
<span class="btn-text">Translate Files</span>
</button>
</div>
<form id="translation-form" class="col s12">
<div class="row mb-0">
<div class="col s6 language-select">
<span>Translate from</span>
<span><i>[[ detectedLangText ]]</i></span>
<select class="browser-default" v-model="sourceLang" ref="sourceLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
<div class="col s6 language-select">
<a href="javascript:void(0)" @click="swapLangs" class="btn-switch-language">
<i class="material-icons">swap_horiz</i>
</a>
<span>Translate into</span>
<select class="browser-default" v-model="targetLang" ref="targetLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option v-if="option.code !== 'auto'" :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
</div>
<div class="row" v-if="translationType === 'text'">
<div class="input-field textarea-container col s6">
<label for="textarea1" class="sr-only">
Text to translate
</label>
<textarea id="textarea1" v-model="inputText" @input="handleInput" ref="inputTextarea" dir="auto"></textarea>
<button class="btn-delete-text" title="Delete text" @click="deleteText">
<i class="material-icons">close</i>
</button>
<div class="characters-limit-container" v-if="charactersLimit !== -1">
<label>[[ inputText.length ]] / [[ charactersLimit ]]</label>
</div>
</div>
<div class="input-field textarea-container col s6">
<label for="textarea2" class="sr-only">
Translated text
</label>
<textarea id="textarea2" v-model="translatedText" ref="translatedTextarea" dir="auto" v-bind:readonly="suggestions && !isSuggesting"></textarea>
<div class="actions">
<button v-if="suggestions && !loadingTranslation && inputText.length && !isSuggesting" class="btn-action" @click="suggestTranslation">
<i class="material-icons">edit</i>
</button>
<button v-if="suggestions && !loadingTranslation && inputText.length && isSuggesting" class="btn-action btn-blue" @click="closeSuggestTranslation">
<span>Cancel</span>
</button>
<button v-if="suggestions && !loadingTranslation && inputText.length && isSuggesting" :disabled="!canSendSuggestion" class="btn-action btn-blue" @click="sendSuggestion">
<span>Send</span>
</button>
<button v-if="!isSuggesting" class="btn-action btn-copy-translated" @click="copyText">
<span>[[ copyTextLabel ]]</span> <i class="material-icons">content_copy</i>
</button>
</div>
<div class="position-relative">
<div class="progress translate" v-if="loadingTranslation">
<div class="indeterminate"></div>
</div>
</div>
</div>
</div>
<div class="row" v-if="translationType === 'files'">
<div class="file-dropzone">
<div v-if="inputFile === false" class="dropzone-content">
<span>Supported file formats: [[ supportedFilesFormatFormatted ]]</span>
<form action="#">
<div class="file-field input-field">
<div class="btn">
<span>File</span>
<input type="file" :accept="supportedFilesFormatFormatted" @change="handleInputFile" ref="fileInputRef">
</div>
<div class="file-path-wrapper hidden">
<input class="file-path validate" type="text">
</div>
</div>
</form>
</div>
<div v-if="inputFile !== false" class="dropzone-content">
<div class="card">
<div class="card-content">
<div class="row mb-0">
<div class="col s12">
[[ inputFile.name ]]
<button v-if="loadingFileTranslation !== true" @click="removeFile" class="btn-flat">
<i class="material-icons">close</i>
</button>
</div>
</div>
</div>
</div>
<button @click="translateFile" v-if="translatedFileUrl === false && loadingFileTranslation === false" class="btn">Translate</button>
<a v-if="translatedFileUrl !== false" :href="translatedFileUrl" class="btn">Download</a>
<div class="progress" v-if="loadingFileTranslation">
<div class="indeterminate"></div>
</div>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
<div class="section no-pad-bot" v-if="translationType !== 'files'">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<div class="row center">
<div class="col s12 m12 l6 left-align">
<p class="mb-0">Request</p>
<pre class="code mt-0"><code class="language-javascript" v-html="$options.filters.highlight(requestCode)">
</code></pre>
</div>
<div class="col s12 m12 l6 left-align">
<p class="mb-0">Response</p>
<pre class="code mt-0"><code class="language-javascript" v-html="$options.filters.highlight(output)">
</code></pre>
</div>
</div>
</div>
</div>
</div>
</div>
{% if web_version %}
<div class="section no-pad-bot">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<h3 class="header">Open Source Machine Translation API</h3>
<h4 class="header">100% Self-Hosted. Offline Capable. Easy to Setup.</h4>
<div id="download-btn-wrapper">
<a id="download-btn" class="waves-effect waves-light btn btn-large teal darken-2" href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">
<i class="material-icons">cloud_download</i>
<span class="btn-text">Download</span>
</a>
</div>
</div>
</div>
</div>
</div>
{% endif %}
</div>
</main>
<footer class="page-footer blue darken-3">
<div class="container">
<div class="row">
<div class="col l12 s12">
<h5 class="white-text">LibreTranslate</h5>
<p class="grey-text text-lighten-4">Free and Open Source Machine Translation API</p>
<p>License: <a class="grey-text text-lighten-4" href="https://www.gnu.org/licenses/agpl-3.0.en.html" rel="noopener noreferrer">AGPLv3</a></p>
<p><a class="grey-text text-lighten-4" href="/javascript-licenses" rel="jslicense">JavaScript license information</a></p>
{% if web_version %}
<p>
This public API should be used for testing, personal or infrequent use. If you're going to run an application in production, please <a href="https://github.com/LibreTranslate/LibreTranslate" class="grey-text text-lighten-4" rel="noopener noreferrer">host your own server</a> or <a class="grey-text text-lighten-4" href="{{ get_api_key_link if get_api_key_link else 'https://github.com/LibreTranslate/LibreTranslate#mirrors' }}" rel="noopener noreferrer">get an API key</a>.
</p>
{% endif %}
</div>
</div>
</div>
<div class="footer-copyright center">
<p class="white-text">
Made with ❤ by <a class="white-text" href="https://github.com/LibreTranslate/LibreTranslate/graphs/contributors" rel="noopener noreferrer">LibreTranslate Contributors</a> and powered by <a class="white-text text-lighten-3" href="https://github.com/argosopentech/argos-translate/" rel="noopener noreferrer">Argos Translate</a>
</p>
</div>
</footer>
<script src="{{ url_for('static', filename='js/materialize.min.js') }}"></script>
<script>
// @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
window.Prism = window.Prism || {};
window.Prism.manual = true;
// @license-end
</script>
<script src="{{ url_for('static', filename='js/prism.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/app.js') }}?v={{ version }}"></script>
</body>
</html>
| AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | I think it should rather display the language and not the ISO code | dingedi | 6 |
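The review note above asks for the human-readable language name rather than the raw ISO code. A minimal sketch of that lookup, reusing the same `{code, name}` objects the template's language dropdowns iterate over (all identifiers here are illustrative, not taken from the project's actual `app.js`):

```js
// Sketch only: resolve a detected ISO 639 code (e.g. "en") to its display
// name via the language list the dropdowns are built from.
function languageNameFromCode(langs, code) {
  const match = langs.find(function (lang) {
    return lang.code === code;
  });
  // Fall back to the raw code if the list does not contain it.
  return match ? match.name : code;
}

// languageNameFromCode([{ code: "en", name: "English" }], "en") => "English"
```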
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should display the detected language, as well as the confidence, when "Auto Detect" is used as the source language.
Feel free to treat this as a rough starting point; please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/templates/index.html | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="description" content="Free and Open Source Machine Translation API. 100% self-hosted, offline capable and easy to setup. Run your own API server in just a few minutes.">
<meta name="keywords" content="translation,api">
<link rel="preload" href="{{ url_for('static', filename='icon.svg') }}" as="image" />
<link rel="preload" href="{{ url_for('static', filename='js/vue@2.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/materialize.min.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/prism.min.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/app.js') }}?v={{ version }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='css/materialize.min.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/material-icons.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/prism.min.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/main.css') }}?v={{ version }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/dark-theme.css') }}" as="style"/>
<meta property="og:title" content="LibreTranslate - Free and Open Source Machine Translation API" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://libretranslate.com" />
<meta property="og:image" content="https://user-images.githubusercontent.com/1951843/102724116-32a6df00-42db-11eb-8cc0-129ab39cdfb5.png" />
<meta property="og:description" name="description" class="swiftype" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes."/>
<script src="{{ url_for('static', filename='js/vue@2.js') }}"></script>
{% if gaId %}
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id={{ gaId }}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '{{ gaId }}');
</script>
{% endif %}
<!-- Compiled and minified CSS -->
<link rel="stylesheet" href="{{ url_for('static', filename='css/materialize.min.css') }}">
<link rel="stylesheet" href="{{ url_for('static', filename='css/material-icons.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/prism.min.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}?v={{ version }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/dark-theme.css') }}" />
</head>
<body class="white">
<header>
<nav class="blue darken-3" role="navigation">
<div class="nav-wrapper container">
<button data-target="nav-mobile" class="sidenav-trigger"><i class="material-icons">menu</i></button>
<a id="logo-container" href="/" class="brand-logo">
<img src="{{ url_for('static', filename='icon.svg') }}" alt="Logo for LibreTranslate" class="logo">
<span>LibreTranslate</span>
</a>
<ul class="right hide-on-med-and-down">
<li><a href="/docs">API Docs</a></li>
{% if get_api_key_link %}
<li><a href="{{ get_api_key_link }}">Get API Key</a></li>
<script>window.getApiKeyLink = "{{ get_api_key_link }}";</script>
{% endif %}
<li><a href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">GitHub</a></li>
{% if api_keys %}
<li><a href="javascript:setApiKey()" title="Set API Key"><i class="material-icons">vpn_key</i></a></li>
{% endif %}
</ul>
<ul id="nav-mobile" class="sidenav">
<li><a href="/docs">API Docs</a></li>
{% if get_api_key_link %}
<li><a href="{{ get_api_key_link }}">Get API Key</a></li>
{% endif %}
<li><a href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">GitHub</a></li>
{% if api_keys %}
<li><a href="javascript:setApiKey()" title="Set API Key"><i class="material-icons">vpn_key</i></a></li>
{% endif %}
</ul>
</div>
</nav>
</header>
<main id="app">
<div class="section no-pad-bot center" v-if="loading">
<div class="container">
<div class="row">
<div class="preloader-wrapper active">
<div class="spinner-layer spinner-blue-only">
<div class="circle-clipper left">
<div class="circle"></div>
</div><div class="gap-patch">
<div class="circle"></div>
</div><div class="circle-clipper right">
<div class="circle"></div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else-if="error">
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<div class="col s12 m7">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<i class="material-icons">warning</i><p> [[ error ]]</p>
</div>
<div class="card-action">
<a href="#" @click="dismissError">Dismiss</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else>
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<h3 class="header center">Translation API</h3>
<div id="translation-type-btns" class="s12 center" v-if="filesTranslation === true">
<button type="button" class="btn btn-switch-type" @click="switchType('text')" :class="{'active': translationType === 'text'}">
<i class="material-icons">title</i>
<span class="btn-text">Translate Text</span>
</button>
<button type="button" class="btn btn-switch-type" @click="switchType('files')" :class="{'active': translationType === 'files'}">
<i class="material-icons">description</i>
<span class="btn-text">Translate Files</span>
</button>
</div>
<form id="translation-form" class="col s12">
<div class="row mb-0">
<div class="col s6 language-select">
<span>Translate from</span>
<select class="browser-default" v-model="sourceLang" ref="sourceLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
<div class="col s6 language-select">
<a href="javascript:void(0)" @click="swapLangs" class="btn-switch-language">
<i class="material-icons">swap_horiz</i>
</a>
<span>Translate into</span>
<select class="browser-default" v-model="targetLang" ref="targetLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option v-if="option.code !== 'auto'" :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
</div>
<div class="row" v-if="translationType === 'text'">
<div class="input-field textarea-container col s6">
<label for="textarea1" class="sr-only">
Text to translate
</label>
<textarea id="textarea1" v-model="inputText" @input="handleInput" ref="inputTextarea" dir="auto"></textarea>
<button class="btn-delete-text" title="Delete text" @click="deleteText">
<i class="material-icons">close</i>
</button>
<div class="characters-limit-container" v-if="charactersLimit !== -1">
<label>[[ inputText.length ]] / [[ charactersLimit ]]</label>
</div>
</div>
<div class="input-field textarea-container col s6">
<label for="textarea2" class="sr-only">
Translated text
</label>
<textarea id="textarea2" v-model="translatedText" ref="translatedTextarea" dir="auto" v-bind:readonly="suggestions && !isSuggesting"></textarea>
<div class="actions">
<button v-if="suggestions && !loadingTranslation && inputText.length && !isSuggesting" class="btn-action" @click="suggestTranslation">
<i class="material-icons">edit</i>
</button>
<button v-if="suggestions && !loadingTranslation && inputText.length && isSuggesting" class="btn-action btn-blue" @click="closeSuggestTranslation">
<span>Cancel</span>
</button>
<button v-if="suggestions && !loadingTranslation && inputText.length && isSuggesting" :disabled="!canSendSuggestion" class="btn-action btn-blue" @click="sendSuggestion">
<span>Send</span>
</button>
<button v-if="!isSuggesting" class="btn-action btn-copy-translated" @click="copyText">
<span>[[ copyTextLabel ]]</span> <i class="material-icons">content_copy</i>
</button>
</div>
<div class="position-relative">
<div class="progress translate" v-if="loadingTranslation">
<div class="indeterminate"></div>
</div>
</div>
</div>
</div>
<div class="row" v-if="translationType === 'files'">
<div class="file-dropzone">
<div v-if="inputFile === false" class="dropzone-content">
<span>Supported file formats: [[ supportedFilesFormatFormatted ]]</span>
<form action="#">
<div class="file-field input-field">
<div class="btn">
<span>File</span>
<input type="file" :accept="supportedFilesFormatFormatted" @change="handleInputFile" ref="fileInputRef">
</div>
<div class="file-path-wrapper hidden">
<input class="file-path validate" type="text">
</div>
</div>
</form>
</div>
<div v-if="inputFile !== false" class="dropzone-content">
<div class="card">
<div class="card-content">
<div class="row mb-0">
<div class="col s12">
[[ inputFile.name ]]
<button v-if="loadingFileTranslation !== true" @click="removeFile" class="btn-flat">
<i class="material-icons">close</i>
</button>
</div>
</div>
</div>
</div>
<button @click="translateFile" v-if="translatedFileUrl === false && loadingFileTranslation === false" class="btn">Translate</button>
<a v-if="translatedFileUrl !== false" :href="translatedFileUrl" class="btn">Download</a>
<div class="progress" v-if="loadingFileTranslation">
<div class="indeterminate"></div>
</div>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
<div class="section no-pad-bot" v-if="translationType !== 'files'">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<div class="row center">
<div class="col s12 m12 l6 left-align">
<p class="mb-0">Request</p>
<pre class="code mt-0"><code class="language-javascript" v-html="$options.filters.highlight(requestCode)">
</code></pre>
</div>
<div class="col s12 m12 l6 left-align">
<p class="mb-0">Response</p>
<pre class="code mt-0"><code class="language-javascript" v-html="$options.filters.highlight(output)">
</code></pre>
</div>
</div>
</div>
</div>
</div>
</div>
{% if web_version %}
<div class="section no-pad-bot">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<h3 class="header">Open Source Machine Translation API</h3>
<h4 class="header">100% Self-Hosted. Offline Capable. Easy to Setup.</h4>
<div id="download-btn-wrapper">
<a id="download-btn" class="waves-effect waves-light btn btn-large teal darken-2" href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">
<i class="material-icons">cloud_download</i>
<span class="btn-text">Download</span>
</a>
</div>
</div>
</div>
</div>
</div>
{% endif %}
</div>
</main>
<footer class="page-footer blue darken-3">
<div class="container">
<div class="row">
<div class="col l12 s12">
<h5 class="white-text">LibreTranslate</h5>
<p class="grey-text text-lighten-4">Free and Open Source Machine Translation API</p>
<p>License: <a class="grey-text text-lighten-4" href="https://www.gnu.org/licenses/agpl-3.0.en.html" rel="noopener noreferrer">AGPLv3</a></p>
<p><a class="grey-text text-lighten-4" href="/javascript-licenses" rel="jslicense">JavaScript license information</a></p>
{% if web_version %}
<p>
This public API should be used for testing, personal or infrequent use. If you're going to run an application in production, please <a href="https://github.com/LibreTranslate/LibreTranslate" class="grey-text text-lighten-4" rel="noopener noreferrer">host your own server</a> or <a class="grey-text text-lighten-4" href="{{ get_api_key_link if get_api_key_link else 'https://github.com/LibreTranslate/LibreTranslate#mirrors' }}" rel="noopener noreferrer">get an API key</a>.
</p>
{% endif %}
</div>
</div>
</div>
<div class="footer-copyright center">
<p class="white-text">
Made with ❤ by <a class="white-text" href="https://github.com/LibreTranslate/LibreTranslate/graphs/contributors" rel="noopener noreferrer">LibreTranslate Contributors</a> and powered by <a class="white-text text-lighten-3" href="https://github.com/argosopentech/argos-translate/" rel="noopener noreferrer">Argos Translate</a>
</p>
</div>
</footer>
<script src="{{ url_for('static', filename='js/materialize.min.js') }}"></script>
<script>
// @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
window.Prism = window.Prism || {};
window.Prism.manual = true;
// @license-end
</script>
<script src="{{ url_for('static', filename='js/prism.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/app.js') }}?v={{ version }}"></script>
</body>
</html>
| <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="description" content="Free and Open Source Machine Translation API. 100% self-hosted, offline capable and easy to setup. Run your own API server in just a few minutes.">
<meta name="keywords" content="translation,api">
<link rel="preload" href="{{ url_for('static', filename='icon.svg') }}" as="image" />
<link rel="preload" href="{{ url_for('static', filename='js/vue@2.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/materialize.min.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/prism.min.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/app.js') }}?v={{ version }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='css/materialize.min.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/material-icons.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/prism.min.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/main.css') }}?v={{ version }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/dark-theme.css') }}" as="style"/>
<meta property="og:title" content="LibreTranslate - Free and Open Source Machine Translation API" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://libretranslate.com" />
<meta property="og:image" content="https://user-images.githubusercontent.com/1951843/102724116-32a6df00-42db-11eb-8cc0-129ab39cdfb5.png" />
<meta property="og:description" name="description" class="swiftype" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes."/>
<script src="{{ url_for('static', filename='js/vue@2.js') }}"></script>
{% if gaId %}
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id={{ gaId }}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '{{ gaId }}');
</script>
{% endif %}
<!-- Compiled and minified CSS -->
<link rel="stylesheet" href="{{ url_for('static', filename='css/materialize.min.css') }}">
<link rel="stylesheet" href="{{ url_for('static', filename='css/material-icons.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/prism.min.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}?v={{ version }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/dark-theme.css') }}" />
</head>
<body class="white">
<header>
<nav class="blue darken-3" role="navigation">
<div class="nav-wrapper container">
<button data-target="nav-mobile" class="sidenav-trigger"><i class="material-icons">menu</i></button>
<a id="logo-container" href="/" class="brand-logo">
<img src="{{ url_for('static', filename='icon.svg') }}" alt="Logo for LibreTranslate" class="logo">
<span>LibreTranslate</span>
</a>
<ul class="right hide-on-med-and-down">
<li><a href="/docs">API Docs</a></li>
{% if get_api_key_link %}
<li><a href="{{ get_api_key_link }}">Get API Key</a></li>
<script>window.getApiKeyLink = "{{ get_api_key_link }}";</script>
{% endif %}
<li><a href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">GitHub</a></li>
{% if api_keys %}
<li><a href="javascript:setApiKey()" title="Set API Key"><i class="material-icons">vpn_key</i></a></li>
{% endif %}
</ul>
<ul id="nav-mobile" class="sidenav">
<li><a href="/docs">API Docs</a></li>
{% if get_api_key_link %}
<li><a href="{{ get_api_key_link }}">Get API Key</a></li>
{% endif %}
<li><a href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">GitHub</a></li>
{% if api_keys %}
<li><a href="javascript:setApiKey()" title="Set API Key"><i class="material-icons">vpn_key</i></a></li>
{% endif %}
</ul>
</div>
</nav>
</header>
<main id="app">
<div class="section no-pad-bot center" v-if="loading">
<div class="container">
<div class="row">
<div class="preloader-wrapper active">
<div class="spinner-layer spinner-blue-only">
<div class="circle-clipper left">
<div class="circle"></div>
</div><div class="gap-patch">
<div class="circle"></div>
</div><div class="circle-clipper right">
<div class="circle"></div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else-if="error">
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<div class="col s12 m7">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<i class="material-icons">warning</i><p> [[ error ]]</p>
</div>
<div class="card-action">
<a href="#" @click="dismissError">Dismiss</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else>
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<h3 class="header center">Translation API</h3>
<div id="translation-type-btns" class="s12 center" v-if="filesTranslation === true">
<button type="button" class="btn btn-switch-type" @click="switchType('text')" :class="{'active': translationType === 'text'}">
<i class="material-icons">title</i>
<span class="btn-text">Translate Text</span>
</button>
<button type="button" class="btn btn-switch-type" @click="switchType('files')" :class="{'active': translationType === 'files'}">
<i class="material-icons">description</i>
<span class="btn-text">Translate Files</span>
</button>
</div>
<form id="translation-form" class="col s12">
<div class="row mb-0">
<div class="col s6 language-select">
<span>Translate from</span>
<span><i>[[ detectedLangText ]]</i></span>
<select class="browser-default" v-model="sourceLang" ref="sourceLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
<div class="col s6 language-select">
<a href="javascript:void(0)" @click="swapLangs" class="btn-switch-language">
<i class="material-icons">swap_horiz</i>
</a>
<span>Translate into</span>
<select class="browser-default" v-model="targetLang" ref="targetLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option v-if="option.code !== 'auto'" :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
</div>
<div class="row" v-if="translationType === 'text'">
<div class="input-field textarea-container col s6">
<label for="textarea1" class="sr-only">
Text to translate
</label>
<textarea id="textarea1" v-model="inputText" @input="handleInput" ref="inputTextarea" dir="auto"></textarea>
<button class="btn-delete-text" title="Delete text" @click="deleteText">
<i class="material-icons">close</i>
</button>
<div class="characters-limit-container" v-if="charactersLimit !== -1">
<label>[[ inputText.length ]] / [[ charactersLimit ]]</label>
</div>
</div>
<div class="input-field textarea-container col s6">
<label for="textarea2" class="sr-only">
Translated text
</label>
<textarea id="textarea2" v-model="translatedText" ref="translatedTextarea" dir="auto" v-bind:readonly="suggestions && !isSuggesting"></textarea>
<div class="actions">
<button v-if="suggestions && !loadingTranslation && inputText.length && !isSuggesting" class="btn-action" @click="suggestTranslation">
<i class="material-icons">edit</i>
</button>
<button v-if="suggestions && !loadingTranslation && inputText.length && isSuggesting" class="btn-action btn-blue" @click="closeSuggestTranslation">
<span>Cancel</span>
</button>
<button v-if="suggestions && !loadingTranslation && inputText.length && isSuggesting" :disabled="!canSendSuggestion" class="btn-action btn-blue" @click="sendSuggestion">
<span>Send</span>
</button>
<button v-if="!isSuggesting" class="btn-action btn-copy-translated" @click="copyText">
<span>[[ copyTextLabel ]]</span> <i class="material-icons">content_copy</i>
</button>
</div>
<div class="position-relative">
<div class="progress translate" v-if="loadingTranslation">
<div class="indeterminate"></div>
</div>
</div>
</div>
</div>
<div class="row" v-if="translationType === 'files'">
<div class="file-dropzone">
<div v-if="inputFile === false" class="dropzone-content">
<span>Supported file formats: [[ supportedFilesFormatFormatted ]]</span>
<form action="#">
<div class="file-field input-field">
<div class="btn">
<span>File</span>
<input type="file" :accept="supportedFilesFormatFormatted" @change="handleInputFile" ref="fileInputRef">
</div>
<div class="file-path-wrapper hidden">
<input class="file-path validate" type="text">
</div>
</div>
</form>
</div>
<div v-if="inputFile !== false" class="dropzone-content">
<div class="card">
<div class="card-content">
<div class="row mb-0">
<div class="col s12">
[[ inputFile.name ]]
<button v-if="loadingFileTranslation !== true" @click="removeFile" class="btn-flat">
<i class="material-icons">close</i>
</button>
</div>
</div>
</div>
</div>
<button @click="translateFile" v-if="translatedFileUrl === false && loadingFileTranslation === false" class="btn">Translate</button>
<a v-if="translatedFileUrl !== false" :href="translatedFileUrl" class="btn">Download</a>
<div class="progress" v-if="loadingFileTranslation">
<div class="indeterminate"></div>
</div>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
<div class="section no-pad-bot" v-if="translationType !== 'files'">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<div class="row center">
<div class="col s12 m12 l6 left-align">
<p class="mb-0">Request</p>
<pre class="code mt-0"><code class="language-javascript" v-html="$options.filters.highlight(requestCode)">
</code></pre>
</div>
<div class="col s12 m12 l6 left-align">
<p class="mb-0">Response</p>
<pre class="code mt-0"><code class="language-javascript" v-html="$options.filters.highlight(output)">
</code></pre>
</div>
</div>
</div>
</div>
</div>
</div>
{% if web_version %}
<div class="section no-pad-bot">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<h3 class="header">Open Source Machine Translation API</h3>
<h4 class="header">100% Self-Hosted. Offline Capable. Easy to Setup.</h4>
<div id="download-btn-wrapper">
<a id="download-btn" class="waves-effect waves-light btn btn-large teal darken-2" href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">
<i class="material-icons">cloud_download</i>
<span class="btn-text">Download</span>
</a>
</div>
</div>
</div>
</div>
</div>
{% endif %}
</div>
</main>
<footer class="page-footer blue darken-3">
<div class="container">
<div class="row">
<div class="col l12 s12">
<h5 class="white-text">LibreTranslate</h5>
<p class="grey-text text-lighten-4">Free and Open Source Machine Translation API</p>
<p>License: <a class="grey-text text-lighten-4" href="https://www.gnu.org/licenses/agpl-3.0.en.html" rel="noopener noreferrer">AGPLv3</a></p>
<p><a class="grey-text text-lighten-4" href="/javascript-licenses" rel="jslicense">JavaScript license information</a></p>
{% if web_version %}
<p>
This public API should be used for testing, personal or infrequent use. If you're going to run an application in production, please <a href="https://github.com/LibreTranslate/LibreTranslate" class="grey-text text-lighten-4" rel="noopener noreferrer">host your own server</a> or <a class="grey-text text-lighten-4" href="{{ get_api_key_link if get_api_key_link else 'https://github.com/LibreTranslate/LibreTranslate#mirrors' }}" rel="noopener noreferrer">get an API key</a>.
</p>
{% endif %}
</div>
</div>
</div>
<div class="footer-copyright center">
<p class="white-text">
Made with ❤ by <a class="white-text" href="https://github.com/LibreTranslate/LibreTranslate/graphs/contributors" rel="noopener noreferrer">LibreTranslate Contributors</a> and powered by <a class="white-text text-lighten-3" href="https://github.com/argosopentech/argos-translate/" rel="noopener noreferrer">Argos Translate</a>
</p>
</div>
</footer>
<script src="{{ url_for('static', filename='js/materialize.min.js') }}"></script>
<script>
// @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
window.Prism = window.Prism || {};
window.Prism.manual = true;
// @license-end
</script>
<script src="{{ url_for('static', filename='js/prism.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/app.js') }}?v={{ version }}"></script>
</body>
</html>
| AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | done in https://github.com/LibreTranslate/LibreTranslate/pull/324 | dingedi | 7 |
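The diff above adds a single `<span><i>[[ detectedLangText ]]</i></span>` element next to the source-language dropdown; the string itself is computed in `app.js`, which this row does not include. A rough sketch of how it could be produced, assuming the server reports a `{confidence, language}` detection object (both field names are assumptions):

```js
// Sketch only, not the project's actual app.js: format the text shown in
// the new detectedLangText span.
function formatDetectedLangText(sourceLang, detection) {
  if (sourceLang !== "auto" || !detection) {
    return ""; // Only meaningful when auto-detect produced a result.
  }
  return detection.language + " (" + detection.confidence + "%)";
}

// formatDetectedLangText("auto", { language: "en", confidence: 92 })
// => "en (92%)"
```

Combined with a code-to-name lookup like the one sketched after the previous row, this would also satisfy the reviewer's request to show the language name instead of the ISO code.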
LibreTranslate/LibreTranslate | 323 | Show detected Language (#314) | Hi there!
This is a minimal implementation of #314.
It should display the detected language, as well as the confidence, when "Auto Detect" is used as the source language.
Feel free to treat this as a rough starting point; please add your own suggestions and changes to this PR!
~ An | null | 2022-10-01 12:28:38+00:00 | 2022-10-01 14:40:12+00:00 | app/templates/index.html | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="description" content="Free and Open Source Machine Translation API. 100% self-hosted, offline capable and easy to setup. Run your own API server in just a few minutes.">
<meta name="keywords" content="translation,api">
<link rel="preload" href="{{ url_for('static', filename='icon.svg') }}" as="image" />
<link rel="preload" href="{{ url_for('static', filename='js/vue@2.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/materialize.min.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/prism.min.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/app.js') }}?v={{ version }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='css/materialize.min.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/material-icons.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/prism.min.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/main.css') }}?v={{ version }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/dark-theme.css') }}" as="style"/>
<meta property="og:title" content="LibreTranslate - Free and Open Source Machine Translation API" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://libretranslate.com" />
<meta property="og:image" content="https://user-images.githubusercontent.com/1951843/102724116-32a6df00-42db-11eb-8cc0-129ab39cdfb5.png" />
<meta property="og:description" name="description" class="swiftype" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes."/>
<script src="{{ url_for('static', filename='js/vue@2.js') }}"></script>
{% if gaId %}
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id={{ gaId }}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '{{ gaId }}');
</script>
{% endif %}
<!-- Compiled and minified CSS -->
<link rel="stylesheet" href="{{ url_for('static', filename='css/materialize.min.css') }}">
<link rel="stylesheet" href="{{ url_for('static', filename='css/material-icons.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/prism.min.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}?v={{ version }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/dark-theme.css') }}" />
</head>
<body class="white">
<header>
<nav class="blue darken-3" role="navigation">
<div class="nav-wrapper container">
<button data-target="nav-mobile" class="sidenav-trigger"><i class="material-icons">menu</i></button>
<a id="logo-container" href="/" class="brand-logo">
<img src="{{ url_for('static', filename='icon.svg') }}" alt="Logo for LibreTranslate" class="logo">
<span>LibreTranslate</span>
</a>
<ul class="right hide-on-med-and-down">
<li><a href="/docs">API Docs</a></li>
{% if get_api_key_link %}
<li><a href="{{ get_api_key_link }}">Get API Key</a></li>
<script>window.getApiKeyLink = "{{ get_api_key_link }}";</script>
{% endif %}
<li><a href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">GitHub</a></li>
{% if api_keys %}
<li><a href="javascript:setApiKey()" title="Set API Key"><i class="material-icons">vpn_key</i></a></li>
{% endif %}
</ul>
<ul id="nav-mobile" class="sidenav">
<li><a href="/docs">API Docs</a></li>
{% if get_api_key_link %}
<li><a href="{{ get_api_key_link }}">Get API Key</a></li>
{% endif %}
<li><a href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">GitHub</a></li>
{% if api_keys %}
<li><a href="javascript:setApiKey()" title="Set API Key"><i class="material-icons">vpn_key</i></a></li>
{% endif %}
</ul>
</div>
</nav>
</header>
<main id="app">
<div class="section no-pad-bot center" v-if="loading">
<div class="container">
<div class="row">
<div class="preloader-wrapper active">
<div class="spinner-layer spinner-blue-only">
<div class="circle-clipper left">
<div class="circle"></div>
</div><div class="gap-patch">
<div class="circle"></div>
</div><div class="circle-clipper right">
<div class="circle"></div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else-if="error">
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<div class="col s12 m7">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<i class="material-icons">warning</i><p> [[ error ]]</p>
</div>
<div class="card-action">
<a href="#" @click="dismissError">Dismiss</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else>
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<h3 class="header center">Translation API</h3>
<div id="translation-type-btns" class="s12 center" v-if="filesTranslation === true">
<button type="button" class="btn btn-switch-type" @click="switchType('text')" :class="{'active': translationType === 'text'}">
<i class="material-icons">title</i>
<span class="btn-text">Translate Text</span>
</button>
<button type="button" class="btn btn-switch-type" @click="switchType('files')" :class="{'active': translationType === 'files'}">
<i class="material-icons">description</i>
<span class="btn-text">Translate Files</span>
</button>
</div>
<form id="translation-form" class="col s12">
<div class="row mb-0">
<div class="col s6 language-select">
<span>Translate from</span>
<select class="browser-default" v-model="sourceLang" ref="sourceLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
<div class="col s6 language-select">
<a href="javascript:void(0)" @click="swapLangs" class="btn-switch-language">
<i class="material-icons">swap_horiz</i>
</a>
<span>Translate into</span>
<select class="browser-default" v-model="targetLang" ref="targetLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option v-if="option.code !== 'auto'" :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
</div>
<div class="row" v-if="translationType === 'text'">
<div class="input-field textarea-container col s6">
<label for="textarea1" class="sr-only">
Text to translate
</label>
<textarea id="textarea1" v-model="inputText" @input="handleInput" ref="inputTextarea" dir="auto"></textarea>
<button class="btn-delete-text" title="Delete text" @click="deleteText">
<i class="material-icons">close</i>
</button>
<div class="characters-limit-container" v-if="charactersLimit !== -1">
<label>[[ inputText.length ]] / [[ charactersLimit ]]</label>
</div>
</div>
<div class="input-field textarea-container col s6">
<label for="textarea2" class="sr-only">
Translated text
</label>
<textarea id="textarea2" v-model="translatedText" ref="translatedTextarea" dir="auto" v-bind:readonly="suggestions && !isSuggesting"></textarea>
<div class="actions">
<button v-if="suggestions && !loadingTranslation && inputText.length && !isSuggesting" class="btn-action" @click="suggestTranslation">
<i class="material-icons">edit</i>
</button>
<button v-if="suggestions && !loadingTranslation && inputText.length && isSuggesting" class="btn-action btn-blue" @click="closeSuggestTranslation">
<span>Cancel</span>
</button>
<button v-if="suggestions && !loadingTranslation && inputText.length && isSuggesting" :disabled="!canSendSuggestion" class="btn-action btn-blue" @click="sendSuggestion">
<span>Send</span>
</button>
<button v-if="!isSuggesting" class="btn-action btn-copy-translated" @click="copyText">
<span>[[ copyTextLabel ]]</span> <i class="material-icons">content_copy</i>
</button>
</div>
<div class="position-relative">
<div class="progress translate" v-if="loadingTranslation">
<div class="indeterminate"></div>
</div>
</div>
</div>
</div>
<div class="row" v-if="translationType === 'files'">
<div class="file-dropzone">
<div v-if="inputFile === false" class="dropzone-content">
<span>Supported file formats: [[ supportedFilesFormatFormatted ]]</span>
<form action="#">
<div class="file-field input-field">
<div class="btn">
<span>File</span>
<input type="file" :accept="supportedFilesFormatFormatted" @change="handleInputFile" ref="fileInputRef">
</div>
<div class="file-path-wrapper hidden">
<input class="file-path validate" type="text">
</div>
</div>
</form>
</div>
<div v-if="inputFile !== false" class="dropzone-content">
<div class="card">
<div class="card-content">
<div class="row mb-0">
<div class="col s12">
[[ inputFile.name ]]
<button v-if="loadingFileTranslation !== true" @click="removeFile" class="btn-flat">
<i class="material-icons">close</i>
</button>
</div>
</div>
</div>
</div>
<button @click="translateFile" v-if="translatedFileUrl === false && loadingFileTranslation === false" class="btn">Translate</button>
<a v-if="translatedFileUrl !== false" :href="translatedFileUrl" class="btn">Download</a>
<div class="progress" v-if="loadingFileTranslation">
<div class="indeterminate"></div>
</div>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
<div class="section no-pad-bot" v-if="translationType !== 'files'">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<div class="row center">
<div class="col s12 m12 l6 left-align">
<p class="mb-0">Request</p>
<pre class="code mt-0"><code class="language-javascript" v-html="$options.filters.highlight(requestCode)">
</code></pre>
</div>
<div class="col s12 m12 l6 left-align">
<p class="mb-0">Response</p>
<pre class="code mt-0"><code class="language-javascript" v-html="$options.filters.highlight(output)">
</code></pre>
</div>
</div>
</div>
</div>
</div>
</div>
{% if web_version %}
<div class="section no-pad-bot">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<h3 class="header">Open Source Machine Translation API</h3>
<h4 class="header">100% Self-Hosted. Offline Capable. Easy to Setup.</h4>
<div id="download-btn-wrapper">
<a id="download-btn" class="waves-effect waves-light btn btn-large teal darken-2" href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">
<i class="material-icons">cloud_download</i>
<span class="btn-text">Download</span>
</a>
</div>
</div>
</div>
</div>
</div>
{% endif %}
</div>
</main>
<footer class="page-footer blue darken-3">
<div class="container">
<div class="row">
<div class="col l12 s12">
<h5 class="white-text">LibreTranslate</h5>
<p class="grey-text text-lighten-4">Free and Open Source Machine Translation API</p>
<p>License: <a class="grey-text text-lighten-4" href="https://www.gnu.org/licenses/agpl-3.0.en.html" rel="noopener noreferrer">AGPLv3</a></p>
<p><a class="grey-text text-lighten-4" href="/javascript-licenses" rel="jslicense">JavaScript license information</a></p>
{% if web_version %}
<p>
This public API should be used for testing, personal or infrequent use. If you're going to run an application in production, please <a href="https://github.com/LibreTranslate/LibreTranslate" class="grey-text text-lighten-4" rel="noopener noreferrer">host your own server</a> or <a class="grey-text text-lighten-4" href="{{ get_api_key_link if get_api_key_link else 'https://github.com/LibreTranslate/LibreTranslate#mirrors' }}" rel="noopener noreferrer">get an API key</a>.
</p>
{% endif %}
</div>
</div>
</div>
<div class="footer-copyright center">
<p class="white-text">
Made with ❤ by <a class="white-text" href="https://github.com/LibreTranslate/LibreTranslate/graphs/contributors" rel="noopener noreferrer">LibreTranslate Contributors</a> and powered by <a class="white-text text-lighten-3" href="https://github.com/argosopentech/argos-translate/" rel="noopener noreferrer">Argos Translate</a>
</p>
</div>
</footer>
<script src="{{ url_for('static', filename='js/materialize.min.js') }}"></script>
<script>
// @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
window.Prism = window.Prism || {};
window.Prism.manual = true;
// @license-end
</script>
<script src="{{ url_for('static', filename='js/prism.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/app.js') }}?v={{ version }}"></script>
</body>
</html>
| <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="description" content="Free and Open Source Machine Translation API. 100% self-hosted, offline capable and easy to setup. Run your own API server in just a few minutes.">
<meta name="keywords" content="translation,api">
<link rel="preload" href="{{ url_for('static', filename='icon.svg') }}" as="image" />
<link rel="preload" href="{{ url_for('static', filename='js/vue@2.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/materialize.min.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/prism.min.js') }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='js/app.js') }}?v={{ version }}" as="script">
<link rel="preload" href="{{ url_for('static', filename='css/materialize.min.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/material-icons.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/prism.min.css') }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/main.css') }}?v={{ version }}" as="style"/>
<link rel="preload" href="{{ url_for('static', filename='css/dark-theme.css') }}" as="style"/>
<meta property="og:title" content="LibreTranslate - Free and Open Source Machine Translation API" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://libretranslate.com" />
<meta property="og:image" content="https://user-images.githubusercontent.com/1951843/102724116-32a6df00-42db-11eb-8cc0-129ab39cdfb5.png" />
<meta property="og:description" name="description" class="swiftype" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes."/>
<script src="{{ url_for('static', filename='js/vue@2.js') }}"></script>
{% if gaId %}
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id={{ gaId }}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '{{ gaId }}');
</script>
{% endif %}
<!-- Compiled and minified CSS -->
<link rel="stylesheet" href="{{ url_for('static', filename='css/materialize.min.css') }}">
<link rel="stylesheet" href="{{ url_for('static', filename='css/material-icons.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/prism.min.css') }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}?v={{ version }}" />
<link rel="stylesheet" href="{{ url_for('static', filename='css/dark-theme.css') }}" />
</head>
<body class="white">
<header>
<nav class="blue darken-3" role="navigation">
<div class="nav-wrapper container">
<button data-target="nav-mobile" class="sidenav-trigger"><i class="material-icons">menu</i></button>
<a id="logo-container" href="/" class="brand-logo">
<img src="{{ url_for('static', filename='icon.svg') }}" alt="Logo for LibreTranslate" class="logo">
<span>LibreTranslate</span>
</a>
<ul class="right hide-on-med-and-down">
<li><a href="/docs">API Docs</a></li>
{% if get_api_key_link %}
<li><a href="{{ get_api_key_link }}">Get API Key</a></li>
<script>window.getApiKeyLink = "{{ get_api_key_link }}";</script>
{% endif %}
<li><a href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">GitHub</a></li>
{% if api_keys %}
<li><a href="javascript:setApiKey()" title="Set API Key"><i class="material-icons">vpn_key</i></a></li>
{% endif %}
</ul>
<ul id="nav-mobile" class="sidenav">
<li><a href="/docs">API Docs</a></li>
{% if get_api_key_link %}
<li><a href="{{ get_api_key_link }}">Get API Key</a></li>
{% endif %}
<li><a href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">GitHub</a></li>
{% if api_keys %}
<li><a href="javascript:setApiKey()" title="Set API Key"><i class="material-icons">vpn_key</i></a></li>
{% endif %}
</ul>
</div>
</nav>
</header>
<main id="app">
<div class="section no-pad-bot center" v-if="loading">
<div class="container">
<div class="row">
<div class="preloader-wrapper active">
<div class="spinner-layer spinner-blue-only">
<div class="circle-clipper left">
<div class="circle"></div>
</div><div class="gap-patch">
<div class="circle"></div>
</div><div class="circle-clipper right">
<div class="circle"></div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else-if="error">
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<div class="col s12 m7">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<i class="material-icons">warning</i><p> [[ error ]]</p>
</div>
<div class="card-action">
<a href="#" @click="dismissError">Dismiss</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else>
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<h3 class="header center">Translation API</h3>
<div id="translation-type-btns" class="s12 center" v-if="filesTranslation === true">
<button type="button" class="btn btn-switch-type" @click="switchType('text')" :class="{'active': translationType === 'text'}">
<i class="material-icons">title</i>
<span class="btn-text">Translate Text</span>
</button>
<button type="button" class="btn btn-switch-type" @click="switchType('files')" :class="{'active': translationType === 'files'}">
<i class="material-icons">description</i>
<span class="btn-text">Translate Files</span>
</button>
</div>
<form id="translation-form" class="col s12">
<div class="row mb-0">
<div class="col s6 language-select">
<span>Translate from</span>
<span><i>[[ detectedLangText ]]</i></span>
<select class="browser-default" v-model="sourceLang" ref="sourceLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
<div class="col s6 language-select">
<a href="javascript:void(0)" @click="swapLangs" class="btn-switch-language">
<i class="material-icons">swap_horiz</i>
</a>
<span>Translate into</span>
<select class="browser-default" v-model="targetLang" ref="targetLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option v-if="option.code !== 'auto'" :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
</div>
<div class="row" v-if="translationType === 'text'">
<div class="input-field textarea-container col s6">
<label for="textarea1" class="sr-only">
Text to translate
</label>
<textarea id="textarea1" v-model="inputText" @input="handleInput" ref="inputTextarea" dir="auto"></textarea>
<button class="btn-delete-text" title="Delete text" @click="deleteText">
<i class="material-icons">close</i>
</button>
<div class="characters-limit-container" v-if="charactersLimit !== -1">
<label>[[ inputText.length ]] / [[ charactersLimit ]]</label>
</div>
</div>
<div class="input-field textarea-container col s6">
<label for="textarea2" class="sr-only">
Translated text
</label>
<textarea id="textarea2" v-model="translatedText" ref="translatedTextarea" dir="auto" v-bind:readonly="suggestions && !isSuggesting"></textarea>
<div class="actions">
<button v-if="suggestions && !loadingTranslation && inputText.length && !isSuggesting" class="btn-action" @click="suggestTranslation">
<i class="material-icons">edit</i>
</button>
<button v-if="suggestions && !loadingTranslation && inputText.length && isSuggesting" class="btn-action btn-blue" @click="closeSuggestTranslation">
<span>Cancel</span>
</button>
<button v-if="suggestions && !loadingTranslation && inputText.length && isSuggesting" :disabled="!canSendSuggestion" class="btn-action btn-blue" @click="sendSuggestion">
<span>Send</span>
</button>
<button v-if="!isSuggesting" class="btn-action btn-copy-translated" @click="copyText">
<span>[[ copyTextLabel ]]</span> <i class="material-icons">content_copy</i>
</button>
</div>
<div class="position-relative">
<div class="progress translate" v-if="loadingTranslation">
<div class="indeterminate"></div>
</div>
</div>
</div>
</div>
<div class="row" v-if="translationType === 'files'">
<div class="file-dropzone">
<div v-if="inputFile === false" class="dropzone-content">
<span>Supported file formats: [[ supportedFilesFormatFormatted ]]</span>
<form action="#">
<div class="file-field input-field">
<div class="btn">
<span>File</span>
<input type="file" :accept="supportedFilesFormatFormatted" @change="handleInputFile" ref="fileInputRef">
</div>
<div class="file-path-wrapper hidden">
<input class="file-path validate" type="text">
</div>
</div>
</form>
</div>
<div v-if="inputFile !== false" class="dropzone-content">
<div class="card">
<div class="card-content">
<div class="row mb-0">
<div class="col s12">
[[ inputFile.name ]]
<button v-if="loadingFileTranslation !== true" @click="removeFile" class="btn-flat">
<i class="material-icons">close</i>
</button>
</div>
</div>
</div>
</div>
<button @click="translateFile" v-if="translatedFileUrl === false && loadingFileTranslation === false" class="btn">Translate</button>
<a v-if="translatedFileUrl !== false" :href="translatedFileUrl" class="btn">Download</a>
<div class="progress" v-if="loadingFileTranslation">
<div class="indeterminate"></div>
</div>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
<div class="section no-pad-bot" v-if="translationType !== 'files'">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<div class="row center">
<div class="col s12 m12 l6 left-align">
<p class="mb-0">Request</p>
<pre class="code mt-0"><code class="language-javascript" v-html="$options.filters.highlight(requestCode)">
</code></pre>
</div>
<div class="col s12 m12 l6 left-align">
<p class="mb-0">Response</p>
<pre class="code mt-0"><code class="language-javascript" v-html="$options.filters.highlight(output)">
</code></pre>
</div>
</div>
</div>
</div>
</div>
</div>
{% if web_version %}
<div class="section no-pad-bot">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<h3 class="header">Open Source Machine Translation API</h3>
          <h4 class="header">100% Self-Hosted. Offline Capable. Easy to Set Up.</h4>
<div id="download-btn-wrapper">
<a id="download-btn" class="waves-effect waves-light btn btn-large teal darken-2" href="https://github.com/LibreTranslate/LibreTranslate" rel="noopener noreferrer">
<i class="material-icons">cloud_download</i>
<span class="btn-text">Download</span>
</a>
</div>
</div>
</div>
</div>
</div>
{% endif %}
</div>
</main>
<footer class="page-footer blue darken-3">
<div class="container">
<div class="row">
<div class="col l12 s12">
<h5 class="white-text">LibreTranslate</h5>
<p class="grey-text text-lighten-4">Free and Open Source Machine Translation API</p>
<p>License: <a class="grey-text text-lighten-4" href="https://www.gnu.org/licenses/agpl-3.0.en.html" rel="noopener noreferrer">AGPLv3</a></p>
<p><a class="grey-text text-lighten-4" href="/javascript-licenses" rel="jslicense">JavaScript license information</a></p>
{% if web_version %}
<p>
This public API should be used for testing, personal or infrequent use. If you're going to run an application in production, please <a href="https://github.com/LibreTranslate/LibreTranslate" class="grey-text text-lighten-4" rel="noopener noreferrer">host your own server</a> or <a class="grey-text text-lighten-4" href="{{ get_api_key_link if get_api_key_link else 'https://github.com/LibreTranslate/LibreTranslate#mirrors' }}" rel="noopener noreferrer">get an API key</a>.
</p>
{% endif %}
</div>
</div>
</div>
<div class="footer-copyright center">
<p class="white-text">
Made with ❤ by <a class="white-text" href="https://github.com/LibreTranslate/LibreTranslate/graphs/contributors" rel="noopener noreferrer">LibreTranslate Contributors</a> and powered by <a class="white-text text-lighten-3" href="https://github.com/argosopentech/argos-translate/" rel="noopener noreferrer">Argos Translate</a>
</p>
</div>
</footer>
<script src="{{ url_for('static', filename='js/materialize.min.js') }}"></script>
<script>
// @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-3.0
window.Prism = window.Prism || {};
window.Prism.manual = true;
// @license-end
</script>
<script src="{{ url_for('static', filename='js/prism.min.js') }}"></script>
<script src="{{ url_for('static', filename='js/app.js') }}?v={{ version }}"></script>
</body>
</html>
| AnTheMaker | 36e05596aaf724ec555757b6fb42f91a13891759 | 7c37681afc7231f46ad692fdee0398a72f72a5a7 | Perfect! Was working on this improvement too at the moment, but yours is perfect! Thank you! :) | AnTheMaker | 8 |
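The template above wires the new file-translation UI (`handleInputFile`, `translateFile`, `translatedFileUrl`) to the `/translate_file` and `/download_file` routes introduced below in `app/app.py`. A minimal client-side sketch of that round trip, assuming a server at `http://localhost:5000` and a file named `document.odt` (host and filenames are illustrative, not part of the PR):

```python
import requests

BASE = "http://localhost:5000"  # assumption: local LibreTranslate instance

# POST the file as multipart form data, as the template's translateFile
# handler does.
with open("document.odt", "rb") as f:  # hypothetical input file
    res = requests.post(
        f"{BASE}/translate_file",
        files={"file": f},
        data={"source": "en", "target": "es"},
    )
res.raise_for_status()

# The endpoint responds with the URL of the translated file, which is
# served by /download_file/<filename>.
translated_url = res.json()["translatedFileUrl"]

# Fetch the result, as the template's Download button does.
with open("document-es.odt", "wb") as out:  # hypothetical output name
    out.write(requests.get(translated_url).content)
```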
LibreTranslate/LibreTranslate | 157 | [WIP] Add files translation | Add files translation with [https://github.com/dingedi/argos-translate-files](https://github.com/dingedi/argos-translate-files) | null | 2021-10-24 10:54:56+00:00 | 2021-10-26 20:06:59+00:00 | app/app.py | import os
from functools import wraps
import pkg_resources
from flask import Flask, abort, jsonify, render_template, request
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from app import flood
from app.language import detect_languages, transliterate
from .api_keys import Database
from .suggestions import Database as SuggestionsDatabase
from translatehtml import translate_html
def get_json_dict(request):
d = request.get_json()
if not isinstance(d, dict):
abort(400, description="Invalid JSON format")
return d
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0].split(",")[0]
else:
ip = request.remote_addr or "127.0.0.1"
return ip
def get_req_limits(default_limit, api_keys_db, multiplier = 1):
req_limit = default_limit
if api_keys_db:
if request.is_json:
json = get_json_dict(request)
api_key = json.get("api_key")
else:
api_key = request.values.get("api_key")
if api_key:
db_req_limit = api_keys_db.lookup(api_key)
if db_req_limit is not None:
req_limit = db_req_limit * multiplier
return req_limit
def get_routes_limits(default_req_limit, daily_req_limit, api_keys_db):
if default_req_limit == -1:
# TODO: better way?
default_req_limit = 9999999999999
def minute_limits():
return "%s per minute" % get_req_limits(default_req_limit, api_keys_db)
def daily_limits():
return "%s per day" % get_req_limits(daily_req_limit, api_keys_db, 1440)
res = [minute_limits]
if daily_req_limit > 0:
res.append(daily_limits)
return res
def create_app(args):
from app.init import boot
boot(args.load_only)
from app.language import languages
app = Flask(__name__)
if args.debug:
app.config["TEMPLATES_AUTO_RELOAD"] = True
    # Map user-defined frontend languages to Argos language objects.
if args.frontend_language_source == "auto":
frontend_argos_language_source = type(
"obj", (object,), {"code": "auto", "name": "Auto Detect"}
)
else:
frontend_argos_language_source = next(
iter([l for l in languages if l.code == args.frontend_language_source]),
None,
)
frontend_argos_language_target = next(
iter([l for l in languages if l.code == args.frontend_language_target]), None
)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(
f"{args.frontend_language_source} as frontend source language is not supported."
)
if frontend_argos_language_target is None:
raise AttributeError(
f"{args.frontend_language_target} as frontend target language is not supported."
)
api_keys_db = None
if args.req_limit > 0 or args.api_keys or args.daily_req_limit > 0:
api_keys_db = Database() if args.api_keys else None
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=get_routes_limits(
args.req_limit, args.daily_req_limit, api_keys_db
),
)
else:
from .no_limiter import Limiter
limiter = Limiter()
if args.req_flood_threshold > 0:
flood.setup(args.req_flood_threshold)
def access_check(f):
@wraps(f)
def func(*a, **kw):
if flood.is_banned(get_remote_address()):
abort(403, description="Too many request limits violations")
if args.api_keys and args.require_api_key_origin:
if request.is_json:
json = get_json_dict(request)
ak = json.get("api_key")
else:
ak = request.values.get("api_key")
if (
api_keys_db.lookup(ak) is None and request.headers.get("Origin") != args.require_api_key_origin
):
abort(
403,
description="Please contact the server operator to obtain an API key",
)
return f(*a, **kw)
return func
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
flood.report(get_remote_address())
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.errorhandler(403)
def denied(e):
return jsonify({"error": str(e.description)}), 403
@app.route("/")
@limiter.exempt
def index():
return render_template(
"index.html",
gaId=args.ga_id,
frontendTimeout=args.frontend_timeout,
api_keys=args.api_keys,
web_version=os.environ.get("LT_WEB") is not None,
version=pkg_resources.require("LibreTranslate")[0].version
)
@app.route("/javascript-licenses", methods=["GET"])
@limiter.exempt
def javascript_licenses():
return render_template("javascript-licenses.html")
@app.route("/languages", methods=["GET", "POST"])
@limiter.exempt
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{"code": l.code, "name": l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add("Access-Control-Allow-Origin", "*")
response.headers.add(
"Access-Control-Allow-Headers", "Authorization, Content-Type"
)
response.headers.add("Access-Control-Expose-Headers", "Authorization")
response.headers.add("Access-Control-Allow-Methods", "GET, POST")
response.headers.add("Access-Control-Allow-Credentials", "true")
response.headers.add("Access-Control-Max-Age", 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=["POST"])
@access_check
def translate():
"""
        Translate text from one language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
oneOf:
- type: string
example: Hello world!
- type: array
example: ['Hello world!']
required: true
description: Text(s) to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
- in: formData
name: format
schema:
type: string
enum: [text, html]
default: text
example: text
required: false
description: >
Format of source text:
* `text` - Plain text
* `html` - HTML markup
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
oneOf:
- type: string
- type: array
description: Translated text(s)
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if request.is_json:
json = get_json_dict(request)
q = json.get("q")
source_lang = json.get("source")
target_lang = json.get("target")
text_format = json.get("format")
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
text_format = request.values.get("format")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
batch = isinstance(q, list)
if batch and args.batch_limit != -1:
batch_size = len(q)
if args.batch_limit < batch_size:
abort(
400,
description="Invalid request: Request (%d) exceeds text limit (%d)"
% (batch_size, args.batch_limit),
)
if args.char_limit != -1:
if batch:
chars = sum([len(text) for text in q])
else:
chars = len(q)
if args.char_limit < chars:
abort(
400,
description="Invalid request: Request (%d) exceeds character limit (%d)"
% (chars, args.char_limit),
)
if source_lang == "auto":
source_langs = []
if batch:
auto_detect_texts = q
else:
auto_detect_texts = [q]
overall_candidates = detect_languages(q)
for text_to_check in auto_detect_texts:
if len(text_to_check) > 40:
candidate_langs = detect_languages(text_to_check)
else:
# Unable to accurately detect languages for short texts
candidate_langs = overall_candidates
source_langs.append(candidate_langs[0]["language"])
if args.debug:
print(text_to_check, candidate_langs)
print("Auto detected: %s" % candidate_langs[0]["language"])
else:
if batch:
source_langs = [source_lang for text in q]
else:
source_langs = [source_lang]
src_langs = [next(iter([l for l in languages if l.code == source_lang]), None) for source_lang in source_langs]
for idx, lang in enumerate(src_langs):
if lang is None:
abort(400, description="%s is not supported" % source_langs[idx])
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
if not text_format:
text_format = "text"
if text_format not in ["text", "html"]:
abort(400, description="%s format is not supported" % text_format)
try:
if batch:
results = []
for idx, text in enumerate(q):
translator = src_langs[idx].get_translation(tgt_lang)
if text_format == "html":
translated_text = str(translate_html(translator, text))
else:
translated_text = translator.translate(transliterate(text, target_lang=source_langs[idx]))
results.append(translated_text)
return jsonify(
{
"translatedText": results
}
)
else:
translator = src_langs[0].get_translation(tgt_lang)
if text_format == "html":
translated_text = str(translate_html(translator, q))
else:
translated_text = translator.translate(transliterate(q, target_lang=source_langs[0]))
return jsonify(
{
"translatedText": translated_text
}
)
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/detect", methods=["POST"])
@access_check
def detect():
"""
Detect the language of a single text
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to detect
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Detections
schema:
id: detections
type: array
items:
type: object
properties:
confidence:
type: number
format: float
minimum: 0
maximum: 1
description: Confidence value
example: 0.6
language:
type: string
description: Language code
example: en
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Detection error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if flood.is_banned(get_remote_address()):
abort(403, description="Too many request limits violations")
if request.is_json:
json = get_json_dict(request)
q = json.get("q")
else:
q = request.values.get("q")
if not q:
abort(400, description="Invalid request: missing q parameter")
return jsonify(detect_languages(q))
@app.route("/frontend/settings")
@limiter.exempt
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
frontendTimeout:
type: integer
description: Frontend translation timeout
suggestions:
type: boolean
description: Whether submitting suggestions is enabled.
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify(
{
"charLimit": args.char_limit,
"frontendTimeout": args.frontend_timeout,
"suggestions": args.suggestions,
"language": {
"source": {
"code": frontend_argos_language_source.code,
"name": frontend_argos_language_source.name,
},
"target": {
"code": frontend_argos_language_target.code,
"name": frontend_argos_language_target.name,
},
},
}
)
@app.route("/suggest", methods=["POST"])
@limiter.exempt
def suggest():
"""
Submit a suggestion to improve a translation
---
tags:
- feedback
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Original text
- in: formData
name: s
schema:
type: string
example: ¡Hola mundo!
required: true
description: Suggested translation
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Language of original text
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Language of suggested translation
responses:
200:
description: Success
schema:
id: suggest-response
type: object
properties:
success:
type: boolean
description: Whether submission was successful
403:
description: Not authorized
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if not args.suggestions:
abort(403, description="Suggestions are disabled on this server.")
q = request.values.get("q")
s = request.values.get("s")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
SuggestionsDatabase().add(q, s, source_lang, target_lang)
return jsonify({"success": True})
swag = swagger(app)
swag["info"]["version"] = "1.2.1"
swag["info"]["title"] = "LibreTranslate"
@app.route("/spec")
@limiter.exempt
def spec():
return jsonify(swag)
SWAGGER_URL = "/docs" # URL for exposing Swagger UI (without trailing '/')
API_URL = "/spec"
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(SWAGGER_URL, API_URL)
app.register_blueprint(swaggerui_blueprint)
return app
| import io
import os
import tempfile
import uuid
from functools import wraps
import argostranslatefiles
from argostranslatefiles import get_supported_formats
from flask import Flask, abort, jsonify, render_template, request, url_for, send_file
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from translatehtml import translate_html
from werkzeug.utils import secure_filename
from app import flood, remove_translated_files, security
from app.language import detect_languages, transliterate
from .api_keys import Database
from .suggestions import Database as SuggestionsDatabase
def get_version():
try:
with open("VERSION") as f:
return f.read().strip()
    except OSError:
return "?"
def get_upload_dir():
upload_dir = os.path.join(tempfile.gettempdir(), "libretranslate-files-translate")
if not os.path.isdir(upload_dir):
os.mkdir(upload_dir)
return upload_dir
def get_json_dict(request):
d = request.get_json()
if not isinstance(d, dict):
abort(400, description="Invalid JSON format")
return d
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0].split(",")[0]
else:
ip = request.remote_addr or "127.0.0.1"
return ip
def get_req_limits(default_limit, api_keys_db, multiplier=1):
req_limit = default_limit
if api_keys_db:
if request.is_json:
json = get_json_dict(request)
api_key = json.get("api_key")
else:
api_key = request.values.get("api_key")
if api_key:
db_req_limit = api_keys_db.lookup(api_key)
if db_req_limit is not None:
req_limit = db_req_limit * multiplier
return req_limit
def get_routes_limits(default_req_limit, daily_req_limit, api_keys_db):
if default_req_limit == -1:
# TODO: better way?
default_req_limit = 9999999999999
def minute_limits():
return "%s per minute" % get_req_limits(default_req_limit, api_keys_db)
def daily_limits():
return "%s per day" % get_req_limits(daily_req_limit, api_keys_db, 1440)
res = [minute_limits]
if daily_req_limit > 0:
res.append(daily_limits)
return res
def create_app(args):
from app.init import boot
boot(args.load_only)
from app.language import languages
app = Flask(__name__)
if args.debug:
app.config["TEMPLATES_AUTO_RELOAD"] = True
if not args.disable_files_translation:
remove_translated_files.setup(get_upload_dir())
    # Map user-defined frontend languages to Argos language objects.
if args.frontend_language_source == "auto":
frontend_argos_language_source = type(
"obj", (object,), {"code": "auto", "name": "Auto Detect"}
)
else:
frontend_argos_language_source = next(
iter([l for l in languages if l.code == args.frontend_language_source]),
None,
)
frontend_argos_language_target = next(
iter([l for l in languages if l.code == args.frontend_language_target]), None
)
frontend_argos_supported_files_format = []
for file_format in get_supported_formats():
for ff in file_format.supported_file_extensions:
frontend_argos_supported_files_format.append(ff)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(
f"{args.frontend_language_source} as frontend source language is not supported."
)
if frontend_argos_language_target is None:
raise AttributeError(
f"{args.frontend_language_target} as frontend target language is not supported."
)
api_keys_db = None
if args.req_limit > 0 or args.api_keys or args.daily_req_limit > 0:
api_keys_db = Database() if args.api_keys else None
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=get_routes_limits(
args.req_limit, args.daily_req_limit, api_keys_db
),
)
else:
from .no_limiter import Limiter
limiter = Limiter()
if args.req_flood_threshold > 0:
flood.setup(args.req_flood_threshold)
def access_check(f):
@wraps(f)
def func(*a, **kw):
if flood.is_banned(get_remote_address()):
abort(403, description="Too many request limits violations")
if args.api_keys and args.require_api_key_origin:
if request.is_json:
json = get_json_dict(request)
ak = json.get("api_key")
else:
ak = request.values.get("api_key")
if (
api_keys_db.lookup(ak) is None and request.headers.get("Origin") != args.require_api_key_origin
):
abort(
403,
description="Please contact the server operator to obtain an API key",
)
return f(*a, **kw)
return func
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
flood.report(get_remote_address())
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.errorhandler(403)
def denied(e):
return jsonify({"error": str(e.description)}), 403
@app.route("/")
@limiter.exempt
def index():
return render_template(
"index.html",
gaId=args.ga_id,
frontendTimeout=args.frontend_timeout,
api_keys=args.api_keys,
web_version=os.environ.get("LT_WEB") is not None,
version=get_version()
)
@app.route("/javascript-licenses", methods=["GET"])
@limiter.exempt
def javascript_licenses():
return render_template("javascript-licenses.html")
@app.route("/languages", methods=["GET", "POST"])
@limiter.exempt
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{"code": l.code, "name": l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add("Access-Control-Allow-Origin", "*")
response.headers.add(
"Access-Control-Allow-Headers", "Authorization, Content-Type"
)
response.headers.add("Access-Control-Expose-Headers", "Authorization")
response.headers.add("Access-Control-Allow-Methods", "GET, POST")
response.headers.add("Access-Control-Allow-Credentials", "true")
response.headers.add("Access-Control-Max-Age", 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=["POST"])
@access_check
def translate():
"""
        Translate text from one language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
oneOf:
- type: string
example: Hello world!
- type: array
example: ['Hello world!']
required: true
description: Text(s) to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
- in: formData
name: format
schema:
type: string
enum: [text, html]
default: text
example: text
required: false
description: >
Format of source text:
* `text` - Plain text
* `html` - HTML markup
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
oneOf:
- type: string
- type: array
description: Translated text(s)
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if request.is_json:
json = get_json_dict(request)
q = json.get("q")
source_lang = json.get("source")
target_lang = json.get("target")
text_format = json.get("format")
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
text_format = request.values.get("format")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
batch = isinstance(q, list)
if batch and args.batch_limit != -1:
batch_size = len(q)
if args.batch_limit < batch_size:
abort(
400,
description="Invalid request: Request (%d) exceeds text limit (%d)"
% (batch_size, args.batch_limit),
)
if args.char_limit != -1:
if batch:
chars = sum([len(text) for text in q])
else:
chars = len(q)
if args.char_limit < chars:
abort(
400,
description="Invalid request: Request (%d) exceeds character limit (%d)"
% (chars, args.char_limit),
)
if source_lang == "auto":
source_langs = []
if batch:
auto_detect_texts = q
else:
auto_detect_texts = [q]
overall_candidates = detect_languages(q)
for text_to_check in auto_detect_texts:
if len(text_to_check) > 40:
candidate_langs = detect_languages(text_to_check)
else:
# Unable to accurately detect languages for short texts
candidate_langs = overall_candidates
source_langs.append(candidate_langs[0]["language"])
if args.debug:
print(text_to_check, candidate_langs)
print("Auto detected: %s" % candidate_langs[0]["language"])
else:
if batch:
source_langs = [source_lang for text in q]
else:
source_langs = [source_lang]
src_langs = [next(iter([l for l in languages if l.code == source_lang]), None) for source_lang in source_langs]
for idx, lang in enumerate(src_langs):
if lang is None:
abort(400, description="%s is not supported" % source_langs[idx])
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
if not text_format:
text_format = "text"
if text_format not in ["text", "html"]:
abort(400, description="%s format is not supported" % text_format)
try:
if batch:
results = []
for idx, text in enumerate(q):
translator = src_langs[idx].get_translation(tgt_lang)
if text_format == "html":
translated_text = str(translate_html(translator, text))
else:
translated_text = translator.translate(transliterate(text, target_lang=source_langs[idx]))
results.append(translated_text)
return jsonify(
{
"translatedText": results
}
)
else:
translator = src_langs[0].get_translation(tgt_lang)
if text_format == "html":
translated_text = str(translate_html(translator, q))
else:
translated_text = translator.translate(transliterate(q, target_lang=source_langs[0]))
return jsonify(
{
"translatedText": translated_text
}
)
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/translate_file", methods=["POST"])
@access_check
def translate_file():
"""
        Translate a file from one language to another
---
tags:
- translate
consumes:
- multipart/form-data
parameters:
- in: formData
name: file
type: file
required: true
description: File to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Translated file
schema:
id: translate
type: object
properties:
translatedFileUrl:
type: string
                  description: Translated file URL
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if args.disable_files_translation:
abort(403, description="Files translation are disabled on this server.")
source_lang = request.form.get("source")
target_lang = request.form.get("target")
file = request.files['file']
if not file:
abort(400, description="Invalid request: missing file parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if file.filename == '':
abort(400, description="Invalid request: empty file")
if os.path.splitext(file.filename)[1] not in frontend_argos_supported_files_format:
abort(400, description="Invalid request: file format not supported")
source_langs = [source_lang]
src_langs = [next(iter([l for l in languages if l.code == source_lang]), None) for source_lang in source_langs]
for idx, lang in enumerate(src_langs):
if lang is None:
abort(400, description="%s is not supported" % source_langs[idx])
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
try:
filename = str(uuid.uuid4()) + '.' + secure_filename(file.filename)
filepath = os.path.join(get_upload_dir(), filename)
file.save(filepath)
translated_file_path = argostranslatefiles.translate_file(src_langs[0].get_translation(tgt_lang), filepath)
translated_filename = os.path.basename(translated_file_path)
return jsonify(
{
"translatedFileUrl": url_for('download_file', filename=translated_filename, _external=True)
}
)
except Exception as e:
            abort(500, description="Cannot translate file: %s" % str(e))
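    # Example request against this endpoint (host and filename illustrative):
    #   curl -X POST http://localhost:5000/translate_file \
    #        -F "file=@document.odt" -F "source=en" -F "target=es"
    # On success the response is JSON such as
    #   {"translatedFileUrl": "http://localhost:5000/download_file/<name>"}
    # where <name> is the translated file written by argostranslatefiles.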
@app.route("/download_file/<string:filename>", methods=["GET"])
@access_check
def download_file(filename: str):
"""
Download a translated file
"""
if args.disable_files_translation:
abort(400, description="Files translation are disabled on this server.")
filepath = os.path.join(get_upload_dir(), filename)
try:
checked_filepath = security.path_traversal_check(filepath, get_upload_dir())
if os.path.isfile(checked_filepath):
filepath = checked_filepath
except security.SuspiciousFileOperation:
abort(400, description="Invalid filename")
return_data = io.BytesIO()
with open(filepath, 'rb') as fo:
return_data.write(fo.read())
return_data.seek(0)
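        # translate_file stored the upload as "<uuid>.<original name>";
        # drop the leading uuid component so the client downloads a
        # human-readable filename.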
download_filename = filename.split('.')
download_filename.pop(0)
download_filename = '.'.join(download_filename)
return send_file(return_data, as_attachment=True, attachment_filename=download_filename)
@app.route("/detect", methods=["POST"])
@access_check
def detect():
"""
Detect the language of a single text
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to detect
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Detections
schema:
id: detections
type: array
items:
type: object
properties:
confidence:
type: number
format: float
minimum: 0
maximum: 1
description: Confidence value
example: 0.6
language:
type: string
description: Language code
example: en
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Detection error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if flood.is_banned(get_remote_address()):
abort(403, description="Too many request limits violations")
if request.is_json:
json = get_json_dict(request)
q = json.get("q")
else:
q = request.values.get("q")
if not q:
abort(400, description="Invalid request: missing q parameter")
return jsonify(detect_languages(q))
@app.route("/frontend/settings")
@limiter.exempt
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
frontendTimeout:
type: integer
description: Frontend translation timeout
suggestions:
type: boolean
description: Whether submitting suggestions is enabled.
supportedFilesFormat:
type: array
items:
type: string
                  description: Supported file formats
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify(
{
"charLimit": args.char_limit,
"frontendTimeout": args.frontend_timeout,
"suggestions": args.suggestions,
"filesTranslation": not args.disable_files_translation,
"supportedFilesFormat": [] if args.disable_files_translation else frontend_argos_supported_files_format,
"language": {
"source": {
"code": frontend_argos_language_source.code,
"name": frontend_argos_language_source.name,
},
"target": {
"code": frontend_argos_language_target.code,
"name": frontend_argos_language_target.name,
},
},
}
)
@app.route("/suggest", methods=["POST"])
@limiter.exempt
def suggest():
"""
Submit a suggestion to improve a translation
---
tags:
- feedback
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Original text
- in: formData
name: s
schema:
type: string
example: ¡Hola mundo!
required: true
description: Suggested translation
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Language of original text
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Language of suggested translation
responses:
200:
description: Success
schema:
id: suggest-response
type: object
properties:
success:
type: boolean
description: Whether submission was successful
403:
description: Not authorized
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if not args.suggestions:
abort(403, description="Suggestions are disabled on this server.")
q = request.values.get("q")
s = request.values.get("s")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
SuggestionsDatabase().add(q, s, source_lang, target_lang)
return jsonify({"success": True})
swag = swagger(app)
swag["info"]["version"] = "1.3.0"
swag["info"]["title"] = "LibreTranslate"
@app.route("/spec")
@limiter.exempt
def spec():
return jsonify(swag)
SWAGGER_URL = "/docs" # URL for exposing Swagger UI (without trailing '/')
API_URL = "/spec"
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(SWAGGER_URL, API_URL)
app.register_blueprint(swaggerui_blueprint)
return app
| dingedi | 18ea0bae91306422dd6a8009ac06366664f7fa6e | 7727d8ddc3bd854edd0d7144cd1e0e1e902106bd | This is susceptible to a path traversal attack: https://owasp.org/www-community/attacks/Path_Traversal | pierotofy | 9 |
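The updated handler guards against this by passing the requested path through `security.path_traversal_check` before serving it. A minimal sketch of that kind of check, with names modeled on the new code (the actual helper in `app/security.py` may differ):

```python
import os

class SuspiciousFileOperation(Exception):
    """Raised when a requested path escapes the allowed base directory."""

def path_traversal_check(unsafe_path: str, base_dir: str) -> str:
    # Resolve symlinks and ".." segments before comparing anything.
    base = os.path.realpath(base_dir)
    target = os.path.realpath(unsafe_path)
    # The resolved target must remain inside the base directory.
    if os.path.commonpath([base, target]) != base:
        raise SuspiciousFileOperation(f"{unsafe_path} escapes {base_dir}")
    return target

# Usage in a download handler (sketch):
# filepath = path_traversal_check(os.path.join(upload_dir, filename), upload_dir)
```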
LibreTranslate/LibreTranslate | 157 | [WIP] Add files translation | Add files translation with [https://github.com/dingedi/argos-translate-files](https://github.com/dingedi/argos-translate-files) | null | 2021-10-24 10:54:56+00:00 | 2021-10-26 20:06:59+00:00 | app/app.py | import os
from functools import wraps
import pkg_resources
from flask import Flask, abort, jsonify, render_template, request
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from app import flood
from app.language import detect_languages, transliterate
from .api_keys import Database
from .suggestions import Database as SuggestionsDatabase
from translatehtml import translate_html
def get_json_dict(request):
d = request.get_json()
if not isinstance(d, dict):
abort(400, description="Invalid JSON format")
return d
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0].split(",")[0]
else:
ip = request.remote_addr or "127.0.0.1"
return ip
def get_req_limits(default_limit, api_keys_db, multiplier = 1):
req_limit = default_limit
if api_keys_db:
if request.is_json:
json = get_json_dict(request)
api_key = json.get("api_key")
else:
api_key = request.values.get("api_key")
if api_key:
db_req_limit = api_keys_db.lookup(api_key)
if db_req_limit is not None:
req_limit = db_req_limit * multiplier
return req_limit
def get_routes_limits(default_req_limit, daily_req_limit, api_keys_db):
if default_req_limit == -1:
# TODO: better way?
default_req_limit = 9999999999999
def minute_limits():
return "%s per minute" % get_req_limits(default_req_limit, api_keys_db)
def daily_limits():
return "%s per day" % get_req_limits(daily_req_limit, api_keys_db, 1440)
res = [minute_limits]
if daily_req_limit > 0:
res.append(daily_limits)
return res
def create_app(args):
from app.init import boot
boot(args.load_only)
from app.language import languages
app = Flask(__name__)
if args.debug:
app.config["TEMPLATES_AUTO_RELOAD"] = True
    # Map user-defined frontend languages to Argos language objects.
if args.frontend_language_source == "auto":
frontend_argos_language_source = type(
"obj", (object,), {"code": "auto", "name": "Auto Detect"}
)
else:
frontend_argos_language_source = next(
iter([l for l in languages if l.code == args.frontend_language_source]),
None,
)
frontend_argos_language_target = next(
iter([l for l in languages if l.code == args.frontend_language_target]), None
)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(
f"{args.frontend_language_source} as frontend source language is not supported."
)
if frontend_argos_language_target is None:
raise AttributeError(
f"{args.frontend_language_target} as frontend target language is not supported."
)
api_keys_db = None
if args.req_limit > 0 or args.api_keys or args.daily_req_limit > 0:
api_keys_db = Database() if args.api_keys else None
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=get_routes_limits(
args.req_limit, args.daily_req_limit, api_keys_db
),
)
else:
from .no_limiter import Limiter
limiter = Limiter()
if args.req_flood_threshold > 0:
flood.setup(args.req_flood_threshold)
def access_check(f):
@wraps(f)
def func(*a, **kw):
if flood.is_banned(get_remote_address()):
abort(403, description="Too many request limits violations")
if args.api_keys and args.require_api_key_origin:
if request.is_json:
json = get_json_dict(request)
ak = json.get("api_key")
else:
ak = request.values.get("api_key")
if (
api_keys_db.lookup(ak) is None and request.headers.get("Origin") != args.require_api_key_origin
):
abort(
403,
description="Please contact the server operator to obtain an API key",
)
return f(*a, **kw)
return func
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
flood.report(get_remote_address())
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.errorhandler(403)
def denied(e):
return jsonify({"error": str(e.description)}), 403
@app.route("/")
@limiter.exempt
def index():
return render_template(
"index.html",
gaId=args.ga_id,
frontendTimeout=args.frontend_timeout,
api_keys=args.api_keys,
web_version=os.environ.get("LT_WEB") is not None,
version=pkg_resources.require("LibreTranslate")[0].version
)
@app.route("/javascript-licenses", methods=["GET"])
@limiter.exempt
def javascript_licenses():
return render_template("javascript-licenses.html")
@app.route("/languages", methods=["GET", "POST"])
@limiter.exempt
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{"code": l.code, "name": l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add("Access-Control-Allow-Origin", "*")
response.headers.add(
"Access-Control-Allow-Headers", "Authorization, Content-Type"
)
response.headers.add("Access-Control-Expose-Headers", "Authorization")
response.headers.add("Access-Control-Allow-Methods", "GET, POST")
response.headers.add("Access-Control-Allow-Credentials", "true")
response.headers.add("Access-Control-Max-Age", 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=["POST"])
@access_check
def translate():
"""
        Translate text from one language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
oneOf:
- type: string
example: Hello world!
- type: array
example: ['Hello world!']
required: true
description: Text(s) to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
- in: formData
name: format
schema:
type: string
enum: [text, html]
default: text
example: text
required: false
description: >
Format of source text:
* `text` - Plain text
* `html` - HTML markup
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
oneOf:
- type: string
- type: array
description: Translated text(s)
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if request.is_json:
json = get_json_dict(request)
q = json.get("q")
source_lang = json.get("source")
target_lang = json.get("target")
text_format = json.get("format")
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
text_format = request.values.get("format")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
batch = isinstance(q, list)
if batch and args.batch_limit != -1:
batch_size = len(q)
if args.batch_limit < batch_size:
abort(
400,
description="Invalid request: Request (%d) exceeds text limit (%d)"
% (batch_size, args.batch_limit),
)
if args.char_limit != -1:
if batch:
chars = sum([len(text) for text in q])
else:
chars = len(q)
if args.char_limit < chars:
abort(
400,
description="Invalid request: Request (%d) exceeds character limit (%d)"
% (chars, args.char_limit),
)
if source_lang == "auto":
source_langs = []
if batch:
auto_detect_texts = q
else:
auto_detect_texts = [q]
overall_candidates = detect_languages(q)
for text_to_check in auto_detect_texts:
if len(text_to_check) > 40:
candidate_langs = detect_languages(text_to_check)
else:
# Unable to accurately detect languages for short texts
candidate_langs = overall_candidates
source_langs.append(candidate_langs[0]["language"])
if args.debug:
print(text_to_check, candidate_langs)
print("Auto detected: %s" % candidate_langs[0]["language"])
else:
if batch:
source_langs = [source_lang for text in q]
else:
source_langs = [source_lang]
src_langs = [next(iter([l for l in languages if l.code == source_lang]), None) for source_lang in source_langs]
for idx, lang in enumerate(src_langs):
if lang is None:
abort(400, description="%s is not supported" % source_langs[idx])
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
if not text_format:
text_format = "text"
if text_format not in ["text", "html"]:
abort(400, description="%s format is not supported" % text_format)
try:
if batch:
results = []
for idx, text in enumerate(q):
translator = src_langs[idx].get_translation(tgt_lang)
if text_format == "html":
translated_text = str(translate_html(translator, text))
else:
translated_text = translator.translate(transliterate(text, target_lang=source_langs[idx]))
results.append(translated_text)
return jsonify(
{
"translatedText": results
}
)
else:
translator = src_langs[0].get_translation(tgt_lang)
if text_format == "html":
translated_text = str(translate_html(translator, q))
else:
translated_text = translator.translate(transliterate(q, target_lang=source_langs[0]))
return jsonify(
{
"translatedText": translated_text
}
)
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/detect", methods=["POST"])
@access_check
def detect():
"""
Detect the language of a single text
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to detect
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Detections
schema:
id: detections
type: array
items:
type: object
properties:
confidence:
type: number
format: float
minimum: 0
maximum: 1
description: Confidence value
example: 0.6
language:
type: string
description: Language code
example: en
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Detection error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if flood.is_banned(get_remote_address()):
abort(403, description="Too many request limits violations")
if request.is_json:
json = get_json_dict(request)
q = json.get("q")
else:
q = request.values.get("q")
if not q:
abort(400, description="Invalid request: missing q parameter")
return jsonify(detect_languages(q))
@app.route("/frontend/settings")
@limiter.exempt
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
frontendTimeout:
type: integer
description: Frontend translation timeout
suggestions:
type: boolean
description: Whether submitting suggestions is enabled.
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify(
{
"charLimit": args.char_limit,
"frontendTimeout": args.frontend_timeout,
"suggestions": args.suggestions,
"language": {
"source": {
"code": frontend_argos_language_source.code,
"name": frontend_argos_language_source.name,
},
"target": {
"code": frontend_argos_language_target.code,
"name": frontend_argos_language_target.name,
},
},
}
)
@app.route("/suggest", methods=["POST"])
@limiter.exempt
def suggest():
"""
Submit a suggestion to improve a translation
---
tags:
- feedback
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Original text
- in: formData
name: s
schema:
type: string
example: ¡Hola mundo!
required: true
description: Suggested translation
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Language of original text
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Language of suggested translation
responses:
200:
description: Success
schema:
id: suggest-response
type: object
properties:
success:
type: boolean
description: Whether submission was successful
403:
description: Not authorized
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if not args.suggestions:
abort(403, description="Suggestions are disabled on this server.")
q = request.values.get("q")
s = request.values.get("s")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
SuggestionsDatabase().add(q, s, source_lang, target_lang)
return jsonify({"success": True})
swag = swagger(app)
swag["info"]["version"] = "1.2.1"
swag["info"]["title"] = "LibreTranslate"
@app.route("/spec")
@limiter.exempt
def spec():
return jsonify(swag)
SWAGGER_URL = "/docs" # URL for exposing Swagger UI (without trailing '/')
API_URL = "/spec"
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(SWAGGER_URL, API_URL)
app.register_blueprint(swaggerui_blueprint)
return app
| import io
import os
import tempfile
import uuid
from functools import wraps
import argostranslatefiles
from argostranslatefiles import get_supported_formats
from flask import Flask, abort, jsonify, render_template, request, url_for, send_file
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from translatehtml import translate_html
from werkzeug.utils import secure_filename
from app import flood, remove_translated_files, security
from app.language import detect_languages, transliterate
from .api_keys import Database
from .suggestions import Database as SuggestionsDatabase
def get_version():
try:
with open("VERSION") as f:
return f.read().strip()
    except OSError:
return "?"
def get_upload_dir():
upload_dir = os.path.join(tempfile.gettempdir(), "libretranslate-files-translate")
if not os.path.isdir(upload_dir):
os.mkdir(upload_dir)
return upload_dir
def get_json_dict(request):
d = request.get_json()
if not isinstance(d, dict):
abort(400, description="Invalid JSON format")
return d
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0].split(",")[0]
else:
ip = request.remote_addr or "127.0.0.1"
return ip
def get_req_limits(default_limit, api_keys_db, multiplier=1):
req_limit = default_limit
if api_keys_db:
if request.is_json:
json = get_json_dict(request)
api_key = json.get("api_key")
else:
api_key = request.values.get("api_key")
if api_key:
db_req_limit = api_keys_db.lookup(api_key)
if db_req_limit is not None:
req_limit = db_req_limit * multiplier
return req_limit
def get_routes_limits(default_req_limit, daily_req_limit, api_keys_db):
if default_req_limit == -1:
# TODO: better way?
default_req_limit = 9999999999999
def minute_limits():
return "%s per minute" % get_req_limits(default_req_limit, api_keys_db)
def daily_limits():
return "%s per day" % get_req_limits(daily_req_limit, api_keys_db, 1440)
res = [minute_limits]
if daily_req_limit > 0:
res.append(daily_limits)
return res
def create_app(args):
from app.init import boot
boot(args.load_only)
from app.language import languages
app = Flask(__name__)
if args.debug:
app.config["TEMPLATES_AUTO_RELOAD"] = True
if not args.disable_files_translation:
remove_translated_files.setup(get_upload_dir())
    # Map user-defined frontend languages to Argos language objects.
if args.frontend_language_source == "auto":
frontend_argos_language_source = type(
"obj", (object,), {"code": "auto", "name": "Auto Detect"}
)
else:
frontend_argos_language_source = next(
iter([l for l in languages if l.code == args.frontend_language_source]),
None,
)
frontend_argos_language_target = next(
iter([l for l in languages if l.code == args.frontend_language_target]), None
)
frontend_argos_supported_files_format = []
for file_format in get_supported_formats():
for ff in file_format.supported_file_extensions:
frontend_argos_supported_files_format.append(ff)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(
f"{args.frontend_language_source} as frontend source language is not supported."
)
if frontend_argos_language_target is None:
raise AttributeError(
f"{args.frontend_language_target} as frontend target language is not supported."
)
api_keys_db = None
if args.req_limit > 0 or args.api_keys or args.daily_req_limit > 0:
api_keys_db = Database() if args.api_keys else None
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=get_routes_limits(
args.req_limit, args.daily_req_limit, api_keys_db
),
)
else:
from .no_limiter import Limiter
limiter = Limiter()
if args.req_flood_threshold > 0:
flood.setup(args.req_flood_threshold)
def access_check(f):
@wraps(f)
def func(*a, **kw):
if flood.is_banned(get_remote_address()):
abort(403, description="Too many request limits violations")
if args.api_keys and args.require_api_key_origin:
if request.is_json:
json = get_json_dict(request)
ak = json.get("api_key")
else:
ak = request.values.get("api_key")
if (
api_keys_db.lookup(ak) is None and request.headers.get("Origin") != args.require_api_key_origin
):
abort(
403,
description="Please contact the server operator to obtain an API key",
)
return f(*a, **kw)
return func
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
flood.report(get_remote_address())
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.errorhandler(403)
def denied(e):
return jsonify({"error": str(e.description)}), 403
@app.route("/")
@limiter.exempt
def index():
return render_template(
"index.html",
gaId=args.ga_id,
frontendTimeout=args.frontend_timeout,
api_keys=args.api_keys,
web_version=os.environ.get("LT_WEB") is not None,
version=get_version()
)
@app.route("/javascript-licenses", methods=["GET"])
@limiter.exempt
def javascript_licenses():
return render_template("javascript-licenses.html")
@app.route("/languages", methods=["GET", "POST"])
@limiter.exempt
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{"code": l.code, "name": l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add("Access-Control-Allow-Origin", "*")
response.headers.add(
"Access-Control-Allow-Headers", "Authorization, Content-Type"
)
response.headers.add("Access-Control-Expose-Headers", "Authorization")
response.headers.add("Access-Control-Allow-Methods", "GET, POST")
response.headers.add("Access-Control-Allow-Credentials", "true")
response.headers.add("Access-Control-Max-Age", 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=["POST"])
@access_check
def translate():
"""
        Translate text from one language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
oneOf:
- type: string
example: Hello world!
- type: array
example: ['Hello world!']
required: true
description: Text(s) to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
- in: formData
name: format
schema:
type: string
enum: [text, html]
default: text
example: text
required: false
description: >
Format of source text:
* `text` - Plain text
* `html` - HTML markup
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
oneOf:
- type: string
- type: array
description: Translated text(s)
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if request.is_json:
json = get_json_dict(request)
q = json.get("q")
source_lang = json.get("source")
target_lang = json.get("target")
text_format = json.get("format")
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
text_format = request.values.get("format")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
batch = isinstance(q, list)
if batch and args.batch_limit != -1:
batch_size = len(q)
if args.batch_limit < batch_size:
abort(
400,
description="Invalid request: Request (%d) exceeds text limit (%d)"
% (batch_size, args.batch_limit),
)
if args.char_limit != -1:
if batch:
chars = sum([len(text) for text in q])
else:
chars = len(q)
if args.char_limit < chars:
abort(
400,
description="Invalid request: Request (%d) exceeds character limit (%d)"
% (chars, args.char_limit),
)
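        # Resolve one source language per input text; with "auto", short texts
        # fall back to a detection run over the whole request.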
if source_lang == "auto":
source_langs = []
if batch:
auto_detect_texts = q
else:
auto_detect_texts = [q]
overall_candidates = detect_languages(q)
for text_to_check in auto_detect_texts:
if len(text_to_check) > 40:
candidate_langs = detect_languages(text_to_check)
else:
# Unable to accurately detect languages for short texts
candidate_langs = overall_candidates
source_langs.append(candidate_langs[0]["language"])
if args.debug:
print(text_to_check, candidate_langs)
print("Auto detected: %s" % candidate_langs[0]["language"])
else:
if batch:
source_langs = [source_lang for text in q]
else:
source_langs = [source_lang]
src_langs = [next(iter([l for l in languages if l.code == source_lang]), None) for source_lang in source_langs]
for idx, lang in enumerate(src_langs):
if lang is None:
abort(400, description="%s is not supported" % source_langs[idx])
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
if not text_format:
text_format = "text"
if text_format not in ["text", "html"]:
abort(400, description="%s format is not supported" % text_format)
try:
if batch:
results = []
for idx, text in enumerate(q):
translator = src_langs[idx].get_translation(tgt_lang)
if text_format == "html":
translated_text = str(translate_html(translator, text))
else:
translated_text = translator.translate(transliterate(text, target_lang=source_langs[idx]))
results.append(translated_text)
return jsonify(
{
"translatedText": results
}
)
else:
translator = src_langs[0].get_translation(tgt_lang)
if text_format == "html":
translated_text = str(translate_html(translator, q))
else:
translated_text = translator.translate(transliterate(q, target_lang=source_langs[0]))
return jsonify(
{
"translatedText": translated_text
}
)
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/translate_file", methods=["POST"])
@access_check
def translate_file():
"""
        Translate a file from one language to another
---
tags:
- translate
consumes:
- multipart/form-data
parameters:
- in: formData
name: file
type: file
required: true
description: File to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Translated file
schema:
id: translate
type: object
properties:
translatedFileUrl:
type: string
                  description: Translated file URL
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if args.disable_files_translation:
abort(403, description="Files translation are disabled on this server.")
source_lang = request.form.get("source")
target_lang = request.form.get("target")
        file = request.files.get('file')  # .get() so a missing file part reaches the 400 below instead of raising
if not file:
abort(400, description="Invalid request: missing file parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if file.filename == '':
abort(400, description="Invalid request: empty file")
if os.path.splitext(file.filename)[1] not in frontend_argos_supported_files_format:
abort(400, description="Invalid request: file format not supported")
source_langs = [source_lang]
src_langs = [next(iter([l for l in languages if l.code == source_lang]), None) for source_lang in source_langs]
for idx, lang in enumerate(src_langs):
if lang is None:
abort(400, description="%s is not supported" % source_langs[idx])
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
try:
filename = str(uuid.uuid4()) + '.' + secure_filename(file.filename)
filepath = os.path.join(get_upload_dir(), filename)
file.save(filepath)
translated_file_path = argostranslatefiles.translate_file(src_langs[0].get_translation(tgt_lang), filepath)
translated_filename = os.path.basename(translated_file_path)
return jsonify(
{
"translatedFileUrl": url_for('download_file', filename=translated_filename, _external=True)
}
)
except Exception as e:
            abort(500, description=str(e))
@app.route("/download_file/<string:filename>", methods=["GET"])
@access_check
def download_file(filename: str):
"""
Download a translated file
"""
if args.disable_files_translation:
abort(400, description="Files translation are disabled on this server.")
filepath = os.path.join(get_upload_dir(), filename)
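        # Only serve the file if the resolved path stays inside the upload
        # directory (guards against path traversal in the filename).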
try:
checked_filepath = security.path_traversal_check(filepath, get_upload_dir())
if os.path.isfile(checked_filepath):
filepath = checked_filepath
except security.SuspiciousFileOperation:
abort(400, description="Invalid filename")
return_data = io.BytesIO()
with open(filepath, 'rb') as fo:
return_data.write(fo.read())
return_data.seek(0)
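        # Drop the uuid prefix so the client downloads the original filename.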
download_filename = filename.split('.')
download_filename.pop(0)
download_filename = '.'.join(download_filename)
return send_file(return_data, as_attachment=True, attachment_filename=download_filename)
@app.route("/detect", methods=["POST"])
@access_check
def detect():
"""
Detect the language of a single text
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to detect
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Detections
schema:
id: detections
type: array
items:
type: object
properties:
confidence:
type: number
format: float
minimum: 0
maximum: 1
description: Confidence value
example: 0.6
language:
type: string
description: Language code
example: en
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Detection error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if flood.is_banned(get_remote_address()):
abort(403, description="Too many request limits violations")
if request.is_json:
json = get_json_dict(request)
q = json.get("q")
else:
q = request.values.get("q")
if not q:
abort(400, description="Invalid request: missing q parameter")
return jsonify(detect_languages(q))
@app.route("/frontend/settings")
@limiter.exempt
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
                  description: Character input limit for this server (-1 indicates no limit)
frontendTimeout:
type: integer
description: Frontend translation timeout
suggestions:
type: boolean
description: Whether submitting suggestions is enabled.
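                filesTranslation:
                  type: boolean
                  description: Whether translating files is enabled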
supportedFilesFormat:
type: array
items:
type: string
                  description: Supported file formats
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify(
{
"charLimit": args.char_limit,
"frontendTimeout": args.frontend_timeout,
"suggestions": args.suggestions,
"filesTranslation": not args.disable_files_translation,
"supportedFilesFormat": [] if args.disable_files_translation else frontend_argos_supported_files_format,
"language": {
"source": {
"code": frontend_argos_language_source.code,
"name": frontend_argos_language_source.name,
},
"target": {
"code": frontend_argos_language_target.code,
"name": frontend_argos_language_target.name,
},
},
}
)
@app.route("/suggest", methods=["POST"])
@limiter.exempt
def suggest():
"""
Submit a suggestion to improve a translation
---
tags:
- feedback
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Original text
- in: formData
name: s
schema:
type: string
example: ¡Hola mundo!
required: true
description: Suggested translation
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Language of original text
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Language of suggested translation
responses:
200:
description: Success
schema:
id: suggest-response
type: object
properties:
success:
type: boolean
description: Whether submission was successful
403:
description: Not authorized
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if not args.suggestions:
abort(403, description="Suggestions are disabled on this server.")
q = request.values.get("q")
s = request.values.get("s")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
SuggestionsDatabase().add(q, s, source_lang, target_lang)
return jsonify({"success": True})
swag = swagger(app)
swag["info"]["version"] = "1.3.0"
swag["info"]["title"] = "LibreTranslate"
@app.route("/spec")
@limiter.exempt
def spec():
return jsonify(swag)
SWAGGER_URL = "/docs" # URL for exposing Swagger UI (without trailing '/')
API_URL = "/spec"
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(SWAGGER_URL, API_URL)
app.register_blueprint(swaggerui_blueprint)
return app
"frontendTimeout": args.frontend_timeout,
"suggestions": args.suggestions,
"language": {
"source": {
"code": frontend_argos_language_source.code,
"name": frontend_argos_language_source.name,
},
"target": {
"code": frontend_argos_language_target.code,
"name": frontend_argos_language_target.name,
},
},
}
)
@app.route("/suggest", methods=["POST"])
@limiter.exempt
def suggest():
"""
Submit a suggestion to improve a translation
---
tags:
- feedback
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Original text
- in: formData
name: s
schema:
type: string
example: ¡Hola mundo!
required: true
description: Suggested translation
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Language of original text
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Language of suggested translation
responses:
200:
description: Success
schema:
id: suggest-response
type: object
properties:
success:
type: boolean
description: Whether submission was successful
403:
description: Not authorized
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if not args.suggestions:
abort(403, description="Suggestions are disabled on this server.")
q = request.values.get("q")
s = request.values.get("s")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
SuggestionsDatabase().add(q, s, source_lang, target_lang)
return jsonify({"success": True})
swag = swagger(app)
swag["info"]["version"] = "1.2.1"
swag["info"]["title"] = "LibreTranslate"
@app.route("/spec")
@limiter.exempt
def spec():
return jsonify(swag)
SWAGGER_URL = "/docs" # URL for exposing Swagger UI (without trailing '/')
API_URL = "/spec"
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(SWAGGER_URL, API_URL)
app.register_blueprint(swaggerui_blueprint)
return app
| import io
import os
import tempfile
import uuid
from functools import wraps
import argostranslatefiles
from argostranslatefiles import get_supported_formats
from flask import Flask, abort, jsonify, render_template, request, url_for, send_file
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from translatehtml import translate_html
from werkzeug.utils import secure_filename
from app import flood, remove_translated_files, security
from app.language import detect_languages, transliterate
from .api_keys import Database
from .suggestions import Database as SuggestionsDatabase
def get_version():
try:
with open("VERSION") as f:
return f.read().strip()
except:
return "?"
def get_upload_dir():
upload_dir = os.path.join(tempfile.gettempdir(), "libretranslate-files-translate")
if not os.path.isdir(upload_dir):
os.mkdir(upload_dir)
return upload_dir
def get_json_dict(request):
d = request.get_json()
if not isinstance(d, dict):
abort(400, description="Invalid JSON format")
return d
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0].split(",")[0]
else:
ip = request.remote_addr or "127.0.0.1"
return ip
def get_req_limits(default_limit, api_keys_db, multiplier=1):
req_limit = default_limit
if api_keys_db:
if request.is_json:
json = get_json_dict(request)
api_key = json.get("api_key")
else:
api_key = request.values.get("api_key")
if api_key:
db_req_limit = api_keys_db.lookup(api_key)
if db_req_limit is not None:
req_limit = db_req_limit * multiplier
return req_limit
def get_routes_limits(default_req_limit, daily_req_limit, api_keys_db):
if default_req_limit == -1:
# TODO: better way?
default_req_limit = 9999999999999
def minute_limits():
return "%s per minute" % get_req_limits(default_req_limit, api_keys_db)
def daily_limits():
return "%s per day" % get_req_limits(daily_req_limit, api_keys_db, 1440)
res = [minute_limits]
if daily_req_limit > 0:
res.append(daily_limits)
return res
def create_app(args):
from app.init import boot
boot(args.load_only)
from app.language import languages
app = Flask(__name__)
if args.debug:
app.config["TEMPLATES_AUTO_RELOAD"] = True
if not args.disable_files_translation:
remove_translated_files.setup(get_upload_dir())
# Map userdefined frontend languages to argos language object.
if args.frontend_language_source == "auto":
frontend_argos_language_source = type(
"obj", (object,), {"code": "auto", "name": "Auto Detect"}
)
else:
frontend_argos_language_source = next(
iter([l for l in languages if l.code == args.frontend_language_source]),
None,
)
frontend_argos_language_target = next(
iter([l for l in languages if l.code == args.frontend_language_target]), None
)
frontend_argos_supported_files_format = []
for file_format in get_supported_formats():
for ff in file_format.supported_file_extensions:
frontend_argos_supported_files_format.append(ff)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(
f"{args.frontend_language_source} as frontend source language is not supported."
)
if frontend_argos_language_target is None:
raise AttributeError(
f"{args.frontend_language_target} as frontend target language is not supported."
)
api_keys_db = None
if args.req_limit > 0 or args.api_keys or args.daily_req_limit > 0:
api_keys_db = Database() if args.api_keys else None
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=get_routes_limits(
args.req_limit, args.daily_req_limit, api_keys_db
),
)
else:
from .no_limiter import Limiter
limiter = Limiter()
if args.req_flood_threshold > 0:
flood.setup(args.req_flood_threshold)
def access_check(f):
@wraps(f)
def func(*a, **kw):
if flood.is_banned(get_remote_address()):
abort(403, description="Too many request limits violations")
if args.api_keys and args.require_api_key_origin:
if request.is_json:
json = get_json_dict(request)
ak = json.get("api_key")
else:
ak = request.values.get("api_key")
if (
api_keys_db.lookup(ak) is None and request.headers.get("Origin") != args.require_api_key_origin
):
abort(
403,
description="Please contact the server operator to obtain an API key",
)
return f(*a, **kw)
return func
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
flood.report(get_remote_address())
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.errorhandler(403)
def denied(e):
return jsonify({"error": str(e.description)}), 403
@app.route("/")
@limiter.exempt
def index():
return render_template(
"index.html",
gaId=args.ga_id,
frontendTimeout=args.frontend_timeout,
api_keys=args.api_keys,
web_version=os.environ.get("LT_WEB") is not None,
version=get_version()
)
@app.route("/javascript-licenses", methods=["GET"])
@limiter.exempt
def javascript_licenses():
return render_template("javascript-licenses.html")
@app.route("/languages", methods=["GET", "POST"])
@limiter.exempt
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{"code": l.code, "name": l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add("Access-Control-Allow-Origin", "*")
response.headers.add(
"Access-Control-Allow-Headers", "Authorization, Content-Type"
)
response.headers.add("Access-Control-Expose-Headers", "Authorization")
response.headers.add("Access-Control-Allow-Methods", "GET, POST")
response.headers.add("Access-Control-Allow-Credentials", "true")
response.headers.add("Access-Control-Max-Age", 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=["POST"])
@access_check
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
oneOf:
- type: string
example: Hello world!
- type: array
example: ['Hello world!']
required: true
description: Text(s) to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
- in: formData
name: format
schema:
type: string
enum: [text, html]
default: text
example: text
required: false
description: >
Format of source text:
* `text` - Plain text
* `html` - HTML markup
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
oneOf:
- type: string
- type: array
description: Translated text(s)
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if request.is_json:
json = get_json_dict(request)
q = json.get("q")
source_lang = json.get("source")
target_lang = json.get("target")
text_format = json.get("format")
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
text_format = request.values.get("format")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
batch = isinstance(q, list)
if batch and args.batch_limit != -1:
batch_size = len(q)
if args.batch_limit < batch_size:
abort(
400,
description="Invalid request: Request (%d) exceeds text limit (%d)"
% (batch_size, args.batch_limit),
)
if args.char_limit != -1:
if batch:
chars = sum([len(text) for text in q])
else:
chars = len(q)
if args.char_limit < chars:
abort(
400,
description="Invalid request: Request (%d) exceeds character limit (%d)"
% (chars, args.char_limit),
)
if source_lang == "auto":
source_langs = []
if batch:
auto_detect_texts = q
else:
auto_detect_texts = [q]
overall_candidates = detect_languages(q)
for text_to_check in auto_detect_texts:
if len(text_to_check) > 40:
candidate_langs = detect_languages(text_to_check)
else:
# Unable to accurately detect languages for short texts
candidate_langs = overall_candidates
source_langs.append(candidate_langs[0]["language"])
if args.debug:
print(text_to_check, candidate_langs)
print("Auto detected: %s" % candidate_langs[0]["language"])
else:
if batch:
source_langs = [source_lang for text in q]
else:
source_langs = [source_lang]
src_langs = [next(iter([l for l in languages if l.code == source_lang]), None) for source_lang in source_langs]
for idx, lang in enumerate(src_langs):
if lang is None:
abort(400, description="%s is not supported" % source_langs[idx])
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
if not text_format:
text_format = "text"
if text_format not in ["text", "html"]:
abort(400, description="%s format is not supported" % text_format)
try:
if batch:
results = []
for idx, text in enumerate(q):
translator = src_langs[idx].get_translation(tgt_lang)
if text_format == "html":
translated_text = str(translate_html(translator, text))
else:
translated_text = translator.translate(transliterate(text, target_lang=source_langs[idx]))
results.append(translated_text)
return jsonify(
{
"translatedText": results
}
)
else:
translator = src_langs[0].get_translation(tgt_lang)
if text_format == "html":
translated_text = str(translate_html(translator, q))
else:
translated_text = translator.translate(transliterate(q, target_lang=source_langs[0]))
return jsonify(
{
"translatedText": translated_text
}
)
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/translate_file", methods=["POST"])
@access_check
def translate_file():
"""
Translate file from a language to another
---
tags:
- translate
consumes:
- multipart/form-data
parameters:
- in: formData
name: file
type: file
required: true
description: File to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Translated file
schema:
id: translate
type: object
properties:
translatedFileUrl:
type: string
description: Translated file url
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if args.disable_files_translation:
abort(403, description="Files translation are disabled on this server.")
source_lang = request.form.get("source")
target_lang = request.form.get("target")
file = request.files['file']
if not file:
abort(400, description="Invalid request: missing file parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if file.filename == '':
abort(400, description="Invalid request: empty file")
if os.path.splitext(file.filename)[1] not in frontend_argos_supported_files_format:
abort(400, description="Invalid request: file format not supported")
source_langs = [source_lang]
src_langs = [next(iter([l for l in languages if l.code == source_lang]), None) for source_lang in source_langs]
for idx, lang in enumerate(src_langs):
if lang is None:
abort(400, description="%s is not supported" % source_langs[idx])
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
try:
filename = str(uuid.uuid4()) + '.' + secure_filename(file.filename)
filepath = os.path.join(get_upload_dir(), filename)
file.save(filepath)
translated_file_path = argostranslatefiles.translate_file(src_langs[0].get_translation(tgt_lang), filepath)
translated_filename = os.path.basename(translated_file_path)
return jsonify(
{
"translatedFileUrl": url_for('download_file', filename=translated_filename, _external=True)
}
)
except Exception as e:
abort(500, description=e)
@app.route("/download_file/<string:filename>", methods=["GET"])
@access_check
def download_file(filename: str):
"""
Download a translated file
"""
if args.disable_files_translation:
abort(400, description="Files translation are disabled on this server.")
filepath = os.path.join(get_upload_dir(), filename)
try:
checked_filepath = security.path_traversal_check(filepath, get_upload_dir())
if os.path.isfile(checked_filepath):
filepath = checked_filepath
except security.SuspiciousFileOperation:
abort(400, description="Invalid filename")
return_data = io.BytesIO()
with open(filepath, 'rb') as fo:
return_data.write(fo.read())
return_data.seek(0)
download_filename = filename.split('.')
download_filename.pop(0)
download_filename = '.'.join(download_filename)
return send_file(return_data, as_attachment=True, attachment_filename=download_filename)
@app.route("/detect", methods=["POST"])
@access_check
def detect():
"""
Detect the language of a single text
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to detect
- in: formData
name: api_key
schema:
type: string
example: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
required: false
description: API key
responses:
200:
description: Detections
schema:
id: detections
type: array
items:
type: object
properties:
confidence:
type: number
format: float
minimum: 0
maximum: 1
description: Confidence value
example: 0.6
language:
type: string
description: Language code
example: en
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Detection error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
403:
description: Banned
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if flood.is_banned(get_remote_address()):
abort(403, description="Too many request limits violations")
if request.is_json:
json = get_json_dict(request)
q = json.get("q")
else:
q = request.values.get("q")
if not q:
abort(400, description="Invalid request: missing q parameter")
return jsonify(detect_languages(q))
@app.route("/frontend/settings")
@limiter.exempt
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
frontendTimeout:
type: integer
description: Frontend translation timeout
suggestions:
type: boolean
description: Whether submitting suggestions is enabled.
supportedFilesFormat:
type: array
items:
type: string
description: Supported files format
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify(
{
"charLimit": args.char_limit,
"frontendTimeout": args.frontend_timeout,
"suggestions": args.suggestions,
"filesTranslation": not args.disable_files_translation,
"supportedFilesFormat": [] if args.disable_files_translation else frontend_argos_supported_files_format,
"language": {
"source": {
"code": frontend_argos_language_source.code,
"name": frontend_argos_language_source.name,
},
"target": {
"code": frontend_argos_language_target.code,
"name": frontend_argos_language_target.name,
},
},
}
)
@app.route("/suggest", methods=["POST"])
@limiter.exempt
def suggest():
"""
Submit a suggestion to improve a translation
---
tags:
- feedback
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Original text
- in: formData
name: s
schema:
type: string
example: ¡Hola mundo!
required: true
description: Suggested translation
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Language of original text
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Language of suggested translation
responses:
200:
description: Success
schema:
id: suggest-response
type: object
properties:
success:
type: boolean
description: Whether submission was successful
403:
description: Not authorized
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
"""
if not args.suggestions:
abort(403, description="Suggestions are disabled on this server.")
q = request.values.get("q")
s = request.values.get("s")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
SuggestionsDatabase().add(q, s, source_lang, target_lang)
return jsonify({"success": True})
swag = swagger(app)
swag["info"]["version"] = "1.3.0"
swag["info"]["title"] = "LibreTranslate"
@app.route("/spec")
@limiter.exempt
def spec():
return jsonify(swag)
SWAGGER_URL = "/docs" # URL for exposing Swagger UI (without trailing '/')
API_URL = "/spec"
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(SWAGGER_URL, API_URL)
app.register_blueprint(swaggerui_blueprint)
return app
| dingedi | 18ea0bae91306422dd6a8009ac06366664f7fa6e | 7727d8ddc3bd854edd0d7144cd1e0e1e902106bd | Actually, perhaps this wasn't even an issue, because of the `string:` converter in the Flask route. But it shouldn't hurt to check. | pierotofy | 12
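For context on the comment above: Flask's `<string:filename>` converter, used by the `download_file` route in the newer app.py, only matches a single path segment, so it may well rule out the obvious traversal payloads on its own — but the converter performs no other validation of the name, which is why the explicit `path_traversal_check` backstop is still worth keeping. Below is a minimal sketch of that two-layer idea; the `UPLOAD_DIR` value and the `is_inside` helper are illustrative stand-ins, not the project's actual `security` module.

```python
import os
from flask import Flask, abort

app = Flask(__name__)
UPLOAD_DIR = "/tmp/libretranslate-files-translate"  # illustrative stand-in

def is_inside(base: str, target: str) -> bool:
    # Resolve symlinks and "..", then require target to stay under base.
    base = os.path.realpath(base)
    target = os.path.realpath(target)
    return os.path.commonpath([base, target]) == base

@app.route("/download_file/<string:filename>", methods=["GET"])
def download_file(filename: str):
    # <string:...> matches one segment (no "/"), but performs no other
    # validation, so an explicit containment check is a cheap backstop.
    filepath = os.path.join(UPLOAD_DIR, filename)
    if not is_inside(UPLOAD_DIR, filepath):
        abort(400, description="Invalid filename")
    return filepath  # placeholder for the real send_file(...) response
```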
LibreTranslate/LibreTranslate | 12 | add support for auto language | Not perfect: it sometimes picks a language that is not supported.
It could be improved by using the function that returns the list of most probable languages, then iterating to take the first one we support (a hedged sketch follows in the code block below).
I also needed to manually add `auto` to the list of languages after getting it from Argos Translate.
And I don't know why, but it does not seem the default language has been updated even with this change:
```python
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
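# Hedged sketch (not code from this PR) of the improvement suggested above:
# walk langdetect's ranked candidates and take the first supported one,
# instead of trusting a single guess.
#
#   from langdetect import detect_langs
#
#   def pick_source_lang(q, supported_codes):
#       for candidate in detect_langs(q):  # candidates ordered by probability
#           if candidate.lang in supported_codes:
#               return candidate.lang
#       return None  # caller can then abort with "not supported"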
``` | null | 2021-01-13 14:37:15+00:00 | 2021-01-15 16:36:43+00:00 | app/app.py | from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr or '127.0.0.1'
return ip
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="en", frontend_language_target="es"):
from app.init import boot
boot()
from app.language import languages
app = Flask(__name__)
if debug:
app.config['TEMPLATES_AUTO_RELOAD'] = True
# Map userdefined frontend languages to argos language object.
frontend_argos_language_source = next(iter([l for l in languages if l.code == frontend_language_source]), None)
frontend_argos_language_target = next(iter([l for l in languages if l.code == frontend_language_target]), None)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(f"{frontend_language_source} as frontend source language is not supported.")
if frontend_argos_language_target is None:
raise AttributeError(f"{frontend_language_target} as frontend target language is not supported.")
if req_limit > 0:
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["%s per minute" % req_limit]
)
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.route("/")
def index():
return render_template('index.html', gaId=ga_id)
@app.route("/languages")
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{'code': l.code, 'name': l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin','*')
response.headers.add('Access-Control-Allow-Headers', "Authorization, Content-Type")
response.headers.add('Access-Control-Expose-Headers', "Authorization")
response.headers.add('Access-Control-Allow-Methods', "GET, POST")
response.headers.add('Access-Control-Allow-Credentials', "true")
response.headers.add('Access-Control-Max-Age', 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=['POST'])
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
type: string
description: Translated text
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
if request.is_json:
json = request.get_json()
q = json.get('q')
source_lang = json.get('source')
target_lang = json.get('target')
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if char_limit != -1:
q = q[:char_limit]
src_lang = next(iter([l for l in languages if l.code == source_lang]), None)
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if src_lang is None:
abort(400, description="%s is not supported" % source_lang)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
translator = src_lang.get_translation(tgt_lang)
try:
return jsonify({"translatedText": translator.translate(q) })
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/frontend/settings")
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify({'charLimit': char_limit,
'language': {
'source': {'code': frontend_argos_language_source.code, 'name': frontend_argos_language_source.name},
'target': {'code': frontend_argos_language_target.code, 'name': frontend_argos_language_target.name}}
})
swag = swagger(app)
swag['info']['version'] = "1.0"
swag['info']['title'] = "LibreTranslate"
@app.route("/spec")
def spec():
return jsonify(swag)
SWAGGER_URL = '/docs' # URL for exposing Swagger UI (without trailing '/')
API_URL = '/spec'
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(
SWAGGER_URL,
API_URL
)
app.register_blueprint(swaggerui_blueprint)
return app
| from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from langdetect import detect
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr or '127.0.0.1'
return ip
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
from app.init import boot
boot()
from app.language import languages
app = Flask(__name__)
if debug:
app.config['TEMPLATES_AUTO_RELOAD'] = True
# Map userdefined frontend languages to argos language object.
frontend_argos_language_source = next(iter([l for l in languages if l.code == frontend_language_source or l.code == 'auto']), None)
frontend_argos_language_target = next(iter([l for l in languages if l.code == frontend_language_target]), None)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(f"{frontend_language_source} as frontend source language is not supported.")
if frontend_argos_language_target is None:
raise AttributeError(f"{frontend_language_target} as frontend target language is not supported.")
if req_limit > 0:
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["%s per minute" % req_limit]
)
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.route("/")
def index():
return render_template('index.html', gaId=ga_id)
@app.route("/languages")
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{'code': l.code, 'name': l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin','*')
response.headers.add('Access-Control-Allow-Headers', "Authorization, Content-Type")
response.headers.add('Access-Control-Expose-Headers', "Authorization")
response.headers.add('Access-Control-Allow-Methods', "GET, POST")
response.headers.add('Access-Control-Allow-Credentials', "true")
response.headers.add('Access-Control-Max-Age', 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=['POST'])
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
type: string
description: Translated text
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
if request.is_json:
json = request.get_json()
q = json.get('q')
source_lang = json.get('source')
target_lang = json.get('target')
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if char_limit != -1:
q = q[:char_limit]
original_source_lang = source_lang
if source_lang == 'auto':
source_lang = detect(q)
src_lang = next(iter([l for l in languages if l.code == source_lang]), None)
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if src_lang is None and original_source_lang == 'auto':
return jsonify({"translatedText": "Detected language not supported (" + source_lang + ")" })
if src_lang is None:
abort(400, description="%s is not supported" % source_lang)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
translator = src_lang.get_translation(tgt_lang)
try:
return jsonify({"translatedText": translator.translate(q) })
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/frontend/settings")
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify({'charLimit': char_limit,
'language': {
'source': {'code': frontend_argos_language_source.code, 'name': frontend_argos_language_source.name},
'target': {'code': frontend_argos_language_target.code, 'name': frontend_argos_language_target.name}}
})
swag = swagger(app)
swag['info']['version'] = "1.0"
swag['info']['title'] = "LibreTranslate"
@app.route("/spec")
def spec():
return jsonify(swag)
SWAGGER_URL = '/docs' # URL for exposing Swagger UI (without trailing '/')
API_URL = '/spec'
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(
SWAGGER_URL,
API_URL
)
app.register_blueprint(swaggerui_blueprint)
return app
| vemonet | 9bf3eabb6ea2de400611f047ee952f7a919f95e7 | 06b3c12ff6e49c7b2e2c1cb388c1e8068196d909 | This line is just a fallback in case this app is used outside of the main.py context.
The frontend_language_source should match the default arg here:
https://github.com/uav4geo/LibreTranslate/blob/9bf3eabb6ea2de400611f047ee952f7a919f95e7/main.py#L19
| worldworm | 13 |
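A note on the exchange above: worldworm's main.py pointer likely resolves vemonet's puzzle from the PR description — when the app is started through the CLI, argparse supplies its own default and always passes an explicit value, so the keyword default on `create_app` is never consulted. A hedged reconstruction of that interplay (the flag name and import path are assumptions, not the repo's exact main.py):

```python
import argparse

from app.app import create_app  # assumed import path

parser = argparse.ArgumentParser()
# The CLI default shadows create_app's keyword default: an explicit value
# is always passed through, so changing create_app(..., "auto") alone has
# no visible effect when launching via this entry point.
parser.add_argument("--frontend-language-source", type=str, default="en")
args = parser.parse_args()

app = create_app(frontend_language_source=args.frontend_language_source)
app.run(host="127.0.0.1", port=5000)
```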
LibreTranslate/LibreTranslate | 12 | add support for auto language | Not perfect: it sometimes picks a language that is not supported.
It could be improved by using the function that returns the list of most probable languages, then iterating to take the first one we support.
I also needed to manually add `auto` to the list of languages after getting it from Argos Translate (one way to do this is sketched in the code block below).
And I don't know why, but it does not seem the default language has been updated even with this change:
```python
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
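# Hedged sketch (not code from this PR): "manually add `auto` to the list
# of languages" can be done by prepending a placeholder object, the same
# trick the project's later app.py uses for the frontend default:
#
#   auto_lang = type("obj", (object,), {"code": "auto", "name": "Auto Detect"})
#   languages = [auto_lang] + languages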
``` | null | 2021-01-13 14:37:15+00:00 | 2021-01-15 16:36:43+00:00 | app/app.py | from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr or '127.0.0.1'
return ip
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="en", frontend_language_target="es"):
from app.init import boot
boot()
from app.language import languages
app = Flask(__name__)
if debug:
app.config['TEMPLATES_AUTO_RELOAD'] = True
# Map userdefined frontend languages to argos language object.
frontend_argos_language_source = next(iter([l for l in languages if l.code == frontend_language_source]), None)
frontend_argos_language_target = next(iter([l for l in languages if l.code == frontend_language_target]), None)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(f"{frontend_language_source} as frontend source language is not supported.")
if frontend_argos_language_target is None:
raise AttributeError(f"{frontend_language_target} as frontend target language is not supported.")
if req_limit > 0:
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["%s per minute" % req_limit]
)
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.route("/")
def index():
return render_template('index.html', gaId=ga_id)
@app.route("/languages")
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{'code': l.code, 'name': l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin','*')
response.headers.add('Access-Control-Allow-Headers', "Authorization, Content-Type")
response.headers.add('Access-Control-Expose-Headers', "Authorization")
response.headers.add('Access-Control-Allow-Methods', "GET, POST")
response.headers.add('Access-Control-Allow-Credentials', "true")
response.headers.add('Access-Control-Max-Age', 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=['POST'])
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
type: string
description: Translated text
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
if request.is_json:
json = request.get_json()
q = json.get('q')
source_lang = json.get('source')
target_lang = json.get('target')
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if char_limit != -1:
q = q[:char_limit]
src_lang = next(iter([l for l in languages if l.code == source_lang]), None)
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if src_lang is None:
abort(400, description="%s is not supported" % source_lang)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
translator = src_lang.get_translation(tgt_lang)
try:
return jsonify({"translatedText": translator.translate(q) })
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/frontend/settings")
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify({'charLimit': char_limit,
'language': {
'source': {'code': frontend_argos_language_source.code, 'name': frontend_argos_language_source.name},
'target': {'code': frontend_argos_language_target.code, 'name': frontend_argos_language_target.name}}
})
swag = swagger(app)
swag['info']['version'] = "1.0"
swag['info']['title'] = "LibreTranslate"
@app.route("/spec")
def spec():
return jsonify(swag)
SWAGGER_URL = '/docs' # URL for exposing Swagger UI (without trailing '/')
API_URL = '/spec'
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(
SWAGGER_URL,
API_URL
)
app.register_blueprint(swaggerui_blueprint)
return app
| from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from langdetect import detect
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr or '127.0.0.1'
return ip
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
from app.init import boot
boot()
from app.language import languages
app = Flask(__name__)
if debug:
app.config['TEMPLATES_AUTO_RELOAD'] = True
# Map userdefined frontend languages to argos language object.
frontend_argos_language_source = next(iter([l for l in languages if l.code == frontend_language_source or l.code == 'auto']), None)
frontend_argos_language_target = next(iter([l for l in languages if l.code == frontend_language_target]), None)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(f"{frontend_language_source} as frontend source language is not supported.")
if frontend_argos_language_target is None:
raise AttributeError(f"{frontend_language_target} as frontend target language is not supported.")
if req_limit > 0:
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["%s per minute" % req_limit]
)
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.route("/")
def index():
return render_template('index.html', gaId=ga_id)
@app.route("/languages")
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{'code': l.code, 'name': l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin','*')
response.headers.add('Access-Control-Allow-Headers', "Authorization, Content-Type")
response.headers.add('Access-Control-Expose-Headers', "Authorization")
response.headers.add('Access-Control-Allow-Methods', "GET, POST")
response.headers.add('Access-Control-Allow-Credentials', "true")
response.headers.add('Access-Control-Max-Age', 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=['POST'])
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
type: string
description: Translated text
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
if request.is_json:
json = request.get_json()
q = json.get('q')
source_lang = json.get('source')
target_lang = json.get('target')
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if char_limit != -1:
q = q[:char_limit]
original_source_lang = source_lang
if source_lang == 'auto':
source_lang = detect(q)
src_lang = next(iter([l for l in languages if l.code == source_lang]), None)
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if src_lang is None and original_source_lang == 'auto':
return jsonify({"translatedText": "Detected language not supported (" + source_lang + ")" })
if src_lang is None:
abort(400, description="%s is not supported" % source_lang)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
translator = src_lang.get_translation(tgt_lang)
try:
return jsonify({"translatedText": translator.translate(q) })
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/frontend/settings")
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify({'charLimit': char_limit,
'language': {
'source': {'code': frontend_argos_language_source.code, 'name': frontend_argos_language_source.name},
'target': {'code': frontend_argos_language_target.code, 'name': frontend_argos_language_target.name}}
})
swag = swagger(app)
swag['info']['version'] = "1.0"
swag['info']['title'] = "LibreTranslate"
@app.route("/spec")
def spec():
return jsonify(swag)
SWAGGER_URL = '/docs' # URL for exposing Swagger UI (without trailing '/')
API_URL = '/spec'
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(
SWAGGER_URL,
API_URL
)
app.register_blueprint(swaggerui_blueprint)
return app
| vemonet | 9bf3eabb6ea2de400611f047ee952f7a919f95e7 | 06b3c12ff6e49c7b2e2c1cb388c1e8068196d909 | This is not working for me; it always trips the AttributeError here:
https://github.com/uav4geo/LibreTranslate/blob/9bf3eabb6ea2de400611f047ee952f7a919f95e7/app/app.py#L28 | worldworm | 14 |
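On worldworm's report above: Argos Translate publishes no language with code `auto`, so the `next(iter([...]), None)` lookup in this revision still yields None for an `auto` source, and the startup guard raises. A minimal, self-contained repro, followed by the placeholder-object approach that the newer app.py earlier in this file uses:

```python
class Lang:
    """Stand-in for an Argos Translate language object."""
    def __init__(self, code, name):
        self.code, self.name = code, name

languages = [Lang("en", "English"), Lang("es", "Spanish")]

# The lookup pattern from create_app: nothing has code "auto", so this is
# None, and the guard then raises AttributeError at startup.
source = next(iter([l for l in languages if l.code == "auto"]), None)
assert source is None

# The later approach: synthesize a placeholder instead of expecting Argos
# to know "auto" (cf. the "Auto Detect" object in the newer app.py above).
if source is None:
    source = type("obj", (object,), {"code": "auto", "name": "Auto Detect"})
print(source.code, source.name)  # -> auto Auto Detect
```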
LibreTranslate/LibreTranslate | 12 | add support for auto language | Not perfect: it sometimes picks a language that is not supported (see the note in the code block below).
It could be improved by using the function that returns the list of most probable languages, then iterating to take the first one we support.
I also needed to manually add `auto` to the list of languages after getting it from Argos Translate.
And I don't know why, but it does not seem the default language has been updated even with this change:
```python
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
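# Hedged note (not code from this PR): this revision resolves "auto" with a
# single langdetect call, which is why unsupported guesses slip through:
#
#   from langdetect import detect
#   detect("Hello world!")  # e.g. 'en'; raises LangDetectException when the
#                           # input cannot be classified
#
# Iterating detect_langs() candidates instead would narrow those failures.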
``` | null | 2021-01-13 14:37:15+00:00 | 2021-01-15 16:36:43+00:00 | app/app.py | from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr or '127.0.0.1'
return ip
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="en", frontend_language_target="es"):
from app.init import boot
boot()
from app.language import languages
app = Flask(__name__)
if debug:
app.config['TEMPLATES_AUTO_RELOAD'] = True
# Map userdefined frontend languages to argos language object.
frontend_argos_language_source = next(iter([l for l in languages if l.code == frontend_language_source]), None)
frontend_argos_language_target = next(iter([l for l in languages if l.code == frontend_language_target]), None)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(f"{frontend_language_source} as frontend source language is not supported.")
if frontend_argos_language_target is None:
raise AttributeError(f"{frontend_language_target} as frontend target language is not supported.")
if req_limit > 0:
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["%s per minute" % req_limit]
)
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.route("/")
def index():
return render_template('index.html', gaId=ga_id)
@app.route("/languages")
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{'code': l.code, 'name': l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin','*')
response.headers.add('Access-Control-Allow-Headers', "Authorization, Content-Type")
response.headers.add('Access-Control-Expose-Headers', "Authorization")
response.headers.add('Access-Control-Allow-Methods', "GET, POST")
response.headers.add('Access-Control-Allow-Credentials', "true")
response.headers.add('Access-Control-Max-Age', 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=['POST'])
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
type: string
description: Translated text
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
if request.is_json:
json = request.get_json()
q = json.get('q')
source_lang = json.get('source')
target_lang = json.get('target')
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if char_limit != -1:
q = q[:char_limit]
src_lang = next(iter([l for l in languages if l.code == source_lang]), None)
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if src_lang is None:
abort(400, description="%s is not supported" % source_lang)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
translator = src_lang.get_translation(tgt_lang)
try:
return jsonify({"translatedText": translator.translate(q) })
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/frontend/settings")
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify({'charLimit': char_limit,
'language': {
'source': {'code': frontend_argos_language_source.code, 'name': frontend_argos_language_source.name},
'target': {'code': frontend_argos_language_target.code, 'name': frontend_argos_language_target.name}}
})
swag = swagger(app)
swag['info']['version'] = "1.0"
swag['info']['title'] = "LibreTranslate"
@app.route("/spec")
def spec():
return jsonify(swag)
SWAGGER_URL = '/docs' # URL for exposing Swagger UI (without trailing '/')
API_URL = '/spec'
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(
SWAGGER_URL,
API_URL
)
app.register_blueprint(swaggerui_blueprint)
return app
| from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from langdetect import detect
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr or '127.0.0.1'
return ip
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
from app.init import boot
boot()
from app.language import languages
app = Flask(__name__)
if debug:
app.config['TEMPLATES_AUTO_RELOAD'] = True
# Map userdefined frontend languages to argos language object.
frontend_argos_language_source = next(iter([l for l in languages if l.code == frontend_language_source or l.code == 'auto']), None)
frontend_argos_language_target = next(iter([l for l in languages if l.code == frontend_language_target]), None)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(f"{frontend_language_source} as frontend source language is not supported.")
if frontend_argos_language_target is None:
raise AttributeError(f"{frontend_language_target} as frontend target language is not supported.")
if req_limit > 0:
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["%s per minute" % req_limit]
)
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.route("/")
def index():
return render_template('index.html', gaId=ga_id)
@app.route("/languages")
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{'code': l.code, 'name': l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin','*')
response.headers.add('Access-Control-Allow-Headers', "Authorization, Content-Type")
response.headers.add('Access-Control-Expose-Headers', "Authorization")
response.headers.add('Access-Control-Allow-Methods', "GET, POST")
response.headers.add('Access-Control-Allow-Credentials', "true")
response.headers.add('Access-Control-Max-Age', 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=['POST'])
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
type: string
description: Translated text
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
if request.is_json:
json = request.get_json()
q = json.get('q')
source_lang = json.get('source')
target_lang = json.get('target')
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if char_limit != -1:
q = q[:char_limit]
original_source_lang = source_lang
if source_lang == 'auto':
source_lang = detect(q)
src_lang = next(iter([l for l in languages if l.code == source_lang]), None)
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if src_lang is None and original_source_lang == 'auto':
return jsonify({"translatedText": "Detected language not supported (" + source_lang + ")" })
if src_lang is None:
abort(400, description="%s is not supported" % source_lang)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
translator = src_lang.get_translation(tgt_lang)
try:
return jsonify({"translatedText": translator.translate(q) })
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/frontend/settings")
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify({'charLimit': char_limit,
'language': {
'source': {'code': frontend_argos_language_source.code, 'name': frontend_argos_language_source.name},
'target': {'code': frontend_argos_language_target.code, 'name': frontend_argos_language_target.name}}
})
swag = swagger(app)
swag['info']['version'] = "1.0"
swag['info']['title'] = "LibreTranslate"
@app.route("/spec")
def spec():
return jsonify(swag)
SWAGGER_URL = '/docs' # URL for exposing Swagger UI (without trailing '/')
API_URL = '/spec'
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(
SWAGGER_URL,
API_URL
)
app.register_blueprint(swaggerui_blueprint)
return app
| vemonet | 9bf3eabb6ea2de400611f047ee952f7a919f95e7 | 06b3c12ff6e49c7b2e2c1cb388c1e8068196d909 | I think a duplicate var here is not needed? I'm not entirely sure, though. | worldworm | 15 |
LibreTranslate/LibreTranslate | 12 | add support for auto language | Not perfect: it sometimes picks a language that is not supported.
It could be improved by using the function that returns the list of most probable languages, then iterating to take the first one we support.
I also needed to manually add `auto` to the list of languages after getting it from Argos Translate.
And I don't know why, but it does not seem the default languages have been updated, even with this change:
```python
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
``` | null | 2021-01-13 14:37:15+00:00 | 2021-01-15 16:36:43+00:00 | app/app.py | from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr or '127.0.0.1'
return ip
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="en", frontend_language_target="es"):
from app.init import boot
boot()
from app.language import languages
app = Flask(__name__)
if debug:
app.config['TEMPLATES_AUTO_RELOAD'] = True
# Map userdefined frontend languages to argos language object.
frontend_argos_language_source = next(iter([l for l in languages if l.code == frontend_language_source]), None)
frontend_argos_language_target = next(iter([l for l in languages if l.code == frontend_language_target]), None)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(f"{frontend_language_source} as frontend source language is not supported.")
if frontend_argos_language_target is None:
raise AttributeError(f"{frontend_language_target} as frontend target language is not supported.")
if req_limit > 0:
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["%s per minute" % req_limit]
)
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.route("/")
def index():
return render_template('index.html', gaId=ga_id)
@app.route("/languages")
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{'code': l.code, 'name': l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin','*')
response.headers.add('Access-Control-Allow-Headers', "Authorization, Content-Type")
response.headers.add('Access-Control-Expose-Headers', "Authorization")
response.headers.add('Access-Control-Allow-Methods', "GET, POST")
response.headers.add('Access-Control-Allow-Credentials', "true")
response.headers.add('Access-Control-Max-Age', 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=['POST'])
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
type: string
description: Translated text
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
if request.is_json:
json = request.get_json()
q = json.get('q')
source_lang = json.get('source')
target_lang = json.get('target')
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if char_limit != -1:
q = q[:char_limit]
src_lang = next(iter([l for l in languages if l.code == source_lang]), None)
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if src_lang is None:
abort(400, description="%s is not supported" % source_lang)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
translator = src_lang.get_translation(tgt_lang)
try:
return jsonify({"translatedText": translator.translate(q) })
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/frontend/settings")
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify({'charLimit': char_limit,
'language': {
'source': {'code': frontend_argos_language_source.code, 'name': frontend_argos_language_source.name},
'target': {'code': frontend_argos_language_target.code, 'name': frontend_argos_language_target.name}}
})
swag = swagger(app)
swag['info']['version'] = "1.0"
swag['info']['title'] = "LibreTranslate"
@app.route("/spec")
def spec():
return jsonify(swag)
SWAGGER_URL = '/docs' # URL for exposing Swagger UI (without trailing '/')
API_URL = '/spec'
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(
SWAGGER_URL,
API_URL
)
app.register_blueprint(swaggerui_blueprint)
return app
| from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from langdetect import detect
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr or '127.0.0.1'
return ip
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
from app.init import boot
boot()
from app.language import languages
app = Flask(__name__)
if debug:
app.config['TEMPLATES_AUTO_RELOAD'] = True
# Map userdefined frontend languages to argos language object.
frontend_argos_language_source = next(iter([l for l in languages if l.code == frontend_language_source or l.code == 'auto']), None)
frontend_argos_language_target = next(iter([l for l in languages if l.code == frontend_language_target]), None)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(f"{frontend_language_source} as frontend source language is not supported.")
if frontend_argos_language_target is None:
raise AttributeError(f"{frontend_language_target} as frontend target language is not supported.")
if req_limit > 0:
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["%s per minute" % req_limit]
)
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.route("/")
def index():
return render_template('index.html', gaId=ga_id)
@app.route("/languages")
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{'code': l.code, 'name': l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin','*')
response.headers.add('Access-Control-Allow-Headers', "Authorization, Content-Type")
response.headers.add('Access-Control-Expose-Headers', "Authorization")
response.headers.add('Access-Control-Allow-Methods', "GET, POST")
response.headers.add('Access-Control-Allow-Credentials', "true")
response.headers.add('Access-Control-Max-Age', 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=['POST'])
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
type: string
description: Translated text
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
if request.is_json:
json = request.get_json()
q = json.get('q')
source_lang = json.get('source')
target_lang = json.get('target')
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if char_limit != -1:
q = q[:char_limit]
original_source_lang = source_lang
if source_lang == 'auto':
source_lang = detect(q)
src_lang = next(iter([l for l in languages if l.code == source_lang]), None)
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if src_lang is None and original_source_lang == 'auto':
return jsonify({"translatedText": "Detected language not supported (" + source_lang + ")" })
if src_lang is None:
abort(400, description="%s is not supported" % source_lang)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
translator = src_lang.get_translation(tgt_lang)
try:
return jsonify({"translatedText": translator.translate(q) })
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/frontend/settings")
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify({'charLimit': char_limit,
'language': {
'source': {'code': frontend_argos_language_source.code, 'name': frontend_argos_language_source.name},
'target': {'code': frontend_argos_language_target.code, 'name': frontend_argos_language_target.name}}
})
swag = swagger(app)
swag['info']['version'] = "1.0"
swag['info']['title'] = "LibreTranslate"
@app.route("/spec")
def spec():
return jsonify(swag)
SWAGGER_URL = '/docs' # URL for exposing Swagger UI (without trailing '/')
API_URL = '/spec'
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(
SWAGGER_URL,
API_URL
)
app.register_blueprint(swaggerui_blueprint)
return app
| vemonet | 9bf3eabb6ea2de400611f047ee952f7a919f95e7 | 06b3c12ff6e49c7b2e2c1cb388c1e8068196d909 | Wouldn't it be better to return an HTTP error code here? That way you could also show a matching error message in the frontend, or handle it in an application that uses this API.
Otherwise, there is a risk that a user will mistake this message for a translation, because a successful 200 comes back.
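For illustration, a minimal sketch of what I mean, reusing the `abort` helper that the other error paths in this file already use (the status code and message here are just an example):
```python
# Hypothetical alternative: report the unsupported detected language as a
# 400 error instead of returning the message as a translation with a 200.
if src_lang is None and original_source_lang == 'auto':
    abort(400, description="Detected language not supported (%s)" % source_lang)
```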
| worldworm | 16 |
LibreTranslate/LibreTranslate | 12 | add support for auto language | Not perfect: it sometimes picks a language that is not supported.
It could be improved by using the function that returns the list of most probable languages, then iterating to take the first one we support.
I also needed to manually add `auto` to the list of languages after getting it from Argos Translate.
And I don't know why, but it does not seem the default languages have been updated, even with this change:
```python
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
``` | null | 2021-01-13 14:37:15+00:00 | 2021-01-15 16:36:43+00:00 | app/app.py | from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr or '127.0.0.1'
return ip
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="en", frontend_language_target="es"):
from app.init import boot
boot()
from app.language import languages
app = Flask(__name__)
if debug:
app.config['TEMPLATES_AUTO_RELOAD'] = True
# Map userdefined frontend languages to argos language object.
frontend_argos_language_source = next(iter([l for l in languages if l.code == frontend_language_source]), None)
frontend_argos_language_target = next(iter([l for l in languages if l.code == frontend_language_target]), None)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(f"{frontend_language_source} as frontend source language is not supported.")
if frontend_argos_language_target is None:
raise AttributeError(f"{frontend_language_target} as frontend target language is not supported.")
if req_limit > 0:
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["%s per minute" % req_limit]
)
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.route("/")
def index():
return render_template('index.html', gaId=ga_id)
@app.route("/languages")
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{'code': l.code, 'name': l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin','*')
response.headers.add('Access-Control-Allow-Headers', "Authorization, Content-Type")
response.headers.add('Access-Control-Expose-Headers', "Authorization")
response.headers.add('Access-Control-Allow-Methods', "GET, POST")
response.headers.add('Access-Control-Allow-Credentials', "true")
response.headers.add('Access-Control-Max-Age', 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=['POST'])
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
type: string
description: Translated text
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
if request.is_json:
json = request.get_json()
q = json.get('q')
source_lang = json.get('source')
target_lang = json.get('target')
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if char_limit != -1:
q = q[:char_limit]
src_lang = next(iter([l for l in languages if l.code == source_lang]), None)
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if src_lang is None:
abort(400, description="%s is not supported" % source_lang)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
translator = src_lang.get_translation(tgt_lang)
try:
return jsonify({"translatedText": translator.translate(q) })
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/frontend/settings")
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify({'charLimit': char_limit,
'language': {
'source': {'code': frontend_argos_language_source.code, 'name': frontend_argos_language_source.name},
'target': {'code': frontend_argos_language_target.code, 'name': frontend_argos_language_target.name}}
})
swag = swagger(app)
swag['info']['version'] = "1.0"
swag['info']['title'] = "LibreTranslate"
@app.route("/spec")
def spec():
return jsonify(swag)
SWAGGER_URL = '/docs' # URL for exposing Swagger UI (without trailing '/')
API_URL = '/spec'
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(
SWAGGER_URL,
API_URL
)
app.register_blueprint(swaggerui_blueprint)
return app
| from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from langdetect import detect
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr or '127.0.0.1'
return ip
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
from app.init import boot
boot()
from app.language import languages
app = Flask(__name__)
if debug:
app.config['TEMPLATES_AUTO_RELOAD'] = True
# Map userdefined frontend languages to argos language object.
frontend_argos_language_source = next(iter([l for l in languages if l.code == frontend_language_source or l.code == 'auto']), None)
frontend_argos_language_target = next(iter([l for l in languages if l.code == frontend_language_target]), None)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(f"{frontend_language_source} as frontend source language is not supported.")
if frontend_argos_language_target is None:
raise AttributeError(f"{frontend_language_target} as frontend target language is not supported.")
if req_limit > 0:
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["%s per minute" % req_limit]
)
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.route("/")
def index():
return render_template('index.html', gaId=ga_id)
@app.route("/languages")
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{'code': l.code, 'name': l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin','*')
response.headers.add('Access-Control-Allow-Headers', "Authorization, Content-Type")
response.headers.add('Access-Control-Expose-Headers', "Authorization")
response.headers.add('Access-Control-Allow-Methods', "GET, POST")
response.headers.add('Access-Control-Allow-Credentials', "true")
response.headers.add('Access-Control-Max-Age', 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=['POST'])
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
type: string
description: Translated text
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
if request.is_json:
json = request.get_json()
q = json.get('q')
source_lang = json.get('source')
target_lang = json.get('target')
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if char_limit != -1:
q = q[:char_limit]
original_source_lang = source_lang
if source_lang == 'auto':
source_lang = detect(q)
src_lang = next(iter([l for l in languages if l.code == source_lang]), None)
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if src_lang is None and original_source_lang == 'auto':
return jsonify({"translatedText": "Detected language not supported (" + source_lang + ")" })
if src_lang is None:
abort(400, description="%s is not supported" % source_lang)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
translator = src_lang.get_translation(tgt_lang)
try:
return jsonify({"translatedText": translator.translate(q) })
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/frontend/settings")
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify({'charLimit': char_limit,
'language': {
'source': {'code': frontend_argos_language_source.code, 'name': frontend_argos_language_source.name},
'target': {'code': frontend_argos_language_target.code, 'name': frontend_argos_language_target.name}}
})
swag = swagger(app)
swag['info']['version'] = "1.0"
swag['info']['title'] = "LibreTranslate"
@app.route("/spec")
def spec():
return jsonify(swag)
SWAGGER_URL = '/docs' # URL for exposing Swagger UI (without trailing '/')
API_URL = '/spec'
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(
SWAGGER_URL,
API_URL
)
app.register_blueprint(swaggerui_blueprint)
return app
| vemonet | 9bf3eabb6ea2de400611f047ee952f7a919f95e7 | 06b3c12ff6e49c7b2e2c1cb388c1e8068196d909 | It is needed once later, to check whether the request was originally `auto` (this lets me keep `source_lang` as it is, without too many changes).
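For context, a condensed excerpt of the relevant lines from this diff, with comments added (nothing here is new code):
```python
original_source_lang = source_lang   # remember the value the client sent
if source_lang == 'auto':
    source_lang = detect(q)          # source_lang is overwritten by the detected code
# ... later, the saved value tells us whether the client asked for 'auto':
if src_lang is None and original_source_lang == 'auto':
    return jsonify({"translatedText": "Detected language not supported (" + source_lang + ")" })
```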
| vemonet | 17 |
LibreTranslate/LibreTranslate | 12 | add support for auto language | Not perfect: it sometimes picks a language that is not supported.
It could be improved by using the function that returns the list of most probable languages, then iterating to take the first one we support.
I also needed to manually add `auto` to the list of languages after getting it from Argos Translate.
And I don't know why, but it does not seem the default languages have been updated, even with this change:
```python
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
``` | null | 2021-01-13 14:37:15+00:00 | 2021-01-15 16:36:43+00:00 | app/app.py | from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr or '127.0.0.1'
return ip
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="en", frontend_language_target="es"):
from app.init import boot
boot()
from app.language import languages
app = Flask(__name__)
if debug:
app.config['TEMPLATES_AUTO_RELOAD'] = True
# Map userdefined frontend languages to argos language object.
frontend_argos_language_source = next(iter([l for l in languages if l.code == frontend_language_source]), None)
frontend_argos_language_target = next(iter([l for l in languages if l.code == frontend_language_target]), None)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(f"{frontend_language_source} as frontend source language is not supported.")
if frontend_argos_language_target is None:
raise AttributeError(f"{frontend_language_target} as frontend target language is not supported.")
if req_limit > 0:
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["%s per minute" % req_limit]
)
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.route("/")
def index():
return render_template('index.html', gaId=ga_id)
@app.route("/languages")
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{'code': l.code, 'name': l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin','*')
response.headers.add('Access-Control-Allow-Headers', "Authorization, Content-Type")
response.headers.add('Access-Control-Expose-Headers', "Authorization")
response.headers.add('Access-Control-Allow-Methods', "GET, POST")
response.headers.add('Access-Control-Allow-Credentials', "true")
response.headers.add('Access-Control-Max-Age', 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=['POST'])
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
type: string
description: Translated text
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
if request.is_json:
json = request.get_json()
q = json.get('q')
source_lang = json.get('source')
target_lang = json.get('target')
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if char_limit != -1:
q = q[:char_limit]
src_lang = next(iter([l for l in languages if l.code == source_lang]), None)
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if src_lang is None:
abort(400, description="%s is not supported" % source_lang)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
translator = src_lang.get_translation(tgt_lang)
try:
return jsonify({"translatedText": translator.translate(q) })
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/frontend/settings")
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify({'charLimit': char_limit,
'language': {
'source': {'code': frontend_argos_language_source.code, 'name': frontend_argos_language_source.name},
'target': {'code': frontend_argos_language_target.code, 'name': frontend_argos_language_target.name}}
})
swag = swagger(app)
swag['info']['version'] = "1.0"
swag['info']['title'] = "LibreTranslate"
@app.route("/spec")
def spec():
return jsonify(swag)
SWAGGER_URL = '/docs' # URL for exposing Swagger UI (without trailing '/')
API_URL = '/spec'
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(
SWAGGER_URL,
API_URL
)
app.register_blueprint(swaggerui_blueprint)
return app
| from flask import Flask, render_template, jsonify, request, abort, send_from_directory
from flask_swagger import swagger
from flask_swagger_ui import get_swaggerui_blueprint
from langdetect import detect
def get_remote_address():
if request.headers.getlist("X-Forwarded-For"):
ip = request.headers.getlist("X-Forwarded-For")[0]
else:
ip = request.remote_addr or '127.0.0.1'
return ip
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
from app.init import boot
boot()
from app.language import languages
app = Flask(__name__)
if debug:
app.config['TEMPLATES_AUTO_RELOAD'] = True
# Map userdefined frontend languages to argos language object.
frontend_argos_language_source = next(iter([l for l in languages if l.code == frontend_language_source or l.code == 'auto']), None)
frontend_argos_language_target = next(iter([l for l in languages if l.code == frontend_language_target]), None)
# Raise AttributeError to prevent app startup if user input is not valid.
if frontend_argos_language_source is None:
raise AttributeError(f"{frontend_language_source} as frontend source language is not supported.")
if frontend_argos_language_target is None:
raise AttributeError(f"{frontend_language_target} as frontend target language is not supported.")
if req_limit > 0:
from flask_limiter import Limiter
limiter = Limiter(
app,
key_func=get_remote_address,
default_limits=["%s per minute" % req_limit]
)
@app.errorhandler(400)
def invalid_api(e):
return jsonify({"error": str(e.description)}), 400
@app.errorhandler(500)
def server_error(e):
return jsonify({"error": str(e.description)}), 500
@app.errorhandler(429)
def slow_down_error(e):
return jsonify({"error": "Slowdown: " + str(e.description)}), 429
@app.route("/")
def index():
return render_template('index.html', gaId=ga_id)
@app.route("/languages")
def langs():
"""
Retrieve list of supported languages
---
tags:
- translate
responses:
200:
description: List of languages
schema:
id: languages
type: array
items:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
return jsonify([{'code': l.code, 'name': l.name} for l in languages])
# Add cors
@app.after_request
def after_request(response):
response.headers.add('Access-Control-Allow-Origin','*')
response.headers.add('Access-Control-Allow-Headers', "Authorization, Content-Type")
response.headers.add('Access-Control-Expose-Headers', "Authorization")
response.headers.add('Access-Control-Allow-Methods', "GET, POST")
response.headers.add('Access-Control-Allow-Credentials', "true")
response.headers.add('Access-Control-Max-Age', 60 * 60 * 24 * 20)
return response
@app.route("/translate", methods=['POST'])
def translate():
"""
Translate text from a language to another
---
tags:
- translate
parameters:
- in: formData
name: q
schema:
type: string
example: Hello world!
required: true
description: Text to translate
- in: formData
name: source
schema:
type: string
example: en
required: true
description: Source language code
- in: formData
name: target
schema:
type: string
example: es
required: true
description: Target language code
responses:
200:
description: Translated text
schema:
id: translate
type: object
properties:
translatedText:
type: string
description: Translated text
400:
description: Invalid request
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
500:
description: Translation error
schema:
id: error-response
type: object
properties:
error:
type: string
description: Error message
429:
description: Slow down
schema:
id: error-slow-down
type: object
properties:
error:
type: string
description: Reason for slow down
"""
if request.is_json:
json = request.get_json()
q = json.get('q')
source_lang = json.get('source')
target_lang = json.get('target')
else:
q = request.values.get("q")
source_lang = request.values.get("source")
target_lang = request.values.get("target")
if not q:
abort(400, description="Invalid request: missing q parameter")
if not source_lang:
abort(400, description="Invalid request: missing source parameter")
if not target_lang:
abort(400, description="Invalid request: missing target parameter")
if char_limit != -1:
q = q[:char_limit]
original_source_lang = source_lang
if source_lang == 'auto':
source_lang = detect(q)
src_lang = next(iter([l for l in languages if l.code == source_lang]), None)
tgt_lang = next(iter([l for l in languages if l.code == target_lang]), None)
if src_lang is None and original_source_lang == 'auto':
return jsonify({"translatedText": "Detected language not supported (" + source_lang + ")" })
if src_lang is None:
abort(400, description="%s is not supported" % source_lang)
if tgt_lang is None:
abort(400, description="%s is not supported" % target_lang)
translator = src_lang.get_translation(tgt_lang)
try:
return jsonify({"translatedText": translator.translate(q) })
except Exception as e:
abort(500, description="Cannot translate text: %s" % str(e))
@app.route("/frontend/settings")
def frontend_settings():
"""
Retrieve frontend specific settings
---
tags:
- frontend
responses:
200:
description: frontend settings
schema:
id: frontend-settings
type: object
properties:
charLimit:
type: integer
description: Character input limit for this language (-1 indicates no limit)
language:
type: object
properties:
source:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
target:
type: object
properties:
code:
type: string
description: Language code
name:
type: string
description: Human-readable language name (in English)
"""
return jsonify({'charLimit': char_limit,
'language': {
'source': {'code': frontend_argos_language_source.code, 'name': frontend_argos_language_source.name},
'target': {'code': frontend_argos_language_target.code, 'name': frontend_argos_language_target.name}}
})
swag = swagger(app)
swag['info']['version'] = "1.0"
swag['info']['title'] = "LibreTranslate"
@app.route("/spec")
def spec():
return jsonify(swag)
SWAGGER_URL = '/docs' # URL for exposing Swagger UI (without trailing '/')
API_URL = '/spec'
# Call factory function to create our blueprint
swaggerui_blueprint = get_swaggerui_blueprint(
SWAGGER_URL,
API_URL
)
app.register_blueprint(swaggerui_blueprint)
return app
| vemonet | 9bf3eabb6ea2de400611f047ee952f7a919f95e7 | 06b3c12ff6e49c7b2e2c1cb388c1e8068196d909 | No, because LibreTranslate tries to translate on every key press, so you get a lot of unsupported detected languages before you finish typing. That was actually the main issue to take care of in the UI. But this should be improved by asking for the list of most probable languages and using the first one that we support:
```
>>> from langdetect import detect_langs
>>> detect_langs("Otec matka syn.")
[sk:0.572770823327, pl:0.292872522702, cs:0.134356653968]
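>>> # hypothetical follow-up, assuming `languages` is the app's list of argos language objects:
>>> supported = {l.code for l in languages}
>>> next((c.lang for c in detect_langs("Otec matka syn.") if c.lang in supported), None)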
``` | vemonet | 18 |
LibreTranslate/LibreTranslate | 12 | add support for auto language | Not perfect: it sometimes picks a language that is not supported.
It could be improved by using the function that returns the list of most probable languages, then iterating to take the first one we support.
I also needed to manually add `auto` to the list of languages after getting it from Argos Translate.
And I don't know why, but it does not seem the default languages have been updated, even with this change:
```python
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
``` | null | 2021-01-13 14:37:15+00:00 | 2021-01-15 16:36:43+00:00 | app/templates/index.html | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="description" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes.">
<meta name="keywords" content="translation,api">
<meta property="og:title" content="LibreTranslate - Free and Open Source Machine Translation API" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://libretranslate.com" />
<meta property="og:image" content="https://user-images.githubusercontent.com/1951843/102724116-32a6df00-42db-11eb-8cc0-129ab39cdfb5.png" />
<meta property="og:description" name="description" class="swiftype" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes."/>
<script src="https://cdn.jsdelivr.net/npm/vue@2"></script>
<!-- Compiled and minified CSS -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css">
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
<link href="https://cdnjs.cloudflare.com/ajax/libs/prism/1.22.0/themes/prism.min.css" rel="stylesheet" />
<style type="text/css">
textarea.materialize-textarea{height: 120px;}
.code{
font-size: 90%;
border-radius: 4px;
padding: 4px;
border: 1px solid #9e9e9e;
background: #fbfbfb;
overflow: auto;
font-family: monospace;
min-height: 280px;
width: 100%;
overflow: auto;
}
.progress.translate{
position: absolute;
}
.card.horizontal .card-stacked{
overflow: auto;
}
</style>
{% if gaId %}
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id={{ gaId }}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '{{ gaId }}');
</script>
{% endif %}
</head>
<body>
<nav class="blue lighten-1" role="navigation">
<div class="nav-wrapper container"><a id="logo-container" href="/" class="brand-logo"><i class="material-icons">translate</i> LibreTranslate</a>
<ul class="right hide-on-med-and-down">
<li><a href="/docs">API Docs</a></li>
<li><a href="https://github.com/uav4geo/LibreTranslate">GitHub</a></li>
</ul>
<ul id="nav-mobile" class="sidenav">
<li><a href="/docs">API Docs</a></li>
<li><a href="https://github.com/uav4geo/LibreTranslate">GitHub</a></li>
</ul>
<a href="#" data-target="nav-mobile" class="sidenav-trigger"><i class="material-icons">menu</i></a>
</div>
</nav>
<div id="app">
<div class="section no-pad-bot center" v-if="loading">
<div class="container">
<div class="row">
<div class="preloader-wrapper active">
<div class="spinner-layer spinner-blue-only">
<div class="circle-clipper left">
<div class="circle"></div>
</div><div class="gap-patch">
<div class="circle"></div>
</div><div class="circle-clipper right">
<div class="circle"></div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else-if="error">
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<div class="col s12 m7">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<i class="material-icons">warning</i><p> [[ error ]]</p>
</div>
<div class="card-action">
<a href="#" @click="dismissError">Dismiss</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else>
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<h3 class="header center">Translation API</h3>
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<form class="col s12">
<div class="row">
<div class="input-field col s5">
<select class="browser-default" v-model="sourceLang" ref="sourceLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
<div class="col s2 center">
<a href="javascript:void(0)" @click="swapLangs" class="waves-effect waves-teal btn-flat btn-large" style="margin-top: 8px;"><i class="material-icons">swap_horiz</i></a>
</div>
<div class="input-field col s5">
<select class="browser-default" v-model="targetLang" ref="targetLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
</div>
<div class="row">
<div class="input-field col s6">
<textarea id="textarea1" class="materialize-textarea" v-model="inputText" @input="handleInput" ref="inputTextarea"></textarea>
<label for="textarea1">Input Text</label>
<div v-if="charactersLimit !== -1">
<label>[[ inputText.length ]] / [[ charactersLimit ]]</label>
</div>
</div>
<div class="input-field col s6">
<div>
<textarea id="textarea2" class="materialize-textarea" v-model="translatedText" ref="translatedTextarea"></textarea>
<label for="textarea2"><div class="progress translate" v-if="loadingTranslation">
<div class="indeterminate"></div>
</div></label>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="section no-pad-bot" id="index-banner">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<div class="row center">
<div class="col s12 m12 l6 left-align">
<p>Request</p>
<p>
<pre class="code"><code class="language-javascript" v-html="$options.filters.highlight(requestCode)">
</code></pre></p>
</div>
<div class="col s12 m12 l6 left-align">
<p>Response</p>
<pre class="code"><code class="language-javascript" v-html="$options.filters.highlight(output)">
</code></pre>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="section no-pad-bot" id="index-banner">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<h3 class="header">Open Source Machine Translation</h3>
<h4 class="header">100% Self-Hosted. No Limits. No Ties to Proprietary Services.</h4>
<br/><a class="waves-effect waves-light btn btn-large" href="https://github.com/uav4geo/LibreTranslate"><i class="material-icons left">cloud_download</i> Download</a>
<br/><br/><br/>
</div>
</div>
</div>
</div>
</div>
</div>
<footer class="page-footer blue darken-3">
<div class="container">
<div class="row">
<div class="col l6 s12">
<h5 class="white-text">LibreTranslate</h5>
<p class="grey-text text-lighten-4">Free and Open Source Machine Translation API</p>
<p class="grey-text text-lighten-4">
Made with ❤ by <a class="grey-text text-lighten-3" href="https://uav4geo.com">UAV4GEO</a> and powered by <a class="grey-text text-lighten-3" href="https://github.com/argosopentech/argos-translate/">Argos Translate</a>
</p>
<p><a class="grey-text text-lighten-4" href="https://www.gnu.org/licenses/agpl-3.0.en.html">License: AGPLv3</a></p>
</div>
<div class="col l4 offset-l2 s12">
<!-- <h5 class="white-text">Links</h5>
<ul>
<li><a class="grey-text text-lighten-3" href="#!">Link 1</a></li>
<li><a class="grey-text text-lighten-3" href="#!">Link 2</a></li>
<li><a class="grey-text text-lighten-3" href="#!">Link 3</a></li>
<li><a class="grey-text text-lighten-3" href="#!">Link 4</a></li>
</ul> -->
<div class="container">
</div>
</div>
</div>
</div>
<div class="footer-copyright center">
</div>
</footer>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/js/materialize.min.js"></script>
<script>
window.Prism = window.Prism || {};
window.Prism.manual = true;
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.22.0/prism.min.js" ></script>
<script>
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
document.addEventListener('DOMContentLoaded', function(){
var elems = document.querySelectorAll('.sidenav');
var instances = M.Sidenav.init(elems);
var app = new Vue({
el: '#app',
delimiters: ['[[',']]'],
data: {
BaseUrl: BaseUrl,
loading: true,
error: "",
langs: [],
settings: {},
sourceLang: "",
targetLang: "",
loadingTranslation: false,
inputText: "",
translatedText: "",
output: "",
charactersLimit: -1,
},
mounted: function(){
var self = this;
var requestSettings = new XMLHttpRequest();
requestSettings.open('GET', BaseUrl + '/frontend/settings', true);
requestSettings.onload = function() {
if (this.status >= 200 && this.status < 400) {
// Success!
self.settings = JSON.parse(this.response);
self.sourceLang = self.settings.language.source.code;
self.targetLang = self.settings.language.target.code;
self.charactersLimit = self.settings.charLimit;
}else {
self.error = "Cannot load /frontend/settings";
self.loading = false;
}
};
requestSettings.onerror = function() {
self.error = "Error while calling /frontend/settings";
self.loading = false;
};
requestSettings.send();
var requestLanguages = new XMLHttpRequest();
requestLanguages.open('GET', BaseUrl + '/languages', true);
requestLanguages.onload = function() {
if (this.status >= 200 && this.status < 400) {
// Success!
self.langs = JSON.parse(this.response);
if (self.langs.length === 0){
self.loading = false;
self.error = "No languages available. Did you install the models correctly?"
return;
}
self.loading = false;
} else {
self.error = "Cannot load /languages";
self.loading = false;
}
};
requestLanguages.onerror = function() {
self.error = "Error while calling /languages";
self.loading = false;
};
requestLanguages.send();
},
updated: function(){
M.FormSelect.init(this.$refs.sourceLangDropdown);
M.FormSelect.init(this.$refs.targetLangDropdown);
if (this.inputText === ""){
this.$refs.inputTextarea.style.height = 150 + "px";
this.$refs.translatedTextarea.style.height = 150 + "px";
}else{
this.$refs.inputTextarea.style.height = this.$refs.translatedTextarea.style.height = "1px";
this.$refs.inputTextarea.style.height = Math.max(150, this.$refs.inputTextarea.scrollHeight) + "px";
this.$refs.translatedTextarea.style.height = Math.max(150, this.$refs.translatedTextarea.scrollHeight) + "px";
}
if (this.charactersLimit !== -1 && this.inputText.length >= this.charactersLimit){
this.inputText = this.inputText.substring(0, this.charactersLimit);
}
},
computed: {
requestCode: function(){
return ['const res = await fetch("' + this.BaseUrl + '/translate", {',
' method: "POST",',
' body: JSON.stringify({',
' q: "' + this.$options.filters.escape(this.inputText) + '",',
' source: "' + this.$options.filters.escape(this.sourceLang) + '",',
' target: "' + this.$options.filters.escape(this.targetLang) + '"',
' }),',
' headers: {',
' "Content-Type": "application/json"}',
' });',
'',
'console.log(await res.json());'].join("\n");
}
},
filters: {
escape: function(v){
                    return v.replace(/"/g, '\\\"');
},
highlight: function(v){
return Prism.highlight(v, Prism.languages.javascript, 'javascript');
}
},
methods: {
abortPreviousTransRequest: function(){
if (this.transRequest){
this.transRequest.abort();
this.transRequest = null;
}
},
swapLangs: function(){
var t = this.sourceLang;
this.sourceLang = this.targetLang;
this.targetLang = t;
this.inputText = this.translatedText;
this.translatedText = "";
this.handleInput();
},
dismissError: function(){
this.error = '';
},
handleInput: function(e){
if (this.timeout) clearTimeout(this.timeout);
this.timeout = null;
if (this.inputText === ""){
this.translatedText = "";
this.output = "";
this.abortPreviousTransRequest();
this.loadingTranslation = false;
return;
}
var self = this;
self.loadingTranslation = true;
this.timeout = setTimeout(function(){
self.abortPreviousTransRequest();
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
request.open('POST', BaseUrl + '/translate', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
// Success!
if (res.translatedText !== undefined){
self.translatedText = res.translatedText;
self.loadingTranslation = false;
self.output = JSON.stringify(res, null, 4);
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.loadingTranslation = false;
}
};
request.onerror = function() {
self.error = "Error while calling /translate";
self.loadingTranslation = false;
};
request.send(data);
}, 300);
}
}
});
});
</script>
</body>
</html> | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="description" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes.">
<meta name="keywords" content="translation,api">
<meta property="og:title" content="LibreTranslate - Free and Open Source Machine Translation API" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://libretranslate.com" />
<meta property="og:image" content="https://user-images.githubusercontent.com/1951843/102724116-32a6df00-42db-11eb-8cc0-129ab39cdfb5.png" />
<meta property="og:description" name="description" class="swiftype" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes."/>
<script src="https://cdn.jsdelivr.net/npm/vue@2"></script>
<!-- Compiled and minified CSS -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css">
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
<link href="https://cdnjs.cloudflare.com/ajax/libs/prism/1.22.0/themes/prism.min.css" rel="stylesheet" />
<style type="text/css">
textarea.materialize-textarea{height: 120px;}
.code{
font-size: 90%;
border-radius: 4px;
padding: 4px;
border: 1px solid #9e9e9e;
background: #fbfbfb;
overflow: auto;
font-family: monospace;
min-height: 280px;
width: 100%;
}
.progress.translate{
position: absolute;
}
.card.horizontal .card-stacked{
overflow: auto;
}
</style>
{% if gaId %}
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id={{ gaId }}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '{{ gaId }}');
</script>
{% endif %}
</head>
<body>
<nav class="blue lighten-1" role="navigation">
<div class="nav-wrapper container"><a id="logo-container" href="/" class="brand-logo"><i class="material-icons">translate</i> LibreTranslate</a>
<ul class="right hide-on-med-and-down">
<li><a href="/docs">API Docs</a></li>
<li><a href="https://github.com/uav4geo/LibreTranslate">GitHub</a></li>
</ul>
<ul id="nav-mobile" class="sidenav">
<li><a href="/docs">API Docs</a></li>
<li><a href="https://github.com/uav4geo/LibreTranslate">GitHub</a></li>
</ul>
<a href="#" data-target="nav-mobile" class="sidenav-trigger"><i class="material-icons">menu</i></a>
</div>
</nav>
<div id="app">
<div class="section no-pad-bot center" v-if="loading">
<div class="container">
<div class="row">
<div class="preloader-wrapper active">
<div class="spinner-layer spinner-blue-only">
<div class="circle-clipper left">
<div class="circle"></div>
</div><div class="gap-patch">
<div class="circle"></div>
</div><div class="circle-clipper right">
<div class="circle"></div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else-if="error">
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<div class="col s12 m7">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<i class="material-icons">warning</i><p> [[ error ]]</p>
</div>
<div class="card-action">
<a href="#" @click="dismissError">Dismiss</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else>
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<h3 class="header center">Translation API</h3>
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<form class="col s12">
<div class="row">
<div class="input-field col s5">
<select class="browser-default" v-model="sourceLang" ref="sourceLangDropdown"@change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
<div class="col s2 center">
<a href="javascript:void(0)" @click="swapLangs" class="waves-effect waves-teal btn-flat btn-large" style="margin-top: 8px;"><i class="material-icons">swap_horiz</i></a>
</div>
<div class="input-field col s5">
<select class="browser-default" v-model="targetLang" ref="targetLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
</div>
<div class="row">
<div class="input-field col s6">
<textarea id="textarea1" class="materialize-textarea" v-model="inputText" @input="handleInput" ref="inputTextarea"></textarea>
<label for="textarea1">Input Text</label>
<div v-if="charactersLimit !== -1">
<label>[[ inputText.length ]] / [[ charactersLimit ]]</label>
</div>
</div>
<div class="input-field col s6">
<div>
<textarea id="textarea2" class="materialize-textarea" v-model="translatedText" ref="translatedTextarea"></textarea>
<label for="textarea2"><div class="progress translate" v-if="loadingTranslation">
<div class="indeterminate"></div>
</div></label>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="section no-pad-bot" id="index-banner">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<div class="row center">
<div class="col s12 m12 l6 left-align">
<p>Request</p>
<p>
<pre class="code"><code class="language-javascript" v-html="$options.filters.highlight(requestCode)">
</code></pre></p>
</div>
<div class="col s12 m12 l6 left-align">
<p>Response</p>
<pre class="code"><code class="language-javascript" v-html="$options.filters.highlight(output)">
</code></pre>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="section no-pad-bot" id="index-banner">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<h3 class="header">Open Source Machine Translation</h3>
<h4 class="header">100% Self-Hosted. No Limits. No Ties to Proprietary Services.</h4>
<br/><a class="waves-effect waves-light btn btn-large" href="https://github.com/uav4geo/LibreTranslate"><i class="material-icons left">cloud_download</i> Download</a>
<br/><br/><br/>
</div>
</div>
</div>
</div>
</div>
</div>
<footer class="page-footer blue darken-3">
<div class="container">
<div class="row">
<div class="col l6 s12">
<h5 class="white-text">LibreTranslate</h5>
<p class="grey-text text-lighten-4">Free and Open Source Machine Translation API</p>
<p class="grey-text text-lighten-4">
Made with ❤ by <a class="grey-text text-lighten-3" href="https://uav4geo.com">UAV4GEO</a> and powered by <a class="grey-text text-lighten-3" href="https://github.com/argosopentech/argos-translate/">Argos Translate</a>
</p>
<p><a class="grey-text text-lighten-4" href="https://www.gnu.org/licenses/agpl-3.0.en.html">License: AGPLv3</a></p>
</div>
<div class="col l4 offset-l2 s12">
<!-- <h5 class="white-text">Links</h5>
<ul>
<li><a class="grey-text text-lighten-3" href="#!">Link 1</a></li>
<li><a class="grey-text text-lighten-3" href="#!">Link 2</a></li>
<li><a class="grey-text text-lighten-3" href="#!">Link 3</a></li>
<li><a class="grey-text text-lighten-3" href="#!">Link 4</a></li>
</ul> -->
<div class="container">
</div>
</div>
</div>
</div>
<div class="footer-copyright center">
</div>
</footer>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/js/materialize.min.js"></script>
<script>
window.Prism = window.Prism || {};
window.Prism.manual = true;
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.22.0/prism.min.js" ></script>
<script>
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
document.addEventListener('DOMContentLoaded', function(){
var elems = document.querySelectorAll('.sidenav');
var instances = M.Sidenav.init(elems);
var app = new Vue({
el: '#app',
delimiters: ['[[',']]'],
data: {
BaseUrl: BaseUrl,
loading: true,
error: "",
langs: [],
settings: {},
sourceLang: "",
targetLang: "",
loadingTranslation: false,
inputText: "",
translatedText: "",
output: "",
charactersLimit: -1,
},
mounted: function(){
var self = this;
var requestSettings = new XMLHttpRequest();
requestSettings.open('GET', BaseUrl + '/frontend/settings', true);
requestSettings.onload = function() {
if (this.status >= 200 && this.status < 400) {
// Success!
self.settings = JSON.parse(this.response);
self.sourceLang = self.settings.language.source.code;
self.targetLang = self.settings.language.target.code;
self.charactersLimit = self.settings.charLimit;
}else {
self.error = "Cannot load /frontend/settings";
self.loading = false;
}
};
requestSettings.onerror = function() {
self.error = "Error while calling /frontend/settings";
self.loading = false;
};
requestSettings.send();
var requestLanguages = new XMLHttpRequest();
requestLanguages.open('GET', BaseUrl + '/languages', true);
requestLanguages.onload = function() {
if (this.status >= 200 && this.status < 400) {
// Success!
                        self.langs = JSON.parse(this.response);
                        if (self.langs.length === 0){
                            self.loading = false;
                            self.error = "No languages available. Did you install the models correctly?"
                            return;
                        }
                        // add the pseudo-language only once we know real models are installed
                        self.langs.push({ name: 'Auto (experimental)', code: 'auto' })
                        self.loading = false;
} else {
self.error = "Cannot load /languages";
self.loading = false;
}
};
requestLanguages.onerror = function() {
self.error = "Error while calling /languages";
self.loading = false;
};
requestLanguages.send();
},
updated: function(){
M.FormSelect.init(this.$refs.sourceLangDropdown);
M.FormSelect.init(this.$refs.targetLangDropdown);
if (this.inputText === ""){
this.$refs.inputTextarea.style.height = 150 + "px";
this.$refs.translatedTextarea.style.height = 150 + "px";
}else{
this.$refs.inputTextarea.style.height = this.$refs.translatedTextarea.style.height = "1px";
this.$refs.inputTextarea.style.height = Math.max(150, this.$refs.inputTextarea.scrollHeight) + "px";
this.$refs.translatedTextarea.style.height = Math.max(150, this.$refs.translatedTextarea.scrollHeight) + "px";
}
if (this.charactersLimit !== -1 && this.inputText.length >= this.charactersLimit){
this.inputText = this.inputText.substring(0, this.charactersLimit);
}
},
computed: {
requestCode: function(){
return ['const res = await fetch("' + this.BaseUrl + '/translate", {',
' method: "POST",',
' body: JSON.stringify({',
' q: "' + this.$options.filters.escape(this.inputText) + '",',
' source: "' + this.$options.filters.escape(this.sourceLang) + '",',
' target: "' + this.$options.filters.escape(this.targetLang) + '"',
' }),',
' headers: {',
' "Content-Type": "application/json"}',
' });',
'',
'console.log(await res.json());'].join("\n");
}
},
filters: {
escape: function(v){
                    return v.replace(/"/g, '\\\"');
},
highlight: function(v){
return Prism.highlight(v, Prism.languages.javascript, 'javascript');
}
},
methods: {
abortPreviousTransRequest: function(){
if (this.transRequest){
this.transRequest.abort();
this.transRequest = null;
}
},
swapLangs: function(){
var t = this.sourceLang;
this.sourceLang = this.targetLang;
this.targetLang = t;
this.inputText = this.translatedText;
this.translatedText = "";
this.handleInput();
},
dismissError: function(){
this.error = '';
},
handleInput: function(e){
if (this.timeout) clearTimeout(this.timeout);
this.timeout = null;
if (this.inputText === ""){
this.translatedText = "";
this.output = "";
this.abortPreviousTransRequest();
this.loadingTranslation = false;
return;
}
var self = this;
self.loadingTranslation = true;
this.timeout = setTimeout(function(){
self.abortPreviousTransRequest();
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
request.open('POST', BaseUrl + '/translate', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
// Success!
if (res.translatedText !== undefined){
self.translatedText = res.translatedText;
self.loadingTranslation = false;
self.output = JSON.stringify(res, null, 4);
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.loadingTranslation = false;
}
};
request.onerror = function() {
self.error = "Error while calling /translate";
self.loadingTranslation = false;
};
request.send(data);
}, 300);
}
}
});
});
</script>
</body>
</html> | vemonet | 9bf3eabb6ea2de400611f047ee952f7a919f95e7 | 06b3c12ff6e49c7b2e2c1cb388c1e8068196d909 | This makes the auto option selectable only in the frontend. The API GET /languages does not contain the auto option.
Also, auto is selectable as a target language | worldworm | 19
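Since auto only makes sense for detecting the source, the reviewer's second point could also be handled server-side by rejecting it as a target. A minimal sketch, assuming a Flask handler shaped like LibreTranslate's `/translate` route (the route body is a placeholder, but the `{"error": ...}` payload matches what the frontend's `handleInput` expects):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/translate", methods=["POST"])
def translate():
    target = request.values.get("target")
    # "auto" is a detection pseudo-language: valid as a source, never as a target
    if target == "auto":
        return jsonify({"error": "auto is not a valid target language"}), 400
    # ... run the actual translation here and return {"translatedText": ...} ...
    return jsonify({"translatedText": ""})
```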
LibreTranslate/LibreTranslate | 12 | add support for auto language | Not perfect; it sometimes picks an unsupported language.
It could be improved by using the function that returns the list of most probable languages, then iterating over it to take the first one we support.
I also needed to manually add `auto` to the list of languages after getting it from Argos Translate
And I don't know why, but it does not seem the default language has been updated even with this change:
```python
def create_app(char_limit=-1, req_limit=-1, ga_id=None, debug=False, frontend_language_source="auto", frontend_language_target="en"):
``` | null | 2021-01-13 14:37:15+00:00 | 2021-01-15 16:36:43+00:00 | app/templates/index.html | <!DOCTYPE html>
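The "first supported candidate" idea above could look roughly like this with the langdetect package (the helper name and the `supported_codes` argument are assumptions for illustration, not LibreTranslate's actual code):

```python
from langdetect import detect_langs

def pick_supported_language(text, supported_codes, fallback="en"):
    # detect_langs() returns candidates sorted by descending probability,
    # so the first supported hit is the most probable one we can translate
    for candidate in detect_langs(text):
        if candidate.lang in supported_codes:
            return candidate.lang
    return fallback
```

With Argos Translate, `supported_codes` could be built from the `code` attribute of the installed languages.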
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="description" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes.">
<meta name="keywords" content="translation,api">
<meta property="og:title" content="LibreTranslate - Free and Open Source Machine Translation API" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://libretranslate.com" />
<meta property="og:image" content="https://user-images.githubusercontent.com/1951843/102724116-32a6df00-42db-11eb-8cc0-129ab39cdfb5.png" />
<meta property="og:description" name="description" class="swiftype" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes."/>
<script src="https://cdn.jsdelivr.net/npm/vue@2"></script>
<!-- Compiled and minified CSS -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css">
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
<link href="https://cdnjs.cloudflare.com/ajax/libs/prism/1.22.0/themes/prism.min.css" rel="stylesheet" />
<style type="text/css">
textarea.materialize-textarea{height: 120px;}
.code{
font-size: 90%;
border-radius: 4px;
padding: 4px;
border: 1px solid #9e9e9e;
background: #fbfbfb;
overflow: auto;
font-family: monospace;
min-height: 280px;
width: 100%;
}
.progress.translate{
position: absolute;
}
.card.horizontal .card-stacked{
overflow: auto;
}
</style>
{% if gaId %}
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id={{ gaId }}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '{{ gaId }}');
</script>
{% endif %}
</head>
<body>
<nav class="blue lighten-1" role="navigation">
<div class="nav-wrapper container"><a id="logo-container" href="/" class="brand-logo"><i class="material-icons">translate</i> LibreTranslate</a>
<ul class="right hide-on-med-and-down">
<li><a href="/docs">API Docs</a></li>
<li><a href="https://github.com/uav4geo/LibreTranslate">GitHub</a></li>
</ul>
<ul id="nav-mobile" class="sidenav">
<li><a href="/docs">API Docs</a></li>
<li><a href="https://github.com/uav4geo/LibreTranslate">GitHub</a></li>
</ul>
<a href="#" data-target="nav-mobile" class="sidenav-trigger"><i class="material-icons">menu</i></a>
</div>
</nav>
<div id="app">
<div class="section no-pad-bot center" v-if="loading">
<div class="container">
<div class="row">
<div class="preloader-wrapper active">
<div class="spinner-layer spinner-blue-only">
<div class="circle-clipper left">
<div class="circle"></div>
</div><div class="gap-patch">
<div class="circle"></div>
</div><div class="circle-clipper right">
<div class="circle"></div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else-if="error">
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<div class="col s12 m7">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<i class="material-icons">warning</i><p> [[ error ]]</p>
</div>
<div class="card-action">
<a href="#" @click="dismissError">Dismiss</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else>
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<h3 class="header center">Translation API</h3>
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<form class="col s12">
<div class="row">
<div class="input-field col s5">
<select class="browser-default" v-model="sourceLang" ref="sourceLangDropdown"@change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
<div class="col s2 center">
<a href="javascript:void(0)" @click="swapLangs" class="waves-effect waves-teal btn-flat btn-large" style="margin-top: 8px;"><i class="material-icons">swap_horiz</i></a>
</div>
<div class="input-field col s5">
<select class="browser-default" v-model="targetLang" ref="targetLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
</div>
<div class="row">
<div class="input-field col s6">
<textarea id="textarea1" class="materialize-textarea" v-model="inputText" @input="handleInput" ref="inputTextarea"></textarea>
<label for="textarea1">Input Text</label>
<div v-if="charactersLimit !== -1">
<label>[[ inputText.length ]] / [[ charactersLimit ]]</label>
</div>
</div>
<div class="input-field col s6">
<div>
<textarea id="textarea2" class="materialize-textarea" v-model="translatedText" ref="translatedTextarea"></textarea>
<label for="textarea2"><div class="progress translate" v-if="loadingTranslation">
<div class="indeterminate"></div>
</div></label>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="section no-pad-bot" id="index-banner">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<div class="row center">
<div class="col s12 m12 l6 left-align">
<p>Request</p>
<p>
<pre class="code"><code class="language-javascript" v-html="$options.filters.highlight(requestCode)">
</code></pre></p>
</div>
<div class="col s12 m12 l6 left-align">
<p>Response</p>
<pre class="code"><code class="language-javascript" v-html="$options.filters.highlight(output)">
</code></pre>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="section no-pad-bot" id="index-banner">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<h3 class="header">Open Source Machine Translation</h3>
<h4 class="header">100% Self-Hosted. No Limits. No Ties to Proprietary Services.</h4>
<br/><a class="waves-effect waves-light btn btn-large" href="https://github.com/uav4geo/LibreTranslate"><i class="material-icons left">cloud_download</i> Download</a>
<br/><br/><br/>
</div>
</div>
</div>
</div>
</div>
</div>
<footer class="page-footer blue darken-3">
<div class="container">
<div class="row">
<div class="col l6 s12">
<h5 class="white-text">LibreTranslate</h5>
<p class="grey-text text-lighten-4">Free and Open Source Machine Translation API</p>
<p class="grey-text text-lighten-4">
Made with ❤ by <a class="grey-text text-lighten-3" href="https://uav4geo.com">UAV4GEO</a> and powered by <a class="grey-text text-lighten-3" href="https://github.com/argosopentech/argos-translate/">Argos Translate</a>
</p>
<p><a class="grey-text text-lighten-4" href="https://www.gnu.org/licenses/agpl-3.0.en.html">License: AGPLv3</a></p>
</div>
<div class="col l4 offset-l2 s12">
<!-- <h5 class="white-text">Links</h5>
<ul>
<li><a class="grey-text text-lighten-3" href="#!">Link 1</a></li>
<li><a class="grey-text text-lighten-3" href="#!">Link 2</a></li>
<li><a class="grey-text text-lighten-3" href="#!">Link 3</a></li>
<li><a class="grey-text text-lighten-3" href="#!">Link 4</a></li>
</ul> -->
<div class="container">
</div>
</div>
</div>
</div>
<div class="footer-copyright center">
</div>
</footer>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/js/materialize.min.js"></script>
<script>
window.Prism = window.Prism || {};
window.Prism.manual = true;
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.22.0/prism.min.js" ></script>
<script>
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
document.addEventListener('DOMContentLoaded', function(){
var elems = document.querySelectorAll('.sidenav');
var instances = M.Sidenav.init(elems);
var app = new Vue({
el: '#app',
delimiters: ['[[',']]'],
data: {
BaseUrl: BaseUrl,
loading: true,
error: "",
langs: [],
settings: {},
sourceLang: "",
targetLang: "",
loadingTranslation: false,
inputText: "",
translatedText: "",
output: "",
charactersLimit: -1,
},
mounted: function(){
var self = this;
var requestSettings = new XMLHttpRequest();
requestSettings.open('GET', BaseUrl + '/frontend/settings', true);
requestSettings.onload = function() {
if (this.status >= 200 && this.status < 400) {
// Success!
self.settings = JSON.parse(this.response);
self.sourceLang = self.settings.language.source.code;
self.targetLang = self.settings.language.target.code;
self.charactersLimit = self.settings.charLimit;
}else {
self.error = "Cannot load /frontend/settings";
self.loading = false;
}
};
requestSettings.onerror = function() {
self.error = "Error while calling /frontend/settings";
self.loading = false;
};
requestSettings.send();
var requestLanguages = new XMLHttpRequest();
requestLanguages.open('GET', BaseUrl + '/languages', true);
requestLanguages.onload = function() {
if (this.status >= 200 && this.status < 400) {
// Success!
self.langs = JSON.parse(this.response);
if (self.langs.length === 0){
self.loading = false;
self.error = "No languages available. Did you install the models correctly?"
return;
}
self.loading = false;
} else {
self.error = "Cannot load /languages";
self.loading = false;
}
};
requestLanguages.onerror = function() {
self.error = "Error while calling /languages";
self.loading = false;
};
requestLanguages.send();
},
updated: function(){
M.FormSelect.init(this.$refs.sourceLangDropdown);
M.FormSelect.init(this.$refs.targetLangDropdown);
if (this.inputText === ""){
this.$refs.inputTextarea.style.height = 150 + "px";
this.$refs.translatedTextarea.style.height = 150 + "px";
}else{
this.$refs.inputTextarea.style.height = this.$refs.translatedTextarea.style.height = "1px";
this.$refs.inputTextarea.style.height = Math.max(150, this.$refs.inputTextarea.scrollHeight) + "px";
this.$refs.translatedTextarea.style.height = Math.max(150, this.$refs.translatedTextarea.scrollHeight) + "px";
}
if (this.charactersLimit !== -1 && this.inputText.length >= this.charactersLimit){
this.inputText = this.inputText.substring(0, this.charactersLimit);
}
},
computed: {
requestCode: function(){
return ['const res = await fetch("' + this.BaseUrl + '/translate", {',
' method: "POST",',
' body: JSON.stringify({',
' q: "' + this.$options.filters.escape(this.inputText) + '",',
' source: "' + this.$options.filters.escape(this.sourceLang) + '",',
' target: "' + this.$options.filters.escape(this.targetLang) + '"',
' }),',
' headers: {',
' "Content-Type": "application/json"}',
' });',
'',
'console.log(await res.json());'].join("\n");
}
},
filters: {
escape: function(v){
                    return v.replace(/"/g, '\\\"');
},
highlight: function(v){
return Prism.highlight(v, Prism.languages.javascript, 'javascript');
}
},
methods: {
abortPreviousTransRequest: function(){
if (this.transRequest){
this.transRequest.abort();
this.transRequest = null;
}
},
swapLangs: function(){
var t = this.sourceLang;
this.sourceLang = this.targetLang;
this.targetLang = t;
this.inputText = this.translatedText;
this.translatedText = "";
this.handleInput();
},
dismissError: function(){
this.error = '';
},
handleInput: function(e){
if (this.timeout) clearTimeout(this.timeout);
this.timeout = null;
if (this.inputText === ""){
this.translatedText = "";
this.output = "";
this.abortPreviousTransRequest();
this.loadingTranslation = false;
return;
}
var self = this;
self.loadingTranslation = true;
this.timeout = setTimeout(function(){
self.abortPreviousTransRequest();
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
request.open('POST', BaseUrl + '/translate', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
// Success!
if (res.translatedText !== undefined){
self.translatedText = res.translatedText;
self.loadingTranslation = false;
self.output = JSON.stringify(res, null, 4);
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.loadingTranslation = false;
}
};
request.onerror = function() {
self.error = "Error while calling /translate";
self.loadingTranslation = false;
};
request.send(data);
}, 300);
}
}
});
});
</script>
</body>
</html> | <!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LibreTranslate - Free and Open Source Machine Translation API</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="description" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes.">
<meta name="keywords" content="translation,api">
<meta property="og:title" content="LibreTranslate - Free and Open Source Machine Translation API" />
<meta property="og:type" content="website" />
<meta property="og:url" content="https://libretranslate.com" />
<meta property="og:image" content="https://user-images.githubusercontent.com/1951843/102724116-32a6df00-42db-11eb-8cc0-129ab39cdfb5.png" />
<meta property="og:description" name="description" class="swiftype" content="Free and Open Source Machine Translation API. 100% self-hosted, no limits, no ties to proprietary services. Run your own API server in just a few minutes."/>
<script src="https://cdn.jsdelivr.net/npm/vue@2"></script>
<!-- Compiled and minified CSS -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/css/materialize.min.css">
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
<link href="https://cdnjs.cloudflare.com/ajax/libs/prism/1.22.0/themes/prism.min.css" rel="stylesheet" />
<style type="text/css">
textarea.materialize-textarea{height: 120px;}
.code{
font-size: 90%;
border-radius: 4px;
padding: 4px;
border: 1px solid #9e9e9e;
background: #fbfbfb;
overflow: auto;
font-family: monospace;
min-height: 280px;
width: 100%;
}
.progress.translate{
position: absolute;
}
.card.horizontal .card-stacked{
overflow: auto;
}
</style>
{% if gaId %}
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id={{ gaId }}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '{{ gaId }}');
</script>
{% endif %}
</head>
<body>
<nav class="blue lighten-1" role="navigation">
<div class="nav-wrapper container"><a id="logo-container" href="/" class="brand-logo"><i class="material-icons">translate</i> LibreTranslate</a>
<ul class="right hide-on-med-and-down">
<li><a href="/docs">API Docs</a></li>
<li><a href="https://github.com/uav4geo/LibreTranslate">GitHub</a></li>
</ul>
<ul id="nav-mobile" class="sidenav">
<li><a href="/docs">API Docs</a></li>
<li><a href="https://github.com/uav4geo/LibreTranslate">GitHub</a></li>
</ul>
<a href="#" data-target="nav-mobile" class="sidenav-trigger"><i class="material-icons">menu</i></a>
</div>
</nav>
<div id="app">
<div class="section no-pad-bot center" v-if="loading">
<div class="container">
<div class="row">
<div class="preloader-wrapper active">
<div class="spinner-layer spinner-blue-only">
<div class="circle-clipper left">
<div class="circle"></div>
</div><div class="gap-patch">
<div class="circle"></div>
</div><div class="circle-clipper right">
<div class="circle"></div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else-if="error">
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<div class="col s12 m7">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<i class="material-icons">warning</i><p> [[ error ]]</p>
</div>
<div class="card-action">
<a href="#" @click="dismissError">Dismiss</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div v-else>
<div class="section no-pad-bot">
<div class="container">
<div class="row">
<h3 class="header center">Translation API</h3>
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<form class="col s12">
<div class="row">
<div class="input-field col s5">
<select class="browser-default" v-model="sourceLang" ref="sourceLangDropdown"@change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
<div class="col s2 center">
<a href="javascript:void(0)" @click="swapLangs" class="waves-effect waves-teal btn-flat btn-large" style="margin-top: 8px;"><i class="material-icons">swap_horiz</i></a>
</div>
<div class="input-field col s5">
<select class="browser-default" v-model="targetLang" ref="targetLangDropdown" @change="handleInput">
<template v-for="option in langs">
<option :value="option.code">[[ option.name ]]</option>
</template>
</select>
</div>
</div>
<div class="row">
<div class="input-field col s6">
<textarea id="textarea1" class="materialize-textarea" v-model="inputText" @input="handleInput" ref="inputTextarea"></textarea>
<label for="textarea1">Input Text</label>
<div v-if="charactersLimit !== -1">
<label>[[ inputText.length ]] / [[ charactersLimit ]]</label>
</div>
</div>
<div class="input-field col s6">
<div>
<textarea id="textarea2" class="materialize-textarea" v-model="translatedText" ref="translatedTextarea"></textarea>
<label for="textarea2"><div class="progress translate" v-if="loadingTranslation">
<div class="indeterminate"></div>
</div></label>
</div>
</div>
</div>
</form>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="section no-pad-bot" id="index-banner">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<div class="card horizontal">
<div class="card-stacked">
<div class="card-content">
<div class="row center">
<div class="col s12 m12 l6 left-align">
<p>Request</p>
<p>
<pre class="code"><code class="language-javascript" v-html="$options.filters.highlight(requestCode)">
</code></pre></p>
</div>
<div class="col s12 m12 l6 left-align">
<p>Response</p>
<pre class="code"><code class="language-javascript" v-html="$options.filters.highlight(output)">
</code></pre>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="section no-pad-bot" id="index-banner">
<div class="container">
<div class="row center">
<div class="col s12 m12">
<h3 class="header">Open Source Machine Translation</h3>
<h4 class="header">100% Self-Hosted. No Limits. No Ties to Proprietary Services.</h4>
<br/><a class="waves-effect waves-light btn btn-large" href="https://github.com/uav4geo/LibreTranslate"><i class="material-icons left">cloud_download</i> Download</a>
<br/><br/><br/>
</div>
</div>
</div>
</div>
</div>
</div>
<footer class="page-footer blue darken-3">
<div class="container">
<div class="row">
<div class="col l6 s12">
<h5 class="white-text">LibreTranslate</h5>
<p class="grey-text text-lighten-4">Free and Open Source Machine Translation API</p>
<p class="grey-text text-lighten-4">
Made with ❤ by <a class="grey-text text-lighten-3" href="https://uav4geo.com">UAV4GEO</a> and powered by <a class="grey-text text-lighten-3" href="https://github.com/argosopentech/argos-translate/">Argos Translate</a>
</p>
<p><a class="grey-text text-lighten-4" href="https://www.gnu.org/licenses/agpl-3.0.en.html">License: AGPLv3</a></p>
</div>
<div class="col l4 offset-l2 s12">
<!-- <h5 class="white-text">Links</h5>
<ul>
<li><a class="grey-text text-lighten-3" href="#!">Link 1</a></li>
<li><a class="grey-text text-lighten-3" href="#!">Link 2</a></li>
<li><a class="grey-text text-lighten-3" href="#!">Link 3</a></li>
<li><a class="grey-text text-lighten-3" href="#!">Link 4</a></li>
</ul> -->
<div class="container">
</div>
</div>
</div>
</div>
<div class="footer-copyright center">
</div>
</footer>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/1.0.0/js/materialize.min.js"></script>
<script>
window.Prism = window.Prism || {};
window.Prism.manual = true;
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.22.0/prism.min.js" ></script>
<script>
// API host/endpoint
var BaseUrl = window.location.protocol + "//" + window.location.host;
document.addEventListener('DOMContentLoaded', function(){
var elems = document.querySelectorAll('.sidenav');
var instances = M.Sidenav.init(elems);
var app = new Vue({
el: '#app',
delimiters: ['[[',']]'],
data: {
BaseUrl: BaseUrl,
loading: true,
error: "",
langs: [],
settings: {},
sourceLang: "",
targetLang: "",
loadingTranslation: false,
inputText: "",
translatedText: "",
output: "",
charactersLimit: -1,
},
mounted: function(){
var self = this;
var requestSettings = new XMLHttpRequest();
requestSettings.open('GET', BaseUrl + '/frontend/settings', true);
requestSettings.onload = function() {
if (this.status >= 200 && this.status < 400) {
// Success!
self.settings = JSON.parse(this.response);
self.sourceLang = self.settings.language.source.code;
self.targetLang = self.settings.language.target.code;
self.charactersLimit = self.settings.charLimit;
}else {
self.error = "Cannot load /frontend/settings";
self.loading = false;
}
};
requestSettings.onerror = function() {
self.error = "Error while calling /frontend/settings";
self.loading = false;
};
requestSettings.send();
var requestLanguages = new XMLHttpRequest();
requestLanguages.open('GET', BaseUrl + '/languages', true);
requestLanguages.onload = function() {
if (this.status >= 200 && this.status < 400) {
// Success!
                        self.langs = JSON.parse(this.response);
                        if (self.langs.length === 0){
                            self.loading = false;
                            self.error = "No languages available. Did you install the models correctly?"
                            return;
                        }
                        // add the pseudo-language only once we know real models are installed
                        self.langs.push({ name: 'Auto (experimental)', code: 'auto' })
                        self.loading = false;
} else {
self.error = "Cannot load /languages";
self.loading = false;
}
};
requestLanguages.onerror = function() {
self.error = "Error while calling /languages";
self.loading = false;
};
requestLanguages.send();
},
updated: function(){
M.FormSelect.init(this.$refs.sourceLangDropdown);
M.FormSelect.init(this.$refs.targetLangDropdown);
if (this.inputText === ""){
this.$refs.inputTextarea.style.height = 150 + "px";
this.$refs.translatedTextarea.style.height = 150 + "px";
}else{
this.$refs.inputTextarea.style.height = this.$refs.translatedTextarea.style.height = "1px";
this.$refs.inputTextarea.style.height = Math.max(150, this.$refs.inputTextarea.scrollHeight) + "px";
this.$refs.translatedTextarea.style.height = Math.max(150, this.$refs.translatedTextarea.scrollHeight) + "px";
}
if (this.charactersLimit !== -1 && this.inputText.length >= this.charactersLimit){
this.inputText = this.inputText.substring(0, this.charactersLimit);
}
},
computed: {
requestCode: function(){
return ['const res = await fetch("' + this.BaseUrl + '/translate", {',
' method: "POST",',
' body: JSON.stringify({',
' q: "' + this.$options.filters.escape(this.inputText) + '",',
' source: "' + this.$options.filters.escape(this.sourceLang) + '",',
' target: "' + this.$options.filters.escape(this.targetLang) + '"',
' }),',
' headers: {',
' "Content-Type": "application/json"}',
' });',
'',
'console.log(await res.json());'].join("\n");
}
},
filters: {
escape: function(v){
                    return v.replace(/"/g, '\\\"');
},
highlight: function(v){
return Prism.highlight(v, Prism.languages.javascript, 'javascript');
}
},
methods: {
abortPreviousTransRequest: function(){
if (this.transRequest){
this.transRequest.abort();
this.transRequest = null;
}
},
swapLangs: function(){
var t = this.sourceLang;
this.sourceLang = this.targetLang;
this.targetLang = t;
this.inputText = this.translatedText;
this.translatedText = "";
this.handleInput();
},
dismissError: function(){
this.error = '';
},
handleInput: function(e){
if (this.timeout) clearTimeout(this.timeout);
this.timeout = null;
if (this.inputText === ""){
this.translatedText = "";
this.output = "";
this.abortPreviousTransRequest();
this.loadingTranslation = false;
return;
}
var self = this;
self.loadingTranslation = true;
this.timeout = setTimeout(function(){
self.abortPreviousTransRequest();
var request = new XMLHttpRequest();
self.transRequest = request;
var data = new FormData();
data.append("q", self.inputText);
data.append("source", self.sourceLang);
data.append("target", self.targetLang);
request.open('POST', BaseUrl + '/translate', true);
request.onload = function() {
try{
var res = JSON.parse(this.response);
// Success!
if (res.translatedText !== undefined){
self.translatedText = res.translatedText;
self.loadingTranslation = false;
self.output = JSON.stringify(res, null, 4);
}else{
throw new Error(res.error || "Unknown error");
}
}catch(e){
self.error = e.message;
self.loadingTranslation = false;
}
};
request.onerror = function() {
self.error = "Error while calling /translate";
self.loadingTranslation = false;
};
request.send(data);
}, 300);
}
}
});
});
</script>
</body>
</html> | vemonet | 9bf3eabb6ea2de400611f047ee952f7a919f95e7 | 06b3c12ff6e49c7b2e2c1cb388c1e8068196d909 | Yep, the issue is that LibreTranslate uses the list loaded by Argos, and I don't want to change this Argos list, so we need to add `auto` manually to the UI and API lists (got to do the last one, thanks for the notice!) | vemonet | 20
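For the API-side half mentioned in that comment, appending the pseudo-language to the `/languages` response could look roughly like this (a sketch assuming Argos Translate's `load_installed_languages()` helper and a Flask route; the exact route shape is an assumption, not LibreTranslate's actual code):

```python
from argostranslate import translate
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/languages", methods=["GET"])
def languages():
    langs = [
        {"code": lang.code, "name": lang.name}
        for lang in translate.load_installed_languages()
    ]
    # Mirror the frontend: advertise auto-detection alongside the real languages
    langs.append({"code": "auto", "name": "Auto (experimental)"})
    return jsonify(langs)
```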
sqlalchemy/alembic | 1,310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
<!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
The misspellings have been reported at https://github.com/jsoref/alembic/actions/runs/6141700632
The action reports that the changes in this PR would make it happy: https://github.com/jsoref/alembic/actions/runs/6141700754
### Checklist
<!-- go over the following points. Check them with an `x` if they apply (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. One-line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. One-line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | alembic/op.pyi | # ### this file stubs are generated by tools/write_pyi.py - do not edit ###
# ### imports are manually managed
from __future__ import annotations
from contextlib import contextmanager
from typing import Any
from typing import Awaitable
from typing import Callable
from typing import Dict
from typing import Iterator
from typing import List
from typing import Literal
from typing import Mapping
from typing import Optional
from typing import Sequence
from typing import Tuple
from typing import Type
from typing import TYPE_CHECKING
from typing import TypeVar
from typing import Union
from sqlalchemy.sql.expression import TableClause
from sqlalchemy.sql.expression import Update
if TYPE_CHECKING:
from sqlalchemy.engine import Connection
from sqlalchemy.sql.elements import ColumnElement
from sqlalchemy.sql.elements import conv
from sqlalchemy.sql.elements import TextClause
from sqlalchemy.sql.functions import Function
from sqlalchemy.sql.schema import Column
from sqlalchemy.sql.schema import Computed
from sqlalchemy.sql.schema import Identity
from sqlalchemy.sql.schema import SchemaItem
from sqlalchemy.sql.schema import Table
from sqlalchemy.sql.type_api import TypeEngine
from sqlalchemy.util import immutabledict
from .operations.ops import BatchOperations
from .operations.ops import MigrateOperation
from .runtime.migration import MigrationContext
from .util.sqla_compat import _literal_bindparam
_T = TypeVar("_T")
### end imports ###
def add_column(
table_name: str, column: Column[Any], *, schema: Optional[str] = None
) -> None:
"""Issue an "add column" instruction using the current
migration context.
e.g.::
from alembic import op
from sqlalchemy import Column, String
op.add_column("organization", Column("name", String()))
The :meth:`.Operations.add_column` method typically corresponds
to the SQL command "ALTER TABLE... ADD COLUMN". Within the scope
of this command, the column's name, datatype, nullability,
and optional server-generated defaults may be indicated.
.. note::
With the exception of NOT NULL constraints or single-column FOREIGN
KEY constraints, other kinds of constraints such as PRIMARY KEY,
UNIQUE or CHECK constraints **cannot** be generated using this
method; for these constraints, refer to operations such as
:meth:`.Operations.create_primary_key` and
:meth:`.Operations.create_check_constraint`. In particular, the
following :class:`~sqlalchemy.schema.Column` parameters are
**ignored**:
* :paramref:`~sqlalchemy.schema.Column.primary_key` - SQL databases
typically do not support an ALTER operation that can add
individual columns one at a time to an existing primary key
constraint, therefore it's less ambiguous to use the
:meth:`.Operations.create_primary_key` method, which assumes no
existing primary key constraint is present.
* :paramref:`~sqlalchemy.schema.Column.unique` - use the
:meth:`.Operations.create_unique_constraint` method
* :paramref:`~sqlalchemy.schema.Column.index` - use the
:meth:`.Operations.create_index` method
The provided :class:`~sqlalchemy.schema.Column` object may include a
:class:`~sqlalchemy.schema.ForeignKey` constraint directive,
referencing a remote table name. For this specific type of constraint,
Alembic will automatically emit a second ALTER statement in order to
add the single-column FOREIGN KEY constraint separately::
from alembic import op
from sqlalchemy import Column, INTEGER, ForeignKey
op.add_column(
"organization",
Column("account_id", INTEGER, ForeignKey("accounts.id")),
)
The column argument passed to :meth:`.Operations.add_column` is a
:class:`~sqlalchemy.schema.Column` construct, used in the same way it's
used in SQLAlchemy. In particular, values or functions to be indicated
as producing the column's default value on the database side are
specified using the ``server_default`` parameter, and not ``default``
which only specifies Python-side defaults::
from alembic import op
from sqlalchemy import Column, TIMESTAMP, func
# specify "DEFAULT NOW" along with the column add
op.add_column(
"account",
Column("timestamp", TIMESTAMP, server_default=func.now()),
)
:param table_name: String name of the parent table.
:param column: a :class:`sqlalchemy.schema.Column` object
representing the new column.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
"""
def alter_column(
table_name: str,
column_name: str,
*,
nullable: Optional[bool] = None,
comment: Union[str, Literal[False], None] = False,
server_default: Any = False,
new_column_name: Optional[str] = None,
type_: Union[TypeEngine, Type[TypeEngine], None] = None,
existing_type: Union[TypeEngine, Type[TypeEngine], None] = None,
existing_server_default: Union[
str, bool, Identity, Computed, None
] = False,
existing_nullable: Optional[bool] = None,
existing_comment: Optional[str] = None,
schema: Optional[str] = None,
**kw: Any,
) -> None:
r"""Issue an "alter column" instruction using the
current migration context.
Generally, only that aspect of the column which
is being changed, i.e. name, type, nullability,
default, needs to be specified. Multiple changes
can also be specified at once and the backend should
"do the right thing", emitting each change either
separately or together as the backend allows.
MySQL has special requirements here, since MySQL
cannot ALTER a column without a full specification.
When producing MySQL-compatible migration files,
it is recommended that the ``existing_type``,
``existing_server_default``, and ``existing_nullable``
parameters be present, if not being altered.
Type changes which are against the SQLAlchemy
"schema" types :class:`~sqlalchemy.types.Boolean`
and :class:`~sqlalchemy.types.Enum` may also
add or drop constraints which accompany those
types on backends that don't support them natively.
The ``existing_type`` argument is
used in this case to identify and remove a previous
constraint that was bound to the type object.
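
    For example, to make an existing column non-nullable while restating
    its current type, as MySQL requires (the table, column, and type here
    are illustrative only)::

        from alembic import op
        import sqlalchemy as sa

        op.alter_column(
            "account",
            "name",
            existing_type=sa.VARCHAR(50),
            nullable=False,
        )
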
:param table_name: string name of the target table.
:param column_name: string name of the target column,
as it exists before the operation begins.
:param nullable: Optional; specify ``True`` or ``False``
to alter the column's nullability.
:param server_default: Optional; specify a string
SQL expression, :func:`~sqlalchemy.sql.expression.text`,
or :class:`~sqlalchemy.schema.DefaultClause` to indicate
an alteration to the column's default value.
Set to ``None`` to have the default removed.
:param comment: optional string text of a new comment to add to the
column.
:param new_column_name: Optional; specify a string name here to
indicate the new name within a column rename operation.
:param type\_: Optional; a :class:`~sqlalchemy.types.TypeEngine`
type object to specify a change to the column's type.
For SQLAlchemy types that also indicate a constraint (i.e.
:class:`~sqlalchemy.types.Boolean`, :class:`~sqlalchemy.types.Enum`),
the constraint is also generated.
:param autoincrement: set the ``AUTO_INCREMENT`` flag of the column;
currently understood by the MySQL dialect.
:param existing_type: Optional; a
:class:`~sqlalchemy.types.TypeEngine`
type object to specify the previous type. This
is required for all MySQL column alter operations that
don't otherwise specify a new type, as well as for
when nullability is being changed on a SQL Server
column. It is also used if the type is a so-called
    SQLAlchemy "schema" type which may define a constraint (i.e.
:class:`~sqlalchemy.types.Boolean`,
:class:`~sqlalchemy.types.Enum`),
so that the constraint can be dropped.
:param existing_server_default: Optional; The existing
default value of the column. Required on MySQL if
an existing default is not being changed; else MySQL
removes the default.
:param existing_nullable: Optional; the existing nullability
of the column. Required on MySQL if the existing nullability
is not being changed; else MySQL sets this to NULL.
:param existing_autoincrement: Optional; the existing autoincrement
of the column. Used for MySQL's system of altering a column
that specifies ``AUTO_INCREMENT``.
:param existing_comment: string text of the existing comment on the
column to be maintained. Required on MySQL if the existing comment
on the column is not being changed.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
:param postgresql_using: String argument which will indicate a
SQL expression to render within the Postgresql-specific USING clause
within ALTER COLUMN. This string is taken directly as raw SQL which
must explicitly include any necessary quoting or escaping of tokens
within the expression.
"""
@contextmanager
def batch_alter_table(
table_name: str,
schema: Optional[str] = None,
recreate: Literal["auto", "always", "never"] = "auto",
partial_reordering: Optional[tuple] = None,
copy_from: Optional[Table] = None,
table_args: Tuple[Any, ...] = (),
table_kwargs: Mapping[str, Any] = immutabledict({}),
reflect_args: Tuple[Any, ...] = (),
reflect_kwargs: Mapping[str, Any] = immutabledict({}),
naming_convention: Optional[Dict[str, str]] = None,
) -> Iterator[BatchOperations]:
"""Invoke a series of per-table migrations in batch.
Batch mode allows a series of operations specific to a table
to be syntactically grouped together, and allows for alternate
modes of table migration, in particular the "recreate" style of
migration required by SQLite.
"recreate" style is as follows:
1. A new table is created with the new specification, based on the
migration directives within the batch, using a temporary name.
2. The data is copied from the existing table to the new table.
3. The existing table is dropped.
4. The new table is renamed to the existing table name.
The directive by default will only use "recreate" style on the
SQLite backend, and only if directives are present which require
this form, e.g. anything other than ``add_column()``. The batch
operation on other backends will proceed using standard ALTER TABLE
operations.
The method is used as a context manager, which returns an instance
of :class:`.BatchOperations`; this object is the same as
:class:`.Operations` except that table names and schema names
are omitted. E.g.::
with op.batch_alter_table("some_table") as batch_op:
batch_op.add_column(Column("foo", Integer))
batch_op.drop_column("bar")
The operations within the context manager are invoked at once
when the context is ended. When run against SQLite, if the
migrations include operations not supported by SQLite's ALTER TABLE,
the entire table will be copied to a new one with the new
specification, moving all data across as well.
The copy operation by default uses reflection to retrieve the current
structure of the table, and therefore :meth:`.batch_alter_table`
in this mode requires that the migration is run in "online" mode.
The ``copy_from`` parameter may be passed which refers to an existing
:class:`.Table` object, which will bypass this reflection step.
.. note:: The table copy operation will currently not copy
CHECK constraints, and may not copy UNIQUE constraints that are
unnamed, as is possible on SQLite. See the section
:ref:`sqlite_batch_constraints` for workarounds.
:param table_name: name of table
:param schema: optional schema name.
:param recreate: under what circumstances the table should be
recreated. At its default of ``"auto"``, the SQLite dialect will
recreate the table if any operations other than ``add_column()``,
``create_index()``, or ``drop_index()`` are
present. Other options include ``"always"`` and ``"never"``.
:param copy_from: optional :class:`~sqlalchemy.schema.Table` object
that will act as the structure of the table being copied. If omitted,
table reflection is used to retrieve the structure of the table.
.. seealso::
:ref:`batch_offline_mode`
:paramref:`~.Operations.batch_alter_table.reflect_args`
:paramref:`~.Operations.batch_alter_table.reflect_kwargs`
:param reflect_args: a sequence of additional positional arguments that
will be applied to the table structure being reflected / copied;
this may be used to pass column and constraint overrides to the
table that will be reflected, in lieu of passing the whole
:class:`~sqlalchemy.schema.Table` using
:paramref:`~.Operations.batch_alter_table.copy_from`.
:param reflect_kwargs: a dictionary of additional keyword arguments
that will be applied to the table structure being copied; this may be
used to pass additional table and reflection options to the table that
will be reflected, in lieu of passing the whole
:class:`~sqlalchemy.schema.Table` using
:paramref:`~.Operations.batch_alter_table.copy_from`.
:param table_args: a sequence of additional positional arguments that
will be applied to the new :class:`~sqlalchemy.schema.Table` when
created, in addition to those copied from the source table.
This may be used to provide additional constraints such as CHECK
constraints that may not be reflected.
:param table_kwargs: a dictionary of additional keyword arguments
that will be applied to the new :class:`~sqlalchemy.schema.Table`
when created, in addition to those copied from the source table.
This may be used to provide for additional table options that may
not be reflected.
:param naming_convention: a naming convention dictionary of the form
described at :ref:`autogen_naming_conventions` which will be applied
to the :class:`~sqlalchemy.schema.MetaData` during the reflection
process. This is typically required if one wants to drop SQLite
constraints, as these constraints will not have names when
reflected on this backend. Requires SQLAlchemy **0.9.4** or greater.
.. seealso::
:ref:`dropping_sqlite_foreign_keys`
:param partial_reordering: a list of tuples, each suggesting a desired
ordering of two or more columns in the newly created table. Requires
that :paramref:`.batch_alter_table.recreate` is set to ``"always"``.
Examples, given a table with columns "a", "b", "c", and "d":
Specify the order of all columns::
with op.batch_alter_table(
"some_table",
recreate="always",
partial_reordering=[("c", "d", "a", "b")],
) as batch_op:
pass
Ensure "d" appears before "c", and "b", appears before "a"::
with op.batch_alter_table(
"some_table",
recreate="always",
partial_reordering=[("d", "c"), ("b", "a")],
) as batch_op:
pass
The ordering of columns not included in the partial_reordering
set is undefined. Therefore it is best to specify the complete
ordering of all columns.
.. note:: batch mode requires SQLAlchemy 0.8 or above.
.. seealso::
:ref:`batch_migrations`
"""
def bulk_insert(
table: Union[Table, TableClause],
rows: List[dict],
*,
multiinsert: bool = True,
) -> None:
"""Issue a "bulk insert" operation using the current
migration context.
This provides a means of representing an INSERT of multiple rows
which works equally well in the context of executing on a live
connection as well as that of generating a SQL script. In the
case of a SQL script, the values are rendered inline into the
statement.
e.g.::
from alembic import op
from datetime import date
from sqlalchemy.sql import table, column
from sqlalchemy import String, Integer, Date
# Create an ad-hoc table to use for the insert statement.
accounts_table = table(
"account",
column("id", Integer),
column("name", String),
column("create_date", Date),
)
op.bulk_insert(
accounts_table,
[
{
"id": 1,
"name": "John Smith",
"create_date": date(2010, 10, 5),
},
{
"id": 2,
"name": "Ed Williams",
"create_date": date(2007, 5, 27),
},
{
"id": 3,
"name": "Wendy Jones",
"create_date": date(2008, 8, 15),
},
],
)
When using --sql mode, some datatypes may not render inline
automatically, such as dates and other special types. When this
issue is present, :meth:`.Operations.inline_literal` may be used::
op.bulk_insert(
accounts_table,
[
{
"id": 1,
"name": "John Smith",
"create_date": op.inline_literal("2010-10-05"),
},
{
"id": 2,
"name": "Ed Williams",
"create_date": op.inline_literal("2007-05-27"),
},
{
"id": 3,
"name": "Wendy Jones",
"create_date": op.inline_literal("2008-08-15"),
},
],
multiinsert=False,
)
When using :meth:`.Operations.inline_literal` in conjunction with
:meth:`.Operations.bulk_insert`, in order for the statement to work
in "online" (e.g. non --sql) mode, the
:paramref:`~.Operations.bulk_insert.multiinsert`
flag should be set to ``False``, which will have the effect of
individual INSERT statements being emitted to the database, each
with a distinct VALUES clause, so that the "inline" values can
still be rendered, rather than attempting to pass the values
as bound parameters.
:param table: a table object which represents the target of the INSERT.
:param rows: a list of dictionaries indicating rows.
:param multiinsert: when at its default of True and --sql mode is not
enabled, the INSERT statement will be executed using
"executemany()" style, where all elements in the list of
dictionaries are passed as bound parameters in a single
list. Setting this to False results in individual INSERT
statements being emitted per parameter set, and is needed
in those cases where non-literal values are present in the
parameter sets.
"""
def create_check_constraint(
constraint_name: Optional[str],
table_name: str,
condition: Union[str, ColumnElement[bool], TextClause],
*,
schema: Optional[str] = None,
**kw: Any,
) -> None:
"""Issue a "create check constraint" instruction using the
current migration context.
e.g.::
from alembic import op
from sqlalchemy.sql import column, func
op.create_check_constraint(
"ck_user_name_len",
"user",
func.len(column("name")) > 5,
)
CHECK constraints are usually against a SQL expression, so ad-hoc
table metadata is usually needed. The function will convert the given
arguments into a :class:`sqlalchemy.schema.CheckConstraint` bound
to an anonymous table in order to emit the CREATE statement.
:param name: Name of the check constraint. The name is necessary
so that an ALTER statement can be emitted. For setups that
use an automated naming scheme such as that described at
:ref:`sqla:constraint_naming_conventions`,
``name`` here can be ``None``, as the event listener will
apply the name to the constraint object when it is associated
with the table.
:param table_name: String name of the source table.
:param condition: SQL expression that's the condition of the
constraint. Can be a string or SQLAlchemy expression language
structure.
:param deferrable: optional bool. If set, emit DEFERRABLE or
NOT DEFERRABLE when issuing DDL for this constraint.
:param initially: optional string. If set, emit INITIALLY <value>
when issuing DDL for this constraint.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
"""
def create_exclude_constraint(
constraint_name: str, table_name: str, *elements: Any, **kw: Any
) -> Optional[Table]:
"""Issue an alter to create an EXCLUDE constraint using the
current migration context.
.. note:: This method is Postgresql specific, and additionally
requires at least SQLAlchemy 1.0.
e.g.::
from alembic import op
op.create_exclude_constraint(
"user_excl",
"user",
("period", "&&"),
("group", "="),
where=("group != 'some group'"),
)
Note that the expressions work the same way as those of
the ``ExcludeConstraint`` object itself; if plain strings are
passed, quoting rules must be applied manually.
:param name: Name of the constraint.
:param table_name: String name of the source table.
:param elements: exclude conditions.
:param where: SQL expression or SQL string with optional WHERE
clause.
:param deferrable: optional bool. If set, emit DEFERRABLE or
NOT DEFERRABLE when issuing DDL for this constraint.
:param initially: optional string. If set, emit INITIALLY <value>
when issuing DDL for this constraint.
:param schema: Optional schema name to operate within.
"""
def create_foreign_key(
constraint_name: Optional[str],
source_table: str,
referent_table: str,
local_cols: List[str],
remote_cols: List[str],
*,
onupdate: Optional[str] = None,
ondelete: Optional[str] = None,
deferrable: Optional[bool] = None,
initially: Optional[str] = None,
match: Optional[str] = None,
source_schema: Optional[str] = None,
referent_schema: Optional[str] = None,
**dialect_kw: Any,
) -> None:
"""Issue a "create foreign key" instruction using the
current migration context.
e.g.::
from alembic import op
op.create_foreign_key(
"fk_user_address",
"address",
"user",
["user_id"],
["id"],
)
This internally generates a :class:`~sqlalchemy.schema.Table` object
containing the necessary columns, then generates a new
:class:`~sqlalchemy.schema.ForeignKeyConstraint`
object which it then associates with the
:class:`~sqlalchemy.schema.Table`.
Any event listeners associated with this action will be fired
off normally. The :class:`~sqlalchemy.schema.AddConstraint`
construct is ultimately used to generate the ALTER statement.
:param constraint_name: Name of the foreign key constraint. The name
is necessary so that an ALTER statement can be emitted. For setups
that use an automated naming scheme such as that described at
:ref:`sqla:constraint_naming_conventions`,
``name`` here can be ``None``, as the event listener will
apply the name to the constraint object when it is associated
with the table.
:param source_table: String name of the source table.
:param referent_table: String name of the destination table.
:param local_cols: a list of string column names in the
source table.
:param remote_cols: a list of string column names in the
remote table.
:param onupdate: Optional string. If set, emit ON UPDATE <value> when
issuing DDL for this constraint. Typical values include CASCADE,
DELETE and RESTRICT.
:param ondelete: Optional string. If set, emit ON DELETE <value> when
issuing DDL for this constraint. Typical values include CASCADE,
DELETE and RESTRICT.
:param deferrable: optional bool. If set, emit DEFERRABLE or NOT
DEFERRABLE when issuing DDL for this constraint.
:param source_schema: Optional schema name of the source table.
:param referent_schema: Optional schema name of the destination table.
"""
def create_index(
index_name: Optional[str],
table_name: str,
columns: Sequence[Union[str, TextClause, Function[Any]]],
*,
schema: Optional[str] = None,
unique: bool = False,
if_not_exists: Optional[bool] = None,
**kw: Any,
) -> None:
r"""Issue a "create index" instruction using the current
migration context.
e.g.::
from alembic import op
op.create_index("ik_test", "t1", ["foo", "bar"])
Functional indexes can be produced by using the
:func:`sqlalchemy.sql.expression.text` construct::
from alembic import op
from sqlalchemy import text
op.create_index("ik_test", "t1", [text("lower(foo)")])
:param index_name: name of the index.
:param table_name: name of the owning table.
:param columns: a list consisting of string column names and/or
:func:`~sqlalchemy.sql.expression.text` constructs.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
:param unique: If True, create a unique index.
:param quote: Force quoting of this column's name on or off,
corresponding to ``True`` or ``False``. When left at its default
of ``None``, the column identifier will be quoted according to
whether the name is case sensitive (identifiers with at least one
upper case character are treated as case sensitive), or if it's a
reserved word. This flag is only needed to force quoting of a
reserved word which is not known by the SQLAlchemy dialect.
:param if_not_exists: If True, adds IF NOT EXISTS operator when
creating the new index.
.. versionadded:: 1.12.0
:param \**kw: Additional keyword arguments not mentioned above are
dialect specific, and passed in the form
``<dialectname>_<argname>``.
See the documentation regarding an individual dialect at
:ref:`dialect_toplevel` for detail on documented arguments.
"""
def create_primary_key(
constraint_name: Optional[str],
table_name: str,
columns: List[str],
*,
schema: Optional[str] = None,
) -> None:
"""Issue a "create primary key" instruction using the current
migration context.
e.g.::
from alembic import op
op.create_primary_key("pk_my_table", "my_table", ["id", "version"])
This internally generates a :class:`~sqlalchemy.schema.Table` object
containing the necessary columns, then generates a new
:class:`~sqlalchemy.schema.PrimaryKeyConstraint`
object which it then associates with the
:class:`~sqlalchemy.schema.Table`.
Any event listeners associated with this action will be fired
off normally. The :class:`~sqlalchemy.schema.AddConstraint`
construct is ultimately used to generate the ALTER statement.
:param constraint_name: Name of the primary key constraint. The name
is necessary so that an ALTER statement can be emitted. For setups
that use an automated naming scheme such as that described at
:ref:`sqla:constraint_naming_conventions`
``name`` here can be ``None``, as the event listener will
apply the name to the constraint object when it is associated
with the table.
:param table_name: String name of the target table.
:param columns: a list of string column names to be applied to the
primary key constraint.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
"""
def create_table(table_name: str, *columns: SchemaItem, **kw: Any) -> Table:
r"""Issue a "create table" instruction using the current migration
context.
This directive receives an argument list similar to that of the
traditional :class:`sqlalchemy.schema.Table` construct, but without the
metadata::
from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func
from alembic import op
op.create_table(
"account",
Column("id", INTEGER, primary_key=True),
Column("name", VARCHAR(50), nullable=False),
Column("description", NVARCHAR(200)),
Column("timestamp", TIMESTAMP, server_default=func.now()),
)
Note that :meth:`.create_table` accepts
:class:`~sqlalchemy.schema.Column`
constructs directly from the SQLAlchemy library. In particular,
default values to be created on the database side are
specified using the ``server_default`` parameter, and not
``default`` which only specifies Python-side defaults::
from alembic import op
from sqlalchemy import Column, TIMESTAMP, func
# specify "DEFAULT NOW" along with the "timestamp" column
op.create_table(
"account",
Column("id", INTEGER, primary_key=True),
Column("timestamp", TIMESTAMP, server_default=func.now()),
)
The function also returns a newly created
:class:`~sqlalchemy.schema.Table` object, corresponding to the table
specification given, which is suitable for
immediate SQL operations, in particular
:meth:`.Operations.bulk_insert`::
from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, TIMESTAMP, Column, func
from alembic import op
account_table = op.create_table(
"account",
Column("id", INTEGER, primary_key=True),
Column("name", VARCHAR(50), nullable=False),
Column("description", NVARCHAR(200)),
Column("timestamp", TIMESTAMP, server_default=func.now()),
)
op.bulk_insert(
account_table,
[
{"name": "A1", "description": "account 1"},
{"name": "A2", "description": "account 2"},
],
)
:param table_name: Name of the table
:param \*columns: collection of :class:`~sqlalchemy.schema.Column`
objects within
the table, as well as optional :class:`~sqlalchemy.schema.Constraint`
objects
and :class:`~.sqlalchemy.schema.Index` objects.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
:param \**kw: Other keyword arguments are passed to the underlying
:class:`sqlalchemy.schema.Table` object created for the command.
:return: the :class:`~sqlalchemy.schema.Table` object corresponding
to the parameters given.
"""
def create_table_comment(
table_name: str,
comment: Optional[str],
*,
existing_comment: Optional[str] = None,
schema: Optional[str] = None,
) -> None:
"""Emit a COMMENT ON operation to set the comment for a table.
:param table_name: string name of the target table.
:param comment: string value of the comment being registered against
the specified table.
:param existing_comment: String value of a comment
already registered on the specified table, used within autogenerate
so that the operation is reversible, but not required for direct
use.
.. seealso::
:meth:`.Operations.drop_table_comment`
:paramref:`.Operations.alter_column.comment`
"""
def create_unique_constraint(
constraint_name: Optional[str],
table_name: str,
columns: Sequence[str],
*,
schema: Optional[str] = None,
**kw: Any,
) -> Any:
"""Issue a "create unique constraint" instruction using the
current migration context.
e.g.::
from alembic import op
op.create_unique_constraint("uq_user_name", "user", ["name"])
This internally generates a :class:`~sqlalchemy.schema.Table` object
containing the necessary columns, then generates a new
:class:`~sqlalchemy.schema.UniqueConstraint`
object which it then associates with the
:class:`~sqlalchemy.schema.Table`.
Any event listeners associated with this action will be fired
off normally. The :class:`~sqlalchemy.schema.AddConstraint`
construct is ultimately used to generate the ALTER statement.
:param name: Name of the unique constraint. The name is necessary
so that an ALTER statement can be emitted. For setups that
use an automated naming scheme such as that described at
:ref:`sqla:constraint_naming_conventions`,
``name`` here can be ``None``, as the event listener will
apply the name to the constraint object when it is associated
with the table.
:param table_name: String name of the source table.
:param columns: a list of string column names in the
source table.
:param deferrable: optional bool. If set, emit DEFERRABLE or
NOT DEFERRABLE when issuing DDL for this constraint.
:param initially: optional string. If set, emit INITIALLY <value>
when issuing DDL for this constraint.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
"""
def drop_column(
table_name: str,
column_name: str,
*,
schema: Optional[str] = None,
**kw: Any,
) -> None:
"""Issue a "drop column" instruction using the current
migration context.
e.g.::
drop_column("organization", "account_id")
:param table_name: name of table
:param column_name: name of column
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
:param mssql_drop_check: Optional boolean. When ``True``, on
Microsoft SQL Server only, first
drop the CHECK constraint on the column using a
SQL-script-compatible
block that selects into a @variable from sys.check_constraints,
then exec's a separate DROP CONSTRAINT for that constraint.
:param mssql_drop_default: Optional boolean. When ``True``, on
Microsoft SQL Server only, first
drop the DEFAULT constraint on the column using a
SQL-script-compatible
block that selects into a @variable from sys.default_constraints,
then exec's a separate DROP CONSTRAINT for that default.
:param mssql_drop_foreign_key: Optional boolean. When ``True``, on
Microsoft SQL Server only, first
drop a single FOREIGN KEY constraint on the column using a
SQL-script-compatible
block that selects into a @variable from
sys.foreign_keys/sys.foreign_key_columns,
then exec's a separate DROP CONSTRAINT for that default. Only
works if the column has exactly one FK constraint which refers to
it, at the moment.
"""
def drop_constraint(
constraint_name: str,
table_name: str,
type_: Optional[str] = None,
*,
schema: Optional[str] = None,
) -> None:
r"""Drop a constraint of the given name, typically via DROP CONSTRAINT.
:param constraint_name: name of the constraint.
:param table_name: table name.
:param type\_: optional, required on MySQL. can be
'foreignkey', 'primary', 'unique', or 'check'.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
"""
def drop_index(
index_name: str,
table_name: Optional[str] = None,
*,
schema: Optional[str] = None,
if_exists: Optional[bool] = None,
**kw: Any,
) -> None:
r"""Issue a "drop index" instruction using the current
migration context.
e.g.::
drop_index("accounts")
:param index_name: name of the index.
:param table_name: name of the owning table. Some
backends such as Microsoft SQL Server require this.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
:param if_exists: If True, adds IF EXISTS operator when
dropping the index.
.. versionadded:: 1.12.0
:param \**kw: Additional keyword arguments not mentioned above are
dialect specific, and passed in the form
``<dialectname>_<argname>``.
See the documentation regarding an individual dialect at
:ref:`dialect_toplevel` for detail on documented arguments.
"""
def drop_table(
table_name: str, *, schema: Optional[str] = None, **kw: Any
) -> None:
r"""Issue a "drop table" instruction using the current
migration context.
e.g.::
drop_table("accounts")
:param table_name: Name of the table
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
:param \**kw: Other keyword arguments are passed to the underlying
:class:`sqlalchemy.schema.Table` object created for the command.
"""
def drop_table_comment(
table_name: str,
*,
existing_comment: Optional[str] = None,
schema: Optional[str] = None,
) -> None:
"""Issue a "drop table comment" operation to
remove an existing comment set on a table.
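E.g., an illustrative sketch (the table name is hypothetical)::

    from alembic import op

    op.drop_table_comment("account")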
:param table_name: string name of the target table.
:param existing_comment: An optional string value of a comment already
registered on the specified table.
.. seealso::
:meth:`.Operations.create_table_comment`
:paramref:`.Operations.alter_column.comment`
"""
def execute(
sqltext: Union[str, TextClause, Update],
*,
execution_options: Optional[dict[str, Any]] = None,
) -> None:
r"""Execute the given SQL using the current migration context.
The given SQL can be a plain string, e.g.::
op.execute("INSERT INTO table (foo) VALUES ('some value')")
Or it can be any kind of Core SQL Expression construct, such as
below where we use an update construct::
from sqlalchemy.sql import table, column
from sqlalchemy import String
from alembic import op
account = table("account", column("name", String))
op.execute(
account.update()
.where(account.c.name == op.inline_literal("account 1"))
.values({"name": op.inline_literal("account 2")})
)
Above, we made use of the SQLAlchemy
:func:`sqlalchemy.sql.expression.table` and
:func:`sqlalchemy.sql.expression.column` constructs to make a brief,
ad-hoc table construct just for our UPDATE statement. A full
:class:`~sqlalchemy.schema.Table` construct of course works perfectly
fine as well, though note it's a recommended practice to at least
ensure the definition of a table is self-contained within the migration
script, rather than imported from a module that may break compatibility
with older migrations.
In a SQL script context, the statement is emitted directly to the
output stream. There is *no* return result, however, as this
function is oriented towards generating a change script
that can run in "offline" mode. Additionally, parameterized
statements are discouraged here, as they *will not work* in offline
mode. Above, we use :meth:`.inline_literal` where parameters are
to be used.
For full interaction with a connected database where parameters can
also be used normally, use the "bind" available from the context::
from alembic import op
connection = op.get_bind()
connection.execute(
account.update()
.where(account.c.name == "account 1")
.values({"name": "account 2"})
)
Additionally, when passing the statement as a plain string, it is first
coerced into a :func:`sqlalchemy.sql.expression.text` construct
before being passed along. In the less likely case that the
literal SQL string contains a colon, it must be escaped with a
backslash, as::
op.execute(r"INSERT INTO table (foo) VALUES ('\:colon_value')")
:param sqltext: Any legal SQLAlchemy expression, including:
* a string
* a :func:`sqlalchemy.sql.expression.text` construct.
* a :func:`sqlalchemy.sql.expression.insert` construct.
* a :func:`sqlalchemy.sql.expression.update`,
:func:`sqlalchemy.sql.expression.insert`,
or :func:`sqlalchemy.sql.expression.delete` construct.
* Any "executable" described in SQLAlchemy Core documentation,
noting that no result set is returned.
.. note:: when passing a plain string, the statement is coerced into
a :func:`sqlalchemy.sql.expression.text` construct. This construct
considers symbols with colons, e.g. ``:foo`` to be bound parameters.
To avoid this, ensure that colon symbols are escaped, e.g.
``\:foo``.
:param execution_options: Optional dictionary of
execution options, will be passed to
:meth:`sqlalchemy.engine.Connection.execution_options`.
"""
def f(name: str) -> conv:
"""Indicate a string name that has already had a naming convention
applied to it.
This feature combines with the SQLAlchemy ``naming_convention`` feature
to disambiguate constraint names that have already had naming
conventions applied to them, versus those that have not. This is
necessary in the case that the ``"%(constraint_name)s"`` token
is used within a naming convention, so that it can be identified
that this particular name should remain fixed.
If the :meth:`.Operations.f` is used on a constraint, the naming
convention will not take effect::
op.add_column("t", "x", Boolean(name=op.f("ck_bool_t_x")))
Above, the CHECK constraint generated will have the name
``ck_bool_t_x`` regardless of whether or not a naming convention is
in use.
Alternatively, if a naming convention is in use, and 'f' is not used,
names will be converted according to those conventions. If the
``target_metadata`` contains the naming convention
``{"ck": "ck_bool_%(table_name)s_%(constraint_name)s"}``, then the
output of the following::
op.add_column("t", Column("x", Boolean(name="x")))
will be::
CONSTRAINT ck_bool_t_x CHECK (x in (1, 0))
The function is rendered in the output of autogenerate when
a particular constraint name is already converted.
"""
def get_bind() -> Connection:
"""Return the current 'bind'.
Under normal circumstances, this is the
:class:`~sqlalchemy.engine.Connection` currently being used
to emit SQL to the database.
In a SQL script context, this value is ``None``. [TODO: verify this]
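E.g., a minimal sketch of using the bind directly (the SQL shown is
illustrative)::

    from alembic import op
    from sqlalchemy import text

    connection = op.get_bind()
    count = connection.execute(text("SELECT count(*) FROM account")).scalar()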
"""
def get_context() -> MigrationContext:
"""Return the :class:`.MigrationContext` object that's
currently in use.
"""
def implementation_for(op_cls: Any) -> Callable[..., Any]:
"""Register an implementation for a given :class:`.MigrateOperation`.
This is part of the operation extensibility API.
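E.g., a sketch pairing this with :meth:`.Operations.register_operation`
(the ``CreateSequenceOp`` operation here is hypothetical)::

    from alembic.operations import Operations

    @Operations.implementation_for(CreateSequenceOp)
    def create_sequence(operations, operation):
        # render the actual DDL for the custom operation
        operations.execute("CREATE SEQUENCE %s" % operation.sequence_name)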
.. seealso::
:ref:`operation_plugins` - example of use
"""
def inline_literal(
value: Union[str, int], type_: Optional[TypeEngine] = None
) -> _literal_bindparam:
r"""Produce an 'inline literal' expression, suitable for
using in an INSERT, UPDATE, or DELETE statement.
When using Alembic in "offline" mode, CRUD operations
aren't compatible with SQLAlchemy's default behavior surrounding
literal values,
which is that they are converted into bound values and passed
separately into the ``execute()`` method of the DBAPI cursor.
An offline SQL
script needs to have these rendered inline. While it should
always be noted that inline literal values are an **enormous**
security hole in an application that handles untrusted input,
a schema migration is not run in this context, so
literals are safe to render inline, with the caveat that
advanced types like dates may not be supported directly
by SQLAlchemy.
See :meth:`.Operations.execute` for an example usage of
:meth:`.Operations.inline_literal`.
The environment can also be configured to attempt to render
"literal" values inline automatically, for those simple types
that are supported by the dialect; see
:paramref:`.EnvironmentContext.configure.literal_binds` for this
more recently added feature.
:param value: The value to render. Strings, integers, and simple
numerics should be supported. Other types like boolean,
dates, etc. may or may not be supported yet by various
backends.
:param type\_: optional - a :class:`sqlalchemy.types.TypeEngine`
subclass stating the type of this value. In SQLAlchemy
expressions, this is usually derived automatically
from the Python type of the value itself, as well as
based on the context in which the value is used.
.. seealso::
:paramref:`.EnvironmentContext.configure.literal_binds`
"""
def invoke(operation: MigrateOperation) -> Any:
"""Given a :class:`.MigrateOperation`, invoke it in terms of
this :class:`.Operations` instance.
"""
def register_operation(
name: str, sourcename: Optional[str] = None
) -> Callable[[_T], _T]:
"""Register a new operation for this class.
This method is normally used to add new operations
to the :class:`.Operations` class, and possibly the
:class:`.BatchOperations` class as well. All Alembic migration
operations are implemented via this system, however the system
is also available as a public API to facilitate adding custom
operations.
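E.g., a condensed sketch of a hypothetical custom operation, following
the pattern described at :ref:`operation_plugins`::

    from alembic.operations import Operations, MigrateOperation

    @Operations.register_operation("create_sequence")
    class CreateSequenceOp(MigrateOperation):
        def __init__(self, sequence_name, schema=None):
            self.sequence_name = sequence_name
            self.schema = schema

        @classmethod
        def create_sequence(cls, operations, sequence_name, **kw):
            # invoked as op.create_sequence(...) in migration scripts
            op = CreateSequenceOp(sequence_name, **kw)
            return operations.invoke(op)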
.. seealso::
:ref:`operation_plugins`
"""
def rename_table(
old_table_name: str, new_table_name: str, *, schema: Optional[str] = None
) -> None:
"""Emit an ALTER TABLE to rename a table.
:param old_table_name: old name.
:param new_table_name: new name.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
"""
def run_async(
async_function: Callable[..., Awaitable[_T]], *args: Any, **kw_args: Any
) -> _T:
"""Invoke the given asynchronous callable, passing an asynchronous
:class:`~sqlalchemy.ext.asyncio.AsyncConnection` as the first
argument.
This method allows calling async functions from within the
synchronous ``upgrade()`` or ``downgrade()`` alembic migration
method.
The async connection passed to the callable shares the same
transaction as the connection running in the migration context.
Any additional arg or kw_arg passed to this function are passed
to the provided async function.
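E.g., a minimal sketch (the async helper and its SQL are illustrative)::

    from alembic import op
    from sqlalchemy import text

    async def _count_rows(connection, table_name):
        # the AsyncConnection shares the migration's transaction
        result = await connection.execute(
            text(f"SELECT count(*) FROM {table_name}")
        )
        return result.scalar()

    def upgrade():
        op.run_async(_count_rows, "account")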
.. versionadded:: 1.11
.. note::
This method can be called only when alembic is called using
an async dialect.
"""
# ### the stubs in this file are generated by tools/write_pyi.py - do not edit ###
# ### imports are manually managed
from __future__ import annotations
from contextlib import contextmanager
from typing import Any
from typing import Awaitable
from typing import Callable
from typing import Dict
from typing import Iterator
from typing import List
from typing import Literal
from typing import Mapping
from typing import Optional
from typing import Sequence
from typing import Tuple
from typing import Type
from typing import TYPE_CHECKING
from typing import TypeVar
from typing import Union
from sqlalchemy.sql.expression import TableClause
from sqlalchemy.sql.expression import Update
if TYPE_CHECKING:
from sqlalchemy.engine import Connection
from sqlalchemy.sql.elements import ColumnElement
from sqlalchemy.sql.elements import conv
from sqlalchemy.sql.elements import TextClause
from sqlalchemy.sql.functions import Function
from sqlalchemy.sql.schema import Column
from sqlalchemy.sql.schema import Computed
from sqlalchemy.sql.schema import Identity
from sqlalchemy.sql.schema import SchemaItem
from sqlalchemy.sql.schema import Table
from sqlalchemy.sql.type_api import TypeEngine
from sqlalchemy.util import immutabledict
from .operations.ops import BatchOperations
from .operations.ops import MigrateOperation
from .runtime.migration import MigrationContext
from .util.sqla_compat import _literal_bindparam
_T = TypeVar("_T")
### end imports ###
def add_column(
table_name: str, column: Column[Any], *, schema: Optional[str] = None
) -> None:
"""Issue an "add column" instruction using the current
migration context.
e.g.::
from alembic import op
from sqlalchemy import Column, String
op.add_column("organization", Column("name", String()))
The :meth:`.Operations.add_column` method typically corresponds
to the SQL command "ALTER TABLE... ADD COLUMN". Within the scope
of this command, the column's name, datatype, nullability,
and optional server-generated defaults may be indicated.
.. note::
With the exception of NOT NULL constraints or single-column FOREIGN
KEY constraints, other kinds of constraints such as PRIMARY KEY,
UNIQUE or CHECK constraints **cannot** be generated using this
method; for these constraints, refer to operations such as
:meth:`.Operations.create_primary_key` and
:meth:`.Operations.create_check_constraint`. In particular, the
following :class:`~sqlalchemy.schema.Column` parameters are
**ignored**:
* :paramref:`~sqlalchemy.schema.Column.primary_key` - SQL databases
typically do not support an ALTER operation that can add
individual columns one at a time to an existing primary key
constraint, therefore it's less ambiguous to use the
:meth:`.Operations.create_primary_key` method, which assumes no
existing primary key constraint is present.
* :paramref:`~sqlalchemy.schema.Column.unique` - use the
:meth:`.Operations.create_unique_constraint` method
* :paramref:`~sqlalchemy.schema.Column.index` - use the
:meth:`.Operations.create_index` method
The provided :class:`~sqlalchemy.schema.Column` object may include a
:class:`~sqlalchemy.schema.ForeignKey` constraint directive,
referencing a remote table name. For this specific type of constraint,
Alembic will automatically emit a second ALTER statement in order to
add the single-column FOREIGN KEY constraint separately::
from alembic import op
from sqlalchemy import Column, INTEGER, ForeignKey
op.add_column(
"organization",
Column("account_id", INTEGER, ForeignKey("accounts.id")),
)
The column argument passed to :meth:`.Operations.add_column` is a
:class:`~sqlalchemy.schema.Column` construct, used in the same way it's
used in SQLAlchemy. In particular, values or functions to be indicated
as producing the column's default value on the database side are
specified using the ``server_default`` parameter, and not ``default``
which only specifies Python-side defaults::
from alembic import op
from sqlalchemy import Column, TIMESTAMP, func
# specify "DEFAULT NOW" along with the column add
op.add_column(
"account",
Column("timestamp", TIMESTAMP, server_default=func.now()),
)
:param table_name: String name of the parent table.
:param column: a :class:`sqlalchemy.schema.Column` object
representing the new column.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
"""
def alter_column(
table_name: str,
column_name: str,
*,
nullable: Optional[bool] = None,
comment: Union[str, Literal[False], None] = False,
server_default: Any = False,
new_column_name: Optional[str] = None,
type_: Union[TypeEngine, Type[TypeEngine], None] = None,
existing_type: Union[TypeEngine, Type[TypeEngine], None] = None,
existing_server_default: Union[
str, bool, Identity, Computed, None
] = False,
existing_nullable: Optional[bool] = None,
existing_comment: Optional[str] = None,
schema: Optional[str] = None,
**kw: Any,
) -> None:
r"""Issue an "alter column" instruction using the
current migration context.
Generally, only that aspect of the column which
is being changed, i.e. name, type, nullability,
default, needs to be specified. Multiple changes
can also be specified at once and the backend should
"do the right thing", emitting each change either
separately or together as the backend allows.
MySQL has special requirements here, since MySQL
cannot ALTER a column without a full specification.
When producing MySQL-compatible migration files,
it is recommended that the ``existing_type``,
``existing_server_default``, and ``existing_nullable``
parameters be present, if not being altered.
Type changes which are against the SQLAlchemy
"schema" types :class:`~sqlalchemy.types.Boolean`
and :class:`~sqlalchemy.types.Enum` may also
add or drop constraints which accompany those
types on backends that don't support them natively.
The ``existing_type`` argument is
used in this case to identify and remove a previous
constraint that was bound to the type object.
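E.g., a minimal sketch of a MySQL-compatible nullability change (the
table and column names here are illustrative)::

    from alembic import op
    from sqlalchemy import String

    op.alter_column(
        "account",
        "name",
        nullable=False,
        existing_type=String(50),
        existing_nullable=True,
    )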
:param table_name: string name of the target table.
:param column_name: string name of the target column,
as it exists before the operation begins.
:param nullable: Optional; specify ``True`` or ``False``
to alter the column's nullability.
:param server_default: Optional; specify a string
SQL expression, :func:`~sqlalchemy.sql.expression.text`,
or :class:`~sqlalchemy.schema.DefaultClause` to indicate
an alteration to the column's default value.
Set to ``None`` to have the default removed.
:param comment: optional string text of a new comment to add to the
column.
:param new_column_name: Optional; specify a string name here to
indicate the new name within a column rename operation.
:param type\_: Optional; a :class:`~sqlalchemy.types.TypeEngine`
type object to specify a change to the column's type.
For SQLAlchemy types that also indicate a constraint (i.e.
:class:`~sqlalchemy.types.Boolean`, :class:`~sqlalchemy.types.Enum`),
the constraint is also generated.
:param autoincrement: set the ``AUTO_INCREMENT`` flag of the column;
currently understood by the MySQL dialect.
:param existing_type: Optional; a
:class:`~sqlalchemy.types.TypeEngine`
type object to specify the previous type. This
is required for all MySQL column alter operations that
don't otherwise specify a new type, as well as for
when nullability is being changed on a SQL Server
column. It is also used if the type is a so-called
SQLAlchemy "schema" type which may define a constraint (i.e.
:class:`~sqlalchemy.types.Boolean`,
:class:`~sqlalchemy.types.Enum`),
so that the constraint can be dropped.
:param existing_server_default: Optional; The existing
default value of the column. Required on MySQL if
an existing default is not being changed; else MySQL
removes the default.
:param existing_nullable: Optional; the existing nullability
of the column. Required on MySQL if the existing nullability
is not being changed; else MySQL sets this to NULL.
:param existing_autoincrement: Optional; the existing autoincrement
of the column. Used for MySQL's system of altering a column
that specifies ``AUTO_INCREMENT``.
:param existing_comment: string text of the existing comment on the
column to be maintained. Required on MySQL if the existing comment
on the column is not being changed.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
:param postgresql_using: String argument which will indicate a
SQL expression to render within the Postgresql-specific USING clause
within ALTER COLUMN. This string is taken directly as raw SQL which
must explicitly include any necessary quoting or escaping of tokens
within the expression.
"""
@contextmanager
def batch_alter_table(
table_name: str,
schema: Optional[str] = None,
recreate: Literal["auto", "always", "never"] = "auto",
partial_reordering: Optional[tuple] = None,
copy_from: Optional[Table] = None,
table_args: Tuple[Any, ...] = (),
table_kwargs: Mapping[str, Any] = immutabledict({}),
reflect_args: Tuple[Any, ...] = (),
reflect_kwargs: Mapping[str, Any] = immutabledict({}),
naming_convention: Optional[Dict[str, str]] = None,
) -> Iterator[BatchOperations]:
"""Invoke a series of per-table migrations in batch.
Batch mode allows a series of operations specific to a table
to be syntactically grouped together, and allows for alternate
modes of table migration, in particular the "recreate" style of
migration required by SQLite.
"recreate" style is as follows:
1. A new table is created with the new specification, based on the
migration directives within the batch, using a temporary name.
2. The data is copied from the existing table to the new table.
3. The existing table is dropped.
4. The new table is renamed to the existing table name.
The directive by default will only use "recreate" style on the
SQLite backend, and only if directives are present which require
this form, e.g. anything other than ``add_column()``. The batch
operation on other backends will proceed using standard ALTER TABLE
operations.
The method is used as a context manager, which returns an instance
of :class:`.BatchOperations`; this object is the same as
:class:`.Operations` except that table names and schema names
are omitted. E.g.::
with op.batch_alter_table("some_table") as batch_op:
batch_op.add_column(Column("foo", Integer))
batch_op.drop_column("bar")
The operations within the context manager are invoked at once
when the context is ended. When run against SQLite, if the
migrations include operations not supported by SQLite's ALTER TABLE,
the entire table will be copied to a new one with the new
specification, moving all data across as well.
The copy operation by default uses reflection to retrieve the current
structure of the table, and therefore :meth:`.batch_alter_table`
in this mode requires that the migration is run in "online" mode.
The ``copy_from`` parameter may be passed which refers to an existing
:class:`.Table` object, which will bypass this reflection step.
.. note:: The table copy operation will currently not copy
CHECK constraints, and may not copy UNIQUE constraints that are
unnamed, as is possible on SQLite. See the section
:ref:`sqlite_batch_constraints` for workarounds.
:param table_name: name of table
:param schema: optional schema name.
:param recreate: under what circumstances the table should be
recreated. At its default of ``"auto"``, the SQLite dialect will
recreate the table if any operations other than ``add_column()``,
``create_index()``, or ``drop_index()`` are
present. Other options include ``"always"`` and ``"never"``.
:param copy_from: optional :class:`~sqlalchemy.schema.Table` object
that will act as the structure of the table being copied. If omitted,
table reflection is used to retrieve the structure of the table.
.. seealso::
:ref:`batch_offline_mode`
:paramref:`~.Operations.batch_alter_table.reflect_args`
:paramref:`~.Operations.batch_alter_table.reflect_kwargs`
:param reflect_args: a sequence of additional positional arguments that
will be applied to the table structure being reflected / copied;
this may be used to pass column and constraint overrides to the
table that will be reflected, in lieu of passing the whole
:class:`~sqlalchemy.schema.Table` using
:paramref:`~.Operations.batch_alter_table.copy_from`.
:param reflect_kwargs: a dictionary of additional keyword arguments
that will be applied to the table structure being copied; this may be
used to pass additional table and reflection options to the table that
will be reflected, in lieu of passing the whole
:class:`~sqlalchemy.schema.Table` using
:paramref:`~.Operations.batch_alter_table.copy_from`.
:param table_args: a sequence of additional positional arguments that
will be applied to the new :class:`~sqlalchemy.schema.Table` when
created, in addition to those copied from the source table.
This may be used to provide additional constraints such as CHECK
constraints that may not be reflected.
:param table_kwargs: a dictionary of additional keyword arguments
that will be applied to the new :class:`~sqlalchemy.schema.Table`
when created, in addition to those copied from the source table.
This may be used to provide for additional table options that may
not be reflected.
:param naming_convention: a naming convention dictionary of the form
described at :ref:`autogen_naming_conventions` which will be applied
to the :class:`~sqlalchemy.schema.MetaData` during the reflection
process. This is typically required if one wants to drop SQLite
constraints, as these constraints will not have names when
reflected on this backend. Requires SQLAlchemy **0.9.4** or greater.
.. seealso::
:ref:`dropping_sqlite_foreign_keys`
:param partial_reordering: a list of tuples, each suggesting a desired
ordering of two or more columns in the newly created table. Requires
that :paramref:`.batch_alter_table.recreate` is set to ``"always"``.
Examples, given a table with columns "a", "b", "c", and "d":
Specify the order of all columns::
with op.batch_alter_table(
"some_table",
recreate="always",
partial_reordering=[("c", "d", "a", "b")],
) as batch_op:
pass
Ensure "d" appears before "c", and "b", appears before "a"::
with op.batch_alter_table(
"some_table",
recreate="always",
partial_reordering=[("d", "c"), ("b", "a")],
) as batch_op:
pass
The ordering of columns not included in the partial_reordering
set is undefined. Therefore it is best to specify the complete
ordering of all columns.
.. note:: batch mode requires SQLAlchemy 0.8 or above.
.. seealso::
:ref:`batch_migrations`
"""
def bulk_insert(
table: Union[Table, TableClause],
rows: List[dict],
*,
multiinsert: bool = True,
) -> None:
"""Issue a "bulk insert" operation using the current
migration context.
This provides a means of representing an INSERT of multiple rows
which works equally well in the context of executing on a live
connection as well as that of generating a SQL script. In the
case of a SQL script, the values are rendered inline into the
statement.
e.g.::
from alembic import op
from datetime import date
from sqlalchemy.sql import table, column
from sqlalchemy import String, Integer, Date
# Create an ad-hoc table to use for the insert statement.
accounts_table = table(
"account",
column("id", Integer),
column("name", String),
column("create_date", Date),
)
op.bulk_insert(
accounts_table,
[
{
"id": 1,
"name": "John Smith",
"create_date": date(2010, 10, 5),
},
{
"id": 2,
"name": "Ed Williams",
"create_date": date(2007, 5, 27),
},
{
"id": 3,
"name": "Wendy Jones",
"create_date": date(2008, 8, 15),
},
],
)
When using --sql mode, some datatypes may not render inline
automatically, such as dates and other special types. When this
issue is present, :meth:`.Operations.inline_literal` may be used::
op.bulk_insert(
accounts_table,
[
{
"id": 1,
"name": "John Smith",
"create_date": op.inline_literal("2010-10-05"),
},
{
"id": 2,
"name": "Ed Williams",
"create_date": op.inline_literal("2007-05-27"),
},
{
"id": 3,
"name": "Wendy Jones",
"create_date": op.inline_literal("2008-08-15"),
},
],
multiinsert=False,
)
When using :meth:`.Operations.inline_literal` in conjunction with
:meth:`.Operations.bulk_insert`, in order for the statement to work
in "online" (e.g. non --sql) mode, the
:paramref:`~.Operations.bulk_insert.multiinsert`
flag should be set to ``False``, which will have the effect of
individual INSERT statements being emitted to the database, each
with a distinct VALUES clause, so that the "inline" values can
still be rendered, rather than attempting to pass the values
as bound parameters.
:param table: a table object which represents the target of the INSERT.
:param rows: a list of dictionaries indicating rows.
:param multiinsert: when at its default of True and --sql mode is not
enabled, the INSERT statement will be executed using
"executemany()" style, where all elements in the list of
dictionaries are passed as bound parameters in a single
list. Setting this to False results in individual INSERT
statements being emitted per parameter set, and is needed
in those cases where non-literal values are present in the
parameter sets.
"""
def create_check_constraint(
constraint_name: Optional[str],
table_name: str,
condition: Union[str, ColumnElement[bool], TextClause],
*,
schema: Optional[str] = None,
**kw: Any,
) -> None:
"""Issue a "create check constraint" instruction using the
current migration context.
e.g.::
from alembic import op
from sqlalchemy.sql import column, func
op.create_check_constraint(
"ck_user_name_len",
"user",
func.len(column("name")) > 5,
)
CHECK constraints are usually against a SQL expression, so ad-hoc
table metadata is usually needed. The function will convert the given
arguments into a :class:`sqlalchemy.schema.CheckConstraint` bound
to an anonymous table in order to emit the CREATE statement.
:param name: Name of the check constraint. The name is necessary
so that an ALTER statement can be emitted. For setups that
use an automated naming scheme such as that described at
:ref:`sqla:constraint_naming_conventions`,
``name`` here can be ``None``, as the event listener will
apply the name to the constraint object when it is associated
with the table.
:param table_name: String name of the source table.
:param condition: SQL expression that's the condition of the
constraint. Can be a string or SQLAlchemy expression language
structure.
:param deferrable: optional bool. If set, emit DEFERRABLE or
NOT DEFERRABLE when issuing DDL for this constraint.
:param initially: optional string. If set, emit INITIALLY <value>
when issuing DDL for this constraint.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
"""
def create_exclude_constraint(
constraint_name: str, table_name: str, *elements: Any, **kw: Any
) -> Optional[Table]:
"""Issue an alter to create an EXCLUDE constraint using the
current migration context.
.. note:: This method is Postgresql specific, and additionally
requires at least SQLAlchemy 1.0.
e.g.::
from alembic import op
op.create_exclude_constraint(
"user_excl",
"user",
("period", "&&"),
("group", "="),
where=("group != 'some group'"),
)
Note that the expressions work the same way as those of
the ``ExcludeConstraint`` object itself; if plain strings are
passed, quoting rules must be applied manually.
:param name: Name of the constraint.
:param table_name: String name of the source table.
:param elements: exclude conditions.
:param where: SQL expression or SQL string with optional WHERE
clause.
:param deferrable: optional bool. If set, emit DEFERRABLE or
NOT DEFERRABLE when issuing DDL for this constraint.
:param initially: optional string. If set, emit INITIALLY <value>
when issuing DDL for this constraint.
:param schema: Optional schema name to operate within.
"""
def create_foreign_key(
constraint_name: Optional[str],
source_table: str,
referent_table: str,
local_cols: List[str],
remote_cols: List[str],
*,
onupdate: Optional[str] = None,
ondelete: Optional[str] = None,
deferrable: Optional[bool] = None,
initially: Optional[str] = None,
match: Optional[str] = None,
source_schema: Optional[str] = None,
referent_schema: Optional[str] = None,
**dialect_kw: Any,
) -> None:
"""Issue a "create foreign key" instruction using the
current migration context.
e.g.::
from alembic import op
op.create_foreign_key(
"fk_user_address",
"address",
"user",
["user_id"],
["id"],
)
This internally generates a :class:`~sqlalchemy.schema.Table` object
containing the necessary columns, then generates a new
:class:`~sqlalchemy.schema.ForeignKeyConstraint`
object which it then associates with the
:class:`~sqlalchemy.schema.Table`.
Any event listeners associated with this action will be fired
off normally. The :class:`~sqlalchemy.schema.AddConstraint`
construct is ultimately used to generate the ALTER statement.
:param constraint_name: Name of the foreign key constraint. The name
is necessary so that an ALTER statement can be emitted. For setups
that use an automated naming scheme such as that described at
:ref:`sqla:constraint_naming_conventions`,
``name`` here can be ``None``, as the event listener will
apply the name to the constraint object when it is associated
with the table.
:param source_table: String name of the source table.
:param referent_table: String name of the destination table.
:param local_cols: a list of string column names in the
source table.
:param remote_cols: a list of string column names in the
remote table.
:param onupdate: Optional string. If set, emit ON UPDATE <value> when
issuing DDL for this constraint. Typical values include CASCADE,
DELETE and RESTRICT.
:param ondelete: Optional string. If set, emit ON DELETE <value> when
issuing DDL for this constraint. Typical values include CASCADE,
DELETE and RESTRICT.
:param deferrable: optional bool. If set, emit DEFERRABLE or NOT
DEFERRABLE when issuing DDL for this constraint.
:param source_schema: Optional schema name of the source table.
:param referent_schema: Optional schema name of the destination table.
"""
def create_index(
index_name: Optional[str],
table_name: str,
columns: Sequence[Union[str, TextClause, Function[Any]]],
*,
schema: Optional[str] = None,
unique: bool = False,
if_not_exists: Optional[bool] = None,
**kw: Any,
) -> None:
r"""Issue a "create index" instruction using the current
migration context.
e.g.::
from alembic import op
op.create_index("ik_test", "t1", ["foo", "bar"])
Functional indexes can be produced by using the
:func:`sqlalchemy.sql.expression.text` construct::
from alembic import op
from sqlalchemy import text
op.create_index("ik_test", "t1", [text("lower(foo)")])
:param index_name: name of the index.
:param table_name: name of the owning table.
:param columns: a list consisting of string column names and/or
:func:`~sqlalchemy.sql.expression.text` constructs.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
:param unique: If True, create a unique index.
:param quote: Force quoting of this column's name on or off,
corresponding to ``True`` or ``False``. When left at its default
of ``None``, the column identifier will be quoted according to
whether the name is case sensitive (identifiers with at least one
upper case character are treated as case sensitive), or if it's a
reserved word. This flag is only needed to force quoting of a
reserved word which is not known by the SQLAlchemy dialect.
:param if_not_exists: If True, adds IF NOT EXISTS operator when
creating the new index.
.. versionadded:: 1.12.0
:param \**kw: Additional keyword arguments not mentioned above are
dialect specific, and passed in the form
``<dialectname>_<argname>``.
See the documentation regarding an individual dialect at
:ref:`dialect_toplevel` for detail on documented arguments.
"""
def create_primary_key(
constraint_name: Optional[str],
table_name: str,
columns: List[str],
*,
schema: Optional[str] = None,
) -> None:
"""Issue a "create primary key" instruction using the current
migration context.
e.g.::
from alembic import op
op.create_primary_key("pk_my_table", "my_table", ["id", "version"])
This internally generates a :class:`~sqlalchemy.schema.Table` object
containing the necessary columns, then generates a new
:class:`~sqlalchemy.schema.PrimaryKeyConstraint`
object which it then associates with the
:class:`~sqlalchemy.schema.Table`.
Any event listeners associated with this action will be fired
off normally. The :class:`~sqlalchemy.schema.AddConstraint`
construct is ultimately used to generate the ALTER statement.
:param constraint_name: Name of the primary key constraint. The name
is necessary so that an ALTER statement can be emitted. For setups
that use an automated naming scheme such as that described at
:ref:`sqla:constraint_naming_conventions`
``name`` here can be ``None``, as the event listener will
apply the name to the constraint object when it is associated
with the table.
:param table_name: String name of the target table.
:param columns: a list of string column names to be applied to the
primary key constraint.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
"""
def create_table(table_name: str, *columns: SchemaItem, **kw: Any) -> Table:
r"""Issue a "create table" instruction using the current migration
context.
This directive receives an argument list similar to that of the
traditional :class:`sqlalchemy.schema.Table` construct, but without the
metadata::
from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, Column
from alembic import op
op.create_table(
"account",
Column("id", INTEGER, primary_key=True),
Column("name", VARCHAR(50), nullable=False),
Column("description", NVARCHAR(200)),
Column("timestamp", TIMESTAMP, server_default=func.now()),
)
Note that :meth:`.create_table` accepts
:class:`~sqlalchemy.schema.Column`
constructs directly from the SQLAlchemy library. In particular,
default values to be created on the database side are
specified using the ``server_default`` parameter, and not
``default`` which only specifies Python-side defaults::
from alembic import op
from sqlalchemy import Column, TIMESTAMP, func
# specify "DEFAULT NOW" along with the "timestamp" column
op.create_table(
"account",
Column("id", INTEGER, primary_key=True),
Column("timestamp", TIMESTAMP, server_default=func.now()),
)
The function also returns a newly created
:class:`~sqlalchemy.schema.Table` object, corresponding to the table
specification given, which is suitable for
immediate SQL operations, in particular
:meth:`.Operations.bulk_insert`::
from sqlalchemy import INTEGER, VARCHAR, NVARCHAR, Column
from alembic import op
account_table = op.create_table(
"account",
Column("id", INTEGER, primary_key=True),
Column("name", VARCHAR(50), nullable=False),
Column("description", NVARCHAR(200)),
Column("timestamp", TIMESTAMP, server_default=func.now()),
)
op.bulk_insert(
account_table,
[
{"name": "A1", "description": "account 1"},
{"name": "A2", "description": "account 2"},
],
)
:param table_name: Name of the table
:param \*columns: collection of :class:`~sqlalchemy.schema.Column`
objects within
the table, as well as optional :class:`~sqlalchemy.schema.Constraint`
objects
and :class:`~sqlalchemy.schema.Index` objects.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
:param \**kw: Other keyword arguments are passed to the underlying
:class:`sqlalchemy.schema.Table` object created for the command.
:return: the :class:`~sqlalchemy.schema.Table` object corresponding
to the parameters given.
"""
def create_table_comment(
table_name: str,
comment: Optional[str],
*,
existing_comment: Optional[str] = None,
schema: Optional[str] = None,
) -> None:
"""Emit a COMMENT ON operation to set the comment for a table.
:param table_name: string name of the target table.
:param comment: string value of the comment being registered against
the specified table.
:param existing_comment: String value of a comment
already registered on the specified table, used within autogenerate
so that the operation is reversible, but not required for direct
use.
.. seealso::
:meth:`.Operations.drop_table_comment`
:paramref:`.Operations.alter_column.comment`
"""
def create_unique_constraint(
constraint_name: Optional[str],
table_name: str,
columns: Sequence[str],
*,
schema: Optional[str] = None,
**kw: Any,
) -> Any:
"""Issue a "create unique constraint" instruction using the
current migration context.
e.g.::
from alembic import op
op.create_unique_constraint("uq_user_name", "user", ["name"])
This internally generates a :class:`~sqlalchemy.schema.Table` object
containing the necessary columns, then generates a new
:class:`~sqlalchemy.schema.UniqueConstraint`
object which it then associates with the
:class:`~sqlalchemy.schema.Table`.
Any event listeners associated with this action will be fired
off normally. The :class:`~sqlalchemy.schema.AddConstraint`
construct is ultimately used to generate the ALTER statement.
:param constraint_name: Name of the unique constraint. The name is necessary
so that an ALTER statement can be emitted. For setups that
use an automated naming scheme such as that described at
:ref:`sqla:constraint_naming_conventions`,
``name`` here can be ``None``, as the event listener will
apply the name to the constraint object when it is associated
with the table.
:param table_name: String name of the source table.
:param columns: a list of string column names in the
source table.
:param deferrable: optional bool. If set, emit DEFERRABLE or
NOT DEFERRABLE when issuing DDL for this constraint.
:param initially: optional string. If set, emit INITIALLY <value>
when issuing DDL for this constraint.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
"""
def drop_column(
table_name: str,
column_name: str,
*,
schema: Optional[str] = None,
**kw: Any,
) -> None:
"""Issue a "drop column" instruction using the current
migration context.
e.g.::
drop_column("organization", "account_id")
:param table_name: name of table
:param column_name: name of column
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
:param mssql_drop_check: Optional boolean. When ``True``, on
Microsoft SQL Server only, first
drop the CHECK constraint on the column using a
SQL-script-compatible
block that selects into a @variable from sys.check_constraints,
then exec's a separate DROP CONSTRAINT for that constraint.
:param mssql_drop_default: Optional boolean. When ``True``, on
Microsoft SQL Server only, first
drop the DEFAULT constraint on the column using a
SQL-script-compatible
block that selects into a @variable from sys.default_constraints,
then exec's a separate DROP CONSTRAINT for that default.
:param mssql_drop_foreign_key: Optional boolean. When ``True``, on
Microsoft SQL Server only, first
drop a single FOREIGN KEY constraint on the column using a
SQL-script-compatible
block that selects into a @variable from
sys.foreign_keys/sys.foreign_key_columns,
then exec's a separate DROP CONSTRAINT for that constraint. At the
moment, this only works if the column has exactly one FK constraint
which refers to it.
"""
def drop_constraint(
constraint_name: str,
table_name: str,
type_: Optional[str] = None,
*,
schema: Optional[str] = None,
) -> None:
r"""Drop a constraint of the given name, typically via DROP CONSTRAINT.
:param constraint_name: name of the constraint.
:param table_name: table name.
:param type\_: optional, required on MySQL. can be
'foreignkey', 'primary', 'unique', or 'check'.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
"""
def drop_index(
index_name: str,
table_name: Optional[str] = None,
*,
schema: Optional[str] = None,
if_exists: Optional[bool] = None,
**kw: Any,
) -> None:
r"""Issue a "drop index" instruction using the current
migration context.
e.g.::
drop_index("accounts")
:param index_name: name of the index.
:param table_name: name of the owning table. Some
backends such as Microsoft SQL Server require this.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
:param if_exists: If True, adds IF EXISTS operator when
dropping the index.
.. versionadded:: 1.12.0
:param \**kw: Additional keyword arguments not mentioned above are
dialect specific, and passed in the form
``<dialectname>_<argname>``.
See the documentation regarding an individual dialect at
:ref:`dialect_toplevel` for detail on documented arguments.
"""
def drop_table(
table_name: str, *, schema: Optional[str] = None, **kw: Any
) -> None:
r"""Issue a "drop table" instruction using the current
migration context.
e.g.::
drop_table("accounts")
:param table_name: Name of the table
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
:param \**kw: Other keyword arguments are passed to the underlying
:class:`sqlalchemy.schema.Table` object created for the command.
"""
def drop_table_comment(
table_name: str,
*,
existing_comment: Optional[str] = None,
schema: Optional[str] = None,
) -> None:
"""Issue a "drop table comment" operation to
remove an existing comment set on a table.
:param table_name: string name of the target table.
:param existing_comment: An optional string value of a comment already
registered on the specified table.
.. seealso::
:meth:`.Operations.create_table_comment`
:paramref:`.Operations.alter_column.comment`
"""
def execute(
sqltext: Union[str, TextClause, Update],
*,
execution_options: Optional[dict[str, Any]] = None,
) -> None:
r"""Execute the given SQL using the current migration context.
The given SQL can be a plain string, e.g.::
op.execute("INSERT INTO table (foo) VALUES ('some value')")
Or it can be any kind of Core SQL Expression construct, such as
below where we use an update construct::
from sqlalchemy.sql import table, column
from sqlalchemy import String
from alembic import op
account = table("account", column("name", String))
op.execute(
account.update()
.where(account.c.name == op.inline_literal("account 1"))
.values({"name": op.inline_literal("account 2")})
)
Above, we made use of the SQLAlchemy
:func:`sqlalchemy.sql.expression.table` and
:func:`sqlalchemy.sql.expression.column` constructs to make a brief,
ad-hoc table construct just for our UPDATE statement. A full
:class:`~sqlalchemy.schema.Table` construct of course works perfectly
fine as well, though note it's a recommended practice to at least
ensure the definition of a table is self-contained within the migration
script, rather than imported from a module that may break compatibility
with older migrations.
In a SQL script context, the statement is emitted directly to the
output stream. There is *no* return result, however, as this
function is oriented towards generating a change script
that can run in "offline" mode. Additionally, parameterized
statements are discouraged here, as they *will not work* in offline
mode. Above, we use :meth:`.inline_literal` where parameters are
to be used.
For full interaction with a connected database where parameters can
also be used normally, use the "bind" available from the context::
from alembic import op
connection = op.get_bind()
connection.execute(
account.update()
.where(account.c.name == "account 1")
.values({"name": "account 2"})
)
Additionally, when passing the statement as a plain string, it is first
coerced into a :func:`sqlalchemy.sql.expression.text` construct
before being passed along. In the less likely case that the
literal SQL string contains a colon, it must be escaped with a
backslash, as::
op.execute(r"INSERT INTO table (foo) VALUES ('\:colon_value')")
:param sqltext: Any legal SQLAlchemy expression, including:
* a string
* a :func:`sqlalchemy.sql.expression.text` construct.
* a :func:`sqlalchemy.sql.expression.insert` construct.
* a :func:`sqlalchemy.sql.expression.update`,
:func:`sqlalchemy.sql.expression.insert`,
or :func:`sqlalchemy.sql.expression.delete` construct.
* Any "executable" described in SQLAlchemy Core documentation,
noting that no result set is returned.
.. note:: when passing a plain string, the statement is coerced into
a :func:`sqlalchemy.sql.expression.text` construct. This construct
considers symbols with colons, e.g. ``:foo``, to be bound parameters.
To avoid this, ensure that colon symbols are escaped, e.g.
``\:foo``.
:param execution_options: Optional dictionary of
execution options, will be passed to
:meth:`sqlalchemy.engine.Connection.execution_options`.
"""
def f(name: str) -> conv:
"""Indicate a string name that has already had a naming convention
applied to it.
This feature combines with the SQLAlchemy ``naming_convention`` feature
to disambiguate constraint names that have already had naming
conventions applied to them, versus those that have not. This is
necessary in the case that the ``"%(constraint_name)s"`` token
is used within a naming convention, so that it can be identified
that this particular name should remain fixed.
If the :meth:`.Operations.f` is used on a constraint, the naming
convention will not take effect::
op.add_column("t", "x", Boolean(name=op.f("ck_bool_t_x")))
Above, the CHECK constraint generated will have the name
``ck_bool_t_x`` regardless of whether or not a naming convention is
in use.
Alternatively, if a naming convention is in use, and 'f' is not used,
names will be converted along conventions. If the ``target_metadata``
contains the naming convention
``{"ck": "ck_bool_%(table_name)s_%(constraint_name)s"}``, then the
output of the following::
op.add_column("t", "x", Boolean(name="x"))
will be::
CONSTRAINT ck_bool_t_x CHECK (x in (1, 0))
The function is rendered in the output of autogenerate when
a particular constraint name is already converted.
"""
def get_bind() -> Connection:
"""Return the current 'bind'.
Under normal circumstances, this is the
:class:`~sqlalchemy.engine.Connection` currently being used
to emit SQL to the database.
In a SQL script context, this value is ``None``. [TODO: verify this]
"""
def get_context() -> MigrationContext:
"""Return the :class:`.MigrationContext` object that's
currently in use.
"""
def implementation_for(op_cls: Any) -> Callable[..., Any]:
"""Register an implementation for a given :class:`.MigrateOperation`.
This is part of the operation extensibility API.
.. seealso::
:ref:`operation_plugins` - example of use
"""
def inline_literal(
value: Union[str, int], type_: Optional[TypeEngine] = None
) -> _literal_bindparam:
r"""Produce an 'inline literal' expression, suitable for
using in an INSERT, UPDATE, or DELETE statement.
When using Alembic in "offline" mode, CRUD operations
aren't compatible with SQLAlchemy's default behavior surrounding
literal values,
which is that they are converted into bound values and passed
separately into the ``execute()`` method of the DBAPI cursor.
An offline SQL
script needs to have these rendered inline. While it should
always be noted that inline literal values are an **enormous**
security hole in an application that handles untrusted input,
a schema migration is not run in this context, so
literals are safe to render inline, with the caveat that
advanced types like dates may not be supported directly
by SQLAlchemy.
See :meth:`.Operations.execute` for an example usage of
:meth:`.Operations.inline_literal`.
The environment can also be configured to attempt to render
"literal" values inline automatically, for those simple types
that are supported by the dialect; see
:paramref:`.EnvironmentContext.configure.literal_binds` for this
more recently added feature.
:param value: The value to render. Strings, integers, and simple
numerics should be supported. Other types like boolean,
dates, etc. may or may not be supported yet by various
backends.
:param type\_: optional - a :class:`sqlalchemy.types.TypeEngine`
subclass stating the type of this value. In SQLAlchemy
expressions, this is usually derived automatically
from the Python type of the value itself, as well as
based on the context in which the value is used.
.. seealso::
:paramref:`.EnvironmentContext.configure.literal_binds`
"""
def invoke(operation: MigrateOperation) -> Any:
"""Given a :class:`.MigrateOperation`, invoke it in terms of
this :class:`.Operations` instance.
"""
def register_operation(
name: str, sourcename: Optional[str] = None
) -> Callable[[_T], _T]:
"""Register a new operation for this class.
This method is normally used to add new operations
to the :class:`.Operations` class, and possibly the
:class:`.BatchOperations` class as well. All Alembic migration
operations are implemented via this system, however the system
is also available as a public API to facilitate adding custom
operations.
.. seealso::
:ref:`operation_plugins`
"""
def rename_table(
old_table_name: str, new_table_name: str, *, schema: Optional[str] = None
) -> None:
"""Emit an ALTER TABLE to rename a table.
:param old_table_name: old name.
:param new_table_name: new name.
:param schema: Optional schema name to operate within. To control
quoting of the schema outside of the default behavior, use
the SQLAlchemy construct
:class:`~sqlalchemy.sql.elements.quoted_name`.
"""
def run_async(
async_function: Callable[..., Awaitable[_T]], *args: Any, **kw_args: Any
) -> _T:
"""Invoke the given asynchronous callable, passing an asynchronous
:class:`~sqlalchemy.ext.asyncio.AsyncConnection` as the first
argument.
This method allows calling async functions from within the
synchronous ``upgrade()`` or ``downgrade()`` alembic migration
method.
The async connection passed to the callable shares the same
transaction as the connection running in the migration context.
Any additional arg or kw_arg passed to this function are passed
to the provided async function.
.. versionadded:: 1.11
.. note::
This method can be called only when alembic is called using
an async dialect.
"""
| jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | brand | jsoref | 0 |
sqlalchemy/alembic | 1310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
<!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
The misspellings have been reported at https://github.com/jsoref/alembic/actions/runs/6141700632
The action reports that the changes in this PR would make it happy: https://github.com/jsoref/alembic/actions/runs/6141700754
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | docs/build/changelog.rst |
==========
Changelog
==========
.. changelog::
:version: 1.12.1
:include_notes_from: unreleased
.. changelog::
:version: 1.12.0
:released: August 31, 2023
.. change::
:tags: bug, operations
:tickets: 1300
Added support for ``op.drop_constraint()`` to support PostgreSQL
``ExcludeConstraint`` objects, as well as other constraint-like objects
that may be present in third party dialects, by resolving the ``type_``
parameter to be ``None`` for this case. Autogenerate has also been
enhanced to exclude the ``type_`` parameter from rendering within this
command when ``type_`` is ``None``. Pull request courtesy David Hills.
.. change::
:tags: bug, commands
:tickets: 1299
Fixed issue where the ``revision_environment`` directive in ``alembic.ini``
was ignored by the ``alembic merge`` command, leading to issues when other
configurational elements depend upon ``env.py`` being invoked within the
command.
.. change::
:tags: bug, autogenerate
:tickets: 1302
Fixed issue where the ``ForeignKeyConstraint.match`` parameter would not be
rendered in autogenerated migrations. Pull request courtesy Asib
Kamalsada.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Change the default value of
:paramref:`.EnvironmentContext.configure.compare_type` to ``True``.
As Alembic's autogenerate for types was dramatically improved in
version 1.4 released in 2020, the type comparison feature is now much
more reliable so is now enabled by default.
.. change::
:tags: feature, autogenerate
:tickets: 1275
Added new feature to the "code formatter" function which allows standalone
executable tools to be run against code, without going through the Python
interpreter. Known as the ``exec`` runner, it complements the existing
``console_scripts`` runner by allowing non-Python tools such as ``ruff`` to
be used. Pull request courtesy Mihail Milushev.
.. seealso::
:ref:`post_write_hooks_config`
.. changelog::
:version: 1.11.3
:released: August 16, 2023
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 1270
Improved autogenerate comparison of expression-based indexes on PostgreSQL
to produce fewer false positives.
.. change::
:tags: bug, autogenerate
:tickets: 1291
Fixed issue with ``NULLS NOT DISTINCT`` detection in postgresql that
would keep detecting changes in the index or unique constraint.
.. change::
:tags: bug, commands
:tickets: 1273
Added ``encoding="locale"`` setting to the use of Python's
``ConfigParser.read()``, so that a warning is not generated when using the
recently added Python feature ``PYTHONWARNDEFAULTENCODING`` specified in
:pep:`597`. The encoding is passed as the ``"locale"`` string under Python
3.10 and greater, which indicates that the system-level locale should be
used, as was the case already here. Pull request courtesy Kevin Kirsche.
.. changelog::
:version: 1.11.2
:released: August 4, 2023
.. change::
:tags: usecase, typing
:tickets: 1253
Added typing to the default script mako templates.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Added support in autogenerate for ``NULLS NOT DISTINCT`` in
the PostgreSQL dialect.
.. change::
:tags: bug
:tickets: 1261
Fixed format string logged when running a post write hook
Pull request courtesy of Mathieu Défosse.
.. change::
:tags: feature, operations
:tickets: 151
Added parameters if_exists and if_not_exists for index operations.
Pull request courtesy of Max Adrian.
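A hedged usage sketch (the index and table names are hypothetical)::
    op.create_index("ix_user_email", "user", ["email"], if_not_exists=True)
    op.drop_index("ix_user_email", "user", if_exists=True)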
.. changelog::
:version: 1.11.1
:released: May 17, 2023
.. change::
:tags: bug, autogenerate, regression
:tickets: 1243, 1245
As Alembic 1.11.0 is considered a major release (Alembic does not use
semver, nor does its parent project SQLAlchemy; this has been
:ref:`clarified <versioning_scheme>` in the documentation), change
:ticket:`1130` modified calling signatures for most operations to consider
all optional keyword parameters to be keyword-only arguments, to match what
was always documented and generated by autogenerate. However, two of these
changes were identified as possibly problematic without a more formal
deprecation warning being emitted which were the ``table_name`` parameter
to :meth:`.Operations.drop_index`, which was generated positionally by
autogenerate prior to version 0.6.3 released in 2014, and ``type_`` in
:meth:`.Operations.drop_constraint` and
:meth:`.BatchOperations.drop_constraint`, which was documented positionally
in one example in the batch documentation.
These two signatures have been
restored to allow those particular parameters to be passed positionally. A
future change will include formal deprecation paths (with warnings) for
these arguments where they will again become keyword-only in a future
"Significant Minor" release.
.. change::
:tags: bug, typing
:tickets: 1246
Fixed typing use of :class:`~sqlalchemy.schema.Column` and other
generic SQLAlchemy classes.
.. change::
:tags: bug, typing, regression
:tickets: 1244
Restored the output type of :meth:`.Config.get_section` to include
``Dict[str, str]`` as a potential return type, which had been changed to
immutable ``Mapping[str, str]``. When a section is returned and the default
is not used, a mutable dictionary is returned.
.. changelog::
:version: 1.11.0
:released: May 15, 2023
.. change::
:tags: bug, batch
:tickets: 1237
Added placeholder classes for :class:`~.sqla.Computed` and
:class:`~.sqla.Identity` when older 1.x SQLAlchemy versions are in use,
namely prior to SQLAlchemy 1.3.11 when the :class:`~.sqla.Computed`
construct was introduced. Previously these were set to None, however this
could cause issues with certain codepaths that were using ``isinstance()``
such as one within "batch mode".
.. change::
:tags: bug, batch
:tickets: 1221
Correctly pass previously ignored arguments ``insert_before`` and
``insert_after`` in ``batch_alter_column``
.. change::
:tags: change, py3k
:tickets: 1130
Argument signatures of Alembic operations now enforce keyword-only
arguments as passed as keyword and not positionally, such as
:paramref:`.Operations.create_table.schema`,
:paramref:`.Operations.add_column.type_`, etc.
.. change::
:tags: bug, postgresql
:tickets: 1230
Fixed an autogenerate issue with PostgreSQL :class:`.ExcludeConstraint`
that included sqlalchemy functions. The function text was previously
rendered as a plain string without being wrapped in ``text()``.
.. change::
:tags: bug, mysql, regression
:tickets: 1240
Fixed regression caused by :ticket:`1166` released in version 1.10.0 which
caused MySQL unique constraints with multiple columns to not compare
correctly within autogenerate, due to different sorting rules on unique
constraints vs. indexes, which in MySQL are shared constructs.
.. change::
:tags: misc
:tickets: 1220
Update code snippets within docstrings to use ``black`` code formatting.
Pull request courtesy of James Addison.
.. change::
:tags: bug, typing
:tickets: 1093
Updated stub generator script to also add stubs method definitions for the
:class:`.Operations` class and the :class:`.BatchOperations` class obtained
from :meth:`.Operations.batch_alter_table`. As part of this change, the
class hierarchy of :class:`.Operations` and :class:`.BatchOperations` has
been rearranged on top of a common base class :class:`.AbstractOperations`
in order to type correctly, as :class:`.BatchOperations` uses different
method signatures for operations than :class:`.Operations`.
.. change::
:tags: bug, typing
Repaired the return signatures for :class:`.Operations` that mostly
return ``None``, and were erroneously referring to ``Optional[Table]``
in many cases.
.. change::
:tags: usecase, commands
:tickets: 1109
Added quiet option to the command line, using the ``-q/--quiet``
option. This flag will prevent alembic from logging anything
to stdout.
.. change::
:tags: bug, autogenerate
:tickets: 1178
Modified the autogenerate implementation for comparing "server default"
values from user-defined metadata to not apply any quoting to the value
before comparing it to the server-reported default, except for within
dialect-specific routines as needed. This change will affect the format of
the server default as passed to the
:paramref:`.EnvironmentContext.configure.compare_server_default` hook, as
well as for third party dialects that implement a custom
``compare_server_default`` hook in their alembic impl, to be passed "as is"
and not including additional quoting. Custom implementations which rely
on this quoting should adjust their approach based on observed formatting.
.. change::
:tags: bug, api, autogenerate
:tickets: 1235
Fixed issue where :func:`.autogenerate.render_python_code` function did not
provide a default value for the ``user_module_prefix`` variable, leading to
``NoneType`` errors when autogenerate structures included user-defined
types. Added new parameter
:paramref:`.autogenerate.render_python_code.user_module_prefix` to allow
this to be set as well as to default to ``None``. Pull request courtesy
tangkikodo.
.. change::
:tags: usecase, asyncio
:tickets: 1231
Added :meth:`.AbstractOperations.run_async` to the operation module to
allow running async functions in the ``upgrade`` or ``downgrade`` migration
function when running alembic using an async dialect. This function will
receive as first argument an
:class:`~sqlalchemy.ext.asyncio.AsyncConnection` sharing the transaction
used in the migration context.
.. changelog::
:version: 1.10.4
:released: April 24, 2023
.. change::
:tags: postgresql, autogenerate, feature
:tickets: 1213
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL sort option, such as ``ASC`` or ``NULLS FIRST``.
The sort options are correctly detected only when defined using the
sqlalchemy modifier functions, such as ``asc()`` or ``nulls_first()``,
or the equivalent methods.
Passing sort options inside the ``postgresql_ops`` dict is not supported.
.. change::
:tags: bug, operations
:tickets: 1215
Fixed issue where using a directive such as ``op.create_foreign_key()`` to
create a self-referential constraint on a single table where the same
column was present on both sides (e.g. within a composite foreign key)
would produce an error under SQLAlchemy 2.0 and a warning under SQLAlchemy
1.4 indicating that a duplicate column was being added to a table.
.. changelog::
:version: 1.10.3
:released: April 5, 2023
.. change::
:tags: bug, typing
:tickets: 1191, 1201
Fixed various typing issues observed with pyright, including issues
involving the combination of :class:`.Function` and
:meth:`.MigrationContext.begin_transaction`.
.. change::
:tags: bug, autogenerate
:tickets: 1212
Fixed an error raised by Alembic when running autogenerate after removing
a function-based index.
.. changelog::
:version: 1.10.2
:released: March 8, 2023
.. change::
:tags: bug, ops
:tickets: 1196
Fixed regression where Alembic would not run with older SQLAlchemy 1.3
versions prior to 1.3.24 due to a missing symbol. Workarounds have been
applied for older 1.3 versions.
.. changelog::
:version: 1.10.1
:released: March 6, 2023
.. change::
:tags: bug, postgresql
:tickets: 1184
Fixed issue regarding PostgreSQL :class:`.ExcludeConstraint`, where
constraint elements which made use of :func:`.literal_column` could not be
rendered for autogenerate. Additionally, using SQLAlchemy 2.0.5 or greater,
:func:`.text()` constructs are also supported within PostgreSQL
:class:`.ExcludeConstraint` objects for autogenerate render. Pull request
courtesy Jan Katins.
.. change::
:tags: bug, batch, regression
:tickets: 1195
Fixed regression for 1.10.0 where :class:`.Constraint` objects were
suddenly required to have non-None name fields when using batch mode, which
was not previously a requirement.
.. changelog::
:version: 1.10.0
:released: March 5, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1166
Fixed issue in index detection where autogenerate change detection would
consider indexes with the same columns but with different order as equal,
while in general they are not equivalent in how a database will use them.
.. change::
:tags: feature, revisioning
:tickets: 760
Recursive traversal of revision files in a particular revision directory is
now supported, by indicating ``recursive_version_locations = true`` in
alembic.ini. Pull request courtesy ostr00000.
.. change::
:tags: bug, autogenerate, sqlite
:tickets: 1165
Fixed issue where indexes on SQLite which include SQL expressions would not
compare correctly, generating false positives under autogenerate. These
indexes are now skipped, generating a warning, in the same way that
expression-based indexes on PostgreSQL are skipped and generate warnings
when SQLAlchemy 1.x installations are in use. Note that reflection of
SQLite expression-based indexes continues to not yet be supported under
SQLAlchemy 2.0, even though PostgreSQL expression-based indexes have now
been implemented.
.. change::
:tags: bug, mssql
:tickets: 1187
Properly escape constraint name on SQL Server when dropping
a column while specifying ``mssql_drop_default=True`` or
``mssql_drop_check=True`` or ``mssql_drop_foreign_key=True``.
.. change::
:tags: usecase, autogenerate, postgresql
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL expressions, when using SQLAlchemy 2.0; the previous warning
that such indexes were skipped are removed when the new functionality
is in use. When using SQLAlchemy versions prior to the 2.0 series,
the indexes continue to be skipped with a warning.
.. changelog::
:version: 1.9.4
:released: February 16, 2023
.. change::
:tags: bug, mssql
:tickets: 1177
Ongoing fixes for SQL Server server default comparisons under autogenerate,
adjusting for SQL Server's collapsing of whitespace between SQL function
arguments when reporting on a function-based server default, as well as its
arbitrary addition of parenthesis within arguments; the approach has now
been made more aggressive by stripping the two default strings to compare
of all whitespace, parenthesis, and quoting characters.
.. change::
:tags: bug, postgresql
Fixed PostgreSQL server default comparison to handle SQL expressions
sent as ``text()`` constructs, such as ``text("substring('name', 1, 3)")``,
which previously would raise errors when attempting to run a server-based
comparison.
.. change::
:tags: bug, autogenerate
:tickets: 1180
Removed a mis-use of the
:paramref:`.EnvironmentContext.configure.render_item` callable where the
"server_default" renderer would be erroneously used within the server
default comparison process, which is working against SQL expressions, not
Python code.
.. change::
:tags: bug, commands
Fixed regression introduced in 1.7.0 where the "config" object passed to
the template context when running the :func:`.merge` command
programmatically failed to be correctly populated. Pull request courtesy
Brendan Gann.
.. changelog::
:version: 1.9.3
:released: February 7, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1167
Fixed issue where rendering of user-defined types that then went onto use
the ``.with_variant()`` method would fail to render, if using SQLAlchemy
2.0's version of variants.
.. changelog::
:version: 1.9.2
:released: January 14, 2023
.. change::
:tags: bug, typing
:tickets: 1146, 1147
Fixed typing definitions for :meth:`.EnvironmentContext.get_x_argument`.
Typing stubs are now generated for overloaded proxied methods such as
:meth:`.EnvironmentContext.get_x_argument`.
.. change::
:tags: bug, autogenerate
:tickets: 1152
Fixed regression caused by :ticket:`1145` where the string transformations
applied to server defaults caused expressions such as ``(getdate())`` to no
longer compare as equivalent on SQL Server, others.
.. changelog::
:version: 1.9.1
:released: December 23, 2022
.. change::
:tags: bug, autogenerate
:tickets: 1145
Fixed issue where server default compare would not work for string defaults
that contained backslashes, due to mis-rendering of these values when
comparing their contents.
.. change::
:tags: bug, oracle
Implemented basic server default comparison for the Oracle backend;
previously, Oracle's formatting of reflected defaults prevented any
matches from occurring.
.. change::
:tags: bug, sqlite
Adjusted SQLite's compare server default implementation to better handle
defaults with or without parens around them, from both the reflected and
the local metadata side.
.. change::
:tags: bug, mssql
Adjusted SQL Server's compare server default implementation to better
handle defaults with or without parens around them, from both the reflected
and the local metadata side.
.. changelog::
:version: 1.9.0
:released: December 15, 2022
.. change::
:tags: feature, commands
:tickets: 724
Added new Alembic command ``alembic check``. This performs the widely
requested feature of running an "autogenerate" comparison between the
current database and the :class:`.MetaData` that's currently set up for
autogenerate, returning an error code if the two do not match, based on
current autogenerate settings. Pull request courtesy Nathan Louie.
.. seealso::
:ref:`alembic_check`
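A hedged invocation sketch, e.g. as a CI step::
    alembic check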
.. change::
:tags: bug, tests
Fixed issue in tox.ini file where changes in the tox 4.0 series to the
format of "passenv" caused tox to not function correctly, in particular
raising an error as of tox 4.0.6.
.. change::
:tags: bug, typing
:tickets: 1110
Fixed typing issue where :paramref:`.revision.process_revision_directives`
was not fully typed; additionally ensured all ``Callable`` and ``Dict``
arguments to :meth:`.EnvironmentContext.configure` include parameters in
the typing declaration.
Additionally updated the codebase for Mypy 0.990 compliance.
.. changelog::
:version: 1.8.1
:released: July 13, 2022
.. change::
:tags: bug, sqlite
:tickets: 1065
Fixed bug where the SQLite implementation of
:meth:`.Operations.rename_table` would render an explicit schema name for
both the old and new table name, which while is the standard ALTER syntax,
is not accepted by SQLite's syntax which doesn't support a rename across
schemas. In particular, the syntax issue would prevent batch mode from
working for SQLite databases that made use of attached databases (which are
treated as "schemas" in SQLAlchemy).
.. change::
:tags: bug, batch
:tickets: 1021
Added an error raise for the condition where
:meth:`.Operations.batch_alter_table` is used in ``--sql`` mode, where the
operation requires table reflection, as is the case when running against
SQLite without giving it a fixed ``Table`` object. Previously the operation
would fail with an internal error. To get a "move and copy" batch
operation as a SQL script without connecting to a database,
a ``Table`` object should be passed to the
:paramref:`.Operations.batch_alter_table.copy_from` parameter so that
reflection may be skipped.
.. changelog::
:version: 1.8.0
:released: May 31, 2022
.. change::
:tags: feature, typing
:tickets: 764
:pep:`484` typing annotations have been added to the ``env.py`` and
revision template files within migration templates. Pull request by Nikita
Sobolev.
.. change::
:tags: usecase, operations
:tickets: 1037
The ``op.drop_table()`` operation directive will now trigger the
``before_drop()`` and ``after_drop()`` DDL event hooks at the table level,
which is similar to how the ``before_create()`` and ``after_create()``
hooks are triggered by the ``op.create_table()`` directive. Note that as
``op.drop_table()`` accepts only a table name and optional schema name, the
``Table`` object received by the event will not have any information within
it other than the table name and schema name.
.. change::
:tags: installation, changed
:tickets: 1025
Alembic 1.8 now supports Python 3.7 and above.
.. change::
:tags: changed, environment
:tickets: 987
The "Pylons" environment template has been removed as of Alembic 1.8. This
template was based on the very old pre-Pyramid Pylons web framework which
has been long superseded by Pyramid.
.. change::
:tags: bug, revisioning
:tickets: 1026
Fixed issue where a downgrade using a relative revision would
fail in case of multiple branches with a single effectively
head due to interdependencies between revisions.
.. change::
:tags: usecase, commands
:tickets: 1027
Added new token ``epoch`` to the ``file_template`` option, which will
populate the integer epoch as determined by ``int(create_date.timestamp())``.
Pull request courtesy Caio Carvalho.
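A hedged ``alembic.ini`` sketch combining the new token with the existing
default tokens::
    file_template = %%(epoch)s_%%(rev)s_%%(slug)s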
.. change::
:tags: bug, batch
:tickets: 1034
Fixed issue in batch mode where CREATE INDEX would not use a new column
name in the case of a column rename.
.. changelog::
:version: 1.7.7
:released: March 14, 2022
.. change::
:tags: bug, operations
:tickets: 1004
Fixed issue where using :meth:`.Operations.create_table` in conjunction
with a :class:`.CheckConstraint` that referred to table-bound
:class:`.Column` objects rather than string expressions would be added to
the parent table potentially multiple times, resulting in an incorrect DDL
sequence. Pull request courtesy Nicolas CANIART.
.. change::
:tags: bug, environment
:tickets: 986
The ``logging.fileConfig()`` line in ``env.py`` templates, which is used
to setup Python logging for the migration run, is now conditional on
:attr:`.Config.config_file_name` not being ``None``. Otherwise, the line
is skipped as there is no default logging configuration present.
.. change::
:tags: bug, mssql
:tickets: 977
Fixed bug where an :meth:`.Operations.alter_column` operation would change
a "NOT NULL" column to "NULL" by emitting an ALTER COLUMN statement that
did not specify "NOT NULL". (In the absence of "NOT NULL" T-SQL was
implicitly assuming "NULL"). An :meth:`.Operations.alter_column` operation
that specifies :paramref:`.Operations.alter_column.type` should also
include either :paramref:`.Operations.alter_column.nullable` or
:paramref:`.Operations.alter_column.existing_nullable` to inform Alembic as
to whether the emitted DDL should include "NULL" or "NOT NULL"; a warning
is now emitted if this is missing under this scenario.
.. changelog::
:version: 1.7.6
:released: February 1, 2022
.. change::
:tags: bug, batch, regression
:tickets: 982
Fixed regression where usage of a ``with_variant()`` datatype in
conjunction with the ``existing_type`` option of ``op.alter_column()``
under batch mode would lead to an internal exception.
.. change::
:tags: usecase, commands
:tickets: 964
Add a new command ``alembic ensure_version``, which will ensure that the
Alembic version table is present in the target database, but does not
alter its contents. Pull request courtesy Kai Mueller.
.. change::
:tags: bug, autogenerate
Implemented support for recognizing and rendering SQLAlchemy "variant"
types going forward into SQLAlchemy 2.0, where the architecture of
"variant" datatypes will be changing.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 968
Added a rule to the MySQL impl so that the translation between JSON /
LONGTEXT is accommodated by autogenerate, treating LONGTEXT from the server
as equivalent to an existing JSON in the model.
.. change::
:tags: mssql
Removed a warning raised by SQLAlchemy when dropping constraints
on MSSQL regarding statement caching.
.. changelog::
:version: 1.7.5
:released: November 11, 2021
.. change::
:tags: bug, tests
Adjustments to the test suite to accommodate for error message changes
occurring as of SQLAlchemy 1.4.27.
.. changelog::
:version: 1.7.4
:released: October 6, 2021
.. change::
:tags: bug, regression
:tickets: 934
Fixed a regression that prevented the use of post-write hooks
on Python versions lower than 3.9.
.. change::
:tags: bug, environment
:tickets: 944
Fixed issue where the :meth:`.MigrationContext.autocommit_block` feature
would fail to function when using a SQLAlchemy engine using 2.0 future
mode.
.. changelog::
:version: 1.7.3
:released: September 17, 2021
.. change::
:tags: bug, mypy
:tickets: 914
Fixed type annotations for the "constraint_name" argument of operations
``create_primary_key()``, ``create_foreign_key()``. Pull request courtesy
TilmanK.
.. changelog::
:version: 1.7.2
:released: September 17, 2021
.. change::
:tags: bug, typing
:tickets: 900
Added missing attributes from context stubs.
.. change::
:tags: bug, mypy
:tickets: 897
Fixed an import in one of the .pyi files that was triggering an
assertion error in some versions of mypy.
.. change::
:tags: bug, regression, ops
:tickets: 920
Fixed issue where registration of custom ops was prone to failure due to
the registration process running ``exec()`` on generated code that as of
the 1.7 series includes pep-484 annotations, which in the case of end user
code would result in name resolution errors when the exec occurs. The logic
in question has been altered so that the annotations are rendered as
forward references so that the ``exec()`` can proceed.
.. changelog::
:version: 1.7.1
:released: August 30, 2021
.. change::
:tags: bug, installation
:tickets: 893
Corrected "universal wheel" directive in setup.cfg so that building a wheel
does not target Python 2. The PyPI files index for 1.7.0 was corrected
manually. Pull request courtesy layday.
.. change::
:tags: bug, pep484
:tickets: 895
Fixed issue in generated .pyi files where default values for ``Optional``
arguments were missing, thereby causing mypy to consider them as required.
.. change::
:tags: bug, regression, batch
:tickets: 896
Fixed regression in batch mode due to :ticket:`883` where the "auto" mode
of batch would fail to accommodate any additional migration directives
beyond encountering an ``add_column()`` directive, due to a mis-application
of the conditional logic that was added as part of this change, leading to
"recreate" mode not being used in cases where it is required for SQLite
such as for unique constraints.
.. changelog::
:version: 1.7.0
:released: August 30, 2021
.. change::
:tags: bug, operations
:tickets: 879
Fixed regression due to :ticket:`803` where the ``.info`` and ``.comment``
attributes of ``Table`` would be lost inside of the :class:`.DropTableOp`
class, which when "reversed" into a :class:`.CreateTableOp` would then have
lost these elements. Pull request courtesy Nicolas CANIART.
.. change::
:tags: feature, environment
:tickets: 842
Enhance ``version_locations`` parsing to handle paths containing spaces.
The new configuration option ``version_path_separator`` specifies the
character to use when splitting the ``version_locations`` string. The
default for new configurations is ``version_path_separator = os``,
which will use ``os.pathsep`` (e.g., ``;`` on Windows).
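A hedged ``alembic.ini`` sketch (the paths shown are hypothetical; note the
space inside the second path)::
    version_path_separator = ;
    version_locations = alembic/versions;alembic/legacy versions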
.. change::
:tags: installation, changed
Alembic 1.7 now supports Python 3.6 and above; support for prior versions
including Python 2.7 has been dropped.
.. change::
:tags: bug, sqlite, batch
:tickets: 883
Batch "auto" mode will now select for "recreate" if the ``add_column()``
operation is used on SQLite, and the column itself meets the criteria for
SQLite where ADD COLUMN is not allowed, in this case a functional or
parenthesized SQL expression or a ``Computed`` (i.e. generated) column.
.. change::
:tags: changed, installation
:tickets: 674
Make the ``python-dateutil`` library an optional dependency.
This library is only required if the ``timezone`` option
is used in the Alembic configuration.
An extra require named ``tz`` is available with
``pip install alembic[tz]`` to install it.
.. change::
:tags: bug, commands
:tickets: 856
Re-implemented the ``python-editor`` dependency as a small internal
function to avoid the need for external dependencies.
.. change::
:tags: usecase, batch
:tickets: 884
Named CHECK constraints are now supported by batch mode, and will
automatically be part of the recreated table assuming they are named. They
also can be explicitly dropped using ``op.drop_constraint()``. For
"unnamed" CHECK constraints, these are still skipped as they cannot be
distinguished from the CHECK constraints that are generated by the
``Boolean`` and ``Enum`` datatypes.
Note that this change may require adjustments to migrations that drop or
rename columns which feature an associated named check constraint, such
that an additional ``op.drop_constraint()`` directive should be added for
that named constraint as there will no longer be an associated column
for it; for the ``Boolean`` and ``Enum`` datatypes, an ``existing_type``
keyword may be passed to ``BatchOperations.drop_constraint`` as well.
.. seealso::
:ref:`batch_schematype_constraints`
:ref:`batch_check_constraints`
.. change::
:tags: changed, installation
:tickets: 885
The dependency on ``pkg_resources`` which is part of ``setuptools`` has
been removed, so there is no longer any runtime dependency on
``setuptools``. The functionality has been replaced with
``importlib.metadata`` and ``importlib.resources`` which are both part of
Python std.lib, or via the PyPI dependency ``importlib-metadata`` for Python
version < 3.8 and ``importlib-resources`` for Python version < 3.9
(while importlib.resources was added to Python in 3.7, it did not include
the "files" API until 3.9).
.. change::
:tags: feature, tests
:tickets: 855
Created a "test suite" similar to the one for SQLAlchemy, allowing
developers of third-party dialects to test their code against a set of
Alembic tests that have been specially selected to exercise
back-end database operations. At the time of release,
third-party dialects that have adopted the Alembic test suite to verify
compatibility include
`CockroachDB <https://pypi.org/project/sqlalchemy-cockroachdb/>`_ and
`SAP ASE (Sybase) <https://pypi.org/project/sqlalchemy-sybase/>`_.
.. change::
:tags: bug, postgresql
:tickets: 874
Fixed issue where usage of the PostgreSQL ``postgresql_include`` option
within a :meth:`.Operations.create_index` would raise a KeyError, as the
additional column(s) need to be added to the table object used by the
construct internally. The issue is equivalent to the SQL Server issue fixed
in :ticket:`513`. Pull request courtesy Steven Bronson.
.. change::
:tags: feature, general
pep-484 type annotations have been added throughout the library.
Additionally, stub .pyi files have been added for the "dynamically"
generated Alembic modules ``alembic.op`` and ``alembic.config``, which
include complete function signatures and docstrings, so that the functions
in these namespaces will have both IDE support (vscode, pycharm, etc) as
well as support for typing tools like Mypy. The files themselves are
statically generated from their source functions within the source tree.
.. changelog::
:version: 1.6.5
:released: May 27, 2021
.. change::
:tags: bug, autogenerate
:tickets: 849
Fixed issue where dialect-specific keyword arguments within the
:class:`.DropIndex` operation directive would not render in the
autogenerated Python code. As support was improved for adding dialect
specific arguments to directives as part of :ticket:`803`, in particular
arguments such as "postgresql_concurrently" which apply to the actual
create/drop of the index, support was needed for these to render even in a
drop index operation. Pull request courtesy Jet Zhou.
.. changelog::
:version: 1.6.4
:released: May 24, 2021
.. change::
:tags: bug, regression, op directives
:tickets: 848
Fixed regression caused by just fixed :ticket:`844` that scaled back the
filter for ``unique=True/index=True`` too far such that these directives no
longer worked for the ``op.create_table()`` op, this has been fixed.
.. changelog::
:version: 1.6.3
:released: May 21, 2021
.. change::
:tags: bug, regression, autogenerate
:tickets: 844
Fixed 1.6-series regression where ``UniqueConstraint`` and to a lesser
extent ``Index`` objects would be doubled up in the generated model when
the ``unique=True`` / ``index=True`` flags were used.
.. change::
:tags: bug, autogenerate
:tickets: 839
Fixed a bug where paths defined in post-write hook options
would be wrongly escaped in non posix environment (Windows).
.. change::
:tags: bug, regression, versioning
:tickets: 843
Fixed regression where a revision file that contained its own down revision
as a dependency would cause an endless loop in the traversal logic.
.. changelog::
:version: 1.6.2
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 839
Fixed additional regression nearly the same as that of :ticket:`838` just
released in 1.6.1 but within a slightly different codepath, where "alembic
downgrade head" (or equivalent) would fail instead of iterating no
revisions.
.. changelog::
:version: 1.6.1
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 838
Fixed regression in new revisioning traversal where "alembic downgrade
base" would fail if the database itself were clean and unversioned;
additionally repairs the case where downgrade would fail if attempting
to downgrade to the current head that is already present.
.. changelog::
:version: 1.6.0
:released: May 3, 2021
.. change::
:tags: bug, autogenerate
:tickets: 803
Refactored the implementation of :class:`.MigrateOperation` constructs such
as :class:`.CreateIndexOp`, :class:`.CreateTableOp`, etc. so that they no
longer rely upon maintaining a persistent version of each schema object
internally; instead, the state variables of each operation object will be
used to produce the corresponding construct when the operation is invoked.
The rationale is so that environments which make use of
operation-manipulation schemes such as those discussed in
:ref:`autogen_rewriter` are better supported, allowing end-user code to
manipulate the public attributes of these objects which will then be
expressed in the final output, an example is
``some_create_index_op.kw["postgresql_concurrently"] = True``.
Previously, these objects when generated from autogenerate would typically
hold onto the original, reflected element internally without honoring the
other state variables of each construct, preventing the public API from
working.
.. change::
:tags: bug, environment
:tickets: 829
Fixed regression caused by the SQLAlchemy 1.4/2.0 compatibility switch
where calling ``.rollback()`` or ``.commit()`` explicitly within the
``context.begin_transaction()`` context manager would cause it to fail when
the block ended, as it did not expect that the transaction was manually
closed.
.. change::
:tags: bug, autogenerate
:tickets: 827
Improved the rendering of ``op.add_column()`` operations when adding
multiple columns to an existing table, so that the order of these
statements matches the order in which the columns were declared in the
application's table metadata. Previously the added columns were being
sorted alphabetically.
.. change::
:tags: feature, autogenerate
:tickets: 819
Fix the documentation regarding the default command-line argument position of
the revision script filename within the post-write hook arguments. Implement a
``REVISION_SCRIPT_FILENAME`` token, enabling the position to be changed. Switch
from ``str.split()`` to ``shlex.split()`` for more robust command-line argument
parsing.
.. change::
:tags: feature
:tickets: 822
Implement a ``.cwd`` (current working directory) suboption for post-write hooks
(of type ``console_scripts``). This is useful for tools like pre-commit, which
rely on the working directory to locate the necessary config files. Add
pre-commit as an example to the documentation. Minor change: rename some variables
from ticket #819 to improve readability.
.. change::
:tags: bug, versioning
:tickets: 765, 464
The algorithm used for calculating downgrades/upgrades/iterating
revisions has been rewritten, to resolve ongoing issues of branches
not being handled consistently particularly within downgrade operations,
as well as for overall clarity and maintainability. This change includes
that a deprecation warning is emitted if an ambiguous command such
as "downgrade -1" when multiple heads are present is given.
In particular, the change implements a long-requested use case of allowing
downgrades of a single branch to a branchpoint.
Huge thanks to Simon Bowly for their impressive efforts in successfully
tackling this very difficult problem.
.. change::
:tags: bug, batch
:tickets: 799
Added missing ``batch_op.create_table_comment()``,
``batch_op.drop_table_comment()`` directives to batch ops.
.. changelog::
:version: 1.5.8
:released: March 23, 2021
.. change::
:tags: bug, environment
:tickets: 816
Fixed regression caused by SQLAlchemy 1.4 where the "alembic current"
command would fail due to changes in the ``URL`` object.
.. changelog::
:version: 1.5.7
:released: March 11, 2021
.. change::
:tags: bug, autogenerate
:tickets: 813
Adjusted the recently added
:paramref:`.EnvironmentContext.configure.include_name` hook to accommodate
for additional object types such as "views" that don't have a parent table,
to support third party recipes and extensions. Pull request courtesy Oliver
Rice.
.. changelog::
:version: 1.5.6
:released: March 5, 2021
.. change::
:tags: bug, mssql, operations
:tickets: 812
Fixed bug where the "existing_type" parameter, which the MSSQL dialect
requires in order to change the nullability of a column in the absence of
also changing the column type, would cause an ALTER COLUMN operation to
incorrectly render a second ALTER statement without the nullability if a
new type were also present, as the MSSQL-specific contract did not
anticipate all three of "nullability", ``"type_"`` and "existing_type" being
sent at the same time.
.. change::
:tags: template
:tickets: 805
Add async template to Alembic to bootstrap environments that use
async DBAPI. Updated the cookbook to include a migration guide
on how to adapt an existing environment for use with async DBAPI drivers.
.. changelog::
:version: 1.5.5
:released: February 20, 2021
.. change::
:tags: bug
Adjusted the use of SQLAlchemy's ".copy()" internals to use "._copy()"
for version 1.4.0, as this method is being renamed.
.. change::
:tags: bug, environment
:tickets: 797
Added new config file option ``prepend_sys_path``, which is a series of
paths that will be prepended to sys.path; the default value in newly
generated alembic.ini files is ".". This fixes a long-standing issue
where for some reason running the alembic command line would not place the
local "." path in sys.path, meaning an application locally present in "."
and importable through normal channels, e.g. python interpreter, pytest,
etc. would not be located by Alembic, even though the ``env.py`` file is
loaded relative to the current path when ``alembic.ini`` contains a
relative path. To enable for existing installations, add the option to the
alembic.ini file as follows::
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
.. seealso::
:ref:`installation` - updated documentation reflecting that local
installation of the project is not necessary if running the Alembic cli
from the local path.
.. changelog::
:version: 1.5.4
:released: February 3, 2021
.. change::
:tags: bug, versioning
:tickets: 789
Fixed bug in versioning model where a downgrade across a revision with a
dependency on another branch, yet an ancestor is also dependent on that
branch, would produce an erroneous state in the alembic_version table,
making upgrades impossible without manually repairing the table.
.. changelog::
:version: 1.5.3
:released: January 29, 2021
.. change::
:tags: bug, autogenerate
:tickets: 786
Changed the default ordering of "CREATE" and "DROP" statements for indexes and
unique constraints within the autogenerate process, so that for example in
an upgrade() operation, a particular index or constraint that is to be
replaced such as for a casing convention change will not produce any naming
conflicts. For foreign key constraint objects, this is already how
constraints are ordered, and for table objects, users would normally want
to use :meth:`.Operations.rename_table` in any case.
.. change::
:tags: bug, autogenerate, mssql
:tickets: 787
Fixed assorted autogenerate issues with SQL Server:
* ignore default reflected identity on primary_key columns
* improve server default comparison
.. change::
:tags: bug, mysql, autogenerate
:tickets: 788
Fixed issue where autogenerate rendering of ``op.alter_column()`` would
fail to include MySQL ``existing_nullable=False`` if the column were part
of a primary key constraint within the table metadata.
.. changelog::
:version: 1.5.2
:released: January 20, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 784
Fixed regression where new "loop detection" feature introduced in
:ticket:`757` produced false positives for revision names that have
overlapping substrings between revision number and down revision and/or
dependency, if the downrev/dependency were not in sequence form.
.. change::
:tags: bug, environment
:tickets: 782
Fixed regression where Alembic would fail to create a transaction properly
if the :class:`sqlalchemy.engine.Connection` were a so-called "branched"
connection, that is, one where the ``.connect()`` method had been called to
create a "sub" connection.
.. changelog::
:version: 1.5.1
:released: January 19, 2021
.. change::
:tags: bug, installation, commands
:tickets: 780
Fixed installation issue where the "templates" directory was not being
installed, preventing commands like "list_templates" and "init" from
working.
.. changelog::
:version: 1.5.0
:released: January 18, 2021
.. change::
:tags: usecase, operations
:tickets: 730
Added support for rendering of "identity" elements on
:class:`.Column` objects, supported in SQLAlchemy via
the :class:`.Identity` element introduced in version 1.4.
Adding columns with identity is supported on PostgreSQL,
MSSQL and Oracle. Changing the identity options or removing
it is supported only on PostgreSQL and Oracle.
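
A hedged sketch of rendering such a column in a migration; the table and
column names are hypothetical and SQLAlchemy 1.4 or greater is assumed::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.add_column(
            "account",
            sa.Column("account_number", sa.Integer, sa.Identity(start=1)),
        )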
.. change::
:tags: changed, environment
To accommodate SQLAlchemy 1.4 and 2.0, the migration model now no longer
assumes that the SQLAlchemy Connection will autocommit an individual
operation. This essentially means that for databases that use
non-transactional DDL (pysqlite current driver behavior, MySQL), there is
still a BEGIN/COMMIT block that will surround each individual migration.
Databases that support transactional DDL should continue to have the
same flow, either per migration or per-entire run, depending on the
value of the :paramref:`.Environment.configure.transaction_per_migration`
flag.
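
For illustration, a fragment of an ``env.py`` that opts into per-migration
transactions; the engine setup is elided::

    from alembic import context

    def run_migrations_online():
        connectable = ...  # engine creation elided
        with connectable.connect() as connection:
            context.configure(
                connection=connection,
                transaction_per_migration=True,  # BEGIN/COMMIT per revision
            )
            with context.begin_transaction():
                context.run_migrations()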
.. change::
:tags: changed, environment
A :class:`.CommandError` is raised if a ``sqlalchemy.engine.Engine`` is
passed to the :meth:`.MigrationContext.configure` method instead of a
``sqlalchemy.engine.Connection`` object. Previously, this would be a
warning only.
.. change::
:tags: bug, operations
:tickets: 753
Modified the ``add_column()`` operation such that the ``Column`` object in
use is shallow copied to a new instance if that ``Column`` is already
attached to a ``table()`` or ``Table``. This accommodates for the change
made in SQLAlchemy issue #5618 which prohibits a ``Column`` from being
associated with multiple ``table()`` objects. This resumes support for
using a ``Column`` inside of an Alembic operation that already refers to a
parent ``table()`` or ``Table`` as well as allows operation objects just
autogenerated to work.
.. change::
:tags: feature, autogenerate
:tickets: 650
Added new hook :paramref:`.EnvironmentContext.configure.include_name`,
which complements the
:paramref:`.EnvironmentContext.configure.include_object` hook by providing
a means of preventing objects of a certain name from being autogenerated
**before** the SQLAlchemy reflection process takes place, and notably
includes explicit support for passing each schema name when
:paramref:`.EnvironmentContext.configure.include_schemas` is set to True.
This is most important especially for environments that make use of
:paramref:`.EnvironmentContext.configure.include_schemas` where schemas are
actually databases (e.g. MySQL) in order to prevent reflection sweeps of
the entire server.
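
A minimal ``env.py`` sketch of the hook; the schema names are hypothetical::

    from alembic import context

    def include_name(name, type_, parent_names):
        if type_ == "schema":
            # return False to skip reflecting a schema entirely
            return name in ["schema_one", "schema_two"]
        return True

    context.configure(
        # ...
        include_schemas=True,
        include_name=include_name,
    )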
.. seealso::
:ref:`autogenerate_include_hooks` - new documentation section
.. change::
:tags: removed, autogenerate
The long deprecated
:paramref:`.EnvironmentContext.configure.include_symbol` hook is removed.
The :paramref:`.EnvironmentContext.configure.include_object`
and :paramref:`.EnvironmentContext.configure.include_name`
hooks both achieve the goals of this hook.
.. change::
:tags: bug, autogenerate
:tickets: 721
Added rendering for the ``Table.prefixes`` element to autogenerate so that
the rendered Python code includes these directives. Pull request courtesy
Rodrigo Ce Moretto.
.. change::
:tags: bug, batch
:tickets: 761
Added missing "create comment" feature for columns that are altered in
batch migrations.
.. change::
:tags: changed
:tickets: 748
Alembic 1.5.0 now supports **Python 2.7 and Python 3.6 and above**, as well
as **SQLAlchemy 1.3.0 and above**. Support is removed for Python 3
versions prior to 3.6 and SQLAlchemy versions prior to the 1.3 series.
.. change::
:tags: bug, batch
:tickets: 773
Made an adjustment to the PostgreSQL dialect to allow it to work more
effectively in batch mode, where a datatype like Boolean or non-native Enum
that may have embedded rules to generate CHECK constraints will be more
correctly handled in that these constraints usually will not have been
generated on the PostgreSQL backend; previously it would inadvertently
assume they existed unconditionally in a special PG-only "drop constraint"
step.
.. change::
:tags: feature, versioning
:tickets: 757
The revision tree is now checked for cycles and loops between revision
files when the revision environment is loaded up. Scenarios such as a
revision pointing to itself, or a revision that can reach itself via a
loop, are handled and will raise the :class:`.CycleDetected` exception when
the environment is loaded (expressed from the Alembic commandline as a
failure message and nonzero return code). Previously, these situations were
silently ignored up front, and the behavior of revision traversal would
either be silently incorrect, or would produce errors such as
:class:`.RangeNotAncestorError`. Pull request courtesy Koichiro Den.
.. change::
:tags: usecase, commands
Add ``__main__.py`` file to alembic package to support invocation
with ``python -m alembic``.
.. change::
:tags: removed, commands
Removed deprecated ``--head_only`` option to the ``alembic current``
command
.. change::
:tags: removed, operations
Removed legacy parameter names from operations; these have been emitting
warnings since version 0.8. In the case that legacy version files have not
yet been updated, these can be modified directly in order to maintain
compatibility (a before/after sketch follows the list):
* :meth:`.Operations.drop_constraint` - "type" (use ``"type_"``) and "name"
(use "constraint_name")
* :meth:`.Operations.create_primary_key` - "cols" (use "columns") and
"name" (use "constraint_name")
* :meth:`.Operations.create_unique_constraint` - "name" (use
"constraint_name"), "source" (use "table_name") and "local_cols" (use
"columns")
* :meth:`.Operations.batch_create_unique_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_foreign_key` - "name" (use "constraint_name"),
"source" (use "source_table"), "referent" (use "referent_table")
* :meth:`.Operations.batch_create_foreign_key` - "name" (use
"constraint_name"), "referent" (use "referent_table")
* :meth:`.Operations.create_check_constraint` - "name" (use
"constraint_name"), "source" (use "table_name")
* :meth:`.Operations.batch_create_check_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_index` - "name" (use "index_name")
* :meth:`.Operations.drop_index` - "name" (use "index_name"), "tablename"
(use "table_name")
* :meth:`.Operations.batch_drop_index` - "name" (use "index_name"),
* :meth:`.Operations.create_table` - "name" (use "table_name")
* :meth:`.Operations.drop_table` - "name" (use "table_name")
* :meth:`.Operations.alter_column` - "name" (use "new_column_name")
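
A before/after sketch for one of these renames; the constraint and table
names are hypothetical::

    from alembic import op

    # legacy spelling, no longer accepted:
    # op.drop_constraint(type="foreignkey", name="fk_account_user")

    # current spelling:
    op.drop_constraint(
        constraint_name="fk_account_user",
        table_name="account",
        type_="foreignkey",
    )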
.. changelog::
:version: 1.4.3
:released: September 11, 2020
.. change::
:tags: bug, sqlite, batch
:tickets: 711
Added support to drop named CHECK constraints that are specified as part of
a column, rather than table wide. Previously, only constraints associated
with the table were considered.
.. change::
:tags: bug, ops, mysql
:tickets: 736
Fixed issue where the MySQL dialect would not correctly render the server
default of a column in an alter operation, if the operation were
programmatically generated from an autogenerate pass as it would not
accommodate for the full structure of the DefaultClause construct.
.. change::
:tags: bug, sqlite, batch
:tickets: 697
Fixed issue where the CAST applied to a JSON column when copying a SQLite
table during batch mode would cause the data to be lost, as SQLite's CAST
with JSON appears to convert the data to the value "0". The CAST is now
skipped in a dialect-specific manner, including for JSON columns on SQLite.
Pull request courtesy Sebastián Ramírez.
.. change::
:tags: bug, commands
:tickets: 694
The ``alembic current`` command no longer creates an ``alembic_version``
table in the database if one does not exist already, returning no version
as the current version. This allows checking for migrations in parallel
without introducing race conditions. Pull request courtesy Nikolay
Edigaryev.
.. change::
:tags: bug, batch
Fixed issue where columns in a foreign-key referenced table would be
replaced with null-type columns during a batch operation; while this did
not generally have any side effects, it could theoretically impact a batch
operation that also targets that table directly and also would interfere
with future changes to the ``.append_column()`` method to disallow implicit
replacement of columns.
.. change::
:tags: bug, mssql
:tickets: 716
Fixed issue where the ``mssql_drop_foreign_key=True`` flag on
``op.drop_column`` would lead to incorrect syntax error due to a typo in the
SQL emitted, same typo was present in the test as well so it was not
detected. Pull request courtesy Oleg Shigorin.
.. changelog::
:version: 1.4.2
:released: March 19, 2020
.. change::
:tags: usecase, autogenerate
:tickets: 669
Adjusted autogen comparison to accommodate for backends that support
computed column reflection, dependent on SQLAlchemy version 1.3.16 or
higher. This emits a warning if the SQL expression inside of a
:class:`.Computed` value changes between the metadata and the database, as
these expressions can't be changed without dropping and recreating the
column.
.. change::
:tags: bug, tests
:tickets: 668
Fixed an issue that prevented the test suite from running with the
recently released py.test 5.4.0.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 671
Fixed more false-positive failures produced by the new "compare type" logic
first added in :ticket:`605`, particularly impacting MySQL string types
regarding flags such as "charset" and "collation".
.. change::
:tags: bug, op directives, oracle
:tickets: 670
Fixed issue in Oracle backend where a table RENAME with a schema-qualified
name would include the schema in the "to" portion, which is rejected by
Oracle.
.. changelog::
:version: 1.4.1
:released: March 1, 2020
.. change::
:tags: bug, autogenerate
:tickets: 661
Fixed regression caused by the new "type comparison" logic introduced in
1.4 as part of :ticket:`605` where comparisons of MySQL "unsigned integer"
datatypes would produce false positives, as the regular expression logic
was not correctly parsing the "unsigned" token when MySQL's default display
width would be returned by the database. Pull request courtesy Paul
Becotte.
.. change::
:tags: bug, environment
:tickets: 663
Error message for "path doesn't exist" when loading up script environment
now displays the absolute path. Pull request courtesy Rowan Hart.
.. change::
:tags: bug, autogenerate
:tickets: 654
Fixed regression in 1.4.0 due to :ticket:`647` where unique constraint
comparison with mixed case constraint names while not using a naming
convention would produce false positives during autogenerate.
.. change::
:tags: bug, environment
The check for matched rowcount when the alembic_version table is updated or
deleted from is now conditional based on whether or not the dialect
supports the concept of "rowcount" for UPDATE or DELETE rows matched. Some
third party dialects do not support this concept. Pull request courtesy Ke
Zhu.
.. change::
:tags: bug, operations
:tickets: 655
Fixed long-standing bug where an inline column CHECK constraint would not
be rendered within an "ADD COLUMN" operation. The DDL compiler is now
consulted for inline constraints within the :meth:`.Operations.add_column`
method as is done for regular CREATE TABLE operations.
.. changelog::
:version: 1.4.0
:released: February 4, 2020
.. change::
:tags: change
The internal inspection routines no longer use SQLAlchemy's
``Inspector.from_engine()`` method, which is expected to be deprecated in
1.4. The ``inspect()`` function is now used.
.. change::
:tags: bug, autogenerate
:tickets: 647
Adjusted the unique constraint comparison logic in a similar manner as that
of :ticket:`421` did for indexes in order to take into account SQLAlchemy's
own truncation of long constraint names when a naming convention is in use.
Without this step, a name that is truncated by SQLAlchemy based on a unique
constraint naming convention or hardcoded name will not compare properly.
.. change::
:tags: feature, batch
:tickets: 640
Added new parameters :paramref:`.BatchOperations.add_column.insert_before`,
:paramref:`.BatchOperations.add_column.insert_after` which provide for
establishing the specific position in which a new column should be placed.
Also added :paramref:`.Operations.batch_alter_table.partial_reordering`
which allows the complete set of columns to be reordered when the new table
is created. Both operations apply only to when batch mode is recreating
the whole table using ``recreate="always"``. Thanks to Marcin Szymanski
for assistance with the implementation.
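
A short sketch of the new parameters; table and column names are
hypothetical::

    import sqlalchemy as sa
    from alembic import op

    with op.batch_alter_table("user", recreate="always") as batch_op:
        batch_op.add_column(
            sa.Column("middle_name", sa.String(50)),
            insert_after="first_name",
        )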
.. change::
:tags: usecase, environment
:tickets: 648
Moved the use of the ``__file__`` attribute at the base of the Alembic
package into the one place that it is specifically needed, which is when
the config attempts to locate the template directory. This helps to allow
Alembic to be fully importable in environments that are using Python
memory-only import schemes. Pull request courtesy layday.
.. change::
:tags: bug, autogenerate
:tickets: 605
A major rework of the "type comparison" logic is in place which changes the
entire approach by which column datatypes are compared. Types are now
compared based on the DDL string generated by the metadata type vs. the
datatype reflected from the database. This means we compare types based on
what would actually render and additionally if elements of the types change
like string length, those changes are detected as well. False positives
like those generated between SQLAlchemy Boolean and MySQL TINYINT should
also be resolved. Thanks very much to Paul Becotte for lots of hard work
and patience on this one.
.. seealso::
:ref:`autogenerate_detects` - updated comments on type comparison
.. changelog::
:version: 1.3.3
:released: January 22, 2020
.. change::
:tags: bug, postgresql
:tickets: 637
Fixed issue where COMMENT directives for PostgreSQL failed to correctly
include an explicit schema name, as well as correct quoting rules for
schema, table, and column names. Pull request courtesy Matthew Sills.
.. change::
:tags: usecase, operations
:tickets: 624
Added support for rendering of "computed" elements on :class:`.Column`
objects, supported in SQLAlchemy via the new :class:`.Computed` element
introduced in version 1.3.11. Pull request courtesy Federico Caselli.
Note that there is currently no support for ALTER COLUMN to add, remove, or
modify the "GENERATED ALWAYS AS" element from a column; at least for
PostgreSQL, it does not seem to be supported by the database. Additionally,
SQLAlchemy does not currently reliably reflect the "GENERATED ALWAYS AS"
phrase from an existing column, so there is also no autogenerate support
for addition or removal of the :class:`.Computed` element to or from an
existing column, there is only support for adding new columns that include
the :class:`.Computed` element. In the case that the :class:`.Computed`
element is removed from the :class:`.Column` object in the table metadata,
PostgreSQL and Oracle currently reflect the "GENERATED ALWAYS AS"
expression as the "server default" which will produce an op that tries to
drop the element as a default.
.. changelog::
:version: 1.3.2
:released: December 16, 2019
.. change::
:tags: bug, api, autogenerate
:tickets: 635
Fixed regression introduced by :ticket:`579` where server default rendering
functions began to require a dialect implementation, however the
:func:`.render_python_code` convenience function did not include one, thus
causing the function to fail when used in a server default context. The
function now accepts a migration context argument and also creates one
against the default dialect if one is not provided.
.. changelog::
:version: 1.3.1
:released: November 13, 2019
.. change::
:tags: bug, mssql
:tickets: 621
Fixed bug in MSSQL dialect where the drop constraint execution steps used
to remove server default or implicit foreign key constraint failed to take
into account the schema name of the target table.
.. changelog::
:version: 1.3.0
:released: October 31, 2019
.. change::
:tags: feature, command
:tickets: 608
Added support for ALEMBIC_CONFIG environment variable,
refers to the location of the alembic configuration script
in lieu of using the -c command line option.
.. change::
:tags: bug, autogenerate
:tickets: 131
Fixed bug in new Variant autogenerate where the order of the arguments to
Variant were mistakenly reversed.
.. change::
:tags: change, compatibility
Some internal modifications have been made to how the names of indexes and
unique constraints work to make use of new functions added in SQLAlchemy
1.4, so that SQLAlchemy has more flexibility over how naming conventions
may be applied to these objects.
.. changelog::
:version: 1.2.1
:released: September 24, 2019
.. change::
:tags: bug, command
:tickets: 601
Reverted the name change of the "revisions" argument to
:func:`.command.stamp` to "revision" as apparently applications are
calling upon this argument as a keyword name. Pull request courtesy
Thomas Bechtold. Special translations are also added to the command
line interface so that it is still known as "revisions" in the CLI.
.. change::
:tags: bug, tests
:tickets: 592
Removed the "test requirements" from "setup.py test", as this command now
only emits a removal error in any case and these requirements are unused.
.. changelog::
:version: 1.2.0
:released: September 20, 2019
.. change::
:tags: feature, command
:tickets: 473
Added new ``--purge`` flag to the ``alembic stamp`` command, which will
unconditionally erase the version table before stamping anything. This is
useful for development where non-existent version identifiers might be left
within the table. Additionally, ``alembic.stamp`` now supports a list of
revision identifiers, which are intended to allow setting up multiple heads
at once. Overall handling of version identifiers within the
``alembic.stamp`` command has been improved with many new tests and
use cases added.
.. change::
:tags: bug, autogenerate
:tickets: 550
Improved the Python rendering of a series of migration operations such that
a single "pass" is rendered for a :class:`.UpgradeOps` or
:class:`.DowngradeOps` based on if no lines of Python code actually
rendered under the operation, rather than whether or not sub-directives
exist. Removed extra "pass" lines that would generate from the
:class:`.ModifyTableOps` directive so that these aren't duplicated under
operation rewriting scenarios.
.. change::
:tags: feature, runtime
:tickets: 123
Added new feature :meth:`.MigrationContext.autocommit_block`, a special
directive which will provide for a non-transactional block inside of a
migration script. The feature requires that the database driver
(e.g. DBAPI) supports the AUTOCOMMIT isolation mode. The directive
also necessarily needs to COMMIT the existing transaction in progress
in order to enter autocommit mode.
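
A hedged sketch of the directive inside a migration script; the index shown
is hypothetical::

    from alembic import op

    def upgrade():
        with op.get_context().autocommit_block():
            # runs outside of any transaction block
            op.execute(
                "CREATE INDEX CONCURRENTLY ix_account_email "
                "ON account (email)"
            )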
.. seealso::
:meth:`.MigrationContext.autocommit_block`
.. change::
:tags: change, py3k
Python 3.4 support is dropped, as upstream tooling (pip, mysqlclient,
etc.) is already dropping support for Python 3.4, which itself is no
longer maintained.
.. change::
:tags: usecase, autogenerate
:tickets: 518
Added autogenerate support for :class:`.Column` objects that have
dialect-specific ``**kwargs``, support first added in SQLAlchemy 1.3.
This includes SQLite "on conflict" as well as options used by some
third party dialects.
.. change::
:tags: usecase, autogenerate
:tickets: 131
Added rendering for SQLAlchemy ``Variant`` datatypes, which render as the
base type plus one or more ``.with_variant()`` method calls.
.. change::
:tags: usecase, commands
:tickets: 534
Made the command interface revision lookup behavior more strict in that an
Alembic revision number is only resolved based on partial match rules if
it has at least four characters, to prevent simple typographical issues
from inadvertently running migrations.
.. change::
:tags: feature, commands
:tickets: 307
Added "post write hooks" to revision generation. These allow custom logic
to run after a revision Python script is generated, typically for the
purpose of running code formatters such as "Black" or "autopep8", but may
be used for any arbitrary post-render hook as well, including custom Python
functions or scripts. The hooks are enabled by providing a
``[post_write_hooks]`` section in the alembic.ini file. A single hook
is provided which runs an arbitrary Python executable on the newly
generated revision script, which can be configured to run code formatters
such as Black; full examples are included in the documentation.
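
For example, a minimal configuration sketch running Black on newly
generated files, assuming Black is installed as a console script::

    [post_write_hooks]
    hooks = black
    black.type = console_scripts
    black.entrypoint = black
    black.options = -l 79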
.. seealso::
:ref:`post_write_hooks`
.. change::
:tags: feature, environment
:tickets: 463
Added new flag ``--package`` to ``alembic init``. For environments where
the Alembic migration files and such are within the package tree and
importable as modules, this flag can be specified which will add the
additional ``__init__.py`` files in the version location and the
environment location.
.. change::
:tags: bug, autogenerate
:tickets: 549
Fixed bug where rendering of comment text for table-level comments within
:meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` was not properly quote-escaped
within rendered Python code for autogenerate.
.. change::
:tags: bug, autogenerate
:tickets: 505
Modified the logic of the :class:`.Rewriter` object such that it keeps a
memoization of which directives it has processed, so that it can ensure it
processes a particular directive only once, and additionally fixed
:class:`.Rewriter` so that it functions correctly for multiple-pass
autogenerate schemes, such as the one illustrated in the "multidb"
template. By tracking which directives have been processed, a
multiple-pass scheme which calls upon the :class:`.Rewriter` multiple times
for the same structure as elements are added can work without running
duplicate operations on the same elements more than once.
.. changelog::
:version: 1.1.0
:released: August 26, 2019
.. change::
:tags: change
Alembic 1.1 bumps the minimum version of SQLAlchemy to 1.1. As was the
case before, Python requirements remain at Python 2.7, or in the 3.x series
Python 3.4.
.. change::
:tags: change, internals
The test suite for Alembic now makes use of SQLAlchemy's testing framework
directly. Previously, Alembic had its own version of this framework that
was mostly copied from that of SQLAlchemy to enable testing with older
SQLAlchemy versions. The majority of this code is now removed so that both
projects can leverage improvements from a common testing framework.
.. change::
:tags: bug, commands
:tickets: 562
Fixed bug where the double-percent logic applied to some dialects such as
psycopg2 would be rendered in ``--sql`` mode, by allowing dialect options
to be passed through to the dialect used to generate SQL and then providing
``paramstyle="named"`` so that percent signs need not be doubled. For
users having this issue, existing env.py scripts need to add
``dialect_opts={"paramstyle": "named"}`` to their offline
context.configure(). See the ``alembic/templates/generic/env.py`` template
for an example.
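
A sketch of the adjusted offline block, assuming ``url`` and
``target_metadata`` are already set up in ``env.py``::

    from alembic import context

    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )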
.. change::
:tags: bug, py3k
Fixed use of the deprecated "imp" module, which is used to detect pep3147
availability as well as to locate .pyc files, which started emitting
deprecation warnings during the test suite. The warnings were not being
emitted earlier during the test suite, the change is possibly due to
changes in py.test itself but this is not clear. The check for pep3147 is
set to True for any Python version 3.5 or greater now and importlib is used
when available. Note that some dependencies such as distutils may still be
emitting this warning. Tests are adjusted to accommodate for dependencies
that emit the warning as well.
.. change::
:tags: bug, mysql
:tickets: 594
Fixed issue where emitting a change of column name for MySQL did not
preserve the column comment, even if it were specified as existing_comment.
.. change::
:tags: bug, setup
:tickets: 592
Removed the "python setup.py test" feature in favor of a straight run of
"tox". Per Pypa / pytest developers, "setup.py" commands are in general
headed towards deprecation in favor of tox. The tox.ini script has been
updated such that running "tox" with no arguments will perform a single run
of the test suite against the default installed Python interpreter.
.. seealso::
https://github.com/pypa/setuptools/issues/1684
https://github.com/pytest-dev/pytest/issues/5534
.. change::
:tags: usecase, commands
:tickets: 571
The "alembic init" command will now proceed if the target directory exists
as long as it's still empty. Previously, it would not proceed if the
directory existed. The new behavior is modeled from what git does, to
accommodate for container or other deployments where an Alembic target
directory may need to be already mounted instead of being created with
alembic init. Pull request courtesy Aviskar KC.
.. changelog::
:version: 1.0.11
:released: June 25, 2019
.. change::
:tags: bug, sqlite, autogenerate, batch
:tickets: 579
SQLite server default reflection will ensure parenthesis are surrounding a
column default expression that is detected as being a non-constant
expression, such as a ``datetime()`` default, to accommodate for the
requirement that SQL expressions have to be parenthesized when being sent
as DDL. Parenthesis are not added to constant expressions to allow for
maximum cross-compatibility with other dialects and existing test suites
(such as Alembic's), which necessarily entails scanning the expression to
check for constant numeric and string values. The logic is added to the
two "reflection->DDL round trip" paths which are currently autogenerate and
batch migration. Within autogenerate, the logic is on the rendering side,
whereas in batch the logic is installed as a column reflection hook.
.. change::
:tags: bug, sqlite, autogenerate
:tickets: 579
Improved SQLite server default comparison to accommodate for a ``text()``
construct that added parenthesis directly vs. a construct that relied
upon the SQLAlchemy SQLite dialect to render the parenthesis, as well
as improved support for various forms of constant expressions such as
values that are quoted vs. non-quoted.
.. change::
:tags: bug, autogenerate
Fixed bug where the "literal_binds" flag was not being set when
autogenerate would create a server default value, meaning server default
comparisons would fail for functions that contained literal values.
.. change::
:tags: bug, mysql
:tickets: 554
Added support for MySQL "DROP CHECK", which is added as of MySQL 8.0.16,
separate from MariaDB's "DROP CONSTRAINT" for CHECK constraints. The MySQL
Alembic implementation now checks for "MariaDB" in server_version_info to
decide which one to use.
.. change::
:tags: bug, mysql, operations
:tickets: 564
Fixed issue where MySQL databases need to use CHANGE COLUMN when altering a
server default of CURRENT_TIMESTAMP, NOW() and probably other functions
that are only usable with DATETIME/TIMESTAMP columns. While MariaDB
supports both CHANGE and ALTER COLUMN in this case, MySQL databases only
support CHANGE. So the new logic is that if the server default change is
against a DateTime-oriented column, the CHANGE format is used
unconditionally, as in the vast majority of cases the server default is to
be CURRENT_TIMESTAMP which may also be potentially bundled with an "ON
UPDATE CURRENT_TIMESTAMP" directive, which SQLAlchemy does not currently
support as a distinct field. The fix additionally improves the server
default comparison logic when the "ON UPDATE" clause is present and
there are parenthesis to be adjusted for as is the case on some MariaDB
versions.
.. change::
:tags: bug, environment
Warnings emitted by Alembic now include a default stack level of 2, and in
some cases it's set to 3, in order to help warnings indicate more closely
where they are originating from. Pull request courtesy Ash Berlin-Taylor.
.. change::
:tags: bug, py3k
:tickets: 563
Replaced the Python compatibility routines for ``getargspec()`` with a fully
vendored version based on ``getfullargspec()`` from Python 3.3.
Originally, Python was emitting deprecation warnings for this function in
Python 3.8 alphas. While this change was reverted, it was observed that
Python 3 implementations for ``getfullargspec()`` are an order of magnitude
slower as of the 3.4 series where it was rewritten against ``Signature``.
While Python plans to improve upon this situation, SQLAlchemy projects for
now are using a simple replacement to avoid any future issues.
.. changelog::
:version: 1.0.10
:released: April 28, 2019
.. change::
:tags: bug, commands
:tickets: 552
Fixed bug introduced in release 0.9.0 where the helptext for commands
inadvertently got expanded to include function docstrings from the
command.py module. The logic has been adjusted to only refer to the first
line(s) preceding the first line break within each docstring, as was the
original intent.
.. change::
:tags: bug, operations, mysql
:tickets: 551
Added an assertion in :meth:`.RevisionMap.get_revisions` and other methods
which ensures revision numbers are passed as strings or collections of
strings. Driver issues particularly on MySQL may inadvertently be passing
bytes here which leads to failures later on.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 553
Fixed bug when using the
:paramref:`.EnvironmentContext.configure.compare_server_default` flag set
to ``True`` where a server default that is introduced in the table metadata
on an ``Integer`` column, where there is no existing server default in the
database, would raise a ``TypeError``.
.. changelog::
:version: 1.0.9
:released: April 15, 2019
.. change::
:tags: bug, operations
:tickets: 548
Simplified the internal scheme used to generate the ``alembic.op`` namespace
to no longer attempt to generate full method signatures (e.g. rather than
generic ``*args, **kw``) as this was not working in most cases anyway, while
in rare circumstances it would in fact sporadically have access to the real
argument names and then fail when generating the function due to missing
symbols in the argument signature.
.. changelog::
:version: 1.0.8
:released: March 4, 2019
.. change::
:tags: bug, operations
:tickets: 528
Removed use of deprecated ``force`` parameter for SQLAlchemy quoting
functions as this parameter will be removed in a future release.
Pull request courtesy Parth Shandilya (ParthS007).
.. change::
:tags: bug, autogenerate, postgresql, py3k
:tickets: 541
Fixed issue where server default comparison on the PostgreSQL dialect would
fail for a blank string on Python 3.7 only, due to a change in regular
expression behavior in Python 3.7.
.. changelog::
:version: 1.0.7
:released: January 25, 2019
.. change::
:tags: bug, autogenerate
:tickets: 529
Fixed issue in new comment support where autogenerated Python code
for comments wasn't using ``repr()`` thus causing issues with
quoting. Pull request courtesy Damien Garaud.
.. changelog::
:version: 1.0.6
:released: January 13, 2019
.. change::
:tags: feature, operations
:tickets: 422
Added Table and Column level comments for supported backends.
New methods :meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` are added. New arguments
:paramref:`.Operations.alter_column.comment` and
:paramref:`.Operations.alter_column.existing_comment` are added to
:meth:`.Operations.alter_column`. Autogenerate support is also added
to ensure comment add/drops from tables and columns are generated as well
as that :meth:`.Operations.create_table`, :meth:`.Operations.add_column`
both include the comment field from the source :class:`.Table`
or :class:`.Column` object.
.. changelog::
:version: 1.0.5
:released: November 27, 2018
.. change::
:tags: bug, py3k
:tickets: 507
Resolved remaining Python 3 deprecation warnings, covering
the use of inspect.formatargspec() with a vendored version
copied from the Python standard library, importing
collections.abc above Python 3.3 when testing against abstract
base classes, fixed one occurrence of log.warn(), as well as a few
invalid escape sequences.
.. changelog::
:version: 1.0.4
:released: November 27, 2018
.. change::
:tags: change
Code hosting has been moved to GitHub, at
https://github.com/sqlalchemy/alembic. Additionally, the
main Alembic website documentation URL is now
https://alembic.sqlalchemy.org.
.. changelog::
:version: 1.0.3
:released: November 14, 2018
.. change::
:tags: bug, mssql
:tickets: 516
Fixed regression caused by :ticket:`513`, where the logic to consume
``mssql_include`` was not correctly interpreting the case where the flag
was not present, breaking the ``op.create_index`` directive for SQL Server
as a whole.
.. changelog::
:version: 1.0.2
:released: October 31, 2018
.. change::
:tags: bug, autogenerate
:tickets: 515
The ``system=True`` flag on :class:`.Column`, used primarily in conjunction
with the Postgresql "xmin" column, now renders within the autogenerate
render process, allowing the column to be excluded from DDL. Additionally,
adding a system=True column to a model will produce no autogenerate diff as
this column is implicitly present in the database.
.. change::
:tags: bug, mssql
:tickets: 513
Fixed issue where usage of the SQL Server ``mssql_include`` option within a
:meth:`.Operations.create_index` would raise a KeyError, as the additional
column(s) need to be added to the table object used by the construct
internally.
.. changelog::
:version: 1.0.1
:released: October 17, 2018
.. change::
:tags: bug, commands
:tickets: 497
Fixed an issue where revision descriptions were essentially
being formatted twice. If a revision description contained
characters like %, writing output to stdout would fail because
the call to config.print_stdout attempted to format any
additional args passed to the function.
String formatting is now only applied if args are provided
along with the output text.
.. change::
:tags: bug, autogenerate
:tickets: 512
Fixed issue where removed method ``union_update()`` was used when a
customized :class:`.MigrationScript` instance included entries in the
``.imports`` data member, raising an AttributeError.
.. changelog::
:version: 1.0.0
:released: July 13, 2018
.. change::
:tags: feature, general
:tickets: 491
For Alembic 1.0, Python 2.6 / 3.3 support is being dropped, allowing a
fixed setup.py to be built as well as universal wheels. Pull request
courtesy Hugo.
.. change::
:tags: feature, general
With the 1.0 release, Alembic's minimum SQLAlchemy support version
moves to 0.9.0, previously 0.7.9.
.. change::
:tags: bug, batch
:tickets: 502
Fixed issue in batch where dropping a primary key column, then adding it
back under the same name but without the primary_key flag, would not remove
it from the existing PrimaryKeyConstraint. If a new PrimaryKeyConstraint
is added, it is used as-is, as was the case before.
.. changelog::
:version: 0.9.10
:released: June 29, 2018
.. change::
:tags: bug, autogenerate
The "op.drop_constraint()" directive will now render using ``repr()`` for
the schema name, in the same way that "schema" renders for all the other op
directives. Pull request courtesy Denis Kataev.
.. change::
:tags: bug, autogenerate
:tickets: 494
Added basic capabilities for external dialects to support rendering of
"nested" types, like arrays, in a manner similar to that of the Postgresql
dialect.
.. change::
:tags: bug, autogenerate
Fixed issue where "autoincrement=True" would not render for a column that
specified it, since as of SQLAlchemy 1.1 this is no longer the default
value for "autoincrement". Note the behavior only takes effect against the
SQLAlchemy 1.1.0 and higher; for pre-1.1 SQLAlchemy, "autoincrement=True"
does not render as was the case before. Pull request courtesy Elad Almos.
.. changelog::
:version: 0.9.9
:released: March 22, 2018
.. change::
:tags: feature, commands
:tickets: 481
Added new flag ``--indicate-current`` to the ``alembic history`` command.
When listing versions, it will include the token "(current)" to indicate
the given version is a current head in the target database. Pull request
courtesy Kazutaka Mise.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 455
The fix for :ticket:`455` in version 0.9.6 involving MySQL server default
comparison was entirely non functional, as the test itself was also broken
and didn't reveal that it wasn't working. The regular expression to compare
server default values like CURRENT_TIMESTAMP to current_timestamp() is
repaired.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 483
Fixed bug where MySQL server default comparisons were basically not working
at all due to incorrect regexp added in :ticket:`455`. Also accommodates
for MariaDB 10.2 quoting differences in reporting integer based server
defaults.
.. change::
:tags: bug, operations, mysql
:tickets: 487
Fixed bug in ``op.drop_constraint()`` for MySQL where
quoting rules would not be applied to the constraint name.
.. changelog::
:version: 0.9.8
:released: February 16, 2018
.. change::
:tags: bug, runtime
:tickets: 482
Fixed bug where the :meth:`.Script.as_revision_number` method
did not accommodate for the 'heads' identifier, which in turn
caused the :meth:`.EnvironmentContext.get_head_revisions`
and :meth:`.EnvironmentContext.get_revision_argument` methods
to be unusable when multiple heads were present.
The :meth:`.EnvironmentContext.get_head_revisions` method returns
a tuple in all cases as documented.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 478
Fixed bug where autogenerate of :class:`.ExcludeConstraint`
would render a raw quoted name for a Column that has case-sensitive
characters, which when invoked as an inline member of the Table
would produce a stack trace that the quoted name is not found.
An incoming Column object is now rendered as ``sa.column('name')``.
.. change::
:tags: bug, autogenerate
:tickets: 468
Fixed bug where the indexes would not be included in a
migration that was dropping the owning table. The fix
now will also emit DROP INDEX for the indexes ahead of time,
but more importantly will include CREATE INDEX in the
downgrade migration.
.. change::
:tags: bug, postgresql
:tickets: 480
Fixed the autogenerate of the module prefix
when rendering the text_type parameter of
postgresql.HSTORE, in much the same way that
we do for ARRAY's type and JSON's text_type.
.. change::
:tags: bug, mysql
:tickets: 479
Added support for DROP CONSTRAINT to the MySQL Alembic
dialect to support MariaDB 10.2 which now has real
CHECK constraints. Note this change does **not**
add autogenerate support, only support for op.drop_constraint()
to work.
.. changelog::
:version: 0.9.7
:released: January 16, 2018
.. change::
:tags: bug, autogenerate
:tickets: 472
Fixed regression caused by :ticket:`421` which would
cause case-sensitive quoting rules to interfere with the
comparison logic for index names, thus causing indexes to show
as added for indexes that have case-sensitive names. Works with
SQLAlchemy 0.9 and later series.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 461
Fixed bug where autogenerate would produce a DROP statement for the index
implicitly created by a Postgresql EXCLUDE constraint, rather than skipping
it as is the case for indexes implicitly generated by unique constraints.
Makes use of SQLAlchemy 1.0.x's improved "duplicates index" metadata and
requires at least SQLAlchemy version 1.0.x to function correctly.
.. changelog::
:version: 0.9.6
:released: October 13, 2017
.. change::
:tags: bug, commands
:tickets: 458
Fixed a few Python3.6 deprecation warnings by replacing ``StopIteration``
with ``return``, as well as using ``getfullargspec()`` instead of
``getargspec()`` under Python 3.
.. change::
:tags: bug, commands
:tickets: 441
As an addition to the :ticket:`441` fix in 0.9.5, the ``+`` sign is now
also filtered out of migration names, as it likewise breaks the relative
migrations feature.
.. change::
:tags: bug, autogenerate
:tickets: 442
Fixed bug expanding upon the fix for
:ticket:`85` which adds the correct module import to the
"inner" type for an ``ARRAY`` type, the fix now accommodates for the
generic ``sqlalchemy.types.ARRAY`` type added in SQLAlchemy 1.1,
rendering the inner type correctly regardless of whether or not the
Postgresql dialect is present.
.. change::
:tags: bug, mysql
:tickets: 455
Fixed bug where server default comparison of CURRENT_TIMESTAMP would fail
on MariaDB 10.2 due to a change in how the function is
represented by the database during reflection.
.. change::
:tags: bug, autogenerate
Fixed bug where comparison of ``Numeric`` types would produce
a difference if the Python-side ``Numeric`` inadvertently specified
a non-None "scale" with a "precision" of None, even though this ``Numeric``
type will pass over the "scale" argument when rendering. Pull request
courtesy Ivan Mmelnychuk.
.. change::
:tags: feature, commands
:tickets: 447
The ``alembic history`` command will now make use of the revision
environment ``env.py`` unconditionally if the ``revision_environment``
configuration flag is set to True. Previously, the environment would
only be invoked if the history specification were against a database-stored
revision token.
.. change::
:tags: bug, batch
:tickets: 457
The name of the temporary table in batch mode is now generated
off of the original table name itself, to avoid conflicts for the
unusual case of multiple batch operations running against the same
database schema at the same time.
.. change::
:tags: bug, autogenerate
:tickets: 456
A :class:`.ForeignKeyConstraint` can now render correctly if the
``link_to_name`` flag is set, as it will not attempt to resolve the name
from a "key" in this case. Additionally, the constraint will render
as-is even if the remote column name isn't present on the referenced
remote table.
.. change::
:tags: bug, runtime, py3k
:tickets: 449
Reworked "sourceless" system to be fully capable of handling any
combination of: Python2/3x, pep3149 or not, PYTHONOPTIMIZE or not,
for locating and loading both env.py files as well as versioning files.
This includes: locating files inside of ``__pycache__`` as well as listing
out version files that might be only in ``versions/__pycache__``, deduplicating
version files that may be in ``versions/__pycache__`` and ``versions/``
at the same time, correctly looking for .pyc or .pyo files based on
if pep488 is present or not. The latest Python3x deprecation warnings
involving importlib are also corrected.
.. changelog::
:version: 0.9.5
:released: August 9, 2017
.. change::
:tags: bug, commands
:tickets: 441
A :class:`.CommandError` is raised if the "--rev-id" passed to the
:func:`.revision` command contains dashes or at-signs, as this interferes
with the command notation used to locate revisions.
.. change::
:tags: bug, postgresql
:tickets: 424
Added support for the dialect-specific keyword arguments
to :meth:`.Operations.drop_index`. This includes support for
``postgresql_concurrently`` and others.
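
For example (index and table names hypothetical)::

    from alembic import op

    op.drop_index(
        "ix_account_email",
        table_name="account",
        postgresql_concurrently=True,
    )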
.. change::
:tags: bug, commands
Fixed bug in timezone feature introduced in
:ticket:`425` when the creation
date in a revision file is calculated, to
accommodate for timezone names that contain
mixed-case characters in their name as opposed
to all uppercase. Pull request courtesy Nils
Philippsen.
.. changelog::
:version: 0.9.4
:released: July 31, 2017
.. change::
:tags: bug, runtime
Added an additional attribute to the new
:paramref:`.EnvironmentContext.configure.on_version_apply` API,
:attr:`.MigrationInfo.up_revision_ids`, to accommodate for the uncommon
case of the ``alembic stamp`` command being used to move from multiple
branches down to a common branchpoint; there will be multiple
"up" revisions in this one case.
.. changelog::
:version: 0.9.3
:released: July 6, 2017
.. change::
:tags: feature, runtime
Added a new callback hook
:paramref:`.EnvironmentContext.configure.on_version_apply`,
which allows user-defined code to be invoked each time an individual
upgrade, downgrade, or stamp operation proceeds against a database.
Pull request courtesy John Passaro.
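
A minimal ``env.py`` sketch; the logging shown is hypothetical::

    from alembic import context

    def on_version_apply(ctx, step, heads, run_args):
        # invoked once per upgrade, downgrade, or stamp step
        print("applied step: %s" % (step,))

    context.configure(
        connection=connection,
        on_version_apply=on_version_apply,
    )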
.. change:: 433
:tags: bug, autogenerate
:tickets: 433
Fixed bug where autogen comparison of a :class:`.Variant` datatype
would not compare to the dialect level type for the "default"
implementation of the :class:`.Variant`, returning the type as changed
between database and table metadata.
.. change:: 431
:tags: bug, tests
:tickets: 431
Fixed unit tests to run correctly under the SQLAlchemy 1.0.x series
prior to version 1.0.10 where a particular bug involving Postgresql
exclude constraints was fixed.
.. changelog::
:version: 0.9.2
:released: May 18, 2017
.. change:: 429
:tags: bug, mssql
:tickets: 429
Repaired :meth:`.Operations.rename_table` for SQL Server when the
target table is in a remote schema; the schema name is now omitted from
the "new name" argument.
.. change:: 425
:tags: feature, commands
:tickets: 425
Added a new configuration option ``timezone``, a string timezone name
that will be applied to the create date timestamp rendered
inside the revision file as made available to the ``file_template`` used
to generate the revision filename. Note this change adds the
``python-dateutil`` package as a dependency.
.. change:: 421
:tags: bug, autogenerate
:tickets: 421
The autogenerate compare scheme now takes into account the name truncation
rules applied by SQLAlchemy's DDL compiler to the names of the
:class:`.Index` object, when these names are dynamically truncated
due to a too-long identifier name. As the identifier truncation is
deterministic, applying the same rule to the metadata name allows
correct comparison to the database-derived name.
.. change:: 419
:tags: bug, environment
:tickets: 419
A warning is emitted when an object that's not a
:class:`~sqlalchemy.engine.Connection` is passed to
:meth:`.EnvironmentContext.configure`. For the case of a
:class:`~sqlalchemy.engine.Engine` passed, the check for "in transaction"
introduced in version 0.9.0 has been relaxed to work in the case of an
attribute error, as some users appear to be passing an
:class:`~sqlalchemy.engine.Engine` and not a
:class:`~sqlalchemy.engine.Connection`.
.. changelog::
:version: 0.9.1
:released: March 1, 2017
.. change:: 417
:tags: bug, commands
:tickets: 417, 369
An adjustment to the bug fix for :ticket:`369` to accommodate for
env.py scripts that use an enclosing transaction distinct from the
one that the context provides, so that the check for "didn't commit
the transaction" doesn't trigger in this scenario.
.. changelog::
:version: 0.9.0
:released: February 28, 2017
.. change:: 38
:tags: feature, autogenerate
:tickets: 38
The :paramref:`.EnvironmentContext.configure.target_metadata` parameter
may now be optionally specified as a sequence of :class:`.MetaData`
objects instead of a single :class:`.MetaData` object. The
autogenerate process will process the sequence of :class:`.MetaData`
objects in order.
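
For illustration, assuming two hypothetical application modules each
exporting a :class:`.MetaData`::

    from alembic import context
    from myapp.accounts.models import metadata as accounts_metadata
    from myapp.billing.models import metadata as billing_metadata

    context.configure(
        connection=connection,
        target_metadata=[accounts_metadata, billing_metadata],
    )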
.. change:: 369
:tags: bug, commands
:tickets: 369
A :class:`.CommandError` is now raised when a migration file opens
a database transaction and does not close/commit/rollback, when
the backend database or environment options also specify transactional_ddl
is False. When transactional_ddl is not in use, Alembic doesn't
close any transaction so a transaction opened by a migration file
will cause the following migrations to fail to apply.
.. change:: 413
:tags: bug, autogenerate, mysql
:tickets: 413
The ``autoincrement=True`` flag is now rendered within the
:meth:`.Operations.alter_column` operation if the source column indicates
that this flag should be set to True. The behavior is sensitive to
the SQLAlchemy version in place, as the "auto" default option is new
in SQLAlchemy 1.1. When the source column indicates autoincrement
as True or "auto", the flag will render as True if the original column
contextually indicates that it should have "autoincrement" keywords,
and when the source column explicitly sets it to False, this is also
rendered. The behavior is intended to preserve the AUTO_INCREMENT flag
on MySQL as the column is fully recreated on this backend. Note that this
flag does **not** support alteration of a column's "autoincrement" status,
as this is not portable across backends.
.. change:: 411
:tags: bug, postgresql
:tickets: 411
Fixed bug where Postgresql JSON/JSONB types rendered on SQLAlchemy
1.1 would render the "astext_type" argument which defaults to
the ``Text()`` type without the module prefix, similarly to the
issue with ARRAY fixed in :ticket:`85`.
.. change:: 85
:tags: bug, postgresql
:tickets: 85
Fixed bug where Postgresql ARRAY type would not render the import prefix
for the inner type; additionally, user-defined renderers take place
for the inner type as well as the outer type. Pull request courtesy
Paul Brackin.
.. change:: process_revision_directives_command
:tags: feature, autogenerate
Added a keyword argument ``process_revision_directives`` to the
:func:`.command.revision` API call. This function acts in the
same role as the environment-level
:paramref:`.EnvironmentContext.configure.process_revision_directives`,
and allows API use of the
command to drop in an ad-hoc directive process function. This
function can be used among other things to place a complete
:class:`.MigrationScript` structure in place.
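
A hedged API-use sketch; the config path, message, and rewrite logic are
hypothetical::

    from alembic import command
    from alembic.config import Config

    def process_revision_directives(context, revision, directives):
        script = directives[0]  # the MigrationScript structure
        script.message = "reviewed: " + (script.message or "")

    cfg = Config("alembic.ini")
    command.revision(
        cfg,
        message="add account table",
        autogenerate=True,
        process_revision_directives=process_revision_directives,
    )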
.. change:: 412
:tags: feature, postgresql
:tickets: 412
Added support for Postgresql EXCLUDE constraints, including the
operation directive :meth:`.Operations.create_exclude_constraint`
as well as autogenerate render support for the ``ExcludeConstraint``
object as present in a ``Table``. Autogenerate detection for an EXCLUDE
constraint added or removed to/from an existing table is **not**
implemented as the SQLAlchemy Postgresql dialect does not yet support
reflection of EXCLUDE constraints.
Additionally, unknown constraint types now warn when
encountered within an autogenerate action rather than raise.
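
A brief directive sketch; constraint, table, and column names are
hypothetical::

    from alembic import op

    op.create_exclude_constraint(
        "user_excl",
        "user",
        ("period", "&&"),
        where="group != 'some group'",
    )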
.. change:: fk_schema_compare
:tags: bug, operations
Fixed bug in :func:`.ops.create_foreign_key` where the internal table
representation would not be created properly if the foreign key referred
to a table in a different schema of the same name. Pull request
courtesy Konstantin Lebedev.
.. changelog::
:version: 0.8.10
:released: January 17, 2017
.. change:: 406
:tags: bug, versioning
:tickets: 406
The alembic_version table, when initially created, now establishes a
primary key constraint on the "version_num" column, to suit database
engines that don't support tables without primary keys. This behavior
can be controlled using the parameter
:paramref:`.EnvironmentContext.configure.version_table_pk`. Note that
this change only applies to the initial creation of the alembic_version
table; it does not impact any existing alembic_version table already
present.
.. change:: 402
:tags: bug, batch
:tickets: 402
Fixed bug where doing ``batch_op.drop_constraint()`` against the
primary key constraint would fail to remove the "primary_key" flag
from the column, resulting in the constraint being recreated.
.. change:: update_uq_dedupe
:tags: bug, autogenerate, oracle
Adjusted the logic originally added for :ticket:`276` that detects MySQL
unique constraints which are actually unique indexes to be generalized
for any dialect that has this behavior, for SQLAlchemy version 1.0 and
greater. This is to allow for upcoming SQLAlchemy support for unique
constraint reflection for Oracle, which also has no dedicated concept of
"unique constraint" and instead establishes a unique index.
.. change:: 356
:tags: bug, versioning
:tickets: 356
Added a file ignore for Python files of the form ``.#<name>.py``,
which are generated by the Emacs editor. Pull request courtesy
Markus Mattes.
.. changelog::
:version: 0.8.9
:released: November 28, 2016
.. change:: 393
:tags: bug, autogenerate
:tickets: 393
Adjustment to the "please adjust!" comment in the script.py.mako
template so that the generated comment starts with a single pound
sign, appeasing flake8.
.. change::
:tags: bug, batch
:tickets: 391
Batch mode will not use CAST() to copy data when ``type_`` is given, provided
the basic type affinity matches that of the existing type. This is to
avoid SQLite's CAST of TIMESTAMP which results in truncation of the
data, in those cases where the user needs to add redundant ``type_`` for
other reasons.
.. change::
:tags: bug, autogenerate
:tickets: 393
Continued pep8 improvements by adding appropriate whitespace in
the base template for generated migrations. Pull request courtesy
Markus Mattes.
.. change::
:tags: bug, revisioning
Added an additional check when reading in revision files to detect
if the same file is being read twice; this can occur if the same directory
or a symlink equivalent is present more than once in version_locations.
A warning is now emitted and the file is skipped. Pull request courtesy
Jiri Kuncar.
.. change::
:tags: bug, autogenerate
:tickets: 395
Fixed bug where usage of a custom TypeDecorator which returns a
per-dialect type via :meth:`.TypeDecorator.load_dialect_impl` that differs
significantly from the default "impl" for the type decorator would fail
to compare correctly during autogenerate.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 392
Fixed bug in Postgresql "functional index skip" behavior where a
functional index that ended in ASC/DESC wouldn't be detected as something
we can't compare in autogenerate, leading to duplicate definitions
in autogenerated files.
.. change::
:tags: bug, versioning
Fixed bug where the "base" specifier, as in "base:head", could not
be used explicitly when ``--sql`` mode was present.
.. changelog::
:version: 0.8.8
:released: September 12, 2016
.. change::
:tags: autogenerate
The imports in the default script.py.mako are now at the top
so that flake8 editors don't complain by default. PR courtesy
Guilherme Mansur.
.. change::
:tags: feature, operations, postgresql
:tickets: 292
Added support for the USING clause to the ALTER COLUMN operation
for Postgresql. Support is via the
:paramref:`.op.alter_column.postgresql_using`
parameter. Pull request courtesy Frazer McLean.
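A minimal sketch; the table, column and cast are illustrative::

    import sqlalchemy as sa
    from alembic import op

    op.alter_column(
        "account",
        "id",
        type_=sa.Integer(),
        postgresql_using="id::integer",  # emitted as USING id::integer
    )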
.. change::
:tags: feature, autogenerate
Autogenerate with type comparison enabled will pick up on the timezone
setting changing between DateTime types. Pull request courtesy
David Szotten.
.. changelog::
:version: 0.8.7
:released: July 26, 2016
.. change::
:tags: bug, versioning
:tickets: 336
Fixed bug where upgrading to the head of a branch which is already
present would fail, only if that head were also the dependency
of a different branch that is also upgraded, as the revision system
would see this as trying to go in the wrong direction. The check
here has been refined to distinguish between same-branch revisions
out of order vs. movement along sibling branches.
.. change::
:tags: bug, versioning
:tickets: 379
Adjusted the version traversal on downgrade
such that we can downgrade to a version that is a dependency for
a version in a different branch, *without* needing to remove that
dependent version as well. Previously, the target version would be
seen as a "merge point" for it's normal up-revision as well as the
dependency. This integrates with the changes for :ticket:`377`
and :ticket:`378` to improve treatment of branches with dependencies
overall.
.. change::
:tags: bug, versioning
:tickets: 377
Fixed bug where a downgrade to a version that is also a dependency
to a different branch would fail, as the system attempted to treat
this as an "unmerge" of a merge point, when in fact it doesn't have
the other side of the merge point available for update.
.. change::
:tags: bug, versioning
:tickets: 378
Fixed bug where the "alembic current" command wouldn't show a revision
as a current head if it were also a dependency of a version in a
different branch that's also applied. Extra logic is added to
extract "implied" versions of different branches from the top-level
versions listed in the alembic_version table.
.. change::
:tags: bug, versioning
Fixed bug where a repr() or str() of a Script object would fail
if the script had multiple dependencies.
.. change::
:tags: bug, autogenerate
Fixed bug in autogen where if the DB connection sends the default
schema as "None", this "None" would be removed from the list of
schemas to check if include_schemas were set. This could possibly
impact using include_schemas with SQLite.
.. change::
:tags: bug, batch
Small adjustment made to the batch handling for reflected CHECK
constraints to accommodate for SQLAlchemy 1.1 now reflecting these.
Batch mode still does not support CHECK constraints from the reflected
table as these can't be easily differentiated from the ones created
by types such as Boolean.
.. changelog::
:version: 0.8.6
:released: April 14, 2016
.. change::
:tags: bug, commands
:tickets: 367
Errors which occur within the Mako render step are now intercepted
and raised as CommandErrors like other failure cases; the Mako
exception itself is written using template-line formatting to
a temporary file which is named in the exception message.
.. change::
:tags: bug, postgresql
:tickets: 365
Added a fix to Postgresql server default comparison which first checks
if the text of the default is identical to the original, before attempting
to actually run the default. This accommodates for default-generation
functions that generate a new value each time such as a uuid function.
.. change::
:tags: bug, batch
:tickets: 361
Fixed bug introduced by the fix for :ticket:`338` in version 0.8.4
where a server default could no longer be dropped in batch mode.
Pull request courtesy Martin Domke.
.. change::
:tags: bug, batch, mssql
Fixed bug where SQL Server arguments for drop_column() would not
be propagated when running under a batch block. Pull request
courtesy Michal Petrucha.
.. changelog::
:version: 0.8.5
:released: March 9, 2016
.. change::
:tags: bug, autogenerate
:tickets: 335
Fixed bug where the columns rendered in a ``PrimaryKeyConstraint``
in autogenerate would inappropriately render the "key" of the
column, not the name. Pull request courtesy Jesse Dhillon.
.. change::
:tags: bug, batch
:tickets: 354
Repaired batch migration support for "schema" types which generate
constraints, in particular the ``Boolean`` datatype which generates
a CHECK constraint. Previously, an alter column operation with this
type would fail to correctly accommodate for the CHECK constraint
on change both from and to this type. In the former case the operation
would fail entirely, in the latter, the CHECK constraint would
not get generated. Both of these issues are repaired.
.. change::
:tags: bug, mysql
:tickets: 355
Changing a schema type such as ``Boolean`` to a non-schema type would
emit a drop constraint operation which emits ``NotImplementedError`` for
the MySQL dialect. This drop constraint operation is now skipped when
the constraint originates from a schema type.
.. changelog::
:version: 0.8.4
:released: December 15, 2015
.. change::
:tags: feature, versioning
A major improvement to the hash id generation function, which for some
reason used an awkward arithmetic formula against uuid4() that produced
values that tended to start with the digits 1-4. Replaced with a
simple substring approach which provides an even distribution. Pull
request courtesy Antti Haapala.
.. change::
:tags: feature, autogenerate
Added an autogenerate renderer for the :class:`.ExecuteSQLOp` operation
object; only renders if given a plain SQL string, otherwise raises
NotImplementedError. Can be of help with custom autogenerate
sequences that includes straight SQL execution. Pull request courtesy
Jacob Magnusson.
.. change::
:tags: bug, batch
:tickets: 345
Batch mode generates a FOREIGN KEY constraint that is self-referential
using the ultimate table name, rather than ``_alembic_batch_temp``.
When the table is renamed from ``_alembic_batch_temp`` back to the
original name, the FK now points to the right name. This
will **not** work if referential integrity is being enforced (eg. SQLite
"PRAGMA FOREIGN_KEYS=ON") since the original table is dropped and
the new table then renamed to that name, however this is now consistent
with how foreign key constraints on **other** tables already operate
with batch mode; these don't support batch mode if referential integrity
is enabled in any case.
.. change::
:tags: bug, autogenerate
:tickets: 341
Added a type-level comparator that distinguishes :class:`.Integer`,
:class:`.BigInteger`, and :class:`.SmallInteger` types and
dialect-specific types; these all have "Integer" affinity so previously
all compared as the same.
.. change::
:tags: bug, batch
:tickets: 338
Fixed bug where the ``server_default`` parameter of ``alter_column()``
would not function correctly in batch mode.
.. change::
:tags: bug, autogenerate
:tickets: 337
Adjusted the rendering for index expressions such that a :class:`.Column`
object present in the source :class:`.Index` will not be rendered
as table-qualified; e.g. the column name will be rendered alone.
Table-qualified names here were failing on systems such as Postgresql.
.. changelog::
:version: 0.8.3
:released: October 16, 2015
.. change::
:tags: bug, autogenerate
:tickets: 332
Fixed an 0.8 regression whereby the "imports" dictionary member of
the autogen context was removed; this collection is documented in the
"render custom type" documentation as a place to add new imports.
The member is now known as
:attr:`.AutogenContext.imports` and the documentation is repaired.
.. change::
:tags: bug, batch
:tickets: 333
Fixed bug in batch mode where a table that had pre-existing indexes
would create the same index on the new table with the same name,
which on SQLite produces a naming conflict as index names are in a
global namespace on that backend. Batch mode now defers the production
of both existing and new indexes until after the entire table transfer
operation is complete, which also means those indexes no longer take
effect during the INSERT from SELECT section as well; the indexes
are applied in a single step afterwards.
.. change::
:tags: bug, tests
Added "pytest-xdist" as a tox dependency, so that the -n flag
in the test command works if this is not already installed.
Pull request courtesy Julien Danjou.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 324
Fixed issue in PG server default comparison where model-side defaults
configured with Python unicode literals would leak the "u" character
from a ``repr()`` into the SQL used for comparison, creating an invalid
SQL expression, as the server-side comparison feature in PG currently
repurposes the autogenerate Python rendering feature to get a quoted
version of a plain string default.
.. changelog::
:version: 0.8.2
:released: August 25, 2015
.. change::
:tags: bug, autogenerate
:tickets: 321
Added workaround in new foreign key option detection feature for
MySQL's consideration of the "RESTRICT" option being the default,
for which no value is reported from the database; the MySQL impl now
corrects for when the model reports RESTRICT but the database reports
nothing. A similar rule is in the default FK comparison to accommodate
for the default "NO ACTION" setting being present in the model but not
necessarily reported by the database, or vice versa.
.. changelog::
:version: 0.8.1
:released: August 22, 2015
.. change::
:tags: feature, autogenerate
A custom :paramref:`.EnvironmentContext.configure.process_revision_directives`
hook can now generate op directives within the :class:`.UpgradeOps`
and :class:`.DowngradeOps` containers that will be generated as Python
code even when the ``--autogenerate`` flag is False; provided that
``revision_environment=True``, the full render operation will be run
even in "offline" mode.
.. change::
:tags: bug, autogenerate
Repaired the render operation for the :class:`.ops.AlterColumnOp` object
to succeed when the "existing_type" field was not present.
.. change::
:tags: bug, autogenerate
:tickets: 318
Fixed a regression in 0.8 whereby the "multidb" environment template
failed to produce independent migration script segments for the
output template. This was due to the reorganization of the script
rendering system for 0.8. To accommodate this change, the
:class:`.MigrationScript` structure will in the case of multiple
calls to :meth:`.MigrationContext.run_migrations` produce lists
for the :attr:`.MigrationScript.upgrade_ops` and
:attr:`.MigrationScript.downgrade_ops` attributes; each :class:`.UpgradeOps`
and :class:`.DowngradeOps` instance keeps track of its own
``upgrade_token`` and ``downgrade_token``, and each are rendered
individually.
.. seealso::
:ref:`autogen_customizing_multiengine_revision` - additional detail
on the workings of the
:paramref:`.EnvironmentContext.configure.process_revision_directives`
parameter when multiple calls to :meth:`.MigrationContext.run_migrations`
are made.
.. change::
:tags: feature, autogenerate
:tickets: 317
Implemented support for autogenerate detection of changes in the
``ondelete``, ``onupdate``, ``initially`` and ``deferrable``
attributes of :class:`.ForeignKeyConstraint` objects on
SQLAlchemy backends that support these on reflection
(as of SQLAlchemy 1.0.8 currently Postgresql for all four,
MySQL for ``ondelete`` and ``onupdate`` only). A constraint object
that modifies these values will be reported as a "diff" and come out
as a drop/create of the constraint with the modified values.
The fields are ignored for backends which don't reflect these
attributes (as of SQLA 1.0.8 this includes SQLite, Oracle, SQL Server,
others).
.. changelog::
:version: 0.8.0
:released: August 12, 2015
.. change::
:tags: bug, batch
:tickets: 315
Fixed bug in batch mode where the ``batch_op.create_foreign_key()``
directive would be incorrectly rendered with the source table and
schema names in the argument list.
.. change::
:tags: feature, commands
Added new command ``alembic edit``. This command takes the same
arguments as ``alembic show``, however runs the target script
file within $EDITOR. Makes use of the ``python-editor`` library
in order to facilitate the handling of $EDITOR with reasonable
default behaviors across platforms. Pull request courtesy
Michel Albert.
.. change::
:tags: feature, commands
:tickets: 311
Added new multiple-capable argument ``--depends-on`` to the
``alembic revision`` command, allowing ``depends_on`` to be
established at the command line level rather than having to edit
the file after the fact. ``depends_on`` identifiers may also be
specified as branch names at the command line or directly within
the migration file. The values may be specified as partial
revision numbers from the command line which will be resolved to
full revision numbers in the output file.
.. change::
:tags: change, operations
A range of positional argument names have been changed to be
clearer and more consistent across methods within the
:class:`.Operations` namespace. The most prevalent form of name change
is that the descriptive names ``constraint_name`` and ``table_name``
are now used where previously the name ``name`` would be used.
This is in support of the newly modularized and extensible system of
operation objects in :mod:`alembic.operations.ops`.
An argument translation layer is in place
across the ``alembic.op`` namespace that will ensure that named
argument calling styles that use the old names will continue to
function by transparently translating to the new names,
also emitting a warning. This, along with the fact that these
arguments are positional in any case and aren't normally
passed with an explicit name, should ensure that the
overwhelming majority of applications should be unaffected by this
change. The *only* applications that are impacted are those that:
1. use the :class:`.Operations` object directly in some way, rather
than calling upon the ``alembic.op`` namespace, and
2. invoke the methods on :class:`.Operations` using named keyword
arguments for positional arguments like ``table_name``,
``constraint_name``, etc., which commonly were named ``name``
as of 0.7.6.
3. any application that is using named keyword arguments in place
of positional argument for the recently added
:class:`.BatchOperations` object may also be affected.
The naming changes are documented as "versionchanged" for 0.8.0:
* :meth:`.BatchOperations.create_check_constraint`
* :meth:`.BatchOperations.create_foreign_key`
* :meth:`.BatchOperations.create_index`
* :meth:`.BatchOperations.create_unique_constraint`
* :meth:`.BatchOperations.drop_constraint`
* :meth:`.BatchOperations.drop_index`
* :meth:`.Operations.create_check_constraint`
* :meth:`.Operations.create_foreign_key`
* :meth:`.Operations.create_primary_key`
* :meth:`.Operations.create_index`
* :meth:`.Operations.create_table`
* :meth:`.Operations.create_unique_constraint`
* :meth:`.Operations.drop_constraint`
* :meth:`.Operations.drop_index`
* :meth:`.Operations.drop_table`
.. change::
:tags: feature, tests
The default test runner via "python setup.py test" is now py.test.
nose still works via run_tests.py.
.. change::
:tags: feature, operations
:tickets: 302
The internal system for Alembic operations has been reworked to now
build upon an extensible system of operation objects. New operations
can be added to the ``op.`` namespace, including that they are
available in custom autogenerate schemes.
.. seealso::
:ref:`operation_plugins`
.. change::
:tags: feature, autogenerate
:tickets: 301, 306
The internal system for autogenerate has been reworked to build upon
the extensible system of operation objects present in
:ticket:`302`. As part of this change, autogenerate now produces
a full object graph representing a list of migration scripts to
be written as well as operation objects that will render all the
Python code within them; a new hook
:paramref:`.EnvironmentContext.configure.process_revision_directives`
allows end-user code to fully customize what autogenerate will do,
including not just full manipulation of the Python steps to take
but also what file or files will be written and where. Additionally,
autogenerate is now extensible as far as database objects compared
and rendered into scripts; any new operation directive can also be
registered into a series of hooks that allow custom database/model
comparison functions to run as well as to render new operation
directives into autogenerate scripts.
.. seealso::
:ref:`alembic.autogenerate.toplevel`
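As one illustration, a sketch of an ``env.py`` hook that skips writing a
file when autogenerate detects no changes; ``connection`` and
``target_metadata`` are assumed from the usual template::

    def process_revision_directives(context, revision, directives):
        script = directives[0]
        if not script.upgrade_ops.ops:
            directives[:] = []  # emptying the list means no file is written

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        process_revision_directives=process_revision_directives,
    )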
.. change::
:tags: bug, versioning
:tickets: 314
Fixed bug where in the erroneous case that alembic_version contains
duplicate revisions, some commands would fail to process the
version history correctly and end up with a KeyError. The fix
allows the versioning logic to proceed, however a clear error is
emitted later when attempting to update the alembic_version table.
.. changelog::
:version: 0.7.7
:released: July 22, 2015
.. change::
:tags: bug, versioning
:tickets: 310
Fixed critical issue where a complex series of branches/merges would
bog down the iteration algorithm working over redundant nodes for
millions of cycles. An internal adjustment has been
made so that duplicate nodes are skipped within this iteration.
.. change::
:tags: feature, batch
:tickets: 305
Implemented support for :meth:`.BatchOperations.create_primary_key`
and :meth:`.BatchOperations.create_check_constraint`. Additionally,
table keyword arguments are copied from the original reflected table,
such as the "mysql_engine" keyword argument.
.. change::
:tags: bug, environment
:tickets: 300
The :meth:`.MigrationContext.stamp` method, added as part of the
versioning refactor in 0.7 as a more granular version of
:func:`.command.stamp`, now includes the "create the alembic_version
table if not present" step in the same way as the command version,
which was previously omitted.
.. change::
:tags: bug, autogenerate
:tickets: 298
Fixed bug where foreign key options including "onupdate",
"ondelete" would not render within the ``op.create_foreign_key()``
directive, even though they render within a full
``ForeignKeyConstraint`` directive.
.. change::
:tags: bug, tests
Repaired warnings that occur when running unit tests against
SQLAlchemy 1.0.5 or greater involving the "legacy_schema_aliasing"
flag.
.. changelog::
:version: 0.7.6
:released: May 5, 2015
.. change::
:tags: feature, versioning
:tickets: 297
Fixed bug where the case of multiple mergepoints that all
have the identical set of ancestor revisions would fail to be
upgradable, producing an assertion failure. Merge points were
previously assumed to always require at least an UPDATE in
alembic_revision from one of the previous revs to the new one,
however in this case, if one of the mergepoints has already
been reached, the remaining mergepoints have no row to UPDATE; therefore
they must do an INSERT of their target version.
.. change::
:tags: feature, autogenerate
:tickets: 296
Added support for type comparison functions to be not just per
environment, but also present on the custom types themselves, by
supplying a method ``compare_against_backend``.
Added a new documentation section :ref:`compare_types` describing
type comparison fully.
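A rough sketch of the per-type hook; the comparison logic is purely
illustrative::

    import sqlalchemy.types as types

    class MyEpochType(types.TypeDecorator):
        impl = types.Integer

        def compare_against_backend(self, dialect, conn_type):
            # True/False indicates a match/mismatch; returning None
            # defers to the default comparison logic.
            return isinstance(conn_type, types.Integer)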
.. change::
:tags: feature, operations
:tickets: 255
Added a new option
:paramref:`.EnvironmentContext.configure.literal_binds`, which
will pass the ``literal_binds`` flag into the compilation of SQL
constructs when using "offline" mode. This has the effect that
SQL objects like inserts, updates, deletes as well as textual
statements sent using ``text()`` will be compiled such that the dialect
will attempt to render literal values "inline" automatically.
Only a subset of types is typically supported; the
:meth:`.Operations.inline_literal` construct remains as the construct
used to force a specific literal representation of a value.
The :paramref:`.EnvironmentContext.configure.literal_binds` flag
is added to the "offline" section of the ``env.py`` files generated
in new environments.
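A sketch of the flag as it appears in the offline section of ``env.py``;
``url`` and ``target_metadata`` are assumed from the standard template::

    def run_migrations_offline():
        context.configure(
            url=url,
            target_metadata=target_metadata,
            literal_binds=True,  # render bound values inline in --sql mode
        )
        with context.begin_transaction():
            context.run_migrations()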
.. change::
:tags: bug, batch
:tickets: 289
Fully implemented the
:paramref:`~.Operations.batch_alter_table.copy_from` parameter for
batch mode, which previously was not functioning. This allows
"batch mode" to be usable in conjunction with ``--sql``.
.. change::
:tags: bug, batch
:tickets: 287
Repaired support for the :meth:`.BatchOperations.create_index`
directive, which was mis-named internally such that the operation
within a batch context could not proceed. The create index
operation will proceed as part of a larger "batch table recreate"
operation only if
:paramref:`~.Operations.batch_alter_table.recreate` is set to
"always", or if the batch operation includes other instructions that
require a table recreate.
.. changelog::
:version: 0.7.5
:released: March 19, 2015
.. change::
:tags: bug, autogenerate
:tickets: 266
The ``--autogenerate`` option is not valid when used in conjunction
with "offline" mode, e.g. ``--sql``. This now raises a ``CommandError``,
rather than failing more deeply later on. Pull request courtesy
Johannes Erdfelt.
.. change::
:tags: bug, operations, mssql
:tickets: 284
Fixed bug where the mssql DROP COLUMN directive failed to include
modifiers such as "schema" when emitting the DDL.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 282
Postgresql "functional" indexes are necessarily skipped from the
autogenerate process, as the SQLAlchemy backend currently does not
support reflection of these structures. A warning is emitted
both from the SQLAlchemy backend as well as from the Alembic
backend for Postgresql when such an index is detected.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 276
Fixed bug where MySQL backend would report dropped unique indexes
and/or constraints as both at the same time. This is because
MySQL doesn't actually have a "unique constraint" construct that
reports differently than a "unique index", so it is present in both
lists. The net effect though is that the MySQL backend will report
a dropped unique index/constraint as an index in cases where the object
was first created as a unique constraint, if no other information
is available to make the decision. This differs from other backends
like Postgresql which can report on unique constraints and
unique indexes separately.
.. change::
:tags: bug, commands
:tickets: 269
Fixed bug where using a partial revision identifier as the
"starting revision" in ``--sql`` mode in a downgrade operation
would fail to resolve properly.
As a side effect of this change, the
:meth:`.EnvironmentContext.get_starting_revision_argument`
method will return the "starting" revision in its originally-
given "partial" form in all cases, whereas previously when
running within the :meth:`.command.stamp` command, it would have
been resolved to a full number before passing it to the
:class:`.EnvironmentContext`. The resolution of this value to
a real revision number has basically been moved to a more fundamental
level within the offline migration process.
.. change::
:tags: feature, commands
Added a new feature :attr:`.Config.attributes`, to help with the use
case of sharing state such as engines and connections on the outside
with a series of Alembic API calls; also added a new cookbook section
to describe this simple but pretty important use case.
.. seealso::
:ref:`connection_sharing`
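A condensed sketch of the cookbook pattern; ``my_connection`` is an assumed
pre-existing Connection::

    from alembic import command
    from alembic.config import Config

    # caller side: stash the connection on the Config
    config = Config("alembic.ini")
    config.attributes["connection"] = my_connection
    command.upgrade(config, "head")

    # env.py side (where config = context.config): retrieve it,
    # falling back to normal engine creation when absent
    connectable = config.attributes.get("connection", None)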
.. change::
:tags: feature, environment
The format of the default ``env.py`` script has been refined a bit;
it now uses context managers not only for the scope of the transaction,
but also for connectivity from the starting engine. The engine is also
now called a "connectable" in support of the use case of an external
connection being passed in.
.. change::
:tags: feature, versioning
:tickets: 267
Added support for "alembic stamp" to work when given "heads" as an
argument, when multiple heads are present.
.. changelog::
:version: 0.7.4
:released: January 12, 2015
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 241
Repaired issue where a server default specified without ``text()``
that represented a numeric or floating point (e.g. with decimal places)
value would fail in the Postgresql-specific check for "compare server
default"; as PG accepts the value with quotes in the table specification,
it's still valid. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug, autogenerate
:tickets: 259
The rendering of a :class:`~sqlalchemy.schema.ForeignKeyConstraint`
will now ensure that the names of the source and target columns are
the database-side name of each column, and not the value of the
``.key`` attribute as may be set only on the Python side.
This is because Alembic generates the DDL for constraints
as standalone objects without the need to actually refer to an in-Python
:class:`~sqlalchemy.schema.Table` object, so there's no step that
would resolve these Python-only key names to database column names.
.. change::
:tags: bug, autogenerate
:tickets: 260
Fixed bug in foreign key autogenerate where if the in-Python table
used custom column keys (e.g. using the ``key='foo'`` kwarg to
``Column``), the comparison of existing foreign keys to those specified
in the metadata would fail, as the reflected table would not have
these keys available which to match up. Foreign key comparison for
autogenerate now ensures it's looking at the database-side names
of the columns in all cases; this matches the same functionality
within unique constraints and indexes.
.. change::
:tags: bug, autogenerate
:tickets: 261
Fixed issue in autogenerate type rendering where types that belong
to modules that have the name "sqlalchemy" in them would be mistaken
as being part of the ``sqlalchemy.`` namespace. Pull req courtesy
Bartosz Burclaf.
.. changelog::
:version: 0.7.3
:released: December 30, 2014
.. change::
:tags: bug, versioning
:tickets: 258
Fixed regression in new versioning system where upgrade / history
operation would fail on AttributeError if no version files were
present at all.
.. changelog::
:version: 0.7.2
:released: December 18, 2014
.. change::
:tags: bug, sqlite, autogenerate
Adjusted the SQLite backend regarding autogen of unique constraints
to work fully with the current SQLAlchemy 1.0, which now will report
on UNIQUE constraints that have no name.
.. change::
:tags: bug, batch
:tickets: 254
Fixed bug in batch where if the target table contained multiple
foreign keys to the same target table, the batch mechanics would
fail with a "table already exists" error. Thanks for the help
on this from Lucas Kahlert.
.. change::
:tags: bug, mysql
:tickets: 251
Fixed an issue where the MySQL routine to skip foreign-key-implicit
indexes would also catch unnamed unique indexes, as they would be
named after the column and look like the FK indexes. Pull request
courtesy Johannes Erdfelt.
.. change::
:tags: bug, mssql, oracle
:tickets: 253
Repaired a regression in both the MSSQL and Oracle dialects whereby
the overridden ``_exec()`` method failed to return a value, as is
needed now in the 0.7 series.
.. changelog::
:version: 0.7.1
:released: December 3, 2014
.. change::
:tags: bug, batch
The ``render_as_batch`` flag was inadvertently hardcoded to ``True``,
so all autogenerates were emitting batch mode; this has been
fixed so that batch mode again takes effect only when selected in env.py.
.. change::
:tags: feature, autogenerate
:tickets: 178
Support for autogenerate of FOREIGN KEY constraints has been added.
These are delivered within the autogenerate process in the same
manner as UNIQUE constraints, including ``include_object`` support.
Big thanks to Ann Kamyshnikova for doing the heavy lifting here.
.. change::
:tags: feature, batch
Added :paramref:`~.Operations.batch_alter_table.naming_convention`
argument to :meth:`.Operations.batch_alter_table`, as this is necessary
in order to drop foreign key constraints; these are often unnamed
on the target database, and in the case that they are named, SQLAlchemy
is as of the 0.9 series not including these names yet.
.. seealso::
:ref:`dropping_sqlite_foreign_keys`
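A sketch following the linked documentation; the convention and constraint
name are illustrative::

    naming_convention = {
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    }
    with op.batch_alter_table(
        "bar", naming_convention=naming_convention
    ) as batch_op:
        batch_op.drop_constraint("fk_bar_foo_id_foo", type_="foreignkey")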
.. change::
:tags: bug, batch
Fixed bug where the "source_schema" argument was not correctly passed
when calling :meth:`.BatchOperations.create_foreign_key`. Pull
request courtesy Malte Marquarding.
.. change::
:tags: bug, batch
:tickets: 249
Repaired the inspection, copying and rendering of CHECK constraints
and so-called "schema" types such as Boolean, Enum within the batch
copy system; the CHECK constraint will not be "doubled" when the table is
copied, and additionally the inspection of the CHECK constraint for
its member columns will no longer fail with an attribute error.
.. change::
:tags: feature, batch
Added two new arguments
:paramref:`.Operations.batch_alter_table.reflect_args`
and :paramref:`.Operations.batch_alter_table.reflect_kwargs`, so that
arguments may be passed directly to suit the
:class:`~.sqlalchemy.schema.Table`
object that will be reflected.
.. seealso::
:ref:`batch_controlling_table_reflection`
.. changelog::
:version: 0.7.0
:released: November 24, 2014
.. change::
:tags: feature, versioning
:tickets: 167
The "multiple heads / branches" feature has now landed. This is
by far the most significant change Alembic has seen since its inception;
while the workflow of most commands hasn't changed, and the format
of version files and the ``alembic_version`` table are unchanged as well,
a new suite of features opens up in the case where multiple version
files refer to the same parent, or to the "base". Merging of
branches, operating across distinct named heads, and multiple
independent bases are now all supported. The feature incurs radical
changes to the internals of versioning and traversal, and should be
treated as "beta mode" for the next several subsequent releases
within 0.7.
.. seealso::
:ref:`branches`
.. change::
:tags: feature, versioning
:tickets: 124
In conjunction with support for multiple independent bases, the
specific version directories are now also configurable to include
multiple, user-defined directories. When multiple directories exist,
the creation of a revision file with no down revision requires
that the starting directory is indicated; the creation of subsequent
revisions along that lineage will then automatically use that
directory for new files.
.. seealso::
:ref:`multiple_version_directories`
.. change::
:tags: feature, operations, sqlite
:tickets: 21
Added "move and copy" workflow, where a table to be altered is copied to
a new one with the new structure and the old one dropped, is now
implemented for SQLite as well as all database backends in general
using the new :meth:`.Operations.batch_alter_table` system. This
directive provides a table-specific operations context which gathers
column- and constraint-level mutations specific to that table, and
at the end of the context creates a new table combining the structure
of the old one with the given changes, copies data from old table to new,
and finally drops the old table,
renaming the new one to the existing name. This is required for
fully featured SQLite migrations, as SQLite has very little support for the
traditional ALTER directive. The batch directive
is intended to produce code that is still compatible with other databases,
in that the "move and copy" process only occurs for SQLite by default,
while still providing some level of sanity to SQLite's
requirement by allowing multiple table mutation operations to
proceed within one "move and copy" as well as providing explicit
control over when this operation actually occurs. The "move and copy"
feature may be optionally applied to other backends as well, however
dealing with referential integrity constraints from other tables must
still be handled explicitly.
.. seealso::
:ref:`batch_migrations`
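A minimal sketch of the directive; column and table names are
illustrative::

    import sqlalchemy as sa
    from alembic import op

    with op.batch_alter_table("account") as batch_op:
        batch_op.add_column(sa.Column("last_name", sa.String(50)))
        batch_op.drop_column("email")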
.. change::
:tags: feature, commands
Relative revision identifiers as used with ``alembic upgrade``,
``alembic downgrade`` and ``alembic history`` can be combined with
specific revisions as well, e.g. ``alembic upgrade ae10+3``, to produce
a migration target relative to the given exact version.
.. change::
:tags: bug, commands
:tickets: 248
The ``alembic revision`` command accepts the ``--sql`` option to
suit some very obscure use case where the ``revision_environment``
flag is set up, so that ``env.py`` is run when ``alembic revision``
is run even though autogenerate isn't specified. As this flag is
otherwise confusing, error messages are now raised if
``alembic revision`` is invoked with both ``--sql`` and
``--autogenerate`` or with ``--sql`` without
``revision_environment`` being set.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 247
Added a rule for Postgresql to not render a "drop unique" and "drop index"
given the same name; for now it is assumed that the "index" is the
implicit one Postgresql generates. Future integration with
new SQLAlchemy 1.0 features will improve this to be more
resilient.
.. change::
:tags: bug, autogenerate
:tickets: 247
A change in the ordering when columns and constraints are dropped;
autogenerate will now place the "drop constraint" calls *before*
the "drop column" calls, so that columns involved in those constraints
still exist when the constraint is dropped.
.. change::
:tags: feature, commands
New commands added: ``alembic show``, ``alembic heads`` and
``alembic merge``. Also, a new option ``--verbose`` has been
added to several informational commands, such as ``alembic history``,
``alembic current``, ``alembic branches``, and ``alembic heads``.
``alembic revision`` also contains several new options used
within the new branch management system. The output of commands has
been altered in many cases to support new fields and attributes;
the ``history`` command in particular now returns its "verbose" output
only if ``--verbose`` is sent; without this flag it reverts to its
older behavior of short line items (which was never changed in the docs).
.. change::
:tags: changed, commands
The ``--head_only`` option to the ``alembic current`` command is
deprecated; the ``current`` command now lists just the version numbers
alone by default; use ``--verbose`` to get at additional output.
.. change::
:tags: feature, config
Added new argument :paramref:`.Config.config_args`, allows a dictionary
of replacement variables to be passed which will serve as substitution
values when an API-produced :class:`.Config` consumes the ``.ini``
file. Pull request courtesy Noufal Ibrahim.
.. change::
:tags: bug, oracle
:tickets: 245
The Oracle dialect sets "transactional DDL" to False by default,
as Oracle does not support transactional DDL.
.. change::
:tags: bug, autogenerate
:tickets: 243
Fixed a variety of issues surrounding rendering of Python code that
contains unicode literals. The first is that the "quoted_name" construct
that SQLAlchemy uses to represent table and column names as well
as schema names does not ``repr()`` correctly on Py2K when the value
contains unicode characters; therefore an explicit stringification is
added to these. Additionally, SQL expressions such as server defaults
were not being generated in a unicode-safe fashion leading to decode
errors if server defaults contained non-ascii characters.
.. change::
:tags: bug, operations
:tickets: 174
The :meth:`.Operations.add_column` directive will now additionally emit
the appropriate ``CREATE INDEX`` statement if the
:class:`~sqlalchemy.schema.Column` object specifies ``index=True``.
Pull request courtesy David Szotten.
.. change::
:tags: feature, operations
:tickets: 205
The :class:`~sqlalchemy.schema.Table` object is now returned when
the :meth:`.Operations.create_table` method is used. This ``Table``
is suitable for use in subsequent SQL operations, in particular
the :meth:`.Operations.bulk_insert` operation.
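A minimal sketch of the round trip; names are illustrative::

    import sqlalchemy as sa
    from alembic import op

    accounts = op.create_table(
        "accounts",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )
    op.bulk_insert(
        accounts,
        [{"id": 1, "name": "jack"}, {"id": 2, "name": "ed"}],
    )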
.. change::
:tags: feature, autogenerate
:tickets: 203
Indexes and unique constraints are now included in the
:paramref:`.EnvironmentContext.configure.include_object` hook.
Indexes are sent with type ``"index"`` and unique constraints with
type ``"unique_constraint"``.
.. change::
:tags: bug, autogenerate
:tickets: 219
Bound parameters are now resolved as "literal" values within the
SQL expression inside of a CheckConstraint(), when rendering the SQL
as a text string; supported for SQLAlchemy 0.8.0 and forward.
.. change::
:tags: bug, autogenerate
:tickets: 199
Added a workaround for SQLAlchemy issue #3023 (fixed in 0.9.5) where
a column that's part of an explicit PrimaryKeyConstraint would not
have its "nullable" flag set to False, thus producing a false
autogenerate. Also added a related correction to MySQL which will
correct for MySQL's implicit server default of '0' when a NULL integer
column is turned into a primary key column.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 240
Repaired issue related to the fix for #208 and others; a composite
foreign key reported by MySQL would cause a KeyError as Alembic
attempted to remove MySQL's implicitly generated indexes from the
autogenerate list.
.. change::
:tags: bug, autogenerate
:tickets: 28
If the "alembic_version" table is present in the target metadata,
autogenerate will skip this also. Pull request courtesy
Dj Gilcrease.
.. change::
:tags: bug, autogenerate
:tickets: 77
The :paramref:`.EnvironmentContext.configure.version_table`
and :paramref:`.EnvironmentContext.configure.version_table_schema`
arguments are now honored during the autogenerate process, such that
these names will be used as the "skip" names on both the database
reflection and target metadata sides.
.. change::
:tags: changed, autogenerate
:tickets: 229
The default value of the
:paramref:`.EnvironmentContext.configure.user_module_prefix`
parameter is **no longer the same as the SQLAlchemy prefix**.
When omitted, user-defined types will now use the ``__module__``
attribute of the type class itself when rendering in an
autogenerated module.
.. change::
:tags: bug, templates
:tickets: 234
Revision files are now written out using the ``'wb'`` modifier to
``open()``, since Mako reads the templates with ``'rb'``, thus preventing
CRs from being doubled up as has been observed on windows. The encoding
of the output now defaults to 'utf-8', which can be configured using
a newly added config file parameter ``output_encoding``.
.. change::
:tags: bug, operations
:tickets: 230
Added support for use of the :class:`~sqlalchemy.sql.elements.quoted_name`
construct when using the ``schema`` argument within operations. This
allows a name containing a dot to be fully quoted, as well as to
provide configurable quoting on a per-name basis.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 73
Added a routine by which the Postgresql Alembic dialect inspects
the server default of INTEGER/BIGINT columns as they are reflected
during autogenerate for the pattern ``nextval(<name>...)`` containing
a potential sequence name, then queries ``pg_catalog`` to see if this
sequence is "owned" by the column being reflected; if so, it assumes
this is a SERIAL or BIGSERIAL column and the server default is
omitted from the column reflection as well as any kind of
server_default comparison or rendering, along with an INFO message
in the logs indicating this has taken place. This allows SERIAL/BIGSERIAL
columns to keep the SEQUENCE from being unnecessarily present within
the autogenerate operation.
.. change::
:tags: bug, autogenerate
:tickets: 197, 64, 196
The system by which autogenerate renders expressions within
a :class:`~sqlalchemy.schema.Index`, the ``server_default``
of :class:`~sqlalchemy.schema.Column`, and the
``existing_server_default`` of
:meth:`.Operations.alter_column` has been overhauled to anticipate
arbitrary SQLAlchemy SQL constructs, such as ``func.somefunction()``,
``cast()``, ``desc()``, and others. The system does not, as might
be preferred, render the full-blown Python expression as originally
created within the application's source code, as this would be exceedingly
complex and difficult. Instead, it renders the SQL expression against
the target backend that's subject to the autogenerate, and then
renders that SQL inside of a :func:`~sqlalchemy.sql.expression.text`
construct as a literal SQL string. This approach still has the
downside that the rendered SQL construct may not be backend-agnostic
in all cases, so there is still a need for manual intervention in that
small number of cases, but overall the majority of cases should work
correctly now. Big thanks to Carlos Rivera for pull requests and
support on this.
.. change::
:tags: feature
SQLAlchemy's testing infrastructure is now used to run tests.
This system supports both nose and pytest and opens the way
for Alembic testing to support any number of backends, parallel
testing, and 3rd party dialect testing.
.. change::
:tags: changed, compatibility
Minimum SQLAlchemy version is now 0.7.6, however at least
0.8.4 is strongly recommended. The overhaul of the test suite
allows for fully passing tests on all SQLAlchemy versions
from 0.7.6 on forward.
.. change::
:tags: bug, operations
The "match" keyword is not sent to :class:`.ForeignKeyConstraint`
by :meth:`.Operations.create_foreign_key` when SQLAlchemy 0.7 is in use;
this keyword was added to SQLAlchemy as of 0.8.0.
.. changelog::
:version: 0.6.7
:released: September 9, 2014
.. change::
:tags: bug, mssql
Fixed bug in MSSQL dialect where "rename table" wasn't using
``sp_rename()`` as is required on SQL Server. Pull request courtesy
Łukasz Bołdys.
.. change::
:tags: feature
:tickets: 222
Added support for functional indexes when using the
:meth:`.Operations.create_index` directive. Within the list of columns,
the SQLAlchemy ``text()`` construct can be sent, embedding a literal
SQL expression; the :meth:`.Operations.create_index` will perform some hackery
behind the scenes to get the :class:`.Index` construct to cooperate.
This works around some current limitations in :class:`.Index`
which should be resolved on the SQLAlchemy side at some point.
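A minimal sketch; index, table and expression are illustrative::

    from sqlalchemy import text
    from alembic import op

    op.create_index(
        "ix_account_lower_name",
        "account",
        [text("lower(name)")],
    )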
.. changelog::
:version: 0.6.6
:released: August 7, 2014
.. change::
:tags: bug
:tickets: 95
A file named ``__init__.py`` in the ``versions/`` directory is now
ignored by Alembic when the collection of version files is retrieved.
Pull request courtesy Michael Floering.
.. change::
:tags: bug
Fixed Py3K bug where an attempt would be made to sort None against
string values when autogenerate would detect tables across multiple
schemas, including the default schema. Pull request courtesy
paradoxxxzero.
.. change::
:tags: bug
Autogenerate render will render the arguments within a Table construct
using ``*[...]`` when the number of columns/elements is greater than
255. Pull request courtesy Ryan P. Kelly.
.. change::
:tags: bug
Fixed bug where foreign key constraints would fail to render in
autogenerate when a schema name was present. Pull request courtesy
Andreas Zeidler.
.. change::
:tags: bug
:tickets: 212
Some deep-in-the-weeds fixes to try to get "server default" comparison
working better across platforms and expressions, in particular on
the Postgresql backend, mostly dealing with quoting/not quoting of various
expressions at the appropriate time and on a per-backend basis.
Repaired and tested support for such defaults as Postgresql interval
and array defaults.
.. change::
:tags: enhancement
:tickets: 209
When a run of Alembic command line fails due to ``CommandError``,
the output now prefixes the string with ``"FAILED:"``, and the error
is also written to the log output using ``log.error()``.
.. change::
:tags: bug
:tickets: 208
Liberalized even more the check for MySQL indexes that shouldn't be
counted in autogenerate as "drops"; this time it's been reported
that an implicitly created index might be named the same as a composite
foreign key constraint, and not the actual columns, so we now skip those
when detected as well.
.. change::
:tags: feature
Added a new accessor :attr:`.MigrationContext.config`, when used
in conjunction with a :class:`.EnvironmentContext` and
:class:`.Config`, this config will be returned. Patch
courtesy Marc Abramowitz.
.. changelog::
:version: 0.6.5
:released: May 3, 2014
.. change::
:tags: bug, autogenerate, mysql
:tickets: 202
This release's "autogenerate index detection" bug: when a MySQL table
includes an Index with the same name as a column, autogenerate reported
it as an "add" even though it's not; this is because we ignore reflected
indexes of this nature due to MySQL creating them implicitly. Indexes
that are named the same as a column are now ignored on
MySQL if we see that the backend is reporting that it already exists;
this indicates that we can still detect additions of these indexes
but not drops, as we cannot distinguish a backend index same-named
as the column as one that is user generated or mysql-generated.
.. change::
:tags: feature, environment
:tickets: 201
Added new feature :paramref:`.EnvironmentContext.configure.transaction_per_migration`,
which when True causes the BEGIN/COMMIT pair to incur for each migration
individually, rather than for the whole series of migrations. This is
to assist with some database directives that need to be within individual
transactions, without the need to disable transactional DDL entirely.
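A sketch of the flag in ``env.py``; ``connection`` and ``target_metadata``
are assumed from the standard template::

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        transaction_per_migration=True,  # BEGIN/COMMIT per migration
    )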
.. change::
:tags: bug, autogenerate
:tickets: 200
Fixed bug where the ``include_object()`` filter would not receive
the original :class:`.Column` object when evaluating a database-only
column to be dropped; the object would not include the parent
:class:`.Table` nor other aspects of the column that are important
for generating the "downgrade" case where the column is recreated.
.. change::
:tags: bug, environment
:tickets: 195
Fixed bug where :meth:`.EnvironmentContext.get_x_argument`
would fail if the :class:`.Config` in use didn't actually
originate from a command line call.
.. change::
:tags: bug, autogenerate
:tickets: 194
Fixed another bug regarding naming conventions, continuing
from :ticket:`183`, where add_index()
drop_index() directives would not correctly render the ``f()``
construct when the index contained a convention-driven name.
.. changelog::
:version: 0.6.4
:released: March 28, 2014
.. change::
:tags: bug, mssql
:tickets: 186
Added quoting to the table name when the special EXEC is run to
drop any existing server defaults or constraints when the
:paramref:`.Operations.drop_column.mssql_drop_check` or
:paramref:`.Operations.drop_column.mssql_drop_default`
arguments are used.
.. change::
:tags: bug, mysql
:tickets: 103
Added/fixed support for MySQL "SET DEFAULT" / "DROP DEFAULT" phrases,
which will now be rendered if only the server default is changing
or being dropped (e.g. specify None to alter_column() to indicate
"DROP DEFAULT"). Also added support for rendering MODIFY rather than
CHANGE when the column name isn't changing.
.. change::
:tags: bug
:tickets: 190
Added support for the ``initially``, ``match`` keyword arguments
as well as dialect-specific keyword arguments to
:meth:`.Operations.create_foreign_key`.
.. change::
:tags: feature
:tickets: 163
Altered the support for "sourceless" migration files (e.g. only
.pyc or .pyo present) so that the flag "sourceless=true" needs to
be in alembic.ini for this behavior to take effect.
.. change::
:tags: bug, mssql
:tickets: 185
The feature that keeps on giving, index/unique constraint autogenerate
detection, has even more fixes, this time to accommodate database dialects
that don't yet report on unique constraints, but where the backend
does report unique constraints as indexes. The logic
Alembic uses to distinguish between "this is an index!" vs.
"this is a unique constraint that is also reported as an index!" has now
been further enhanced to not produce unwanted migrations when the dialect
is observed to not yet implement get_unique_constraints() (e.g. mssql).
Note that such a backend will no longer report index drops for unique
indexes, as these cannot be distinguished from an unreported unique
index.
.. change::
:tags: bug
:tickets: 183
Extensive changes have been made to more fully support SQLAlchemy's new
naming conventions feature. Note that while SQLAlchemy has added this
feature as of 0.9.2, some additional fixes in 0.9.4 are needed to
resolve some of the issues:
1. The :class:`.Operations` object now takes into account the naming
conventions that are present on the :class:`.MetaData` object that's
associated using :paramref:`~.EnvironmentContext.configure.target_metadata`.
When :class:`.Operations` renders a constraint directive like
``ADD CONSTRAINT``, it now will make use of this naming convention
when it produces its own temporary :class:`.MetaData` object.
2. Note however that the autogenerate feature in most cases generates
constraints like foreign keys and unique constraints with the
final names intact; the only exception are the constraints implicit
with a schema-type like Boolean or Enum. In most of these cases,
the naming convention feature will not take effect for these constraints
and will instead use the given name as is, with one exception....
3. Naming conventions which use the ``"%(constraint_name)s"`` token, that
is, produce a new name that uses the original name as a component,
will still be pulled into the naming convention converter and be
converted. The problem arises when autogenerate renders a constraint
with its already-generated name present in the migration file's source
code, the name will be doubled up at render time due to the combination
of #1 and #2. So to work around this, autogenerate now renders these
already-tokenized names using the new :meth:`.Operations.f` component.
This component is only generated if **SQLAlchemy 0.9.4** or greater
is in use.
Therefore it is highly recommended that an upgrade to Alembic 0.6.4
be accompanied by an upgrade of SQLAlchemy 0.9.4, if the new naming
conventions feature is used.
.. seealso::
:ref:`autogen_naming_conventions`
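A hedged sketch of the rendered form; the constraint name is illustrative
and assumed to be the already-tokenized result of the convention::

    op.create_unique_constraint(
        op.f("uq_user_name"),  # op.f() marks the name as final, so the
        "user",                # naming convention will not re-apply
        ["name"],
    )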
.. change::
:tags: bug
:tickets: 160
Suppressed IOErrors which can raise when program output pipe
is closed under a program like ``head``; however this only
works on Python 2. On Python 3, there is not yet a known way to
suppress the BrokenPipeError warnings without prematurely terminating
the program via signals.
.. change::
:tags: bug
:tickets: 179
Fixed bug where :meth:`.Operations.bulk_insert` would not function
properly when :meth:`.Operations.inline_literal` values were used,
either in --sql or non-sql mode. The values will now render
directly in --sql mode. For compatibility with "online" mode,
a new flag :paramref:`~.Operations.bulk_insert.multiinsert`
can be set to False which will cause each parameter set to be
compiled and executed with individual INSERT statements.
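A minimal sketch; ``accounts`` is assumed to be a Table from a prior
``op.create_table()`` or ``sa.table()`` call::

    op.bulk_insert(
        accounts,
        [
            {"id": 1, "name": op.inline_literal("jack")},
            {"id": 2, "name": op.inline_literal("ed")},
        ],
        multiinsert=False,  # one compiled INSERT per row in "online" mode
    )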
.. change::
:tags: bug, py3k
:tickets: 175
Fixed a failure of the system that allows "legacy keyword arguments"
to be understood, which arose as of a change in Python 3.4 regarding
decorators. A workaround is applied that allows the code to work
across Python 3 versions.
.. change::
:tags: feature
The :func:`.command.revision` command now returns the :class:`.Script`
object corresponding to the newly generated revision. From this
structure, one can get the revision id, the module documentation,
and everything else, for use in scripts that call upon this command.
Pull request courtesy Robbie Coomber.
.. changelog::
:version: 0.6.3
:released: February 2, 2014
.. change::
:tags: bug
:tickets: 172
Added a workaround for when we call ``fcntl.ioctl()`` to get at
``TERMWIDTH``; if the function returns zero, as is reported to occur
in some pseudo-ttys, the message wrapping system is disabled in the
same way as if ``ioctl()`` failed.
.. change::
:tags: feature
:tickets: 171
Added new argument
:paramref:`.EnvironmentContext.configure.user_module_prefix`.
This prefix is applied when autogenerate renders a user-defined type,
which here is defined as any type that is from a module outside of the
``sqlalchemy.`` hierarchy. This prefix defaults to ``None``, in
which case the :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`
is used, thus preserving the current behavior.
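A sketch of the setting in ``env.py``; the prefix value is illustrative::

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        user_module_prefix="mypackage.types.",  # renders e.g. mypackage.types.MyType()
    )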
.. change::
:tags: bug
:tickets: 170
Added support for autogenerate covering the use case where :class:`.Table`
objects specified in the metadata have an explicit ``schema`` attribute
whose name matches that of the connection's default schema
(e.g. "public" for Postgresql). Previously, it was assumed that "schema"
was ``None`` when it matched the "default" schema, now the comparison
adjusts for this.
.. change::
:tags: bug
The :func:`.compare_metadata` public API function now takes into
account the settings for
:paramref:`.EnvironmentContext.configure.include_object`,
:paramref:`.EnvironmentContext.configure.include_symbol`,
and :paramref:`.EnvironmentContext.configure.include_schemas`, in the
same way that the ``--autogenerate`` command does. Pull
request courtesy Roman Podoliaka.
.. change::
:tags: bug
:tickets: 168
Calling :func:`.bulk_insert` with an empty list will not emit any
commands on the current connection. This was already the case with
``--sql`` mode, so is now the case with "online" mode.
.. change::
:tags: bug
Enabled schema support for index and unique constraint autodetection;
previously these were non-functional and could in some cases lead to
attribute errors. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug
:tickets: 164
More fixes to index autodetection; indexes created with expressions
like DESC or functional indexes will no longer cause AttributeError
exceptions when attempting to compare the columns.
.. change::
:tags: feature
:tickets: 163
The :class:`.ScriptDirectory` system that loads migration files
from a ``versions/`` directory now supports so-called
"sourceless" operation, where the ``.py`` files are not present
and instead ``.pyc`` or ``.pyo`` files are directly present where
the ``.py`` files should be. Note that while Python 3.3 has a
new system of locating ``.pyc``/``.pyo`` files within a directory
called ``__pycache__`` (e.g. PEP-3147), PEP-3147 maintains
support for the "source-less imports" use case, where the
``.pyc``/``.pyo`` are present in the "old" location, e.g. next
to the ``.py`` file; this is the usage that's supported even when
running Python 3.3.
.. changelog::
:version: 0.6.2
:released: Fri Dec 27 2013
.. change::
:tags: bug
Autogenerate for ``op.create_table()`` will not include a
``PrimaryKeyConstraint()`` that has no columns.
.. change::
:tags: bug
Fixed bug in the not-internally-used :meth:`.ScriptDirectory.get_base`
method which would fail if called on an empty versions directory.
.. change::
:tags: bug
:tickets: 157
An almost-rewrite of the new unique constraint/index autogenerate
detection, to accommodate a variety of issues. The emphasis is on
not generating false positives for those cases where no net change
is present, as these errors are the ones that impact all autogenerate
runs:
* Fixed an issue with unique constraint autogenerate detection where
a named ``UniqueConstraint`` on both sides with column changes would
render with the "add" operation before the "drop", requiring the
user to reverse the order manually.
* Corrected for MySQL's apparent addition of an implicit index
for a foreign key column, so that it doesn't show up as "removed".
This required that the index/constraint autogen system query the
dialect-specific implementation for special exceptions.
* reworked the "dedupe" logic to accommodate MySQL's bi-directional
duplication of unique indexes as unique constraints, and unique
constraints as unique indexes. Postgresql's slightly different
logic of duplicating unique constraints into unique indexes
continues to be accommodated as well. Note that a unique index
or unique constraint removal on a backend that duplicates these may
show up as a distinct "remove_constraint()" / "remove_index()" pair,
which may need to be corrected in the post-autogenerate if multiple
backends are being supported.
* added another dialect-specific exception to the SQLite backend
when dealing with unnamed unique constraints, as the backend can't
currently report on constraints that were made with this technique,
hence they'd come out as "added" on every run.
* the ``op.create_table()`` directive will be auto-generated with
the ``UniqueConstraint`` objects inline, but will not double them
up with a separate ``create_unique_constraint()`` call, which may
have been occurring. Indexes still get rendered as distinct
``op.create_index()`` calls even when the corresponding table was
created in the same script.
* the inline ``UniqueConstraint`` within ``op.create_table()`` includes
all the options like ``deferrable``, ``initially``, etc. Previously
these weren't rendering.
.. change::
:tags: feature, mssql
Added new argument ``mssql_drop_foreign_key`` to
:meth:`.Operations.drop_column`. Like ``mssql_drop_default``
and ``mssql_drop_check``, will do an inline lookup for a
single foreign key which applies to this column, and drop it.
For a column with more than one FK, you'd still need to explicitly
use :meth:`.Operations.drop_constraint` given the name,
even though only MSSQL has this limitation in the first place.
.. change::
:tags: bug, mssql
The MSSQL backend will add the batch separator (e.g. ``"GO"``)
in ``--sql`` mode after the final ``COMMIT`` statement, to ensure
that statement is also processed in batch mode. Courtesy
Derek Harland.
.. changelog::
:version: 0.6.1
:released: Wed Nov 27 2013
.. change::
:tags: bug, mysql
:tickets: 152
Fixed bug where :func:`.op.alter_column` in the MySQL dialect
would fail to apply quotes to column names that had mixed casing
or spaces.
.. change::
:tags: feature
Expanded the size of the "slug" generated by "revision" to 40
characters, which is also configurable by new field
``truncate_slug_length``; and also split on the word rather than the
character; courtesy Frozenball.
.. change::
:tags: bug
:tickets: 135
Fixed the output wrapping for Alembic message output, so that
we either get the terminal width for "pretty printing" with
indentation, or if not we just output the text as is; in any
case the text won't be wrapped too short.
.. change::
:tags: bug
Fixes to Py3k in-place compatibility regarding output encoding and related areas;
the use of the new io.* package introduced some incompatibilities on Py2k.
These should be resolved, due to the introduction of new adapter types
for translating from io.* to Py2k file types, StringIO types.
Thanks to Javier Santacruz for help with this.
.. change::
:tags: bug
:tickets: 145
Fixed py3k bug where the wrong form of ``next()`` was being called
when using the list_templates command. Courtesy Chris Wilkes.
.. change::
:tags: feature
:tickets: 107
Support for autogeneration detection and rendering of indexes and
unique constraints has been added. The logic goes through some effort
in order to differentiate between true unique constraints and
unique indexes, where there are some quirks on backends like Postgresql.
The effort here in producing the feature and tests is courtesy of IJL.
.. change::
:tags: bug
Fixed bug introduced by new ``include_object`` argument where the
inspected column would be misinterpreted when using a user-defined
type comparison function, causing a KeyError or similar expression-related
error. Fix courtesy Maarten van Schaik.
.. change::
:tags: bug
Added the "deferrable" keyword argument to :func:`.op.create_foreign_key`
so that ``DEFERRABLE`` constraint generation is supported; courtesy
Pedro Romano.
.. change::
:tags: bug
:tickets: 137
Ensured that strings going to stdout go through an encode/decode phase,
so that any non-ASCII characters get to the output stream correctly
in both Py2k and Py3k. Also added source encoding detection using
Mako's parse_encoding() routine in Py2k so that the __doc__ of a
non-ascii revision file can be treated as unicode in Py2k.
.. changelog::
:version: 0.6.0
:released: Fri July 19 2013
.. change::
:tags: feature
:tickets: 101
Added new kw argument to :meth:`.EnvironmentContext.configure`
``include_object``. This is a more flexible version of the
``include_symbol`` argument which allows filtering of columns as well as tables
from the autogenerate process,
and in the future will also work for types, constraints and
other constructs. The fully constructed schema object is passed,
including its name and type as well as a flag indicating if the object
is from the local application metadata or is reflected.
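A minimal ``env.py`` sketch of such a callable (table name hypothetical; ``connection`` and ``target_metadata`` assumed to be set up elsewhere in the file)::

    def include_object(object, name, type_, reflected, compare_to):
        # skip a known legacy table; include all other objects
        if type_ == "table" and name == "legacy_audit":
            return False
        return True

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        include_object=include_object,
    )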
.. change::
:tags: feature
The output of the ``alembic history`` command is now
expanded to show information about each change on multiple
lines, including the full top message,
resembling the formatting of git log.
.. change::
:tags: feature
Added :attr:`alembic.config.Config.cmd_opts` attribute,
allows access to the ``argparse`` options passed to the
``alembic`` runner.
.. change::
:tags: feature
:tickets: 120
Added new command line argument ``-x``, allows extra arguments
to be appended to the command line which can be consumed
within an ``env.py`` script by looking at
``context.config.cmd_opts.x``, or more simply a new
method :meth:`.EnvironmentContext.get_x_argument`.
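For example, given a hypothetical invocation like ``alembic -x data=true upgrade head``, an ``env.py`` could consume the flag roughly as::

    from alembic import context

    x_args = context.get_x_argument(as_dictionary=True)
    if x_args.get("data") == "true":
        # perform the optional data-migration step
        ...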
.. change::
:tags: bug
:tickets: 125
Added support for options like "name" etc. to be rendered
within CHECK constraints in autogenerate. Courtesy
Sok Ann Yap.
.. change::
:tags: misc
Source repository has been moved from Mercurial to Git.
.. change::
:tags: bug
Repaired autogenerate rendering of ForeignKeyConstraint
to include use_alter argument, if present.
.. change::
:tags: feature
Added ``-r`` argument to ``alembic history`` command,
allows specification of ``[start]:[end]`` to view
a slice of history. Accepts revision numbers, symbols
"base", "head", a new symbol "current" representing the
current migration, as well as relative ranges for one
side at a time (i.e. ``-r-5:head``, ``-rcurrent:+3``).
Courtesy Atsushi Odagiri for this feature.
.. change::
:tags: feature
:tickets: 55
Source base is now in-place for Python 2.6 through
3.3, without the need for 2to3. Support for Python 2.5
and below has been dropped. Huge thanks to
Hong Minhee for all the effort on this!
.. changelog::
:version: 0.5.0
:released: Thu Apr 4 2013
.. note::
Alembic 0.5.0 now requires at least
version 0.7.3 of SQLAlchemy to run properly.
Support for 0.6 has been dropped.
.. change::
:tags: feature
:tickets: 76
Added ``version_table_schema`` argument
to :meth:`.EnvironmentContext.configure`,
complements the ``version_table`` argument to
set an optional remote schema for the version
table. Courtesy Christian Blume.
.. change::
:tags: bug, postgresql
:tickets: 32
Fixed format of RENAME for table that includes
schema with Postgresql; the schema name shouldn't
be in the "TO" field.
.. change::
:tags: feature
:tickets: 90
Added ``output_encoding`` option to
:meth:`.EnvironmentContext.configure`,
used with ``--sql`` mode to apply an encoding
to the output stream.
.. change::
:tags: feature
:tickets: 93
Added :meth:`.Operations.create_primary_key`
operation, which will generate an ADD CONSTRAINT
for a primary key.
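A minimal migration sketch (table and constraint names hypothetical)::

    from alembic import op

    def upgrade():
        # emits ALTER TABLE accounts ADD CONSTRAINT pk_accounts PRIMARY KEY (id)
        op.create_primary_key("pk_accounts", "accounts", ["id"])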
.. change::
:tags: bug, mssql
:tickets: 109
Fixed bug whereby double quoting would be applied
to target column name during an ``sp_rename``
operation.
.. change::
:tags: bug, sqlite, mysql
:tickets: 112
transactional_ddl flag for SQLite, MySQL dialects
set to False. MySQL doesn't support it;
SQLite does, but the current pysqlite driver does not.
.. change::
:tags: feature
:tickets: 115
upgrade and downgrade commands will list the
first line of the docstring next to the
version number. Courtesy Hong Minhee.
.. change::
:tags: feature
Added --head-only option to "alembic current",
will print the current version, plus the symbol
"(head)" if this version is the head.
Courtesy Charles-Axel Dein.
.. change::
:tags: bug
:tickets: 110
Autogenerate will render additional table keyword
arguments like "mysql_engine" and others within
op.create_table().
.. change::
:tags: feature
:tickets: 108
The rendering of any construct during autogenerate
can be customized, in particular to allow special rendering
for user-defined column, constraint subclasses, using new
``render_item`` argument to
:meth:`.EnvironmentContext.configure`.
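An illustrative ``env.py`` sketch, assuming a hypothetical user-defined type ``mytypes.GUID`` and the modern ``AutogenContext`` API; ``render_item`` returns a string to override rendering, or ``False`` to fall back to the default::

    def render_item(type_, obj, autogen_context):
        if type_ == "type" and obj.__class__.__name__ == "GUID":
            # ensure the import appears in the generated script
            autogen_context.imports.add("import mytypes")
            return "mytypes.GUID()"
        # all other objects use the default rendering
        return False

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        render_item=render_item,
    )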
.. change::
:tags: bug
Fixed bug whereby create_index()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
This is the same issue that was fixed for unique
constraints in version 0.3.2.
.. change::
:tags: bug
Worked around a backwards-incompatible regression in Python3.3
regarding argparse; running "alembic" with no arguments
now yields an informative error in py3.3 as with all previous versions.
Courtesy Andrey Antukh.
.. change::
:tags: change
SQLAlchemy 0.6 is no longer supported by Alembic - minimum version is 0.7.3,
full support is as of 0.7.9.
.. change::
:tags: bug
:tickets: 104
A host of argument name changes within migration
operations for consistency. Keyword arguments
will continue to work on the old name for backwards compatibility,
however required positional arguments will not:
:meth:`.Operations.alter_column` - ``name`` -> ``new_column_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.create_index` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_index` - ``tablename`` -> ``table_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.drop_constraint` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_constraint` - ``type`` -> ``type_`` - old
name will work for backwards compatibility
.. changelog::
:version: 0.4.2
:released: Fri Jan 11 2013
.. change::
:tags: bug, autogenerate
:tickets: 99
Fixed bug where autogenerate would fail if a Column
to be added to a table made use of the ".key" parameter.
.. change::
:tags: bug, sqlite
:tickets: 98
The "implicit" constraint generated by a
type such as Boolean or Enum will not generate an
ALTER statement when run on SQlite, which does not
support ALTER for the purpose of adding/removing
constraints separate from the column def itself.
While SQLite supports adding a CHECK constraint
at the column level, SQLAlchemy would need modification
to support this.
A warning is emitted indicating this
constraint cannot be added in this scenario.
.. change::
:tags: bug
:tickets: 96
Added a workaround to setup.py to prevent
"NoneType" error from occurring when
"setup.py test" is run.
.. change::
:tags: bug
:tickets: 96
Added an append_constraint() step to each
condition within
test_autogenerate:AutogenRenderTest.test_render_fk_constraint_kwarg
if the SQLAlchemy version is less than 0.8, as ForeignKeyConstraint
does not auto-append prior to 0.8.
.. change::
:tags: feature
:tickets: 96
Added a README.unittests with instructions for running the test
suite fully.
.. changelog::
:version: 0.4.1
:released: Sun Dec 9 2012
.. change::
:tags: bug
:tickets: 92
Added support for autogenerate render of
ForeignKeyConstraint options onupdate,
ondelete, initially, and deferred.
.. change::
:tags: bug
:tickets: 94
Autogenerate will include "autoincrement=False"
in the rendered table metadata
if this flag was set to false on the source
:class:`.Column` object.
.. change::
:tags: feature
:tickets: 66
Explicit error message describing the case
when downgrade --sql is used without specifying
specific start/end versions.
.. change::
:tags: bug
:tickets: 81
Removed erroneous "emit_events" attribute
from operations.create_table() documentation.
.. change::
:tags: bug
:tickets:
Fixed the minute component in file_template
which returned the month part of the create date.
.. changelog::
:version: 0.4.0
:released: Mon Oct 01 2012
.. change::
:tags: feature
:tickets: 33
Support for tables in alternate schemas
has been added fully to all operations, as well as to
the autogenerate feature. When using autogenerate,
specifying the flag include_schemas=True to
Environment.configure() will also cause autogenerate
to scan all schemas located by Inspector.get_schema_names(),
which is supported by *some* (but not all)
SQLAlchemy dialects including Postgresql.
*Enormous* thanks to Bruno Binet for a huge effort
in implementing this feature as well as writing tests.
.. change::
:tags: feature
:tickets: 70
The command line runner has been organized
into a reusable CommandLine object, so that other
front-ends can re-use the argument parsing built
in.
.. change::
:tags: feature
:tickets: 43
Added "stdout" option to Config, provides
control over where the "print" output of commands like
"history", "init", "current" etc. are sent.
.. change::
:tags: bug
:tickets: 71
Fixed the "multidb" template which was badly out
of date. It now generates revision files using
the configuration to determine the different
upgrade_<xyz>() methods needed as well, instead of
needing to hardcode these. Huge thanks to
BryceLohr for doing the heavy lifting here.
.. change::
:tags: bug
:tickets: 72
Fixed the regexp that was checking for .py files
in the version directory to allow any .py file through.
Previously it was doing some kind of defensive checking,
probably from some early notions of how this directory
works, that was prohibiting various filename patterns
such as those which begin with numbers.
.. change::
:tags: bug
:tickets:
Fixed MySQL rendering for server_default which
didn't work if the server_default was a generated
SQL expression. Courtesy Moriyoshi Koizumi.
.. change::
:tags: feature
:tickets:
Added support for alteration of MySQL
columns that have AUTO_INCREMENT, as well as enabling
this flag. Courtesy Moriyoshi Koizumi.
.. changelog::
:version: 0.3.6
:released: Wed Aug 15 2012
.. change::
:tags: feature
:tickets: 27
Added include_symbol option to
EnvironmentContext.configure(),
specifies a callable which will include/exclude tables
in their entirety from the autogeneration process
based on name.
.. change::
:tags: feature
:tickets: 59
Added year, month, day, hour, minute, second
variables to file_template.
.. change::
:tags: feature
:tickets:
Added 'primary' to the list of constraint types
recognized for MySQL drop_constraint().
.. change::
:tags: feature
:tickets:
Added --sql argument to the "revision" command,
for the use case where the "revision_environment"
config option is being used but SQL access isn't
desired.
.. change::
:tags: bug
:tickets:
Repaired create_foreign_key() for
self-referential foreign keys, which weren't working
at all.
.. change::
:tags: bug
:tickets: 63
'alembic' command reports an informative
error message when the configuration is missing
the 'script_directory' key.
.. change::
:tags: bug
:tickets: 62
Fixes made to the constraints created/dropped
alongside so-called "schema" types such as
Boolean and Enum. The create/drop constraint logic
does not kick in when using a dialect that doesn't
use constraints for these types, such as postgresql,
even when existing_type is specified to
alter_column(). Additionally, the constraints
are not affected if existing_type is passed but
type\_ is not, i.e. there's no net change
in type.
.. change::
:tags: bug
:tickets: 66
Improved error message when specifying
non-ordered revision identifiers to cover
the case when the "higher" rev is None;
improved the message overall.
.. changelog::
:version: 0.3.5
:released: Sun Jul 08 2012
.. change::
:tags: bug
:tickets: 31
Fixed issue whereby reflected server defaults
wouldn't be quoted correctly; uses repr() now.
.. change::
:tags: bug
:tickets: 58
Fixed issue whereby when autogenerate would
render create_table() on the upgrade side for a
table that has a Boolean type, an unnecessary
CheckConstraint() would be generated.
.. change::
:tags: feature
:tickets:
Implemented SQL rendering for
CheckConstraint() within autogenerate upgrade,
including for literal SQL as well as SQL Expression
Language expressions.
.. changelog::
:version: 0.3.4
:released: Sat Jun 02 2012
.. change::
:tags: bug
:tickets:
Fixed command-line bug introduced by the
"revision_environment" feature.
.. changelog::
:version: 0.3.3
:released: Sat Jun 02 2012
.. change::
:tags: feature
:tickets:
New config argument
"revision_environment=true", causes env.py to
be run unconditionally when the "revision" command
is run, to support script.py.mako templates with
dependencies on custom "template_args".
.. change::
:tags: feature
:tickets:
Added "template_args" option to configure()
so that an env.py can add additional arguments
to the template context when running the
"revision" command. This requires either --autogenerate
or the configuration directive "revision_environment=true".
.. change::
:tags: bug
:tickets: 44
Added "type" argument to op.drop_constraint(),
and implemented full constraint drop support for
MySQL; CHECK and undefined constraint types raise an error.
MySQL needs the constraint type
in order to emit a DROP CONSTRAINT.
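For example, a migration sketch using the modern keyword spelling ``type_`` (constraint and table names hypothetical)::

    from alembic import op

    def downgrade():
        # MySQL needs the constraint type to emit DROP FOREIGN KEY
        op.drop_constraint("fk_address_user", "address", type_="foreignkey")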
.. change::
:tags: feature
:tickets: 34
Added version_table argument to
EnvironmentContext.configure(), allowing for the
configuration of the version table name.
.. change::
:tags: feature
:tickets:
Added support for "relative" migration
identifiers, i.e. "alembic upgrade +2",
"alembic downgrade -1". Courtesy
Atsushi Odagiri for this feature.
.. change::
:tags: bug
:tickets: 49
Fixed bug whereby directories inside of
the template directories, such as __pycache__
on Pypy, would mistakenly be interpreted as
files which are part of the template.
.. changelog::
:version: 0.3.2
:released: Mon Apr 30 2012
.. change::
:tags: feature
:tickets: 40
Basic support for Oracle added,
courtesy shgoh.
.. change::
:tags: feature
:tickets:
Added support for UniqueConstraint
in autogenerate, courtesy Atsushi Odagiri
.. change::
:tags: bug
:tickets:
Fixed support of schema-qualified
ForeignKey target in column alter operations,
courtesy Alexander Kolov.
.. change::
:tags: bug
:tickets:
Fixed bug whereby create_unique_constraint()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
.. changelog::
:version: 0.3.1
:released: Sat Apr 07 2012
.. change::
:tags: bug
:tickets: 41
bulk_insert() fixes:
1. bulk_insert() operation was
not working most likely since the 0.2 series
when used with an engine.
2. Repaired bulk_insert() to complete when
used against a lower-case-t table and executing
with only one set of parameters, working
around SQLAlchemy bug #2461 in this regard.
3. bulk_insert() uses "inline=True" so that phrases
like RETURNING and such don't get invoked for
single-row bulk inserts.
4. bulk_insert() will check that you're passing
a list of dictionaries in, raises TypeError
if not detected.
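A sketch of the expected calling form, passing a list of dictionaries against a lightweight table construct (names hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    accounts = sa.table(
        "accounts",
        sa.column("id", sa.Integer),
        sa.column("name", sa.String),
    )

    def upgrade():
        op.bulk_insert(
            accounts,
            [
                {"id": 1, "name": "alice"},
                {"id": 2, "name": "bob"},
            ],
        )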
.. changelog::
:version: 0.3.0
:released: Thu Apr 05 2012
.. change::
:tags: general
:tickets:
The focus of 0.3 is to clean up
and more fully document the public API of Alembic,
including better accessors on the MigrationContext
and ScriptDirectory objects. Methods that are
not considered to be public on these objects have
been underscored, and methods which should be public
have been cleaned up and documented, including:
MigrationContext.get_current_revision()
ScriptDirectory.iterate_revisions()
ScriptDirectory.get_current_head()
ScriptDirectory.get_heads()
ScriptDirectory.get_base()
ScriptDirectory.generate_revision()
.. change::
:tags: feature
:tickets:
Added a bit of autogenerate to the
public API in the form of the function
alembic.autogenerate.compare_metadata.
.. changelog::
:version: 0.2.2
:released: Mon Mar 12 2012
.. change::
:tags: feature
:tickets:
Informative error message when op.XYZ
directives are invoked at module import time.
.. change::
:tags: bug
:tickets: 35
Fixed inappropriate direct call to
util.err() and therefore sys.exit()
when Config failed to locate the
config file within library usage.
.. change::
:tags: bug
:tickets:
Autogenerate will emit CREATE TABLE
and DROP TABLE directives according to
foreign key dependency order.
.. change::
:tags: bug
:tickets:
implement 'tablename' parameter on
drop_index() as this is needed by some
backends.
.. change::
:tags: feature
:tickets:
Added execution_options parameter
to op.execute(), will call execution_options()
on the Connection before executing.
The immediate use case here is to allow
access to the new no_parameters option
in SQLAlchemy 0.7.6, which allows
some DBAPIs (psycopg2, MySQLdb) to allow
percent signs straight through without
escaping, thus providing cross-compatible
operation with DBAPI execution and
static script generation.
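An illustrative sketch passing ``no_parameters`` through (statement and table hypothetical)::

    from alembic import op

    def upgrade():
        # percent signs pass straight through unescaped on
        # DBAPIs such as psycopg2 and MySQLdb
        op.execute(
            "UPDATE notes SET body = body || ' (100% verified)'",
            execution_options={"no_parameters": True},
        )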
.. change::
:tags: bug
:tickets:
setup.py won't install argparse if on
Python 2.7/3.2
.. change::
:tags: feature
:tickets: 29
script_location can be interpreted
by pkg_resources.resource_filename(), if
it is a non-absolute URI that contains
colons. This scheme is the same
one used by Pyramid.
.. change::
:tags: feature
:tickets:
added missing support for
onupdate/ondelete flags for
ForeignKeyConstraint, courtesy Giacomo Bagnoli
.. change::
:tags: bug
:tickets: 30
fixed a regression regarding an autogenerate
error message, as well as various glitches
in the Pylons sample template. The Pylons sample
template requires that you tell it where to
get the Engine from now. Courtesy
Marcin Kuzminski.
.. change::
:tags: bug
:tickets:
drop_index() ensures a dummy column
is added when it calls "Index", as SQLAlchemy
0.7.6 will warn on an Index with no column names.
.. changelog::
:version: 0.2.1
:released: Tue Jan 31 2012
.. change::
:tags: bug
:tickets: 26
Fixed the generation of CHECK constraint,
regression from 0.2.0
.. changelog::
:version: 0.2.0
:released: Mon Jan 30 2012
.. change::
:tags: feature
:tickets: 19
API rearrangement allows everything
Alembic does to be represented by contextual
objects, including EnvironmentContext,
MigrationContext, and Operations. Other
libraries and applications can now use
things like "alembic.op" without relying
upon global configuration variables.
The rearrangement was done such that
existing migrations should be OK,
as long as they use the pattern
of "from alembic import context" and
"from alembic import op", as these
are now contextual objects, not modules.
.. change::
:tags: feature
:tickets: 24
The naming of revision files can
now be customized to be some combination
of "rev id" and "slug", the latter of which
is based on the revision message.
By default, the pattern "<rev>_<slug>"
is used for new files. New script files
should include the "revision" variable
for this to work, which is part of
the newer script.py.mako scripts.
.. change::
:tags: bug
:tickets: 25
env.py templates call
connection.close() to better support
programmatic usage of commands; use
NullPool in conjunction with create_engine()
as well so that no connection resources
remain afterwards.
.. change::
:tags: bug
:tickets: 22
fix the config.main() function to honor
the arguments passed, remove no longer used
"scripts/alembic" as setuptools creates this
for us.
.. change::
:tags: bug
:tickets:
Fixed alteration of column type on
MSSQL to not include the keyword "TYPE".
.. change::
:tags: feature
:tickets: 23
Can create alembic.config.Config
with no filename, use set_main_option()
to add values. Also added set_section_option()
which will add sections.
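An illustrative sketch of programmatic configuration (script location and URL hypothetical)::

    from alembic import command
    from alembic.config import Config

    cfg = Config()  # no .ini file required
    cfg.set_main_option("script_location", "myapp:migrations")
    cfg.set_main_option("sqlalchemy.url", "sqlite:///app.db")
    command.upgrade(cfg, "head")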
.. changelog::
:version: 0.1.1
:released: Wed Jan 04 2012
.. change::
:tags: bug
:tickets:
Clean up file write operations so that
file handles are closed.
.. change::
:tags: feature
:tickets:
PyPy is supported.
.. change::
:tags: feature
:tickets:
Python 2.5 is supported, needs
__future__.with_statement
.. change::
:tags: bug
:tickets:
Fix autogenerate so that "pass" is
generated between the two comments
if no net migrations were present.
.. change::
:tags: bug
:tickets: 16
Fix autogenerate bug that prevented
correct reflection of a foreign-key
referenced table in the list of "to remove".
.. change::
:tags: bug
:tickets: 17
Fix bug where create_table() didn't
handle self-referential foreign key
correctly
.. change::
:tags: bug
:tickets: 18
Default prefix for autogenerate
directives is "op.", matching the
mako templates.
.. change::
:tags: feature
:tickets: 18
Add alembic_module_prefix argument
to configure() to complement
sqlalchemy_module_prefix.
.. change::
:tags: bug
:tickets: 14
fix quotes not being rendered in
ForeignKeyConstraint during
autogenerate
.. changelog::
:version: 0.1.0
:released: Wed Nov 30 2011
.. change::
:tags:
:tickets:
Initial release. Status of features:
.. change::
:tags:
:tickets:
Alembic is used in at least one production
environment, but should still be considered
ALPHA LEVEL SOFTWARE as of this release,
particularly in that many features are expected
to be missing / unimplemented. Major API
changes are not anticipated but for the moment
nothing should be assumed.
The author asks that you *please* report all
issues, missing features, workarounds etc.
to the bugtracker.
.. change::
:tags:
:tickets:
Python 3 is supported and has been tested.
.. change::
:tags:
:tickets:
The "Pylons" and "MultiDB" environment templates
have not been directly tested - these should be
considered to be samples to be modified as
needed. Multiple database support itself
is well tested, however.
.. change::
:tags:
:tickets:
Postgresql and MS SQL Server environments
have been tested for several weeks in a production
environment. In particular, some involved workarounds
were implemented to allow fully-automated dropping
of default- or constraint-holding columns with
SQL Server.
.. change::
:tags:
:tickets:
MySQL support has also been implemented to a
basic degree, including accommodation of MySQL's awkward
style of modifying columns.
.. change::
:tags:
:tickets:
Other database environments not included among
those three have *not* been tested, *at all*. This
includes Firebird, Oracle, Sybase. Adding
support for these backends should be
straightforward. Please report all missing/
incorrect behaviors to the bugtracker! Patches
are welcome here but are optional - please just
indicate the exact format expected by the target
database.
.. change::
:tags:
:tickets:
SQLite, as a backend, has almost no support for
schema alterations to existing databases. The author
would strongly recommend that SQLite not be used in
a migration context - just dump your SQLite database
into an intermediary format, then dump it back
into a new schema. For dev environments, the
dev installer should be building the whole DB from
scratch. Or just use Postgresql, which is a much
better database for non-trivial schemas.
Requests for full ALTER support on SQLite should be
reported to SQLite's bug tracker at
http://www.sqlite.org/src/wiki?name=Bug+Reports,
as Alembic will not be implementing the
"rename the table to a temptable then copy the
data into a new table" workaround.
Note that Alembic will at some point offer an
extensible API so that you can implement commands
like this yourself.
.. change::
:tags:
:tickets:
Well-tested directives include add/drop table, add/drop
column, including support for SQLAlchemy "schema"
types which generate additional CHECK
constraints, i.e. Boolean, Enum. Other directives not
included here have *not* been strongly tested
in production, i.e. rename table, etc.
.. change::
:tags:
:tickets:
Both "online" and "offline" migrations, the latter
being generated SQL scripts to hand off to a DBA,
have been strongly production tested against
Postgresql and SQL Server.
.. change::
:tags:
:tickets:
Modification of column type, default status, and nullable
is functional and tested across PG, MSSQL, and MySQL,
but not yet widely tested in production usage.
.. change::
:tags:
:tickets:
Many migrations are still outright missing, i.e.
create/add sequences, etc. As a workaround,
execute() can be used for those which are missing,
though posting of tickets for new features/missing
behaviors is strongly encouraged.
.. change::
:tags:
:tickets:
Autogenerate feature is implemented and has been
tested, though only a little bit in a production setting.
In particular, detection of type and server
default changes is optional and off by default;
both can also be customized by a callable.
Both features work but can have surprises, particularly
the disparity between BIT/TINYINT and boolean,
which hasn't yet been worked around, as well as
format changes performed by the database on defaults
when it reports back. When enabled, the PG dialect
will execute the two defaults to be compared to
see if they are equivalent. Other backends may
need to do the same thing.
The autogenerate feature only generates
"candidate" commands which must be hand-tailored
in any case, so is still a useful feature and
is safe to use. Please report missing/broken features
of autogenerate! This will be a great feature and
will also improve SQLAlchemy's reflection services.
.. change::
:tags:
:tickets:
Support for non-ASCII table, column and constraint
names is mostly nonexistent. This is also a
straightforward feature add as SQLAlchemy itself
supports unicode identifiers; Alembic itself will
likely need fixes to logging, column identification
by key, etc. for full support here.
==========
Changelog
==========
.. changelog::
:version: 1.12.1
:include_notes_from: unreleased
.. changelog::
:version: 1.12.0
:released: August 31, 2023
.. change::
:tags: bug, operations
:tickets: 1300
Added support for ``op.drop_constraint()`` to support PostgreSQL
``ExcludeConstraint`` objects, as well as other constraint-like objects
that may be present in third party dialects, by resolving the ``type_``
parameter to be ``None`` for this case. Autogenerate has also been
enhanced to exclude the ``type_`` parameter from rendering within this
command when ``type_`` is ``None``. Pull request courtesy David Hills.
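An illustrative sketch (constraint and table names hypothetical); with no ``type_`` given, the parameter resolves to ``None`` and a constraint-agnostic DROP is emitted::

    from alembic import op

    def downgrade():
        # drops a PostgreSQL EXCLUDE constraint without naming its type
        op.drop_constraint("excl_booking_overlap", "bookings")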
.. change::
:tags: bug, commands
:tickets: 1299
Fixed issue where the ``revision_environment`` directive in ``alembic.ini``
was ignored by the ``alembic merge`` command, leading to issues when other
configurational elements depend upon ``env.py`` being invoked within the
command.
.. change::
:tags: bug, autogenerate
:tickets: 1302
Fixed issue where the ``ForeignKeyConstraint.match`` parameter would not be
rendered in autogenerated migrations. Pull request courtesy Asib
Kamalsada.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Change the default value of
:paramref:`.EnvironmentContext.configure.compare_type` to ``True``.
As Alembic's autogenerate for types was dramatically improved in
version 1.4 released in 2020, the type comparison feature is now much
more reliable so is now enabled by default.
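Projects that prefer the previous behavior can opt back out in ``env.py``; a minimal sketch, assuming the usual ``connection`` / ``target_metadata`` setup::

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        # restore the pre-1.12.0 default of skipping type comparison
        compare_type=False,
    )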
.. change::
:tags: feature, autogenerate
:tickets: 1275
Added new feature to the "code formatter" function which allows standalone
executable tools to be run against code, without going through the Python
interpreter. Known as the ``exec`` runner, it complements the existing
``console_scripts`` runner by allowing non-Python tools such as ``ruff`` to
be used. Pull request courtesy Mihail Milushev.
.. seealso::
:ref:`post_write_hooks_config`
.. changelog::
:version: 1.11.3
:released: August 16, 2023
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 1270
Improved autogenerate compare of expression based indexes on PostgreSQL
to produce fewer wrong detections.
.. change::
:tags: bug, autogenerate
:tickets: 1291
Fixed issue with ``NULLS NOT DISTINCT`` detection in postgresql that
would keep detecting changes in the index or unique constraint.
.. change::
:tags: bug, commands
:tickets: 1273
Added ``encoding="locale"`` setting to the use of Python's
``ConfigParser.read()``, so that a warning is not generated when using the
recently added Python feature ``PYTHONWARNDEFAULTENCODING`` specified in
:pep:`597`. The encoding is passed as the ``"locale"`` string under Python
3.10 and greater, which indicates that the system-level locale should be
used, as was the case already here. Pull request courtesy Kevin Kirsche.
.. changelog::
:version: 1.11.2
:released: August 4, 2023
.. change::
:tags: usecase, typing
:tickets: 1253
Added typing to the default script mako templates.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Added support in autogenerate for ``NULLS NOT DISTINCT`` in
the PostgreSQL dialect.
.. change::
:tags: bug
:tickets: 1261
Fixed format string logged when running a post write hook.
Pull request courtesy of Mathieu Défosse.
.. change::
:tags: feature, operations
:tickets: 151
Added parameters ``if_exists`` and ``if_not_exists`` for index operations.
Pull request courtesy of Max Adrian.
.. changelog::
:version: 1.11.1
:released: May 17, 2023
.. change::
:tags: bug, autogenerate, regression
:tickets: 1243, 1245
As Alembic 1.11.0 is considered a major release (Alembic does not use
semver, nor does its parent project SQLAlchemy; this has been
:ref:`clarified <versioning_scheme>` in the documentation), change
:ticket:`1130` modified calling signatures for most operations to consider
all optional keyword parameters to be keyword-only arguments, to match what
was always documented and generated by autogenerate. However, two of these
changes were identified as possibly problematic without a more formal
deprecation warning being emitted which were the ``table_name`` parameter
to :meth:`.Operations.drop_index`, which was generated positionally by
autogenerate prior to version 0.6.3 released in 2014, and ``type_`` in
:meth:`.Operations.drop_constraint` and
:meth:`.BatchOperations.drop_constraint`, which was documented positionally
in one example in the batch documentation.
These two signatures have been
restored to allow those particular parameters to be passed positionally. A
future change will include formal deprecation paths (with warnings) for
these arguments where they will again become keyword-only in a future
"Significant Minor" release.
.. change::
:tags: bug, typing
:tickets: 1246
Fixed typing use of :class:`~sqlalchemy.schema.Column` and other
generic SQLAlchemy classes.
.. change::
:tags: bug, typing, regression
:tickets: 1244
Restored the output type of :meth:`.Config.get_section` to include
``Dict[str, str]`` as a potential return type, which had been changed to
immutable ``Mapping[str, str]``. When a section is returned and the default
is not used, a mutable dictionary is returned.
.. changelog::
:version: 1.11.0
:released: May 15, 2023
.. change::
:tags: bug, batch
:tickets: 1237
Added placeholder classes for :class:`~.sqla.Computed` and
:class:`~.sqla.Identity` when older 1.x SQLAlchemy versions are in use,
namely prior to SQLAlchemy 1.3.11 when the :class:`~.sqla.Computed`
construct was introduced. Previously these were set to None, however this
could cause issues with certain codepaths that were using ``isinstance()``
such as one within "batch mode".
.. change::
:tags: bug, batch
:tickets: 1221
Correctly pass previously ignored arguments ``insert_before`` and
``insert_after`` in ``batch_alter_column``.
.. change::
:tags: change, py3k
:tickets: 1130
Argument signatures of Alembic operations now enforce keyword-only
arguments as passed as keyword and not positionally, such as
:paramref:`.Operations.create_table.schema`,
:paramref:`.Operations.add_column.type_`, etc.
.. change::
:tags: bug, postgresql
:tickets: 1230
Fix autogenerate issue with PostgreSQL :class:`.ExcludeConstraint`
that included sqlalchemy functions. The function text was previously
rendered as a plain string without surrounding with ``text()``.
.. change::
:tags: bug, mysql, regression
:tickets: 1240
Fixed regression caused by :ticket:`1166` released in version 1.10.0 which
caused MySQL unique constraints with multiple columns to not compare
correctly within autogenerate, due to different sorting rules on unique
constraints vs. indexes, which in MySQL are shared constructs.
.. change::
:tags: misc
:tickets: 1220
Update code snippets within docstrings to use ``black`` code formatting.
Pull request courtesy of James Addison.
.. change::
:tags: bug, typing
:tickets: 1093
Updated stub generator script to also add stubs method definitions for the
:class:`.Operations` class and the :class:`.BatchOperations` class obtained
from :meth:`.Operations.batch_alter_table`. As part of this change, the
class hierarchy of :class:`.Operations` and :class:`.BatchOperations` has
been rearranged on top of a common base class :class:`.AbstractOperations`
in order to type correctly, as :class:`.BatchOperations` uses different
method signatures for operations than :class:`.Operations`.
.. change::
:tags: bug, typing
Repaired the return signatures for :class:`.Operations` that mostly
return ``None``, and were erroneously referring to ``Optional[Table]``
in many cases.
.. change::
:tags: usecase, commands
:tickets: 1109
Added quiet option to the command line, using the ``-q/--quiet``
option. This flag will prevent alembic from logging anything
to stdout.
.. change::
:tags: bug, autogenerate
:tickets: 1178
Modified the autogenerate implementation for comparing "server default"
values from user-defined metadata to not apply any quoting to the value
before comparing it to the server-reported default, except for within
dialect-specific routines as needed. This change will affect the format of
the server default as passed to the
:paramref:`.EnvironmentContext.configure.compare_server_default` hook, as
well as for third party dialects that implement a custom
``compare_server_default`` hook in their alembic impl, to be passed "as is"
and not including additional quoting. Custom implementations which rely
on this quoting should adjust their approach based on observed formatting.
.. change::
:tags: bug, api, autogenerate
:tickets: 1235
Fixed issue where :func:`.autogenerate.render_python_code` function did not
provide a default value for the ``user_module_prefix`` variable, leading to
``NoneType`` errors when autogenerate structures included user-defined
types. Added new parameter
:paramref:`.autogenerate.render_python_code.user_module_prefix` to allow
this to be set as well as to default to ``None``. Pull request courtesy
tangkikodo.
.. change::
:tags: usecase, asyncio
:tickets: 1231
Added :meth:`.AbstractOperations.run_async` to the operation module to
allow running async functions in the ``upgrade`` or ``downgrade`` migration
function when running alembic using an async dialect. This function will
receive as first argument an
:class:`~sqlalchemy.ext.asyncio.AsyncConnection` sharing the transaction
used in the migration context.
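An illustrative migration sketch (table name hypothetical); this is only valid when the migration runs under an async dialect::

    import sqlalchemy as sa
    from sqlalchemy.ext.asyncio import AsyncConnection
    from alembic import op

    async def set_flags(connection: AsyncConnection) -> None:
        await connection.execute(sa.text("UPDATE widgets SET flag = true"))

    def upgrade() -> None:
        # receives the AsyncConnection sharing the migration transaction
        op.run_async(set_flags)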
.. changelog::
:version: 1.10.4
:released: April 24, 2023
.. change::
:tags: postgresql, autogenerate, feature
:tickets: 1213
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL sort option, such as ``ASC`` or ``NULLS FIRST``.
The sort options are correctly detected only when defined using the
sqlalchemy modifier functions, such as ``asc()`` or ``nulls_first()``,
or the equivalent methods.
Passing sort options inside the ``postgresql_ops`` dict is not supported.
.. change::
:tags: bug, operations
:tickets: 1215
Fixed issue where using a directive such as ``op.create_foreign_key()`` to
create a self-referential constraint on a single table where the same
column was present on both sides (e.g. within a composite foreign key)
would produce an error under SQLAlchemy 2.0 and a warning under SQLAlchemy
1.4 indicating that a duplicate column was being added to a table.
.. changelog::
:version: 1.10.3
:released: April 5, 2023
.. change::
:tags: bug, typing
:tickets: 1191, 1201
Fixed various typing issues observed with pyright, including issues
involving the combination of :class:`.Function` and
:meth:`.MigrationContext.begin_transaction`.
.. change::
:tags: bug, autogenerate
:tickets: 1212
Fixed error raised by alembic when running autogenerate after removing
a function based index.
.. changelog::
:version: 1.10.2
:released: March 8, 2023
.. change::
:tags: bug, ops
:tickets: 1196
Fixed regression where Alembic would not run with older SQLAlchemy 1.3
versions prior to 1.3.24 due to a missing symbol. Workarounds have been
applied for older 1.3 versions.
.. changelog::
:version: 1.10.1
:released: March 6, 2023
.. change::
:tags: bug, postgresql
:tickets: 1184
Fixed issue regarding PostgreSQL :class:`.ExcludeConstraint`, where
constraint elements which made use of :func:`.literal_column` could not be
rendered for autogenerate. Additionally, using SQLAlchemy 2.0.5 or greater,
:func:`.text()` constructs are also supported within PostgreSQL
:class:`.ExcludeConstraint` objects for autogenerate render. Pull request
courtesy Jan Katins.
.. change::
:tags: bug, batch, regression
:tickets: 1195
Fixed regression for 1.10.0 where :class:`.Constraint` objects were
suddenly required to have non-None name fields when using batch mode, which
was not previously a requirement.
.. changelog::
:version: 1.10.0
:released: March 5, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1166
Fixed issue in index detection where autogenerate change detection would
consider indexes with the same columns but with different order as equal,
while in general they are not equivalent in how a database will use them.
.. change::
:tags: feature, revisioning
:tickets: 760
Recursive traversal of revision files in a particular revision directory is
now supported, by indicating ``recursive_version_locations = true`` in
alembic.ini. Pull request courtesy ostr00000.
.. change::
:tags: bug, autogenerate, sqlite
:tickets: 1165
Fixed issue where indexes on SQLite which include SQL expressions would not
compare correctly, generating false positives under autogenerate. These
indexes are now skipped, generating a warning, in the same way that
expression-based indexes on PostgreSQL are skipped and generate warnings
when SQLAlchemy 1.x installations are in use. Note that reflection of
SQLite expression-based indexes continues to not yet be supported under
SQLAlchemy 2.0, even though PostgreSQL expression-based indexes have now
been implemented.
.. change::
:tags: bug, mssql
:tickets: 1187
Properly escape constraint name on SQL Server when dropping
a column while specifying ``mssql_drop_default=True`` or
``mssql_drop_check=True`` or ``mssql_drop_foreign_key=True``.
.. change::
:tags: usecase, autogenerate, postgresql
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL expressions, when using SQLAlchemy 2.0; the previous warning
that such indexes were skipped is removed when the new functionality
is in use. When using SQLAlchemy versions prior to the 2.0 series,
the indexes continue to be skipped with a warning.
.. changelog::
:version: 1.9.4
:released: February 16, 2023
.. change::
:tags: bug, mssql
:tickets: 1177
Ongoing fixes for SQL Server server default comparisons under autogenerate,
adjusting for SQL Server's collapsing of whitespace between SQL function
arguments when reporting on a function-based server default, as well as its
arbitrary addition of parenthesis within arguments; the approach has now
been made more aggressive by stripping the two default strings to compare
of all whitespace, parenthesis, and quoting characters.
.. change::
:tags: bug, postgresql
Fixed PostgreSQL server default comparison to handle SQL expressions
sent as ``text()`` constructs, such as ``text("substring('name', 1, 3)")``,
which previously would raise errors when attempting to run a server-based
comparison.
.. change::
:tags: bug, autogenerate
:tickets: 1180
Removed a mis-use of the
:paramref:`.EnvironmentContext.configure.render_item` callable where the
"server_default" renderer would be erroneously used within the server
default comparison process, which is working against SQL expressions, not
Python code.
.. change::
:tags: bug, commands
Fixed regression introduced in 1.7.0 where the "config" object passed to
the template context when running the :func:`.merge` command
programmatically failed to be correctly populated. Pull request courtesy
Brendan Gann.
.. changelog::
:version: 1.9.3
:released: February 7, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1167
Fixed issue where rendering of user-defined types that then went onto use
the ``.with_variant()`` method would fail to render, if using SQLAlchemy
2.0's version of variants.
.. changelog::
:version: 1.9.2
:released: January 14, 2023
.. change::
:tags: bug, typing
:tickets: 1146, 1147
Fixed typing definitions for :meth:`.EnvironmentContext.get_x_argument`.
Typing stubs are now generated for overloaded proxied methods such as
:meth:`.EnvironmentContext.get_x_argument`.
.. change::
:tags: bug, autogenerate
:tickets: 1152
Fixed regression caused by :ticket:`1145` where the string transformations
applied to server defaults caused expressions such as ``(getdate())`` to no
longer compare as equivalent on SQL Server and other backends.
.. changelog::
:version: 1.9.1
:released: December 23, 2022
.. change::
:tags: bug, autogenerate
:tickets: 1145
Fixed issue where server default compare would not work for string defaults
that contained backslashes, due to mis-rendering of these values when
comparing their contents.
.. change::
:tags: bug, oracle
Implemented basic server default comparison for the Oracle backend;
previously, Oracle's formatting of reflected defaults prevented any
matches from occurring.
.. change::
:tags: bug, sqlite
Adjusted SQLite's compare server default implementation to better handle
defaults with or without parens around them, from both the reflected and
the local metadata side.
.. change::
:tags: bug, mssql
Adjusted SQL Server's compare server default implementation to better
handle defaults with or without parens around them, from both the reflected
and the local metadata side.
.. changelog::
:version: 1.9.0
:released: December 15, 2022
.. change::
:tags: feature, commands
:tickets: 724
Added new Alembic command ``alembic check``. This performs the widely
requested feature of running an "autogenerate" comparison between the
current database and the :class:`.MetaData` that's currently set up for
autogenerate, returning an error code if the two do not match, based on
current autogenerate settings. Pull request courtesy Nathan Louie.
.. seealso::
:ref:`alembic_check`
.. change::
:tags: bug, tests
Fixed issue in tox.ini file where changes in the tox 4.0 series to the
format of "passenv" caused tox to not function correctly, in particular
raising an error as of tox 4.0.6.
.. change::
:tags: bug, typing
:tickets: 1110
Fixed typing issue where :paramref:`.revision.process_revision_directives`
was not fully typed; additionally ensured all ``Callable`` and ``Dict``
arguments to :meth:`.EnvironmentContext.configure` include parameters in
the typing declaration.
Additionally updated the codebase for Mypy 0.990 compliance.
.. changelog::
:version: 1.8.1
:released: July 13, 2022
.. change::
:tags: bug, sqlite
:tickets: 1065
Fixed bug where the SQLite implementation of
:meth:`.Operations.rename_table` would render an explicit schema name for
both the old and new table name, which, while standard ALTER syntax,
is not accepted by SQLite, which doesn't support a rename across
schemas. In particular, the syntax issue would prevent batch mode from
working for SQLite databases that made use of attached databases (which are
treated as "schemas" in SQLAlchemy).
.. change::
:tags: bug, batch
:tickets: 1021
Added an error raise for the condition where
:meth:`.Operations.batch_alter_table` is used in ``--sql`` mode, where the
operation requires table reflection, as is the case when running against
SQLite without giving it a fixed ``Table`` object. Previously the operation
would fail with an internal error. To get a "move and copy" batch
operation as a SQL script without connecting to a database,
a ``Table`` object should be passed to the
:paramref:`.Operations.batch_alter_table.copy_from` parameter so that
reflection may be skipped.
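A sketch of the ``copy_from`` form (table definition hypothetical), which allows ``--sql`` mode to proceed without reflection::

    import sqlalchemy as sa
    from alembic import op

    # a fixed Table definition so batch mode can skip reflection
    widgets = sa.Table(
        "widgets",
        sa.MetaData(),
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )

    def upgrade():
        with op.batch_alter_table("widgets", copy_from=widgets) as batch_op:
            batch_op.add_column(sa.Column("flag", sa.Boolean()))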
.. changelog::
:version: 1.8.0
:released: May 31, 2022
.. change::
:tags: feature, typing
:tickets: 764
:pep:`484` typing annotations have been added to the ``env.py`` and
revision template files within migration templates. Pull request by Nikita
Sobolev.
.. change::
:tags: usecase, operations
:tickets: 1037
The ``op.drop_table()`` operation directive will now trigger the
``before_drop()`` and ``after_drop()`` DDL event hooks at the table level,
which is similar to how the ``before_create()`` and ``after_create()``
hooks are triggered by the ``op.create_table()`` directive. Note that as
``op.drop_table()`` accepts only a table name and optional schema name, the
``Table`` object received by the event will not have any information within
it other than the table name and schema name.
.. change::
:tags: installation, changed
:tickets: 1025
Alembic 1.8 now supports Python 3.7 and above.
.. change::
:tags: changed, environment
:tickets: 987
The "Pylons" environment template has been removed as of Alembic 1.8. This
template was based on the very old pre-Pyramid Pylons web framework which
has been long superseded by Pyramid.
.. change::
:tags: bug, revisioning
:tickets: 1026
Fixed issue where a downgrade using a relative revision would
fail in case of multiple branches with a single effectively
head due to interdependencies between revisions.
.. change::
:tags: usecase, commands
:tickets: 1027
Added new token ``epoch`` to the ``file_template`` option, which will
populate the integer epoch as determined by ``int(create_date.timestamp())``.
Pull request courtesy Caio Carvalho.
.. change::
:tags: bug, batch
:tickets: 1034
Fixed issue in batch mode where CREATE INDEX would not use a new column
name in the case of a column rename.
.. changelog::
:version: 1.7.7
:released: March 14, 2022
.. change::
:tags: bug, operations
:tickets: 1004
Fixed issue where using :meth:`.Operations.create_table` in conjunction
with a :class:`.CheckConstraint` that referred to table-bound
:class:`.Column` objects rather than string expressions would be added to
the parent table potentially multiple times, resulting in an incorrect DDL
sequence. Pull request courtesy Nicolas CANIART.
.. change::
:tags: bug, environment
:tickets: 986
The ``logging.fileConfig()`` line in ``env.py`` templates, which is used
to setup Python logging for the migration run, is now conditional on
:attr:`.Config.config_file_name` not being ``None``. Otherwise, the line
is skipped as there is no default logging configuration present.
.. change::
:tags: bug, mssql
:tickets: 977
Fixed bug where an :meth:`.Operations.alter_column` operation would change
a "NOT NULL" column to "NULL" by emitting an ALTER COLUMN statement that
did not specify "NOT NULL". (In the absence of "NOT NULL" T-SQL was
implicitly assuming "NULL"). An :meth:`.Operations.alter_column` operation
that specifies :paramref:`.Operations.alter_column.type` should also
include either :paramref:`.Operations.alter_column.nullable` or
:paramref:`.Operations.alter_column.existing_nullable` to inform Alembic as
to whether the emitted DDL should include "NULL" or "NOT NULL"; a warning
is now emitted if this is missing under this scenario.
.. changelog::
:version: 1.7.6
:released: February 1, 2022
.. change::
:tags: bug, batch, regression
:tickets: 982
Fixed regression where usage of a ``with_variant()`` datatype in
conjunction with the ``existing_type`` option of ``op.alter_column()``
under batch mode would lead to an internal exception.
.. change::
:tags: usecase, commands
:tickets: 964
Add a new command ``alembic ensure_version``, which will ensure that the
Alembic version table is present in the target database, but does not
alter its contents. Pull request courtesy Kai Mueller.
.. change::
:tags: bug, autogenerate
Implemented support for recognizing and rendering SQLAlchemy "variant"
types going forward into SQLAlchemy 2.0, where the architecture of
"variant" datatypes will be changing.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 968
Added a rule to the MySQL impl so that the translation between JSON /
LONGTEXT is accommodated by autogenerate, treating LONGTEXT from the server
as equivalent to an existing JSON in the model.
.. change::
:tags: mssql
Removed a warning raised by SQLAlchemy when dropping constraints
on MSSQL regarding statement caching.
.. changelog::
:version: 1.7.5
:released: November 11, 2021
.. change::
:tags: bug, tests
Adjustments to the test suite to accommodate for error message changes
occurring as of SQLAlchemy 1.4.27.
.. changelog::
:version: 1.7.4
:released: October 6, 2021
.. change::
:tags: bug, regression
:tickets: 934
Fixed a regression that prevented the use of post write hooks
on Python versions lower than 3.9.
.. change::
:tags: bug, environment
:tickets: 944
Fixed issue where the :meth:`.MigrationContext.autocommit_block` feature
would fail to function when using a SQLAlchemy engine using 2.0 future
mode.
.. changelog::
:version: 1.7.3
:released: September 17, 2021
.. change::
:tags: bug, mypy
:tickets: 914
Fixed type annotations for the "constraint_name" argument of operations
``create_primary_key()``, ``create_foreign_key()``. Pull request courtesy
TilmanK.
.. changelog::
:version: 1.7.2
:released: September 17, 2021
.. change::
:tags: bug, typing
:tickets: 900
Added missing attributes from context stubs.
.. change::
:tags: bug, mypy
:tickets: 897
Fixed an import in one of the .pyi files that was triggering an
assertion error in some versions of mypy.
.. change::
:tags: bug, regression, ops
:tickets: 920
Fixed issue where registration of custom ops was prone to failure due to
the registration process running ``exec()`` on generated code that as of
the 1.7 series includes pep-484 annotations, which in the case of end user
code would result in name resolution errors when the exec occurs. The logic
in question has been altered so that the annotations are rendered as
forward references so that the ``exec()`` can proceed.
.. changelog::
:version: 1.7.1
:released: August 30, 2021
.. change::
:tags: bug, installation
:tickets: 893
Corrected "universal wheel" directive in setup.cfg so that building a wheel
does not target Python 2. The PyPi files index for 1.7.0 was corrected
manually. Pull request courtesy layday.
.. change::
:tags: bug, pep484
:tickets: 895
Fixed issue in generated .pyi files where default values for ``Optional``
arguments were missing, thereby causing mypy to consider them as required.
.. change::
:tags: bug, regression, batch
:tickets: 896
Fixed regression in batch mode due to :ticket:`883` where the "auto" mode
of batch would fail to accommodate any additional migration directives
beyond encountering an ``add_column()`` directive, due to a mis-application
of the conditional logic that was added as part of this change, leading to
"recreate" mode not being used in cases where it is required for SQLite
such as for unique constraints.
.. changelog::
:version: 1.7.0
:released: August 30, 2021
.. change::
:tags: bug, operations
:tickets: 879
Fixed regression due to :ticket:`803` where the ``.info`` and ``.comment``
attributes of ``Table`` would be lost inside of the :class:`.DropTableOp`
class, which when "reversed" into a :class:`.CreateTableOp` would then have
lost these elements. Pull request courtesy Nicolas CANIART.
.. change::
:tags: feature, environment
:tickets: 842
Enhance ``version_locations`` parsing to handle paths containing spaces.
The new configuration option ``version_path_separator`` specifies the
character to use when splitting the ``version_locations`` string. The
default for new configurations is ``version_path_separator = os``,
which will use ``os.pathsep`` (e.g., ``;`` on Windows).
.. change::
:tags: installation, changed
Alembic 1.7 now supports Python 3.6 and above; support for prior versions
including Python 2.7 has been dropped.
.. change::
:tags: bug, sqlite, batch
:tickets: 883
Batch "auto" mode will now select for "recreate" if the ``add_column()``
operation is used on SQLite, and the column itself meets the criteria for
SQLite where ADD COLUMN is not allowed, in this case a functional or
parenthesized SQL expression or a ``Computed`` (i.e. generated) column.
.. change::
:tags: changed, installation
:tickets: 674
Make the ``python-dateutil`` library an optional dependency.
This library is only required if the ``timezone`` option
is used in the Alembic configuration.
An extra require named ``tz`` is available with
``pip install alembic[tz]`` to install it.
.. change::
:tags: bug, commands
:tickets: 856
Re-implemented the ``python-editor`` dependency as a small internal
function to avoid the need for external dependencies.
.. change::
:tags: usecase, batch
:tickets: 884
Named CHECK constraints are now supported by batch mode, and will
automatically be part of the recreated table assuming they are named. They
also can be explicitly dropped using ``op.drop_constraint()``. For
"unnamed" CHECK constraints, these are still skipped as they cannot be
distinguished from the CHECK constraints that are generated by the
``Boolean`` and ``Enum`` datatypes.
Note that this change may require adjustments to migrations that drop or
rename columns which feature an associated named check constraint, such
that an additional ``op.drop_constraint()`` directive should be added for
that named constraint as there will no longer be an associated column
for it; for the ``Boolean`` and ``Enum`` datatypes, an ``existing_type``
keyword may be passed to ``BatchOperations.drop_constraint`` as well.
.. seealso::
:ref:`batch_schematype_constraints`
:ref:`batch_check_constraints`
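A minimal sketch of the new behavior, using hypothetical table and
constraint names; the ``existing_type`` form applies to constraints
generated by the ``Boolean`` and ``Enum`` datatypes::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        with op.batch_alter_table("account") as batch_op:
            # drop the named CHECK constraint tied to the column before
            # dropping the column itself
            batch_op.drop_constraint("ck_account_balance_positive", type_="check")
            batch_op.drop_column("balance")
            # for a Boolean-generated constraint, pass existing_type
            batch_op.drop_constraint(
                "ck_account_is_active", type_="check", existing_type=sa.Boolean()
            )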
.. change::
:tags: changed, installation
:tickets: 885
The dependency on ``pkg_resources`` which is part of ``setuptools`` has
been removed, so there is no longer any runtime dependency on
``setuptools``. The functionality has been replaced with
``importlib.metadata`` and ``importlib.resources`` which are both part of
Python std.lib, or via the PyPI dependency ``importlib-metadata`` for
Python version < 3.8 and ``importlib-resources`` for Python version < 3.9
(while importlib.resources was added to Python in 3.7, it did not include
the "files" API until 3.9).
.. change::
:tags: feature, tests
:tickets: 855
Created a "test suite" similar to the one for SQLAlchemy, allowing
developers of third-party dialects to test their code against a set of
Alembic tests that have been specially selected to exercise
back-end database operations. At the time of release,
third-party dialects that have adopted the Alembic test suite to verify
compatibility include
`CockroachDB <https://pypi.org/project/sqlalchemy-cockroachdb/>`_ and
`SAP ASE (Sybase) <https://pypi.org/project/sqlalchemy-sybase/>`_.
.. change::
:tags: bug, postgresql
:tickets: 874
Fixed issue where usage of the PostgreSQL ``postgresql_include`` option
within a :meth:`.Operations.create_index` would raise a KeyError, as the
additional column(s) need to be added to the table object used by the
construct internally. The issue is equivalent to the SQL Server issue fixed
in :ticket:`513`. Pull request courtesy Steven Bronson.
.. change::
:tags: feature, general
pep-484 type annotations have been added throughout the library.
Additionally, stub .pyi files have been added for the "dynamically"
generated Alembic modules ``alembic.op`` and ``alembic.config``, which
include complete function signatures and docstrings, so that the functions
in these namespaces will have both IDE support (vscode, pycharm, etc) as
well as support for typing tools like Mypy. The files themselves are
statically generated from their source functions within the source tree.
.. changelog::
:version: 1.6.5
:released: May 27, 2021
.. change::
:tags: bug, autogenerate
:tickets: 849
Fixed issue where dialect-specific keyword arguments within the
:class:`.DropIndex` operation directive would not render in the
autogenerated Python code. As support was improved for adding dialect
specific arguments to directives as part of :ticket:`803`, in particular
arguments such as "postgresql_concurrently" which apply to the actual
create/drop of the index, support was needed for these to render even in a
drop index operation. Pull request courtesy Jet Zhou.
.. changelog::
:version: 1.6.4
:released: May 24, 2021
.. change::
:tags: bug, regression, op directives
:tickets: 848
Fixed regression caused by the just-released fix for :ticket:`844`, which
scaled back the filter for ``unique=True/index=True`` too far such that these
directives no longer worked for the ``op.create_table()`` op; this has been fixed.
.. changelog::
:version: 1.6.3
:released: May 21, 2021
.. change::
:tags: bug, regression, autogenerate
:tickets: 844
Fixed 1.6-series regression where ``UniqueConstraint`` and to a lesser
extent ``Index`` objects would be doubled up in the generated model when
the ``unique=True`` / ``index=True`` flags were used.
.. change::
:tags: bug, autogenerate
:tickets: 839
Fixed a bug where paths defined in post-write hook options
would be wrongly escaped in non posix environment (Windows).
.. change::
:tags: bug, regression, versioning
:tickets: 843
Fixed regression where a revision file that contained its own down revision
as a dependency would cause an endless loop in the traversal logic.
.. changelog::
:version: 1.6.2
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 839
Fixed additional regression nearly the same as that of :ticket:`838` just
released in 1.6.1 but within a slightly different codepath, where "alembic
downgrade head" (or equivalent) would fail instead of iterating no
revisions.
.. changelog::
:version: 1.6.1
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 838
Fixed regression in new revisioning traversal where "alembic downgrade
base" would fail if the database itself were clean and unversioned;
additionally repairs the case where downgrade would fail if attempting
to downgrade to the current head that is already present.
.. changelog::
:version: 1.6.0
:released: May 3, 2021
.. change::
:tags: bug, autogenerate
:tickets: 803
Refactored the implementation of :class:`.MigrateOperation` constructs such
as :class:`.CreateIndexOp`, :class:`.CreateTableOp`, etc. so that they no
longer rely upon maintaining a persistent version of each schema object
internally; instead, the state variables of each operation object will be
used to produce the corresponding construct when the operation is invoked.
The rationale is so that environments which make use of
operation-manipulation schemes such as those discussed in
:ref:`autogen_rewriter` are better supported, allowing end-user code to
manipulate the public attributes of these objects which will then be
expressed in the final output, an example is
``some_create_index_op.kw["postgresql_concurrently"] = True``.
Previously, these objects when generated from autogenerate would typically
hold onto the original, reflected element internally without honoring the
other state variables of each construct, preventing the public API from
working.
.. change::
:tags: bug, environment
:tickets: 829
Fixed regression caused by the SQLAlchemy 1.4/2.0 compatibility switch
where calling ``.rollback()`` or ``.commit()`` explicitly within the
``context.begin_transaction()`` context manager would cause it to fail when
the block ended, as it did not expect that the transaction was manually
closed.
.. change::
:tags: bug, autogenerate
:tickets: 827
Improved the rendering of ``op.add_column()`` operations when adding
multiple columns to an existing table, so that the order of these
statements matches the order in which the columns were declared in the
application's table metadata. Previously the added columns were being
sorted alphabetically.
.. change::
:tags: feature, autogenerate
:tickets: 819
Fixed the documentation regarding the default command-line argument
position of the revision script filename within the post-write hook
arguments. Implemented a ``REVISION_SCRIPT_FILENAME`` token, enabling the
position to be changed. Switched from ``str.split()`` to ``shlex.split()``
for more robust command-line argument parsing.
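For instance, assuming a hook named ``black`` is already configured, the
token may now be placed explicitly within the options string rather than
relying on the default trailing position::

    black.options = -l 79 REVISION_SCRIPT_FILENAME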
.. change::
:tags: feature
:tickets: 822
Implemented a ``.cwd`` (current working directory) suboption for
post-write hooks (of type ``console_scripts``). This is useful for tools
like pre-commit, which rely on the working directory to locate the
necessary config files. Added pre-commit as an example to the
documentation. Minor change: renamed some variables from ticket #819 to
improve readability.
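A hypothetical pre-commit hook configuration using the new suboption might
look like the following (the ``pre_commit.cwd`` key follows the per-hook
option naming pattern and is shown for illustration)::

    [post_write_hooks]
    hooks = pre_commit
    pre_commit.type = console_scripts
    pre_commit.entrypoint = pre-commit
    pre_commit.options = run --files REVISION_SCRIPT_FILENAME
    pre_commit.cwd = %(here)s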
.. change::
:tags: bug, versioning
:tickets: 765, 464
The algorithm used for calculating downgrades/upgrades/iterating
revisions has been rewritten, to resolve ongoing issues of branches
not being handled consistently particularly within downgrade operations,
as well as for overall clarity and maintainability. This change includes
that a deprecation warning is emitted if an ambiguous command such
as "downgrade -1" when multiple heads are present is given.
In particular, the change implements a long-requested use case of allowing
downgrades of a single branch to a branchpoint.
Huge thanks to Simon Bowly for their impressive efforts in successfully
tackling this very difficult problem.
.. change::
:tags: bug, batch
:tickets: 799
Added missing ``batch_op.create_table_comment()``,
``batch_op.drop_table_comment()`` directives to batch ops.
.. changelog::
:version: 1.5.8
:released: March 23, 2021
.. change::
:tags: bug, environment
:tickets: 816
Fixed regression caused by SQLAlchemy 1.4 where the "alembic current"
command would fail due to changes in the ``URL`` object.
.. changelog::
:version: 1.5.7
:released: March 11, 2021
.. change::
:tags: bug, autogenerate
:tickets: 813
Adjusted the recently added
:paramref:`.EnvironmentContext.configure.include_name` hook to accommodate
for additional object types such as "views" that don't have a parent table,
to support third party recipes and extensions. Pull request courtesy Oliver
Rice.
.. changelog::
:version: 1.5.6
:released: March 5, 2021
.. change::
:tags: bug, mssql, operations
:tickets: 812
Fixed bug where the "existing_type" parameter, which the MSSQL dialect
requires in order to change the nullability of a column in the absence of
also changing the column type, would cause an ALTER COLUMN operation to
incorrectly render a second ALTER statement without the nullability if a
new type were also present, as the MSSQL-specific contract did not
anticipate all three of "nullability", ``"type_"`` and "existing_type" being
sent at the same time.
.. change::
:tags: template
:tickets: 805
Add async template to Alembic to bootstrap environments that use
async DBAPI. Updated the cookbook to include a migration guide
on how to adapt an existing environment for use with DBAPI drivers.
.. changelog::
:version: 1.5.5
:released: February 20, 2021
.. change::
:tags: bug
Adjusted the use of SQLAlchemy's ".copy()" internals to use "._copy()"
for version 1.4.0, as this method is being renamed.
.. change::
:tags: bug, environment
:tickets: 797
Added new config file option ``prepend_sys_path``, which is a series of
paths that will be prepended to sys.path; the default value in newly
generated alembic.ini files is ".". This fixes a long-standing issue
where for some reason running the alembic command line would not place the
local "." path in sys.path, meaning an application locally present in "."
and importable through normal channels, e.g. python interpreter, pytest,
etc. would not be located by Alembic, even though the ``env.py`` file is
loaded relative to the current path when ``alembic.ini`` contains a
relative path. To enable for existing installations, add the option to the
alembic.ini file as follows::
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
.. seealso::
:ref:`installation` - updated documentation reflecting that local
installation of the project is not necessary if running the Alembic CLI
from the local path.
.. changelog::
:version: 1.5.4
:released: February 3, 2021
.. change::
:tags: bug, versioning
:tickets: 789
Fixed bug in versioning model where a downgrade across a revision with a
dependency on another branch, yet an ancestor is also dependent on that
branch, would produce an erroneous state in the alembic_version table,
making upgrades impossible without manually repairing the table.
.. changelog::
:version: 1.5.3
:released: January 29, 2021
.. change::
:tags: bug, autogenerate
:tickets: 786
Changed the default ordering of "CREATE" and "DROP" statements indexes and
unique constraints within the autogenerate process, so that for example in
an upgrade() operation, a particular index or constraint that is to be
replaced such as for a casing convention change will not produce any naming
conflicts. For foreign key constraint objects, this is already how
constraints are ordered, and for table objects, users would normally want
to use :meth:`.Operations.rename_table` in any case.
.. change::
:tags: bug, autogenerate, mssql
:tickets: 787
Fixed assorted autogenerate issues with SQL Server:
* ignore default reflected identity on primary_key columns
* improve server default comparison
.. change::
:tags: bug, mysql, autogenerate
:tickets: 788
Fixed issue where autogenerate rendering of ``op.alter_column()`` would
fail to include MySQL ``existing_nullable=False`` if the column were part
of a primary key constraint within the table metadata.
.. changelog::
:version: 1.5.2
:released: January 20, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 784
Fixed regression where new "loop detection" feature introduced in
:ticket:`757` produced false positives for revision names that have
overlapping substrings between revision number and down revision and/or
dependency, if the downrev/dependency were not in sequence form.
.. change::
:tags: bug, environment
:tickets: 782
Fixed regression where Alembic would fail to create a transaction properly
if the :class:`sqlalchemy.engine.Connection` were a so-called "branched"
connection, that is, one where the ``.connect()`` method had been called to
create a "sub" connection.
.. changelog::
:version: 1.5.1
:released: January 19, 2021
.. change::
:tags: bug, installation, commands
:tickets: 780
Fixed installation issue where the "templates" directory was not being
installed, preventing commands like "list_templates" and "init" from
working.
.. changelog::
:version: 1.5.0
:released: January 18, 2021
.. change::
:tags: usecase, operations
:tickets: 730
Added support for rendering of "identity" elements on
:class:`.Column` objects, supported in SQLAlchemy via
the :class:`.Identity` element introduced in version 1.4.
Adding columns with identity is supported on PostgreSQL,
MSSQL and Oracle. Changing the identity options or removing
it is supported only on PostgreSQL and Oracle.
.. change::
:tags: changed, environment
To accommodate SQLAlchemy 1.4 and 2.0, the migration model now no longer
assumes that the SQLAlchemy Connection will autocommit an individual
operation. This essentially means that for databases that use
non-transactional DDL (pysqlite current driver behavior, MySQL), there is
still a BEGIN/COMMIT block that will surround each individual migration.
Databases that support transactional DDL should continue to have the
same flow, either per migration or per-entire run, depending on the
value of the :paramref:`.Environment.configure.transaction_per_migration`
flag.
.. change::
:tags: changed, environment
A :class:`.CommandError` is raised if a ``sqlalchemy.engine.Engine`` is
passed to the :meth:`.MigrationContext.configure` method instead of a
``sqlalchemy.engine.Connection`` object. Previously, this would be a
warning only.
.. change::
:tags: bug, operations
:tickets: 753
Modified the ``add_column()`` operation such that the ``Column`` object in
use is shallow copied to a new instance if that ``Column`` is already
attached to a ``table()`` or ``Table``. This accommodates for the change
made in SQLAlchemy issue #5618 which prohibits a ``Column`` from being
associated with multiple ``table()`` objects. This resumes support for
using a ``Column`` inside of an Alembic operation that already refers to a
parent ``table()`` or ``Table`` as well as allows operation objects just
autogenerated to work.
.. change::
:tags: feature, autogenerate
:tickets: 650
Added new hook :paramref:`.EnvironmentContext.configure.include_name`,
which complements the
:paramref:`.EnvironmentContext.configure.include_object` hook by providing
a means of preventing objects of a certain name from being autogenerated
**before** the SQLAlchemy reflection process takes place, and notably
includes explicit support for passing each schema name when
:paramref:`.EnvironmentContext.configure.include_schemas` is set to True.
This is most important especially for environments that make use of
:paramref:`.EnvironmentContext.configure.include_schemas` where schemas are
actually databases (e.g. MySQL) in order to prevent reflection sweeps of
the entire server.
.. seealso::
:ref:`autogenerate_include_hooks` - new documentation section
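A minimal ``env.py`` sketch, with hypothetical schema names, that limits
the reflection sweep when ``include_schemas`` is enabled::

    def include_name(name, type_, parent_names):
        if type_ == "schema":
            # reflect objects only from these schemas / databases
            return name in ["public", "analytics"]
        return True

    context.configure(
        # ...
        include_schemas=True,
        include_name=include_name,
    )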
.. change::
:tags: removed, autogenerate
The long deprecated
:paramref:`.EnvironmentContext.configure.include_symbol` hook is removed.
The :paramref:`.EnvironmentContext.configure.include_object`
and :paramref:`.EnvironmentContext.configure.include_name`
hooks both achieve the goals of this hook.
.. change::
:tags: bug, autogenerate
:tickets: 721
Added rendering for the ``Table.prefixes`` element to autogenerate so that
the rendered Python code includes these directives. Pull request courtesy
Rodrigo Ce Moretto.
.. change::
:tags: bug, batch
:tickets: 761
Added missing "create comment" feature for columns that are altered in
batch migrations.
.. change::
:tags: changed
:tickets: 748
Alembic 1.5.0 now supports **Python 2.7 and Python 3.6 and above**, as well
as **SQLAlchemy 1.3.0 and above**. Support is removed for Python 3
versions prior to 3.6 and SQLAlchemy versions prior to the 1.3 series.
.. change::
:tags: bug, batch
:tickets: 773
Made an adjustment to the PostgreSQL dialect to allow it to work more
effectively in batch mode, where a datatype like Boolean or non-native Enum
that may have embedded rules to generate CHECK constraints will be more
correctly handled in that these constraints usually will not have been
generated on the PostgreSQL backend; previously it would inadvertently
assume they existed unconditionally in a special PG-only "drop constraint"
step.
.. change::
:tags: feature, versioning
:tickets: 757
The revision tree is now checked for cycles and loops between revision
files when the revision environment is loaded up. Scenarios such as a
revision pointing to itself, or a revision that can reach itself via a
loop, are handled and will raise the :class:`.CycleDetected` exception when
the environment is loaded (expressed from the Alembic commandline as a
failure message and nonzero return code). Previously, these situations were
silently ignored up front, and the behavior of revision traversal would
either be silently incorrect, or would produce errors such as
:class:`.RangeNotAncestorError`. Pull request courtesy Koichiro Den.
.. change::
:tags: usecase, commands
Add ``__main__.py`` file to alembic package to support invocation
with ``python -m alembic``.
.. change::
:tags: removed, commands
Removed deprecated ``--head_only`` option to the ``alembic current``
command
.. change::
:tags: removed, operations
Removed legacy parameter names from operations; these have been emitting
warnings since version 0.8. In the case that legacy version files have not
yet been updated, they can be modified directly in order to maintain
compatibility (a before/after sketch follows the list below):
* :meth:`.Operations.drop_constraint` - "type" (use ``"type_"``) and "name"
(use "constraint_name")
* :meth:`.Operations.create_primary_key` - "cols" (use "columns") and
"name" (use "constraint_name")
* :meth:`.Operations.create_unique_constraint` - "name" (use
"constraint_name"), "source" (use "table_name") and "local_cols" (use
"columns")
* :meth:`.Operations.batch_create_unique_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_foreign_key` - "name" (use "constraint_name"),
"source" (use "source_table"), "referent" (use "referent_table")
* :meth:`.Operations.batch_create_foreign_key` - "name" (use
"constraint_name"), "referent" (use "referent_table")
* :meth:`.Operations.create_check_constraint` - "name" (use
"constraint_name"), "source" (use "table_name")
* :meth:`.Operations.batch_create_check_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_index` - "name" (use "index_name")
* :meth:`.Operations.drop_index` - "name" (use "index_name"), "tablename"
(use "table_name")
* :meth:`.Operations.batch_drop_index` - "name" (use "index_name"),
* :meth:`.Operations.create_table` - "name" (use "table_name")
* :meth:`.Operations.drop_table` - "name" (use "table_name")
* :meth:`.Operations.alter_column` - "name" (use "new_column_name")
.. changelog::
:version: 1.4.3
:released: September 11, 2020
.. change::
:tags: bug, sqlite, batch
:tickets: 711
Added support to drop named CHECK constraints that are specified as part of
a column, rather than table wide. Previously, only constraints associated
with the table were considered.
.. change::
:tags: bug, ops, mysql
:tickets: 736
Fixed issue where the MySQL dialect would not correctly render the server
default of a column in an alter operation, if the operation were
programmatically generated from an autogenerate pass as it would not
accommodate for the full structure of the DefaultClause construct.
.. change::
:tags: bug, sqlite, batch
:tickets: 697
Fixed issue where the CAST applied to a JSON column when copying a SQLite
table during batch mode would cause the data to be lost, as SQLite's CAST
with JSON appears to convert the data to the value "0". The CAST is now
skipped in a dialect-specific manner, including for JSON columns on SQLite.
Pull request courtesy Sebastián Ramírez.
.. change::
:tags: bug, commands
:tickets: 694
The ``alembic current`` command no longer creates an ``alembic_version``
table in the database if one does not exist already, returning no version
as the current version. This allows checking for migrations in parallel
without introducing race conditions. Pull request courtesy Nikolay
Edigaryev.
.. change::
:tags: bug, batch
Fixed issue where columns in a foreign-key referenced table would be
replaced with null-type columns during a batch operation; while this did
not generally have any side effects, it could theoretically impact a batch
operation that also targets that table directly and also would interfere
with future changes to the ``.append_column()`` method to disallow implicit
replacement of columns.
.. change::
:tags: bug, mssql
:tickets: 716
Fixed issue where the ``mssql_drop_foreign_key=True`` flag on
``op.drop_column`` would lead to an incorrect syntax error due to a typo in
the SQL emitted; the same typo was present in the test as well, so it was
not detected. Pull request courtesy Oleg Shigorin.
.. changelog::
:version: 1.4.2
:released: March 19, 2020
.. change::
:tags: usecase, autogenerate
:tickets: 669
Adjusted autogen comparison to accommodate for backends that support
computed column reflection, dependent on SQLAlchemy version 1.3.16 or
higher. This emits a warning if the SQL expression inside of a
:class:`.Computed` value changes between the metadata and the database, as
these expressions can't be changed without dropping and recreating the
column.
.. change::
:tags: bug, tests
:tickets: 668
Fixed an issue that prevented the test suite from running with the
recently released py.test 5.4.0.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 671
Fixed more false-positive failures produced by the new "compare type" logic
first added in :ticket:`605`, particularly impacting MySQL string types
regarding flags such as "charset" and "collation".
.. change::
:tags: bug, op directives, oracle
:tickets: 670
Fixed issue in Oracle backend where a table RENAME with a schema-qualified
name would include the schema in the "to" portion, which is rejected by
Oracle.
.. changelog::
:version: 1.4.1
:released: March 1, 2020
.. change::
:tags: bug, autogenerate
:tickets: 661
Fixed regression caused by the new "type comparison" logic introduced in
1.4 as part of :ticket:`605` where comparisons of MySQL "unsigned integer"
datatypes would produce false positives, as the regular expression logic
was not correctly parsing the "unsigned" token when MySQL's default display
width would be returned by the database. Pull request courtesy Paul
Becotte.
.. change::
:tags: bug, environment
:tickets: 663
Error message for "path doesn't exist" when loading up script environment
now displays the absolute path. Pull request courtesy Rowan Hart.
.. change::
:tags: bug, autogenerate
:tickets: 654
Fixed regression in 1.4.0 due to :ticket:`647` where unique constraint
comparison with mixed case constraint names while not using a naming
convention would produce false positives during autogenerate.
.. change::
:tags: bug, environment
The check for matched rowcount when the alembic_version table is updated or
deleted from is now conditional based on whether or not the dialect
supports the concept of "rowcount" for UPDATE or DELETE rows matched. Some
third party dialects do not support this concept. Pull request courtesy Ke
Zhu.
.. change::
:tags: bug, operations
:tickets: 655
Fixed long-standing bug where an inline column CHECK constraint would not
be rendered within an "ADD COLUMN" operation. The DDL compiler is now
consulted for inline constraints within the :meth:`.Operations.add_column`
method as is done for regular CREATE TABLE operations.
.. changelog::
:version: 1.4.0
:released: February 4, 2020
.. change::
:tags: change
The internal inspection routines no longer use SQLAlchemy's
``Inspector.from_engine()`` method, which is expected to be deprecated in
1.4. The ``inspect()`` function is now used.
.. change::
:tags: bug, autogenerate
:tickets: 647
Adjusted the unique constraint comparison logic in a similar manner as that
of :ticket:`421` did for indexes in order to take into account SQLAlchemy's
own truncation of long constraint names when a naming convention is in use.
Without this step, a name that is truncated by SQLAlchemy based on a unique
constraint naming convention or hardcoded name will not compare properly.
.. change::
:tags: feature, batch
:tickets: 640
Added new parameters :paramref:`.BatchOperations.add_column.insert_before`,
:paramref:`.BatchOperations.add_column.insert_after` which provide for
establishing the specific position in which a new column should be placed.
Also added :paramref:`.Operations.batch_alter_table.partial_reordering`
which allows the complete set of columns to be reordered when the new table
is created. Both operations apply only to when batch mode is recreating
the whole table using ``recreate="always"``. Thanks to Marcin Szymanski
for assistance with the implementation.
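A minimal sketch using hypothetical table and column names; note that
batch mode must be recreating the table for positioning to apply::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        with op.batch_alter_table("account", recreate="always") as batch_op:
            batch_op.add_column(
                sa.Column("middle_name", sa.String(50)),
                insert_after="first_name",
            )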
.. change::
:tags: usecase, environment
:tickets: 648
Moved the use of the ``__file__`` attribute at the base of the Alembic
package into the one place that it is specifically needed, which is when
the config attempts to locate the template directory. This helps to allow
Alembic to be fully importable in environments that are using Python
memory-only import schemes. Pull request courtesy layday.
.. change::
:tags: bug, autogenerate
:tickets: 605
A major rework of the "type comparison" logic is in place which changes the
entire approach by which column datatypes are compared. Types are now
compared based on the DDL string generated by the metadata type vs. the
datatype reflected from the database. This means we compare types based on
what would actually render and additionally if elements of the types change
like string length, those changes are detected as well. False positives
like those generated between SQLAlchemy Boolean and MySQL TINYINT should
also be resolved. Thanks very much to Paul Becotte for lots of hard work
and patience on this one.
.. seealso::
:ref:`autogenerate_detects` - updated comments on type comparison
.. changelog::
:version: 1.3.3
:released: January 22, 2020
.. change::
:tags: bug, postgresql
:tickets: 637
Fixed issue where COMMENT directives for PostgreSQL failed to correctly
include an explicit schema name, as well as correct quoting rules for
schema, table, and column names. Pull request courtesy Matthew Sills.
.. change::
:tags: usecase, operations
:tickets: 624
Added support for rendering of "computed" elements on :class:`.Column`
objects, supported in SQLAlchemy via the new :class:`.Computed` element
introduced in version 1.3.11. Pull request courtesy Federico Caselli.
Note that there is currently no support for ALTER COLUMN to add, remove, or
modify the "GENERATED ALWAYS AS" element from a column; at least for
PostgreSQL, it does not seem to be supported by the database. Additionally,
SQLAlchemy does not currently reliably reflect the "GENERATED ALWAYS AS"
phrase from an existing column, so there is also no autogenerate support
for addition or removal of the :class:`.Computed` element to or from an
existing column, there is only support for adding new columns that include
the :class:`.Computed` element. In the case that the :class:`.Computed`
element is removed from the :class:`.Column` object in the table metadata,
PostgreSQL and Oracle currently reflect the "GENERATED ALWAYS AS"
expression as the "server default" which will produce an op that tries to
drop the element as a default.
.. changelog::
:version: 1.3.2
:released: December 16, 2019
.. change::
:tags: bug, api, autogenerate
:tickets: 635
Fixed regression introduced by :ticket:`579` where server default rendering
functions began to require a dialect implementation, however the
:func:`.render_python_code` convenience function did not include one, thus
causing the function to fail when used in a server default context. The
function now accepts a migration context argument and also creates one
against the default dialect if one is not provided.
.. changelog::
:version: 1.3.1
:released: November 13, 2019
.. change::
:tags: bug, mssql
:tickets: 621
Fixed bug in MSSQL dialect where the drop constraint execution steps used
to remove server default or implicit foreign key constraint failed to take
into account the schema name of the target table.
.. changelog::
:version: 1.3.0
:released: October 31, 2019
.. change::
:tags: feature, command
:tickets: 608
Added support for ALEMBIC_CONFIG environment variable,
refers to the location of the alembic configuration script
in lieu of using the -c command line option.
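For example (the path shown is hypothetical)::

    ALEMBIC_CONFIG=/path/to/alembic.ini alembic upgrade head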
.. change::
:tags: bug, autogenerate
:tickets: 131
Fixed bug in new Variant autogenerate where the order of the arguments to
Variant were mistakenly reversed.
.. change::
:tags: change, compatibility
Some internal modifications have been made to how the names of indexes and
unique constraints work to make use of new functions added in SQLAlchemy
1.4, so that SQLAlchemy has more flexibility over how naming conventions
may be applied to these objects.
.. changelog::
:version: 1.2.1
:released: September 24, 2019
.. change::
:tags: bug, command
:tickets: 601
Reverted the name change of the "revisions" argument to
:func:`.command.stamp` to "revision" as apparently applications are
calling upon this argument as a keyword name. Pull request courtesy
Thomas Bechtold. Special translations are also added to the command
line interface so that it is still known as "revisions" in the CLI.
.. change::
:tags: bug, tests
:tickets: 592
Removed the "test requirements" from "setup.py test", as this command now
only emits a removal error in any case and these requirements are unused.
.. changelog::
:version: 1.2.0
:released: September 20, 2019
.. change::
:tags: feature, command
:tickets: 473
Added new ``--purge`` flag to the ``alembic stamp`` command, which will
unconditionally erase the version table before stamping anything. This is
useful for development where non-existent version identifiers might be left
within the table. Additionally, ``alembic.stamp`` now supports a list of
revision identifiers, which are intended to allow setting up multiple heads
at once. Overall handling of version identifiers within the
``alembic.stamp`` command has been improved with many new tests and
use cases added.
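A rough sketch of the new invocation forms, using hypothetical revision
identifiers::

    alembic stamp --purge head
    alembic stamp abc12345 def67890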
.. change::
:tags: bug, autogenerate
:tickets: 550
Improved the Python rendering of a series of migration operations such that
a single "pass" is rendered for a :class:`.UpgradeOps` or
:class:`.DowngradeOps` based on if no lines of Python code actually
rendered under the operation, rather than whether or not sub-directives
exist. Removed extra "pass" lines that would generate from the
:class:`.ModifyTableOps` directive so that these aren't duplicated under
operation rewriting scenarios.
.. change::
:tags: feature, runtime
:tickets: 123
Added new feature :meth:`.MigrationContext.autocommit_block`, a special
directive which will provide for a non-transactional block inside of a
migration script. The feature requires that the database driver
(DBAPI) supports the AUTOCOMMIT isolation mode. The directive
also necessarily needs to COMMIT the existing transaction in progress
in order to enter autocommit mode.
.. seealso::
:meth:`.MigrationContext.autocommit_block`
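A minimal migration sketch (table and index names hypothetical), such as
for PostgreSQL's ``CREATE INDEX CONCURRENTLY`` which cannot run inside a
transaction block::

    from alembic import op

    def upgrade():
        with op.get_context().autocommit_block():
            op.execute(
                "CREATE INDEX CONCURRENTLY ix_account_email ON account (email)"
            )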
.. change::
:tags: change, py3k
Python 3.4 support is dropped, as upstream tooling (pip, mysqlclient,
etc.) is already dropping support for Python 3.4, which itself is no
longer maintained.
.. change::
:tags: usecase, autogenerate
:tickets: 518
Added autogenerate support for :class:`.Column` objects that have
dialect-specific ``**kwargs``, support first added in SQLAlchemy 1.3.
This includes SQLite "on conflict" as well as options used by some
third party dialects.
.. change::
:tags: usecase, autogenerate
:tickets: 131
Added rendering for SQLAlchemy ``Variant`` datatypes, which render as the
base type plus one or more ``.with_variant()`` method calls.
.. change::
:tags: usecase, commands
:tickets: 534
Made the command interface revision lookup behavior more strict in that an
Alembic revision number is only resolved based on partial match rules if
it has at least four characters, to prevent simple typographical issues
from inadvertently running migrations.
.. change::
:tags: feature, commands
:tickets: 307
Added "post write hooks" to revision generation. These allow custom logic
to run after a revision Python script is generated, typically for the
purpose of running code formatters such as "Black" or "autopep8", but may
be used for any arbitrary post-render hook as well, including custom Python
functions or scripts. The hooks are enabled by providing a
``[post_write_hooks]`` section in the alembic.ini file. A single hook
is provided which runs an arbitrary Python executable on the newly
generated revision script, which can be configured to run code formatters
such as Black; full examples are included in the documentation.
.. seealso::
:ref:`post_write_hooks`
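A sketch of the provided hook type running Black (the options shown are
illustrative)::

    [post_write_hooks]
    hooks = black
    black.type = console_scripts
    black.entrypoint = black
    black.options = -l 79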
.. change::
:tags: feature, environment
:tickets: 463
Added new flag ``--package`` to ``alembic init``. For environments where
the Alembic migration files and such are within the package tree and
importable as modules, this flag can be specified which will add the
additional ``__init__.py`` files in the version location and the
environment location.
.. change::
:tags: bug, autogenerate
:tickets: 549
Fixed bug where rendering of comment text for table-level comments within
:meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` was not properly quote-escaped
within rendered Python code for autogenerate.
.. change::
:tags: bug, autogenerate
:tickets: 505
Modified the logic of the :class:`.Rewriter` object such that it keeps a
memoization of which directives it has processed, so that it can ensure it
processes a particular directive only once, and additionally fixed
:class:`.Rewriter` so that it functions correctly for multiple-pass
autogenerate schemes, such as the one illustrated in the "multidb"
template. By tracking which directives have been processed, a
multiple-pass scheme which calls upon the :class:`.Rewriter` multiple times
for the same structure as elements are added can work without running
duplicate operations on the same elements more than once.
.. changelog::
:version: 1.1.0
:released: August 26, 2019
.. change::
:tags: change
Alembic 1.1 bumps the minimum version of SQLAlchemy to 1.1. As was the
case before, Python requirements remain at Python 2.7, or Python 3.4 and
above in the 3.x series.
.. change::
:tags: change, internals
The test suite for Alembic now makes use of SQLAlchemy's testing framework
directly. Previously, Alembic had its own version of this framework that
was mostly copied from that of SQLAlchemy to enable testing with older
SQLAlchemy versions. The majority of this code is now removed so that both
projects can leverage improvements from a common testing framework.
.. change::
:tags: bug, commands
:tickets: 562
Fixed bug where the double-percent logic applied to some dialects such as
psycopg2 would be rendered in ``--sql`` mode, by allowing dialect options
to be passed through to the dialect used to generate SQL and then providing
``paramstyle="named"`` so that percent signs need not be doubled. For
users having this issue, existing env.py scripts need to add
``dialect_opts={"paramstyle": "named"}`` to their offline
context.configure(). See the ``alembic/templates/generic/env.py`` template
for an example.
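A rough sketch of the resulting offline ``context.configure()`` call,
using variable names as in the generic template::

    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )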
.. change::
:tags: bug, py3k
Fixed use of the deprecated "imp" module, which is used to detect pep3147
availability as well as to locate .pyc files, which started emitting
deprecation warnings during the test suite. The warnings were not being
emitted earlier during the test suite; the change is possibly due to
changes in py.test itself, but this is not clear. The check for pep3147 is
set to True for any Python version 3.5 or greater now and importlib is used
when available. Note that some dependencies such as distutils may still be
emitting this warning. Tests are adjusted to accommodate for dependencies
that emit the warning as well.
.. change::
:tags: bug, mysql
:tickets: 594
Fixed issue where emitting a change of column name for MySQL did not
preserve the column comment, even if it were specified as existing_comment.
.. change::
:tags: bug, setup
:tickets: 592
Removed the "python setup.py test" feature in favor of a straight run of
"tox". Per Pypa / pytest developers, "setup.py" commands are in general
headed towards deprecation in favor of tox. The tox.ini script has been
updated such that running "tox" with no arguments will perform a single run
of the test suite against the default installed Python interpreter.
.. seealso::
https://github.com/pypa/setuptools/issues/1684
https://github.com/pytest-dev/pytest/issues/5534
.. change::
:tags: usecase, commands
:tickets: 571
The "alembic init" command will now proceed if the target directory exists
as long as it's still empty. Previously, it would not proceed if the
directory existed. The new behavior is modeled from what git does, to
accommodate for container or other deployments where an Alembic target
directory may need to be already mounted instead of being created with
alembic init. Pull request courtesy Aviskar KC.
.. changelog::
:version: 1.0.11
:released: June 25, 2019
.. change::
:tags: bug, sqlite, autogenerate, batch
:tickets: 579
SQLite server default reflection will ensure parentheses surround a
column default expression that is detected as being a non-constant
expression, such as a ``datetime()`` default, to accommodate for the
requirement that SQL expressions have to be parenthesized when being sent
as DDL. Parentheses are not added to constant expressions to allow for
maximum cross-compatibility with other dialects and existing test suites
(such as Alembic's), which necessarily entails scanning the expression to
detect constant numeric and string values. The logic is added to the
two "reflection->DDL round trip" paths which are currently autogenerate and
batch migration. Within autogenerate, the logic is on the rendering side,
whereas in batch the logic is installed as a column reflection hook.
.. change::
:tags: bug, sqlite, autogenerate
:tickets: 579
Improved SQLite server default comparison to accommodate for a ``text()``
construct that added parentheses directly vs. a construct that relied
upon the SQLAlchemy SQLite dialect to render the parenthesis, as well
as improved support for various forms of constant expressions such as
values that are quoted vs. non-quoted.
.. change::
:tags: bug, autogenerate
Fixed bug where the "literal_binds" flag was not being set when
autogenerate would create a server default value, meaning server default
comparisons would fail for functions that contained literal values.
.. change::
:tags: bug, mysql
:tickets: 554
Added support for MySQL "DROP CHECK", which is added as of MySQL 8.0.16,
separate from MariaDB's "DROP CONSTRAINT" for CHECK constraints. The MySQL
Alembic implementation now checks for "MariaDB" in server_version_info to
decide which one to use.
.. change::
:tags: bug, mysql, operations
:tickets: 564
Fixed issue where MySQL databases need to use CHANGE COLUMN when altering a
server default of CURRENT_TIMESTAMP, NOW() and probably other functions
that are only usable with DATETIME/TIMESTAMP columns. While MariaDB
supports both CHANGE and ALTER COLUMN in this case, MySQL databases only
support CHANGE. So the new logic is that if the server default change is
against a DateTime-oriented column, the CHANGE format is used
unconditionally, as in the vast majority of cases the server default is to
be CURRENT_TIMESTAMP which may also be potentially bundled with an "ON
UPDATE CURRENT_TIMESTAMP" directive, which SQLAlchemy does not currently
support as a distinct field. The fix additionally improves the server
default comparison logic when the "ON UPDATE" clause is present and
there are parenthesis to be adjusted for as is the case on some MariaDB
versions.
.. change::
:tags: bug, environment
Warnings emitted by Alembic now include a default stack level of 2, and in
some cases it's set to 3, in order to help warnings indicate more closely
where they are originating from. Pull request courtesy Ash Berlin-Taylor.
.. change::
:tags: bug, py3k
:tickets: 563
Replaced the Python compatibility routines for ``getargspec()`` with a fully
vendored version based on ``getfullargspec()`` from Python 3.3.
Originally, Python was emitting deprecation warnings for this function in
Python 3.8 alphas. While this change was reverted, it was observed that
Python 3 implementations for ``getfullargspec()`` are an order of magnitude
slower as of the 3.4 series where it was rewritten against ``Signature``.
While Python plans to improve upon this situation, SQLAlchemy projects for
now are using a simple replacement to avoid any future issues.
.. changelog::
:version: 1.0.10
:released: April 28, 2019
.. change::
:tags: bug, commands
:tickets: 552
Fixed bug introduced in release 0.9.0 where the helptext for commands
inadvertently got expanded to include function docstrings from the
command.py module. The logic has been adjusted to only refer to the first
line(s) preceding the first line break within each docstring, as was the
original intent.
.. change::
:tags: bug, operations, mysql
:tickets: 551
Added an assertion in :meth:`.RevisionMap.get_revisions` and other methods
which ensures revision numbers are passed as strings or collections of
strings. Driver issues particularly on MySQL may inadvertently be passing
bytes here which leads to failures later on.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 553
Fixed bug when using the
:paramref:`.EnvironmentContext.configure.compare_server_default` flag set
to ``True`` where a server default that is introduced in the table metadata
on an ``Integer`` column, where there is no existing server default in the
database, would raise a ``TypeError``.
.. changelog::
:version: 1.0.9
:released: April 15, 2019
.. change::
:tags: bug, operations
:tickets: 548
Simplified the internal scheme used to generate the ``alembic.op`` namespace
to no longer attempt to generate full method signatures (e.g. rather than
generic ``*args, **kw``) as this was not working in most cases anyway, while
in rare circumstances it would in fact sporadically have access to the real
argument names and then fail when generating the function due to missing
symbols in the argument signature.
.. changelog::
:version: 1.0.8
:released: March 4, 2019
.. change::
:tags: bug, operations
:tickets: 528
Removed use of deprecated ``force`` parameter for SQLAlchemy quoting
functions as this parameter will be removed in a future release.
Pull request courtesy Parth Shandilya (ParthS007).
.. change::
:tags: bug, autogenerate, postgresql, py3k
:tickets: 541
Fixed issue where server default comparison on the PostgreSQL dialect would
fail for a blank string on Python 3.7 only, due to a change in regular
expression behavior in Python 3.7.
.. changelog::
:version: 1.0.7
:released: January 25, 2019
.. change::
:tags: bug, autogenerate
:tickets: 529
Fixed issue in new comment support where autogenerated Python code
for comments wasn't using ``repr()`` thus causing issues with
quoting. Pull request courtesy Damien Garaud.
.. changelog::
:version: 1.0.6
:released: January 13, 2019
.. change::
:tags: feature, operations
:tickets: 422
Added Table and Column level comments for supported backends.
New methods :meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` are added. New arguments
:paramref:`.Operations.alter_column.comment` and
:paramref:`.Operations.alter_column.existing_comment` are added to
:meth:`.Operations.alter_column`. Autogenerate support is also added
to ensure comment add/drops from tables and columns are generated as well
as that :meth:`.Operations.create_table`, :meth:`.Operations.add_column`
both include the comment field from the source :class:`.Table`
or :class:`.Column` object.
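A brief sketch of the new directives, with hypothetical table and column
names::

    import sqlalchemy as sa
    from alembic import op

    op.create_table_comment("account", "user accounts table")
    op.alter_column(
        "account",
        "email",
        existing_type=sa.String(255),
        comment="primary contact address",
    )
    op.drop_table_comment("account")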
.. changelog::
:version: 1.0.5
:released: November 27, 2018
.. change::
:tags: bug, py3k
:tickets: 507
Resolved remaining Python 3 deprecation warnings, covering
the use of inspect.formatargspec() with a vendored version
copied from the Python standard library, importing
collections.abc above Python 3.3 when testing against abstract
base classes, fixed one occurrence of log.warn(), as well as a few
invalid escape sequences.
.. changelog::
:version: 1.0.4
:released: November 27, 2018
.. change::
:tags: change
Code hosting has been moved to GitHub, at
https://github.com/sqlalchemy/alembic. Additionally, the
main Alembic website documentation URL is now
https://alembic.sqlalchemy.org.
.. changelog::
:version: 1.0.3
:released: November 14, 2018
.. change::
:tags: bug, mssql
:tickets: 516
Fixed regression caused by :ticket:`513`, where the logic to consume
``mssql_include`` was not correctly interpreting the case where the flag
was not present, breaking the ``op.create_index`` directive for SQL Server
as a whole.
.. changelog::
:version: 1.0.2
:released: October 31, 2018
.. change::
:tags: bug, autogenerate
:tickets: 515
The ``system=True`` flag on :class:`.Column`, used primarily in conjunction
with the Postgresql "xmin" column, now renders within the autogenerate
render process, allowing the column to be excluded from DDL. Additionally,
adding a system=True column to a model will produce no autogenerate diff as
this column is implicitly present in the database.
.. change::
:tags: bug, mssql
:tickets: 513
Fixed issue where usage of the SQL Server ``mssql_include`` option within a
:meth:`.Operations.create_index` would raise a KeyError, as the additional
column(s) need to be added to the table object used by the construct
internally.
.. changelog::
:version: 1.0.1
:released: October 17, 2018
.. change::
:tags: bug, commands
:tickets: 497
Fixed an issue where revision descriptions were essentially
being formatted twice. Any revision description that contained characters
like % would cause writing output to stdout to fail, because the call to
config.print_stdout attempted to format any additional args passed to the
function.
This fix now only applies string formatting if any args are provided
along with the output text.
.. change::
:tags: bug, autogenerate
:tickets: 512
Fixed issue where removed method ``union_update()`` was used when a
customized :class:`.MigrationScript` instance included entries in the
``.imports`` data member, raising an AttributeError.
.. changelog::
:version: 1.0.0
:released: July 13, 2018
.. change::
:tags: feature, general
:tickets: 491
For Alembic 1.0, Python 2.6 / 3.3 support is being dropped, allowing a
fixed setup.py to be built as well as universal wheels. Pull request
courtesy Hugo.
.. change::
:tags: feature, general
With the 1.0 release, Alembic's minimum SQLAlchemy support version
moves to 0.9.0, previously 0.7.9.
.. change::
:tags: bug, batch
:tickets: 502
Fixed issue in batch where dropping a primary key column, then adding it
back under the same name but without the primary_key flag, would not remove
it from the existing PrimaryKeyConstraint. If a new PrimaryKeyConstraint
is added, it is used as-is, as was the case before.
.. changelog::
:version: 0.9.10
:released: June 29, 2018
.. change::
:tags: bug, autogenerate
The "op.drop_constraint()" directive will now render using ``repr()`` for
the schema name, in the same way that "schema" renders for all the other op
directives. Pull request courtesy Denis Kataev.
.. change::
:tags: bug, autogenerate
:tickets: 494
Added basic capabilities for external dialects to support rendering of
"nested" types, like arrays, in a manner similar to that of the Postgresql
dialect.
.. change::
:tags: bug, autogenerate
Fixed issue where "autoincrement=True" would not render for a column that
specified it, since as of SQLAlchemy 1.1 this is no longer the default
value for "autoincrement". Note the behavior only takes effect against the
SQLAlchemy 1.1.0 and higher; for pre-1.1 SQLAlchemy, "autoincrement=True"
does not render as was the case before. Pull request courtesy Elad Almos.
.. changelog::
:version: 0.9.9
:released: March 22, 2018
.. change::
:tags: feature, commands
:tickets: 481
Added new flag ``--indicate-current`` to the ``alembic history`` command.
When listing versions, it will include the token "(current)" to indicate
the given version is a current head in the target database. Pull request
courtesy Kazutaka Mise.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 455
The fix for :ticket:`455` in version 0.9.6 involving MySQL server default
comparison was entirely non-functional, as the test itself was also broken
and didn't reveal that it wasn't working. The regular expression to compare
server default values like CURRENT_TIMESTAMP to current_timestamp() is
repaired.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 483
Fixed bug where MySQL server default comparisons were basically not working
at all due to incorrect regexp added in :ticket:`455`. Also accommodates
for MariaDB 10.2 quoting differences in reporting integer based server
defaults.
.. change::
:tags: bug, operations, mysql
:tickets: 487
Fixed bug in ``op.drop_constraint()`` for MySQL where
quoting rules would not be applied to the constraint name.
.. changelog::
:version: 0.9.8
:released: February 16, 2018
.. change::
:tags: bug, runtime
:tickets: 482
Fixed bug where the :meth:`.Script.as_revision_number` method
did not accommodate for the 'heads' identifier, which in turn
caused the :meth:`.EnvironmentContext.get_head_revisions`
and :meth:`.EnvironmentContext.get_revision_argument` methods
to be not usable when multiple heads were present.
The :meth:`.EnvironmentContext.get_head_revisions` method returns
a tuple in all cases as documented.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 478
Fixed bug where autogenerate of :class:`.ExcludeConstraint`
would render a raw quoted name for a Column that has case-sensitive
characters, which when invoked as an inline member of the Table
would produce a stack trace that the quoted name is not found.
An incoming Column object is now rendered as ``sa.column('name')``.
.. change::
:tags: bug, autogenerate
:tickets: 468
Fixed bug where the indexes would not be included in a
migration that was dropping the owning table. The fix
now will also emit DROP INDEX for the indexes ahead of time,
but more importantly will include CREATE INDEX in the
downgrade migration.
.. change::
:tags: bug, postgresql
:tickets: 480
Fixed the autogenerate of the module prefix
when rendering the text_type parameter of
postgresql.HSTORE, in much the same way that
we do for ARRAY's type and JSON's text_type.
.. change::
:tags: bug, mysql
:tickets: 479
Added support for DROP CONSTRAINT to the MySQL Alembic
dialect to support MariaDB 10.2 which now has real
CHECK constraints. Note this change does **not**
add autogenerate support, only support for op.drop_constraint()
to work.
.. changelog::
:version: 0.9.7
:released: January 16, 2018
.. change::
:tags: bug, autogenerate
:tickets: 472
Fixed regression caused by :ticket:`421` which would
cause case-sensitive quoting rules to interfere with the
comparison logic for index names, thus causing indexes to show
as added for indexes that have case-sensitive names. Works with
SQLAlchemy 0.9 and later series.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 461
Fixed bug where autogenerate would produce a DROP statement for the index
implicitly created by a Postgresql EXCLUDE constraint, rather than skipping
it as is the case for indexes implicitly generated by unique constraints.
Makes use of SQLAlchemy 1.0.x's improved "duplicates index" metadata and
requires at least SQLAlchemy version 1.0.x to function correctly.
.. changelog::
:version: 0.9.6
:released: October 13, 2017
.. change::
:tags: bug, commands
:tickets: 458
Fixed a few Python 3.6 deprecation warnings by replacing ``StopIteration``
with ``return``, as well as using ``getfullargspec()`` instead of
``getargspec()`` under Python 3.
.. change::
:tags: bug, commands
:tickets: 441
As an addition to :ticket:`441` fixed in 0.9.5, we forgot to also filter
for the ``+`` sign in migration names which also breaks due to the relative
migrations feature.
.. change::
:tags: bug, autogenerate
:tickets: 442
Fixed bug expanding upon the fix for
:ticket:`85` which adds the correct module import to the
"inner" type for an ``ARRAY`` type, the fix now accommodates for the
generic ``sqlalchemy.types.ARRAY`` type added in SQLAlchemy 1.1,
rendering the inner type correctly regardless of whether or not the
Postgresql dialect is present.
.. change::
:tags: bug, mysql
:tickets: 455
Fixed bug where server default comparison of CURRENT_TIMESTAMP would fail
on MariaDB 10.2 due to a change in how the function is
represented by the database during reflection.
.. change::
:tags: bug, autogenerate
Fixed bug where comparison of ``Numeric`` types would produce
a difference if the Python-side ``Numeric`` inadvertently specified
a non-None "scale" with a "precision" of None, even though this ``Numeric``
type will pass over the "scale" argument when rendering. Pull request
courtesy Ivan Mmelnychuk.
.. change::
:tags: feature, commands
:tickets: 447
The ``alembic history`` command will now make use of the revision
environment ``env.py`` unconditionally if the ``revision_environment``
configuration flag is set to True. Previously, the environment would
only be invoked if the history specification were against a database-stored
revision token.
.. change::
:tags: bug, batch
:tickets: 457
The name of the temporary table in batch mode is now generated
off of the original table name itself, to avoid conflicts for the
unusual case of multiple batch operations running against the same
database schema at the same time.
.. change::
:tags: bug, autogenerate
:tickets: 456
A :class:`.ForeignKeyConstraint` can now render correctly if the
``link_to_name`` flag is set, as it will not attempt to resolve the name
from a "key" in this case. Additionally, the constraint will render
as-is even if the remote column name isn't present on the referenced
remote table.
.. change::
:tags: bug, runtime, py3k
:tickets: 449
Reworked "sourceless" system to be fully capable of handling any
combination of: Python2/3x, pep3149 or not, PYTHONOPTIMIZE or not,
for locating and loading both env.py files as well as versioning files.
This includes: locating files inside of ``__pycache__`` as well as listing
out version files that might be only in ``versions/__pycache__``, deduplicating
version files that may be in ``versions/__pycache__`` and ``versions/``
at the same time, and correctly looking for .pyc or .pyo files based on
whether pep488 is present. The latest Python 3.x deprecation warnings
involving importlib are also corrected.
.. changelog::
:version: 0.9.5
:released: August 9, 2017
.. change::
:tags: bug, commands
:tickets: 441
A :class:`.CommandError` is raised if the "--rev-id" passed to the
:func:`.revision` command contains dashes or at-signs, as this interferes
with the command notation used to locate revisions.
.. change::
:tags: bug, postgresql
:tickets: 424
Added support for the dialect-specific keyword arguments
to :meth:`.Operations.drop_index`. This includes support for
``postgresql_concurrently`` and others.
.. change::
:tags: bug, commands
Fixed bug in timezone feature introduced in
:ticket:`425` when the creation
date in a revision file is calculated, to
accommodate for timezone names that contain
mixed-case characters in their name as opposed
to all uppercase. Pull request courtesy Nils
Philippsen.
.. changelog::
:version: 0.9.4
:released: July 31, 2017
.. change::
:tags: bug, runtime
Added an additional attribute to the new
:paramref:`.EnvironmentContext.configure.on_version_apply` API,
:attr:`.MigrationInfo.up_revision_ids`, to accommodate for the uncommon
case of the ``alembic stamp`` command being used to move from multiple
branches down to a common branchpoint; there will be multiple
"up" revisions in this one case.
.. changelog::
:version: 0.9.3
:released: July 6, 2017
.. change::
:tags: feature, runtime
Added a new callback hook
:paramref:`.EnvironmentContext.configure.on_version_apply`,
which allows user-defined code to be invoked each time an individual
upgrade, downgrade, or stamp operation proceeds against a database.
Pull request courtesy John Passaro.
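For illustration, a minimal ``env.py`` sketch of the hook; the body shown
and the catch-all ``**kw`` are assumptions for illustration only::

    def on_version_apply(ctx, step, heads, run_args, **kw):
        # hypothetical audit hook: runs once per upgrade/downgrade/stamp
        # step applied against the database
        print("applied step: %s" % step)

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        on_version_apply=on_version_apply,
    )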
.. change:: 433
:tags: bug, autogenerate
:tickets: 433
Fixed bug where autogen comparison of a :class:`.Variant` datatype
would not compare to the dialect level type for the "default"
implementation of the :class:`.Variant`, returning the type as changed
between database and table metadata.
.. change:: 431
:tags: bug, tests
:tickets: 431
Fixed unit tests to run correctly under the SQLAlchemy 1.0.x series
prior to version 1.0.10 where a particular bug involving Postgresql
exclude constraints was fixed.
.. changelog::
:version: 0.9.2
:released: May 18, 2017
.. change:: 429
:tags: bug, mssql
:tickets: 429
Repaired :meth:`.Operations.rename_table` for SQL Server when the
target table is in a remote schema, the schema name is omitted from
the "new name" argument.
.. change:: 425
:tags: feature, commands
:tickets: 425
Added a new configuration option ``timezone``, a string timezone name
that will be applied to the create date timestamp rendered
inside the revision file as made available to the ``file_template`` used
to generate the revision filename. Note this change adds the
``python-dateutil`` package as a dependency.
.. change:: 421
:tags: bug, autogenerate
:tickets: 421
The autogenerate compare scheme now takes into account the name truncation
rules applied by SQLAlchemy's DDL compiler to the names of the
:class:`.Index` object, when these names are dynamically truncated
due to a too-long identifier name. As the identifier truncation is
deterministic, applying the same rule to the metadata name allows
correct comparison to the database-derived name.
.. change:: 419
:tags: bug, environment
:tickets: 419
A warning is emitted when an object that's not a
:class:`~sqlalchemy.engine.Connection` is passed to
:meth:`.EnvironmentContext.configure`. For the case of a
:class:`~sqlalchemy.engine.Engine` passed, the check for "in transaction"
introduced in version 0.9.0 has been relaxed to work in the case of an
attribute error, as some users appear to be passing an
:class:`~sqlalchemy.engine.Engine` and not a
:class:`~sqlalchemy.engine.Connection`.
.. changelog::
:version: 0.9.1
:released: March 1, 2017
.. change:: 417
:tags: bug, commands
:tickets: 417, 369
An adjustment to the bug fix for :ticket:`369` to accommodate for
env.py scripts that use an enclosing transaction distinct from the
one that the context provides, so that the check for "didn't commit
the transaction" doesn't trigger in this scenario.
.. changelog::
:version: 0.9.0
:released: February 28, 2017
.. change:: 38
:tags: feature, autogenerate
:tickets: 38
The :paramref:`.EnvironmentContext.configure.target_metadata` parameter
may now be optionally specified as a sequence of :class:`.MetaData`
objects instead of a single :class:`.MetaData` object. The
autogenerate process will process the sequence of :class:`.MetaData`
objects in order.
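A minimal ``env.py`` sketch; ``metadata_one`` and ``metadata_two`` are
hypothetical names for the application's :class:`.MetaData` collections::

    from myapp.models import metadata_one, metadata_two  # hypothetical import

    context.configure(
        connection=connection,
        target_metadata=[metadata_one, metadata_two],
    )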
.. change:: 369
:tags: bug, commands
:tickets: 369
A :class:`.CommandError` is now raised when a migration file opens
a database transaction and does not close/commit/rollback, when
the backend database or environment options also specify transactional_ddl
is False. When transactional_ddl is not in use, Alembic doesn't
close any transaction so a transaction opened by a migration file
will cause the following migrations to fail to apply.
.. change:: 413
:tags: bug, autogenerate, mysql
:tickets: 413
The ``autoincrement=True`` flag is now rendered within the
:meth:`.Operations.alter_column` operation if the source column indicates
that this flag should be set to True. The behavior is sensitive to
the SQLAlchemy version in place, as the "auto" default option is new
in SQLAlchemy 1.1. When the source column indicates autoincrement
as True or "auto", the flag will render as True if the original column
contextually indicates that it should have "autoincrement" keywords,
and when the source column explicitly sets it to False, this is also
rendered. The behavior is intended to preserve the AUTO_INCREMENT flag
on MySQL as the column is fully recreated on this backend. Note that this
flag does **not** support alteration of a column's "autoincrement" status,
as this is not portable across backends.
.. change:: 411
:tags: bug, postgresql
:tickets: 411
Fixed bug where Postgresql JSON/JSONB types rendered on SQLAlchemy
1.1 would render the "astext_type" argument which defaults to
the ``Text()`` type without the module prefix, similarly to the
issue with ARRAY fixed in :ticket:`85`.
.. change:: 85
:tags: bug, postgresql
:tickets: 85
Fixed bug where Postgresql ARRAY type would not render the import prefix
for the inner type; additionally, user-defined renderers take place
for the inner type as well as the outer type. Pull request courtesy
Paul Brackin.
.. change:: process_revision_directives_command
:tags: feature, autogenerate
Added a keyword argument ``process_revision_directives`` to the
:func:`.command.revision` API call. This function acts in the
same role as the environment-level
:paramref:`.EnvironmentContext.configure.process_revision_directives`,
and allows API use of the
command to drop in an ad-hoc directive process function. This
function can be used among other things to place a complete
:class:`.MigrationScript` structure in place.
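A sketch of API use; the hook body here is a placeholder::

    from alembic import command
    from alembic.config import Config

    def process_revision_directives(context, revision, directives):
        # placeholder: inspect or rewrite the MigrationScript
        # structure held in directives[0]
        pass

    cfg = Config("alembic.ini")
    command.revision(
        cfg,
        message="example revision",
        autogenerate=True,
        process_revision_directives=process_revision_directives,
    )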
.. change:: 412
:tags: feature, postgresql
:tickets: 412
Added support for Postgresql EXCLUDE constraints, including the
operation directive :meth:`.Operations.create_exclude_constraint`
as well as autogenerate render support for the ``ExcludeConstraint``
object as present in a ``Table``. Autogenerate detection for an EXCLUDE
constraint added or removed to/from an existing table is **not**
implemented as the SQLAlchemy Postgresql dialect does not yet support
reflection of EXCLUDE constraints.
Additionally, unknown constraint types now warn when
encountered within an autogenerate action rather than raise.
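A sketch of the new directive against a hypothetical ``user`` table; the
constraint elements and WHERE clause are illustrative::

    from alembic import op

    op.create_exclude_constraint(
        "user_excl",
        "user",
        ("period", "&&"),
        where=("period != 'empty'"),
    )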
.. change:: fk_schema_compare
:tags: bug, operations
Fixed bug in :func:`.ops.create_foreign_key` where the internal table
representation would not be created properly if the foreign key referred
to a table in a different schema of the same name. Pull request
courtesy Konstantin Lebedev.
.. changelog::
:version: 0.8.10
:released: January 17, 2017
.. change:: 406
:tags: bug, versioning
:tickets: 406
The alembic_version table, when initially created, now establishes a
primary key constraint on the "version_num" column, to suit database
engines that don't support tables without primary keys. This behavior
can be controlled using the parameter
:paramref:`.EnvironmentContext.configure.version_table_pk`. Note that
this change only applies to the initial creation of the alembic_version
table; it does not impact any existing alembic_version table already
present.
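For example, the primary key can be suppressed via a minimal ``env.py``
sketch::

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        version_table_pk=False,  # don't create a PK on alembic_version
    )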
.. change:: 402
:tags: bug, batch
:tickets: 402
Fixed bug where doing ``batch_op.drop_constraint()`` against the
primary key constraint would fail to remove the "primary_key" flag
from the column, resulting in the constraint being recreated.
.. change:: update_uq_dedupe
:tags: bug, autogenerate, oracle
Adjusted the logic originally added for :ticket:`276` that detects MySQL
unique constraints which are actually unique indexes to be generalized
for any dialect that has this behavior, for SQLAlchemy version 1.0 and
greater. This is to allow for upcoming SQLAlchemy support for unique
constraint reflection for Oracle, which also has no dedicated concept of
"unique constraint" and instead establishes a unique index.
.. change:: 356
:tags: bug, versioning
:tickets: 356
Added a file ignore for Python files of the form ``.#<name>.py``,
which are generated by the Emacs editor. Pull request courtesy
Markus Mattes.
.. changelog::
:version: 0.8.9
:released: November 28, 2016
.. change:: 393
:tags: bug, autogenerate
:tickets: 393
Adjustment to the "please adjust!" comment in the script.py.mako
template so that the generated comment starts with a single pound
sign, appeasing flake8.
.. change::
:tags: bug, batch
:tickets: 391
Batch mode will not use CAST() to copy data if ``type_`` is given but its
basic type affinity matches that of the existing type. This is to
avoid SQLite's CAST of TIMESTAMP which results in truncation of the
data, in those cases where the user needs to add redundant ``type_`` for
other reasons.
.. change::
:tags: bug, autogenerate
:tickets: 393
Continued pep8 improvements by adding appropriate whitespace in
the base template for generated migrations. Pull request courtesy
Markus Mattes.
.. change::
:tags: bug, revisioning
Added an additional check when reading in revision files to detect
if the same file is being read twice; this can occur if the same directory
or a symlink equivalent is present more than once in version_locations.
A warning is now emitted and the file is skipped. Pull request courtesy
Jiri Kuncar.
.. change::
:tags: bug, autogenerate
:tickets: 395
Fixed bug where usage of a custom TypeDecorator which returns a
per-dialect type via :meth:`.TypeDecorator.load_dialect_impl` that differs
significantly from the default "impl" for the type decorator would fail
to compare correctly during autogenerate.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 392
Fixed bug in Postgresql "functional index skip" behavior where a
functional index that ended in ASC/DESC wouldn't be detected as something
we can't compare in autogenerate, leading to duplicate definitions
in autogenerated files.
.. change::
:tags: bug, versioning
Fixed bug where the "base" specifier, as in "base:head", could not
be used explicitly when ``--sql`` mode was present.
.. changelog::
:version: 0.8.8
:released: September 12, 2016
.. change::
:tags: autogenerate
The imports in the default script.py.mako are now at the top
so that flake8 editors don't complain by default. Pull request courtesy
Guilherme Mansur.
.. change::
:tags: feature, operations, postgresql
:tickets: 292
Added support for the USING clause to the ALTER COLUMN operation
for Postgresql. Support is via the
:paramref:`.op.alter_column.postgresql_using`
parameter. Pull request courtesy Frazer McLean.
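A sketch of a migration step using the new parameter; table and column
names are hypothetical::

    import sqlalchemy as sa
    from alembic import op

    op.alter_column(
        "accounts",
        "value",
        type_=sa.Integer(),
        postgresql_using="value::integer",  # explicit cast for the ALTER
    )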
.. change::
:tags: feature, autogenerate
Autogenerate with type comparison enabled will pick up on the timezone
setting changing between DateTime types. Pull request courtesy
David Szotten.
.. changelog::
:version: 0.8.7
:released: July 26, 2016
.. change::
:tags: bug, versioning
:tickets: 336
Fixed bug where upgrading to the head of a branch which is already
present would fail, only if that head were also the dependency
of a different branch that is also upgraded, as the revision system
would see this as trying to go in the wrong direction. The check
here has been refined to distinguish between same-branch revisions
out of order vs. movement along sibling branches.
.. change::
:tags: bug, versioning
:tickets: 379
Adjusted the version traversal on downgrade
such that we can downgrade to a version that is a dependency for
a version in a different branch, *without* needing to remove that
dependent version as well. Previously, the target version would be
seen as a "merge point" for its normal up-revision as well as the
dependency. This integrates with the changes for :ticket:`377`
and :ticket:`378` to improve treatment of branches with dependencies
overall.
.. change::
:tags: bug, versioning
:tickets: 377
Fixed bug where a downgrade to a version that is also a dependency
to a different branch would fail, as the system attempted to treat
this as an "unmerge" of a merge point, when in fact it doesn't have
the other side of the merge point available for update.
.. change::
:tags: bug, versioning
:tickets: 378
Fixed bug where the "alembic current" command wouldn't show a revision
as a current head if it were also a dependency of a version in a
different branch that's also applied. Extra logic is added to
extract "implied" versions of different branches from the top-level
versions listed in the alembic_version table.
.. change::
:tags: bug, versioning
Fixed bug where a repr() or str() of a Script object would fail
if the script had multiple dependencies.
.. change::
:tags: bug, autogenerate
Fixed bug in autogen where if the DB connection sends the default
schema as "None", this "None" would be removed from the list of
schemas to check if include_schemas were set. This could possibly
impact using include_schemas with SQLite.
.. change::
:tags: bug, batch
Small adjustment made to the batch handling for reflected CHECK
constraints to accommodate for SQLAlchemy 1.1 now reflecting these.
Batch mode still does not support CHECK constraints from the reflected
table as these can't be easily differentiated from the ones created
by types such as Boolean.
.. changelog::
:version: 0.8.6
:released: April 14, 2016
.. change::
:tags: bug, commands
:tickets: 367
Errors which occur within the Mako render step are now intercepted
and raised as CommandErrors like other failure cases; the Mako
exception itself is written using template-line formatting to
a temporary file which is named in the exception message.
.. change::
:tags: bug, postgresql
:tickets: 365
Added a fix to Postgresql server default comparison which first checks
if the text of the default is identical to the original, before attempting
to actually run the default. This accommodates for default-generation
functions that generate a new value each time such as a uuid function.
.. change::
:tags: bug, batch
:tickets: 361
Fixed bug introduced by the fix for :ticket:`338` in version 0.8.4
where a server default could no longer be dropped in batch mode.
Pull request courtesy Martin Domke.
.. change::
:tags: bug, batch, mssql
Fixed bug where SQL Server arguments for drop_column() would not
be propagated when running under a batch block. Pull request
courtesy Michal Petrucha.
.. changelog::
:version: 0.8.5
:released: March 9, 2016
.. change::
:tags: bug, autogenerate
:tickets: 335
Fixed bug where the columns rendered in a ``PrimaryKeyConstraint``
in autogenerate would inappropriately render the "key" of the
column, not the name. Pull request courtesy Jesse Dhillon.
.. change::
:tags: bug, batch
:tickets: 354
Repaired batch migration support for "schema" types which generate
constraints, in particular the ``Boolean`` datatype which generates
a CHECK constraint. Previously, an alter column operation with this
type would fail to correctly accommodate for the CHECK constraint
on change both from and to this type. In the former case the operation
would fail entirely, in the latter, the CHECK constraint would
not get generated. Both of these issues are repaired.
.. change::
:tags: bug, mysql
:tickets: 355
Changing a schema type such as ``Boolean`` to a non-schema type would
emit a drop constraint operation which emits ``NotImplementedError`` for
the MySQL dialect. This drop constraint operation is now skipped when
the constraint originates from a schema type.
.. changelog::
:version: 0.8.4
:released: December 15, 2015
.. change::
:tags: feature, versioning
A major improvement to the hash id generation function, which for some
reason used an awkward arithmetic formula against uuid4() that produced
values that tended to start with the digits 1-4. Replaced with a
simple substring approach which provides an even distribution. Pull
request courtesy Antti Haapala.
.. change::
:tags: feature, autogenerate
Added an autogenerate renderer for the :class:`.ExecuteSQLOp` operation
object; only renders if given a plain SQL string, otherwise raises
NotImplementedError. Can be of help with custom autogenerate
sequences that includes straight SQL execution. Pull request courtesy
Jacob Magnusson.
.. change::
:tags: bug, batch
:tickets: 345
Batch mode generates a FOREIGN KEY constraint that is self-referential
using the ultimate table name, rather than ``_alembic_batch_temp``.
When the table is renamed from ``_alembic_batch_temp`` back to the
original name, the FK now points to the right name. This
will **not** work if referential integrity is being enforced (eg. SQLite
"PRAGMA FOREIGN_KEYS=ON") since the original table is dropped and
the new table then renamed to that name, however this is now consistent
with how foreign key constraints on **other** tables already operate
with batch mode; these don't support batch mode if referential integrity
is enabled in any case.
.. change::
:tags: bug, autogenerate
:tickets: 341
Added a type-level comparator that distinguishes :class:`.Integer`,
:class:`.BigInteger`, and :class:`.SmallInteger` types and
dialect-specific types; these all have "Integer" affinity so previously
all compared as the same.
.. change::
:tags: bug, batch
:tickets: 338
Fixed bug where the ``server_default`` parameter of ``alter_column()``
would not function correctly in batch mode.
.. change::
:tags: bug, autogenerate
:tickets: 337
Adjusted the rendering for index expressions such that a :class:`.Column`
object present in the source :class:`.Index` will not be rendered
as table-qualified; e.g. the column name will be rendered alone.
Table-qualified names here were failing on systems such as Postgresql.
.. changelog::
:version: 0.8.3
:released: October 16, 2015
.. change::
:tags: bug, autogenerate
:tickets: 332
Fixed an 0.8 regression whereby the "imports" dictionary member of
the autogen context was removed; this collection is documented in the
"render custom type" documentation as a place to add new imports.
The member is now known as
:attr:`.AutogenContext.imports` and the documentation is repaired.
.. change::
:tags: bug, batch
:tickets: 333
Fixed bug in batch mode where a table that had pre-existing indexes
would create the same index on the new table with the same name,
which on SQLite produces a naming conflict as index names are in a
global namespace on that backend. Batch mode now defers the production
of both existing and new indexes until after the entire table transfer
operation is complete, which also means those indexes no longer take
effect during the INSERT from SELECT section as well; the indexes
are applied in a single step afterwards.
.. change::
:tags: bug, tests
Added "pytest-xdist" as a tox dependency, so that the -n flag
in the test command works if this is not already installed.
Pull request courtesy Julien Danjou.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 324
Fixed issue in PG server default comparison where model-side defaults
configured with Python unicode literals would leak the "u" character
from a ``repr()`` into the SQL used for comparison, creating an invalid
SQL expression, as the server-side comparison feature in PG currently
repurposes the autogenerate Python rendering feature to get a quoted
version of a plain string default.
.. changelog::
:version: 0.8.2
:released: August 25, 2015
.. change::
:tags: bug, autogenerate
:tickets: 321
Added workaround in new foreign key option detection feature for
MySQL's consideration of the "RESTRICT" option being the default,
for which no value is reported from the database; the MySQL impl now
corrects for when the model reports RESTRICT but the database reports
nothing. A similar rule is in the default FK comparison to accommodate
for the default "NO ACTION" setting being present in the model but not
necessarily reported by the database, or vice versa.
.. changelog::
:version: 0.8.1
:released: August 22, 2015
.. change::
:tags: feature, autogenerate
A custom :paramref:`.EnvironmentContext.configure.process_revision_directives`
hook can now generate op directives within the :class:`.UpgradeOps`
and :class:`.DowngradeOps` containers that will be generated as Python
code even when the ``--autogenerate`` flag is False; provided that
``revision_environment=True``, the full render operation will be run
even in "offline" mode.
.. change::
:tags: bug, autogenerate
Repaired the render operation for the :class:`.ops.AlterColumnOp` object
to succeed when the "existing_type" field was not present.
.. change::
:tags: bug, autogenerate
:tickets: 318
Fixed a regression in 0.8 whereby the "multidb" environment template
failed to produce independent migration script segments for the
output template. This was due to the reorganization of the script
rendering system for 0.8. To accommodate this change, the
:class:`.MigrationScript` structure will in the case of multiple
calls to :meth:`.MigrationContext.run_migrations` produce lists
for the :attr:`.MigrationScript.upgrade_ops` and
:attr:`.MigrationScript.downgrade_ops` attributes; each :class:`.UpgradeOps`
and :class:`.DowngradeOps` instance keeps track of its own
``upgrade_token`` and ``downgrade_token``, and each are rendered
individually.
.. seealso::
:ref:`autogen_customizing_multiengine_revision` - additional detail
on the workings of the
:paramref:`.EnvironmentContext.configure.process_revision_directives`
parameter when multiple calls to :meth:`.MigrationContext.run_migrations`
are made.
.. change::
:tags: feature, autogenerate
:tickets: 317
Implemented support for autogenerate detection of changes in the
``ondelete``, ``onupdate``, ``initially`` and ``deferrable``
attributes of :class:`.ForeignKeyConstraint` objects on
SQLAlchemy backends that support these on reflection
(as of SQLAlchemy 1.0.8 currently Postgresql for all four,
MySQL for ``ondelete`` and ``onupdate`` only). A constraint object
that modifies these values will be reported as a "diff" and come out
as a drop/create of the constraint with the modified values.
The fields are ignored for backends which don't reflect these
attributes (as of SQLA 1.0.8 this includes SQLite, Oracle, SQL Server,
others).
.. changelog::
:version: 0.8.0
:released: August 12, 2015
.. change::
:tags: bug, batch
:tickets: 315
Fixed bug in batch mode where the ``batch_op.create_foreign_key()``
directive would be incorrectly rendered with the source table and
schema names in the argument list.
.. change::
:tags: feature, commands
Added new command ``alembic edit``. This command takes the same
arguments as ``alembic show``, however runs the target script
file within $EDITOR. Makes use of the ``python-editor`` library
in order to facilitate the handling of $EDITOR with reasonable
default behaviors across platforms. Pull request courtesy
Michel Albert.
.. change::
:tags: feature, commands
:tickets: 311
Added new multiple-capable argument ``--depends-on`` to the
``alembic revision`` command, allowing ``depends_on`` to be
established at the command line level rather than having to edit
the file after the fact. ``depends_on`` identifiers may also be
specified as branch names at the command line or directly within
the migration file. The values may be specified as partial
revision numbers from the command line which will be resolved to
full revision numbers in the output file.
.. change::
:tags: change, operations
A range of positional argument names have been changed to be
clearer and more consistent across methods within the
:class:`.Operations` namespace. The most prevalent form of name change
is that the descriptive names ``constraint_name`` and ``table_name``
are now used where previously the name ``name`` would be used.
This is in support of the newly modularized and extensible system of
operation objects in :mod:`alembic.operations.ops`.
An argument translation layer is in place
across the ``alembic.op`` namespace that will ensure that named
argument calling styles that use the old names will continue to
function by transparently translating to the new names,
also emitting a warning. This, along with the fact that these
arguments are positional in any case and aren't normally
passed with an explicit name, should ensure that the
overwhelming majority of applications should be unaffected by this
change. The *only* applications that are impacted are those that:
1. use the :class:`.Operations` object directly in some way, rather
than calling upon the ``alembic.op`` namespace, and
2. invoke the methods on :class:`.Operations` using named keyword
arguments for positional arguments like ``table_name``,
``constraint_name``, etc., which commonly were named ``name``
as of 0.7.6.
3. any application that is using named keyword arguments in place
of positional argument for the recently added
:class:`.BatchOperations` object may also be affected.
The naming changes are documented as "versionchanged" for 0.8.0:
* :meth:`.BatchOperations.create_check_constraint`
* :meth:`.BatchOperations.create_foreign_key`
* :meth:`.BatchOperations.create_index`
* :meth:`.BatchOperations.create_unique_constraint`
* :meth:`.BatchOperations.drop_constraint`
* :meth:`.BatchOperations.drop_index`
* :meth:`.Operations.create_check_constraint`
* :meth:`.Operations.create_foreign_key`
* :meth:`.Operations.create_primary_key`
* :meth:`.Operations.create_index`
* :meth:`.Operations.create_table`
* :meth:`.Operations.create_unique_constraint`
* :meth:`.Operations.drop_constraint`
* :meth:`.Operations.drop_index`
* :meth:`.Operations.drop_table`
.. change::
:tags: feature, tests
The default test runner via "python setup.py test" is now py.test.
nose still works via run_tests.py.
.. change::
:tags: feature, operations
:tickets: 302
The internal system for Alembic operations has been reworked to now
build upon an extensible system of operation objects. New operations
can be added to the ``op.`` namespace, including that they are
available in custom autogenerate schemes.
.. seealso::
:ref:`operation_plugins`
.. change::
:tags: feature, autogenerate
:tickets: 301, 306
The internal system for autogenerate has been reworked to build upon
the extensible system of operation objects present in
:ticket:`302`. As part of this change, autogenerate now produces
a full object graph representing a list of migration scripts to
be written as well as operation objects that will render all the
Python code within them; a new hook
:paramref:`.EnvironmentContext.configure.process_revision_directives`
allows end-user code to fully customize what autogenerate will do,
including not just full manipulation of the Python steps to take
but also what file or files will be written and where. Additionally,
autogenerate is now extensible as far as database objects compared
and rendered into scripts; any new operation directive can also be
registered into a series of hooks that allow custom database/model
comparison functions to run as well as to render new operation
directives into autogenerate scripts.
.. seealso::
:ref:`alembic.autogenerate.toplevel`
.. change::
:tags: bug, versioning
:tickets: 314
Fixed bug where in the erroneous case that alembic_version contains
duplicate revisions, some commands would fail to process the
version history correctly and end up with a KeyError. The fix
allows the versioning logic to proceed, however a clear error is
emitted later when attempting to update the alembic_version table.
.. changelog::
:version: 0.7.7
:released: July 22, 2015
.. change::
:tags: bug, versioning
:tickets: 310
Fixed critical issue where a complex series of branches/merges would
bog down the iteration algorithm working over redundant nodes for
millions of cycles. An internal adjustment has been
made so that duplicate nodes are skipped within this iteration.
.. change::
:tags: feature, batch
:tickets: 305
Implemented support for :meth:`.BatchOperations.create_primary_key`
and :meth:`.BatchOperations.create_check_constraint`. Additionally,
table keyword arguments are copied from the original reflected table,
such as the "mysql_engine" keyword argument.
.. change::
:tags: bug, environment
:tickets: 300
The :meth:`.MigrationContext.stamp` method, added as part of the
versioning refactor in 0.7 as a more granular version of
:func:`.command.stamp`, now includes the "create the alembic_version
table if not present" step in the same way as the command version,
which was previously omitted.
.. change::
:tags: bug, autogenerate
:tickets: 298
Fixed bug where foreign key options including "onupdate",
"ondelete" would not render within the ``op.create_foreign_key()``
directive, even though they render within a full
``ForeignKeyConstraint`` directive.
.. change::
:tags: bug, tests
Repaired warnings that occur when running unit tests against
SQLAlchemy 1.0.5 or greater involving the "legacy_schema_aliasing"
flag.
.. changelog::
:version: 0.7.6
:released: May 5, 2015
.. change::
:tags: feature, versioning
:tickets: 297
Fixed bug where, in the case of multiple mergepoints that all
have the identical set of ancestor revisions, the upgrade would fail,
producing an assertion failure. Merge points were
previously assumed to always require at least an UPDATE in
alembic_revision from one of the previous revs to the new one,
however in this case, if one of the mergepoints has already
been reached, the remaining mergepoints have no row to UPDATE therefore
they must do an INSERT of their target version.
.. change::
:tags: feature, autogenerate
:tickets: 296
Added support for type comparison functions to be not just per
environment, but also present on the custom types themselves, by
supplying a method ``compare_against_backend``.
Added a new documentation section :ref:`compare_types` describing
type comparison fully.
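A sketch of the per-type hook; the type itself is hypothetical, and the
return convention (True for a match, False for a mismatch, None to defer
to the default comparison) follows the documented contract::

    import sqlalchemy.types as types

    class MyEpochType(types.TypeDecorator):
        impl = types.Integer

        def compare_against_backend(self, dialect, conn_type):
            # consider any Integer-based database type equivalent
            return isinstance(conn_type, types.Integer)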
.. change::
:tags: feature, operations
:tickets: 255
Added a new option
:paramref:`.EnvironmentContext.configure.literal_binds`, which
will pass the ``literal_binds`` flag into the compilation of SQL
constructs when using "offline" mode. This has the effect that
SQL objects like inserts, updates, deletes as well as textual
statements sent using ``text()`` will be compiled such that the dialect
will attempt to render literal values "inline" automatically.
Only a subset of types is typically supported; the
:meth:`.Operations.inline_literal` construct remains as the construct
used to force a specific literal representation of a value.
The :paramref:`.EnvironmentContext.configure.literal_binds` flag
is added to the "offline" section of the ``env.py`` files generated
in new environments.
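As it appears in the "offline" section of a generated ``env.py``, roughly::

    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,  # render literal values inline in --sql mode
    )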
.. change::
:tags: bug, batch
:tickets: 289
Fully implemented the
:paramref:`~.Operations.batch_alter_table.copy_from` parameter for
batch mode, which previously was not functioning. This allows
"batch mode" to be usable in conjunction with ``--sql``.
.. change::
:tags: bug, batch
:tickets: 287
Repaired support for the :meth:`.BatchOperations.create_index`
directive, which was mis-named internally such that the operation
within a batch context could not proceed. The create index
operation will proceed as part of a larger "batch table recreate"
operation only if
:paramref:`~.Operations.batch_alter_table.recreate` is set to
"always", or if the batch operation includes other instructions that
require a table recreate.
.. changelog::
:version: 0.7.5
:released: March 19, 2015
.. change::
:tags: bug, autogenerate
:tickets: 266
The ``--autogenerate`` option is not valid when used in conjunction
with "offline" mode, e.g. ``--sql``. This now raises a ``CommandError``,
rather than failing more deeply later on. Pull request courtesy
Johannes Erdfelt.
.. change::
:tags: bug, operations, mssql
:tickets: 284
Fixed bug where the mssql DROP COLUMN directive failed to include
modifiers such as "schema" when emitting the DDL.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 282
Postgresql "functional" indexes are necessarily skipped from the
autogenerate process, as the SQLAlchemy backend currently does not
support reflection of these structures. A warning is emitted
both from the SQLAlchemy backend as well as from the Alembic
backend for Postgresql when such an index is detected.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 276
Fixed bug where MySQL backend would report dropped unique indexes
and/or constraints as both at the same time. This is because
MySQL doesn't actually have a "unique constraint" construct that
reports differently than a "unique index", so it is present in both
lists. The net effect though is that the MySQL backend will report
a dropped unique index/constraint as an index in cases where the object
was first created as a unique constraint, if no other information
is available to make the decision. This differs from other backends
like Postgresql which can report on unique constraints and
unique indexes separately.
.. change::
:tags: bug, commands
:tickets: 269
Fixed bug where using a partial revision identifier as the
"starting revision" in ``--sql`` mode in a downgrade operation
would fail to resolve properly.
As a side effect of this change, the
:meth:`.EnvironmentContext.get_starting_revision_argument`
method will return the "starting" revision in its originally-
given "partial" form in all cases, whereas previously when
running within the :meth:`.command.stamp` command, it would have
been resolved to a full number before passing it to the
:class:`.EnvironmentContext`. The resolution of this value to
a real revision number has basically been moved to a more fundamental
level within the offline migration process.
.. change::
:tags: feature, commands
Added a new feature :attr:`.Config.attributes`, to help with the use
case of sharing state such as engines and connections on the outside
with a series of Alembic API calls; also added a new cookbook section
to describe this simple but pretty important use case.
.. seealso::
:ref:`connection_sharing`
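A sketch of the connection-sharing recipe; ``my_connection`` is a
hypothetical pre-existing :class:`~sqlalchemy.engine.Connection`::

    from alembic import command
    from alembic.config import Config

    cfg = Config("alembic.ini")
    cfg.attributes["connection"] = my_connection
    command.upgrade(cfg, "head")

    # then, inside env.py:
    # connectable = config.attributes.get("connection", None)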
.. change::
:tags: feature, environment
The format of the default ``env.py`` script has been refined a bit;
it now uses context managers not only for the scope of the transaction,
but also for connectivity from the starting engine. The engine is also
now called a "connectable" in support of the use case of an external
connection being passed in.
.. change::
:tags: feature, versioning
:tickets: 267
Added support for "alembic stamp" to work when given "heads" as an
argument, when multiple heads are present.
.. changelog::
:version: 0.7.4
:released: January 12, 2015
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 241
Repaired issue where a server default specified without ``text()``
that represented a numeric or floating point (e.g. with decimal places)
value would fail in the Postgresql-specific check for "compare server
default"; as PG accepts the value with quotes in the table specification,
it's still valid. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug, autogenerate
:tickets: 259
The rendering of a :class:`~sqlalchemy.schema.ForeignKeyConstraint`
will now ensure that the names of the source and target columns are
the database-side name of each column, and not the value of the
``.key`` attribute as may be set only on the Python side.
This is because Alembic generates the DDL for constraints
as standalone objects without the need to actually refer to an in-Python
:class:`~sqlalchemy.schema.Table` object, so there's no step that
would resolve these Python-only key names to database column names.
.. change::
:tags: bug, autogenerate
:tickets: 260
Fixed bug in foreign key autogenerate where if the in-Python table
used custom column keys (e.g. using the ``key='foo'`` kwarg to
``Column``), the comparison of existing foreign keys to those specified
in the metadata would fail, as the reflected table would not have
these keys available with which to match up. Foreign key comparison for
autogenerate now ensures it's looking at the database-side names
of the columns in all cases; this matches the same functionality
within unique constraints and indexes.
.. change::
:tags: bug, autogenerate
:tickets: 261
Fixed issue in autogenerate type rendering where types that belong
to modules that have the name "sqlalchemy" in them would be mistaken
as being part of the ``sqlalchemy.`` namespace. Pull request courtesy
Bartosz Burclaf.
.. changelog::
:version: 0.7.3
:released: December 30, 2014
.. change::
:tags: bug, versioning
:tickets: 258
Fixed regression in new versioning system where upgrade / history
operation would fail on AttributeError if no version files were
present at all.
.. changelog::
:version: 0.7.2
:released: December 18, 2014
.. change::
:tags: bug, sqlite, autogenerate
Adjusted the SQLite backend regarding autogen of unique constraints
to work fully with the current SQLAlchemy 1.0, which now will report
on UNIQUE constraints that have no name.
.. change::
:tags: bug, batch
:tickets: 254
Fixed bug in batch where if the target table contained multiple
foreign keys to the same target table, the batch mechanics would
fail with a "table already exists" error. Thanks for the help
on this from Lucas Kahlert.
.. change::
:tags: bug, mysql
:tickets: 251
Fixed an issue where the MySQL routine to skip foreign-key-implicit
indexes would also catch unnamed unique indexes, as they would be
named after the column and look like the FK indexes. Pull request
courtesy Johannes Erdfelt.
.. change::
:tags: bug, mssql, oracle
:tickets: 253
Repaired a regression in both the MSSQL and Oracle dialects whereby
the overridden ``_exec()`` method failed to return a value, as is
needed now in the 0.7 series.
.. changelog::
:version: 0.7.1
:released: December 3, 2014
.. change::
:tags: bug, batch
The ``render_as_batch`` flag was inadvertently hardcoded to ``True``,
so all autogenerates were emitting batch mode; this has been
fixed so that batch mode is again used only when selected in env.py.
.. change::
:tags: feature, autogenerate
:tickets: 178
Support for autogenerate of FOREIGN KEY constraints has been added.
These are delivered within the autogenerate process in the same
manner as UNIQUE constraints, including ``include_object`` support.
Big thanks to Ann Kamyshnikova for doing the heavy lifting here.
.. change::
:tags: feature, batch
Added :paramref:`~.Operations.batch_alter_table.naming_convention`
argument to :meth:`.Operations.batch_alter_table`, as this is necessary
in order to drop foreign key constraints; these are often unnamed
on the target database, and in the case that they are named, SQLAlchemy
is as of the 0.9 series not including these names yet.
.. seealso::
:ref:`dropping_sqlite_foreign_keys`
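A sketch of dropping an unnamed SQLite foreign key by supplying a
convention that generates its name; the convention and constraint name
shown are illustrative::

    naming_convention = {
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    }

    with op.batch_alter_table(
        "bar", naming_convention=naming_convention
    ) as batch_op:
        batch_op.drop_constraint("fk_bar_foo_id_foo", type_="foreignkey")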
.. change::
:tags: bug, batch
Fixed bug where the "source_schema" argument was not correctly passed
when calling :meth:`.BatchOperations.create_foreign_key`. Pull
request courtesy Malte Marquarding.
.. change::
:tags: bug, batch
:tickets: 249
Repaired the inspection, copying and rendering of CHECK constraints
and so-called "schema" types such as Boolean, Enum within the batch
copy system; the CHECK constraint will not be "doubled" when the table is
copied, and additionally the inspection of the CHECK constraint for
its member columns will no longer fail with an attribute error.
.. change::
:tags: feature, batch
Added two new arguments
:paramref:`.Operations.batch_alter_table.reflect_args`
and :paramref:`.Operations.batch_alter_table.reflect_kwargs`, so that
arguments may be passed directly to suit the
:class:`~.sqlalchemy.schema.Table`
object that will be reflected.
.. seealso::
:ref:`batch_controlling_table_reflection`
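A sketch of overriding how one column is reflected for the batch copy; the
column override is illustrative::

    from sqlalchemy import Boolean, Column
    from alembic import op

    with op.batch_alter_table(
        "some_table",
        reflect_args=[Column("flag", Boolean(create_constraint=False))],
    ) as batch_op:
        batch_op.alter_column(
            "flag", new_column_name="bflag", existing_type=Boolean()
        )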
.. changelog::
:version: 0.7.0
:released: November 24, 2014
.. change::
:tags: feature, versioning
:tickets: 167
The "multiple heads / branches" feature has now landed. This is
by far the most significant change Alembic has seen since its inception;
while the workflow of most commands hasn't changed, and the format
of version files and the ``alembic_version`` table are unchanged as well,
a new suite of features opens up in the case where multiple version
files refer to the same parent, or to the "base". Merging of
branches, operating across distinct named heads, and multiple
independent bases are now all supported. The feature incurs radical
changes to the internals of versioning and traversal, and should be
treated as "beta mode" for the next several subsequent releases
within 0.7.
.. seealso::
:ref:`branches`
.. change::
:tags: feature, versioning
:tickets: 124
In conjunction with support for multiple independent bases, the
specific version directories are now also configurable to include
multiple, user-defined directories. When multiple directories exist,
the creation of a revision file with no down revision requires
that the starting directory is indicated; the creation of subsequent
revisions along that lineage will then automatically use that
directory for new files.
.. seealso::
:ref:`multiple_version_directories`
.. change::
:tags: feature, operations, sqlite
:tickets: 21
Added "move and copy" workflow, where a table to be altered is copied to
a new one with the new structure and the old one dropped, is now
implemented for SQLite as well as all database backends in general
using the new :meth:`.Operations.batch_alter_table` system. This
directive provides a table-specific operations context which gathers
column- and constraint-level mutations specific to that table, and
at the end of the context creates a new table combining the structure
of the old one with the given changes, copies data from old table to new,
and finally drops the old table,
renaming the new one to the existing name. This is required for
fully featured SQLite migrations, as SQLite has very little support for the
traditional ALTER directive. The batch directive
is intended to produce code that is still compatible with other databases,
in that the "move and copy" process only occurs for SQLite by default,
while still providing some level of sanity to SQLite's
requirement by allowing multiple table mutation operations to
proceed within one "move and copy" as well as providing explicit
control over when this operation actually occurs. The "move and copy"
feature may be optionally applied to other backends as well, however
dealing with referential integrity constraints from other tables must
still be handled explicitly.
.. seealso::
:ref:`batch_migrations`
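In its simplest form the directive reads like this sketch; on SQLite, the
operations inside the block are gathered into a single "move and copy" of
the table::

    from sqlalchemy import Column, Integer
    from alembic import op

    with op.batch_alter_table("some_table") as batch_op:
        batch_op.add_column(Column("foo", Integer))
        batch_op.drop_column("bar")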
.. change::
:tags: feature, commands
Relative revision identifiers as used with ``alembic upgrade``,
``alembic downgrade`` and ``alembic history`` can be combined with
specific revisions as well, e.g. ``alembic upgrade ae10+3``, to produce
a migration target relative to the given exact version.
.. change::
:tags: bug, commands
:tickets: 248
The ``alembic revision`` command accepts the ``--sql`` option to
suit some very obscure use case where the ``revision_environment``
flag is set up, so that ``env.py`` is run when ``alembic revision``
is run even though autogenerate isn't specified. As this flag is
otherwise confusing, error messages are now raised if
``alembic revision`` is invoked with both ``--sql`` and
``--autogenerate`` or with ``--sql`` without
``revision_environment`` being set.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 247
Added a rule for Postgresql to not render a "drop unique" and "drop index"
given the same name; for now it is assumed that the "index" is the
implicit one PostgreSQL generates. Future integration with
new SQLAlchemy 1.0 features will improve this to be more
resilient.
.. change::
:tags: bug, autogenerate
:tickets: 247
A change in the ordering when columns and constraints are dropped;
autogenerate will now place the "drop constraint" calls *before*
the "drop column" calls, so that columns involved in those constraints
still exist when the constraint is dropped.
.. change::
:tags: feature, commands
New commands added: ``alembic show``, ``alembic heads`` and
``alembic merge``. Also, a new option ``--verbose`` has been
added to several informational commands, such as ``alembic history``,
``alembic current``, ``alembic branches``, and ``alembic heads``.
``alembic revision`` also contains several new options used
within the new branch management system. The output of commands has
been altered in many cases to support new fields and attributes;
the ``history`` command in particular now returns its "verbose" output
only if ``--verbose`` is sent; without this flag it reverts to its
older behavior of short line items (which was never changed in the docs).
.. change::
:tags: changed, commands
The ``--head_only`` option to the ``alembic current`` command is
deprecated; the ``current`` command now lists just the version numbers
alone by default; use ``--verbose`` to get at additional output.
.. change::
:tags: feature, config
Added new argument :paramref:`.Config.config_args`, which allows a dictionary
of replacement variables to be passed; these will serve as substitution
values when an API-produced :class:`.Config` consumes the ``.ini``
file. Pull request courtesy Noufal Ibrahim.
.. change::
:tags: bug, oracle
:tickets: 245
The Oracle dialect sets "transactional DDL" to False by default,
as Oracle does not support transactional DDL.
.. change::
:tags: bug, autogenerate
:tickets: 243
Fixed a variety of issues surrounding rendering of Python code that
contains unicode literals. The first is that the "quoted_name" construct
that SQLAlchemy uses to represent table and column names as well
as schema names does not ``repr()`` correctly on Py2K when the value
contains unicode characters; therefore an explicit stringification is
added to these. Additionally, SQL expressions such as server defaults
were not being generated in a unicode-safe fashion leading to decode
errors if server defaults contained non-ascii characters.
.. change::
:tags: bug, operations
:tickets: 174
The :meth:`.Operations.add_column` directive will now additionally emit
the appropriate ``CREATE INDEX`` statement if the
:class:`~sqlalchemy.schema.Column` object specifies ``index=True``.
Pull request courtesy David Szotten.
.. change::
:tags: feature, operations
:tickets: 205
The :class:`~sqlalchemy.schema.Table` object is now returned when
the :meth:`.Operations.create_table` method is used. This ``Table``
is suitable for use in subsequent SQL operations, in particular
the :meth:`.Operations.bulk_insert` operation.
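A sketch of the round trip; the table and rows are illustrative::

    import sqlalchemy as sa
    from alembic import op

    accounts = op.create_table(
        "accounts",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )

    op.bulk_insert(
        accounts,
        [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}],
    )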
.. change::
:tags: feature, autogenerate
:tickets: 203
Indexes and unique constraints are now included in the
:paramref:`.EnvironmentContext.configure.include_object` hook.
Indexes are sent with type ``"index"`` and unique constraints with
type ``"unique_constraint"``.
.. change::
:tags: bug, autogenerate
:tickets: 219
Bound parameters are now resolved as "literal" values within the
SQL expression inside of a CheckConstraint(), when rendering the SQL
as a text string; supported for SQLAlchemy 0.8.0 and forward.
.. change::
:tags: bug, autogenerate
:tickets: 199
Added a workaround for SQLAlchemy issue #3023 (fixed in 0.9.5) where
a column that's part of an explicit PrimaryKeyConstraint would not
have its "nullable" flag set to False, thus producing a false
autogenerate. Also added a related correction to MySQL which will
correct for MySQL's implicit server default of '0' when a NULL integer
column is turned into a primary key column.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 240
Repaired issue related to the fix for :ticket:`208` and others; a composite
foreign key reported by MySQL would cause a KeyError as Alembic
attempted to remove MySQL's implicitly generated indexes from the
autogenerate list.
.. change::
:tags: bug, autogenerate
:tickets: 28
If the "alembic_version" table is present in the target metadata,
autogenerate will skip this also. Pull request courtesy
Dj Gilcrease.
.. change::
:tags: bug, autogenerate
:tickets: 77
The :paramref:`.EnvironmentContext.configure.version_table`
and :paramref:`.EnvironmentContext.configure.version_table_schema`
arguments are now honored during the autogenerate process, such that
these names will be used as the "skip" names on both the database
reflection and target metadata sides.
.. change::
:tags: changed, autogenerate
:tickets: 229
The default value of the
:paramref:`.EnvironmentContext.configure.user_module_prefix`
parameter is **no longer the same as the SQLAlchemy prefix**.
When omitted, user-defined types will now use the ``__module__``
attribute of the type class itself when rendering in an
autogenerated module.
.. change::
:tags: bug, templates
:tickets: 234
Revision files are now written out using the ``'wb'`` modifier to
``open()``, since Mako reads the templates with ``'rb'``, thus preventing
CRs from being doubled up as has been observed on windows. The encoding
of the output now defaults to 'utf-8', which can be configured using
a newly added config file parameter ``output_encoding``.
.. change::
:tags: bug, operations
:tickets: 230
Added support for use of the :class:`~sqlalchemy.sql.elements.quoted_name`
construct when using the ``schema`` argument within operations. This
allows a name containing a dot to be fully quoted, as well as to
provide configurable quoting on a per-name basis.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 73
Added a routine by which the Postgresql Alembic dialect inspects
the server default of INTEGER/BIGINT columns as they are reflected
during autogenerate for the pattern ``nextval(<name>...)`` containing
a potential sequence name, then queries ``pg_catalog`` to see if this
sequence is "owned" by the column being reflected; if so, it assumes
this is a SERIAL or BIGSERIAL column and the server default is
omitted from the column reflection as well as any kind of
server_default comparison or rendering, along with an INFO message
in the logs indicating this has taken place. This allows SERIAL/BIGSERIAL
columns to keep the SEQUENCE from being unnecessarily present within
the autogenerate operation.
.. change::
:tags: bug, autogenerate
:tickets: 197, 64, 196
The system by which autogenerate renders expressions within
a :class:`~sqlalchemy.schema.Index`, the ``server_default``
of :class:`~sqlalchemy.schema.Column`, and the
``existing_server_default`` of
:meth:`.Operations.alter_column` has been overhauled to anticipate
arbitrary SQLAlchemy SQL constructs, such as ``func.somefunction()``,
``cast()``, ``desc()``, and others. The system does not, as might
be preferred, render the full-blown Python expression as originally
created within the application's source code, as this would be exceedingly
complex and difficult. Instead, it renders the SQL expression against
the target backend that's subject to the autogenerate, and then
renders that SQL inside of a :func:`~sqlalchemy.sql.expression.text`
construct as a literal SQL string. This approach still has the
downside that the rendered SQL construct may not be backend-agnostic
in all cases, so there is still a need for manual intervention in that
small number of cases, but overall the majority of cases should work
correctly now. Big thanks to Carlos Rivera for pull requests and
support on this.
.. change::
:tags: feature
SQLAlchemy's testing infrastructure is now used to run tests.
This system supports both nose and pytest and opens the way
for Alembic testing to support any number of backends, parallel
testing, and 3rd party dialect testing.
.. change::
:tags: changed, compatibility
Minimum SQLAlchemy version is now 0.7.6, however at least
0.8.4 is strongly recommended. The overhaul of the test suite
allows for fully passing tests on all SQLAlchemy versions
from 0.7.6 on forward.
.. change::
:tags: bug, operations
The "match" keyword is not sent to :class:`.ForeignKeyConstraint`
by :meth:`.Operations.create_foreign_key` when SQLAlchemy 0.7 is in use;
this keyword was added to SQLAlchemy as of 0.8.0.
.. changelog::
:version: 0.6.7
:released: September 9, 2014
.. change::
:tags: bug, mssql
Fixed bug in MSSQL dialect where "rename table" wasn't using
``sp_rename()`` as is required on SQL Server. Pull request courtesy
Łukasz Bołdys.
.. change::
:tags: feature
:tickets: 222
Added support for functional indexes when using the
:meth:`.Operations.create_index` directive. Within the list of columns,
the SQLAlchemy ``text()`` construct can be sent, embedding a literal
SQL expression; the :meth:`.Operations.create_index` will perform some hackery
behind the scenes to get the :class:`.Index` construct to cooperate.
This works around some current limitations in :class:`.Index`
which should be resolved on the SQLAlchemy side at some point.
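A sketch of the usage; the index name, table, and expression are
illustrative::

    from sqlalchemy import text
    from alembic import op

    op.create_index(
        "ix_user_lower_name", "user", [text("lower(name)")]
    )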
.. changelog::
:version: 0.6.6
:released: August 7, 2014
.. change::
:tags: bug
:tickets: 95
A file named ``__init__.py`` in the ``versions/`` directory is now
ignored by Alembic when the collection of version files is retrieved.
Pull request courtesy Michael Floering.
.. change::
:tags: bug
Fixed Py3K bug where an attempt would be made to sort None against
string values when autogenerate would detect tables across multiple
schemas, including the default schema. Pull request courtesy
paradoxxxzero.
.. change::
:tags: bug
Autogenerate render will render the arguments within a Table construct
using ``*[...]`` when the number of columns/elements is greater than
255. Pull request courtesy Ryan P. Kelly.
.. change::
:tags: bug
Fixed bug where foreign key constraints would fail to render in
autogenerate when a schema name was present. Pull request courtesy
Andreas Zeidler.
.. change::
:tags: bug
:tickets: 212
Some deep-in-the-weeds fixes to try to get "server default" comparison
working better across platforms and expressions, in particular on
the Postgresql backend, mostly dealing with quoting/not quoting of various
expressions at the appropriate time and on a per-backend basis.
Repaired and tested support for such defaults as Postgresql interval
and array defaults.
.. change::
:tags: enhancement
:tickets: 209
When a run of Alembic command line fails due to ``CommandError``,
the output now prefixes the string with ``"FAILED:"``, and the error
is also written to the log output using ``log.error()``.
.. change::
:tags: bug
:tickets: 208
Liberalized even more the check for MySQL indexes that shouldn't be
counted in autogenerate as "drops"; this time it's been reported
that an implicitly created index might be named the same as a composite
foreign key constraint, and not the actual columns, so we now skip those
when detected as well.
.. change::
:tags: feature
Added a new accessor :attr:`.MigrationContext.config`, when used
in conjunction with a :class:`.EnvironmentContext` and
:class:`.Config`, this config will be returned. Patch
courtesy Marc Abramowitz.
.. changelog::
:version: 0.6.5
:released: May 3, 2014
.. change::
:tags: bug, autogenerate, mysql
:tickets: 202
Fixed this release's "autogenerate index detection" bug: when a MySQL table
includes an Index with the same name as a column, autogenerate reported
it as an "add" even though it's not; this is because we ignore reflected
indexes of this nature due to MySQL creating them implicitly. Indexes
that are named the same as a column are now ignored on
MySQL if we see that the backend is reporting that it already exists;
this indicates that we can still detect additions of these indexes
but not drops, as we cannot distinguish a backend index same-named
as the column as one that is user generated or mysql-generated.
.. change::
:tags: feature, environment
:tickets: 201
Added new feature :paramref:`.EnvironmentContext.configure.transaction_per_migration`,
which when True causes the BEGIN/COMMIT pair to incur for each migration
individually, rather than for the whole series of migrations. This is
to assist with some database directives that need to be within individual
transactions, without the need to disable transactional DDL entirely.
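A minimal ``env.py`` sketch enabling the flag::

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        transaction_per_migration=True,  # BEGIN/COMMIT per migration
    )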
.. change::
:tags: bug, autogenerate
:tickets: 200
Fixed bug where the ``include_object()`` filter would not receive
the original :class:`.Column` object when evaluating a database-only
column to be dropped; the object would not include the parent
:class:`.Table` nor other aspects of the column that are important
for generating the "downgrade" case where the column is recreated.
.. change::
:tags: bug, environment
:tickets: 195
Fixed bug where :meth:`.EnvironmentContext.get_x_argument`
would fail if the :class:`.Config` in use didn't actually
originate from a command line call.
.. change::
:tags: bug, autogenerate
:tickets: 194
Fixed another bug regarding naming conventions, continuing
from :ticket:`183`, where add_index() and
drop_index() directives would not correctly render the ``f()``
construct when the index contained a convention-driven name.
.. changelog::
:version: 0.6.4
:released: March 28, 2014
.. change::
:tags: bug, mssql
:tickets: 186
Added quoting to the table name when the special EXEC is run to
drop any existing server defaults or constraints when the
:paramref:`.Operations.drop_column.mssql_drop_check` or
:paramref:`.Operations.drop_column.mssql_drop_default`
arguments are used.
.. change::
:tags: bug, mysql
:tickets: 103
Added/fixed support for MySQL "SET DEFAULT" / "DROP DEFAULT" phrases,
which will now be rendered if only the server default is changing
or being dropped (e.g. specify None to alter_column() to indicate
"DROP DEFAULT"). Also added support for rendering MODIFY rather than
CHANGE when the column name isn't changing.
.. change::
:tags: bug
:tickets: 190
Added support for the ``initially``, ``match`` keyword arguments
as well as dialect-specific keyword arguments to
:meth:`.Operations.create_foreign_key`.
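A hypothetical call using these arguments (table, column and constraint
names are illustrative only)::
    from alembic import op
    op.create_foreign_key(
        "fk_address_user", "address", "user",
        ["user_id"], ["id"],
        deferrable=True, initially="DEFERRED", match="FULL",
    )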
.. change::
:tags: feature
:tickets: 163
Altered the support for "sourceless" migration files (e.g. only
.pyc or .pyo present) so that the flag "sourceless=true" needs to
be in alembic.ini for this behavior to take effect.
.. change::
:tags: bug, mssql
:tickets: 185
The feature that keeps on giving, index/unique constraint autogenerate
detection, has even more fixes, this time to accommodate database dialects
that don't yet report on unique constraints, but whose backend
does report unique constraints as indexes. The logic
Alembic uses to distinguish between "this is an index!" vs.
"this is a unique constraint that is also reported as an index!" has now
been further enhanced to not produce unwanted migrations when the dialect
is observed to not yet implement get_unique_constraints() (e.g. mssql).
Note that such a backend will no longer report index drops for unique
indexes, as these cannot be distinguished from an unreported unique
index.
.. change::
:tags: bug
:tickets: 183
Extensive changes have been made to more fully support SQLAlchemy's new
naming conventions feature. Note that while SQLAlchemy has added this
feature as of 0.9.2, some additional fixes in 0.9.4 are needed to
resolve some of the issues:
1. The :class:`.Operations` object now takes into account the naming
conventions that are present on the :class:`.MetaData` object that's
associated using :paramref:`~.EnvironmentContext.configure.target_metadata`.
When :class:`.Operations` renders a constraint directive like
``ADD CONSTRAINT``, it now will make use of this naming convention
when it produces its own temporary :class:`.MetaData` object.
2. Note however that the autogenerate feature in most cases generates
constraints like foreign keys and unique constraints with the
final names intact; the only exception are the constraints implicit
with a schema-type like Boolean or Enum. In most of these cases,
the naming convention feature will not take effect for these constraints
and will instead use the given name as is, with one exception....
3. Naming conventions which use the ``"%(constraint_name)s"`` token, that
is, produce a new name that uses the original name as a component,
will still be pulled into the naming convention converter and be
converted. The problem arises when autogenerate renders a constraint
with its already-generated name present in the migration file's source
code, the name will be doubled up at render time due to the combination
of #1 and #2. So to work around this, autogenerate now renders these
already-tokenized names using the new :meth:`.Operations.f` component.
This component is only generated if **SQLAlchemy 0.9.4** or greater
is in use.
Therefore it is highly recommended that an upgrade to Alembic 0.6.4
be accompanied by an upgrade of SQLAlchemy 0.9.4, if the new naming
conventions feature is used.
.. seealso::
:ref:`autogen_naming_conventions`
.. change::
:tags: bug
:tickets: 160
Suppressed IOErrors which can be raised when the program output pipe
is closed under a program like ``head``; however this only
works on Python 2. On Python 3, there is not yet a known way to
suppress the BrokenPipeError warnings without prematurely terminating
the program via signals.
.. change::
:tags: bug
:tickets: 179
Fixed bug where :meth:`.Operations.bulk_insert` would not function
properly when :meth:`.Operations.inline_literal` values were used,
either in --sql or non-sql mode. The values will now render
directly in --sql mode. For compatibility with "online" mode,
a new flag :paramref:`~.Operations.bulk_insert.multiinsert`
can be set to False which will cause each parameter set to be
compiled and executed with individual INSERT statements.
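For example, to force one INSERT per row when inline literals are present
(the table here is illustrative)::
    import sqlalchemy as sa
    from alembic import op
    accounts = sa.table(
        "accounts",
        sa.column("id", sa.Integer),
        sa.column("name", sa.String),
    )
    op.bulk_insert(
        accounts,
        [{"id": 1, "name": op.inline_literal("alice")}],
        multiinsert=False,  # compile and execute each parameter set separately
    )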
.. change::
:tags: bug, py3k
:tickets: 175
Fixed a failure of the system that allows "legacy keyword arguments"
to be understood, which arose as of a change in Python 3.4 regarding
decorators. A workaround is applied that allows the code to work
across Python 3 versions.
.. change::
:tags: feature
The :func:`.command.revision` command now returns the :class:`.Script`
object corresponding to the newly generated revision. From this
structure, one can get the revision id, the module documentation,
and everything else, for use in scripts that call upon this command.
Pull request courtesy Robbie Coomber.
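A sketch of programmatic use (the ini filename and message are assumptions)::
    from alembic import command
    from alembic.config import Config
    cfg = Config("alembic.ini")
    script = command.revision(cfg, message="add accounts table")
    print(script.revision, script.doc)  # revision id, module docstring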
.. changelog::
:version: 0.6.3
:released: February 2, 2014
.. change::
:tags: bug
:tickets: 172
Added a workaround for when we call ``fcntl.ioctl()`` to get at
``TERMWIDTH``; if the function returns zero, as is reported to occur
in some pseudo-ttys, the message wrapping system is disabled in the
same way as if ``ioctl()`` failed.
.. change::
:tags: feature
:tickets: 171
Added new argument
:paramref:`.EnvironmentContext.configure.user_module_prefix`.
This prefix is applied when autogenerate renders a user-defined type,
which here is defined as any type that is from a module outside of the
``sqlalchemy.`` hierarchy. This prefix defaults to ``None``, in
which case the :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`
is used, thus preserving the current behavior.
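Sketch of use in ``env.py``; the ``mytypes.`` prefix is illustrative::
    from alembic import context
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        # user-defined types render as e.g. "mytypes.MyType()"
        user_module_prefix="mytypes.",
    )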
.. change::
:tags: bug
:tickets: 170
Added support for autogenerate covering the use case where :class:`.Table`
objects specified in the metadata have an explicit ``schema`` attribute
whose name matches that of the connection's default schema
(e.g. "public" for Postgresql). Previously, it was assumed that "schema"
was ``None`` when it matched the "default" schema, now the comparison
adjusts for this.
.. change::
:tags: bug
The :func:`.compare_metadata` public API function now takes into
account the settings for
:paramref:`.EnvironmentContext.configure.include_object`,
:paramref:`.EnvironmentContext.configure.include_symbol`,
and :paramref:`.EnvironmentContext.configure.include_schemas`, in the
same way that the ``--autogenerate`` command does. Pull
request courtesy Roman Podoliaka.
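A minimal sketch; ``my_include_object`` stands in for a user-supplied
callable with the usual ``include_object`` signature::
    from alembic.autogenerate import compare_metadata
    from alembic.migration import MigrationContext
    mc = MigrationContext.configure(
        connection,
        opts={"include_object": my_include_object, "include_schemas": True},
    )
    diff = compare_metadata(mc, target_metadata)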
.. change::
:tags: bug
:tickets: 168
Calling :func:`.bulk_insert` with an empty list will not emit any
commands on the current connection. This was already the case with
``--sql`` mode, so is now the case with "online" mode.
.. change::
:tags: bug
Enabled schema support for index and unique constraint autodetection;
previously these were non-functional and could in some cases lead to
attribute errors. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug
:tickets: 164
More fixes to index autodetection; indexes created with expressions
like DESC or functional indexes will no longer cause AttributeError
exceptions when attempting to compare the columns.
.. change::
:tags: feature
:tickets: 163
The :class:`.ScriptDirectory` system that loads migration files
from a ``versions/`` directory now supports so-called
"sourceless" operation, where the ``.py`` files are not present
and instead ``.pyc`` or ``.pyo`` files are directly present where
the ``.py`` files should be. Note that while Python 3.3 has a
new system of locating ``.pyc``/``.pyo`` files within a directory
called ``__pycache__`` (e.g. PEP-3147), PEP-3147 maintains
support for the "source-less imports" use case, where the
``.pyc``/``.pyo`` are present in the "old" location, e.g. next
to the ``.py`` file; this is the usage that's supported even when
running Python3.3.
.. changelog::
:version: 0.6.2
:released: Fri Dec 27 2013
.. change::
:tags: bug
Autogenerate for ``op.create_table()`` will not include a
``PrimaryKeyConstraint()`` that has no columns.
.. change::
:tags: bug
Fixed bug in the not-internally-used :meth:`.ScriptDirectory.get_base`
method which would fail if called on an empty versions directory.
.. change::
:tags: bug
:tickets: 157
An almost-rewrite of the new unique constraint/index autogenerate
detection, to accommodate a variety of issues. The emphasis is on
not generating false positives for those cases where no net change
is present, as these errors are the ones that impact all autogenerate
runs:
* Fixed an issue with unique constraint autogenerate detection where
a named ``UniqueConstraint`` on both sides with column changes would
render with the "add" operation before the "drop", requiring the
user to reverse the order manually.
* Corrected for MySQL's apparent addition of an implicit index
for a foreign key column, so that it doesn't show up as "removed".
This required that the index/constraint autogen system query the
dialect-specific implementation for special exceptions.
* Reworked the "dedupe" logic to accommodate MySQL's bi-directional
duplication of unique indexes as unique constraints, and unique
constraints as unique indexes. Postgresql's slightly different
logic of duplicating unique constraints into unique indexes
continues to be accommodated as well. Note that a unique index
or unique constraint removal on a backend that duplicates these may
show up as a distinct "remove_constraint()" / "remove_index()" pair,
which may need to be corrected in the post-autogenerate if multiple
backends are being supported.
* Added another dialect-specific exception to the SQLite backend
when dealing with unnamed unique constraints, as the backend can't
currently report on constraints that were made with this technique,
hence they'd come out as "added" on every run.
* The ``op.create_table()`` directive will be auto-generated with
the ``UniqueConstraint`` objects inline, but will not double them
up with a separate ``create_unique_constraint()`` call, which may
have been occurring. Indexes still get rendered as distinct
``op.create_index()`` calls even when the corresponding table was
created in the same script.
* The inline ``UniqueConstraint`` within ``op.create_table()`` includes
all the options like ``deferrable``, ``initially``, etc. Previously
these weren't rendering.
.. change::
:tags: feature, mssql
Added new argument ``mssql_drop_foreign_key`` to
:meth:`.Operations.drop_column`. Like ``mssql_drop_default``
and ``mssql_drop_check``, will do an inline lookup for a
single foreign key which applies to this column, and drop it.
For a column with more than one FK, you'd still need to explicitly
use :meth:`.Operations.drop_constraint` given the name,
even though only MSSQL has this limitation in the first place.
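Illustrative usage inside a migration (table and column names assumed)::
    from alembic import op
    op.drop_column("account", "profile_id", mssql_drop_foreign_key=True)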
.. change::
:tags: bug, mssql
The MSSQL backend will add the batch separator (e.g. ``"GO"``)
in ``--sql`` mode after the final ``COMMIT`` statement, to ensure
that statement is also processed in batch mode. Courtesy
Derek Harland.
.. changelog::
:version: 0.6.1
:released: Wed Nov 27 2013
.. change::
:tags: bug, mysql
:tickets: 152
Fixed bug where :func:`.op.alter_column` in the MySQL dialect
would fail to apply quotes to column names that had mixed casing
or spaces.
.. change::
:tags: feature
Expanded the size of the "slug" generated by "revision" to 40
characters, which is also configurable by new field
``truncate_slug_length``; and also split on the word rather than the
character; courtesy Frozenball.
.. change::
:tags: bug
:tickets: 135
Fixed the output wrapping for Alembic message output, so that
we either get the terminal width for "pretty printing" with
indentation, or if not we just output the text as is; in any
case the text won't be wrapped too short.
.. change::
:tags: bug
Fixes to Py3k in-place compatibility regarding output encoding and related
areas; the use of the new io.* package introduced some incompatibilities on
Py2k. These should be resolved due to the introduction of new adapter types
for translating from io.* to Py2k file and StringIO types.
Thanks to Javier Santacruz for help with this.
.. change::
:tags: bug
:tickets: 145
Fixed py3k bug where the wrong form of ``next()`` was being called
when using the list_templates command. Courtesy Chris Wilkes.
.. change::
:tags: feature
:tickets: 107
Support for autogeneration detection and rendering of indexes and
unique constraints has been added. The logic goes through some effort
in order to differentiate between true unique constraints and
unique indexes, where there are some quirks on backends like Postgresql.
The effort here in producing the feature and tests is courtesy of IJL.
.. change::
:tags: bug
Fixed bug introduced by new ``include_object`` argument where the
inspected column would be misinterpreted when using a user-defined
type comparison function, causing a KeyError or similar expression-related
error. Fix courtesy Maarten van Schaik.
.. change::
:tags: bug
Added the "deferrable" keyword argument to :func:`.op.create_foreign_key`
so that ``DEFERRABLE`` constraint generation is supported; courtesy
Pedro Romano.
.. change::
:tags: bug
:tickets: 137
Ensured that strings going to stdout go through an encode/decode phase,
so that any non-ASCII characters get to the output stream correctly
in both Py2k and Py3k. Also added source encoding detection using
Mako's parse_encoding() routine in Py2k so that the __doc__ of a
non-ascii revision file can be treated as unicode in Py2k.
.. changelog::
:version: 0.6.0
:released: Fri July 19 2013
.. change::
:tags: feature
:tickets: 101
Added new kw argument to :meth:`.EnvironmentContext.configure`
``include_object``. This is a more flexible version of the
``include_symbol`` argument which allows filtering of columns as well as tables
from the autogenerate process,
and in the future will also work for types, constraints and
other constructs. The fully constructed schema object is passed,
including its name and type as well as a flag indicating if the object
is from the local application metadata or is reflected.
.. change::
:tags: feature
The output of the ``alembic history`` command is now
expanded to show information about each change on multiple
lines, including the full top message,
resembling the formatting of git log.
.. change::
:tags: feature
Added :attr:`alembic.config.Config.cmd_opts` attribute,
allows access to the ``argparse`` options passed to the
``alembic`` runner.
.. change::
:tags: feature
:tickets: 120
Added new command line argument ``-x``, allows extra arguments
to be appended to the command line which can be consumed
within an ``env.py`` script by looking at
``context.config.cmd_opts.x``, or more simply a new
method :meth:`.EnvironmentContext.get_x_argument`.
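e.g. given ``alembic -x data=true upgrade head``, an ``env.py`` might read
the flag as follows (a sketch)::
    from alembic import context
    x_args = context.get_x_argument(as_dictionary=True)
    if x_args.get("data") == "true":
        # perform optional data migrations
        ...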
.. change::
:tags: bug
:tickets: 125
Added support for options like "name" etc. to be rendered
within CHECK constraints in autogenerate. Courtesy
Sok Ann Yap.
.. change::
:tags: misc
Source repository has been moved from Mercurial to Git.
.. change::
:tags: bug
Repaired autogenerate rendering of ForeignKeyConstraint
to include use_alter argument, if present.
.. change::
:tags: feature
Added ``-r`` argument to ``alembic history`` command,
allows specification of ``[start]:[end]`` to view
a slice of history. Accepts revision numbers, symbols
"base", "head", a new symbol "current" representing the
current migration, as well as relative ranges for one
side at a time (i.e. ``-r-5:head``, ``-rcurrent:+3``).
Courtesy Atsushi Odagiri for this feature.
.. change::
:tags: feature
:tickets: 55
Source base is now in-place for Python 2.6 through
3.3, without the need for 2to3. Support for Python 2.5
and below has been dropped. Huge thanks to
Hong Minhee for all the effort on this!
.. changelog::
:version: 0.5.0
:released: Thu Apr 4 2013
.. note::
Alembic 0.5.0 now requires at least
version 0.7.3 of SQLAlchemy to run properly.
Support for 0.6 has been dropped.
.. change::
:tags: feature
:tickets: 76
Added ``version_table_schema`` argument
to :meth:`.EnvironmentContext.configure`,
complements the ``version_table`` argument to
set an optional remote schema for the version
table. Courtesy Christian Blume.
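Sketch of use in ``env.py``; the schema name is illustrative::
    from alembic import context
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        version_table_schema="audit",
    )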
.. change::
:tags: bug, postgresql
:tickets: 32
Fixed format of RENAME for table that includes
schema with Postgresql; the schema name shouldn't
be in the "TO" field.
.. change::
:tags: feature
:tickets: 90
Added ``output_encoding`` option to
:meth:`.EnvironmentContext.configure`,
used with ``--sql`` mode to apply an encoding
to the output stream.
.. change::
:tags: feature
:tickets: 93
Added :meth:`.Operations.create_primary_key`
operation, will generate an ADD CONSTRAINT
for a primary key.
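Illustrative usage (constraint, table and column names assumed)::
    from alembic import op
    op.create_primary_key("pk_account", "account", ["id", "version"])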
.. change::
:tags: bug, mssql
:tickets: 109
Fixed bug whereby double quoting would be applied
to target column name during an ``sp_rename``
operation.
.. change::
:tags: bug, sqlite, mysql
:tickets: 112
transactional_ddl flag for SQLite, MySQL dialects
set to False. MySQL doesn't support it,
SQLite does, but the current pysqlite driver does not.
.. change::
:tags: feature
:tickets: 115
upgrade and downgrade commands will list the
first line of the docstring out next to the
version number. Courtesy Hong Minhee.
.. change::
:tags: feature
Added --head-only option to "alembic current";
will print the current version plus the symbol
"(head)" if this version is the head.
Courtesy Charles-Axel Dein.
.. change::
:tags: bug
:tickets: 110
Autogenerate will render additional table keyword
arguments like "mysql_engine" and others within
op.create_table().
.. change::
:tags: feature
:tickets: 108
The rendering of any construct during autogenerate
can be customized, in particular to allow special rendering
for user-defined column, constraint subclasses, using new
``render_item`` argument to
:meth:`.EnvironmentContext.configure`.
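A sketch of such a hook; ``MySpecialType`` is a hypothetical user-defined
type::
    def render_item(type_, obj, autogen_context):
        if type_ == "type" and isinstance(obj, MySpecialType):
            return "mypackage.MySpecialType()"
        return False  # fall back to the default rendering
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        render_item=render_item,
    )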
.. change::
:tags: bug
Fixed bug whereby create_index()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
This is the same issue that was fixed for unique
constraints in version 0.3.2.
.. change::
:tags: bug
Worked around a backwards-incompatible regression in Python3.3
regarding argparse; running "alembic" with no arguments
now yields an informative error in py3.3 as with all previous versions.
Courtesy Andrey Antukh.
.. change::
:tags: change
SQLAlchemy 0.6 is no longer supported by Alembic - minimum version is 0.7.3,
full support is as of 0.7.9.
.. change::
:tags: bug
:tickets: 104
A host of argument name changes within migration
operations for consistency. Keyword arguments
will continue to work on the old name for backwards compatibility,
however required positional arguments will not:
:meth:`.Operations.alter_column` - ``name`` -> ``new_column_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.create_index` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_index` - ``tablename`` -> ``table_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.drop_constraint` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_constraint` - ``type`` -> ``type_`` - old
name will work for backwards compatibility
.. changelog::
:version: 0.4.2
:released: Fri Jan 11 2013
.. change::
:tags: bug, autogenerate
:tickets: 99
Fixed bug where autogenerate would fail if a Column
to be added to a table made use of the ".key" parameter.
.. change::
:tags: bug, sqlite
:tickets: 98
The "implicit" constraint generated by a
type such as Boolean or Enum will not generate an
ALTER statement when run on SQLite, which does not
support ALTER for the purpose of adding/removing
constraints separate from the column def itself.
While SQLite supports adding a CHECK constraint
at the column level, SQLAlchemy would need modification
to support this.
A warning is emitted indicating this
constraint cannot be added in this scenario.
.. change::
:tags: bug
:tickets: 96
Added a workaround to setup.py to prevent
"NoneType" error from occurring when
"setup.py test" is run.
.. change::
:tags: bug
:tickets: 96
Added an append_constraint() step to each
condition within
test_autogenerate:AutogenRenderTest.test_render_fk_constraint_kwarg
if the SQLAlchemy version is less than 0.8, as ForeignKeyConstraint
does not auto-append prior to 0.8.
.. change::
:tags: feature
:tickets: 96
Added a README.unittests with instructions for running the test
suite fully.
.. changelog::
:version: 0.4.1
:released: Sun Dec 9 2012
.. change::
:tags: bug
:tickets: 92
Added support for autogenerate render of
ForeignKeyConstraint options onupdate,
ondelete, initially, and deferred.
.. change::
:tags: bug
:tickets: 94
Autogenerate will include "autoincrement=False"
in the rendered table metadata
if this flag was set to false on the source
:class:`.Column` object.
.. change::
:tags: feature
:tickets: 66
Explicit error message describing the case
when downgrade --sql is used without specifying
specific start/end versions.
.. change::
:tags: bug
:tickets: 81
Removed erroneous "emit_events" attribute
from operations.create_table() documentation.
.. change::
:tags: bug
:tickets:
Fixed the minute component in file_template
which returned the month part of the create date.
.. changelog::
:version: 0.4.0
:released: Mon Oct 01 2012
.. change::
:tags: feature
:tickets: 33
Support for tables in alternate schemas
has been added fully to all operations, as well as to
the autogenerate feature. When using autogenerate,
specifying the flag include_schemas=True to
Environment.configure() will also cause autogenerate
to scan all schemas located by Inspector.get_schema_names(),
which is supported by *some* (but not all)
SQLAlchemy dialects including Postgresql.
*Enormous* thanks to Bruno Binet for a huge effort
in implementing as well as writing tests.
.. change::
:tags: feature
:tickets: 70
The command line runner has been organized
into a reusable CommandLine object, so that other
front-ends can re-use the argument parsing built
in.
.. change::
:tags: feature
:tickets: 43
Added "stdout" option to Config, provides
control over where the "print" output of commands like
"history", "init", "current" etc. are sent.
.. change::
:tags: bug
:tickets: 71
Fixed the "multidb" template which was badly out
of date. It now generates revision files using
the configuration to determine the different
upgrade_<xyz>() methods needed as well, instead of
needing to hardcode these. Huge thanks to
BryceLohr for doing the heavy lifting here.
.. change::
:tags: bug
:tickets: 72
Fixed the regexp that was checking for .py files
in the version directory to allow any .py file through.
Previously it was doing some kind of defensive checking,
probably from some early notions of how this directory
works, that was prohibiting various filename patterns
such as those which begin with numbers.
.. change::
:tags: bug
:tickets:
Fixed MySQL rendering for server_default which
didn't work if the server_default was a generated
SQL expression. Courtesy Moriyoshi Koizumi.
.. change::
:tags: feature
:tickets:
Added support for alteration of MySQL
columns that have AUTO_INCREMENT, as well as enabling
this flag. Courtesy Moriyoshi Koizumi.
.. changelog::
:version: 0.3.6
:released: Wed Aug 15 2012
.. change::
:tags: feature
:tickets: 27
Added include_symbol option to
EnvironmentContext.configure(),
specifies a callable which will include/exclude tables
in their entirety from the autogeneration process
based on name.
.. change::
:tags: feature
:tickets: 59
Added year, month, day, hour, minute, second
variables to file_template.
.. change::
:tags: feature
:tickets:
Added 'primary' to the list of constraint types
recognized for MySQL drop_constraint().
.. change::
:tags: feature
:tickets:
Added --sql argument to the "revision" command,
for the use case where the "revision_environment"
config option is being used but SQL access isn't
desired.
.. change::
:tags: bug
:tickets:
Repaired create_foreign_key() for
self-referential foreign keys, which weren't working
at all.
.. change::
:tags: bug
:tickets: 63
'alembic' command reports an informative
error message when the configuration is missing
the 'script_directory' key.
.. change::
:tags: bug
:tickets: 62
Fixes made to the constraints created/dropped
alongside so-called "schema" types such as
Boolean and Enum. The create/drop constraint logic
does not kick in when using a dialect that doesn't
use constraints for these types, such as postgresql,
even when existing_type is specified to
alter_column(). Additionally, the constraints
are not affected if existing_type is passed but
type\_ is not, i.e. there's no net change
in type.
.. change::
:tags: bug
:tickets: 66
Improved error message when specifying
non-ordered revision identifiers to cover
the case when the "higher" rev is None,
improved message overall.
.. changelog::
:version: 0.3.5
:released: Sun Jul 08 2012
.. change::
:tags: bug
:tickets: 31
Fixed issue whereby reflected server defaults
wouldn't be quoted correctly; uses repr() now.
.. change::
:tags: bug
:tickets: 58
Fixed issue whereby when autogenerate would
render create_table() on the upgrade side for a
table that has a Boolean type, an unnecessary
CheckConstraint() would be generated.
.. change::
:tags: feature
:tickets:
Implemented SQL rendering for
CheckConstraint() within autogenerate upgrade,
including for literal SQL as well as SQL Expression
Language expressions.
.. changelog::
:version: 0.3.4
:released: Sat Jun 02 2012
.. change::
:tags: bug
:tickets:
Fixed command-line bug introduced by the
"revision_environment" feature.
.. changelog::
:version: 0.3.3
:released: Sat Jun 02 2012
.. change::
:tags: feature
:tickets:
New config argument
"revision_environment=true", causes env.py to
be run unconditionally when the "revision" command
is run, to support script.py.mako templates with
dependencies on custom "template_args".
.. change::
:tags: feature
:tickets:
Added "template_args" option to configure()
so that an env.py can add additional arguments
to the template context when running the
"revision" command. This requires either --autogenerate
or the configuration directive "revision_environment=true".
.. change::
:tags: bug
:tickets: 44
Added "type" argument to op.drop_constraint(),
and implemented full constraint drop support for
MySQL. CHECK and undefined raise an error.
MySQL needs the constraint type
in order to emit a DROP CONSTRAINT.
.. change::
:tags: feature
:tickets: 34
Added version_table argument to
EnvironmentContext.configure(), allowing for the
configuration of the version table name.
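Sketch; the table name is illustrative::
    from alembic import context
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        version_table="alembic_version_myapp",
    )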
.. change::
:tags: feature
:tickets:
Added support for "relative" migration
identifiers, i.e. "alembic upgrade +2",
"alembic downgrade -1". Courtesy
Atsushi Odagiri for this feature.
.. change::
:tags: bug
:tickets: 49
Fixed bug whereby directories inside of
the template directories, such as __pycache__
on Pypy, would mistakenly be interpreted as
files which are part of the template.
.. changelog::
:version: 0.3.2
:released: Mon Apr 30 2012
.. change::
:tags: feature
:tickets: 40
Basic support for Oracle added,
courtesy shgoh.
.. change::
:tags: feature
:tickets:
Added support for UniqueConstraint
in autogenerate, courtesy Atsushi Odagiri
.. change::
:tags: bug
:tickets:
Fixed support of schema-qualified
ForeignKey target in column alter operations,
courtesy Alexander Kolov.
.. change::
:tags: bug
:tickets:
Fixed bug whereby create_unique_constraint()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
.. changelog::
:version: 0.3.1
:released: Sat Apr 07 2012
.. change::
:tags: bug
:tickets: 41
bulk_insert() fixes:
1. bulk_insert() operation was
not working most likely since the 0.2 series
when used with an engine.
2. Repaired bulk_insert() to complete when
used against a lower-case-t table and executing
with only one set of parameters, working
around SQLAlchemy bug #2461 in this regard.
3. bulk_insert() uses "inline=True" so that phrases
like RETURNING and such don't get invoked for
single-row bulk inserts.
4. bulk_insert() will check that you're passing
a list of dictionaries in, raises TypeError
if not detected.
.. changelog::
:version: 0.3.0
:released: Thu Apr 05 2012
.. change::
:tags: general
:tickets:
The focus of 0.3 is to clean up
and more fully document the public API of Alembic,
including better accessors on the MigrationContext
and ScriptDirectory objects. Methods that are
not considered to be public on these objects have
been underscored, and methods which should be public
have been cleaned up and documented, including:
MigrationContext.get_current_revision()
ScriptDirectory.iterate_revisions()
ScriptDirectory.get_current_head()
ScriptDirectory.get_heads()
ScriptDirectory.get_base()
ScriptDirectory.generate_revision()
.. change::
:tags: feature
:tickets:
Added a bit of autogenerate to the
public API in the form of the function
alembic.autogenerate.compare_metadata.
.. changelog::
:version: 0.2.2
:released: Mon Mar 12 2012
.. change::
:tags: feature
:tickets:
Informative error message when op.XYZ
directives are invoked at module import time.
.. change::
:tags: bug
:tickets: 35
Fixed inappropriate direct call to
util.err() and therefore sys.exit()
when Config failed to locate the
config file within library usage.
.. change::
:tags: bug
:tickets:
Autogenerate will emit CREATE TABLE
and DROP TABLE directives according to
foreign key dependency order.
.. change::
:tags: bug
:tickets:
Implemented 'tablename' parameter on
drop_index() as this is needed by some
backends.
.. change::
:tags: feature
:tickets:
Added execution_options parameter
to op.execute(), will call execution_options()
on the Connection before executing.
The immediate use case here is to allow
access to the new no_parameters option
in SQLAlchemy 0.7.6, which allows
some DBAPIs (psycopg2, MySQLdb) to allow
percent signs straight through without
escaping, thus providing cross-compatible
operation with DBAPI execution and
static script generation.
.. change::
:tags: bug
:tickets:
setup.py won't install argparse if on
Python 2.7/3.2
.. change::
:tags: feature
:tickets: 29
script_location can be interpreted
by pkg_resources.resource_filename(), if
it is a non-absolute URI that contains
colons. This scheme is the same
one used by Pyramid.
.. change::
:tags: feature
:tickets:
added missing support for
onupdate/ondelete flags for
ForeignKeyConstraint, courtesy Giacomo Bagnoli
.. change::
:tags: bug
:tickets: 30
Fixed a regression regarding an autogenerate
error message, as well as various glitches
in the Pylons sample template. The Pylons sample
template requires that you tell it where to
get the Engine from now. Courtesy
Marcin Kuzminski.
.. change::
:tags: bug
:tickets:
drop_index() ensures a dummy column
is added when it calls "Index", as SQLAlchemy
0.7.6 will warn on index with no column names.
.. changelog::
:version: 0.2.1
:released: Tue Jan 31 2012
.. change::
:tags: bug
:tickets: 26
Fixed the generation of CHECK constraints,
a regression from 0.2.0
.. changelog::
:version: 0.2.0
:released: Mon Jan 30 2012
.. change::
:tags: feature
:tickets: 19
API rearrangement allows everything
Alembic does to be represented by contextual
objects, including EnvironmentContext,
MigrationContext, and Operations. Other
libraries and applications can now use
things like "alembic.op" without relying
upon global configuration variables.
The rearrangement was done such that
existing migrations should be OK,
as long as they use the pattern
of "from alembic import context" and
"from alembic import op", as these
are now contextual objects, not modules.
.. change::
:tags: feature
:tickets: 24
The naming of revision files can
now be customized to be some combination
of "rev id" and "slug", the latter of which
is based on the revision message.
By default, the pattern "<rev>_<slug>"
is used for new files. New script files
should include the "revision" variable
for this to work, which is part of
the newer script.py.mako scripts.
.. change::
:tags: bug
:tickets: 25
env.py templates call
connection.close() to better support
programmatic usage of commands; use
NullPool in conjunction with create_engine()
as well so that no connection resources
remain afterwards.
.. change::
:tags: bug
:tickets: 22
Fixed the config.main() function to honor
the arguments passed; removed the no-longer-used
"scripts/alembic" as setuptools creates this
for us.
.. change::
:tags: bug
:tickets:
Fixed alteration of column type on
MSSQL to not include the keyword "TYPE".
.. change::
:tags: feature
:tickets: 23
Can create alembic.config.Config
with no filename, use set_main_option()
to add values. Also added set_section_option()
which will add sections.
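A minimal sketch of building a Config in memory (values are illustrative)::
    from alembic.config import Config
    cfg = Config()
    cfg.set_main_option("script_location", "myapp:migrations")
    cfg.set_section_option("my_section", "foo", "bar")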
.. changelog::
:version: 0.1.1
:released: Wed Jan 04 2012
.. change::
:tags: bug
:tickets:
Clean up file write operations so that
file handles are closed.
.. change::
:tags: feature
:tickets:
PyPy is supported.
.. change::
:tags: feature
:tickets:
Python 2.5 is supported, needs
__future__.with_statement
.. change::
:tags: bug
:tickets:
Fix autogenerate so that "pass" is
generated between the two comments
if no net migrations were present.
.. change::
:tags: bug
:tickets: 16
Fix autogenerate bug that prevented
correct reflection of a foreign-key
referenced table in the list of "to remove".
.. change::
:tags: bug
:tickets: 17
Fix bug where create_table() didn't
handle self-referential foreign key
correctly
.. change::
:tags: bug
:tickets: 18
Default prefix for autogenerate
directives is "op.", matching the
mako templates.
.. change::
:tags: feature
:tickets: 18
Add alembic_module_prefix argument
to configure() to complement
sqlalchemy_module_prefix.
.. change::
:tags: bug
:tickets: 14
Fixed quotes not being rendered in
ForeignKeyConstraint during
autogenerate.
.. changelog::
:version: 0.1.0
:released: Wed Nov 30 2011
.. change::
:tags:
:tickets:
Initial release. Status of features:
.. change::
:tags:
:tickets:
Alembic is used in at least one production
environment, but should still be considered
ALPHA LEVEL SOFTWARE as of this release,
particularly in that many features are expected
to be missing / unimplemented. Major API
changes are not anticipated but for the moment
nothing should be assumed.
The author asks that you *please* report all
issues, missing features, workarounds etc.
to the bugtracker.
.. change::
:tags:
:tickets:
Python 3 is supported and has been tested.
.. change::
:tags:
:tickets:
The "Pylons" and "MultiDB" environment templates
have not been directly tested - these should be
considered to be samples to be modified as
needed. Multiple database support itself
is well tested, however.
.. change::
:tags:
:tickets:
Postgresql and MS SQL Server environments
have been tested for several weeks in a production
environment. In particular, some involved workarounds
were implemented to allow fully-automated dropping
of default- or constraint-holding columns with
SQL Server.
.. change::
:tags:
:tickets:
MySQL support has also been implemented to a
basic degree, including MySQL's awkward style
of modifying columns being accommodated.
.. change::
:tags:
:tickets:
Other database environments not included among
those three have *not* been tested, *at all*. This
includes Firebird, Oracle, Sybase. Adding
support for these backends should be
straightforward. Please report all missing/
incorrect behaviors to the bugtracker! Patches
are welcome here but are optional - please just
indicate the exact format expected by the target
database.
.. change::
:tags:
:tickets:
SQLite, as a backend, has almost no support for
schema alterations to existing databases. The author
would strongly recommend that SQLite not be used in
a migration context - just dump your SQLite database
into an intermediary format, then dump it back
into a new schema. For dev environments, the
dev installer should be building the whole DB from
scratch. Or just use Postgresql, which is a much
better database for non-trivial schemas.
Requests for full ALTER support on SQLite should be
reported to SQLite's bug tracker at
http://www.sqlite.org/src/wiki?name=Bug+Reports,
as Alembic will not be implementing the
"rename the table to a temptable then copy the
data into a new table" workaround.
Note that Alembic will at some point offer an
extensible API so that you can implement commands
like this yourself.
.. change::
:tags:
:tickets:
Well-tested directives include add/drop table, add/drop
column, including support for SQLAlchemy "schema"
types which generate additional CHECK
constraints, i.e. Boolean, Enum. Other directives not
included here have *not* been strongly tested
in production, i.e. rename table, etc.
.. change::
:tags:
:tickets:
Both "online" and "offline" migrations, the latter
being generated SQL scripts to hand off to a DBA,
have been strongly production tested against
Postgresql and SQL Server.
.. change::
:tags:
:tickets:
Modifying column type, default status, and nullable is
functional and tested across PG, MSSQL, and MySQL,
but not yet widely tested in production usage.
.. change::
:tags:
:tickets:
Many migrations are still outright missing, i.e.
create/add sequences, etc. As a workaround,
execute() can be used for those which are missing,
though posting of tickets for new features/missing
behaviors is strongly encouraged.
.. change::
:tags:
:tickets:
Autogenerate feature is implemented and has been
tested, though only a little bit in a production setting.
In particular, detection of type and server
default changes are optional and are off by default;
they can also be customized by a callable.
Both features work but can have surprises particularly
the disparity between BIT/TINYINT and boolean,
which hasn't yet been worked around, as well as
format changes performed by the database on defaults
when it reports back. When enabled, the PG dialect
will execute the two defaults to be compared to
see if they are equivalent. Other backends may
need to do the same thing.
The autogenerate feature only generates
"candidate" commands which must be hand-tailored
in any case, so is still a useful feature and
is safe to use. Please report missing/broken features
of autogenerate! This will be a great feature and
will also improve SQLAlchemy's reflection services.
.. change::
:tags:
:tickets:
Support for non-ASCII table, column and constraint
names is mostly nonexistent. This is also a
straightforward feature add as SQLAlchemy itself
supports unicode identifiers; Alembic itself will
likely need fixes to logging, column identification
by key, etc. for full support here.
| jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | brand | jsoref | 1 |
sqlalchemy/alembic | 1310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
### Description
The misspellings have been reported at https://github.com/jsoref/alembic/actions/runs/6141700632
The action reports that the changes in this PR would make it happy: https://github.com/jsoref/alembic/actions/runs/6141700754
### Checklist
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | docs/build/changelog.rst |
==========
Changelog
==========
.. changelog::
:version: 1.12.1
:include_notes_from: unreleased
.. changelog::
:version: 1.12.0
:released: August 31, 2023
.. change::
:tags: bug, operations
:tickets: 1300
Added support for ``op.drop_constraint()`` to support PostgreSQL
``ExcludeConstraint`` objects, as well as other constraint-like objects
that may be present in third party dialects, by resolving the ``type_``
parameter to be ``None`` for this case. Autogenerate has also been
enhanced to exclude the ``type_`` parameter from rendering within this
command when ``type_`` is ``None``. Pull request courtesy David Hills.
.. change::
:tags: bug, commands
:tickets: 1299
Fixed issue where the ``revision_environment`` directive in ``alembic.ini``
was ignored by the ``alembic merge`` command, leading to issues when other
configurational elements depend upon ``env.py`` being invoked within the
command.
.. change::
:tags: bug, autogenerate
:tickets: 1302
Fixed issue where the ``ForeignKeyConstraint.match`` parameter would not be
rendered in autogenerated migrations. Pull request courtesy Asib
Kamalsada.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Change the default value of
:paramref:`.EnvironmentContext.configure.compare_type` to ``True``.
As Alembic's autogenerate for types was dramatically improved in
version 1.4 released in 2020, the type comparison feature is now much
more reliable so is now enabled by default.
.. change::
:tags: feature, autogenerate
:tickets: 1275
Added new feature to the "code formatter" function which allows standalone
executable tools to be run against code, without going through the Python
interpreter. Known as the ``exec`` runner, it complements the existing
``console_scripts`` runner by allowing non-Python tools such as ``ruff`` to
be used. Pull request courtesy Mihail Milushev.
.. seealso::
:ref:`post_write_hooks_config`
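A sketch of wiring such a hook in ``alembic.ini``; the executable path and
options shown are assumptions for illustration::
    [post_write_hooks]
    hooks = ruff
    ruff.type = exec
    ruff.executable = ruff
    ruff.options = --fix REVISION_SCRIPT_FILENAME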
.. changelog::
:version: 1.11.3
:released: August 16, 2023
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 1270
Improved autogenerate compare of expression based indexes on PostgreSQL
to produce fewer wrong detections.
.. change::
:tags: bug, autogenerate
:tickets: 1291
Fixed issue with ``NULLS NOT DISTINCT`` detection in postgresql that
would keep detecting changes in the index or unique constraint.
.. change::
:tags: bug, commands
:tickets: 1273
Added ``encoding="locale"`` setting to the use of Python's
``ConfigParser.read()``, so that a warning is not generated when using the
recently added Python feature ``PYTHONWARNDEFAULTENCODING`` specified in
:pep:`597`. The encoding is passed as the ``"locale"`` string under Python
3.10 and greater, which indicates that the system-level locale should be
used, as was the case already here. Pull request courtesy Kevin Kirsche.
.. changelog::
:version: 1.11.2
:released: August 4, 2023
.. change::
:tags: usecase, typing
:tickets: 1253
Added typing to the default script mako templates.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Added support in autogenerate for ``NULLS NOT DISTINCT`` in
the PostgreSQL dialect.
.. change::
:tags: bug
:tickets: 1261
Fixed the format string logged when running a post write hook.
Pull request courtesy of Mathieu Défosse.
.. change::
:tags: feature, operations
:tickets: 151
Added parameters if_exists and if_not_exists for index operations.
Pull request courtesy of Max Adrian.
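Illustrative usage (index and table names assumed)::
    from alembic import op
    op.create_index("ix_account_name", "account", ["name"], if_not_exists=True)
    op.drop_index("ix_account_name", table_name="account", if_exists=True)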
.. changelog::
:version: 1.11.1
:released: May 17, 2023
.. change::
:tags: bug, autogenerate, regression
:tickets: 1243, 1245
As Alembic 1.11.0 is considered a major release (Alembic does not use
semver, nor does its parent project SQLAlchemy; this has been
:ref:`clarified <versioning_scheme>` in the documentation), change
:ticket:`1130` modified calling signatures for most operations to consider
all optional keyword parameters to be keyword-only arguments, to match what
was always documented and generated by autogenerate. However, two of these
changes were identified as possibly problematic without a more formal
deprecation warning being emitted which were the ``table_name`` parameter
to :meth:`.Operations.drop_index`, which was generated positionally by
autogenerate prior to version 0.6.3 released in 2014, and ``type_`` in
:meth:`.Operations.drop_constraint` and
:meth:`.BatchOperations.drop_constraint`, which was documented positionally
in one example in the batch documentation.
These two signatures have been
restored to allow those particular parameters to be passed positionally. A
future change will include formal deprecation paths (with warnings) for
these arguments where they will again become keyword-only in a future
"Significant Minor" release.
.. change::
:tags: bug, typing
:tickets: 1246
Fixed typing use of :class:`~sqlalchemy.schema.Column` and other
generic SQLAlchemy classes.
.. change::
:tags: bug, typing, regression
:tickets: 1244
Restored the output type of :meth:`.Config.get_section` to include
``Dict[str, str]`` as a potential return type, which had been changed to
immutable ``Mapping[str, str]``. When a section is returned and the default
is not used, a mutable dictionary is returned.
.. changelog::
:version: 1.11.0
:released: May 15, 2023
.. change::
:tags: bug, batch
:tickets: 1237
Added placeholder classes for :class:`~.sqla.Computed` and
:class:`~.sqla.Identity` when older 1.x SQLAlchemy versions are in use,
namely prior to SQLAlchemy 1.3.11 when the :class:`~.sqla.Computed`
construct was introduced. Previously these were set to None, however this
could cause issues with certain codepaths that were using ``isinstance()``
such as one within "batch mode".
.. change::
:tags: bug, batch
:tickets: 1221
Correctly pass previously ignored arguments ``insert_before`` and
``insert_after`` in ``batch_alter_column``
.. change::
:tags: change, py3k
:tickets: 1130
Argument signatures of Alembic operations now enforce keyword-only
arguments as passed as keyword and not positionally, such as
:paramref:`.Operations.create_table.schema`,
:paramref:`.Operations.add_column.type_`, etc.
.. change::
:tags: bug, postgresql
:tickets: 1230
Fix autogenerate issue with PostgreSQL :class:`.ExcludeConstraint`
that included sqlalchemy functions. The function text was previously
rendered as a plain string without surrounding with ``text()``.
.. change::
:tags: bug, mysql, regression
:tickets: 1240
Fixed regression caused by :ticket:`1166` released in version 1.10.0 which
caused MySQL unique constraints with multiple columns to not compare
correctly within autogenerate, due to different sorting rules on unique
constraints vs. indexes, which in MySQL are shared constructs.
.. change::
:tags: misc
:tickets: 1220
Update code snippets within docstrings to use ``black`` code formatting.
Pull request courtesy of James Addison.
.. change::
:tags: bug, typing
:tickets: 1093
Updated stub generator script to also add stubs method definitions for the
:class:`.Operations` class and the :class:`.BatchOperations` class obtained
from :meth:`.Operations.batch_alter_table`. As part of this change, the
class hierarchy of :class:`.Operations` and :class:`.BatchOperations` has
been rearranged on top of a common base class :class:`.AbstractOperations`
in order to type correctly, as :class:`.BatchOperations` uses different
method signatures for operations than :class:`.Operations`.
.. change::
:tags: bug, typing
Repaired the return signatures for :class:`.Operations` that mostly
return ``None``, and were erroneously referring to ``Optional[Table]``
in many cases.
.. change::
:tags: usecase, commands
:tickets: 1109
Added quiet option to the command line, using the ``-q/--quiet``
option. This flag will prevent alembic from logging anything
to stdout.
.. change::
:tags: bug, autogenerate
:tickets: 1178
Modified the autogenerate implementation for comparing "server default"
values from user-defined metadata to not apply any quoting to the value
before comparing it to the server-reported default, except for within
dialect-specific routines as needed. This change will affect the format of
the server default as passed to the
:paramref:`.EnvironmentContext.configure.compare_server_default` hook, as
well as for third party dialects that implement a custom
``compare_server_default`` hook in their alembic impl, to be passed "as is"
and not including additional quoting. Custom implementations which rely
on this quoting should adjust their approach based on observed formatting.
.. change::
:tags: bug, api, autogenerate
:tickets: 1235
Fixed issue where :func:`.autogenerate.render_python_code` function did not
provide a default value for the ``user_module_prefix`` variable, leading to
``NoneType`` errors when autogenerate structures included user-defined
types. Added new parameter
:paramref:`.autogenerate.render_python_code.user_module_prefix` to allow
this to be set as well as to default to ``None``. Pull request courtesy
tangkikodo.
.. change::
:tags: usecase, asyncio
:tickets: 1231
Added :meth:`.AbstractOperations.run_async` to the operation module to
allow running async functions in the ``upgrade`` or ``downgrade`` migration
function when running alembic using an async dialect. This function will
receive as first argument an
:class:`~sqlalchemy.ext.asyncio.AsyncConnection` sharing the transaction
used in the migration context.
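A sketch of use inside a migration; the coroutine shown is hypothetical::
    import sqlalchemy as sa
    from alembic import op
    async def _update_rows(connection):
        # receives the AsyncConnection sharing the migration transaction
        await connection.execute(sa.text("UPDATE account SET active = true"))
    def upgrade() -> None:
        op.run_async(_update_rows)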
.. changelog::
:version: 1.10.4
:released: April 24, 2023
.. change::
:tags: postgresql, autogenerate, feature
:tickets: 1213
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL sort option, such as ``ASC`` or ``NULLS FIRST``.
The sort options are correctly detected only when defined using the
sqlalchemy modifier functions, such as ``asc()`` or ``nulls_first()``,
or the equivalent methods.
Passing sort options inside the ``postgresql_ops`` dict is not supported.
.. change::
:tags: bug, operations
:tickets: 1215
Fixed issue where using a directive such as ``op.create_foreign_key()`` to
create a self-referential constraint on a single table where the same
column were present on both sides (e.g. within a composite foreign key)
would produce an error under SQLAlchemy 2.0 and a warning under SQLAlchemy
1.4 indicating that a duplicate column were being added to a table.
.. changelog::
:version: 1.10.3
:released: April 5, 2023
.. change::
:tags: bug, typing
:tickets: 1191, 1201
Fixed various typing issues observed with pyright, including issues
involving the combination of :class:`.Function` and
:meth:`.MigrationContext.begin_transaction`.
.. change::
:tags: bug, autogenerate
:tickets: 1212
Fixed error raised by alembic when running autogenerate after removing
a function based index.
.. changelog::
:version: 1.10.2
:released: March 8, 2023
.. change::
:tags: bug, ops
:tickets: 1196
Fixed regression where Alembic would not run with older SQLAlchemy 1.3
versions prior to 1.3.24 due to a missing symbol. Workarounds have been
applied for older 1.3 versions.
.. changelog::
:version: 1.10.1
:released: March 6, 2023
.. change::
:tags: bug, postgresql
:tickets: 1184
Fixed issue regarding PostgreSQL :class:`.ExcludeConstraint`, where
constraint elements which made use of :func:`.literal_column` could not be
rendered for autogenerate. Additionally, using SQLAlchemy 2.0.5 or greater,
:func:`.text()` constructs are also supported within PostgreSQL
:class:`.ExcludeConstraint` objects for autogenerate render. Pull request
courtesy Jan Katins.
.. change::
:tags: bug, batch, regression
:tickets: 1195
Fixed regression for 1.10.0 where :class:`.Constraint` objects were
suddenly required to have non-None name fields when using batch mode, which
was not previously a requirement.
.. changelog::
:version: 1.10.0
:released: March 5, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1166
Fixed issue in index detection where autogenerate change detection would
consider indexes with the same columns but with different order as equal,
while in general they are not equivalent in how a database will use them.
.. change::
:tags: feature, revisioning
:tickets: 760
Recursive traversal of revision files in a particular revision directory is
now supported, by indicating ``recursive_version_locations = true`` in
alembic.ini. Pull request courtesy ostr00000.
.. change::
:tags: bug, autogenerate, sqlite
:tickets: 1165
Fixed issue where indexes on SQLite which include SQL expressions would not
compare correctly, generating false positives under autogenerate. These
indexes are now skipped, generating a warning, in the same way that
expression-based indexes on PostgreSQL are skipped and generate warnings
when SQLAlchemy 1.x installations are in use. Note that reflection of
SQLite expression-based indexes continues to not yet be supported under
SQLAlchemy 2.0, even though PostgreSQL expression-based indexes have now
been implemented.
.. change::
:tags: bug, mssql
:tickets: 1187
Properly escape constraint name on SQL Server when dropping
a column while specifying ``mssql_drop_default=True`` or
``mssql_drop_check=True`` or ``mssql_drop_foreign_key=True``.
.. change::
:tags: usecase, autogenerate, postgresql
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL expressions, when using SQLAlchemy 2.0; the previous warning
that such indexes were skipped are removed when the new functionality
is in use. When using SQLAlchemy versions prior to the 2.0 series,
the indexes continue to be skipped with a warning.
.. changelog::
:version: 1.9.4
:released: February 16, 2023
.. change::
:tags: bug, mssql
:tickets: 1177
Ongoing fixes for SQL Server server default comparisons under autogenerate,
adjusting for SQL Server's collapsing of whitespace between SQL function
arguments when reporting on a function-based server default, as well as its
arbitrary addition of parenthesis within arguments; the approach has now
been made more aggressive by stripping the two default strings to compare
of all whitespace, parenthesis, and quoting characters.
.. change::
:tags: bug, postgresql
Fixed PostgreSQL server default comparison to handle SQL expressions
sent as ``text()`` constructs, such as ``text("substring('name', 1, 3)")``,
which previously would raise errors when attempting to run a server-based
comparison.
.. change::
:tags: bug, autogenerate
:tickets: 1180
Removed a mis-use of the
:paramref:`.EnvironmentContext.configure.render_item` callable where the
"server_default" renderer would be erroneously used within the server
default comparison process, which is working against SQL expressions, not
Python code.
.. change::
:tags: bug, commands
Fixed regression introduced in 1.7.0 where the "config" object passed to
the template context when running the :func:`.merge` command
programmatically failed to be correctly populated. Pull request courtesy
Brendan Gann.
.. changelog::
:version: 1.9.3
:released: February 7, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1167
Fixed issue where rendering of user-defined types that then went onto use
the ``.with_variant()`` method would fail to render, if using SQLAlchemy
2.0's version of variants.
.. changelog::
:version: 1.9.2
:released: January 14, 2023
.. change::
:tags: bug, typing
:tickets: 1146, 1147
Fixed typing definitions for :meth:`.EnvironmentContext.get_x_argument`.
Typing stubs are now generated for overloaded proxied methods such as
:meth:`.EnvironmentContext.get_x_argument`.
.. change::
:tags: bug, autogenerate
:tickets: 1152
Fixed regression caused by :ticket:`1145` where the string transformations
applied to server defaults caused expressions such as ``(getdate())`` to no
longer compare as equivalent on SQL Server, others.
.. changelog::
:version: 1.9.1
:released: December 23, 2022
.. change::
:tags: bug, autogenerate
:tickets: 1145
Fixed issue where server default compare would not work for string defaults
that contained backslashes, due to mis-rendering of these values when
comparing their contents.
.. change::
:tags: bug, oracle
Implemented basic server default comparison for the Oracle backend;
previously, Oracle's formatting of reflected defaults prevented any
matches from occurring.
.. change::
:tags: bug, sqlite
Adjusted SQLite's compare server default implementation to better handle
defaults with or without parens around them, from both the reflected and
the local metadata side.
.. change::
:tags: bug, mssql
Adjusted SQL Server's compare server default implementation to better
handle defaults with or without parens around them, from both the reflected
and the local metadata side.
.. changelog::
:version: 1.9.0
:released: December 15, 2022
.. change::
:tags: feature, commands
:tickets: 724
Added new Alembic command ``alembic check``. This performs the widely
requested feature of running an "autogenerate" comparison between the
current database and the :class:`.MetaData` that's currently set up for
autogenerate, returning an error code if the two do not match, based on
current autogenerate settings. Pull request courtesy Nathan Louie.
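A minimal sketch of invoking the new command programmatically, assuming an
``alembic.ini`` present in the current directory::
    from alembic import command
    from alembic.config import Config
    cfg = Config("alembic.ini")
    # a failing check raises an exception rather than returning an
    # error code as the command line form does
    command.check(cfg)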
.. seealso::
:ref:`alembic_check`
.. change::
:tags: bug, tests
Fixed issue in tox.ini file where changes in the tox 4.0 series to the
format of "passenv" caused tox to not function correctly, in particular
raising an error as of tox 4.0.6.
.. change::
:tags: bug, typing
:tickets: 1110
Fixed typing issue where :paramref:`.revision.process_revision_directives`
was not fully typed; additionally ensured all ``Callable`` and ``Dict``
arguments to :meth:`.EnvironmentContext.configure` include parameters in
the typing declaration.
Additionally updated the codebase for Mypy 0.990 compliance.
.. changelog::
:version: 1.8.1
:released: July 13, 2022
.. change::
:tags: bug, sqlite
:tickets: 1065
Fixed bug where the SQLite implementation of
:meth:`.Operations.rename_table` would render an explicit schema name for
both the old and new table name, which, while standard ALTER syntax, is
not accepted by SQLite, whose syntax doesn't support a rename across
schemas. In particular, the syntax issue would prevent batch mode from
working for SQLite databases that made use of attached databases (which are
treated as "schemas" in SQLAlchemy).
.. change::
:tags: bug, batch
:tickets: 1021
Added an error raise for the condition where
:meth:`.Operations.batch_alter_table` is used in ``--sql`` mode, where the
operation requires table reflection, as is the case when running against
SQLite without giving it a fixed ``Table`` object. Previously the operation
would fail with an internal error. To get a "move and copy" batch
operation as a SQL script without connecting to a database,
a ``Table`` object should be passed to the
:paramref:`.Operations.batch_alter_table.copy_from` parameter so that
reflection may be skipped.
.. changelog::
:version: 1.8.0
:released: May 31, 2022
.. change::
:tags: feature, typing
:tickets: 764
:pep:`484` typing annotations have been added to the ``env.py`` and
revision template files within migration templates. Pull request by Nikita
Sobolev.
.. change::
:tags: usecase, operations
:tickets: 1037
The ``op.drop_table()`` operation directive will now trigger the
``before_drop()`` and ``after_drop()`` DDL event hooks at the table level,
which is similar to how the ``before_create()`` and ``after_create()``
hooks are triggered by the ``op.create_table()`` directive. Note that as
``op.drop_table()`` accepts only a table name and optional schema name, the
``Table`` object received by the event will not have any information within
it other than the table name and schema name.
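As an illustration, a hypothetical class-level listener would observe the
drop the same way it observes creates; only the name and schema attributes
of the received ``Table`` carry real information::
    from sqlalchemy import Table, event
    @event.listens_for(Table, "before_drop")
    def log_drop(table, connection, **kw):
        # table.name / table.schema are the only populated fields here
        print(f"about to drop table {table.name!r}")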
.. change::
:tags: installation, changed
:tickets: 1025
Alembic 1.8 now supports Python 3.7 and above.
.. change::
:tags: changed, environment
:tickets: 987
The "Pylons" environment template has been removed as of Alembic 1.8. This
template was based on the very old pre-Pyramid Pylons web framework which
has been long superseded by Pyramid.
.. change::
:tags: bug, revisioning
:tickets: 1026
Fixed issue where a downgrade using a relative revision would
fail in case of multiple branches with a single effectively
head due to interdependencies between revisions.
.. change::
:tags: usecase, commands
:tickets: 1027
Added new token ``epoch`` to the ``file_template`` option, which will
populate the integer epoch as determined by ``int(create_date.timestamp())``.
Pull request courtesy Caio Carvalho.
.. change::
:tags: bug, batch
:tickets: 1034
Fixed issue in batch mode where CREATE INDEX would not use a new column
name in the case of a column rename.
.. changelog::
:version: 1.7.7
:released: March 14, 2022
.. change::
:tags: bug, operations
:tickets: 1004
Fixed issue where using :meth:`.Operations.create_table` in conjunction
with a :class:`.CheckConstraint` that referred to table-bound
:class:`.Column` objects rather than string expressions would be added to
the parent table potentially multiple times, resulting in an incorrect DDL
sequence. Pull request courtesy Nicolas CANIART.
.. change::
:tags: bug, environment
:tickets: 986
The ``logging.fileConfig()`` line in ``env.py`` templates, which is used
to setup Python logging for the migration run, is now conditional on
:attr:`.Config.config_file_name` not being ``None``. Otherwise, the line
is skipped as there is no default logging configuration present.
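The relevant portion of the generated ``env.py`` now reads roughly as
follows (a sketch of the template logic)::
    from logging.config import fileConfig
    # only set up Python logging when a config file is actually present
    if config.config_file_name is not None:
        fileConfig(config.config_file_name)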
.. change::
:tags: bug, mssql
:tickets: 977
Fixed bug where an :meth:`.Operations.alter_column` operation would change
a "NOT NULL" column to "NULL" by emitting an ALTER COLUMN statement that
did not specify "NOT NULL". (In the absence of "NOT NULL" T-SQL was
implicitly assuming "NULL"). An :meth:`.Operations.alter_column` operation
that specifies :paramref:`.Operations.alter_column.type_` should also
include either :paramref:`.Operations.alter_column.nullable` or
:paramref:`.Operations.alter_column.existing_nullable` to inform Alembic as
to whether the emitted DDL should include "NULL" or "NOT NULL"; a warning
is now emitted if this is missing under this scenario.
.. changelog::
:version: 1.7.6
:released: February 1, 2022
.. change::
:tags: bug, batch, regression
:tickets: 982
Fixed regression where usage of a ``with_variant()`` datatype in
conjunction with the ``existing_type`` option of ``op.alter_column()``
under batch mode would lead to an internal exception.
.. change::
:tags: usecase, commands
:tickets: 964
Add a new command ``alembic ensure_version``, which will ensure that the
Alembic version table is present in the target database, but does not
alter its contents. Pull request courtesy Kai Mueller.
.. change::
:tags: bug, autogenerate
Implemented support for recognizing and rendering SQLAlchemy "variant"
types going forward into SQLAlchemy 2.0, where the architecture of
"variant" datatypes will be changing.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 968
Added a rule to the MySQL impl so that the translation between JSON /
LONGTEXT is accommodated by autogenerate, treating LONGTEXT from the server
as equivalent to an existing JSON in the model.
.. change::
:tags: mssql
Removed a warning raised by SQLAlchemy when dropping constraints
on MSSQL regarding statement caching.
.. changelog::
:version: 1.7.5
:released: November 11, 2021
.. change::
:tags: bug, tests
Adjustments to the test suite to accommodate for error message changes
occurring as of SQLAlchemy 1.4.27.
.. changelog::
:version: 1.7.4
:released: October 6, 2021
.. change::
:tags: bug, regression
:tickets: 934
Fixed a regression that prevented the use of post write hooks
on Python versions lower than 3.9.
.. change::
:tags: bug, environment
:tickets: 944
Fixed issue where the :meth:`.MigrationContext.autocommit_block` feature
would fail to function when using a SQLAlchemy engine using 2.0 future
mode.
.. changelog::
:version: 1.7.3
:released: September 17, 2021
.. change::
:tags: bug, mypy
:tickets: 914
Fixed type annotations for the "constraint_name" argument of operations
``create_primary_key()``, ``create_foreign_key()``. Pull request courtesy
TilmanK.
.. changelog::
:version: 1.7.2
:released: September 17, 2021
.. change::
:tags: bug, typing
:tickets: 900
Added missing attributes from context stubs.
.. change::
:tags: bug, mypy
:tickets: 897
Fixed an import in one of the .pyi files that was triggering an
assertion error in some versions of mypy.
.. change::
:tags: bug, regression, ops
:tickets: 920
Fixed issue where registration of custom ops was prone to failure due to
the registration process running ``exec()`` on generated code that as of
the 1.7 series includes pep-484 annotations, which in the case of end user
code would result in name resolution errors when the exec occurs. The logic
in question has been altered so that the annotations are rendered as
forward references so that the ``exec()`` can proceed.
.. changelog::
:version: 1.7.1
:released: August 30, 2021
.. change::
:tags: bug, installation
:tickets: 893
Corrected "universal wheel" directive in setup.cfg so that building a wheel
does not target Python 2. The PyPi files index for 1.7.0 was corrected
manually. Pull request courtesy layday.
.. change::
:tags: bug, pep484
:tickets: 895
Fixed issue in generated .pyi files where default values for ``Optional``
arguments were missing, thereby causing mypy to consider them as required.
.. change::
:tags: bug, regression, batch
:tickets: 896
Fixed regression in batch mode due to :ticket:`883` where the "auto" mode
of batch would fail to accommodate any additional migration directives
beyond encountering an ``add_column()`` directive, due to a mis-application
of the conditional logic that was added as part of this change, leading to
"recreate" mode not being used in cases where it is required for SQLite
such as for unique constraints.
.. changelog::
:version: 1.7.0
:released: August 30, 2021
.. change::
:tags: bug, operations
:tickets: 879
Fixed regression due to :ticket:`803` where the ``.info`` and ``.comment``
attributes of ``Table`` would be lost inside of the :class:`.DropTableOp`
class, which when "reversed" into a :class:`.CreateTableOp` would then have
lost these elements. Pull request courtesy Nicolas CANIART.
.. change::
:tags: feature, environment
:tickets: 842
Enhance ``version_locations`` parsing to handle paths containing spaces.
The new configuration option ``version_path_separator`` specifies the
character to use when splitting the ``version_locations`` string. The
default for new configurations is ``version_path_separator = os``,
which will use ``os.pathsep`` (e.g., ``;`` on Windows).
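As an illustration, a hypothetical ``alembic.ini`` using two version
directories that contain spaces might read::
    version_locations = alembic/versions one;alembic/versions two
    version_path_separator = ;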
.. change::
:tags: installation, changed
Alembic 1.7 now supports Python 3.6 and above; support for prior versions
including Python 2.7 has been dropped.
.. change::
:tags: bug, sqlite, batch
:tickets: 883
Batch "auto" mode will now select for "recreate" if the ``add_column()``
operation is used on SQLite, and the column itself meets the criteria for
SQLite where ADD COLUMN is not allowed, in this case a functional or
parenthesized SQL expression or a ``Computed`` (i.e. generated) column.
.. change::
:tags: changed, installation
:tickets: 674
Make the ``python-dateutil`` library an optional dependency.
This library is only required if the ``timezone`` option
is used in the Alembic configuration.
An extra require named ``tz`` is available with
``pip install alembic[tz]`` to install it.
.. change::
:tags: bug, commands
:tickets: 856
Re-implemented the ``python-editor`` dependency as a small internal
function to avoid the need for external dependencies.
.. change::
:tags: usecase, batch
:tickets: 884
Named CHECK constraints are now supported by batch mode, and will
automatically be part of the recreated table assuming they are named. They
also can be explicitly dropped using ``op.drop_constraint()``. For
"unnamed" CHECK constraints, these are still skipped as they cannot be
distinguished from the CHECK constraints that are generated by the
``Boolean`` and ``Enum`` datatypes.
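A sketch of explicitly dropping a hypothetical named CHECK constraint
within a batch block::
    from alembic import op
    with op.batch_alter_table("account") as batch_op:
        # "ck_account_positive_balance" is a hypothetical constraint name
        batch_op.drop_constraint("ck_account_positive_balance", type_="check")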
Note that this change may require adjustments to migrations that drop or
rename columns which feature an associated named check constraint, such
that an additional ``op.drop_constraint()`` directive should be added for
that named constraint as there will no longer be an associated column
for it; for the ``Boolean`` and ``Enum`` datatypes, an ``existing_type``
keyword may be passed to ``BatchOperations.drop_constraint`` as well.
.. seealso::
:ref:`batch_schematype_constraints`
:ref:`batch_check_constraints`
.. change::
:tags: changed, installation
:tickets: 885
The dependency on ``pkg_resources`` which is part of ``setuptools`` has
been removed, so there is no longer any runtime dependency on
``setuptools``. The functionality has been replaced with
``importlib.metadata`` and ``importlib.resources`` which are both part of
Python std.lib, or via the PyPI dependency ``importlib-metadata`` for Python
versions < 3.8 and ``importlib-resources`` for Python versions < 3.9
(while importlib.resources was added to Python in 3.7, it did not include
the "files" API until 3.9).
.. change::
:tags: feature, tests
:tickets: 855
Created a "test suite" similar to the one for SQLAlchemy, allowing
developers of third-party dialects to test their code against a set of
Alembic tests that have been specially selected to exercise
back-end database operations. At the time of release,
third-party dialects that have adopted the Alembic test suite to verify
compatibility include
`CockroachDB <https://pypi.org/project/sqlalchemy-cockroachdb/>`_ and
`SAP ASE (Sybase) <https://pypi.org/project/sqlalchemy-sybase/>`_.
.. change::
:tags: bug, postgresql
:tickets: 874
Fixed issue where usage of the PostgreSQL ``postgresql_include`` option
within a :meth:`.Operations.create_index` would raise a KeyError, as the
additional column(s) need to be added to the table object used by the
construct internally. The issue is equivalent to the SQL Server issue fixed
in :ticket:`513`. Pull request courtesy Steven Bronson.
.. change::
:tags: feature, general
pep-484 type annotations have been added throughout the library.
Additionally, stub .pyi files have been added for the "dynamically"
generated Alembic modules ``alembic.op`` and ``alembic.config``, which
include complete function signatures and docstrings, so that the functions
in these namespaces will have both IDE support (vscode, pycharm, etc) as
well as support for typing tools like Mypy. The files themselves are
statically generated from their source functions within the source tree.
.. changelog::
:version: 1.6.5
:released: May 27, 2021
.. change::
:tags: bug, autogenerate
:tickets: 849
Fixed issue where dialect-specific keyword arguments within the
:class:`.DropIndex` operation directive would not render in the
autogenerated Python code. As support was improved for adding dialect
specific arguments to directives as part of :ticket:`803`, in particular
arguments such as "postgresql_concurrently" which apply to the actual
create/drop of the index, support was needed for these to render even in a
drop index operation. Pull request courtesy Jet Zhou.
.. changelog::
:version: 1.6.4
:released: May 24, 2021
.. change::
:tags: bug, regression, op directives
:tickets: 848
Fixed regression caused by just fixed :ticket:`844` that scaled back the
filter for ``unique=True/index=True`` too far, such that these directives no
longer worked for the ``op.create_table()`` op; this has been fixed.
.. changelog::
:version: 1.6.3
:released: May 21, 2021
.. change::
:tags: bug, regression, autogenerate
:tickets: 844
Fixed 1.6-series regression where ``UniqueConstraint`` and to a lesser
extent ``Index`` objects would be doubled up in the generated model when
the ``unique=True`` / ``index=True`` flags were used.
.. change::
:tags: bug, autogenerate
:tickets: 839
Fixed a bug where paths defined in post-write hook options
would be wrongly escaped in non posix environment (Windows).
.. change::
:tags: bug, regression, versioning
:tickets: 843
Fixed regression where a revision file that contained its own down revision
as a dependency would cause an endless loop in the traversal logic.
.. changelog::
:version: 1.6.2
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 839
Fixed additional regression nearly the same as that of :ticket:`838` just
released in 1.6.1 but within a slightly different codepath, where "alembic
downgrade head" (or equivalent) would fail instead of iterating no
revisions.
.. changelog::
:version: 1.6.1
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 838
Fixed regression in new revisioning traversal where "alembic downgrade
base" would fail if the database itself were clean and unversioned;
additionally repairs the case where downgrade would fail if attempting
to downgrade to the current head that is already present.
.. changelog::
:version: 1.6.0
:released: May 3, 2021
.. change::
:tags: bug, autogenerate
:tickets: 803
Refactored the implementation of :class:`.MigrateOperation` constructs such
as :class:`.CreateIndexOp`, :class:`.CreateTableOp`, etc. so that they no
longer rely upon maintaining a persistent version of each schema object
internally; instead, the state variables of each operation object will be
used to produce the corresponding construct when the operation is invoked.
The rationale is so that environments which make use of
operation-manipulation schemes such as those discussed in
:ref:`autogen_rewriter` are better supported, allowing end-user code to
manipulate the public attributes of these objects which will then be
expressed in the final output, an example is
``some_create_index_op.kw["postgresql_concurrently"] = True``.
Previously, these objects when generated from autogenerate would typically
hold onto the original, reflected element internally without honoring the
other state variables of each construct, preventing the public API from
working.
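As a sketch of the kind of end-user manipulation this refactor enables, a
:class:`.Rewriter` hook might set a dialect option on each create-index op
before it is rendered (the CONCURRENTLY flag here is illustrative)::
    from alembic.autogenerate import rewriter
    from alembic.operations import ops
    writer = rewriter.Rewriter()
    @writer.rewrites(ops.CreateIndexOp)
    def add_concurrently(context, revision, op):
        # public state set here is now honored in the final output
        op.kw["postgresql_concurrently"] = True
        return op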
.. change::
:tags: bug, environment
:tickets: 829
Fixed regression caused by the SQLAlchemy 1.4/2.0 compatibility switch
where calling ``.rollback()`` or ``.commit()`` explicitly within the
``context.begin_transaction()`` context manager would cause it to fail when
the block ended, as it did not expect that the transaction was manually
closed.
.. change::
:tags: bug, autogenerate
:tickets: 827
Improved the rendering of ``op.add_column()`` operations when adding
multiple columns to an existing table, so that the order of these
statements matches the order in which the columns were declared in the
application's table metadata. Previously the added columns were being
sorted alphabetically.
.. change::
:tags: feature, autogenerate
:tickets: 819
Fixed the documentation regarding the default command-line argument position
of the revision script filename within the post-write hook arguments.
Implemented a ``REVISION_SCRIPT_FILENAME`` token, enabling the position to be
changed. Switched from ``str.split()`` to ``shlex.split()`` for more robust
command-line argument parsing.
.. change::
:tags: feature
:tickets: 822
Implemented a ``.cwd`` (current working directory) suboption for post-write
hooks (of type ``console_scripts``). This is useful for tools like
pre-commit, which rely on the working directory to locate the necessary
config files. Added pre-commit as an example to the documentation, and
renamed some variables from :ticket:`819` to improve readability.
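A hypothetical configuration using the new suboption, modeled on the
pre-commit example added to the documentation::
    [post_write_hooks]
    hooks = pre_commit
    pre_commit.type = console_scripts
    pre_commit.entrypoint = pre-commit
    pre_commit.options = run --files REVISION_SCRIPT_FILENAME
    pre_commit.cwd = %(here)s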
.. change::
:tags: bug, versioning
:tickets: 765, 464
The algorithm used for calculating downgrades/upgrades/iterating
revisions has been rewritten, to resolve ongoing issues of branches
not being handled consistently particularly within downgrade operations,
as well as for overall clarity and maintainability. This change includes
that a deprecation warning is emitted if an ambiguous command such as
"downgrade -1" is given when multiple heads are present.
In particular, the change implements a long-requested use case of allowing
downgrades of a single branch to a branchpoint.
Huge thanks to Simon Bowly for their impressive efforts in successfully
tackling this very difficult problem.
.. change::
:tags: bug, batch
:tickets: 799
Added missing ``batch_op.create_table_comment()``,
``batch_op.drop_table_comment()`` directives to batch ops.
.. changelog::
:version: 1.5.8
:released: March 23, 2021
.. change::
:tags: bug, environment
:tickets: 816
Fixed regression caused by SQLAlchemy 1.4 where the "alembic current"
command would fail due to changes in the ``URL`` object.
.. changelog::
:version: 1.5.7
:released: March 11, 2021
.. change::
:tags: bug, autogenerate
:tickets: 813
Adjusted the recently added
:paramref:`.EnvironmentContext.configure.include_name` hook to accommodate
for additional object types such as "views" that don't have a parent table,
to support third party recipes and extensions. Pull request courtesy Oliver
Rice.
.. changelog::
:version: 1.5.6
:released: March 5, 2021
.. change::
:tags: bug, mssql, operations
:tickets: 812
Fixed bug where the "existing_type" parameter, which the MSSQL dialect
requires in order to change the nullability of a column in the absence of
also changing the column type, would cause an ALTER COLUMN operation to
incorrectly render a second ALTER statement without the nullability if a
new type were also present, as the MSSQL-specific contract did not
anticipate all three of "nullability", ``"type_"`` and "existing_type" being
sent at the same time.
.. change::
:tags: template
:tickets: 805
Added an async template to Alembic to bootstrap environments that use
async DBAPI. Updated the cookbook to include a migration guide
on how to adapt an existing environment for use with async DBAPI drivers.
.. changelog::
:version: 1.5.5
:released: February 20, 2021
.. change::
:tags: bug
Adjusted the use of SQLAlchemy's ".copy()" internals to use "._copy()"
for version 1.4.0, as this method is being renamed.
.. change::
:tags: bug, environment
:tickets: 797
Added new config file option ``prepend_sys_path``, which is a series of
paths that will be prepended to sys.path; the default value in newly
generated alembic.ini files is ".". This fixes a long-standing issue
where for some reason running the alembic command line would not place the
local "." path in sys.path, meaning an application locally present in "."
and importable through normal channels, e.g. python interpreter, pytest,
etc. would not be located by Alembic, even though the ``env.py`` file is
loaded relative to the current path when ``alembic.ini`` contains a
relative path. To enable for existing installations, add the option to the
alembic.ini file as follows::
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
.. seealso::
:ref:`installation` - updated documentation reflecting that local
installation of the project is not necessary if running the Alembic cli
from the local path.
.. changelog::
:version: 1.5.4
:released: February 3, 2021
.. change::
:tags: bug, versioning
:tickets: 789
Fixed bug in versioning model where a downgrade across a revision with a
dependency on another branch, yet an ancestor is also dependent on that
branch, would produce an erroneous state in the alembic_version table,
making upgrades impossible without manually repairing the table.
.. changelog::
:version: 1.5.3
:released: January 29, 2021
.. change::
:tags: bug, autogenerate
:tickets: 786
Changed the default ordering of "CREATE" and "DROP" statements indexes and
unique constraints within the autogenerate process, so that for example in
an upgrade() operation, a particular index or constraint that is to be
replaced such as for a casing convention change will not produce any naming
conflicts. For foreign key constraint objects, this is already how
constraints are ordered, and for table objects, users would normally want
to use :meth:`.Operations.rename_table` in any case.
.. change::
:tags: bug, autogenerate, mssql
:tickets: 787
Fixed assorted autogenerate issues with SQL Server:
* ignore default reflected identity on primary_key columns
* improve server default comparison
.. change::
:tags: bug, mysql, autogenerate
:tickets: 788
Fixed issue where autogenerate rendering of ``op.alter_column()`` would
fail to include MySQL ``existing_nullable=False`` if the column were part
of a primary key constraint within the table metadata.
.. changelog::
:version: 1.5.2
:released: January 20, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 784
Fixed regression where new "loop detection" feature introduced in
:ticket:`757` produced false positives for revision names that have
overlapping substrings between revision number and down revision and/or
dependency, if the downrev/dependency were not in sequence form.
.. change::
:tags: bug, environment
:tickets: 782
Fixed regression where Alembic would fail to create a transaction properly
if the :class:`sqlalchemy.engine.Connection` were a so-called "branched"
connection, that is, one where the ``.connect()`` method had been called to
create a "sub" connection.
.. changelog::
:version: 1.5.1
:released: January 19, 2021
.. change::
:tags: bug, installation, commands
:tickets: 780
Fixed installation issue where the "templates" directory was not being
installed, preventing commands like "list_templates" and "init" from
working.
.. changelog::
:version: 1.5.0
:released: January 18, 2021
.. change::
:tags: usecase, operations
:tickets: 730
Added support for rendering of "identity" elements on
:class:`.Column` objects, supported in SQLAlchemy via
the :class:`.Identity` element introduced in version 1.4.
Adding columns with identity is supported on PostgreSQL,
MSSQL and Oracle. Changing the identity options or removing
it is supported only on PostgreSQL and Oracle.
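A minimal sketch of adding such a column within a migration (table and
column names are hypothetical)::
    from alembic import op
    from sqlalchemy import Column, Integer, Identity
    op.add_column(
        "account",
        Column("seq_id", Integer, Identity(start=1), nullable=False),
    )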
.. change::
:tags: changed, environment
To accommodate SQLAlchemy 1.4 and 2.0, the migration model now no longer
assumes that the SQLAlchemy Connection will autocommit an individual
operation. This essentially means that for databases that use
non-transactional DDL (pysqlite current driver behavior, MySQL), there is
still a BEGIN/COMMIT block that will surround each individual migration.
Databases that support transactional DDL should continue to have the
same flow, either per migration or per-entire run, depending on the
value of the :paramref:`.Environment.configure.transaction_per_migration`
flag.
.. change::
:tags: changed, environment
A :class:`.CommandError` is raised if a ``sqlalchemy.engine.Engine`` is
passed to the :meth:`.MigrationContext.configure` method instead of a
``sqlalchemy.engine.Connection`` object. Previously, this would be a
warning only.
.. change::
:tags: bug, operations
:tickets: 753
Modified the ``add_column()`` operation such that the ``Column`` object in
use is shallow copied to a new instance if that ``Column`` is already
attached to a ``table()`` or ``Table``. This accommodates for the change
made in SQLAlchemy issue #5618 which prohibits a ``Column`` from being
associated with multiple ``table()`` objects. This resumes support for
using a ``Column`` inside of an Alembic operation that already refers to a
parent ``table()`` or ``Table`` as well as allows operation objects just
autogenerated to work.
.. change::
:tags: feature, autogenerate
:tickets: 650
Added new hook :paramref:`.EnvironmentContext.configure.include_name`,
which complements the
:paramref:`.EnvironmentContext.configure.include_object` hook by providing
a means of preventing objects of a certain name from being autogenerated
**before** the SQLAlchemy reflection process takes place, and notably
includes explicit support for passing each schema name when
:paramref:`.EnvironmentContext.configure.include_schemas` is set to True.
This is most important especially for environments that make use of
:paramref:`.EnvironmentContext.configure.include_schemas` where schemas are
actually databases (e.g. MySQL) in order to prevent reflection sweeps of
the entire server.
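A sketch of the hook in use to constrain reflection to a single schema
(the schema name is hypothetical)::
    def include_name(name, type_, parent_names):
        if type_ == "schema":
            # only reflect objects from this one schema / database
            return name in ("public",)
        return True
    context.configure(
        # ... other options ...
        include_schemas=True,
        include_name=include_name,
    )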
.. seealso::
:ref:`autogenerate_include_hooks` - new documentation section
.. change::
:tags: removed, autogenerate
The long deprecated
:paramref:`.EnvironmentContext.configure.include_symbol` hook is removed.
The :paramref:`.EnvironmentContext.configure.include_object`
and :paramref:`.EnvironmentContext.configure.include_name`
hooks both achieve the goals of this hook.
.. change::
:tags: bug, autogenerate
:tickets: 721
Added rendering for the ``Table.prefixes`` element to autogenerate so that
the rendered Python code includes these directives. Pull request courtesy
Rodrigo Ce Moretto.
.. change::
:tags: bug, batch
:tickets: 761
Added missing "create comment" feature for columns that are altered in
batch migrations.
.. change::
:tags: changed
:tickets: 748
Alembic 1.5.0 now supports **Python 2.7 and Python 3.6 and above**, as well
as **SQLAlchemy 1.3.0 and above**. Support is removed for Python 3
versions prior to 3.6 and SQLAlchemy versions prior to the 1.3 series.
.. change::
:tags: bug, batch
:tickets: 773
Made an adjustment to the PostgreSQL dialect to allow it to work more
effectively in batch mode, where a datatype like Boolean or non-native Enum
that may have embedded rules to generate CHECK constraints will be more
correctly handled in that these constraints usually will not have been
generated on the PostgreSQL backend; previously it would inadvertently
assume they existed unconditionally in a special PG-only "drop constraint"
step.
.. change::
:tags: feature, versioning
:tickets: 757
The revision tree is now checked for cycles and loops between revision
files when the revision environment is loaded up. Scenarios such as a
revision pointing to itself, or a revision that can reach itself via a
loop, are handled and will raise the :class:`.CycleDetected` exception when
the environment is loaded (expressed from the Alembic commandline as a
failure message and nonzero return code). Previously, these situations were
silently ignored up front, and the behavior of revision traversal would
either be silently incorrect, or would produce errors such as
:class:`.RangeNotAncestorError`. Pull request courtesy Koichiro Den.
.. change::
:tags: usecase, commands
Add ``__main__.py`` file to alembic package to support invocation
with ``python -m alembic``.
.. change::
:tags: removed, commands
Removed deprecated ``--head_only`` option to the ``alembic current``
command
.. change::
:tags: removed, operations
Removed legacy parameter names from operations, these have been emitting
warnings since version 0.8. In the case that legacy version files have not
yet been updated, these can be modified directly in order to maintain
compatibility:
* :meth:`.Operations.drop_constraint` - "type" (use ``"type_"``) and "name"
(use "constraint_name")
* :meth:`.Operations.create_primary_key` - "cols" (use "columns") and
"name" (use "constraint_name")
* :meth:`.Operations.create_unique_constraint` - "name" (use
"constraint_name"), "source" (use "table_name") and "local_cols" (use
"columns")
* :meth:`.Operations.batch_create_unique_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_foreign_key` - "name" (use "constraint_name"),
"source" (use "source_table"), "referent" (use "referent_table")
* :meth:`.Operations.batch_create_foreign_key` - "name" (use
"constraint_name"), "referent" (use "referent_table")
* :meth:`.Operations.create_check_constraint` - "name" (use
"constraint_name"), "source" (use "table_name")
* :meth:`.Operations.batch_create_check_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_index` - "name" (use "index_name")
* :meth:`.Operations.drop_index` - "name" (use "index_name"), "tablename"
(use "table_name")
* :meth:`.Operations.batch_drop_index` - "name" (use "index_name"),
* :meth:`.Operations.create_table` - "name" (use "table_name")
* :meth:`.Operations.drop_table` - "name" (use "table_name")
* :meth:`.Operations.alter_column` - "name" (use "new_column_name")
.. changelog::
:version: 1.4.3
:released: September 11, 2020
.. change::
:tags: bug, sqlite, batch
:tickets: 711
Added support to drop named CHECK constraints that are specified as part of
a column, rather than table wide. Previously, only constraints associated
with the table were considered.
.. change::
:tags: bug, ops, mysql
:tickets: 736
Fixed issue where the MySQL dialect would not correctly render the server
default of a column in an alter operation, if the operation were
programmatically generated from an autogenerate pass as it would not
accommodate for the full structure of the DefaultClause construct.
.. change::
:tags: bug, sqlite, batch
:tickets: 697
Fixed issue where the CAST applied to a JSON column when copying a SQLite
table during batch mode would cause the data to be lost, as SQLite's CAST
with JSON appears to convert the data to the value "0". The CAST is now
skipped in a dialect-specific manner, including for JSON columns on SQLite.
Pull request courtesy Sebastián Ramírez.
.. change::
:tags: bug, commands
:tickets: 694
The ``alembic current`` command no longer creates an ``alembic_version``
table in the database if one does not exist already, returning no version
as the current version. This allows checking for migrations in parallel
without introducing race conditions. Pull request courtesy Nikolay
Edigaryev.
.. change::
:tags: bug, batch
Fixed issue where columns in a foreign-key referenced table would be
replaced with null-type columns during a batch operation; while this did
not generally have any side effects, it could theoretically impact a batch
operation that also targets that table directly and also would interfere
with future changes to the ``.append_column()`` method to disallow implicit
replacement of columns.
.. change::
:tags: bug, mssql
:tickets: 716
Fixed issue where the ``mssql_drop_foreign_key=True`` flag on
``op.drop_column`` would lead to an incorrect syntax error due to a typo in
the SQL emitted; the same typo was present in the test as well, so it was not
detected. Pull request courtesy Oleg Shigorin.
.. changelog::
:version: 1.4.2
:released: March 19, 2020
.. change::
:tags: usecase, autogenerate
:tickets: 669
Adjusted autogen comparison to accommodate for backends that support
computed column reflection, dependent on SQLAlchemy version 1.3.16 or
higher. This emits a warning if the SQL expression inside of a
:class:`.Computed` value changes between the metadata and the database, as
these expressions can't be changed without dropping and recreating the
column.
.. change::
:tags: bug, tests
:tickets: 668
Fixed an issue that prevented the test suite from running with the
recently released py.test 5.4.0.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 671
Fixed more false-positive failures produced by the new "compare type" logic
first added in :ticket:`605`, particularly impacting MySQL string types
regarding flags such as "charset" and "collation".
.. change::
:tags: bug, op directives, oracle
:tickets: 670
Fixed issue in Oracle backend where a table RENAME with a schema-qualified
name would include the schema in the "to" portion, which is rejected by
Oracle.
.. changelog::
:version: 1.4.1
:released: March 1, 2020
.. change::
:tags: bug, autogenerate
:tickets: 661
Fixed regression caused by the new "type comparison" logic introduced in
1.4 as part of :ticket:`605` where comparisons of MySQL "unsigned integer"
datatypes would produce false positives, as the regular expression logic
was not correctly parsing the "unsigned" token when MySQL's default display
width would be returned by the database. Pull request courtesy Paul
Becotte.
.. change::
:tags: bug, environment
:tickets: 663
Error message for "path doesn't exist" when loading up script environment
now displays the absolute path. Pull request courtesy Rowan Hart.
.. change::
:tags: bug, autogenerate
:tickets: 654
Fixed regression in 1.4.0 due to :ticket:`647` where unique constraint
comparison with mixed case constraint names while not using a naming
convention would produce false positives during autogenerate.
.. change::
:tags: bug, environment
The check for matched rowcount when the alembic_version table is updated or
deleted from is now conditional based on whether or not the dialect
supports the concept of "rowcount" for UPDATE or DELETE rows matched. Some
third party dialects do not support this concept. Pull request courtesy Ke
Zhu.
.. change::
:tags: bug, operations
:tickets: 655
Fixed long-standing bug where an inline column CHECK constraint would not
be rendered within an "ADD COLUMN" operation. The DDL compiler is now
consulted for inline constraints within the :meth:`.Operations.add_column`
method as is done for regular CREATE TABLE operations.
.. changelog::
:version: 1.4.0
:released: February 4, 2020
.. change::
:tags: change
The internal inspection routines no longer use SQLAlchemy's
``Inspector.from_engine()`` method, which is expected to be deprecated in
1.4. The ``inspect()`` function is now used.
.. change::
:tags: bug, autogenerate
:tickets: 647
Adjusted the unique constraint comparison logic in a similar manner as that
of :ticket:`421` did for indexes in order to take into account SQLAlchemy's
own truncation of long constraint names when a naming convention is in use.
Without this step, a name that is truncated by SQLAlchemy based on a unique
constraint naming convention or hardcoded name will not compare properly.
.. change::
:tags: feature, batch
:tickets: 640
Added new parameters :paramref:`.BatchOperations.add_column.insert_before`,
:paramref:`.BatchOperations.add_column.insert_after` which provide for
establishing the specific position in which a new column should be placed.
Also added :paramref:`.Operations.batch_alter_table.partial_reordering`
which allows the complete set of columns to be reordered when the new table
is created. Both operations apply only to when batch mode is recreating
the whole table using ``recreate="always"``. Thanks to Marcin Szymanski
for assistance with the implementation.
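A sketch of the new parameter in use; note the positioning takes effect
only when batch mode recreates the table (names are hypothetical)::
    from alembic import op
    from sqlalchemy import Column, String
    with op.batch_alter_table("user_account", recreate="always") as batch_op:
        batch_op.add_column(
            Column("middle_name", String(50)), insert_after="first_name"
        )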
.. change::
:tags: usecase, environment
:tickets: 648
Moved the use of the ``__file__`` attribute at the base of the Alembic
package into the one place that it is specifically needed, which is when
the config attempts to locate the template directory. This helps to allow
Alembic to be fully importable in environments that are using Python
memory-only import schemes. Pull request courtesy layday.
.. change::
:tags: bug, autogenerate
:tickets: 605
A major rework of the "type comparison" logic is in place which changes the
entire approach by which column datatypes are compared. Types are now
compared based on the DDL string generated by the metadata type vs. the
datatype reflected from the database. This means we compare types based on
what would actually render and additionally if elements of the types change
like string length, those changes are detected as well. False positives
like those generated between SQLAlchemy Boolean and MySQL TINYINT should
also be resolved. Thanks very much to Paul Becotte for lots of hard work
and patience on this one.
.. seealso::
:ref:`autogenerate_detects` - updated comments on type comparison
.. changelog::
:version: 1.3.3
:released: January 22, 2020
.. change::
:tags: bug, postgresql
:tickets: 637
Fixed issue where COMMENT directives for PostgreSQL failed to correctly
include an explicit schema name, as well as correct quoting rules for
schema, table, and column names. Pull request courtesy Matthew Sills.
.. change::
:tags: usecase, operations
:tickets: 624
Added support for rendering of "computed" elements on :class:`.Column`
objects, supported in SQLAlchemy via the new :class:`.Computed` element
introduced in version 1.3.11. Pull request courtesy Federico Caselli.
Note that there is currently no support for ALTER COLUMN to add, remove, or
modify the "GENERATED ALWAYS AS" element from a column; at least for
PostgreSQL, it does not seem to be supported by the database. Additionally,
SQLAlchemy does not currently reliably reflect the "GENERATED ALWAYS AS"
phrase from an existing column, so there is also no autogenerate support
for addition or removal of the :class:`.Computed` element to or from an
existing column, there is only support for adding new columns that include
the :class:`.Computed` element. In the case that the :class:`.Computed`
element is removed from the :class:`.Column` object in the table metadata,
PostgreSQL and Oracle currently reflect the "GENERATED ALWAYS AS"
expression as the "server default" which will produce an op that tries to
drop the element as a default.
.. changelog::
:version: 1.3.2
:released: December 16, 2019
.. change::
:tags: bug, api, autogenerate
:tickets: 635
Fixed regression introduced by :ticket:`579` where server default rendering
functions began to require a dialect implementation, however the
:func:`.render_python_code` convenience function did not include one, thus
causing the function to fail when used in a server default context. The
function now accepts a migration context argument and also creates one
against the default dialect if one is not provided.
.. changelog::
:version: 1.3.1
:released: November 13, 2019
.. change::
:tags: bug, mssql
:tickets: 621
Fixed bug in MSSQL dialect where the drop constraint execution steps used
to remove server default or implicit foreign key constraint failed to take
into account the schema name of the target table.
.. changelog::
:version: 1.3.0
:released: October 31, 2019
.. change::
:tags: feature, command
:tickets: 608
Added support for ALEMBIC_CONFIG environment variable,
refers to the location of the alembic configuration script
in lieu of using the -c command line option.
.. change::
:tags: bug, autogenerate
:tickets: 131
Fixed bug in new Variant autogenerate where the order of the arguments to
Variant were mistakenly reversed.
.. change::
:tags: change, compatibility
Some internal modifications have been made to how the names of indexes and
unique constraints work to make use of new functions added in SQLAlchemy
1.4, so that SQLAlchemy has more flexibility over how naming conventions
may be applied to these objects.
.. changelog::
:version: 1.2.1
:released: September 24, 2019
.. change::
:tags: bug, command
:tickets: 601
Reverted the name change of the "revisions" argument to
:func:`.command.stamp` to "revision" as apparently applications are
calling upon this argument as a keyword name. Pull request courtesy
Thomas Bechtold. Special translations are also added to the command
line interface so that it is still known as "revisions" in the CLI.
.. change::
:tags: bug, tests
:tickets: 592
Removed the "test requirements" from "setup.py test", as this command now
only emits a removal error in any case and these requirements are unused.
.. changelog::
:version: 1.2.0
:released: September 20, 2019
.. change::
:tags: feature, command
:tickets: 473
Added new ``--purge`` flag to the ``alembic stamp`` command, which will
unconditionally erase the version table before stamping anything. This is
useful for development where non-existent version identifiers might be left
within the table. Additionally, ``alembic.stamp`` now supports a list of
revision identifiers, which are intended to allow setting up multiple heads
at once. Overall handling of version identifiers within the
``alembic.stamp`` command has been improved with many new tests and
use cases added.
.. change::
:tags: bug, autogenerate
:tickets: 550
Improved the Python rendering of a series of migration operations such that
a single "pass" is rendered for a :class:`.UpgradeOps` or
:class:`.DowngradeOps` based on if no lines of Python code actually
rendered under the operation, rather than whether or not sub-directives
exist. Removed extra "pass" lines that would generate from the
:class:`.ModifyTableOps` directive so that these aren't duplicated under
operation rewriting scenarios.
.. change::
:tags: feature, runtime
:tickets: 123
Added new feature :meth:`.MigrationContext.autocommit_block`, a special
directive which will provide for a non-transactional block inside of a
migration script. The feature requires that the database driver
(e.g. DBAPI) supports the AUTOCOMMIT isolation mode. The directive
also necessarily needs to COMMIT the existing transaction in progress
in order to enter autocommit mode.
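A sketch of the directive inside a migration script, using PostgreSQL's
CONCURRENTLY keyword as the canonical use case (index and table names are
hypothetical)::
    from alembic import op
    def upgrade():
        with op.get_context().autocommit_block():
            op.execute("CREATE INDEX CONCURRENTLY ix_data_x ON data (x)")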
.. seealso::
:meth:`.MigrationContext.autocommit_block`
.. change::
:tags: change, py3k
Python 3.4 support is dropped, as upstream tooling (pip, mysqlclient,
etc.) is already dropping support for Python 3.4, which itself is no longer
maintained.
.. change::
:tags: usecase, autogenerate
:tickets: 518
Added autogenerate support for :class:`.Column` objects that have
dialect-specific ``**kwargs``, support first added in SQLAlchemy 1.3.
This includes SQLite "on conflict" as well as options used by some
third party dialects.
.. change::
:tags: usecase, autogenerate
:tickets: 131
Added rendering for SQLAlchemy ``Variant`` datatypes, which render as the
base type plus one or more ``.with_variant()`` method calls.
.. change::
:tags: usecase, commands
:tickets: 534
Made the command interface revision lookup behavior more strict in that an
Alembic revision number is only resolved based on a partial match rules if
it has at least four characters, to prevent simple typographical issues
from inadvertently running migrations.
.. change::
:tags: feature, commands
:tickets: 307
Added "post write hooks" to revision generation. These allow custom logic
to run after a revision Python script is generated, typically for the
purpose of running code formatters such as "Black" or "autopep8", but may
be used for any arbitrary post-render hook as well, including custom Python
functions or scripts. The hooks are enabled by providing a
``[post_write_hooks]`` section in the alembic.ini file. A single hook
is provided which runs an arbitrary Python executable on the newly
generated revision script, which can be configured to run code formatters
such as Black; full examples are included in the documentation.
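The documented example configures the bundled console_scripts runner to
invoke Black on each new revision file::
    [post_write_hooks]
    hooks = black
    black.type = console_scripts
    black.entrypoint = black
    black.options = -l 79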
.. seealso::
:ref:`post_write_hooks`
.. change::
:tags: feature, environment
:tickets: 463
Added new flag ``--package`` to ``alembic init``. For environments where
the Alembic migration files and such are within the package tree and
importable as modules, this flag can be specified which will add the
additional ``__init__.py`` files in the version location and the
environment location.
.. change::
:tags: bug, autogenerate
:tickets: 549
Fixed bug where rendering of comment text for table-level comments within
:meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` was not properly quote-escaped
within rendered Python code for autogenerate.
.. change::
:tags: bug, autogenerate
:tickets: 505
Modified the logic of the :class:`.Rewriter` object such that it keeps a
memoization of which directives it has processed, so that it can ensure it
processes a particular directive only once, and additionally fixed
:class:`.Rewriter` so that it functions correctly for multiple-pass
autogenerate schemes, such as the one illustrated in the "multidb"
template. By tracking which directives have been processed, a
multiple-pass scheme which calls upon the :class:`.Rewriter` multiple times
for the same structure as elements are added can work without running
duplicate operations on the same elements more than once.
.. changelog::
:version: 1.1.0
:released: August 26, 2019
.. change::
:tags: change
Alembic 1.1 bumps the minimum version of SQLAlchemy to 1.1. As was the
case before, Python requirements remain at Python 2.7, or Python 3.4 and
above in the 3.x series.
.. change::
:tags: change, internals
The test suite for Alembic now makes use of SQLAlchemy's testing framework
directly. Previously, Alembic had its own version of this framework that
was mostly copied from that of SQLAlchemy to enable testing with older
SQLAlchemy versions. The majority of this code is now removed so that both
projects can leverage improvements from a common testing framework.
.. change::
:tags: bug, commands
:tickets: 562
Fixed bug where the double-percent logic applied to some dialects such as
psycopg2 would be rendered in ``--sql`` mode, by allowing dialect options
to be passed through to the dialect used to generate SQL and then providing
``paramstyle="named"`` so that percent signs need not be doubled. For
users having this issue, existing env.py scripts need to add
``dialect_opts={"paramstyle": "named"}`` to their offline
context.configure(). See the ``alembic/templates/generic/env.py`` template
for an example.
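The relevant offline-mode portion of such an ``env.py`` is sketched below;
surrounding options follow the generic template::
    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )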
.. change::
:tags: bug, py3k
Fixed use of the deprecated "imp" module, which is used to detect pep3147
availability as well as to locate .pyc files, which started emitting
deprecation warnings during the test suite. The warnings were not being
emitted earlier during the test suite, the change is possibly due to
changes in py.test itself but this is not clear. The check for pep3147 is
set to True for any Python version 3.5 or greater now and importlib is used
when available. Note that some dependencies such as distutils may still be
emitting this warning. Tests are adjusted to accommodate for dependencies
that emit the warning as well.
.. change::
:tags: bug, mysql
:tickets: 594
Fixed issue where emitting a change of column name for MySQL did not
preserve the column comment, even if it were specified as existing_comment.
.. change::
:tags: bug, setup
:tickets: 592
Removed the "python setup.py test" feature in favor of a straight run of
"tox". Per Pypa / pytest developers, "setup.py" commands are in general
headed towards deprecation in favor of tox. The tox.ini script has been
updated such that running "tox" with no arguments will perform a single run
of the test suite against the default installed Python interpreter.
.. seealso::
https://github.com/pypa/setuptools/issues/1684
https://github.com/pytest-dev/pytest/issues/5534
.. change::
:tags: usecase, commands
:tickets: 571
The "alembic init" command will now proceed if the target directory exists
as long as it's still empty. Previously, it would not proceed if the
directory existed. The new behavior is modeled from what git does, to
accommodate for container or other deployments where an Alembic target
directory may need to be already mounted instead of being created with
alembic init. Pull request courtesy Aviskar KC.
.. changelog::
:version: 1.0.11
:released: June 25, 2019
.. change::
:tags: bug, sqlite, autogenerate, batch
:tickets: 579
SQLite server default reflection will ensure parenthesis are surrounding a
column default expression that is detected as being a non-constant
expression, such as a ``datetime()`` default, to accommodate for the
requirement that SQL expressions have to be parenthesized when being sent
as DDL. Parenthesis are not added to constant expressions to allow for
maximum cross-compatibility with other dialects and existing test suites
(such as Alembic's), which necessarily entails scanning the expression to
detect constant numeric and string values. The logic is added to the
two "reflection->DDL round trip" paths which are currently autogenerate and
batch migration. Within autogenerate, the logic is on the rendering side,
whereas in batch the logic is installed as a column reflection hook.
.. change::
:tags: bug, sqlite, autogenerate
:tickets: 579
Improved SQLite server default comparison to accommodate for a ``text()``
construct that added parenthesis directly vs. a construct that relied
upon the SQLAlchemy SQLite dialect to render the parenthesis, as well
as improved support for various forms of constant expressions such as
values that are quoted vs. non-quoted.
.. change::
:tags: bug, autogenerate
Fixed bug where the "literal_binds" flag was not being set when
autogenerate would create a server default value, meaning server default
comparisons would fail for functions that contained literal values.
.. change::
:tags: bug, mysql
:tickets: 554
Added support for MySQL "DROP CHECK", which is added as of MySQL 8.0.16,
separate from MariaDB's "DROP CONSTRAINT" for CHECK constraints. The MySQL
Alembic implementation now checks for "MariaDB" in server_version_info to
decide which one to use.
.. change::
:tags: bug, mysql, operations
:tickets: 564
Fixed issue where MySQL databases need to use CHANGE COLUMN when altering a
server default of CURRENT_TIMESTAMP, NOW() and probably other functions
that are only usable with DATETIME/TIMESTAMP columns. While MariaDB
supports both CHANGE and ALTER COLUMN in this case, MySQL databases only
support CHANGE. So the new logic is that if the server default change is
against a DateTime-oriented column, the CHANGE format is used
unconditionally, as in the vast majority of cases the server default is to
be CURRENT_TIMESTAMP which may also be potentially bundled with an "ON
UPDATE CURRENT_TIMESTAMP" directive, which SQLAlchemy does not currently
support as a distinct field. The fix additionally improves the server
default comparison logic when the "ON UPDATE" clause is present and
there are parenthesis to be adjusted for as is the case on some MariaDB
versions.
.. change::
:tags: bug, environment
Warnings emitted by Alembic now include a default stack level of 2, and in
some cases it's set to 3, in order to help warnings indicate more closely
where they are originating from. Pull request courtesy Ash Berlin-Taylor.
.. change::
:tags: bug, py3k
:tickets: 563
Replaced the Python compatibility routines for ``getargspec()`` with a fully
vendored version based on ``getfullargspec()`` from Python 3.3.
Originally, Python was emitting deprecation warnings for this function in
Python 3.8 alphas. While this change was reverted, it was observed that
Python 3 implementations for ``getfullargspec()`` are an order of magnitude
slower as of the 3.4 series where it was rewritten against ``Signature``.
While Python plans to improve upon this situation, SQLAlchemy projects for
now are using a simple replacement to avoid any future issues.
.. changelog::
:version: 1.0.10
:released: April 28, 2019
.. change::
:tags: bug, commands
:tickets: 552
Fixed bug introduced in release 0.9.0 where the helptext for commands
inadvertently got expanded to include function docstrings from the
command.py module. The logic has been adjusted to only refer to the first
line(s) preceding the first line break within each docstring, as was the
original intent.
.. change::
:tags: bug, operations, mysql
:tickets: 551
Added an assertion in :meth:`.RevisionMap.get_revisions` and other methods
which ensures revision numbers are passed as strings or collections of
strings. Driver issues particularly on MySQL may inadvertently be passing
bytes here which leads to failures later on.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 553
Fixed bug when using the
:paramref:`.EnvironmentContext.configure.compare_server_default` flag set
to ``True`` where a server default that is introduced in the table metadata
on an ``Integer`` column, where there is no existing server default in the
database, would raise a ``TypeError``.
.. changelog::
:version: 1.0.9
:released: April 15, 2019
.. change::
:tags: bug, operations
:tickets: 548
Simplified the internal scheme used to generate the ``alembic.op`` namespace
to no longer attempt to generate full method signatures (e.g. rather than
generic ``*args, **kw``) as this was not working in most cases anyway, while
in rare circumstances it would in fact sporadically have access to the real
argument names and then fail when generating the function due to missing
symbols in the argument signature.
.. changelog::
:version: 1.0.8
:released: March 4, 2019
.. change::
:tags: bug, operations
:tickets: 528
Removed use of deprecated ``force`` parameter for SQLAlchemy quoting
functions as this parameter will be removed in a future release.
Pull request courtesy Parth Shandilya (ParthS007).
.. change::
:tags: bug, autogenerate, postgresql, py3k
:tickets: 541
Fixed issue where server default comparison on the PostgreSQL dialect would
fail for a blank string on Python 3.7 only, due to a change in regular
expression behavior in Python 3.7.
.. changelog::
:version: 1.0.7
:released: January 25, 2019
.. change::
:tags: bug, autogenerate
:tickets: 529
Fixed issue in new comment support where autogenerated Python code
for comments wasn't using ``repr()`` thus causing issues with
quoting. Pull request courtesy Damien Garaud.
.. changelog::
:version: 1.0.6
:released: January 13, 2019
.. change::
:tags: feature, operations
:tickets: 422
Added Table and Column level comments for supported backends.
New methods :meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` are added. New arguments
:paramref:`.Operations.alter_column.comment` and
:paramref:`.Operations.alter_column.existing_comment` are added to
:meth:`.Operations.alter_column`. Autogenerate support is also added
to ensure comment add/drops from tables and columns are generated as well
as that :meth:`.Operations.create_table`, :meth:`.Operations.add_column`
both include the comment field from the source :class:`.Table`
or :class:`.Column` object.
.. changelog::
:version: 1.0.5
:released: November 27, 2018
.. change::
:tags: bug, py3k
:tickets: 507
Resolved remaining Python 3 deprecation warnings, covering
the use of inspect.formatargspec() with a vendored version
copied from the Python standard library, importing
collections.abc above Python 3.3 when testing against abstract
base classes, fixing one occurrence of log.warn(), as well as a few
invalid escape sequences.
.. changelog::
:version: 1.0.4
:released: November 27, 2018
.. change::
:tags: change
Code hosting has been moved to GitHub, at
https://github.com/sqlalchemy/alembic. Additionally, the
main Alembic website documentation URL is now
https://alembic.sqlalchemy.org.
.. changelog::
:version: 1.0.3
:released: November 14, 2018
.. change::
:tags: bug, mssql
:tickets: 516
Fixed regression caused by :ticket:`513`, where the logic to consume
``mssql_include`` was not correctly interpreting the case where the flag
was not present, breaking the ``op.create_index`` directive for SQL Server
as a whole.
.. changelog::
:version: 1.0.2
:released: October 31, 2018
.. change::
:tags: bug, autogenerate
:tickets: 515
The ``system=True`` flag on :class:`.Column`, used primarily in conjunction
with the Postgresql "xmin" column, now renders within the autogenerate
render process, allowing the column to be excluded from DDL. Additionally,
adding a system=True column to a model will produce no autogenerate diff as
this column is implicitly present in the database.
.. change::
:tags: bug, mssql
:tickets: 513
Fixed issue where usage of the SQL Server ``mssql_include`` option within a
:meth:`.Operations.create_index` would raise a KeyError, as the additional
column(s) need to be added to the table object used by the construct
internally.
.. changelog::
:version: 1.0.1
:released: October 17, 2018
.. change::
:tags: bug, commands
:tickets: 497
Fixed an issue where revision descriptions were essentially
being formatted twice. Writing a revision description that contained
characters like ``%`` to stdout would fail, because the call to
``config.print_stdout`` attempted to format any additional args
passed to the function. The fix now only applies string formatting
if args are actually provided along with the output text.
.. change::
:tags: bug, autogenerate
:tickets: 512
Fixed issue where removed method ``union_update()`` was used when a
customized :class:`.MigrationScript` instance included entries in the
``.imports`` data member, raising an AttributeError.
.. changelog::
:version: 1.0.0
:released: July 13, 2018
.. change::
:tags: feature, general
:tickets: 491
For Alembic 1.0, Python 2.6 / 3.3 support is being dropped, allowing a
fixed setup.py to be built as well as universal wheels. Pull request
courtesy Hugo.
.. change::
:tags: feature, general
With the 1.0 release, Alembic's minimum SQLAlchemy support version
moves to 0.9.0, previously 0.7.9.
.. change::
:tags: bug, batch
:tickets: 502
Fixed issue in batch where dropping a primary key column, then adding it
back under the same name but without the primary_key flag, would not remove
it from the existing PrimaryKeyConstraint. If a new PrimaryKeyConstraint
is added, it is used as-is, as was the case before.
.. changelog::
:version: 0.9.10
:released: June 29, 2018
.. change::
:tags: bug, autogenerate
The "op.drop_constraint()" directive will now render using ``repr()`` for
the schema name, in the same way that "schema" renders for all the other op
directives. Pull request courtesy Denis Kataev.
.. change::
:tags: bug, autogenerate
:tickets: 494
Added basic capabilities for external dialects to support rendering of
"nested" types, like arrays, in a manner similar to that of the Postgresql
dialect.
.. change::
:tags: bug, autogenerate
Fixed issue where "autoincrement=True" would not render for a column that
specified it, since as of SQLAlchemy 1.1 this is no longer the default
value for "autoincrement". Note the behavior only takes effect against the
SQLAlchemy 1.1.0 and higher; for pre-1.1 SQLAlchemy, "autoincrement=True"
does not render as was the case before. Pull request courtesy Elad Almos.
.. changelog::
:version: 0.9.9
:released: March 22, 2018
.. change::
:tags: feature, commands
:tickets: 481
Added new flag ``--indicate-current`` to the ``alembic history`` command.
When listing versions, it will include the token "(current)" to indicate
the given version is a current head in the target database. Pull request
courtesy Kazutaka Mise.
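For API use, the same behavior would be available as a keyword argument
on :func:`.command.history`; a sketch, with an illustrative config path::

    from alembic import command
    from alembic.config import Config

    cfg = Config("alembic.ini")  # illustrative path
    command.history(cfg, indicate_current=True)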
.. change::
:tags: bug, autogenerate, mysql
:tickets: 455
The fix for :ticket:`455` in version 0.9.6 involving MySQL server default
comparison was entirely non-functional, as the test itself was also broken
and didn't reveal that it wasn't working. The regular expression to compare
server default values like CURRENT_TIMESTAMP to current_timestamp() is
repaired.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 483
Fixed bug where MySQL server default comparisons were basically not working
at all due to incorrect regexp added in :ticket:`455`. Also accommodates
for MariaDB 10.2 quoting differences in reporting integer based server
defaults.
.. change::
:tags: bug, operations, mysql
:tickets: 487
Fixed bug in ``op.drop_constraint()`` for MySQL where
quoting rules would not be applied to the constraint name.
.. changelog::
:version: 0.9.8
:released: February 16, 2018
.. change::
:tags: bug, runtime
:tickets: 482
Fixed bug where the :meth:`.Script.as_revision_number` method
did not accommodate for the 'heads' identifier, which in turn
caused the :meth:`.EnvironmentContext.get_head_revisions`
and :meth:`.EnvironmentContext.get_revision_argument` methods
to be not usable when multiple heads were present.
The :meth:`.EnvironmentContext.get_head_revisions` method returns
a tuple in all cases as documented.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 478
Fixed bug where autogenerate of :class:`.ExcludeConstraint`
would render a raw quoted name for a Column that has case-sensitive
characters, which when invoked as an inline member of the Table
would produce a stack trace that the quoted name is not found.
An incoming Column object is now rendered as ``sa.column('name')``.
.. change::
:tags: bug, autogenerate
:tickets: 468
Fixed bug where the indexes would not be included in a
migration that was dropping the owning table. The fix
now will also emit DROP INDEX for the indexes ahead of time,
but more importantly will include CREATE INDEX in the
downgrade migration.
.. change::
:tags: bug, postgresql
:tickets: 480
Fixed the autogenerate of the module prefix
when rendering the text_type parameter of
postgresql.HSTORE, in much the same way that
we do for ARRAY's type and JSON's text_type.
.. change::
:tags: bug, mysql
:tickets: 479
Added support for DROP CONSTRAINT to the MySQL Alembic
dialect to support MariaDB 10.2 which now has real
CHECK constraints. Note this change does **not**
add autogenerate support, only support for op.drop_constraint()
to work.
.. changelog::
:version: 0.9.7
:released: January 16, 2018
.. change::
:tags: bug, autogenerate
:tickets: 472
Fixed regression caused by :ticket:`421` which would
cause case-sensitive quoting rules to interfere with the
comparison logic for index names, thus causing indexes to show
as added for indexes that have case-sensitive names. Works with
SQLAlchemy 0.9 and later series.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 461
Fixed bug where autogenerate would produce a DROP statement for the index
implicitly created by a Postgresql EXCLUDE constraint, rather than skipping
it as is the case for indexes implicitly generated by unique constraints.
Makes use of SQLAlchemy 1.0.x's improved "duplicates index" metadata and
requires at least SQLAlchemy version 1.0.x to function correctly.
.. changelog::
:version: 0.9.6
:released: October 13, 2017
.. change::
:tags: bug, commands
:tickets: 458
Fixed a few Python3.6 deprecation warnings by replacing ``StopIteration``
with ``return``, as well as using ``getfullargspec()`` instead of
``getargspec()`` under Python 3.
.. change::
:tags: bug, commands
:tickets: 441
As a follow-up to :ticket:`441`, fixed in 0.9.5: we forgot to also filter
for the ``+`` sign in migration names, which also breaks due to the relative
migrations feature.
.. change::
:tags: bug, autogenerate
:tickets: 442
Fixed bug expanding upon the fix for
:ticket:`85` which adds the correct module import to the
"inner" type for an ``ARRAY`` type, the fix now accommodates for the
generic ``sqlalchemy.types.ARRAY`` type added in SQLAlchemy 1.1,
rendering the inner type correctly regardless of whether or not the
Postgresql dialect is present.
.. change::
:tags: bug, mysql
:tickets: 455
Fixed bug where server default comparison of CURRENT_TIMESTAMP would fail
on MariaDB 10.2 due to a change in how the function is
represented by the database during reflection.
.. change::
:tags: bug, autogenerate
Fixed bug where comparison of ``Numeric`` types would produce
a difference if the Python-side ``Numeric`` inadvertently specified
a non-None "scale" with a "precision" of None, even though this ``Numeric``
type will pass over the "scale" argument when rendering. Pull request
courtesy Ivan Mmelnychuk.
.. change::
:tags: feature, commands
:tickets: 447
The ``alembic history`` command will now make use of the revision
environment ``env.py`` unconditionally if the ``revision_environment``
configuration flag is set to True. Previously, the environment would
only be invoked if the history specification were against a database-stored
revision token.
.. change::
:tags: bug, batch
:tickets: 457
The name of the temporary table in batch mode is now generated
off of the original table name itself, to avoid conflicts for the
unusual case of multiple batch operations running against the same
database schema at the same time.
.. change::
:tags: bug, autogenerate
:tickets: 456
A :class:`.ForeignKeyConstraint` can now render correctly if the
``link_to_name`` flag is set, as it will not attempt to resolve the name
from a "key" in this case. Additionally, the constraint will render
as-is even if the remote column name isn't present on the referenced
remote table.
.. change::
:tags: bug, runtime, py3k
:tickets: 449
Reworked "sourceless" system to be fully capable of handling any
combination of: Python2/3x, pep3149 or not, PYTHONOPTIMIZE or not,
for locating and loading both env.py files as well as versioning files.
This includes: locating files inside of ``__pycache__`` as well as listing
out version files that might be only in ``versions/__pycache__``, deduplicating
version files that may be in ``versions/__pycache__`` and ``versions/``
at the same time, correctly looking for .pyc or .pyo files based on
if pep488 is present or not. The latest Python3x deprecation warnings
involving importlib are also corrected.
.. changelog::
:version: 0.9.5
:released: August 9, 2017
.. change::
:tags: bug, commands
:tickets: 441
A :class:`.CommandError` is raised if the "--rev-id" passed to the
:func:`.revision` command contains dashes or at-signs, as this interferes
with the command notation used to locate revisions.
.. change::
:tags: bug, postgresql
:tickets: 424
Added support for the dialect-specific keyword arguments
to :meth:`.Operations.drop_index`. This includes support for
``postgresql_concurrently`` and others.
.. change::
:tags: bug, commands
Fixed bug in timezone feature introduced in
:ticket:`425` when the creation
date in a revision file is calculated, to
accommodate for timezone names that contain
mixed-case characters in their name as opposed
to all uppercase. Pull request courtesy Nils
Philippsen.
.. changelog::
:version: 0.9.4
:released: July 31, 2017
.. change::
:tags: bug, runtime
Added an additional attribute to the new
:paramref:`.EnvironmentContext.configure.on_version_apply` API,
:attr:`.MigrationInfo.up_revision_ids`, to accommodate for the uncommon
case of the ``alembic stamp`` command being used to move from multiple
branches down to a common branchpoint; there will be multiple
"up" revisions in this one case.
.. changelog::
:version: 0.9.3
:released: July 6, 2017
.. change::
:tags: feature, runtime
Added a new callback hook
:paramref:`.EnvironmentContext.configure.on_version_apply`,
which allows user-defined code to be invoked each time an individual
upgrade, downgrade, or stamp operation proceeds against a database.
Pull request courtesy John Passaro.
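A sketch of an ``env.py`` hook, assuming the documented keyword-argument
signature for the callable; the reporting itself is illustrative::

    def report_version_apply(ctx, step, heads, run_args):
        # step is a MigrationInfo describing the individual operation
        print(
            "applying %s -> %s"
            % (step.down_revision_ids, step.up_revision_ids)
        )

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        on_version_apply=report_version_apply,
    )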
.. change:: 433
:tags: bug, autogenerate
:tickets: 433
Fixed bug where autogen comparison of a :class:`.Variant` datatype
would not compare to the dialect level type for the "default"
implementation of the :class:`.Variant`, returning the type as changed
between database and table metadata.
.. change:: 431
:tags: bug, tests
:tickets: 431
Fixed unit tests to run correctly under the SQLAlchemy 1.0.x series
prior to version 1.0.10 where a particular bug involving Postgresql
exclude constraints was fixed.
.. changelog::
:version: 0.9.2
:released: May 18, 2017
.. change:: 429
:tags: bug, mssql
:tickets: 429
Repaired :meth:`.Operations.rename_table` for SQL Server when the
target table is in a remote schema; the schema name is omitted from
the "new name" argument.
.. change:: 425
:tags: feature, commands
:tickets: 425
Added a new configuration option ``timezone``, a string timezone name
that will be applied to the create date timestamp rendered
inside the revision file as made available to the ``file_template`` used
to generate the revision filename. Note this change adds the
``python-dateutil`` package as a dependency.
.. change:: 421
:tags: bug, autogenerate
:tickets: 421
The autogenerate compare scheme now takes into account the name truncation
rules applied by SQLAlchemy's DDL compiler to the names of the
:class:`.Index` object, when these names are dynamically truncated
due to a too-long identifier name. As the identifier truncation is
deterministic, applying the same rule to the metadata name allows
correct comparison to the database-derived name.
.. change:: 419
:tags: bug, environment
:tickets: 419
A warning is emitted when an object that's not a
:class:`~sqlalchemy.engine.Connection` is passed to
:meth:`.EnvironmentContext.configure`. For the case of a
:class:`~sqlalchemy.engine.Engine` passed, the check for "in transaction"
introduced in version 0.9.0 has been relaxed to work in the case of an
attribute error, as some users appear to be passing an
:class:`~sqlalchemy.engine.Engine` and not a
:class:`~sqlalchemy.engine.Connection`.
.. changelog::
:version: 0.9.1
:released: March 1, 2017
.. change:: 417
:tags: bug, commands
:tickets: 417, 369
An adjustment to the bug fix for :ticket:`369` to accommodate for
env.py scripts that use an enclosing transaction distinct from the
one that the context provides, so that the check for "didn't commit
the transaction" doesn't trigger in this scenario.
.. changelog::
:version: 0.9.0
:released: February 28, 2017
.. change:: 38
:tags: feature, autogenerate
:tickets: 38
The :paramref:`.EnvironmentContext.configure.target_metadata` parameter
may now be optionally specified as a sequence of :class:`.MetaData`
objects instead of a single :class:`.MetaData` object. The
autogenerate process will process the sequence of :class:`.MetaData`
objects in order.
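In ``env.py`` this might look like the following sketch; the model
modules are illustrative::

    from myapp.core.models import metadata as core_metadata
    from myapp.billing.models import metadata as billing_metadata

    # processed in order during autogenerate
    target_metadata = [core_metadata, billing_metadata]

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
    )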
.. change:: 369
:tags: bug, commands
:tickets: 369
A :class:`.CommandError` is now raised when a migration file opens
a database transaction and does not close/commit/rollback, when
the backend database or environment options also specify transactional_ddl
is False. When transactional_ddl is not in use, Alembic doesn't
close any transaction so a transaction opened by a migration file
will cause the following migrations to fail to apply.
.. change:: 413
:tags: bug, autogenerate, mysql
:tickets: 413
The ``autoincrement=True`` flag is now rendered within the
:meth:`.Operations.alter_column` operation if the source column indicates
that this flag should be set to True. The behavior is sensitive to
the SQLAlchemy version in place, as the "auto" default option is new
in SQLAlchemy 1.1. When the source column indicates autoincrement
as True or "auto", the flag will render as True if the original column
contextually indicates that it should have "autoincrement" keywords,
and when the source column explicitly sets it to False, this is also
rendered. The behavior is intended to preserve the AUTO_INCREMENT flag
on MySQL as the column is fully recreated on this backend. Note that this
flag does **not** support alteration of a column's "autoincrement" status,
as this is not portable across backends.
.. change:: 411
:tags: bug, postgresql
:tickets: 411
Fixed bug where Postgresql JSON/JSONB types rendered on SQLAlchemy
1.1 would render the "astext_type" argument which defaults to
the ``Text()`` type without the module prefix, similarly to the
issue with ARRAY fixed in :ticket:`85`.
.. change:: 85
:tags: bug, postgresql
:tickets: 85
Fixed bug where Postgresql ARRAY type would not render the import prefix
for the inner type; additionally, user-defined renderers take place
for the inner type as well as the outer type. Pull request courtesy
Paul Brackin.
.. change:: process_revision_directives_command
:tags: feature, autogenerate
Added a keyword argument ``process_revision_directives`` to the
:func:`.command.revision` API call. This function acts in the
same role as the environment-level
:paramref:`.EnvironmentContext.configure.process_revision_directives`,
and allows API use of the
command to drop in an ad-hoc directive process function. This
function can be used among other things to place a complete
:class:`.MigrationScript` structure in place.
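A sketch of API use; the hook here just tweaks the generated message,
and the config path is illustrative::

    from alembic import command
    from alembic.config import Config

    def process_revision_directives(context, revision, directives):
        script = directives[0]  # the MigrationScript structure
        script.message = "reviewed: " + (script.message or "")

    cfg = Config("alembic.ini")
    command.revision(
        cfg,
        message="add account table",
        autogenerate=True,
        process_revision_directives=process_revision_directives,
    )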
.. change:: 412
:tags: feature, postgresql
:tickets: 412
Added support for Postgresql EXCLUDE constraints, including the
operation directive :meth:`.Operations.create_exclude_constraint`
as well as autogenerate render support for the ``ExcludeConstraint``
object as present in a ``Table``. Autogenerate detection for an EXCLUDE
constraint added or removed to/from an existing table is **not**
implemented as the SQLAlchemy Postgresql dialect does not yet support
reflection of EXCLUDE constraints.
Additionally, unknown constraint types now warn when
encountered within an autogenerate action rather than raise.
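A sketch of the directive in a migration; the table, columns and
constraint name are illustrative::

    from alembic import op

    def upgrade():
        op.create_exclude_constraint(
            "room_booking_excl",
            "room_booking",
            ("room", "="),
            ("period", "&&"),
            using="gist",
        )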
.. change:: fk_schema_compare
:tags: bug, operations
Fixed bug in :func:`.ops.create_foreign_key` where the internal table
representation would not be created properly if the foreign key referred
to a table in a different schema of the same name. Pull request
courtesy Konstantin Lebedev.
.. changelog::
:version: 0.8.10
:released: January 17, 2017
.. change:: 406
:tags: bug, versioning
:tickets: 406
The alembic_version table, when initially created, now establishes a
primary key constraint on the "version_num" column, to suit database
engines that don't support tables without primary keys. This behavior
can be controlled using the parameter
:paramref:`.EnvironmentContext.configure.version_table_pk`. Note that
this change only applies to the initial creation of the alembic_version
table; it does not impact any existing alembic_version table already
present.
.. change:: 402
:tags: bug, batch
:tickets: 402
Fixed bug where doing ``batch_op.drop_constraint()`` against the
primary key constraint would fail to remove the "primary_key" flag
from the column, resulting in the constraint being recreated.
.. change:: update_uq_dedupe
:tags: bug, autogenerate, oracle
Adjusted the logic originally added for :ticket:`276` that detects MySQL
unique constraints which are actually unique indexes to be generalized
for any dialect that has this behavior, for SQLAlchemy version 1.0 and
greater. This is to allow for upcoming SQLAlchemy support for unique
constraint reflection for Oracle, which also has no dedicated concept of
"unique constraint" and instead establishes a unique index.
.. change:: 356
:tags: bug, versioning
:tickets: 356
Added a file ignore for Python files of the form ``.#<name>.py``,
which are generated by the Emacs editor. Pull request courtesy
Markus Mattes.
.. changelog::
:version: 0.8.9
:released: November 28, 2016
.. change:: 393
:tags: bug, autogenerate
:tickets: 393
Adjustment to the "please adjust!" comment in the script.py.mako
template so that the generated comment starts with a single pound
sign, appeasing flake8.
.. change::
:tags: bug, batch
:tickets: 391
Batch mode will not use CAST() to copy data if ``type_`` is given but
the basic type affinity matches that of the existing type. This is to
avoid SQLite's CAST of TIMESTAMP, which results in truncation of the
data, in those cases where the user needs to add a redundant ``type_`` for
other reasons.
.. change::
:tags: bug, autogenerate
:tickets: 393
Continued pep8 improvements by adding appropriate whitespace in
the base template for generated migrations. Pull request courtesy
Markus Mattes.
.. change::
:tags: bug, revisioning
Added an additional check when reading in revision files to detect
if the same file is being read twice; this can occur if the same directory
or a symlink equivalent is present more than once in version_locations.
A warning is now emitted and the file is skipped. Pull request courtesy
Jiri Kuncar.
.. change::
:tags: bug, autogenerate
:tickets: 395
Fixed bug where usage of a custom TypeDecorator which returns a
per-dialect type via :meth:`.TypeDecorator.load_dialect_impl` that differs
significantly from the default "impl" for the type decorator would fail
to compare correctly during autogenerate.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 392
Fixed bug in Postgresql "functional index skip" behavior where a
functional index that ended in ASC/DESC wouldn't be detected as something
we can't compare in autogenerate, leading to duplicate definitions
in autogenerated files.
.. change::
:tags: bug, versioning
Fixed bug where the "base" specifier, as in "base:head", could not
be used explicitly when ``--sql`` mode was present.
.. changelog::
:version: 0.8.8
:released: September 12, 2016
.. change::
:tags: autogenerate
The imports in the default script.py.mako are now at the top
so that flake8 editors don't complain by default. PR courtesy
Guilherme Mansur.
.. change::
:tags: feature, operations, postgresql
:tickets: 292
Added support for the USING clause to the ALTER COLUMN operation
for Postgresql. Support is via the
:paramref:`.op.alter_column.postgresql_using`
parameter. Pull request courtesy Frazer McLean.
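A sketch of the new parameter in use, converting a string column to a
timestamp; the table and column names are illustrative::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.alter_column(
            "account",
            "created_at",
            type_=sa.DateTime(),
            postgresql_using="created_at::timestamp",
        )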
.. change::
:tags: feature, autogenerate
Autogenerate with type comparison enabled will pick up on the timezone
setting changing between DateTime types. Pull request courtesy
David Szotten.
.. changelog::
:version: 0.8.7
:released: July 26, 2016
.. change::
:tags: bug, versioning
:tickets: 336
Fixed bug where upgrading to the head of a branch which is already
present would fail, only if that head were also the dependency
of a different branch that is also upgraded, as the revision system
would see this as trying to go in the wrong direction. The check
here has been refined to distinguish between same-branch revisions
out of order vs. movement along sibling branches.
.. change::
:tags: bug, versioning
:tickets: 379
Adjusted the version traversal on downgrade
such that we can downgrade to a version that is a dependency for
a version in a different branch, *without* needing to remove that
dependent version as well. Previously, the target version would be
seen as a "merge point" for it's normal up-revision as well as the
dependency. This integrates with the changes for :ticket:`377`
and :ticket:`378` to improve treatment of branches with dependencies
overall.
.. change::
:tags: bug, versioning
:tickets: 377
Fixed bug where a downgrade to a version that is also a dependency
to a different branch would fail, as the system attempted to treat
this as an "unmerge" of a merge point, when in fact it doesn't have
the other side of the merge point available for update.
.. change::
:tags: bug, versioning
:tickets: 378
Fixed bug where the "alembic current" command wouldn't show a revision
as a current head if it were also a dependency of a version in a
different branch that's also applied. Extra logic is added to
extract "implied" versions of different branches from the top-level
versions listed in the alembic_version table.
.. change::
:tags: bug, versioning
Fixed bug where a repr() or str() of a Script object would fail
if the script had multiple dependencies.
.. change::
:tags: bug, autogenerate
Fixed bug in autogen where if the DB connection sends the default
schema as "None", this "None" would be removed from the list of
schemas to check if include_schemas were set. This could possibly
impact using include_schemas with SQLite.
.. change::
:tags: bug, batch
Small adjustment made to the batch handling for reflected CHECK
constraints to accommodate for SQLAlchemy 1.1 now reflecting these.
Batch mode still does not support CHECK constraints from the reflected
table as these can't be easily differentiated from the ones created
by types such as Boolean.
.. changelog::
:version: 0.8.6
:released: April 14, 2016
.. change::
:tags: bug, commands
:tickets: 367
Errors which occur within the Mako render step are now intercepted
and raised as CommandErrors like other failure cases; the Mako
exception itself is written using template-line formatting to
a temporary file which is named in the exception message.
.. change::
:tags: bug, postgresql
:tickets: 365
Added a fix to Postgresql server default comparison which first checks
if the text of the default is identical to the original, before attempting
to actually run the default. This accommodates for default-generation
functions that generate a new value each time such as a uuid function.
.. change::
:tags: bug, batch
:tickets: 361
Fixed bug introduced by the fix for :ticket:`338` in version 0.8.4
where a server default could no longer be dropped in batch mode.
Pull request courtesy Martin Domke.
.. change::
:tags: bug, batch, mssql
Fixed bug where SQL Server arguments for drop_column() would not
be propagated when running under a batch block. Pull request
courtesy Michal Petrucha.
.. changelog::
:version: 0.8.5
:released: March 9, 2016
.. change::
:tags: bug, autogenerate
:tickets: 335
Fixed bug where the columns rendered in a ``PrimaryKeyConstraint``
in autogenerate would inappropriately render the "key" of the
column, not the name. Pull request courtesy Jesse Dhillon.
.. change::
:tags: bug, batch
:tickets: 354
Repaired batch migration support for "schema" types which generate
constraints, in particular the ``Boolean`` datatype which generates
a CHECK constraint. Previously, an alter column operation with this
type would fail to correctly accommodate for the CHECK constraint
on change both from and to this type. In the former case the operation
would fail entirely, in the latter, the CHECK constraint would
not get generated. Both of these issues are repaired.
.. change::
:tags: bug, mysql
:tickets: 355
Changing a schema type such as ``Boolean`` to a non-schema type would
emit a drop constraint operation which emits ``NotImplementedError`` for
the MySQL dialect. This drop constraint operation is now skipped when
the constraint originates from a schema type.
.. changelog::
:version: 0.8.4
:released: December 15, 2015
.. change::
:tags: feature, versioning
A major improvement to the hash id generation function, which for some
reason used an awkward arithmetic formula against uuid4() that produced
values that tended to start with the digits 1-4. Replaced with a
simple substring approach which provides an even distribution. Pull
request courtesy Antti Haapala.
.. change::
:tags: feature, autogenerate
Added an autogenerate renderer for the :class:`.ExecuteSQLOp` operation
object; only renders if given a plain SQL string, otherwise raises
NotImplementedError. Can be of help with custom autogenerate
sequences that includes straight SQL execution. Pull request courtesy
Jacob Magnusson.
.. change::
:tags: bug, batch
:tickets: 345
Batch mode generates a FOREIGN KEY constraint that is self-referential
using the ultimate table name, rather than ``_alembic_batch_temp``.
When the table is renamed from ``_alembic_batch_temp`` back to the
original name, the FK now points to the right name. This
will **not** work if referential integrity is being enforced (eg. SQLite
"PRAGMA FOREIGN_KEYS=ON") since the original table is dropped and
the new table then renamed to that name, however this is now consistent
with how foreign key constraints on **other** tables already operate
with batch mode; these don't support batch mode if referential integrity
is enabled in any case.
.. change::
:tags: bug, autogenerate
:tickets: 341
Added a type-level comparator that distinguishes :class:`.Integer`,
:class:`.BigInteger`, and :class:`.SmallInteger` types and
dialect-specific types; these all have "Integer" affinity so previously
all compared as the same.
.. change::
:tags: bug, batch
:tickets: 338
Fixed bug where the ``server_default`` parameter of ``alter_column()``
would not function correctly in batch mode.
.. change::
:tags: bug, autogenerate
:tickets: 337
Adjusted the rendering for index expressions such that a :class:`.Column`
object present in the source :class:`.Index` will not be rendered
as table-qualified; e.g. the column name will be rendered alone.
Table-qualified names here were failing on systems such as Postgresql.
.. changelog::
:version: 0.8.3
:released: October 16, 2015
.. change::
:tags: bug, autogenerate
:tickets: 332
Fixed an 0.8 regression whereby the "imports" dictionary member of
the autogen context was removed; this collection is documented in the
"render custom type" documentation as a place to add new imports.
The member is now known as
:attr:`.AutogenContext.imports` and the documentation is repaired.
.. change::
:tags: bug, batch
:tickets: 333
Fixed bug in batch mode where a table that had pre-existing indexes
would create the same index on the new table with the same name,
which on SQLite produces a naming conflict as index names are in a
global namespace on that backend. Batch mode now defers the production
of both existing and new indexes until after the entire table transfer
operation is complete, which also means those indexes no longer take
effect during the INSERT from SELECT section as well; the indexes
are applied in a single step afterwards.
.. change::
:tags: bug, tests
Added "pytest-xdist" as a tox dependency, so that the -n flag
in the test command works if this is not already installed.
Pull request courtesy Julien Danjou.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 324
Fixed issue in PG server default comparison where model-side defaults
configured with Python unicode literals would leak the "u" character
from a ``repr()`` into the SQL used for comparison, creating an invalid
SQL expression, as the server-side comparison feature in PG currently
repurposes the autogenerate Python rendering feature to get a quoted
version of a plain string default.
.. changelog::
:version: 0.8.2
:released: August 25, 2015
.. change::
:tags: bug, autogenerate
:tickets: 321
Added workaround in new foreign key option detection feature for
MySQL's consideration of the "RESTRICT" option being the default,
for which no value is reported from the database; the MySQL impl now
corrects for when the model reports RESTRICT but the database reports
nothing. A similar rule is in the default FK comparison to accommodate
for the default "NO ACTION" setting being present in the model but not
necessarily reported by the database, or vice versa.
.. changelog::
:version: 0.8.1
:released: August 22, 2015
.. change::
:tags: feature, autogenerate
A custom :paramref:`.EnvironmentContext.configure.process_revision_directives`
hook can now generate op directives within the :class:`.UpgradeOps`
and :class:`.DowngradeOps` containers that will be generated as Python
code even when the ``--autogenerate`` flag is False; provided that
``revision_environment=True``, the full render operation will be run
even in "offline" mode.
.. change::
:tags: bug, autogenerate
Repaired the render operation for the :class:`.ops.AlterColumnOp` object
to succeed when the "existing_type" field was not present.
.. change::
:tags: bug, autogenerate
:tickets: 318
Fixed a regression in 0.8 whereby the "multidb" environment template
failed to produce independent migration script segments for the
output template. This was due to the reorganization of the script
rendering system for 0.8. To accommodate this change, the
:class:`.MigrationScript` structure will in the case of multiple
calls to :meth:`.MigrationContext.run_migrations` produce lists
for the :attr:`.MigrationScript.upgrade_ops` and
:attr:`.MigrationScript.downgrade_ops` attributes; each :class:`.UpgradeOps`
and :class:`.DowngradeOps` instance keeps track of its own
``upgrade_token`` and ``downgrade_token``, and each are rendered
individually.
.. seealso::
:ref:`autogen_customizing_multiengine_revision` - additional detail
on the workings of the
:paramref:`.EnvironmentContext.configure.process_revision_directives`
parameter when multiple calls to :meth:`.MigrationContext.run_migrations`
are made.
.. change::
:tags: feature, autogenerate
:tickets: 317
Implemented support for autogenerate detection of changes in the
``ondelete``, ``onupdate``, ``initially`` and ``deferrable``
attributes of :class:`.ForeignKeyConstraint` objects on
SQLAlchemy backends that support these on reflection
(as of SQLAlchemy 1.0.8 currently Postgresql for all four,
MySQL for ``ondelete`` and ``onupdate`` only). A constraint object
that modifies these values will be reported as a "diff" and come out
as a drop/create of the constraint with the modified values.
The fields are ignored for backends which don't reflect these
attributes (as of SQLA 1.0.8 this includes SQLite, Oracle, SQL Server,
others).
.. changelog::
:version: 0.8.0
:released: August 12, 2015
.. change::
:tags: bug, batch
:tickets: 315
Fixed bug in batch mode where the ``batch_op.create_foreign_key()``
directive would be incorrectly rendered with the source table and
schema names in the argument list.
.. change::
:tags: feature, commands
Added new command ``alembic edit``. This command takes the same
arguments as ``alembic show``, however runs the target script
file within $EDITOR. Makes use of the ``python-editor`` library
in order to facilitate the handling of $EDITOR with reasonable
default behaviors across platforms. Pull request courtesy
Michel Albert.
.. change::
:tags: feature, commands
:tickets: 311
Added new multiple-capable argument ``--depends-on`` to the
``alembic revision`` command, allowing ``depends_on`` to be
established at the command line level rather than having to edit
the file after the fact. ``depends_on`` identifiers may also be
specified as branch names at the command line or directly within
the migration file. The values may be specified as partial
revision numbers from the command line which will be resolved to
full revision numbers in the output file.
.. change::
:tags: change, operations
A range of positional argument names have been changed to be
clearer and more consistent across methods within the
:class:`.Operations` namespace. The most prevalent form of name change
is that the descriptive names ``constraint_name`` and ``table_name``
are now used where previously the name ``name`` would be used.
This is in support of the newly modularized and extensible system of
operation objects in :mod:`alembic.operations.ops`.
An argument translation layer is in place
across the ``alembic.op`` namespace that will ensure that named
argument calling styles that use the old names will continue to
function by transparently translating to the new names,
also emitting a warning. This, along with the fact that these
arguments are positional in any case and aren't normally
passed with an explicit name, should ensure that the
overwhelming majority of applications should be unaffected by this
change. The *only* applications that are impacted are those that:
1. use the :class:`.Operations` object directly in some way, rather
than calling upon the ``alembic.op`` namespace, and
2. invoke the methods on :class:`.Operations` using named keyword
arguments for positional arguments like ``table_name``,
``constraint_name``, etc., which commonly were named ``name``
as of 0.7.6.
3. any application that is using named keyword arguments in place
of positional arguments for the recently added
:class:`.BatchOperations` object may also be affected.
The naming changes are documented as "versionchanged" for 0.8.0:
* :meth:`.BatchOperations.create_check_constraint`
* :meth:`.BatchOperations.create_foreign_key`
* :meth:`.BatchOperations.create_index`
* :meth:`.BatchOperations.create_unique_constraint`
* :meth:`.BatchOperations.drop_constraint`
* :meth:`.BatchOperations.drop_index`
* :meth:`.Operations.create_check_constraint`
* :meth:`.Operations.create_foreign_key`
* :meth:`.Operations.create_primary_key`
* :meth:`.Operations.create_index`
* :meth:`.Operations.create_table`
* :meth:`.Operations.create_unique_constraint`
* :meth:`.Operations.drop_constraint`
* :meth:`.Operations.drop_index`
* :meth:`.Operations.drop_table`
.. change::
:tags: feature, tests
The default test runner via "python setup.py test" is now py.test.
nose still works via run_tests.py.
.. change::
:tags: feature, operations
:tickets: 302
The internal system for Alembic operations has been reworked to now
build upon an extensible system of operation objects. New operations
can be added to the ``op.`` namespace, including that they are
available in custom autogenerate schemes.
.. seealso::
:ref:`operation_plugins`
.. change::
:tags: feature, autogenerate
:tickets: 301, 306
The internal system for autogenerate has been reworked to build upon
the extensible system of operation objects present in
:ticket:`302`. As part of this change, autogenerate now produces
a full object graph representing a list of migration scripts to
be written as well as operation objects that will render all the
Python code within them; a new hook
:paramref:`.EnvironmentContext.configure.process_revision_directives`
allows end-user code to fully customize what autogenerate will do,
including not just full manipulation of the Python steps to take
but also what file or files will be written and where. Additionally,
autogenerate is now extensible as far as database objects compared
and rendered into scripts; any new operation directive can also be
registered into a series of hooks that allow custom database/model
comparison functions to run as well as to render new operation
directives into autogenerate scripts.
.. seealso::
:ref:`alembic.autogenerate.toplevel`
.. change::
:tags: bug, versioning
:tickets: 314
Fixed bug where in the erroneous case that alembic_version contains
duplicate revisions, some commands would fail to process the
version history correctly and end up with a KeyError. The fix
allows the versioning logic to proceed, however a clear error is
emitted later when attempting to update the alembic_version table.
.. changelog::
:version: 0.7.7
:released: July 22, 2015
.. change::
:tags: bug, versioning
:tickets: 310
Fixed critical issue where a complex series of branches/merges would
bog down the iteration algorithm working over redundant nodes for
millions of cycles. An internal adjustment has been
made so that duplicate nodes are skipped within this iteration.
.. change::
:tags: feature, batch
:tickets: 305
Implemented support for :meth:`.BatchOperations.create_primary_key`
and :meth:`.BatchOperations.create_check_constraint`. Additionally,
table keyword arguments are copied from the original reflected table,
such as the "mysql_engine" keyword argument.
.. change::
:tags: bug, environment
:tickets: 300
The :meth:`.MigrationContext.stamp` method, added as part of the
versioning refactor in 0.7 as a more granular version of
:func:`.command.stamp`, now includes the "create the alembic_version
table if not present" step in the same way as the command version,
which was previously omitted.
.. change::
:tags: bug, autogenerate
:tickets: 298
Fixed bug where foreign key options including "onupdate",
"ondelete" would not render within the ``op.create_foreign_key()``
directive, even though they render within a full
``ForeignKeyConstraint`` directive.
.. change::
:tags: bug, tests
Repaired warnings that occur when running unit tests against
SQLAlchemy 1.0.5 or greater involving the "legacy_schema_aliasing"
flag.
.. changelog::
:version: 0.7.6
:released: May 5, 2015
.. change::
:tags: feature, versioning
:tickets: 297
Fixed bug where multiple mergepoints that all
have the identical set of ancestor revisions would fail to be
upgradable, producing an assertion failure. Merge points were
previously assumed to always require at least an UPDATE in
alembic_revision from one of the previous revs to the new one,
however in this case, if one of the mergepoints has already
been reached, the remaining mergepoints have no row to UPDATE therefore
they must do an INSERT of their target version.
.. change::
:tags: feature, autogenerate
:tickets: 296
Added support for type comparison functions to be not just per
environment, but also present on the custom types themselves, by
supplying a method ``compare_against_backend``.
Added a new documentation section :ref:`compare_types` describing
type comparison fully.
.. change::
:tags: feature, operations
:tickets: 255
Added a new option
:paramref:`.EnvironmentContext.configure.literal_binds`, which
will pass the ``literal_binds`` flag into the compilation of SQL
constructs when using "offline" mode. This has the effect that
SQL objects like inserts, updates, deletes as well as textual
statements sent using ``text()`` will be compiled such that the dialect
will attempt to render literal values "inline" automatically.
Only a subset of types is typically supported; the
:meth:`.Operations.inline_literal` construct remains as the construct
used to force a specific literal representation of a value.
The :paramref:`.EnvironmentContext.configure.literal_binds` flag
is added to the "offline" section of the ``env.py`` files generated
in new environments.
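A sketch of the generated offline section with the new flag in place,
using the conventional ``env.py`` names::

    def run_migrations_offline():
        url = config.get_main_option("sqlalchemy.url")
        context.configure(
            url=url,
            target_metadata=target_metadata,
            literal_binds=True,
        )
        with context.begin_transaction():
            context.run_migrations()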
.. change::
:tags: bug, batch
:tickets: 289
Fully implemented the
:paramref:`~.Operations.batch_alter_table.copy_from` parameter for
batch mode, which previously was not functioning. This allows
"batch mode" to be usable in conjunction with ``--sql``.
.. change::
:tags: bug, batch
:tickets: 287
Repaired support for the :meth:`.BatchOperations.create_index`
directive, which was mis-named internally such that the operation
within a batch context could not proceed. The create index
operation will proceed as part of a larger "batch table recreate"
operation only if
:paramref:`~.Operations.batch_alter_table.recreate` is set to
"always", or if the batch operation includes other instructions that
require a table recreate.
.. changelog::
:version: 0.7.5
:released: March 19, 2015
.. change::
:tags: bug, autogenerate
:tickets: 266
The ``--autogenerate`` option is not valid when used in conjunction
with "offline" mode, e.g. ``--sql``. This now raises a ``CommandError``,
rather than failing more deeply later on. Pull request courtesy
Johannes Erdfelt.
.. change::
:tags: bug, operations, mssql
:tickets: 284
Fixed bug where the mssql DROP COLUMN directive failed to include
modifiers such as "schema" when emitting the DDL.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 282
Postgresql "functional" indexes are necessarily skipped from the
autogenerate process, as the SQLAlchemy backend currently does not
support reflection of these structures. A warning is emitted
both from the SQLAlchemy backend as well as from the Alembic
backend for Postgresql when such an index is detected.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 276
Fixed bug where MySQL backend would report dropped unique indexes
and/or constraints as both at the same time. This is because
MySQL doesn't actually have a "unique constraint" construct that
reports differently than a "unique index", so it is present in both
lists. The net effect though is that the MySQL backend will report
a dropped unique index/constraint as an index in cases where the object
was first created as a unique constraint, if no other information
is available to make the decision. This differs from other backends
like Postgresql which can report on unique constraints and
unique indexes separately.
.. change::
:tags: bug, commands
:tickets: 269
Fixed bug where using a partial revision identifier as the
"starting revision" in ``--sql`` mode in a downgrade operation
would fail to resolve properly.
As a side effect of this change, the
:meth:`.EnvironmentContext.get_starting_revision_argument`
method will return the "starting" revision in its originally-
given "partial" form in all cases, whereas previously when
running within the :meth:`.command.stamp` command, it would have
been resolved to a full number before passing it to the
:class:`.EnvironmentContext`. The resolution of this value to
a real revision number has basically been moved to a more fundamental
level within the offline migration process.
.. change::
:tags: feature, commands
Added a new feature :attr:`.Config.attributes`, to help with the use
case of sharing state such as engines and connections on the outside
with a series of Alembic API calls; also added a new cookbook section
to describe this simple but pretty important use case.
.. seealso::
:ref:`connection_sharing`
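A minimal sketch of the pattern, assuming an ``env.py`` that checks
``config.attributes`` for a connection; URL and path are illustrative::

    from sqlalchemy import create_engine
    from alembic import command
    from alembic.config import Config

    engine = create_engine("postgresql://scott:tiger@localhost/test")

    with engine.begin() as connection:
        cfg = Config("alembic.ini")
        cfg.attributes["connection"] = connection
        command.upgrade(cfg, "head")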
.. change::
:tags: feature, environment
The format of the default ``env.py`` script has been refined a bit;
it now uses context managers not only for the scope of the transaction,
but also for connectivity from the starting engine. The engine is also
now called a "connectable" in support of the use case of an external
connection being passed in.
.. change::
:tags: feature, versioning
:tickets: 267
Added support for "alembic stamp" to work when given "heads" as an
argument, when multiple heads are present.
.. changelog::
:version: 0.7.4
:released: January 12, 2015
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 241
Repaired issue where a server default specified without ``text()``
that represented a numeric or floating point (e.g. with decimal places)
value would fail in the Postgresql-specific check for "compare server
default"; as PG accepts the value with quotes in the table specification,
it's still valid. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug, autogenerate
:tickets: 259
The rendering of a :class:`~sqlalchemy.schema.ForeignKeyConstraint`
will now ensure that the names of the source and target columns are
the database-side name of each column, and not the value of the
``.key`` attribute as may be set only on the Python side.
This is because Alembic generates the DDL for constraints
as standalone objects without the need to actually refer to an in-Python
:class:`~sqlalchemy.schema.Table` object, so there's no step that
would resolve these Python-only key names to database column names.
.. change::
:tags: bug, autogenerate
:tickets: 260
Fixed bug in foreign key autogenerate where if the in-Python table
used custom column keys (e.g. using the ``key='foo'`` kwarg to
``Column``), the comparison of existing foreign keys to those specified
in the metadata would fail, as the reflected table would not have
these keys available which to match up. Foreign key comparison for
autogenerate now ensures it's looking at the database-side names
of the columns in all cases; this matches the same functionality
within unique constraints and indexes.
.. change::
:tags: bug, autogenerate
:tickets: 261
Fixed issue in autogenerate type rendering where types that belong
to modules that have the name "sqlalchemy" in them would be mistaken
as being part of the ``sqlalchemy.`` namespace. Pull request courtesy
Bartosz Burclaf.
.. changelog::
:version: 0.7.3
:released: December 30, 2014
.. change::
:tags: bug, versioning
:tickets: 258
Fixed regression in new versioning system where upgrade / history
operation would fail on AttributeError if no version files were
present at all.
.. changelog::
:version: 0.7.2
:released: December 18, 2014
.. change::
:tags: bug, sqlite, autogenerate
Adjusted the SQLite backend regarding autogen of unique constraints
to work fully with the current SQLAlchemy 1.0, which now will report
on UNIQUE constraints that have no name.
.. change::
:tags: bug, batch
:tickets: 254
Fixed bug in batch where if the target table contained multiple
foreign keys to the same target table, the batch mechanics would
fail with a "table already exists" error. Thanks for the help
on this from Lucas Kahlert.
.. change::
:tags: bug, mysql
:tickets: 251
Fixed an issue where the MySQL routine to skip foreign-key-implicit
indexes would also catch unnamed unique indexes, as they would be
named after the column and look like the FK indexes. Pull request
courtesy Johannes Erdfelt.
.. change::
:tags: bug, mssql, oracle
:tickets: 253
Repaired a regression in both the MSSQL and Oracle dialects whereby
the overridden ``_exec()`` method failed to return a value, as is
needed now in the 0.7 series.
.. changelog::
:version: 0.7.1
:released: December 3, 2014
.. change::
:tags: bug, batch
The ``render_as_batch`` flag was inadvertently hardcoded to ``True``,
so all autogenerates were spitting out batch mode; this has been
fixed so that batch mode again takes effect only when selected in env.py.
.. change::
:tags: feature, autogenerate
:tickets: 178
Support for autogenerate of FOREIGN KEY constraints has been added.
These are delivered within the autogenerate process in the same
manner as UNIQUE constraints, including ``include_object`` support.
Big thanks to Ann Kamyshnikova for doing the heavy lifting here.
.. change::
:tags: feature, batch
Added :paramref:`~.Operations.batch_alter_table.naming_convention`
argument to :meth:`.Operations.batch_alter_table`, as this is necessary
in order to drop foreign key constraints; these are often unnamed
on the target database, and in the case that they are named, SQLAlchemy
is as of the 0.9 series not including these names yet.
.. seealso::
:ref:`dropping_sqlite_foreign_keys`
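A sketch of dropping an unnamed SQLite foreign key by supplying a
convention so a name can be generated; names are illustrative::

    from alembic import op

    naming_convention = {
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    }

    def upgrade():
        with op.batch_alter_table(
            "address", naming_convention=naming_convention
        ) as batch_op:
            batch_op.drop_constraint(
                "fk_address_user_id_user", type_="foreignkey"
            )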
.. change::
:tags: bug, batch
Fixed bug where the "source_schema" argument was not correctly passed
when calling :meth:`.BatchOperations.create_foreign_key`. Pull
request courtesy Malte Marquarding.
.. change::
:tags: bug, batch
:tickets: 249
Repaired the inspection, copying and rendering of CHECK constraints
and so-called "schema" types such as Boolean, Enum within the batch
copy system; the CHECK constraint will not be "doubled" when the table is
copied, and additionally the inspection of the CHECK constraint for
its member columns will no longer fail with an attribute error.
.. change::
:tags: feature, batch
Added two new arguments
:paramref:`.Operations.batch_alter_table.reflect_args`
and :paramref:`.Operations.batch_alter_table.reflect_kwargs`, so that
arguments may be passed directly to suit the
:class:`~.sqlalchemy.schema.Table`
object that will be reflected.
.. seealso::
:ref:`batch_controlling_table_reflection`
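A sketch overriding how one column is reflected during the batch copy;
the ``Boolean`` override is illustrative::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        with op.batch_alter_table(
            "account",
            reflect_args=[
                sa.Column("flag", sa.Boolean(create_constraint=False))
            ],
        ) as batch_op:
            batch_op.alter_column("flag", new_column_name="is_active")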
.. changelog::
:version: 0.7.0
:released: November 24, 2014
.. change::
:tags: feature, versioning
:tickets: 167
The "multiple heads / branches" feature has now landed. This is
by far the most significant change Alembic has seen since its inception;
while the workflow of most commands hasn't changed, and the format
of version files and the ``alembic_version`` table are unchanged as well,
a new suite of features opens up in the case where multiple version
files refer to the same parent, or to the "base". Merging of
branches, operating across distinct named heads, and multiple
independent bases are now all supported. The feature incurs radical
changes to the internals of versioning and traversal, and should be
treated as "beta mode" for the next several subsequent releases
within 0.7.
.. seealso::
:ref:`branches`
.. change::
:tags: feature, versioning
:tickets: 124
In conjunction with support for multiple independent bases, the
specific version directories are now also configurable to include
multiple, user-defined directories. When multiple directories exist,
the creation of a revision file with no down revision requires
that the starting directory is indicated; the creation of subsequent
revisions along that lineage will then automatically use that
directory for new files.
.. seealso::
:ref:`multiple_version_directories`
.. change::
:tags: feature, operations, sqlite
:tickets: 21
Added "move and copy" workflow, where a table to be altered is copied to
a new one with the new structure and the old one dropped, is now
implemented for SQLite as well as all database backends in general
using the new :meth:`.Operations.batch_alter_table` system. This
directive provides a table-specific operations context which gathers
column- and constraint-level mutations specific to that table, and
at the end of the context creates a new table combining the structure
of the old one with the given changes, copies data from old table to new,
and finally drops the old table,
renaming the new one to the existing name. This is required for
fully featured SQLite migrations, as SQLite has very little support for the
traditional ALTER directive. The batch directive
is intended to produce code that is still compatible with other databases,
in that the "move and copy" process only occurs for SQLite by default,
while still providing some level of sanity to SQLite's
requirement by allowing multiple table mutation operations to
proceed within one "move and copy" as well as providing explicit
control over when this operation actually occurs. The "move and copy"
feature may be optionally applied to other backends as well, however
dealing with referential integrity constraints from other tables must
still be handled explicitly.
.. seealso::
:ref:`batch_migrations`
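A minimal sketch of the directive; the column changes are illustrative,
and on SQLite the "move and copy" recreate happens behind the scenes::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        with op.batch_alter_table("account") as batch_op:
            batch_op.add_column(sa.Column("last_login", sa.DateTime()))
            batch_op.drop_column("legacy_flag")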
.. change::
:tags: feature, commands
Relative revision identifiers as used with ``alembic upgrade``,
``alembic downgrade`` and ``alembic history`` can be combined with
specific revisions as well, e.g. ``alembic upgrade ae10+3``, to produce
a migration target relative to the given exact version.
.. change::
:tags: bug, commands
:tickets: 248
The ``alembic revision`` command accepts the ``--sql`` option to
suit some very obscure use case where the ``revision_environment``
flag is set up, so that ``env.py`` is run when ``alembic revision``
is run even though autogenerate isn't specified. As this flag is
otherwise confusing, error messages are now raised if
``alembic revision`` is invoked with both ``--sql`` and
``--autogenerate`` or with ``--sql`` without
``revision_environment`` being set.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 247
Added a rule for Postgresql to not render a "drop unique" and "drop index"
given the same name; for now it is assumed that the "index" is the
implicit one Postgresql generates. Future integration with
new SQLAlchemy 1.0 features will improve this to be more
resilient.
.. change::
:tags: bug, autogenerate
:tickets: 247
A change in the ordering when columns and constraints are dropped;
autogenerate will now place the "drop constraint" calls *before*
the "drop column" calls, so that columns involved in those constraints
still exist when the constraint is dropped.
.. change::
:tags: feature, commands
New commands added: ``alembic show``, ``alembic heads`` and
``alembic merge``. Also, a new option ``--verbose`` has been
added to several informational commands, such as ``alembic history``,
``alembic current``, ``alembic branches``, and ``alembic heads``.
``alembic revision`` also contains several new options used
within the new branch management system. The output of commands has
been altered in many cases to support new fields and attributes;
the ``history`` command in particular now returns its "verbose" output
only if ``--verbose`` is sent; without this flag it reverts to its
older behavior of short line items (which was never changed in the docs).
.. change::
:tags: changed, commands
The ``--head_only`` option to the ``alembic current`` command is
deprecated; the ``current`` command now lists just the version numbers
alone by default; use ``--verbose`` to get at additional output.
.. change::
:tags: feature, config
Added new argument :paramref:`.Config.config_args`, allows a dictionary
of replacement variables to be passed which will serve as substitution
values when an API-produced :class:`.Config` consumes the ``.ini``
file. Pull request courtesy Noufal Ibrahim.
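A sketch of API use, assuming an ``alembic.ini`` that refers to
``%(database_url)s``; the values are illustrative::

    from alembic.config import Config

    cfg = Config(
        "alembic.ini",
        config_args={"database_url": "sqlite:///dev.db"},
    )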
.. change::
:tags: bug, oracle
:tickets: 245
The Oracle dialect sets "transactional DDL" to False by default,
as Oracle does not support transactional DDL.
.. change::
:tags: bug, autogenerate
:tickets: 243
Fixed a variety of issues surrounding rendering of Python code that
contains unicode literals. The first is that the "quoted_name" construct
that SQLAlchemy uses to represent table and column names as well
as schema names does not ``repr()`` correctly on Py2K when the value
contains unicode characters; therefore an explicit stringification is
added to these. Additionally, SQL expressions such as server defaults
were not being generated in a unicode-safe fashion leading to decode
errors if server defaults contained non-ascii characters.
.. change::
:tags: bug, operations
:tickets: 174
The :meth:`.Operations.add_column` directive will now additionally emit
the appropriate ``CREATE INDEX`` statement if the
:class:`~sqlalchemy.schema.Column` object specifies ``index=True``.
Pull request courtesy David Szotten.
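For example, a sketch of a migration using a hypothetical ``account``
table and ``email`` column::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # also emits CREATE INDEX ix_account_email ON account (email)
        op.add_column(
            "account",
            sa.Column("email", sa.String(100), index=True),
        )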
.. change::
:tags: feature, operations
:tickets: 205
The :class:`~sqlalchemy.schema.Table` object is now returned when
the :meth:`.Operations.create_table` method is used. This ``Table``
is suitable for use in subsequent SQL operations, in particular
the :meth:`.Operations.bulk_insert` operation.
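A minimal sketch with a hypothetical ``accounts`` table::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        accounts = op.create_table(
            "accounts",
            sa.Column("id", sa.Integer, primary_key=True),
            sa.Column("name", sa.String(50)),
        )
        # the returned Table can be passed straight to bulk_insert()
        op.bulk_insert(
            accounts,
            [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}],
        )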
.. change::
:tags: feature, autogenerate
:tickets: 203
Indexes and unique constraints are now included in the
:paramref:`.EnvironmentContext.configure.include_object` hook.
Indexes are sent with type ``"index"`` and unique constraints with
type ``"unique_constraint"``.
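For example, a sketch of an ``env.py`` hook filtering on the new types,
assuming the usual ``connection`` / ``target_metadata`` names; the
``legacy_`` prefix is a hypothetical naming scheme::

    def include_object(object, name, type_, reflected, compare_to):
        # "legacy_" is a hypothetical prefix for objects to skip
        if (
            type_ in ("index", "unique_constraint")
            and name is not None
            and name.startswith("legacy_")
        ):
            return False
        return True

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        include_object=include_object,
    )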
.. change::
:tags: bug, autogenerate
:tickets: 219
Bound parameters are now resolved as "literal" values within the
SQL expression inside of a CheckConstraint(), when rendering the SQL
as a text string; supported for SQLAlchemy 0.8.0 and forward.
.. change::
:tags: bug, autogenerate
:tickets: 199
Added a workaround for SQLAlchemy issue #3023 (fixed in 0.9.5) where
a column that's part of an explicit PrimaryKeyConstraint would not
have its "nullable" flag set to False, thus producing a false
autogenerate. Also added a related correction to MySQL which will
correct for MySQL's implicit server default of '0' when a NULL integer
column is turned into a primary key column.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 240
Repaired issue related to the fix for #208 and others; a composite
foreign key reported by MySQL would cause a KeyError as Alembic
attempted to remove MySQL's implicitly generated indexes from the
autogenerate list.
.. change::
:tags: bug, autogenerate
:tickets: 28
If the "alembic_version" table is present in the target metadata,
autogenerate will skip this also. Pull request courtesy
Dj Gilcrease.
.. change::
:tags: bug, autogenerate
:tickets: 77
The :paramref:`.EnvironmentContext.configure.version_table`
and :paramref:`.EnvironmentContext.configure.version_table_schema`
arguments are now honored during the autogenerate process, such that
these names will be used as the "skip" names on both the database
reflection and target metadata sides.
.. change::
:tags: changed, autogenerate
:tickets: 229
The default value of the
:paramref:`.EnvironmentContext.configure.user_module_prefix`
parameter is **no longer the same as the SQLAlchemy prefix**.
When omitted, user-defined types will now use the ``__module__``
attribute of the type class itself when rendering in an
autogenerated module.
.. change::
:tags: bug, templates
:tickets: 234
Revision files are now written out using the ``'wb'`` modifier to
``open()``, since Mako reads the templates with ``'rb'``, thus preventing
CRs from being doubled up as has been observed on windows. The encoding
of the output now defaults to 'utf-8', which can be configured using
a newly added config file parameter ``output_encoding``.
.. change::
:tags: bug, operations
:tickets: 230
Added support for use of the :class:`~sqlalchemy.sql.elements.quoted_name`
construct when using the ``schema`` argument within operations. This
allows a name containing a dot to be fully quoted, as well as to
provide configurable quoting on a per-name basis.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 73
Added a routine by which the Postgresql Alembic dialect inspects
the server default of INTEGER/BIGINT columns as they are reflected
during autogenerate for the pattern ``nextval(<name>...)`` containing
a potential sequence name, then queries ``pg_catalog`` to see if this
sequence is "owned" by the column being reflected; if so, it assumes
this is a SERIAL or BIGSERIAL column and the server default is
omitted from the column reflection as well as any kind of
server_default comparison or rendering, along with an INFO message
in the logs indicating this has taken place. This allows SERIAL/BIGSERIAL
columns to keep the SEQUENCE from being unnecessarily present within
the autogenerate operation.
.. change::
:tags: bug, autogenerate
:tickets: 197, 64, 196
The system by which autogenerate renders expressions within
a :class:`~sqlalchemy.schema.Index`, the ``server_default``
of :class:`~sqlalchemy.schema.Column`, and the
``existing_server_default`` of
:meth:`.Operations.alter_column` has been overhauled to anticipate
arbitrary SQLAlchemy SQL constructs, such as ``func.somefunction()``,
``cast()``, ``desc()``, and others. The system does not, as might
be preferred, render the full-blown Python expression as originally
created within the application's source code, as this would be exceedingly
complex and difficult. Instead, it renders the SQL expression against
the target backend that's subject to the autogenerate, and then
renders that SQL inside of a :func:`~sqlalchemy.sql.expression.text`
construct as a literal SQL string. This approach still has the
downside that the rendered SQL construct may not be backend-agnostic
in all cases, so there is still a need for manual intervention in that
small number of cases, but overall the majority of cases should work
correctly now. Big thanks to Carlos Rivera for pull requests and
support on this.
.. change::
:tags: feature
SQLAlchemy's testing infrastructure is now used to run tests.
This system supports both nose and pytest and opens the way
for Alembic testing to support any number of backends, parallel
testing, and 3rd party dialect testing.
.. change::
:tags: changed, compatibility
Minimum SQLAlchemy version is now 0.7.6, however at least
0.8.4 is strongly recommended. The overhaul of the test suite
allows for fully passing tests on all SQLAlchemy versions
from 0.7.6 on forward.
.. change::
:tags: bug, operations
The "match" keyword is not sent to :class:`.ForeignKeyConstraint`
by :meth:`.Operations.create_foreign_key` when SQLAlchemy 0.7 is in use;
this keyword was added to SQLAlchemy as of 0.8.0.
.. changelog::
:version: 0.6.7
:released: September 9, 2014
.. change::
:tags: bug, mssql
Fixed bug in MSSQL dialect where "rename table" wasn't using
``sp_rename()`` as is required on SQL Server. Pull request courtesy
Łukasz Bołdys.
.. change::
:tags: feature
:tickets: 222
Added support for functional indexes when using the
:meth:`.Operations.create_index` directive. Within the list of columns,
the SQLAlchemy ``text()`` construct can be sent, embedding a literal
SQL expression; the :meth:`.Operations.create_index` will perform some hackery
behind the scenes to get the :class:`.Index` construct to cooperate.
This works around some current limitations in :class:`.Index`
which should be resolved on the SQLAlchemy side at some point.
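For example, a sketch of a functional index over a hypothetical
``account.email`` column::

    from sqlalchemy import text
    from alembic import op

    def upgrade():
        # "account" / "email" are hypothetical table and column names
        op.create_index(
            "ix_account_email_lower", "account", [text("lower(email)")]
        )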
.. changelog::
:version: 0.6.6
:released: August 7, 2014
.. change::
:tags: bug
:tickets: 95
A file named ``__init__.py`` in the ``versions/`` directory is now
ignored by Alembic when the collection of version files is retrieved.
Pull request courtesy Michael Floering.
.. change::
:tags: bug
Fixed Py3K bug where an attempt would be made to sort None against
string values when autogenerate would detect tables across multiple
schemas, including the default schema. Pull request courtesy
paradoxxxzero.
.. change::
:tags: bug
Autogenerate render will render the arguments within a Table construct
using ``*[...]`` when the number of columns/elements is greater than
255. Pull request courtesy Ryan P. Kelly.
.. change::
:tags: bug
Fixed bug where foreign key constraints would fail to render in
autogenerate when a schema name was present. Pull request courtesy
Andreas Zeidler.
.. change::
:tags: bug
:tickets: 212
Some deep-in-the-weeds fixes to try to get "server default" comparison
working better across platforms and expressions, in particular on
the Postgresql backend, mostly dealing with quoting/not quoting of various
expressions at the appropriate time and on a per-backend basis.
Repaired and tested support for such defaults as Postgresql interval
and array defaults.
.. change::
:tags: enhancement
:tickets: 209
When a run of Alembic command line fails due to ``CommandError``,
the output now prefixes the string with ``"FAILED:"``, and the error
is also written to the log output using ``log.error()``.
.. change::
:tags: bug
:tickets: 208
Liberalized even more the check for MySQL indexes that shouldn't be
counted in autogenerate as "drops"; this time it's been reported
that an implicitly created index might be named the same as a composite
foreign key constraint, and not the actual columns, so we now skip those
when detected as well.
.. change::
:tags: feature
Added a new accessor :attr:`.MigrationContext.config`, when used
in conjunction with a :class:`.EnvironmentContext` and
:class:`.Config`, this config will be returned. Patch
courtesy Marc Abramowitz.
.. changelog::
:version: 0.6.5
:released: May 3, 2014
.. change::
:tags: bug, autogenerate, mysql
:tickets: 202
Fixed an autogenerate index detection bug: when a MySQL table
includes an Index with the same name as a column, autogenerate reported
it as an "add" even though it's not; this is because we ignore reflected
indexes of this nature due to MySQL creating them implicitly. Indexes
that are named the same as a column are now ignored on
MySQL if we see that the backend is reporting that the index already exists;
this means that we can still detect additions of these indexes
but not drops, as we cannot distinguish an existing backend index same-named
as the column from one that is user-generated or MySQL-generated.
.. change::
:tags: feature, environment
:tickets: 201
Added new feature :paramref:`.EnvironmentContext.configure.transaction_per_migration`,
which when True causes the BEGIN/COMMIT pair to incur for each migration
individually, rather than for the whole series of migrations. This is
to assist with some database directives that need to be within individual
transactions, without the need to disable transactional DDL entirely.
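A sketch of the corresponding ``env.py`` configuration, assuming the
usual ``connection`` and ``target_metadata`` names::

    # connection / target_metadata as in a typical env.py
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        # each migration file now runs in its own BEGIN/COMMIT
        transaction_per_migration=True,
    )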
.. change::
:tags: bug, autogenerate
:tickets: 200
Fixed bug where the ``include_object()`` filter would not receive
the original :class:`.Column` object when evaluating a database-only
column to be dropped; the object would not include the parent
:class:`.Table` nor other aspects of the column that are important
for generating the "downgrade" case where the column is recreated.
.. change::
:tags: bug, environment
:tickets: 195
Fixed bug where :meth:`.EnvironmentContext.get_x_argument`
would fail if the :class:`.Config` in use didn't actually
originate from a command line call.
.. change::
:tags: bug, autogenerate
:tickets: 194
Fixed another bug regarding naming conventions, continuing
from :ticket:`183`, where add_index()
drop_index() directives would not correctly render the ``f()``
construct when the index contained a convention-driven name.
.. changelog::
:version: 0.6.4
:released: March 28, 2014
.. change::
:tags: bug, mssql
:tickets: 186
Added quoting to the table name when the special EXEC is run to
drop any existing server defaults or constraints when the
:paramref:`.Operations.drop_column.mssql_drop_check` or
:paramref:`.Operations.drop_column.mssql_drop_default`
arguments are used.
.. change::
:tags: bug, mysql
:tickets: 103
Added/fixed support for MySQL "SET DEFAULT" / "DROP DEFAULT" phrases,
which will now be rendered if only the server default is changing
or being dropped (e.g. specify None to alter_column() to indicate
"DROP DEFAULT"). Also added support for rendering MODIFY rather than
CHANGE when the column name isn't changing.
.. change::
:tags: bug
:tickets: 190
Added support for the ``initially``, ``match`` keyword arguments
as well as dialect-specific keyword arguments to
:meth:`.Operations.create_foreign_key`.
.. change::
:tags: feature
:tickets: 163
Altered the support for "sourceless" migration files (e.g. only
.pyc or .pyo present) so that the flag "sourceless=true" needs to
be in alembic.ini for this behavior to take effect.
.. change::
:tags: bug, mssql
:tickets: 185
The feature that keeps on giving, index/unique constraint autogenerate
detection, has even more fixes, this time to accommodate database dialects
that don't yet report on unique constraints, but whose backend
does report unique constraints as indexes. The logic
Alembic uses to distinguish between "this is an index!" vs.
"this is a unique constraint that is also reported as an index!" has now
been further enhanced to not produce unwanted migrations when the dialect
is observed to not yet implement ``get_unique_constraints()`` (e.g. mssql).
Note that such a backend will no longer report index drops for unique
indexes, as these cannot be distinguished from an unreported unique
index.
.. change::
:tags: bug
:tickets: 183
Extensive changes have been made to more fully support SQLAlchemy's new
naming conventions feature. Note that while SQLAlchemy has added this
feature as of 0.9.2, some additional fixes in 0.9.4 are needed to
resolve some of the issues:
1. The :class:`.Operations` object now takes into account the naming
conventions that are present on the :class:`.MetaData` object that's
associated using :paramref:`~.EnvironmentContext.configure.target_metadata`.
When :class:`.Operations` renders a constraint directive like
``ADD CONSTRAINT``, it now will make use of this naming convention
when it produces its own temporary :class:`.MetaData` object.
2. Note however that the autogenerate feature in most cases generates
constraints like foreign keys and unique constraints with the
final names intact; the only exception are the constraints implicit
with a schema-type like Boolean or Enum. In most of these cases,
the naming convention feature will not take effect for these constraints
and will instead use the given name as is, with one exception....
3. Naming conventions which use the ``"%(constraint_name)s"`` token, that
is, produce a new name that uses the original name as a component,
will still be pulled into the naming convention converter and be
converted. The problem arises when autogenerate renders a constraint
with its already-generated name present in the migration file's source
code, the name will be doubled up at render time due to the combination
of #1 and #2. So to work around this, autogenerate now renders these
already-tokenized names using the new :meth:`.Operations.f` component.
This component is only generated if **SQLAlchemy 0.9.4** or greater
is in use.
Therefore it is highly recommended that an upgrade to Alembic 0.6.4
be accompanied by an upgrade of SQLAlchemy 0.9.4, if the new naming
conventions feature is used.
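For example, a sketch with a hypothetical convention-generated
constraint name; :meth:`.Operations.f` marks the name as final so the
naming convention will not be applied to it a second time::

    from alembic import op

    def upgrade():
        # "uq_user_email" is a hypothetical, already-tokenized name
        op.create_unique_constraint(
            op.f("uq_user_email"), "user", ["email"]
        )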
.. seealso::
:ref:`autogen_naming_conventions`
.. change::
:tags: bug
:tickets: 160
Suppressed IOErrors which can raise when program output pipe
is closed under a program like ``head``; however this only
works on Python 2. On Python 3, there is not yet a known way to
suppress the BrokenPipeError warnings without prematurely terminating
the program via signals.
.. change::
:tags: bug
:tickets: 179
Fixed bug where :meth:`.Operations.bulk_insert` would not function
properly when :meth:`.Operations.inline_literal` values were used,
either in --sql or non-sql mode. The values will now render
directly in --sql mode. For compatibility with "online" mode,
a new flag :paramref:`~.Operations.bulk_insert.multiinsert`
can be set to False which will cause each parameter set to be
compiled and executed with individual INSERT statements.
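A sketch using a hypothetical ``accounts`` table::

    import sqlalchemy as sa
    from alembic import op

    # hypothetical lightweight table construct
    accounts = sa.table(
        "accounts",
        sa.column("id", sa.Integer),
        sa.column("name", sa.String),
    )

    def upgrade():
        op.bulk_insert(
            accounts,
            [
                {"id": 1, "name": op.inline_literal("alice")},
                {"id": 2, "name": op.inline_literal("bob")},
            ],
            multiinsert=False,  # one INSERT per parameter set
        )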
.. change::
:tags: bug, py3k
:tickets: 175
Fixed a failure of the system that allows "legacy keyword arguments"
to be understood, which arose as of a change in Python 3.4 regarding
decorators. A workaround is applied that allows the code to work
across Python 3 versions.
.. change::
:tags: feature
The :func:`.command.revision` command now returns the :class:`.Script`
object corresponding to the newly generated revision. From this
structure, one can get the revision id, the module documentation,
and everything else, for use in scripts that call upon this command.
Pull request courtesy Robbie Coomber.
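For example, a minimal sketch assuming an existing ``alembic.ini``::

    from alembic import command
    from alembic.config import Config

    config = Config("alembic.ini")
    script = command.revision(config, message="add accounts table")

    # the returned Script exposes the new revision's attributes
    print(script.revision, script.path)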
.. changelog::
:version: 0.6.3
:released: February 2, 2014
.. change::
:tags: bug
:tickets: 172
Added a workaround for when we call ``fcntl.ioctl()`` to get at
``TERMWIDTH``; if the function returns zero, as is reported to occur
in some pseudo-ttys, the message wrapping system is disabled in the
same way as if ``ioctl()`` failed.
.. change::
:tags: feature
:tickets: 171
Added new argument
:paramref:`.EnvironmentContext.configure.user_module_prefix`.
This prefix is applied when autogenerate renders a user-defined type,
which here is defined as any type that is from a module outside of the
``sqlalchemy.`` hierarchy. This prefix defaults to ``None``, in
which case the :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`
is used, thus preserving the current behavior.
.. change::
:tags: bug
:tickets: 170
Added support for autogenerate covering the use case where :class:`.Table`
objects specified in the metadata have an explicit ``schema`` attribute
whose name matches that of the connection's default schema
(e.g. "public" for Postgresql). Previously, it was assumed that "schema"
was ``None`` when it matched the "default" schema, now the comparison
adjusts for this.
.. change::
:tags: bug
The :func:`.compare_metadata` public API function now takes into
account the settings for
:paramref:`.EnvironmentContext.configure.include_object`,
:paramref:`.EnvironmentContext.configure.include_symbol`,
and :paramref:`.EnvironmentContext.configure.include_schemas`, in the
same way that the ``--autogenerate`` command does. Pull
request courtesy Roman Podoliaka.
.. change::
:tags: bug
:tickets: 168
Calling :func:`.bulk_insert` with an empty list will not emit any
commands on the current connection. This was already the case with
``--sql`` mode, so is now the case with "online" mode.
.. change::
:tags: bug
Enabled schema support for index and unique constraint autodetection;
previously these were non-functional and could in some cases lead to
attribute errors. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug
:tickets: 164
More fixes to index autodetection; indexes created with expressions
like DESC or functional indexes will no longer cause AttributeError
exceptions when attempting to compare the columns.
.. change::
:tags: feature
:tickets: 163
The :class:`.ScriptDirectory` system that loads migration files
from a ``versions/`` directory now supports so-called
"sourceless" operation, where the ``.py`` files are not present
and instead ``.pyc`` or ``.pyo`` files are directly present where
the ``.py`` files should be. Note that while Python 3.3 has a
new system of locating ``.pyc``/``.pyo`` files within a directory
called ``__pycache__`` (e.g. PEP-3147), PEP-3147 maintains
support for the "source-less imports" use case, where the
``.pyc``/``.pyo`` are present in the "old" location, e.g. next
to the ``.py`` file; this is the usage that's supported even when
running Python 3.3.
.. changelog::
:version: 0.6.2
:released: Fri Dec 27 2013
.. change::
:tags: bug
Autogenerate for ``op.create_table()`` will not include a
``PrimaryKeyConstraint()`` that has no columns.
.. change::
:tags: bug
Fixed bug in the not-internally-used :meth:`.ScriptDirectory.get_base`
method which would fail if called on an empty versions directory.
.. change::
:tags: bug
:tickets: 157
An almost-rewrite of the new unique constraint/index autogenerate
detection, to accommodate a variety of issues. The emphasis is on
not generating false positives for those cases where no net change
is present, as these errors are the ones that impact all autogenerate
runs:
* Fixed an issue with unique constraint autogenerate detection where
a named ``UniqueConstraint`` on both sides with column changes would
render with the "add" operation before the "drop", requiring the
user to reverse the order manually.
* Corrected for MySQL's apparent addition of an implicit index
for a foreign key column, so that it doesn't show up as "removed".
This required that the index/constraint autogen system query the
dialect-specific implementation for special exceptions.
* reworked the "dedupe" logic to accommodate MySQL's bi-directional
duplication of unique indexes as unique constraints, and unique
constraints as unique indexes. Postgresql's slightly different
logic of duplicating unique constraints into unique indexes
continues to be accommodated as well. Note that a unique index
or unique constraint removal on a backend that duplicates these may
show up as a distinct "remove_constraint()" / "remove_index()" pair,
which may need to be corrected in the post-autogenerate if multiple
backends are being supported.
* added another dialect-specific exception to the SQLite backend
when dealing with unnamed unique constraints, as the backend can't
currently report on constraints that were made with this technique,
hence they'd come out as "added" on every run.
* the ``op.create_table()`` directive will be auto-generated with
the ``UniqueConstraint`` objects inline, but will not double them
up with a separate ``create_unique_constraint()`` call, which may
have been occurring. Indexes still get rendered as distinct
``op.create_index()`` calls even when the corresponding table was
created in the same script.
* the inline ``UniqueConstraint`` within ``op.create_table()`` includes
all the options like ``deferrable``, ``initially``, etc. Previously
these weren't rendering.
.. change::
:tags: feature, mssql
Added new argument ``mssql_drop_foreign_key`` to
:meth:`.Operations.drop_column`. Like ``mssql_drop_default``
and ``mssql_drop_check``, will do an inline lookup for a
single foreign key which applies to this column, and drop it.
For a column with more than one FK, you'd still need to explicitly
use :meth:`.Operations.drop_constraint` given the name,
even though only MSSQL has this limitation in the first place.
.. change::
:tags: bug, mssql
The MSSQL backend will add the batch separator (e.g. ``"GO"``)
in ``--sql`` mode after the final ``COMMIT`` statement, to ensure
that statement is also processed in batch mode. Courtesy
Derek Harland.
.. changelog::
:version: 0.6.1
:released: Wed Nov 27 2013
.. change::
:tags: bug, mysql
:tickets: 152
Fixed bug where :func:`.op.alter_column` in the MySQL dialect
would fail to apply quotes to column names that had mixed casing
or spaces.
.. change::
:tags: feature
Expanded the size of the "slug" generated by "revision" to 40
characters, which is also configurable by new field
``truncate_slug_length``; and also split on the word rather than the
character; courtesy Frozenball.
.. change::
:tags: bug
:tickets: 135
Fixed the output wrapping for Alembic message output, so that
we either get the terminal width for "pretty printing" with
indentation, or if not we just output the text as is; in any
case the text won't be wrapped too short.
.. change::
:tags: bug
Fixes to Py3k in-place compatibility regarding output encoding and related;
the use of the new io.* package introduced some incompatibilities on Py2k.
These should be resolved, due to the introduction of new adapter types
for translating from io.* to Py2k file and StringIO types.
Thanks to Javier Santacruz for help with this.
.. change::
:tags: bug
:tickets: 145
Fixed py3k bug where the wrong form of ``next()`` was being called
when using the list_templates command. Courtesy Chris Wilkes.
.. change::
:tags: feature
:tickets: 107
Support for autogeneration detection and rendering of indexes and
unique constraints has been added. The logic goes through some effort
in order to differentiate between true unique constraints and
unique indexes, where there are some quirks on backends like Postgresql.
The effort here in producing the feature and tests is courtesy of IJL.
.. change::
:tags: bug
Fixed bug introduced by new ``include_object`` argument where the
inspected column would be misinterpreted when using a user-defined
type comparison function, causing a KeyError or similar expression-related
error. Fix courtesy Maarten van Schaik.
.. change::
:tags: bug
Added the "deferrable" keyword argument to :func:`.op.create_foreign_key`
so that ``DEFERRABLE`` constraint generation is supported; courtesy
Pedro Romano.
.. change::
:tags: bug
:tickets: 137
Ensured that strings going to stdout go through an encode/decode phase,
so that any non-ASCII characters get to the output stream correctly
in both Py2k and Py3k. Also added source encoding detection using
Mako's parse_encoding() routine in Py2k so that the __doc__ of a
non-ascii revision file can be treated as unicode in Py2k.
.. changelog::
:version: 0.6.0
:released: Fri July 19 2013
.. change::
:tags: feature
:tickets: 101
Added new kw argument to :meth:`.EnvironmentContext.configure`
``include_object``. This is a more flexible version of the
``include_symbol`` argument which allows filtering of columns as well as tables
from the autogenerate process,
and in the future will also work for types, constraints and
other constructs. The fully constructed schema object is passed,
including its name and type as well as a flag indicating if the object
is from the local application metadata or is reflected.
.. change::
:tags: feature
The output of the ``alembic history`` command is now
expanded to show information about each change on multiple
lines, including the full top message,
resembling the formatting of git log.
.. change::
:tags: feature
Added :attr:`alembic.config.Config.cmd_opts` attribute,
allows access to the ``argparse`` options passed to the
``alembic`` runner.
.. change::
:tags: feature
:tickets: 120
Added new command line argument ``-x``, allows extra arguments
to be appended to the command line which can be consumed
within an ``env.py`` script by looking at
``context.config.cmd_opts.x``, or more simply a new
method :meth:`.EnvironmentContext.get_x_argument`.
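For example, given a hypothetical invocation such as
``alembic -x db=staging upgrade head``, an ``env.py`` sketch might read
the value as::

    from alembic import context

    # "db" is a hypothetical key passed via -x db=staging
    x_args = context.get_x_argument(as_dictionary=True)
    db_name = x_args.get("db", "default")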
.. change::
:tags: bug
:tickets: 125
Added support for options like "name" etc. to be rendered
within CHECK constraints in autogenerate. Courtesy
Sok Ann Yap.
.. change::
:tags: misc
Source repository has been moved from Mercurial to Git.
.. change::
:tags: bug
Repaired autogenerate rendering of ForeignKeyConstraint
to include use_alter argument, if present.
.. change::
:tags: feature
Added ``-r`` argument to ``alembic history`` command,
allows specification of ``[start]:[end]`` to view
a slice of history. Accepts revision numbers, symbols
"base", "head", a new symbol "current" representing the
current migration, as well as relative ranges for one
side at a time (i.e. ``-r-5:head``, ``-rcurrent:+3``).
Courtesy Atsushi Odagiri for this feature.
.. change::
:tags: feature
:tickets: 55
Source base is now in-place for Python 2.6 through
3.3, without the need for 2to3. Support for Python 2.5
and below has been dropped. Huge thanks to
Hong Minhee for all the effort on this!
.. changelog::
:version: 0.5.0
:released: Thu Apr 4 2013
.. note::
Alembic 0.5.0 now requires at least
version 0.7.3 of SQLAlchemy to run properly.
Support for 0.6 has been dropped.
.. change::
:tags: feature
:tickets: 76
Added ``version_table_schema`` argument
to :meth:`.EnvironmentContext.configure`,
complements the ``version_table`` argument to
set an optional remote schema for the version
table. Courtesy Christian Blume.
.. change::
:tags: bug, postgresql
:tickets: 32
Fixed format of RENAME for table that includes
schema with Postgresql; the schema name shouldn't
be in the "TO" field.
.. change::
:tags: feature
:tickets: 90
Added ``output_encoding`` option to
:meth:`.EnvironmentContext.configure`,
used with ``--sql`` mode to apply an encoding
to the output stream.
.. change::
:tags: feature
:tickets: 93
Added :meth:`.Operations.create_primary_key`
operation, will generate an ADD CONSTRAINT
for a primary key.
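A minimal sketch with a hypothetical ``account`` table::

    from alembic import op

    def upgrade():
        # emits ALTER TABLE account ADD CONSTRAINT pk_account PRIMARY KEY (id)
        op.create_primary_key("pk_account", "account", ["id"])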
.. change::
:tags: bug, mssql
:tickets: 109
Fixed bug whereby double quoting would be applied
to target column name during an ``sp_rename``
operation.
.. change::
:tags: bug, sqlite, mysql
:tickets: 112
transactional_ddl flag for SQLite, MySQL dialects
set to False. MySQL doesn't support it,
SQLite does but current pysqlite driver does not.
.. change::
:tags: feature
:tickets: 115
upgrade and downgrade commands will list the
first line of the docstring out next to the
version number. Courtesy Hong Minhee.
.. change::
:tags: feature
Added --head-only option to "alembic current",
which will print the current version plus the symbol
"(head)" if this version is the head.
Courtesy Charles-Axel Dein.
.. change::
:tags: bug
:tickets: 110
Autogenerate will render additional table keyword
arguments like "mysql_engine" and others within
op.create_table().
.. change::
:tags: feature
:tickets: 108
The rendering of any construct during autogenerate
can be customized, in particular to allow special rendering
for user-defined column, constraint subclasses, using new
``render_item`` argument to
:meth:`.EnvironmentContext.configure`.
.. change::
:tags: bug
Fixed bug whereby create_index()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
This is the same issue that was fixed for unique
constraints in version 0.3.2.
.. change::
:tags: bug
Worked around a backwards-incompatible regression in Python3.3
regarding argparse; running "alembic" with no arguments
now yields an informative error in py3.3 as with all previous versions.
Courtesy Andrey Antukh.
.. change::
:tags: change
SQLAlchemy 0.6 is no longer supported by Alembic - minimum version is 0.7.3,
full support is as of 0.7.9.
.. change::
:tags: bug
:tickets: 104
A host of argument name changes within migration
operations for consistency. Keyword arguments
will continue to work on the old name for backwards compatibility,
however required positional arguments will not:
* :meth:`.Operations.alter_column` - ``name`` -> ``new_column_name`` -
  old name will work for backwards compatibility.
* :meth:`.Operations.create_index` - ``tablename`` -> ``table_name`` -
  argument is positional.
* :meth:`.Operations.drop_index` - ``tablename`` -> ``table_name`` - old
  name will work for backwards compatibility.
* :meth:`.Operations.drop_constraint` - ``tablename`` -> ``table_name`` -
  argument is positional.
* :meth:`.Operations.drop_constraint` - ``type`` -> ``type_`` - old
  name will work for backwards compatibility.
.. changelog::
:version: 0.4.2
:released: Fri Jan 11 2013
.. change::
:tags: bug, autogenerate
:tickets: 99
Fixed bug where autogenerate would fail if a Column
to be added to a table made use of the ".key" parameter.
.. change::
:tags: bug, sqlite
:tickets: 98
The "implicit" constraint generated by a
type such as Boolean or Enum will not generate an
ALTER statement when run on SQLite, which does not
support ALTER for the purpose of adding/removing
constraints separate from the column def itself.
While SQLite supports adding a CHECK constraint
at the column level, SQLAlchemy would need modification
to support this.
A warning is emitted indicating this
constraint cannot be added in this scenario.
.. change::
:tags: bug
:tickets: 96
Added a workaround to setup.py to prevent
"NoneType" error from occurring when
"setup.py test" is run.
.. change::
:tags: bug
:tickets: 96
Added an append_constraint() step to each
condition within
test_autogenerate:AutogenRenderTest.test_render_fk_constraint_kwarg
if the SQLAlchemy version is less than 0.8, as ForeignKeyConstraint
does not auto-append prior to 0.8.
.. change::
:tags: feature
:tickets: 96
Added a README.unittests with instructions for running the test
suite fully.
.. changelog::
:version: 0.4.1
:released: Sun Dec 9 2012
.. change::
:tags: bug
:tickets: 92
Added support for autogenerate render of
ForeignKeyConstraint options onupdate,
ondelete, initially, and deferred.
.. change::
:tags: bug
:tickets: 94
Autogenerate will include "autoincrement=False"
in the rendered table metadata
if this flag was set to false on the source
:class:`.Column` object.
.. change::
:tags: feature
:tickets: 66
Explicit error message describing the case
when downgrade --sql is used without specifying
specific start/end versions.
.. change::
:tags: bug
:tickets: 81
Removed erroneous "emit_events" attribute
from operations.create_table() documentation.
.. change::
:tags: bug
:tickets:
Fixed the minute component in file_template
which returned the month part of the create date.
.. changelog::
:version: 0.4.0
:released: Mon Oct 01 2012
.. change::
:tags: feature
:tickets: 33
Support for tables in alternate schemas
has been added fully to all operations, as well as to
the autogenerate feature. When using autogenerate,
specifying the flag include_schemas=True to
Environment.configure() will also cause autogenerate
to scan all schemas located by Inspector.get_schema_names(),
which is supported by *some* (but not all)
SQLAlchemy dialects including Postgresql.
*Enormous* thanks to Bruno Binet for a huge effort
in implementing as well as writing tests.
.. change::
:tags: feature
:tickets: 70
The command line runner has been organized
into a reusable CommandLine object, so that other
front-ends can re-use the argument parsing built
in.
.. change::
:tags: feature
:tickets: 43
Added "stdout" option to Config, provides
control over where the "print" output of commands like
"history", "init", "current" etc. are sent.
.. change::
:tags: bug
:tickets: 71
Fixed the "multidb" template which was badly out
of date. It now generates revision files using
the configuration to determine the different
upgrade_<xyz>() methods needed as well, instead of
needing to hardcode these. Huge thanks to
BryceLohr for doing the heavy lifting here.
.. change::
:tags: bug
:tickets: 72
Fixed the regexp that was checking for .py files
in the version directory to allow any .py file through.
Previously it was doing some kind of defensive checking,
probably from some early notions of how this directory
works, that was prohibiting various filename patterns
such as those which begin with numbers.
.. change::
:tags: bug
:tickets:
Fixed MySQL rendering for server_default which
didn't work if the server_default was a generated
SQL expression. Courtesy Moriyoshi Koizumi.
.. change::
:tags: feature
:tickets:
Added support for alteration of MySQL
columns that have AUTO_INCREMENT, as well as enabling
this flag. Courtesy Moriyoshi Koizumi.
.. changelog::
:version: 0.3.6
:released: Wed Aug 15 2012
.. change::
:tags: feature
:tickets: 27
Added include_symbol option to
EnvironmentContext.configure(),
specifies a callable which will include/exclude tables
in their entirety from the autogeneration process
based on name.
.. change::
:tags: feature
:tickets: 59
Added year, month, day, hour, minute, second
variables to file_template.
.. change::
:tags: feature
:tickets:
Added 'primary' to the list of constraint types
recognized for MySQL drop_constraint().
.. change::
:tags: feature
:tickets:
Added --sql argument to the "revision" command,
for the use case where the "revision_environment"
config option is being used but SQL access isn't
desired.
.. change::
:tags: bug
:tickets:
Repaired create_foreign_key() for
self-referential foreign keys, which weren't working
at all.
.. change::
:tags: bug
:tickets: 63
'alembic' command reports an informative
error message when the configuration is missing
the 'script_directory' key.
.. change::
:tags: bug
:tickets: 62
Fixes made to the constraints created/dropped
alongside so-called "schema" types such as
Boolean and Enum. The create/drop constraint logic
does not kick in when using a dialect that doesn't
use constraints for these types, such as postgresql,
even when existing_type is specified to
alter_column(). Additionally, the constraints
are not affected if existing_type is passed but
type\_ is not, i.e. there's no net change
in type.
.. change::
:tags: bug
:tickets: 66
Improved error message when specifying
non-ordered revision identifiers to cover
the case when the "higher" rev is None,
improved message overall.
.. changelog::
:version: 0.3.5
:released: Sun Jul 08 2012
.. change::
:tags: bug
:tickets: 31
Fixed issue whereby reflected server defaults
wouldn't be quoted correctly; uses repr() now.
.. change::
:tags: bug
:tickets: 58
Fixed issue whereby when autogenerate would
render create_table() on the upgrade side for a
table that has a Boolean type, an unnecessary
CheckConstraint() would be generated.
.. change::
:tags: feature
:tickets:
Implemented SQL rendering for
CheckConstraint() within autogenerate upgrade,
including for literal SQL as well as SQL Expression
Language expressions.
.. changelog::
:version: 0.3.4
:released: Sat Jun 02 2012
.. change::
:tags: bug
:tickets:
Fixed command-line bug introduced by the
"revision_environment" feature.
.. changelog::
:version: 0.3.3
:released: Sat Jun 02 2012
.. change::
:tags: feature
:tickets:
New config argument
"revision_environment=true", causes env.py to
be run unconditionally when the "revision" command
is run, to support script.py.mako templates with
dependencies on custom "template_args".
.. change::
:tags: feature
:tickets:
Added "template_args" option to configure()
so that an env.py can add additional arguments
to the template context when running the
"revision" command. This requires either --autogenerate
or the configuration directive "revision_environment=true".
.. change::
:tags: bug
:tickets: 44
Added "type" argument to op.drop_constraint(),
and implemented full constraint drop support for
MySQL. CHECK and undefined constraint types raise an error.
MySQL needs the constraint type
in order to emit a DROP CONSTRAINT.
.. change::
:tags: feature
:tickets: 34
Added version_table argument to
EnvironmentContext.configure(), allowing for the
configuration of the version table name.
.. change::
:tags: feature
:tickets:
Added support for "relative" migration
identifiers, i.e. "alembic upgrade +2",
"alembic downgrade -1". Courtesy
Atsushi Odagiri for this feature.
.. change::
:tags: bug
:tickets: 49
Fixed bug whereby directories inside of
the template directories, such as __pycache__
on Pypy, would mistakenly be interpreted as
files which are part of the template.
.. changelog::
:version: 0.3.2
:released: Mon Apr 30 2012
.. change::
:tags: feature
:tickets: 40
Basic support for Oracle added,
courtesy shgoh.
.. change::
:tags: feature
:tickets:
Added support for UniqueConstraint
in autogenerate, courtesy Atsushi Odagiri
.. change::
:tags: bug
:tickets:
Fixed support of schema-qualified
ForeignKey target in column alter operations,
courtesy Alexander Kolov.
.. change::
:tags: bug
:tickets:
Fixed bug whereby create_unique_constraint()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
.. changelog::
:version: 0.3.1
:released: Sat Apr 07 2012
.. change::
:tags: bug
:tickets: 41
bulk_insert() fixes:
1. bulk_insert() operation was
not working most likely since the 0.2 series
when used with an engine.
2. Repaired bulk_insert() to complete when
used against a lower-case-t table and executing
with only one set of parameters, working
around SQLAlchemy bug #2461 in this regard.
3. bulk_insert() uses "inline=True" so that phrases
like RETURNING and such don't get invoked for
single-row bulk inserts.
4. bulk_insert() will check that you're passing
a list of dictionaries in, raises TypeError
if not detected.
.. changelog::
:version: 0.3.0
:released: Thu Apr 05 2012
.. change::
:tags: general
:tickets:
The focus of 0.3 is to clean up
and more fully document the public API of Alembic,
including better accessors on the MigrationContext
and ScriptDirectory objects. Methods that are
not considered to be public on these objects have
been underscored, and methods which should be public
have been cleaned up and documented, including:
MigrationContext.get_current_revision()
ScriptDirectory.iterate_revisions()
ScriptDirectory.get_current_head()
ScriptDirectory.get_heads()
ScriptDirectory.get_base()
ScriptDirectory.generate_revision()
.. change::
:tags: feature
:tickets:
Added a bit of autogenerate to the
public API in the form of the function
alembic.autogenerate.compare_metadata.
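A minimal sketch, assuming ``target_metadata`` is the application's
:class:`~sqlalchemy.schema.MetaData`::

    import sqlalchemy as sa
    from alembic.autogenerate import compare_metadata
    from alembic.migration import MigrationContext

    engine = sa.create_engine("sqlite://")
    with engine.connect() as conn:
        mc = MigrationContext.configure(conn)
        # target_metadata is assumed to be defined elsewhere
        diffs = compare_metadata(mc, target_metadata)  # list of diff tuples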
.. changelog::
:version: 0.2.2
:released: Mon Mar 12 2012
.. change::
:tags: feature
:tickets:
Informative error message when op.XYZ
directives are invoked at module import time.
.. change::
:tags: bug
:tickets: 35
Fixed inappropriate direct call to
util.err() and therefore sys.exit()
when Config failed to locate the
config file within library usage.
.. change::
:tags: bug
:tickets:
Autogenerate will emit CREATE TABLE
and DROP TABLE directives according to
foreign key dependency order.
.. change::
:tags: bug
:tickets:
implement 'tablename' parameter on
drop_index() as this is needed by some
backends.
.. change::
:tags: feature
:tickets:
Added execution_options parameter
to op.execute(), will call execution_options()
on the Connection before executing.
The immediate use case here is to allow
access to the new no_parameters option
in SQLAlchemy 0.7.6, which allows
some DBAPIs (psycopg2, MySQLdb) to allow
percent signs straight through without
escaping, thus providing cross-compatible
operation with DBAPI execution and
static script generation.
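For example, a sketch passing a literal percent sign through psycopg2,
using a hypothetical ``account`` table::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # hypothetical table; '50%' passes through without escaping
        op.execute(
            sa.text("UPDATE account SET discount = '50%'"),
            execution_options={"no_parameters": True},
        )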
.. change::
:tags: bug
:tickets:
setup.py won't install argparse if on
Python 2.7/3.2
.. change::
:tags: feature
:tickets: 29
script_location can be interpreted
by pkg_resources.resource_filename(), if
it is a non-absolute URI that contains
colons. This scheme is the same
one used by Pyramid.
.. change::
:tags: feature
:tickets:
added missing support for
onupdate/ondelete flags for
ForeignKeyConstraint, courtesy Giacomo Bagnoli
.. change::
:tags: bug
:tickets: 30
fixed a regression regarding an autogenerate
error message, as well as various glitches
in the Pylons sample template. The Pylons sample
template requires that you tell it where to
get the Engine from now. courtesy
Marcin Kuzminski
.. change::
:tags: bug
:tickets:
drop_index() ensures a dummy column
is added when it calls "Index", as SQLAlchemy
0.7.6 will warn on index with no column names.
.. changelog::
:version: 0.2.1
:released: Tue Jan 31 2012
.. change::
:tags: bug
:tickets: 26
Fixed the generation of CHECK constraint,
regression from 0.2.0
.. changelog::
:version: 0.2.0
:released: Mon Jan 30 2012
.. change::
:tags: feature
:tickets: 19
API rearrangement allows everything
Alembic does to be represented by contextual
objects, including EnvironmentContext,
MigrationContext, and Operations. Other
libraries and applications can now use
things like "alembic.op" without relying
upon global configuration variables.
The rearrangement was done such that
existing migrations should be OK,
as long as they use the pattern
of "from alembic import context" and
"from alembic import op", as these
are now contextual objects, not modules.
.. change::
:tags: feature
:tickets: 24
The naming of revision files can
now be customized to be some combination
of "rev id" and "slug", the latter of which
is based on the revision message.
By default, the pattern "<rev>_<slug>"
is used for new files. New script files
should include the "revision" variable
for this to work, which is part of
the newer script.py.mako scripts.
.. change::
:tags: bug
:tickets: 25
env.py templates call
connection.close() to better support
programmatic usage of commands; use
NullPool in conjunction with create_engine()
as well so that no connection resources
remain afterwards.
.. change::
:tags: bug
:tickets: 22
fix the config.main() function to honor
the arguments passed, remove no longer used
"scripts/alembic" as setuptools creates this
for us.
.. change::
:tags: bug
:tickets:
Fixed alteration of column type on
MSSQL to not include the keyword "TYPE".
.. change::
:tags: feature
:tickets: 23
Can create alembic.config.Config
with no filename, use set_main_option()
to add values. Also added set_section_option()
which will add sections.
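A minimal sketch; ``myapp:migrations`` is a hypothetical
``pkg_resources``-style location::

    from alembic.config import Config

    cfg = Config()  # no ini file required
    cfg.set_main_option("script_location", "myapp:migrations")
    # set_section_option() adds values to arbitrary sections
    cfg.set_section_option("app", "custom_option", "value")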
.. changelog::
:version: 0.1.1
:released: Wed Jan 04 2012
.. change::
:tags: bug
:tickets:
Clean up file write operations so that
file handles are closed.
.. change::
:tags: feature
:tickets:
PyPy is supported.
.. change::
:tags: feature
:tickets:
Python 2.5 is supported, needs
__future__.with_statement
.. change::
:tags: bug
:tickets:
Fix autogenerate so that "pass" is
generated between the two comments
if no net migrations were present.
.. change::
:tags: bug
:tickets: 16
Fix autogenerate bug that prevented
correct reflection of a foreign-key
referenced table in the list of "to remove".
.. change::
:tags: bug
:tickets: 17
Fix bug where create_table() didn't
handle self-referential foreign key
correctly
.. change::
:tags: bug
:tickets: 18
Default prefix for autogenerate
directives is "op.", matching the
mako templates.
.. change::
:tags: feature
:tickets: 18
Add alembic_module_prefix argument
to configure() to complement
sqlalchemy_module_prefix.
.. change::
:tags: bug
:tickets: 14
fix quotes not being rendered in
ForeignKeyConstraint during
autogenerate
.. changelog::
:version: 0.1.0
:released: Wed Nov 30 2011
.. change::
:tags:
:tickets:
Initial release. Status of features:
.. change::
:tags:
:tickets:
Alembic is used in at least one production
environment, but should still be considered
ALPHA LEVEL SOFTWARE as of this release,
particularly in that many features are expected
to be missing / unimplemented. Major API
changes are not anticipated but for the moment
nothing should be assumed.
The author asks that you *please* report all
issues, missing features, workarounds etc.
to the bugtracker.
.. change::
:tags:
:tickets:
Python 3 is supported and has been tested.
.. change::
:tags:
:tickets:
The "Pylons" and "MultiDB" environment templates
have not been directly tested - these should be
considered to be samples to be modified as
needed. Multiple database support itself
is well tested, however.
.. change::
:tags:
:tickets:
Postgresql and MS SQL Server environments
have been tested for several weeks in a production
environment. In particular, some involved workarounds
were implemented to allow fully-automated dropping
of default- or constraint-holding columns with
SQL Server.
.. change::
:tags:
:tickets:
MySQL support has also been implemented to a
basic degree, including MySQL's awkward style
of modifying columns being accommodated.
.. change::
:tags:
:tickets:
Other database environments not included among
those three have *not* been tested, *at all*. This
includes Firebird, Oracle, Sybase. Adding
support for these backends should be
straightforward. Please report all missing/
incorrect behaviors to the bugtracker! Patches
are welcome here but are optional - please just
indicate the exact format expected by the target
database.
.. change::
:tags:
:tickets:
SQLite, as a backend, has almost no support for
schema alterations to existing databases. The author
would strongly recommend that SQLite not be used in
a migration context - just dump your SQLite database
into an intermediary format, then dump it back
into a new schema. For dev environments, the
dev installer should be building the whole DB from
scratch. Or just use Postgresql, which is a much
better database for non-trivial schemas.
Requests for full ALTER support on SQLite should be
reported to SQLite's bug tracker at
http://www.sqlite.org/src/wiki?name=Bug+Reports,
as Alembic will not be implementing the
"rename the table to a temptable then copy the
data into a new table" workaround.
Note that Alembic will at some point offer an
extensible API so that you can implement commands
like this yourself.
.. change::
:tags:
:tickets:
Well-tested directives include add/drop table, add/drop
column, including support for SQLAlchemy "schema"
types which generate additional CHECK
constraints, i.e. Boolean, Enum. Other directives not
included here have *not* been strongly tested
in production, i.e. rename table, etc.
.. change::
:tags:
:tickets:
Both "online" and "offline" migrations, the latter
being generated SQL scripts to hand off to a DBA,
have been strongly production tested against
Postgresql and SQL Server.
.. change::
:tags:
:tickets:
Modify column type, default status, nullable, is
functional and tested across PG, MSSQL, MySQL,
but not yet widely tested in production usage.
.. change::
:tags:
:tickets:
Many migrations are still outright missing, i.e.
create/add sequences, etc. As a workaround,
execute() can be used for those which are missing,
though posting of tickets for new features/missing
behaviors is strongly encouraged.
.. change::
:tags:
:tickets:
Autogenerate feature is implemented and has been
tested, though only a little bit in a production setting.
In particular, detection of type and server
default changes are optional and are off by default;
they can also be customized by a callable.
Both features work but can have surprises particularly
the disparity between BIT/TINYINT and boolean,
which hasn't yet been worked around, as well as
format changes performed by the database on defaults
when it reports back. When enabled, the PG dialect
will execute the two defaults to be compared to
see if they are equivalent. Other backends may
need to do the same thing.
The autogenerate feature only generates
"candidate" commands which must be hand-tailored
in any case, so is still a useful feature and
is safe to use. Please report missing/broken features
of autogenerate! This will be a great feature and
will also improve SQLAlchemy's reflection services.
.. change::
:tags:
:tickets:
Support for non-ASCII table, column and constraint
names is mostly nonexistent. This is also a
straightforward feature add as SQLAlchemy itself
supports unicode identifiers; Alembic itself will
likely need fixes to logging, column identification
by key, etc. for full support here.
==========
Changelog
==========
.. changelog::
:version: 1.12.1
:include_notes_from: unreleased
.. changelog::
:version: 1.12.0
:released: August 31, 2023
.. change::
:tags: bug, operations
:tickets: 1300
Added support for ``op.drop_constraint()`` to support PostgreSQL
``ExcludeConstraint`` objects, as well as other constraint-like objects
that may be present in third party dialects, by resolving the ``type_``
parameter to be ``None`` for this case. Autogenerate has also been
enhanced to exclude the ``type_`` parameter from rendering within this
command when ``type_`` is ``None``. Pull request courtesy David Hills.
.. change::
:tags: bug, commands
:tickets: 1299
Fixed issue where the ``revision_environment`` directive in ``alembic.ini``
was ignored by the ``alembic merge`` command, leading to issues when other
configurational elements depend upon ``env.py`` being invoked within the
command.
.. change::
:tags: bug, autogenerate
:tickets: 1302
Fixed issue where the ``ForeignKeyConstraint.match`` parameter would not be
rendered in autogenerated migrations. Pull request courtesy Asib
Kamalsada.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Change the default value of
:paramref:`.EnvironmentContext.configure.compare_type` to ``True``.
As Alembic's autogenerate for types was dramatically improved in
version 1.4 released in 2020, the type comparison feature is now much
more reliable so is now enabled by default.
.. change::
:tags: feature, autogenerate
:tickets: 1275
Added new feature to the "code formatter" function which allows standalone
executable tools to be run against code, without going through the Python
interpreter. Known as the ``exec`` runner, it complements the existing
``console_scripts`` runner by allowing non-Python tools such as ``ruff`` to
be used. Pull request courtesy Mihail Milushev.
.. seealso::
:ref:`post_write_hooks_config`
.. changelog::
:version: 1.11.3
:released: August 16, 2023
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 1270
Improved autogenerate compare of expression based indexes on PostgreSQL
to produce fewer wrong detections.
.. change::
:tags: bug, autogenerate
:tickets: 1291
Fixed issue with ``NULLS NOT DISTINCT`` detection in postgresql that
would keep detecting changes in the index or unique constraint.
.. change::
:tags: bug, commands
:tickets: 1273
Added ``encoding="locale"`` setting to the use of Python's
``ConfigParser.read()``, so that a warning is not generated when using the
recently added Python feature ``PYTHONWARNDEFAULTENCODING`` specified in
:pep:`597`. The encoding is passed as the ``"locale"`` string under Python
3.10 and greater, which indicates that the system-level locale should be
used, as was the case already here. Pull request courtesy Kevin Kirsche.
.. changelog::
:version: 1.11.2
:released: August 4, 2023
.. change::
:tags: usecase, typing
:tickets: 1253
Added typing to the default script mako templates.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Added support in autogenerate for ``NULLS NOT DISTINCT`` in
the PostgreSQL dialect.
.. change::
:tags: bug
:tickets: 1261
Fixed format string logged when running a post write hook.
Pull request courtesy of Mathieu Défosse.
.. change::
:tags: feature, operations
:tickets: 151
Added parameters ``if_exists`` and ``if_not_exists`` for index operations.
Pull request courtesy of Max Adrian.
.. changelog::
:version: 1.11.1
:released: May 17, 2023
.. change::
:tags: bug, autogenerate, regression
:tickets: 1243, 1245
As Alembic 1.11.0 is considered a major release (Alembic does not use
semver, nor does its parent project SQLAlchemy; this has been
:ref:`clarified <versioning_scheme>` in the documentation), change
:ticket:`1130` modified calling signatures for most operations to consider
all optional keyword parameters to be keyword-only arguments, to match what
was always documented and generated by autogenerate. However, two of these
changes were identified as possibly problematic without a more formal
deprecation warning being emitted which were the ``table_name`` parameter
to :meth:`.Operations.drop_index`, which was generated positionally by
autogenerate prior to version 0.6.3 released in 2014, and ``type_`` in
:meth:`.Operations.drop_constraint` and
:meth:`.BatchOperations.drop_constraint`, which was documented positionally
in one example in the batch documentation.
These two signatures have been
restored to allow those particular parameters to be passed positionally. A
future change will include formal deprecation paths (with warnings) for
these arguments where they will again become keyword-only in a future
"Significant Minor" release.
.. change::
:tags: bug, typing
:tickets: 1246
Fixed typing use of :class:`~sqlalchemy.schema.Column` and other
generic SQLAlchemy classes.
.. change::
:tags: bug, typing, regression
:tickets: 1244
Restored the output type of :meth:`.Config.get_section` to include
``Dict[str, str]`` as a potential return type, which had been changed to
immutable ``Mapping[str, str]``. When a section is returned and the default
is not used, a mutable dictionary is returned.
.. changelog::
:version: 1.11.0
:released: May 15, 2023
.. change::
:tags: bug, batch
:tickets: 1237
Added placeholder classes for :class:`~.sqla.Computed` and
:class:`~.sqla.Identity` when older 1.x SQLAlchemy versions are in use,
namely prior to SQLAlchemy 1.3.11 when the :class:`~.sqla.Computed`
construct was introduced. Previously these were set to None, however this
could cause issues with certain codepaths that were using ``isinstance()``
such as one within "batch mode".
.. change::
:tags: bug, batch
:tickets: 1221
Correctly pass previously ignored arguments ``insert_before`` and
``insert_after`` in ``batch_alter_column``
.. change::
:tags: change, py3k
:tickets: 1130
Argument signatures of Alembic operations now enforce keyword-only
arguments as passed as keyword and not positionally, such as
:paramref:`.Operations.create_table.schema`,
:paramref:`.Operations.add_column.type_`, etc.
.. change::
:tags: bug, postgresql
:tickets: 1230
Fix autogenerate issue with PostgreSQL :class:`.ExcludeConstraint`
that included sqlalchemy functions. The function text was previously
rendered as a plain string without surrounding with ``text()``.
.. change::
:tags: bug, mysql, regression
:tickets: 1240
Fixed regression caused by :ticket:`1166` released in version 1.10.0 which
caused MySQL unique constraints with multiple columns to not compare
correctly within autogenerate, due to different sorting rules on unique
constraints vs. indexes, which in MySQL are shared constructs.
.. change::
:tags: misc
:tickets: 1220
Update code snippets within docstrings to use ``black`` code formatting.
Pull request courtesy of James Addison.
.. change::
:tags: bug, typing
:tickets: 1093
Updated stub generator script to also add stubs method definitions for the
:class:`.Operations` class and the :class:`.BatchOperations` class obtained
from :meth:`.Operations.batch_alter_table`. As part of this change, the
class hierarchy of :class:`.Operations` and :class:`.BatchOperations` has
been rearranged on top of a common base class :class:`.AbstractOperations`
in order to type correctly, as :class:`.BatchOperations` uses different
method signatures for operations than :class:`.Operations`.
.. change::
:tags: bug, typing
Repaired the return signatures for :class:`.Operations` that mostly
return ``None``, and were erroneously referring to ``Optional[Table]``
in many cases.
.. change::
:tags: usecase, commands
:tickets: 1109
Added quiet option to the command line, using the ``-q/--quiet``
option. This flag will prevent alembic from logging anything
to stdout.
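For example, given ahead of the subcommand (a sketch assuming top-level
placement of the flag; the target revision is illustrative)::

    alembic -q upgrade head
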
.. change::
:tags: bug, autogenerate
:tickets: 1178
Modified the autogenerate implementation for comparing "server default"
values from user-defined metadata to not apply any quoting to the value
before comparing it to the server-reported default, except for within
dialect-specific routines as needed. This change will affect the format of
the server default as passed to the
:paramref:`.EnvironmentContext.configure.compare_server_default` hook, as
well as for third party dialects that implement a custom
``compare_server_default`` hook in their alembic impl, to be passed "as is"
and not including additional quoting. Custom implementations which rely
on this quoting should adjust their approach based on observed formatting.
.. change::
:tags: bug, api, autogenerate
:tickets: 1235
Fixed issue where :func:`.autogenerate.render_python_code` function did not
provide a default value for the ``user_module_prefix`` variable, leading to
``NoneType`` errors when autogenerate structures included user-defined
types. Added new parameter
:paramref:`.autogenerate.render_python_code.user_module_prefix` to allow
this to be set as well as to default to ``None``. Pull request courtesy
tangkikodo.
.. change::
:tags: usecase, asyncio
:tickets: 1231
Added :meth:`.AbstractOperations.run_async` to the operation module to
allow running async functions in the ``upgrade`` or ``downgrade`` migration
function when running alembic using an async dialect. This function will
receive as first argument an
:class:`~sqlalchemy.ext.asyncio.AsyncConnection` sharing the transaction
used in the migration context.
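A minimal sketch of the pattern (the helper name and statement are
hypothetical)::

    import sqlalchemy as sa

    from alembic import op

    async def _backfill(connection):
        # "connection" is an AsyncConnection sharing the migration transaction
        await connection.execute(sa.text("UPDATE account SET active = true"))

    def upgrade():
        op.run_async(_backfill)
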
.. changelog::
:version: 1.10.4
:released: April 24, 2023
.. change::
:tags: postgresql, autogenerate, feature
:tickets: 1213
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL sort option, such as ``ASC`` or ``NULLS FIRST``.
The sort options are correctly detected only when defined using the
sqlalchemy modifier functions, such as ``asc()`` or ``nulls_first()``,
or the equivalent methods.
Passing sort options inside the ``postgresql_ops`` dict is not supported.
.. change::
:tags: bug, operations
:tickets: 1215
Fixed issue where using a directive such as ``op.create_foreign_key()`` to
create a self-referential constraint on a single table where the same
column were present on both sides (e.g. within a composite foreign key)
would produce an error under SQLAlchemy 2.0 and a warning under SQLAlchemy
1.4 indicating that a duplicate column were being added to a table.
.. changelog::
:version: 1.10.3
:released: April 5, 2023
.. change::
:tags: bug, typing
:tickets: 1191, 1201
Fixed various typing issues observed with pyright, including issues
involving the combination of :class:`.Function` and
:meth:`.MigrationContext.begin_transaction`.
.. change::
:tags: bug, autogenerate
:tickets: 1212
Fixed error raised by alembic when running autogenerate after removing
a function based index.
.. changelog::
:version: 1.10.2
:released: March 8, 2023
.. change::
:tags: bug, ops
:tickets: 1196
Fixed regression where Alembic would not run with older SQLAlchemy 1.3
versions prior to 1.3.24 due to a missing symbol. Workarounds have been
applied for older 1.3 versions.
.. changelog::
:version: 1.10.1
:released: March 6, 2023
.. change::
:tags: bug, postgresql
:tickets: 1184
Fixed issue regarding PostgreSQL :class:`.ExcludeConstraint`, where
constraint elements which made use of :func:`.literal_column` could not be
rendered for autogenerate. Additionally, using SQLAlchemy 2.0.5 or greater,
:func:`.text()` constructs are also supported within PostgreSQL
:class:`.ExcludeConstraint` objects for autogenerate render. Pull request
courtesy Jan Katins.
.. change::
:tags: bug, batch, regression
:tickets: 1195
Fixed regression in 1.10.0 where :class:`.Constraint` objects were
suddenly required to have non-None name fields when using batch mode, which
was not previously a requirement.
.. changelog::
:version: 1.10.0
:released: March 5, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1166
Fixed issue in index detection where autogenerate change detection would
consider indexes with the same columns but with different order as equal,
while in general they are not equivalent in how a database will use them.
.. change::
:tags: feature, revisioning
:tickets: 760
Recursive traversal of revision files in a particular revision directory is
now supported, by indicating ``recursive_version_locations = true`` in
alembic.ini. Pull request courtesy ostr00000.
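A sketch of the corresponding ``alembic.ini`` setting::

    [alembic]
    # traverse version_locations directories recursively
    recursive_version_locations = true
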
.. change::
:tags: bug, autogenerate, sqlite
:tickets: 1165
Fixed issue where indexes on SQLite which include SQL expressions would not
compare correctly, generating false positives under autogenerate. These
indexes are now skipped, generating a warning, in the same way that
expression-based indexes on PostgreSQL are skipped and generate warnings
when SQLAlchemy 1.x installations are in use. Note that reflection of
SQLite expression-based indexes continues to not yet be supported under
SQLAlchemy 2.0, even though PostgreSQL expression-based indexes have now
been implemented.
.. change::
:tags: bug, mssql
:tickets: 1187
Properly escape constraint name on SQL Server when dropping
a column while specifying ``mssql_drop_default=True`` or
``mssql_drop_check=True`` or ``mssql_drop_foreign_key=True``.
.. change::
:tags: usecase, autogenerate, postgresql
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL expressions, when using SQLAlchemy 2.0; the previous warning
that such indexes were skipped are removed when the new functionality
is in use. When using SQLAlchemy versions prior to the 2.0 series,
the indexes continue to be skipped with a warning.
.. changelog::
:version: 1.9.4
:released: February 16, 2023
.. change::
:tags: bug, mssql
:tickets: 1177
Ongoing fixes for SQL Server server default comparisons under autogenerate,
adjusting for SQL Server's collapsing of whitespace between SQL function
arguments when reporting on a function-based server default, as well as its
arbitrary addition of parentheses within arguments; the approach has now
been made more aggressive by stripping both default strings of all
whitespace, parentheses, and quoting characters before comparing them.
.. change::
:tags: bug, postgresql
Fixed PostgreSQL server default comparison to handle SQL expressions
sent as ``text()`` constructs, such as ``text("substring('name', 1, 3)")``,
which previously would raise errors when attempting to run a server-based
comparison.
.. change::
:tags: bug, autogenerate
:tickets: 1180
Removed a mis-use of the
:paramref:`.EnvironmentContext.configure.render_item` callable where the
"server_default" renderer would be erroneously used within the server
default comparison process, which is working against SQL expressions, not
Python code.
.. change::
:tags: bug, commands
Fixed regression introduced in 1.7.0 where the "config" object passed to
the template context when running the :func:`.merge` command
programmatically failed to be correctly populated. Pull request courtesy
Brendan Gann.
.. changelog::
:version: 1.9.3
:released: February 7, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1167
Fixed issue where rendering of user-defined types that then went onto use
the ``.with_variant()`` method would fail to render, if using SQLAlchemy
2.0's version of variants.
.. changelog::
:version: 1.9.2
:released: January 14, 2023
.. change::
:tags: bug, typing
:tickets: 1146, 1147
Fixed typing definitions for :meth:`.EnvironmentContext.get_x_argument`.
Typing stubs are now generated for overloaded proxied methods such as
:meth:`.EnvironmentContext.get_x_argument`.
.. change::
:tags: bug, autogenerate
:tickets: 1152
Fixed regression caused by :ticket:`1145` where the string transformations
applied to server defaults caused expressions such as ``(getdate())`` to no
longer compare as equivalent on SQL Server, among other backends.
.. changelog::
:version: 1.9.1
:released: December 23, 2022
.. change::
:tags: bug, autogenerate
:tickets: 1145
Fixed issue where server default compare would not work for string defaults
that contained backslashes, due to mis-rendering of these values when
comparing their contents.
.. change::
:tags: bug, oracle
Implemented basic server default comparison for the Oracle backend;
previously, Oracle's formatting of reflected defaults prevented any
matches from occurring.
.. change::
:tags: bug, sqlite
Adjusted SQLite's compare server default implementation to better handle
defaults with or without parens around them, from both the reflected and
the local metadata side.
.. change::
:tags: bug, mssql
Adjusted SQL Server's compare server default implementation to better
handle defaults with or without parens around them, from both the reflected
and the local metadata side.
.. changelog::
:version: 1.9.0
:released: December 15, 2022
.. change::
:tags: feature, commands
:tickets: 724
Added new Alembic command ``alembic check``. This performs the widely
requested feature of running an "autogenerate" comparison between the
current database and the :class:`.MetaData` that's currently set up for
autogenerate, returning an error code if the two do not match, based on
current autogenerate settings. Pull request courtesy Nathan Louie.
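For example, a CI step might simply run the command and rely on its exit
status (a sketch; per the description above, the command exits nonzero
when differences are detected)::

    alembic check
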
.. seealso::
:ref:`alembic_check`
.. change::
:tags: bug, tests
Fixed issue in tox.ini file where changes in the tox 4.0 series to the
format of "passenv" caused tox to not function correctly, in particular
raising an error as of tox 4.0.6.
.. change::
:tags: bug, typing
:tickets: 1110
Fixed typing issue where :paramref:`.revision.process_revision_directives`
was not fully typed; additionally ensured all ``Callable`` and ``Dict``
arguments to :meth:`.EnvironmentContext.configure` include parameters in
the typing declaration.
Additionally updated the codebase for Mypy 0.990 compliance.
.. changelog::
:version: 1.8.1
:released: July 13, 2022
.. change::
:tags: bug, sqlite
:tickets: 1065
Fixed bug where the SQLite implementation of
:meth:`.Operations.rename_table` would render an explicit schema name for
both the old and new table name. While this is the standard ALTER syntax,
it is not accepted by SQLite, which doesn't support a rename across
schemas. In particular, the syntax issue would prevent batch mode from
working for SQLite databases that made use of attached databases (which are
treated as "schemas" in SQLAlchemy).
.. change::
:tags: bug, batch
:tickets: 1021
Added an error raise for the condition where
:meth:`.Operations.batch_alter_table` is used in ``--sql`` mode, where the
operation requires table reflection, as is the case when running against
SQLite without giving it a fixed ``Table`` object. Previously the operation
would fail with an internal error. To get a "move and copy" batch
operation as a SQL script without connecting to a database,
a ``Table`` object should be passed to the
:paramref:`.Operations.batch_alter_table.copy_from` parameter so that
reflection may be skipped.
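A sketch of the "move and copy" approach in ``--sql`` mode (the table and
columns are hypothetical)::

    import sqlalchemy as sa

    from alembic import op

    some_table = sa.Table(
        "some_table",
        sa.MetaData(),
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("bar", sa.String(50)),
    )

    with op.batch_alter_table("some_table", copy_from=some_table) as batch_op:
        batch_op.drop_column("bar")
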
.. changelog::
:version: 1.8.0
:released: May 31, 2022
.. change::
:tags: feature, typing
:tickets: 764
:pep:`484` typing annotations have been added to the ``env.py`` and
revision template files within migration templates. Pull request by Nikita
Sobolev.
.. change::
:tags: usecase, operations
:tickets: 1037
The ``op.drop_table()`` operation directive will now trigger the
``before_drop()`` and ``after_drop()`` DDL event hooks at the table level,
which is similar to how the ``before_create()`` and ``after_create()``
hooks are triggered by the ``op.create_table()`` directive. Note that as
``op.drop_table()`` accepts only a table name and optional schema name, the
``Table`` object received by the event will not have any information within
it other than the table name and schema name.
.. change::
:tags: installation, changed
:tickets: 1025
Alembic 1.8 now supports Python 3.7 and above.
.. change::
:tags: changed, environment
:tickets: 987
The "Pylons" environment template has been removed as of Alembic 1.8. This
template was based on the very old pre-Pyramid Pylons web framework which
has been long superseded by Pyramid.
.. change::
:tags: bug, revisioning
:tickets: 1026
Fixed issue where a downgrade using a relative revision would
fail in case of multiple branches with a single effectively
head due to interdependencies between revisions.
.. change::
:tags: usecase, commands
:tickets: 1027
Added new token ``epoch`` to the ``file_template`` option, which will
populate the integer epoch as determined by ``int(create_date.timestamp())``.
Pull request courtesy Caio Carvalho.
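A sketch of a ``file_template`` making use of the token (percent signs
are doubled for configparser escaping)::

    file_template = %%(epoch)s_%%(rev)s_%%(slug)s
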
.. change::
:tags: bug, batch
:tickets: 1034
Fixed issue in batch mode where CREATE INDEX would not use a new column
name in the case of a column rename.
.. changelog::
:version: 1.7.7
:released: March 14, 2022
.. change::
:tags: bug, operations
:tickets: 1004
Fixed issue where using :meth:`.Operations.create_table` in conjunction
with a :class:`.CheckConstraint` that referred to table-bound
:class:`.Column` objects rather than string expressions would be added to
the parent table potentially multiple times, resulting in an incorrect DDL
sequence. Pull request courtesy Nicolas CANIART.
.. change::
:tags: bug, environment
:tickets: 986
The ``logging.fileConfig()`` line in ``env.py`` templates, which is used
to setup Python logging for the migration run, is now conditional on
:attr:`.Config.config_file_name` not being ``None``. Otherwise, the line
is skipped as there is no default logging configuration present.
.. change::
:tags: bug, mssql
:tickets: 977
Fixed bug where an :meth:`.Operations.alter_column` operation would change
a "NOT NULL" column to "NULL" by emitting an ALTER COLUMN statement that
did not specify "NOT NULL". (In the absence of "NOT NULL" T-SQL was
implicitly assuming "NULL"). An :meth:`.Operations.alter_column` operation
that specifies :paramref:`.Operations.alter_column.type` should also
include either :paramref:`.Operations.alter_column.nullable` or
:paramref:`.Operations.alter_column.existing_nullable` to inform Alembic as
to whether the emitted DDL should include "NULL" or "NOT NULL"; a warning
is now emitted if this is missing under this scenario.
.. changelog::
:version: 1.7.6
:released: February 1, 2022
.. change::
:tags: bug, batch, regression
:tickets: 982
Fixed regression where usage of a ``with_variant()`` datatype in
conjunction with the ``existing_type`` option of ``op.alter_column()``
under batch mode would lead to an internal exception.
.. change::
:tags: usecase, commands
:tickets: 964
Add a new command ``alembic ensure_version``, which will ensure that the
Alembic version table is present in the target database, but does not
alter its contents. Pull request courtesy Kai Mueller.
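For example (per the description above, the command creates the table if
missing and otherwise leaves it unchanged)::

    alembic ensure_version
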
.. change::
:tags: bug, autogenerate
Implemented support for recognizing and rendering SQLAlchemy "variant"
types going forward into SQLAlchemy 2.0, where the architecture of
"variant" datatypes will be changing.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 968
Added a rule to the MySQL impl so that the translation between JSON /
LONGTEXT is accommodated by autogenerate, treating LONGTEXT from the server
as equivalent to an existing JSON in the model.
.. change::
:tags: mssql
Removed a warning raised by SQLAlchemy when dropping constraints
on MSSQL regarding statement caching.
.. changelog::
:version: 1.7.5
:released: November 11, 2021
.. change::
:tags: bug, tests
Adjustments to the test suite to accommodate for error message changes
occurring as of SQLAlchemy 1.4.27.
.. changelog::
:version: 1.7.4
:released: October 6, 2021
.. change::
:tags: bug, regression
:tickets: 934
Fixed a regression that prevented the use of post write hooks
on Python versions lower than 3.9.
.. change::
:tags: bug, environment
:tickets: 944
Fixed issue where the :meth:`.MigrationContext.autocommit_block` feature
would fail to function when using a SQLAlchemy engine using 2.0 future
mode.
.. changelog::
:version: 1.7.3
:released: September 17, 2021
.. change::
:tags: bug, mypy
:tickets: 914
Fixed type annotations for the "constraint_name" argument of operations
``create_primary_key()``, ``create_foreign_key()``. Pull request courtesy
TilmanK.
.. changelog::
:version: 1.7.2
:released: September 17, 2021
.. change::
:tags: bug, typing
:tickets: 900
Added missing attributes from context stubs.
.. change::
:tags: bug, mypy
:tickets: 897
Fixed an import in one of the .pyi files that was triggering an
assertion error in some versions of mypy.
.. change::
:tags: bug, regression, ops
:tickets: 920
Fixed issue where registration of custom ops was prone to failure due to
the registration process running ``exec()`` on generated code that as of
the 1.7 series includes pep-484 annotations, which in the case of end user
code would result in name resolution errors when the exec occurs. The logic
in question has been altered so that the annotations are rendered as
forward references so that the ``exec()`` can proceed.
.. changelog::
:version: 1.7.1
:released: August 30, 2021
.. change::
:tags: bug, installation
:tickets: 893
Corrected "universal wheel" directive in setup.cfg so that building a wheel
does not target Python 2. The PyPi files index for 1.7.0 was corrected
manually. Pull request courtesy layday.
.. change::
:tags: bug, pep484
:tickets: 895
Fixed issue in generated .pyi files where default values for ``Optional``
arguments were missing, thereby causing mypy to consider them as required.
.. change::
:tags: bug, regression, batch
:tickets: 896
Fixed regression in batch mode due to :ticket:`883` where the "auto" mode
of batch would fail to accommodate any additional migration directives
beyond encountering an ``add_column()`` directive, due to a mis-application
of the conditional logic that was added as part of this change, leading to
"recreate" mode not being used in cases where it is required for SQLite
such as for unique constraints.
.. changelog::
:version: 1.7.0
:released: August 30, 2021
.. change::
:tags: bug, operations
:tickets: 879
Fixed regression due to :ticket:`803` where the ``.info`` and ``.comment``
attributes of ``Table`` would be lost inside of the :class:`.DropTableOp`
class, which when "reversed" into a :class:`.CreateTableOp` would then have
lost these elements. Pull request courtesy Nicolas CANIART.
.. change::
:tags: feature, environment
:tickets: 842
Enhance ``version_locations`` parsing to handle paths containing spaces.
The new configuration option ``version_path_separator`` specifies the
character to use when splitting the ``version_locations`` string. The
default for new configurations is ``version_path_separator = os``,
which will use ``os.pathsep`` (e.g., ``;`` on Windows).
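A sketch of the new options in ``alembic.ini`` (the paths are
hypothetical; ``:`` is the POSIX value of ``os.pathsep``)::

    version_locations = %(here)s/model one/versions:%(here)s/model two/versions
    version_path_separator = os
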
.. change::
:tags: installation, changed
Alembic 1.7 now supports Python 3.6 and above; support for prior versions
including Python 2.7 has been dropped.
.. change::
:tags: bug, sqlite, batch
:tickets: 883
Batch "auto" mode will now select for "recreate" if the ``add_column()``
operation is used on SQLite, and the column itself meets the criteria for
SQLite where ADD COLUMN is not allowed, in this case a functional or
parenthesized SQL expression or a ``Computed`` (i.e. generated) column.
.. change::
:tags: changed, installation
:tickets: 674
Make the ``python-dateutil`` library an optional dependency.
This library is only required if the ``timezone`` option
is used in the Alembic configuration.
An extra require named ``tz`` is available with
``pip install alembic[tz]`` to install it.
.. change::
:tags: bug, commands
:tickets: 856
Re-implemented the ``python-editor`` dependency as a small internal
function to avoid the need for external dependencies.
.. change::
:tags: usecase, batch
:tickets: 884
Named CHECK constraints are now supported by batch mode, and will
automatically be part of the recreated table assuming they are named. They
also can be explicitly dropped using ``op.drop_constraint()``. For
"unnamed" CHECK constraints, these are still skipped as they cannot be
distinguished from the CHECK constraints that are generated by the
``Boolean`` and ``Enum`` datatypes.
Note that this change may require adjustments to migrations that drop or
rename columns which feature an associated named check constraint, such
that an additional ``op.drop_constraint()`` directive should be added for
that named constraint as there will no longer be an associated column
for it; for the ``Boolean`` and ``Enum`` datatypes, an ``existing_type``
keyword may be passed to ``BatchOperations.drop_constraint`` as well.
.. seealso::
:ref:`batch_schematype_constraints`
:ref:`batch_check_constraints`
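A sketch of dropping a column together with its named CHECK constraint in
batch mode (the table, column, and constraint names are hypothetical)::

    with op.batch_alter_table("account") as batch_op:
        batch_op.drop_constraint("ck_account_flag", type_="check")
        batch_op.drop_column("flag")
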
.. change::
:tags: changed, installation
:tickets: 885
The dependency on ``pkg_resources`` which is part of ``setuptools`` has
been removed, so there is no longer any runtime dependency on
``setuptools``. The functionality has been replaced with
``importlib.metadata`` and ``importlib.resources``, which are both part of
the Python standard library, or via the PyPI dependencies
``importlib-metadata`` for Python versions < 3.8 and
``importlib-resources`` for Python versions < 3.9
(while importlib.resources was added to Python in 3.7, it did not include
the "files" API until 3.9).
.. change::
:tags: feature, tests
:tickets: 855
Created a "test suite" similar to the one for SQLAlchemy, allowing
developers of third-party dialects to test their code against a set of
Alembic tests that have been specially selected to exercise
back-end database operations. At the time of release,
third-party dialects that have adopted the Alembic test suite to verify
compatibility include
`CockroachDB <https://pypi.org/project/sqlalchemy-cockroachdb/>`_ and
`SAP ASE (Sybase) <https://pypi.org/project/sqlalchemy-sybase/>`_.
.. change::
:tags: bug, postgresql
:tickets: 874
Fixed issue where usage of the PostgreSQL ``postgresql_include`` option
within a :meth:`.Operations.create_index` would raise a KeyError, as the
additional column(s) need to be added to the table object used by the
construct internally. The issue is equivalent to the SQL Server issue fixed
in :ticket:`513`. Pull request courtesy Steven Bronson.
.. change::
:tags: feature, general
pep-484 type annotations have been added throughout the library.
Additionally, stub .pyi files have been added for the "dynamically"
generated Alembic modules ``alembic.op`` and ``alembic.config``, which
include complete function signatures and docstrings, so that the functions
in these namespaces will have both IDE support (vscode, pycharm, etc) as
well as support for typing tools like Mypy. The files themselves are
statically generated from their source functions within the source tree.
.. changelog::
:version: 1.6.5
:released: May 27, 2021
.. change::
:tags: bug, autogenerate
:tickets: 849
Fixed issue where dialect-specific keyword arguments within the
:class:`.DropIndex` operation directive would not render in the
autogenerated Python code. As support was improved for adding dialect
specific arguments to directives as part of :ticket:`803`, in particular
arguments such as "postgresql_concurrently" which apply to the actual
create/drop of the index, support was needed for these to render even in a
drop index operation. Pull request courtesy Jet Zhou.
.. changelog::
:version: 1.6.4
:released: May 24, 2021
.. change::
:tags: bug, regression, op directives
:tickets: 848
Fixed regression caused by the just-released fix for :ticket:`844`, which
scaled back the filter for ``unique=True/index=True`` too far, such that
these directives no longer worked for the ``op.create_table()`` op.
.. changelog::
:version: 1.6.3
:released: May 21, 2021
.. change::
:tags: bug, regression, autogenerate
:tickets: 844
Fixed 1.6-series regression where ``UniqueConstraint`` and to a lesser
extent ``Index`` objects would be doubled up in the generated model when
the ``unique=True`` / ``index=True`` flags were used.
.. change::
:tags: bug, autogenerate
:tickets: 839
Fixed a bug where paths defined in post-write hook options
would be wrongly escaped in non-POSIX environments (e.g., Windows).
.. change::
:tags: bug, regression, versioning
:tickets: 843
Fixed regression where a revision file that contained its own down revision
as a dependency would cause an endless loop in the traversal logic.
.. changelog::
:version: 1.6.2
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 839
Fixed additional regression nearly the same as that of :ticket:`838` just
released in 1.6.1 but within a slightly different codepath, where "alembic
downgrade head" (or equivalent) would fail instead of iterating no
revisions.
.. changelog::
:version: 1.6.1
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 838
Fixed regression in new revisioning traversal where "alembic downgrade
base" would fail if the database itself were clean and unversioned;
additionally repairs the case where downgrade would fail if attempting
to downgrade to the current head that is already present.
.. changelog::
:version: 1.6.0
:released: May 3, 2021
.. change::
:tags: bug, autogenerate
:tickets: 803
Refactored the implementation of :class:`.MigrateOperation` constructs such
as :class:`.CreateIndexOp`, :class:`.CreateTableOp`, etc. so that they no
longer rely upon maintaining a persistent version of each schema object
internally; instead, the state variables of each operation object will be
used to produce the corresponding construct when the operation is invoked.
The rationale is so that environments which make use of
operation-manipulation schemes such as those discussed in
:ref:`autogen_rewriter` are better supported, allowing end-user code to
manipulate the public attributes of these objects which will then be
expressed in the final output, an example is
``some_create_index_op.kw["postgresql_concurrently"] = True``.
Previously, these objects when generated from autogenerate would typically
hold onto the original, reflected element internally without honoring the
other state variables of each construct, preventing the public API from
working.
.. change::
:tags: bug, environment
:tickets: 829
Fixed regression caused by the SQLAlchemy 1.4/2.0 compatibility switch
where calling ``.rollback()`` or ``.commit()`` explicitly within the
``context.begin_transaction()`` context manager would cause it to fail when
the block ended, as it did not expect that the transaction was manually
closed.
.. change::
:tags: bug, autogenerate
:tickets: 827
Improved the rendering of ``op.add_column()`` operations when adding
multiple columns to an existing table, so that the order of these
statements matches the order in which the columns were declared in the
application's table metadata. Previously the added columns were being
sorted alphabetically.
.. change::
:tags: feature, autogenerate
:tickets: 819
Fix the documentation regarding the default command-line argument position of
the revision script filename within the post-write hook arguments. Implement a
``REVISION_SCRIPT_FILENAME`` token, enabling the position to be changed. Switch
from ``str.split()`` to ``shlex.split()`` for more robust command-line argument
parsing.
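A sketch of repositioning the token within hook options, assuming a
``black`` hook is already configured under ``[post_write_hooks]``::

    black.options = -l 79 REVISION_SCRIPT_FILENAME
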
.. change::
:tags: feature
:tickets: 822
Implement a ``.cwd`` (current working directory) suboption for post-write hooks
(of type ``console_scripts``). This is useful for tools like pre-commit, which
rely on the working directory to locate the necessary config files. Add
pre-commit as an example to the documentation. Minor change: rename some variables
from ticket #819 to improve readability.
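A sketch of a pre-commit hook using the new suboption (the entrypoint and
options shown are illustrative)::

    [post_write_hooks]
    hooks = pre_commit
    pre_commit.type = console_scripts
    pre_commit.entrypoint = pre-commit
    pre_commit.options = run --files REVISION_SCRIPT_FILENAME
    pre_commit.cwd = %(here)s
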
.. change::
:tags: bug, versioning
:tickets: 765, 464
The algorithm used for calculating downgrades/upgrades/iterating
revisions has been rewritten, to resolve ongoing issues of branches
not being handled consistently particularly within downgrade operations,
as well as for overall clarity and maintainability. This change includes
that a deprecation warning is emitted if an ambiguous command such
as "downgrade -1" is given when multiple heads are present.
In particular, the change implements a long-requested use case of allowing
downgrades of a single branch to a branchpoint.
Huge thanks to Simon Bowly for their impressive efforts in successfully
tackling this very difficult problem.
.. change::
:tags: bug, batch
:tickets: 799
Added missing ``batch_op.create_table_comment()``,
``batch_op.drop_table_comment()`` directives to batch ops.
.. changelog::
:version: 1.5.8
:released: March 23, 2021
.. change::
:tags: bug, environment
:tickets: 816
Fixed regression caused by SQLAlchemy 1.4 where the "alembic current"
command would fail due to changes in the ``URL`` object.
.. changelog::
:version: 1.5.7
:released: March 11, 2021
.. change::
:tags: bug, autogenerate
:tickets: 813
Adjusted the recently added
:paramref:`.EnvironmentContext.configure.include_name` hook to accommodate
for additional object types such as "views" that don't have a parent table,
to support third party recipes and extensions. Pull request courtesy Oliver
Rice.
.. changelog::
:version: 1.5.6
:released: March 5, 2021
.. change::
:tags: bug, mssql, operations
:tickets: 812
Fixed bug where the "existing_type" parameter, which the MSSQL dialect
requires in order to change the nullability of a column in the absence of
also changing the column type, would cause an ALTER COLUMN operation to
incorrectly render a second ALTER statement without the nullability if a
new type were also present, as the MSSQL-specific contract did not
anticipate all three of "nullability", ``"type_"`` and "existing_type" being
sent at the same time.
.. change::
:tags: template
:tickets: 805
Add async template to Alembic to bootstrap environments that use
async DBAPI. Updated the cookbook to include a migration guide
on how to adapt an existing environment for use with DBAPI drivers.
.. changelog::
:version: 1.5.5
:released: February 20, 2021
.. change::
:tags: bug
Adjusted the use of SQLAlchemy's ".copy()" internals to use "._copy()"
for version 1.4.0, as this method is being renamed.
.. change::
:tags: bug, environment
:tickets: 797
Added new config file option ``prepend_sys_path``, which is a series of
paths that will be prepended to sys.path; the default value in newly
generated alembic.ini files is ".". This fixes a long-standing issue
where for some reason running the alembic command line would not place the
local "." path in sys.path, meaning an application locally present in "."
and importable through normal channels, e.g. python interpreter, pytest,
etc. would not be located by Alembic, even though the ``env.py`` file is
loaded relative to the current path when ``alembic.ini`` contains a
relative path. To enable for existing installations, add the option to the
alembic.ini file as follows::
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
.. seealso::
:ref:`installation` - updated documentation reflecting that local
installation of the project is not necessary if running the Alembic cli
from the local path.
.. changelog::
:version: 1.5.4
:released: February 3, 2021
.. change::
:tags: bug, versioning
:tickets: 789
Fixed bug in versioning model where a downgrade across a revision with a
dependency on another branch, yet an ancestor is also dependent on that
branch, would produce an erroneous state in the alembic_version table,
making upgrades impossible without manually repairing the table.
.. changelog::
:version: 1.5.3
:released: January 29, 2021
.. change::
:tags: bug, autogenerate
:tickets: 786
Changed the default ordering of "CREATE" and "DROP" statements indexes and
unique constraints within the autogenerate process, so that for example in
an upgrade() operation, a particular index or constraint that is to be
replaced such as for a casing convention change will not produce any naming
conflicts. For foreign key constraint objects, this is already how
constraints are ordered, and for table objects, users would normally want
to use :meth:`.Operations.rename_table` in any case.
.. change::
:tags: bug, autogenerate, mssql
:tickets: 787
Fixed assorted autogenerate issues with SQL Server:
* ignore default reflected identity on primary_key columns
* improve server default comparison
.. change::
:tags: bug, mysql, autogenerate
:tickets: 788
Fixed issue where autogenerate rendering of ``op.alter_column()`` would
fail to include MySQL ``existing_nullable=False`` if the column were part
of a primary key constraint within the table metadata.
.. changelog::
:version: 1.5.2
:released: January 20, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 784
Fixed regression where new "loop detection" feature introduced in
:ticket:`757` produced false positives for revision names that have
overlapping substrings between revision number and down revision and/or
dependency, if the downrev/dependency were not in sequence form.
.. change::
:tags: bug, environment
:tickets: 782
Fixed regression where Alembic would fail to create a transaction properly
if the :class:`sqlalchemy.engine.Connection` were a so-called "branched"
connection, that is, one where the ``.connect()`` method had been called to
create a "sub" connection.
.. changelog::
:version: 1.5.1
:released: January 19, 2021
.. change::
:tags: bug, installation, commands
:tickets: 780
Fixed installation issue where the "templates" directory was not being
installed, preventing commands like "list_templates" and "init" from
working.
.. changelog::
:version: 1.5.0
:released: January 18, 2021
.. change::
:tags: usecase, operations
:tickets: 730
Added support for rendering of "identity" elements on
:class:`.Column` objects, supported in SQLAlchemy via
the :class:`.Identity` element introduced in version 1.4.
Adding columns with identity is supported on PostgreSQL,
MSSQL and Oracle. Changing the identity options or removing
it is supported only on PostgreSQL and Oracle.
.. change::
:tags: changed, environment
To accommodate SQLAlchemy 1.4 and 2.0, the migration model now no longer
assumes that the SQLAlchemy Connection will autocommit an individual
operation. This essentially means that for databases that use
non-transactional DDL (pysqlite current driver behavior, MySQL), there is
still a BEGIN/COMMIT block that will surround each individual migration.
Databases that support transactional DDL should continue to have the
same flow, either per migration or per-entire run, depending on the
value of the :paramref:`.EnvironmentContext.configure.transaction_per_migration`
flag.
.. change::
:tags: changed, environment
A :class:`.CommandError` is raised if a ``sqlalchemy.engine.Engine`` is
passed to the :meth:`.MigrationContext.configure` method instead of a
``sqlalchemy.engine.Connection`` object. Previously, this would be a
warning only.
.. change::
:tags: bug, operations
:tickets: 753
Modified the ``add_column()`` operation such that the ``Column`` object in
use is shallow copied to a new instance if that ``Column`` is already
attached to a ``table()`` or ``Table``. This accommodates for the change
made in SQLAlchemy issue #5618 which prohibits a ``Column`` from being
associated with multiple ``table()`` objects. This resumes support for
using a ``Column`` inside of an Alembic operation that already refers to a
parent ``table()`` or ``Table`` as well as allows operation objects just
autogenerated to work.
.. change::
:tags: feature, autogenerate
:tickets: 650
Added new hook :paramref:`.EnvironmentContext.configure.include_name`,
which complements the
:paramref:`.EnvironmentContext.configure.include_object` hook by providing
a means of preventing objects of a certain name from being autogenerated
**before** the SQLAlchemy reflection process takes place, and notably
includes explicit support for passing each schema name when
:paramref:`.EnvironmentContext.configure.include_schemas` is set to True.
This is most important especially for environments that make use of
:paramref:`.EnvironmentContext.configure.include_schemas` where schemas are
actually databases (e.g. MySQL) in order to prevent reflection sweeps of
the entire server.
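A sketch of the hook in ``env.py`` (the schema names are hypothetical)::

    def include_name(name, type_, parent_names):
        if type_ == "schema":
            return name in ["public", "sales"]
        else:
            return True

    context.configure(
        # ...
        include_schemas=True,
        include_name=include_name,
    )
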
.. seealso::
:ref:`autogenerate_include_hooks` - new documentation section
.. change::
:tags: removed, autogenerate
The long deprecated
:paramref:`.EnvironmentContext.configure.include_symbol` hook is removed.
The :paramref:`.EnvironmentContext.configure.include_object`
and :paramref:`.EnvironmentContext.configure.include_name`
hooks both achieve the goals of this hook.
.. change::
:tags: bug, autogenerate
:tickets: 721
Added rendering for the ``Table.prefixes`` element to autogenerate so that
the rendered Python code includes these directives. Pull request courtesy
Rodrigo Ce Moretto.
.. change::
:tags: bug, batch
:tickets: 761
Added missing "create comment" feature for columns that are altered in
batch migrations.
.. change::
:tags: changed
:tickets: 748
Alembic 1.5.0 now supports **Python 2.7 and Python 3.6 and above**, as well
as **SQLAlchemy 1.3.0 and above**. Support is removed for Python 3
versions prior to 3.6 and SQLAlchemy versions prior to the 1.3 series.
.. change::
:tags: bug, batch
:tickets: 773
Made an adjustment to the PostgreSQL dialect to allow it to work more
effectively in batch mode, where a datatype like Boolean or non-native Enum
that may have embedded rules to generate CHECK constraints will be more
correctly handled in that these constraints usually will not have been
generated on the PostgreSQL backend; previously it would inadvertently
assume they existed unconditionally in a special PG-only "drop constraint"
step.
.. change::
:tags: feature, versioning
:tickets: 757
The revision tree is now checked for cycles and loops between revision
files when the revision environment is loaded up. Scenarios such as a
revision pointing to itself, or a revision that can reach itself via a
loop, are handled and will raise the :class:`.CycleDetected` exception when
the environment is loaded (expressed from the Alembic commandline as a
failure message and nonzero return code). Previously, these situations were
silently ignored up front, and the behavior of revision traversal would
either be silently incorrect, or would produce errors such as
:class:`.RangeNotAncestorError`. Pull request courtesy Koichiro Den.
.. change::
:tags: usecase, commands
Add ``__main__.py`` file to alembic package to support invocation
with ``python -m alembic``.
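For example::

    python -m alembic current
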
.. change::
:tags: removed, commands
Removed deprecated ``--head_only`` option to the ``alembic current``
command.
.. change::
:tags: removed, operations
Removed legacy parameter names from operations, these have been emitting
warnings since version 0.8. In the case that legacy version files have not
yet been updated, these can be modified directly in order to maintain
compatibility:
* :meth:`.Operations.drop_constraint` - "type" (use ``"type_"``) and "name"
(use "constraint_name")
* :meth:`.Operations.create_primary_key` - "cols" (use "columns") and
"name" (use "constraint_name")
* :meth:`.Operations.create_unique_constraint` - "name" (use
"constraint_name"), "source" (use "table_name") and "local_cols" (use
"columns")
* :meth:`.Operations.batch_create_unique_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_foreign_key` - "name" (use "constraint_name"),
"source" (use "source_table"), "referent" (use "referent_table")
* :meth:`.Operations.batch_create_foreign_key` - "name" (use
"constraint_name"), "referent" (use "referent_table")
* :meth:`.Operations.create_check_constraint` - "name" (use
"constraint_name"), "source" (use "table_name")
* :meth:`.Operations.batch_create_check_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_index` - "name" (use "index_name")
* :meth:`.Operations.drop_index` - "name" (use "index_name"), "tablename"
(use "table_name")
* :meth:`.Operations.batch_drop_index` - "name" (use "index_name")
* :meth:`.Operations.create_table` - "name" (use "table_name")
* :meth:`.Operations.drop_table` - "name" (use "table_name")
* :meth:`.Operations.alter_column` - "name" (use "new_column_name")
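For example, a legacy call site would be updated as follows (the
constraint and table names are illustrative)::

    # legacy spelling, no longer accepted:
    # op.drop_constraint("fk_user_address", "user", type="foreignkey")

    # current spelling:
    op.drop_constraint("fk_user_address", "user", type_="foreignkey")
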
.. changelog::
:version: 1.4.3
:released: September 11, 2020
.. change::
:tags: bug, sqlite, batch
:tickets: 711
Added support to drop named CHECK constraints that are specified as part of
a column, rather than table wide. Previously, only constraints associated
with the table were considered.
.. change::
:tags: bug, ops, mysql
:tickets: 736
Fixed issue where the MySQL dialect would not correctly render the server
default of a column in an alter operation, if the operation were
programmatically generated from an autogenerate pass as it would not
accommodate for the full structure of the DefaultClause construct.
.. change::
:tags: bug, sqlite, batch
:tickets: 697
Fixed issue where the CAST applied to a JSON column when copying a SQLite
table during batch mode would cause the data to be lost, as SQLite's CAST
with JSON appears to convert the data to the value "0". The CAST is now
skipped in a dialect-specific manner, including for JSON columns on SQLite.
Pull request courtesy Sebastián Ramírez.
.. change::
:tags: bug, commands
:tickets: 694
The ``alembic current`` command no longer creates an ``alembic_version``
table in the database if one does not exist already, returning no version
as the current version. This allows checking for migrations in parallel
without introducing race conditions. Pull request courtesy Nikolay
Edigaryev.
.. change::
:tags: bug, batch
Fixed issue where columns in a foreign-key referenced table would be
replaced with null-type columns during a batch operation; while this did
not generally have any side effects, it could theoretically impact a batch
operation that also targets that table directly and also would interfere
with future changes to the ``.append_column()`` method to disallow implicit
replacement of columns.
.. change::
:tags: bug, mssql
:tickets: 716
Fixed issue where the ``mssql_drop_foreign_key=True`` flag on
``op.drop_column`` would lead to an incorrect syntax error due to a typo in
the SQL emitted; the same typo was present in the test as well, so it was
not detected. Pull request courtesy Oleg Shigorin.
.. changelog::
:version: 1.4.2
:released: March 19, 2020
.. change::
:tags: usecase, autogenerate
:tickets: 669
Adjusted autogen comparison to accommodate for backends that support
computed column reflection, dependent on SQLAlchemy version 1.3.16 or
higher. This emits a warning if the SQL expression inside of a
:class:`.Computed` value changes between the metadata and the database, as
these expressions can't be changed without dropping and recreating the
column.
.. change::
:tags: bug, tests
:tickets: 668
Fixed an issue that prevented the test suite from running with the
recently released py.test 5.4.0.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 671
Fixed more false-positive failures produced by the new "compare type" logic
first added in :ticket:`605`, particularly impacting MySQL string types
regarding flags such as "charset" and "collation".
.. change::
:tags: bug, op directives, oracle
:tickets: 670
Fixed issue in Oracle backend where a table RENAME with a schema-qualified
name would include the schema in the "to" portion, which is rejected by
Oracle.
.. changelog::
:version: 1.4.1
:released: March 1, 2020
.. change::
:tags: bug, autogenerate
:tickets: 661
Fixed regression caused by the new "type comparison" logic introduced in
1.4 as part of :ticket:`605` where comparisons of MySQL "unsigned integer"
datatypes would produce false positives, as the regular expression logic
was not correctly parsing the "unsigned" token when MySQL's default display
width would be returned by the database. Pull request courtesy Paul
Becotte.
.. change::
:tags: bug, environment
:tickets: 663
Error message for "path doesn't exist" when loading up script environment
now displays the absolute path. Pull request courtesy Rowan Hart.
.. change::
:tags: bug, autogenerate
:tickets: 654
Fixed regression in 1.4.0 due to :ticket:`647` where unique constraint
comparison with mixed case constraint names while not using a naming
convention would produce false positives during autogenerate.
.. change::
:tags: bug, environment
The check for matched rowcount when the alembic_version table is updated or
deleted from is now conditional based on whether or not the dialect
supports the concept of "rowcount" for UPDATE or DELETE rows matched. Some
third party dialects do not support this concept. Pull request courtesy Ke
Zhu.
.. change::
:tags: bug, operations
:tickets: 655
Fixed long-standing bug where an inline column CHECK constraint would not
be rendered within an "ADD COLUMN" operation. The DDL compiler is now
consulted for inline constraints within the :meth:`.Operations.add_column`
method as is done for regular CREATE TABLE operations.
.. changelog::
:version: 1.4.0
:released: February 4, 2020
.. change::
:tags: change
The internal inspection routines no longer use SQLAlchemy's
``Inspector.from_engine()`` method, which is expected to be deprecated in
1.4. The ``inspect()`` function is now used.
.. change::
:tags: bug, autogenerate
:tickets: 647
Adjusted the unique constraint comparison logic in a similar manner as that
of :ticket:`421` did for indexes in order to take into account SQLAlchemy's
own truncation of long constraint names when a naming convention is in use.
Without this step, a name that is truncated by SQLAlchemy based on a unique
constraint naming convention or hardcoded name will not compare properly.
.. change::
:tags: feature, batch
:tickets: 640
Added new parameters :paramref:`.BatchOperations.add_column.insert_before`,
:paramref:`.BatchOperations.add_column.insert_after` which provide for
establishing the specific position in which a new column should be placed.
Also added :paramref:`.Operations.batch_alter_table.partial_reordering`
which allows the complete set of columns to be reordered when the new table
is created. Both operations apply only to when batch mode is recreating
the whole table using ``recreate="always"``. Thanks to Marcin Szymanski
for assistance with the implementation.
.. change::
:tags: usecase, environment
:tickets: 648
Moved the use of the ``__file__`` attribute at the base of the Alembic
package into the one place that it is specifically needed, which is when
the config attempts to locate the template directory. This helps to allow
Alembic to be fully importable in environments that are using Python
memory-only import schemes. Pull request courtesy layday.
.. change::
:tags: bug, autogenerate
:tickets: 605
A major rework of the "type comparison" logic is in place which changes the
entire approach by which column datatypes are compared. Types are now
compared based on the DDL string generated by the metadata type vs. the
datatype reflected from the database. This means we compare types based on
what would actually render and additionally if elements of the types change
like string length, those changes are detected as well. False positives
like those generated between SQLAlchemy Boolean and MySQL TINYINT should
also be resolved. Thanks very much to Paul Becotte for lots of hard work
and patience on this one.
.. seealso::
:ref:`autogenerate_detects` - updated comments on type comparison
.. changelog::
:version: 1.3.3
:released: January 22, 2020
.. change::
:tags: bug, postgresql
:tickets: 637
Fixed issue where COMMENT directives for PostgreSQL failed to correctly
include an explicit schema name, as well as correct quoting rules for
schema, table, and column names. Pull request courtesy Matthew Sills.
.. change::
:tags: usecase, operations
:tickets: 624
Added support for rendering of "computed" elements on :class:`.Column`
objects, supported in SQLAlchemy via the new :class:`.Computed` element
introduced in version 1.3.11. Pull request courtesy Federico Caselli.
Note that there is currently no support for ALTER COLUMN to add, remove, or
modify the "GENERATED ALWAYS AS" element from a column; at least for
PostgreSQL, it does not seem to be supported by the database. Additionally,
SQLAlchemy does not currently reliably reflect the "GENERATED ALWAYS AS"
phrase from an existing column, so there is also no autogenerate support
for addition or removal of the :class:`.Computed` element to or from an
existing column, there is only support for adding new columns that include
the :class:`.Computed` element. In the case that the :class:`.Computed`
element is removed from the :class:`.Column` object in the table metadata,
PostgreSQL and Oracle currently reflect the "GENERATED ALWAYS AS"
expression as the "server default" which will produce an op that tries to
drop the element as a default.
.. changelog::
:version: 1.3.2
:released: December 16, 2019
.. change::
:tags: bug, api, autogenerate
:tickets: 635
Fixed regression introduced by :ticket:`579` where server default rendering
functions began to require a dialect implementation, however the
:func:`.render_python_code` convenience function did not include one, thus
causing the function to fail when used in a server default context. The
function now accepts a migration context argument and also creates one
against the default dialect if one is not provided.
.. changelog::
:version: 1.3.1
:released: November 13, 2019
.. change::
:tags: bug, mssql
:tickets: 621
Fixed bug in MSSQL dialect where the drop constraint execution steps used
to remove server default or implicit foreign key constraint failed to take
into account the schema name of the target table.
.. changelog::
:version: 1.3.0
:released: October 31, 2019
.. change::
:tags: feature, command
:tickets: 608
Added support for ALEMBIC_CONFIG environment variable,
refers to the location of the alembic configuration script
in lieu of using the -c command line option.
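For example (the path is hypothetical)::

    ALEMBIC_CONFIG=/opt/app/alembic.ini alembic upgrade head
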
.. change::
:tags: bug, autogenerate
:tickets: 131
Fixed bug in new Variant autogenerate where the order of the arguments to
Variant was mistakenly reversed.
.. change::
:tags: change, compatibility
Some internal modifications have been made to how the names of indexes and
unique constraints work to make use of new functions added in SQLAlchemy
1.4, so that SQLAlchemy has more flexibility over how naming conventions
may be applied to these objects.
.. changelog::
:version: 1.2.1
:released: September 24, 2019
.. change::
:tags: bug, command
:tickets: 601
Reverted the name change of the "revisions" argument to
:func:`.command.stamp` to "revision" as apparently applications are
calling upon this argument as a keyword name. Pull request courtesy
Thomas Bechtold. Special translations are also added to the command
line interface so that it is still known as "revisions" in the CLI.
.. change::
:tags: bug, tests
:tickets: 592
Removed the "test requirements" from "setup.py test", as this command now
only emits a removal error in any case and these requirements are unused.
.. changelog::
:version: 1.2.0
:released: September 20, 2019
.. change::
:tags: feature, command
:tickets: 473
Added new ``--purge`` flag to the ``alembic stamp`` command, which will
unconditionally erase the version table before stamping anything. This is
useful for development where non-existent version identifiers might be left
within the table. Additionally, ``alembic.stamp`` now supports a list of
revision identifiers, which are intended to allow setting up multiple heads
at once. Overall handling of version identifiers within the
``alembic.stamp`` command has been improved with many new tests and
use cases added.
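For example, erasing the version table and stamping two heads at once
(the revision identifiers are hypothetical)::

    alembic stamp --purge a1b2c3d4e5f6 f6e5d4c3b2a1
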
.. change::
:tags: bug, autogenerate
:tickets: 550
Improved the Python rendering of a series of migration operations such that
a single "pass" is rendered for a :class:`.UpgradeOps` or
:class:`.DowngradeOps` based on whether no lines of Python code actually
rendered under the operation, rather than whether or not sub-directives
exist. Removed extra "pass" lines that would generate from the
:class:`.ModifyTableOps` directive so that these aren't duplicated under
operation rewriting scenarios.
.. change::
:tags: feature, runtime
:tickets: 123
Added new feature :meth:`.MigrationContext.autocommit_block`, a special
directive which will provide for a non-transactional block inside of a
migration script. The feature requires that the database driver
(i.e. the DBAPI) supports the AUTOCOMMIT isolation mode. The directive
also necessarily needs to COMMIT the existing transaction in progress
in order to enter autocommit mode.
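A minimal sketch (the DDL shown is a hypothetical statement that requires
autocommit on some backends)::

    from alembic import op

    def upgrade():
        with op.get_context().autocommit_block():
            op.execute("ALTER TYPE mood ADD VALUE 'soso'")
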
.. seealso::
:meth:`.MigrationContext.autocommit_block`
.. change::
:tags: change, py3k
Python 3.4 support is dropped, as the upstream tooling (pip, mysqlclient)
etc are already dropping support for Python 3.4, which itself is no longer
maintained.
.. change::
:tags: usecase, autogenerate
:tickets: 518
Added autogenerate support for :class:`.Column` objects that have
dialect-specific ``**kwargs``, support first added in SQLAlchemy 1.3.
This includes SQLite "on conflict" as well as options used by some
third party dialects.
.. change::
:tags: usecase, autogenerate
:tickets: 131
Added rendering for SQLAlchemy ``Variant`` datatypes, which render as the
base type plus one or more ``.with_variant()`` method calls.
.. change::
:tags: usecase, commands
:tickets: 534
Made the command interface revision lookup behavior more strict in that an
Alembic revision number is only resolved based on partial match rules if
it has at least four characters, to prevent simple typographical issues
from inadvertently running migrations.
.. change::
:tags: feature, commands
:tickets: 307
Added "post write hooks" to revision generation. These allow custom logic
to run after a revision Python script is generated, typically for the
purpose of running code formatters such as "Black" or "autopep8", but may
be used for any arbitrary post-render hook as well, including custom Python
functions or scripts. The hooks are enabled by providing a
``[post_write_hooks]`` section in the alembic.ini file. A single hook
is provided which runs an arbitrary Python executable on the newly
generated revision script, which can be configured to run code formatters
such as Black; full examples are included in the documentation.
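A sketch of enabling the provided hook to run ``black`` (the options are
illustrative)::

    [post_write_hooks]
    hooks = black
    black.type = console_scripts
    black.entrypoint = black
    black.options = -l 79
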
.. seealso::
:ref:`post_write_hooks`
.. change::
:tags: feature, environment
:tickets: 463
Added new flag ``--package`` to ``alembic init``. For environments where
the Alembic migration files and such are within the package tree and
importable as modules, this flag can be specified which will add the
additional ``__init__.py`` files in the version location and the
environment location.
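For example (the directory name is hypothetical)::

    alembic init --package myapp/migrations
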
.. change::
:tags: bug, autogenerate
:tickets: 549
Fixed bug where rendering of comment text for table-level comments within
:meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` was not properly quote-escaped
within rendered Python code for autogenerate.
.. change::
:tags: bug, autogenerate
:tickets: 505
Modified the logic of the :class:`.Rewriter` object such that it keeps a
memoization of which directives it has processed, so that it can ensure it
processes a particular directive only once, and additionally fixed
:class:`.Rewriter` so that it functions correctly for multiple-pass
autogenerate schemes, such as the one illustrated in the "multidb"
template. By tracking which directives have been processed, a
multiple-pass scheme which calls upon the :class:`.Rewriter` multiple times
for the same structure as elements are added can work without running
duplicate operations on the same elements more than once.
.. changelog::
:version: 1.1.0
:released: August 26, 2019
.. change::
:tags: change
Alembic 1.1 bumps the minimum version of SQLAlchemy to 1.1. As was the
case before, Python requirements remain at Python 2.7, or Python 3.4 and
above within the 3.x series.
.. change::
:tags: change, internals
The test suite for Alembic now makes use of SQLAlchemy's testing framework
directly. Previously, Alembic had its own version of this framework that
was mostly copied from that of SQLAlchemy to enable testing with older
SQLAlchemy versions. The majority of this code is now removed so that both
projects can leverage improvements from a common testing framework.
.. change::
:tags: bug, commands
:tickets: 562
Fixed bug where the double-percent logic applied to some dialects such as
psycopg2 would be rendered in ``--sql`` mode, by allowing dialect options
to be passed through to the dialect used to generate SQL and then providing
``paramstyle="named"`` so that percent signs need not be doubled. For
users having this issue, existing env.py scripts need to add
``dialect_opts={"paramstyle": "named"}`` to their offline
context.configure(). See the ``alembic/templates/generic/env.py`` template
for an example.
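A sketch of what the offline section of such an ``env.py`` might look
like, modeled on the generic template; ``config``, ``context`` and
``target_metadata`` are assumed to be set up as in the standard script::

    def run_migrations_offline():
        # "--sql" mode: render DDL to a script rather than a database
        url = config.get_main_option("sqlalchemy.url")
        context.configure(
            url=url,
            target_metadata=target_metadata,
            literal_binds=True,
            dialect_opts={"paramstyle": "named"},
        )

        with context.begin_transaction():
            context.run_migrations()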
.. change::
:tags: bug, py3k
Fixed use of the deprecated "imp" module, which is used to detect pep3147
availability as well as to locate .pyc files, which started emitting
deprecation warnings during the test suite. The warnings were not being
emitted earlier during the test suite; the change is possibly due to
changes in py.test itself, but this is not clear. The check for pep3147 is
set to True for any Python version 3.5 or greater now and importlib is used
when available. Note that some dependencies such as distutils may still be
emitting this warning. Tests are adjusted to accommodate for dependencies
that emit the warning as well.
.. change::
:tags: bug, mysql
:tickets: 594
Fixed issue where emitting a change of column name for MySQL did not
preserve the column comment, even if it were specified as existing_comment.
.. change::
:tags: bug, setup
:tickets: 592
Removed the "python setup.py test" feature in favor of a straight run of
"tox". Per Pypa / pytest developers, "setup.py" commands are in general
headed towards deprecation in favor of tox. The tox.ini script has been
updated such that running "tox" with no arguments will perform a single run
of the test suite against the default installed Python interpreter.
.. seealso::
https://github.com/pypa/setuptools/issues/1684
https://github.com/pytest-dev/pytest/issues/5534
.. change::
:tags: usecase, commands
:tickets: 571
The "alembic init" command will now proceed if the target directory exists
as long as it's still empty. Previously, it would not proceed if the
directory existed. The new behavior is modeled from what git does, to
accommodate for container or other deployments where an Alembic target
directory may need to be already mounted instead of being created with
alembic init. Pull request courtesy Aviskar KC.
.. changelog::
:version: 1.0.11
:released: June 25, 2019
.. change::
:tags: bug, sqlite, autogenerate, batch
:tickets: 579
SQLite server default reflection will ensure parentheses surround a
column default expression that is detected as being a non-constant
expression, such as a ``datetime()`` default, to accommodate for the
requirement that SQL expressions have to be parenthesized when being sent
as DDL. Parentheses are not added to constant expressions to allow for
maximum cross-compatibility with other dialects and existing test suites
(such as Alembic's), which necessarily entails scanning the expression to
rule out constant numeric and string values. The logic is added to the
two "reflection->DDL round trip" paths which are currently autogenerate and
batch migration. Within autogenerate, the logic is on the rendering side,
whereas in batch the logic is installed as a column reflection hook.
.. change::
:tags: bug, sqlite, autogenerate
:tickets: 579
Improved SQLite server default comparison to accommodate for a ``text()``
construct that added parenthesis directly vs. a construct that relied
upon the SQLAlchemy SQLite dialect to render the parenthesis, as well
as improved support for various forms of constant expressions such as
values that are quoted vs. non-quoted.
.. change::
:tags: bug, autogenerate
Fixed bug where the "literal_binds" flag was not being set when
autogenerate would create a server default value, meaning server default
comparisons would fail for functions that contained literal values.
.. change::
:tags: bug, mysql
:tickets: 554
Added support for MySQL "DROP CHECK", which is added as of MySQL 8.0.16,
separate from MariaDB's "DROP CONSTRAINT" for CHECK constraints. The MySQL
Alembic implementation now checks for "MariaDB" in server_version_info to
decide which one to use.
.. change::
:tags: bug, mysql, operations
:tickets: 564
Fixed issue where MySQL databases need to use CHANGE COLUMN when altering a
server default of CURRENT_TIMESTAMP, NOW() and probably other functions
that are only usable with DATETIME/TIMESTAMP columns. While MariaDB
supports both CHANGE and ALTER COLUMN in this case, MySQL databases only
support CHANGE. So the new logic is that if the server default change is
against a DateTime-oriented column, the CHANGE format is used
unconditionally, as in the vast majority of cases the server default is to
be CURRENT_TIMESTAMP which may also be potentially bundled with an "ON
UPDATE CURRENT_TIMESTAMP" directive, which SQLAlchemy does not currently
support as a distinct field. The fix additionally improves the server
default comparison logic when the "ON UPDATE" clause is present and
there are parentheses to be adjusted for, as is the case on some MariaDB
versions.
.. change::
:tags: bug, environment
Warnings emitted by Alembic now include a default stack level of 2, and in
some cases it's set to 3, in order to help warnings indicate more closely
where they are originating from. Pull request courtesy Ash Berlin-Taylor.
.. change::
:tags: bug, py3k
:tickets: 563
Replaced the Python compatibility routines for ``getargspec()`` with a fully
vendored version based on ``getfullargspec()`` from Python 3.3.
Originally, Python was emitting deprecation warnings for this function in
Python 3.8 alphas. While this change was reverted, it was observed that
Python 3 implementations for ``getfullargspec()`` are an order of magnitude
slower as of the 3.4 series where it was rewritten against ``Signature``.
While Python plans to improve upon this situation, SQLAlchemy projects for
now are using a simple replacement to avoid any future issues.
.. changelog::
:version: 1.0.10
:released: April 28, 2019
.. change::
:tags: bug, commands
:tickets: 552
Fixed bug introduced in release 0.9.0 where the helptext for commands
inadvertently got expanded to include function docstrings from the
command.py module. The logic has been adjusted to only refer to the first
line(s) preceding the first line break within each docstring, as was the
original intent.
.. change::
:tags: bug, operations, mysql
:tickets: 551
Added an assertion in :meth:`.RevisionMap.get_revisions` and other methods
which ensures revision numbers are passed as strings or collections of
strings. Driver issues particularly on MySQL may inadvertently be passing
bytes here which leads to failures later on.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 553
Fixed bug when using the
:paramref:`.EnvironmentContext.configure.compare_server_default` flag set
to ``True`` where a server default that is introduced in the table metadata
on an ``Integer`` column, where there is no existing server default in the
database, would raise a ``TypeError``.
.. changelog::
:version: 1.0.9
:released: April 15, 2019
.. change::
:tags: bug, operations
:tickets: 548
Simplified the internal scheme used to generate the ``alembic.op`` namespace
to no longer attempt to generate full method signatures (e.g. rather than
generic ``*args, **kw``) as this was not working in most cases anyway, while
in rare circumstances it would in fact sporadically have access to the real
argument names and then fail when generating the function due to missing
symbols in the argument signature.
.. changelog::
:version: 1.0.8
:released: March 4, 2019
.. change::
:tags: bug, operations
:tickets: 528
Removed use of deprecated ``force`` parameter for SQLAlchemy quoting
functions as this parameter will be removed in a future release.
Pull request courtesy Parth Shandilya (ParthS007).
.. change::
:tags: bug, autogenerate, postgresql, py3k
:tickets: 541
Fixed issue where server default comparison on the PostgreSQL dialect would
fail for a blank string on Python 3.7 only, due to a change in regular
expression behavior in Python 3.7.
.. changelog::
:version: 1.0.7
:released: January 25, 2019
.. change::
:tags: bug, autogenerate
:tickets: 529
Fixed issue in new comment support where autogenerated Python code
for comments wasn't using ``repr()`` thus causing issues with
quoting. Pull request courtesy Damien Garaud.
.. changelog::
:version: 1.0.6
:released: January 13, 2019
.. change::
:tags: feature, operations
:tickets: 422
Added Table and Column level comments for supported backends.
New methods :meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` are added. New arguments
:paramref:`.Operations.alter_column.comment` and
:paramref:`.Operations.alter_column.existing_comment` are added to
:meth:`.Operations.alter_column`. Autogenerate support is also added
to ensure comment add/drops from tables and columns are generated as well
as that :meth:`.Operations.create_table`, :meth:`.Operations.add_column`
both include the comment field from the source :class:`.Table`
or :class:`.Column` object.
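As a sketch of how these directives might appear in a migration (the
``account`` table and its columns are hypothetical)::

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # attach a comment to the table as a whole
        op.create_table_comment("account", "holds user accounts")

        # attach a comment to an individual column
        op.alter_column(
            "account",
            "name",
            existing_type=sa.String(50),
            comment="the account holder's display name",
        )

    def downgrade():
        op.drop_table_comment("account")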
.. changelog::
:version: 1.0.5
:released: November 27, 2018
.. change::
:tags: bug, py3k
:tickets: 507
Resolved remaining Python 3 deprecation warnings, covering
the use of inspect.formatargspec() with a vendored version
copied from the Python standard library, importing
collections.abc above Python 3.3 when testing against abstract
base classes, fixing one occurrence of log.warn(), as well as repairing
a few invalid escape sequences.
.. changelog::
:version: 1.0.4
:released: November 27, 2018
.. change::
:tags: change
Code hosting has been moved to GitHub, at
https://github.com/sqlalchemy/alembic. Additionally, the
main Alembic website documentation URL is now
https://alembic.sqlalchemy.org.
.. changelog::
:version: 1.0.3
:released: November 14, 2018
.. change::
:tags: bug, mssql
:tickets: 516
Fixed regression caused by :ticket:`513`, where the logic to consume
``mssql_include`` was not correctly interpreting the case where the flag
was not present, breaking the ``op.create_index`` directive for SQL Server
as a whole.
.. changelog::
:version: 1.0.2
:released: October 31, 2018
.. change::
:tags: bug, autogenerate
:tickets: 515
The ``system=True`` flag on :class:`.Column`, used primarily in conjunction
with the Postgresql "xmin" column, now renders within the autogenerate
render process, allowing the column to be excluded from DDL. Additionally,
adding a system=True column to a model will produce no autogenerate diff as
this column is implicitly present in the database.
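For illustration, a column marked this way might look like the following,
assuming the PostgreSQL ``xmin`` use case mentioned above (the ``Integer``
type here is illustrative)::

    from sqlalchemy import Column, Integer

    # system=True: implicitly present in the database; excluded from DDL
    xmin = Column("xmin", Integer, system=True)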
.. change::
:tags: bug, mssql
:tickets: 513
Fixed issue where usage of the SQL Server ``mssql_include`` option within a
:meth:`.Operations.create_index` would raise a KeyError, as the additional
column(s) need to be added to the table object used by the construct
internally.
.. changelog::
:version: 1.0.1
:released: October 17, 2018
.. change::
:tags: bug, commands
:tickets: 497
Fixed an issue where revision descriptions were essentially
being formatted twice. For any revision description that contained
characters like %, writing output to stdout would fail, because the call
to config.print_stdout attempted to format the text against any
additional args passed to the function. The fix now only applies string
formatting if args are actually provided along with the output text.
.. change::
:tags: bug, autogenerate
:tickets: 512
Fixed issue where removed method ``union_update()`` was used when a
customized :class:`.MigrationScript` instance included entries in the
``.imports`` data member, raising an AttributeError.
.. changelog::
:version: 1.0.0
:released: July 13, 2018
.. change::
:tags: feature, general
:tickets: 491
For Alembic 1.0, Python 2.6 / 3.3 support is being dropped, allowing a
fixed setup.py to be built as well as universal wheels. Pull request
courtesy Hugo.
.. change::
:tags: feature, general
With the 1.0 release, Alembic's minimum SQLAlchemy support version
moves to 0.9.0, previously 0.7.9.
.. change::
:tags: bug, batch
:tickets: 502
Fixed issue in batch where dropping a primary key column, then adding it
back under the same name but without the primary_key flag, would not remove
it from the existing PrimaryKeyConstraint. If a new PrimaryKeyConstraint
is added, it is used as-is, as was the case before.
.. changelog::
:version: 0.9.10
:released: June 29, 2018
.. change::
:tags: bug, autogenerate
The "op.drop_constraint()" directive will now render using ``repr()`` for
the schema name, in the same way that "schema" renders for all the other op
directives. Pull request courtesy Denis Kataev.
.. change::
:tags: bug, autogenerate
:tickets: 494
Added basic capabilities for external dialects to support rendering of
"nested" types, like arrays, in a manner similar to that of the Postgresql
dialect.
.. change::
:tags: bug, autogenerate
Fixed issue where "autoincrement=True" would not render for a column that
specified it, since as of SQLAlchemy 1.1 this is no longer the default
value for "autoincrement". Note the behavior only takes effect against the
SQLAlchemy 1.1.0 and higher; for pre-1.1 SQLAlchemy, "autoincrement=True"
does not render as was the case before. Pull request courtesy Elad Almos.
.. changelog::
:version: 0.9.9
:released: March 22, 2018
.. change::
:tags: feature, commands
:tickets: 481
Added new flag ``--indicate-current`` to the ``alembic history`` command.
When listing versions, it will include the token "(current)" to indicate
the given version is a current head in the target database. Pull request
courtesy Kazutaka Mise.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 455
The fix for :ticket:`455` in version 0.9.6 involving MySQL server default
comparison was entirely non-functional, as the test itself was also broken
and didn't reveal that it wasn't working. The regular expression to compare
server default values like CURRENT_TIMESTAMP to current_timestamp() is
repaired.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 483
Fixed bug where MySQL server default comparisons were basically not working
at all due to incorrect regexp added in :ticket:`455`. Also accommodates
for MariaDB 10.2 quoting differences in reporting integer based server
defaults.
.. change::
:tags: bug, operations, mysql
:tickets: 487
Fixed bug in ``op.drop_constraint()`` for MySQL where
quoting rules would not be applied to the constraint name.
.. changelog::
:version: 0.9.8
:released: February 16, 2018
.. change::
:tags: bug, runtime
:tickets: 482
Fixed bug where the :meth:`.Script.as_revision_number` method
did not accommodate for the 'heads' identifier, which in turn
caused the :meth:`.EnvironmentContext.get_head_revisions`
and :meth:`.EnvironmentContext.get_revision_argument` methods
to be not usable when multiple heads were present.
The :meth:`.EnvironmentContext.get_head_revisions` method returns
a tuple in all cases as documented.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 478
Fixed bug where autogenerate of :class:`.ExcludeConstraint`
would render a raw quoted name for a Column that has case-sensitive
characters, which when invoked as an inline member of the Table
would produce a stack trace that the quoted name is not found.
An incoming Column object is now rendered as ``sa.column('name')``.
.. change::
:tags: bug, autogenerate
:tickets: 468
Fixed bug where the indexes would not be included in a
migration that was dropping the owning table. The fix
now will also emit DROP INDEX for the indexes ahead of time,
but more importantly will include CREATE INDEX in the
downgrade migration.
.. change::
:tags: bug, postgresql
:tickets: 480
Fixed the autogenerate of the module prefix
when rendering the text_type parameter of
postgresql.HSTORE, in much the same way that
we do for ARRAY's type and JSON's text_type.
.. change::
:tags: bug, mysql
:tickets: 479
Added support for DROP CONSTRAINT to the MySQL Alembic
dialect to support MariaDB 10.2 which now has real
CHECK constraints. Note this change does **not**
add autogenerate support, only support for op.drop_constraint()
to work.
.. changelog::
:version: 0.9.7
:released: January 16, 2018
.. change::
:tags: bug, autogenerate
:tickets: 472
Fixed regression caused by :ticket:`421` which would
cause case-sensitive quoting rules to interfere with the
comparison logic for index names, thus causing indexes to show
as added for indexes that have case-sensitive names. Works with
SQLAlchemy 0.9 and later series.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 461
Fixed bug where autogenerate would produce a DROP statement for the index
implicitly created by a Postgresql EXCLUDE constraint, rather than skipping
it as is the case for indexes implicitly generated by unique constraints.
Makes use of SQLAlchemy 1.0.x's improved "duplicates index" metadata and
requires at least SQLAlchemy version 1.0.x to function correctly.
.. changelog::
:version: 0.9.6
:released: October 13, 2017
.. change::
:tags: bug, commands
:tickets: 458
Fixed a few Python3.6 deprecation warnings by replacing ``StopIteration``
with ``return``, as well as using ``getfullargspec()`` instead of
``getargspec()`` under Python 3.
.. change::
:tags: bug, commands
:tickets: 441
As an addition to :ticket:`441` fixed in 0.9.5, the ``+`` sign is now also
filtered from migration names, as it likewise breaks due to the relative
migrations feature.
.. change::
:tags: bug, autogenerate
:tickets: 442
Fixed bug expanding upon the fix for
:ticket:`85` which adds the correct module import to the
"inner" type for an ``ARRAY`` type, the fix now accommodates for the
generic ``sqlalchemy.types.ARRAY`` type added in SQLAlchemy 1.1,
rendering the inner type correctly regardless of whether or not the
Postgresql dialect is present.
.. change::
:tags: bug, mysql
:tickets: 455
Fixed bug where server default comparison of CURRENT_TIMESTAMP would fail
on MariaDB 10.2 due to a change in how the function is
represented by the database during reflection.
.. change::
:tags: bug, autogenerate
Fixed bug where comparison of ``Numeric`` types would produce
a difference if the Python-side ``Numeric`` inadvertently specified
a non-None "scale" with a "precision" of None, even though this ``Numeric``
type will pass over the "scale" argument when rendering. Pull request
courtesy Ivan Mmelnychuk.
.. change::
:tags: feature, commands
:tickets: 447
The ``alembic history`` command will now make use of the revision
environment ``env.py`` unconditionally if the ``revision_environment``
configuration flag is set to True. Previously, the environment would
only be invoked if the history specification were against a database-stored
revision token.
.. change::
:tags: bug, batch
:tickets: 457
The name of the temporary table in batch mode is now generated
off of the original table name itself, to avoid conflicts for the
unusual case of multiple batch operations running against the same
database schema at the same time.
.. change::
:tags: bug, autogenerate
:tickets: 456
A :class:`.ForeignKeyConstraint` can now render correctly if the
``link_to_name`` flag is set, as it will not attempt to resolve the name
from a "key" in this case. Additionally, the constraint will render
as-is even if the remote column name isn't present on the referenced
remote table.
.. change::
:tags: bug, runtime, py3k
:tickets: 449
Reworked "sourceless" system to be fully capable of handling any
combination of: Python 2/3.x, pep3149 or not, PYTHONOPTIMIZE or not,
for locating and loading both env.py files as well as versioning files.
This includes: locating files inside of ``__pycache__`` as well as listing
out version files that might be only in ``versions/__pycache__``, deduplicating
version files that may be in ``versions/__pycache__`` and ``versions/``
at the same time, correctly looking for .pyc or .pyo files based on
if pep488 is present or not. The latest Python 3.x deprecation warnings
involving importlib are also corrected.
.. changelog::
:version: 0.9.5
:released: August 9, 2017
.. change::
:tags: bug, commands
:tickets: 441
A :class:`.CommandError` is raised if the "--rev-id" passed to the
:func:`.revision` command contains dashes or at-signs, as this interferes
with the command notation used to locate revisions.
.. change::
:tags: bug, postgresql
:tickets: 424
Added support for the dialect-specific keyword arguments
to :meth:`.Operations.drop_index`. This includes support for
``postgresql_concurrently`` and others.
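A brief sketch of passing such an option through (the index and table
names are hypothetical; CONCURRENTLY also requires running outside a
transaction block on the PostgreSQL side)::

    from alembic import op

    def upgrade():
        op.drop_index(
            "ix_account_email",
            table_name="account",
            postgresql_concurrently=True,
        )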
.. change::
:tags: bug, commands
Fixed bug in timezone feature introduced in
:ticket:`425` when the creation
date in a revision file is calculated, to
accommodate for timezone names that contain
mixed-case characters in their name as opposed
to all uppercase. Pull request courtesy Nils
Philippsen.
.. changelog::
:version: 0.9.4
:released: July 31, 2017
.. change::
:tags: bug, runtime
Added an additional attribute to the new
:paramref:`.EnvironmentContext.configure.on_version_apply` API,
:attr:`.MigrationInfo.up_revision_ids`, to accommodate for the uncommon
case of the ``alembic stamp`` command being used to move from multiple
branches down to a common branchpoint; there will be multiple
"up" revisions in this one case.
.. changelog::
:version: 0.9.3
:released: July 6, 2017
.. change::
:tags: feature, runtime
Added a new callback hook
:paramref:`.EnvironmentContext.configure.on_version_apply`,
which allows user-defined code to be invoked each time an individual
upgrade, downgrade, or stamp operation proceeds against a database.
Pull request courtesy John Passaro.
.. change:: 433
:tags: bug, autogenerate
:tickets: 433
Fixed bug where autogen comparison of a :class:`.Variant` datatype
would not compare to the dialect level type for the "default"
implementation of the :class:`.Variant`, returning the type as changed
between database and table metadata.
.. change:: 431
:tags: bug, tests
:tickets: 431
Fixed unit tests to run correctly under the SQLAlchemy 1.0.x series
prior to version 1.0.10 where a particular bug involving Postgresql
exclude constraints was fixed.
.. changelog::
:version: 0.9.2
:released: May 18, 2017
.. change:: 429
:tags: bug, mssql
:tickets: 429
Repaired :meth:`.Operations.rename_table` for SQL Server when the
target table is in a remote schema; the schema name is now omitted from
the "new name" argument.
.. change:: 425
:tags: feature, commands
:tickets: 425
Added a new configuration option ``timezone``, a string timezone name
that will be applied to the create date timestamp rendered
inside the revision file as made available to the ``file_template`` used
to generate the revision filename. Note this change adds the
``python-dateutil`` package as a dependency.
.. change:: 421
:tags: bug, autogenerate
:tickets: 421
The autogenerate compare scheme now takes into account the name truncation
rules applied by SQLAlchemy's DDL compiler to the names of the
:class:`.Index` object, when these names are dynamically truncated
due to a too-long identifier name. As the identifier truncation is
deterministic, applying the same rule to the metadata name allows
correct comparison to the database-derived name.
.. change:: 419
:tags: bug, environment
:tickets: 419
A warning is emitted when an object that's not a
:class:`~sqlalchemy.engine.Connection` is passed to
:meth:`.EnvironmentContext.configure`. For the case of a
:class:`~sqlalchemy.engine.Engine` passed, the check for "in transaction"
introduced in version 0.9.0 has been relaxed to work in the case of an
attribute error, as some users appear to be passing an
:class:`~sqlalchemy.engine.Engine` and not a
:class:`~sqlalchemy.engine.Connection`.
.. changelog::
:version: 0.9.1
:released: March 1, 2017
.. change:: 417
:tags: bug, commands
:tickets: 417, 369
An adjustment to the bug fix for :ticket:`369` to accommodate for
env.py scripts that use an enclosing transaction distinct from the
one that the context provides, so that the check for "didn't commit
the transaction" doesn't trigger in this scenario.
.. changelog::
:version: 0.9.0
:released: February 28, 2017
.. change:: 38
:tags: feature, autogenerate
:tickets: 38
The :paramref:`.EnvironmentContext.configure.target_metadata` parameter
may now be optionally specified as a sequence of :class:`.MetaData`
objects instead of a single :class:`.MetaData` object. The
autogenerate process will process the sequence of :class:`.MetaData`
objects in order.
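A sketch of how this might be used in ``env.py``, assuming two declarative
bases ``Base1`` and ``Base2`` from the application::

    # autogenerate will scan each MetaData collection in order
    target_metadata = [Base1.metadata, Base2.metadata]

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
    )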
.. change:: 369
:tags: bug, commands
:tickets: 369
A :class:`.CommandError` is now raised when a migration file opens
a database transaction and does not close/commit/rollback, when
the backend database or environment options also specify transactional_ddl
is False. When transactional_ddl is not in use, Alembic doesn't
close any transaction, so a transaction opened by a migration file
will cause the following migrations to fail to apply.
.. change:: 413
:tags: bug, autogenerate, mysql
:tickets: 413
The ``autoincrement=True`` flag is now rendered within the
:meth:`.Operations.alter_column` operation if the source column indicates
that this flag should be set to True. The behavior is sensitive to
the SQLAlchemy version in place, as the "auto" default option is new
in SQLAlchemy 1.1. When the source column indicates autoincrement
as True or "auto", the flag will render as True if the original column
contextually indicates that it should have "autoincrement" keywords,
and when the source column explicitly sets it to False, this is also
rendered. The behavior is intended to preserve the AUTO_INCREMENT flag
on MySQL as the column is fully recreated on this backend. Note that this
flag does **not** support alteration of a column's "autoincrement" status,
as this is not portable across backends.
.. change:: 411
:tags: bug, postgresql
:tickets: 411
Fixed bug where Postgresql JSON/JSONB types rendered on SQLAlchemy
1.1 would render the "astext_type" argument which defaults to
the ``Text()`` type without the module prefix, similarly to the
issue with ARRAY fixed in :ticket:`85`.
.. change:: 85
:tags: bug, postgresql
:tickets: 85
Fixed bug where Postgresql ARRAY type would not render the import prefix
for the inner type; additionally, user-defined renderers take place
for the inner type as well as the outer type. Pull request courtesy
Paul Brackin.
.. change:: process_revision_directives_command
:tags: feature, autogenerate
Added a keyword argument ``process_revision_directives`` to the
:func:`.command.revision` API call. This function acts in the
same role as the environment-level
:paramref:`.EnvironmentContext.configure.process_revision_directives`,
and allows API use of the
command to drop in an ad-hoc directive process function. This
function can be used among other things to place a complete
:class:`.MigrationScript` structure in place.
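A sketch of API use (the ``alembic.ini`` path and the message prefixing
are illustrative)::

    from alembic import command
    from alembic.config import Config

    def process_revision_directives(context, revision, directives):
        # directives contains a single MigrationScript structure,
        # which may be modified or replaced in place
        script = directives[0]
        script.message = "auto: " + (script.message or "")

    cfg = Config("alembic.ini")
    command.revision(
        cfg,
        message="add widgets",
        process_revision_directives=process_revision_directives,
    )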
.. change:: 412
:tags: feature, postgresql
:tickets: 412
Added support for Postgresql EXCLUDE constraints, including the
operation directive :meth:`.Operations.create_exclude_constraint`
as well as autogenerate render support for the ``ExcludeConstraint``
object as present in a ``Table``. Autogenerate detection for an EXCLUDE
constraint added or removed to/from an existing table is **not**
implemented as the SQLAlchemy Postgresql dialect does not yet support
reflection of EXCLUDE constraints.
Additionally, unknown constraint types now warn when
encountered within an autogenerate action rather than raise.
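A sketch of the new directive, loosely modeled on range-exclusion
examples (the table, columns and constraint name are hypothetical)::

    from alembic import op

    def upgrade():
        op.create_exclude_constraint(
            "uq_booking_no_overlap",
            "booking",
            ("room_id", "="),
            ("period", "&&"),
            using="gist",
        )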
.. change:: fk_schema_compare
:tags: bug, operations
Fixed bug in :func:`.ops.create_foreign_key` where the internal table
representation would not be created properly if the foreign key referred
to a table in a different schema of the same name. Pull request
courtesy Konstantin Lebedev.
.. changelog::
:version: 0.8.10
:released: January 17, 2017
.. change:: 406
:tags: bug, versioning
:tickets: 406
The alembic_version table, when initially created, now establishes a
primary key constraint on the "version_num" column, to suit database
engines that don't support tables without primary keys. This behavior
can be controlled using the parameter
:paramref:`.EnvironmentContext.configure.version_table_pk`. Note that
this change only applies to the initial creation of the alembic_version
table; it does not impact any existing alembic_version table already
present.
.. change:: 402
:tags: bug, batch
:tickets: 402
Fixed bug where doing ``batch_op.drop_constraint()`` against the
primary key constraint would fail to remove the "primary_key" flag
from the column, resulting in the constraint being recreated.
.. change:: update_uq_dedupe
:tags: bug, autogenerate, oracle
Adjusted the logic originally added for :ticket:`276` that detects MySQL
unique constraints which are actually unique indexes to be generalized
for any dialect that has this behavior, for SQLAlchemy version 1.0 and
greater. This is to allow for upcoming SQLAlchemy support for unique
constraint reflection for Oracle, which also has no dedicated concept of
"unique constraint" and instead establishes a unique index.
.. change:: 356
:tags: bug, versioning
:tickets: 356
Added a file ignore for Python files of the form ``.#<name>.py``,
which are generated by the Emacs editor. Pull request courtesy
Markus Mattes.
.. changelog::
:version: 0.8.9
:released: November 28, 2016
.. change:: 393
:tags: bug, autogenerate
:tickets: 393
Adjustment to the "please adjust!" comment in the script.py.mako
template so that the generated comment starts with a single pound
sign, appeasing flake8.
.. change::
:tags: bug, batch
:tickets: 391
Batch mode will not use CAST() to copy data if ``type_`` is given, as long
as the basic type affinity matches that of the existing type. This is to
avoid SQLite's CAST of TIMESTAMP, which results in truncation of the data,
in those cases where the user needs to add a redundant ``type_`` for
other reasons.
.. change::
:tags: bug, autogenerate
:tickets: 393
Continued pep8 improvements by adding appropriate whitespace in
the base template for generated migrations. Pull request courtesy
Markus Mattes.
.. change::
:tags: bug, revisioning
Added an additional check when reading in revision files to detect
if the same file is being read twice; this can occur if the same directory
or a symlink equivalent is present more than once in version_locations.
A warning is now emitted and the file is skipped. Pull request courtesy
Jiri Kuncar.
.. change::
:tags: bug, autogenerate
:tickets: 395
Fixed bug where usage of a custom TypeDecorator which returns a
per-dialect type via :meth:`.TypeDecorator.load_dialect_impl` that differs
significantly from the default "impl" for the type decorator would fail
to compare correctly during autogenerate.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 392
Fixed bug in Postgresql "functional index skip" behavior where a
functional index that ended in ASC/DESC wouldn't be detected as something
we can't compare in autogenerate, leading to duplicate definitions
in autogenerated files.
.. change::
:tags: bug, versioning
Fixed bug where the "base" specifier, as in "base:head", could not
be used explicitly when ``--sql`` mode was present.
.. changelog::
:version: 0.8.8
:released: September 12, 2016
.. change::
:tags: autogenerate
The imports in the default script.py.mako are now at the top
so that flake8 editors don't complain by default. PR courtesy
Guilherme Mansur.
.. change::
:tags: feature, operations, postgresql
:tickets: 292
Added support for the USING clause to the ALTER COLUMN operation
for Postgresql. Support is via the
:paramref:`.op.alter_column.postgresql_using`
parameter. Pull request courtesy Frazer McLean.
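A sketch of the parameter in use (the column names and the cast expression
are hypothetical)::

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # USING controls how existing rows are converted to the new type
        op.alter_column(
            "account",
            "created_at",
            existing_type=sa.String(),
            type_=sa.DateTime(),
            postgresql_using="created_at::timestamp",
        )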
.. change::
:tags: feature, autogenerate
Autogenerate with type comparison enabled will pick up on the timezone
setting changing between DateTime types. Pull request courtesy
David Szotten.
.. changelog::
:version: 0.8.7
:released: July 26, 2016
.. change::
:tags: bug, versioning
:tickets: 336
Fixed bug where upgrading to the head of a branch which is already
present would fail, only if that head were also the dependency
of a different branch that is also upgraded, as the revision system
would see this as trying to go in the wrong direction. The check
here has been refined to distinguish between same-branch revisions
out of order vs. movement along sibling branches.
.. change::
:tags: bug, versioning
:tickets: 379
Adjusted the version traversal on downgrade
such that we can downgrade to a version that is a dependency for
a version in a different branch, *without* needing to remove that
dependent version as well. Previously, the target version would be
seen as a "merge point" for it's normal up-revision as well as the
dependency. This integrates with the changes for :ticket:`377`
and :ticket:`378` to improve treatment of branches with dependencies
overall.
.. change::
:tags: bug, versioning
:tickets: 377
Fixed bug where a downgrade to a version that is also a dependency
to a different branch would fail, as the system attempted to treat
this as an "unmerge" of a merge point, when in fact it doesn't have
the other side of the merge point available for update.
.. change::
:tags: bug, versioning
:tickets: 378
Fixed bug where the "alembic current" command wouldn't show a revision
as a current head if it were also a dependency of a version in a
different branch that's also applied. Extra logic is added to
extract "implied" versions of different branches from the top-level
versions listed in the alembic_version table.
.. change::
:tags: bug, versioning
Fixed bug where a repr() or str() of a Script object would fail
if the script had multiple dependencies.
.. change::
:tags: bug, autogenerate
Fixed bug in autogen where if the DB connection sends the default
schema as "None", this "None" would be removed from the list of
schemas to check if include_schemas were set. This could possibly
impact using include_schemas with SQLite.
.. change::
:tags: bug, batch
Small adjustment made to the batch handling for reflected CHECK
constraints to accommodate for SQLAlchemy 1.1 now reflecting these.
Batch mode still does not support CHECK constraints from the reflected
table as these can't be easily differentiated from the ones created
by types such as Boolean.
.. changelog::
:version: 0.8.6
:released: April 14, 2016
.. change::
:tags: bug, commands
:tickets: 367
Errors which occur within the Mako render step are now intercepted
and raised as CommandErrors like other failure cases; the Mako
exception itself is written using template-line formatting to
a temporary file which is named in the exception message.
.. change::
:tags: bug, postgresql
:tickets: 365
Added a fix to Postgresql server default comparison which first checks
if the text of the default is identical to the original, before attempting
to actually run the default. This accommodates for default-generation
functions that generate a new value each time such as a uuid function.
.. change::
:tags: bug, batch
:tickets: 361
Fixed bug introduced by the fix for :ticket:`338` in version 0.8.4
where a server default could no longer be dropped in batch mode.
Pull request courtesy Martin Domke.
.. change::
:tags: bug, batch, mssql
Fixed bug where SQL Server arguments for drop_column() would not
be propagated when running under a batch block. Pull request
courtesy Michal Petrucha.
.. changelog::
:version: 0.8.5
:released: March 9, 2016
.. change::
:tags: bug, autogenerate
:tickets: 335
Fixed bug where the columns rendered in a ``PrimaryKeyConstraint``
in autogenerate would inappropriately render the "key" of the
column, not the name. Pull request courtesy Jesse Dhillon.
.. change::
:tags: bug, batch
:tickets: 354
Repaired batch migration support for "schema" types which generate
constraints, in particular the ``Boolean`` datatype which generates
a CHECK constraint. Previously, an alter column operation with this
type would fail to correctly accommodate for the CHECK constraint
when changing both from and to this type. In the former case the operation
would fail entirely; in the latter, the CHECK constraint would
not get generated. Both of these issues are repaired.
.. change::
:tags: bug, mysql
:tickets: 355
Changing a schema type such as ``Boolean`` to a non-schema type would
emit a drop constraint operation which emits ``NotImplementedError`` for
the MySQL dialect. This drop constraint operation is now skipped when
the constraint originates from a schema type.
.. changelog::
:version: 0.8.4
:released: December 15, 2015
.. change::
:tags: feature, versioning
A major improvement to the hash id generation function, which for some
reason used an awkward arithmetic formula against uuid4() that produced
values that tended to start with the digits 1-4. Replaced with a
simple substring approach which provides an even distribution. Pull
request courtesy Antti Haapala.
.. change::
:tags: feature, autogenerate
Added an autogenerate renderer for the :class:`.ExecuteSQLOp` operation
object; only renders if given a plain SQL string, otherwise raises
NotImplementedError. Can be of help with custom autogenerate
sequences that include straight SQL execution. Pull request courtesy
Jacob Magnusson.
.. change::
:tags: bug, batch
:tickets: 345
Batch mode generates a FOREIGN KEY constraint that is self-referential
using the ultimate table name, rather than ``_alembic_batch_temp``.
When the table is renamed from ``_alembic_batch_temp`` back to the
original name, the FK now points to the right name. This
will **not** work if referential integrity is being enforced (e.g. SQLite
"PRAGMA FOREIGN_KEYS=ON") since the original table is dropped and
the new table then renamed to that name, however this is now consistent
with how foreign key constraints on **other** tables already operate
with batch mode; these don't support batch mode if referential integrity
is enabled in any case.
.. change::
:tags: bug, autogenerate
:tickets: 341
Added a type-level comparator that distinguishes :class:`.Integer`,
:class:`.BigInteger`, and :class:`.SmallInteger` types and
dialect-specific types; these all have "Integer" affinity so previously
all compared as the same.
.. change::
:tags: bug, batch
:tickets: 338
Fixed bug where the ``server_default`` parameter of ``alter_column()``
would not function correctly in batch mode.
.. change::
:tags: bug, autogenerate
:tickets: 337
Adjusted the rendering for index expressions such that a :class:`.Column`
object present in the source :class:`.Index` will not be rendered
as table-qualified; e.g. the column name will be rendered alone.
Table-qualified names here were failing on systems such as Postgresql.
.. changelog::
:version: 0.8.3
:released: October 16, 2015
.. change::
:tags: bug, autogenerate
:tickets: 332
Fixed a 0.8 regression whereby the "imports" dictionary member of
the autogen context was removed; this collection is documented in the
"render custom type" documentation as a place to add new imports.
The member is now known as
:attr:`.AutogenContext.imports` and the documentation is repaired.
.. change::
:tags: bug, batch
:tickets: 333
Fixed bug in batch mode where a table that had pre-existing indexes
would create the same index on the new table with the same name,
which on SQLite produces a naming conflict as index names are in a
global namespace on that backend. Batch mode now defers the production
of both existing and new indexes until after the entire table transfer
operation is complete, which also means those indexes no longer take
effect during the INSERT from SELECT section as well; the indexes
are applied in a single step afterwards.
.. change::
:tags: bug, tests
Added "pytest-xdist" as a tox dependency, so that the -n flag
in the test command works if this is not already installed.
Pull request courtesy Julien Danjou.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 324
Fixed issue in PG server default comparison where model-side defaults
configured with Python unicode literals would leak the "u" character
from a ``repr()`` into the SQL used for comparison, creating an invalid
SQL expression, as the server-side comparison feature in PG currently
repurposes the autogenerate Python rendering feature to get a quoted
version of a plain string default.
.. changelog::
:version: 0.8.2
:released: August 25, 2015
.. change::
:tags: bug, autogenerate
:tickets: 321
Added workaround in new foreign key option detection feature for
MySQL's consideration of the "RESTRICT" option being the default,
for which no value is reported from the database; the MySQL impl now
corrects for when the model reports RESTRICT but the database reports
nothing. A similar rule is in the default FK comparison to accommodate
for the default "NO ACTION" setting being present in the model but not
necessarily reported by the database, or vice versa.
.. changelog::
:version: 0.8.1
:released: August 22, 2015
.. change::
:tags: feature, autogenerate
A custom :paramref:`.EnvironmentContext.configure.process_revision_directives`
hook can now generate op directives within the :class:`.UpgradeOps`
and :class:`.DowngradeOps` containers that will be generated as Python
code even when the ``--autogenerate`` flag is False; provided that
``revision_environment=True``, the full render operation will be run
even in "offline" mode.
.. change::
:tags: bug, autogenerate
Repaired the render operation for the :class:`.ops.AlterColumnOp` object
to succeed when the "existing_type" field was not present.
.. change::
:tags: bug, autogenerate
:tickets: 318
Fixed a regression in 0.8 whereby the "multidb" environment template
failed to produce independent migration script segments for the
output template. This was due to the reorganization of the script
rendering system for 0.8. To accommodate this change, the
:class:`.MigrationScript` structure will in the case of multiple
calls to :meth:`.MigrationContext.run_migrations` produce lists
for the :attr:`.MigrationScript.upgrade_ops` and
:attr:`.MigrationScript.downgrade_ops` attributes; each :class:`.UpgradeOps`
and :class:`.DowngradeOps` instance keeps track of its own
``upgrade_token`` and ``downgrade_token``, and each are rendered
individually.
.. seealso::
:ref:`autogen_customizing_multiengine_revision` - additional detail
on the workings of the
:paramref:`.EnvironmentContext.configure.process_revision_directives`
parameter when multiple calls to :meth:`.MigrationContext.run_migrations`
are made.
.. change::
:tags: feature, autogenerate
:tickets: 317
Implemented support for autogenerate detection of changes in the
``ondelete``, ``onupdate``, ``initially`` and ``deferrable``
attributes of :class:`.ForeignKeyConstraint` objects on
SQLAlchemy backends that support these on reflection
(as of SQLAlchemy 1.0.8 currently Postgresql for all four,
MySQL for ``ondelete`` and ``onupdate`` only). A constraint object
that modifies these values will be reported as a "diff" and come out
as a drop/create of the constraint with the modified values.
The fields are ignored for backends which don't reflect these
attributes (as of SQLA 1.0.8 this includes SQLite, Oracle, SQL Server,
others).
.. changelog::
:version: 0.8.0
:released: August 12, 2015
.. change::
:tags: bug, batch
:tickets: 315
Fixed bug in batch mode where the ``batch_op.create_foreign_key()``
directive would be incorrectly rendered with the source table and
schema names in the argument list.
.. change::
:tags: feature, commands
Added new command ``alembic edit``. This command takes the same
arguments as ``alembic show``, however runs the target script
file within $EDITOR. Makes use of the ``python-editor`` library
in order to facilitate the handling of $EDITOR with reasonable
default behaviors across platforms. Pull request courtesy
Michel Albert.
.. change::
:tags: feature, commands
:tickets: 311
Added new multiple-capable argument ``--depends-on`` to the
``alembic revision`` command, allowing ``depends_on`` to be
established at the command line level rather than having to edit
the file after the fact. ``depends_on`` identifiers may also be
specified as branch names at the command line or directly within
the migration file. The values may be specified as partial
revision numbers from the command line which will be resolved to
full revision numbers in the output file.
.. change::
:tags: change, operations
A range of positional argument names have been changed to be
clearer and more consistent across methods within the
:class:`.Operations` namespace. The most prevalent form of name change
is that the descriptive names ``constraint_name`` and ``table_name``
are now used where previously the name ``name`` would be used.
This is in support of the newly modularized and extensible system of
operation objects in :mod:`alembic.operations.ops`.
An argument translation layer is in place
across the ``alembic.op`` namespace that will ensure that named
argument calling styles that use the old names will continue to
function by transparently translating to the new names,
also emitting a warning. This, along with the fact that these
arguments are positional in any case and aren't normally
passed with an explicit name, should ensure that the
overwhelming majority of applications should be unaffected by this
change. The *only* applications that are impacted are those that:
1. use the :class:`.Operations` object directly in some way, rather
than calling upon the ``alembic.op`` namespace, and
2. invoke the methods on :class:`.Operations` using named keyword
arguments for positional arguments like ``table_name``,
``constraint_name``, etc., which commonly were named ``name``
as of 0.7.6.
3. any application that is using named keyword arguments in place
of positional arguments for the recently added
:class:`.BatchOperations` object may also be affected.
The naming changes are documented as "versionchanged" for 0.8.0:
* :meth:`.BatchOperations.create_check_constraint`
* :meth:`.BatchOperations.create_foreign_key`
* :meth:`.BatchOperations.create_index`
* :meth:`.BatchOperations.create_unique_constraint`
* :meth:`.BatchOperations.drop_constraint`
* :meth:`.BatchOperations.drop_index`
* :meth:`.Operations.create_check_constraint`
* :meth:`.Operations.create_foreign_key`
* :meth:`.Operations.create_primary_key`
* :meth:`.Operations.create_index`
* :meth:`.Operations.create_table`
* :meth:`.Operations.create_unique_constraint`
* :meth:`.Operations.drop_constraint`
* :meth:`.Operations.drop_index`
* :meth:`.Operations.drop_table`
.. change::
:tags: feature, tests
The default test runner via "python setup.py test" is now py.test.
nose still works via run_tests.py.
.. change::
:tags: feature, operations
:tickets: 302
The internal system for Alembic operations has been reworked to now
build upon an extensible system of operation objects. New operations
can be added to the ``op.`` namespace, and these are also made
available within custom autogenerate schemes.
.. seealso::
:ref:`operation_plugins`
.. change::
:tags: feature, autogenerate
:tickets: 301, 306
The internal system for autogenerate has been reworked to build upon
the extensible system of operation objects present in
:ticket:`302`. As part of this change, autogenerate now produces
a full object graph representing a list of migration scripts to
be written as well as operation objects that will render all the
Python code within them; a new hook
:paramref:`.EnvironmentContext.configure.process_revision_directives`
allows end-user code to fully customize what autogenerate will do,
including not just full manipulation of the Python steps to take
but also what file or files will be written and where. Additionally,
autogenerate is now extensible as far as database objects compared
and rendered into scripts; any new operation directive can also be
registered into a series of hooks that allow custom database/model
comparison functions to run as well as to render new operation
directives into autogenerate scripts.
.. seealso::
:ref:`alembic.autogenerate.toplevel`
.. change::
:tags: bug, versioning
:tickets: 314
Fixed bug where in the erroneous case that alembic_version contains
duplicate revisions, some commands would fail to process the
version history correctly and end up with a KeyError. The fix
allows the versioning logic to proceed, however a clear error is
emitted later when attempting to update the alembic_version table.
.. changelog::
:version: 0.7.7
:released: July 22, 2015
.. change::
:tags: bug, versioning
:tickets: 310
Fixed critical issue where a complex series of branches/merges would
bog down the iteration algorithm working over redundant nodes for
millions of cycles. An internal adjustment has been
made so that duplicate nodes are skipped within this iteration.
.. change::
:tags: feature, batch
:tickets: 305
Implemented support for :meth:`.BatchOperations.create_primary_key`
and :meth:`.BatchOperations.create_check_constraint`. Additionally,
table keyword arguments are copied from the original reflected table,
such as the "mysql_engine" keyword argument.
.. change::
:tags: bug, environment
:tickets: 300
The :meth:`.MigrationContext.stamp` method, added as part of the
versioning refactor in 0.7 as a more granular version of
:func:`.command.stamp`, now includes the "create the alembic_version
table if not present" step in the same way as the command version,
which was previously omitted.
.. change::
:tags: bug, autogenerate
:tickets: 298
Fixed bug where foreign key options including "onupdate",
"ondelete" would not render within the ``op.create_foreign_key()``
directive, even though they render within a full
``ForeignKeyConstraint`` directive.
.. change::
:tags: bug, tests
Repaired warnings that occur when running unit tests against
SQLAlchemy 1.0.5 or greater involving the "legacy_schema_aliasing"
flag.
.. changelog::
:version: 0.7.6
:released: May 5, 2015
.. change::
:tags: feature, versioning
:tickets: 297
Fixed bug where the case of multiple mergepoints that all
have the identical set of ancestor revisions would fail to be
upgradable, producing an assertion failure. Merge points were
previously assumed to always require at least an UPDATE in
alembic_revision from one of the previous revs to the new one,
however in this case, if one of the mergepoints has already
been reached, the remaining mergepoints have no row to UPDATE therefore
they must do an INSERT of their target version.
.. change::
:tags: feature, autogenerate
:tickets: 296
Added support for type comparison functions to be not just per
environment, but also present on the custom types themselves, by
supplying a method ``compare_against_backend``.
Added a new documentation section :ref:`compare_types` describing
type comparison fully.
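A minimal sketch of the per-type hook, loosely following the new
documentation section (the type itself is hypothetical)::

    import sqlalchemy.types as types

    class MySpecialTime(types.TypeDecorator):
        impl = types.TIME

        def compare_against_backend(self, dialect, conn_type):
            # return True for "types match", False for "changed";
            # returning None falls back to the default comparison
            return isinstance(conn_type, types.TIME)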
.. change::
:tags: feature, operations
:tickets: 255
Added a new option
:paramref:`.EnvironmentContext.configure.literal_binds`, which
will pass the ``literal_binds`` flag into the compilation of SQL
constructs when using "offline" mode. This has the effect that
SQL objects like inserts, updates, deletes as well as textual
statements sent using ``text()`` will be compiled such that the dialect
will attempt to render literal values "inline" automatically.
Only a subset of types is typically supported; the
:meth:`.Operations.inline_literal` construct remains as the construct
used to force a specific literal representation of a value.
The :paramref:`.EnvironmentContext.configure.literal_binds` flag
is added to the "offline" section of the ``env.py`` files generated
in new environments.
.. change::
:tags: bug, batch
:tickets: 289
Fully implemented the
:paramref:`~.Operations.batch_alter_table.copy_from` parameter for
batch mode, which previously was not functioning. This allows
"batch mode" to be usable in conjunction with ``--sql``.
.. change::
:tags: bug, batch
:tickets: 287
Repaired support for the :meth:`.BatchOperations.create_index`
directive, which was mis-named internally such that the operation
within a batch context could not proceed. The create index
operation will proceed as part of a larger "batch table recreate"
operation only if
:paramref:`~.Operations.batch_alter_table.recreate` is set to
"always", or if the batch operation includes other instructions that
require a table recreate.
.. changelog::
:version: 0.7.5
:released: March 19, 2015
.. change::
:tags: bug, autogenerate
:tickets: 266
The ``--autogenerate`` option is not valid when used in conjunction
with "offline" mode, e.g. ``--sql``. This now raises a ``CommandError``,
rather than failing more deeply later on. Pull request courtesy
Johannes Erdfelt.
.. change::
:tags: bug, operations, mssql
:tickets: 284
Fixed bug where the mssql DROP COLUMN directive failed to include
modifiers such as "schema" when emitting the DDL.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 282
Postgresql "functional" indexes are necessarily skipped from the
autogenerate process, as the SQLAlchemy backend currently does not
support reflection of these structures. A warning is emitted
both from the SQLAlchemy backend as well as from the Alembic
backend for Postgresql when such an index is detected.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 276
Fixed bug where MySQL backend would report dropped unique indexes
and/or constraints as both at the same time. This is because
MySQL doesn't actually have a "unique constraint" construct that
reports differently than a "unique index", so it is present in both
lists. The net effect though is that the MySQL backend will report
a dropped unique index/constraint as an index in cases where the object
was first created as a unique constraint, if no other information
is available to make the decision. This differs from other backends
like Postgresql which can report on unique constraints and
unique indexes separately.
.. change::
:tags: bug, commands
:tickets: 269
Fixed bug where using a partial revision identifier as the
"starting revision" in ``--sql`` mode in a downgrade operation
would fail to resolve properly.
As a side effect of this change, the
:meth:`.EnvironmentContext.get_starting_revision_argument`
method will return the "starting" revision in its originally given
"partial" form in all cases, whereas previously when
running within the :meth:`.command.stamp` command, it would have
been resolved to a full number before passing it to the
:class:`.EnvironmentContext`. The resolution of this value to
a real revision number has basically been moved to a more fundamental
level within the offline migration process.
.. change::
:tags: feature, commands
Added a new feature :attr:`.Config.attributes`, to help with the use
case of sharing state such as engines and connections on the outside
with a series of Alembic API calls; also added a new cookbook section
to describe this simple but pretty important use case.
.. seealso::
:ref:`connection_sharing`
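A sketch of the use case (the ``alembic.ini`` path and engine URL are
illustrative; the corresponding ``env.py`` is assumed to look for the
"connection" attribute and use it when present)::

    from alembic import command
    from alembic.config import Config
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://scott:tiger@localhost/test")

    with engine.begin() as connection:
        cfg = Config("alembic.ini")
        # share the live connection with env.py via Config.attributes
        cfg.attributes["connection"] = connection
        command.upgrade(cfg, "head")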
.. change::
:tags: feature, environment
The format of the default ``env.py`` script has been refined a bit;
it now uses context managers not only for the scope of the transaction,
but also for connectivity from the starting engine. The engine is also
now called a "connectable" in support of the use case of an external
connection being passed in.
.. change::
:tags: feature, versioning
:tickets: 267
Added support for "alembic stamp" to work when given "heads" as an
argument, when multiple heads are present.
.. changelog::
:version: 0.7.4
:released: January 12, 2015
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 241
Repaired issue where a server default specified without ``text()``
that represented a numeric or floating point (e.g. with decimal places)
value would fail in the Postgresql-specific check for "compare server
default"; as PG accepts the value with quotes in the table specification,
it's still valid. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug, autogenerate
:tickets: 259
The rendering of a :class:`~sqlalchemy.schema.ForeignKeyConstraint`
will now ensure that the names of the source and target columns are
the database-side name of each column, and not the value of the
``.key`` attribute as may be set only on the Python side.
This is because Alembic generates the DDL for constraints
as standalone objects without the need to actually refer to an in-Python
:class:`~sqlalchemy.schema.Table` object, so there's no step that
would resolve these Python-only key names to database column names.
.. change::
:tags: bug, autogenerate
:tickets: 260
Fixed bug in foreign key autogenerate where if the in-Python table
used custom column keys (e.g. using the ``key='foo'`` kwarg to
``Column``), the comparison of existing foreign keys to those specified
in the metadata would fail, as the reflected table would not have
these keys available with which to match up. Foreign key comparison for
autogenerate now ensures it's looking at the database-side names
of the columns in all cases; this matches the same functionality
within unique constraints and indexes.
.. change::
:tags: bug, autogenerate
:tickets: 261
Fixed issue in autogenerate type rendering where types that belong
to modules that have the name "sqlalchemy" in them would be mistaken
as being part of the ``sqlalchemy.`` namespace. Pull request courtesy
Bartosz Burclaf.
.. changelog::
:version: 0.7.3
:released: December 30, 2014
.. change::
:tags: bug, versioning
:tickets: 258
Fixed regression in new versioning system where upgrade / history
operation would fail on AttributeError if no version files were
present at all.
.. changelog::
:version: 0.7.2
:released: December 18, 2014
.. change::
:tags: bug, sqlite, autogenerate
Adjusted the SQLite backend regarding autogen of unique constraints
to work fully with the current SQLAlchemy 1.0, which now will report
on UNIQUE constraints that have no name.
.. change::
:tags: bug, batch
:tickets: 254
Fixed bug in batch where if the target table contained multiple
foreign keys to the same target table, the batch mechanics would
fail with a "table already exists" error. Thanks for the help
on this from Lucas Kahlert.
.. change::
:tags: bug, mysql
:tickets: 251
Fixed an issue where the MySQL routine to skip foreign-key-implicit
indexes would also catch unnamed unique indexes, as they would be
named after the column and look like the FK indexes. Pull request
courtesy Johannes Erdfelt.
.. change::
:tags: bug, mssql, oracle
:tickets: 253
Repaired a regression in both the MSSQL and Oracle dialects whereby
the overridden ``_exec()`` method failed to return a value, as is
needed now in the 0.7 series.
.. changelog::
:version: 0.7.1
:released: December 3, 2014
.. change::
:tags: bug, batch
The ``render_as_batch`` flag was inadvertently hardcoded to ``True``,
so all autogenerates were spitting out batch mode; this has been
fixed so that batch mode is again used only when selected in env.py.
.. change::
:tags: feature, autogenerate
:tickets: 178
Support for autogenerate of FOREIGN KEY constraints has been added.
These are delivered within the autogenerate process in the same
manner as UNIQUE constraints, including ``include_object`` support.
Big thanks to Ann Kamyshnikova for doing the heavy lifting here.
.. change::
:tags: feature, batch
Added :paramref:`~.Operations.batch_alter_table.naming_convention`
argument to :meth:`.Operations.batch_alter_table`, as this is necessary
in order to drop foreign key constraints; these are often unnamed
on the target database, and in the case that they are named, SQLAlchemy
is as of the 0.9 series not including these names yet.
.. seealso::
:ref:`dropping_sqlite_foreign_keys`
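A minimal sketch of the pattern, with hypothetical table and constraint names; the convention allows an unnamed foreign key to be addressed for a DROP::

    from alembic import op

    naming_convention = {
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    }

    with op.batch_alter_table(
        "address", naming_convention=naming_convention
    ) as batch_op:
        # the constraint name is derived from the convention above
        batch_op.drop_constraint("fk_address_user_id_user", type_="foreignkey")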
.. change::
:tags: bug, batch
Fixed bug where the "source_schema" argument was not correctly passed
when calling :meth:`.BatchOperations.create_foreign_key`. Pull
request courtesy Malte Marquarding.
.. change::
:tags: bug, batch
:tickets: 249
Repaired the inspection, copying and rendering of CHECK constraints
and so-called "schema" types such as Boolean, Enum within the batch
copy system; the CHECK constraint will not be "doubled" when the table is
copied, and additionally the inspection of the CHECK constraint for
its member columns will no longer fail with an attribute error.
.. change::
:tags: feature, batch
Added two new arguments
:paramref:`.Operations.batch_alter_table.reflect_args`
and :paramref:`.Operations.batch_alter_table.reflect_kwargs`, so that
arguments may be passed directly to suit the
:class:`~.sqlalchemy.schema.Table`
object that will be reflected.
.. seealso::
:ref:`batch_controlling_table_reflection`
.. changelog::
:version: 0.7.0
:released: November 24, 2014
.. change::
:tags: feature, versioning
:tickets: 167
The "multiple heads / branches" feature has now landed. This is
by far the most significant change Alembic has seen since its inception;
while the workflow of most commands hasn't changed, and the format
of version files and the ``alembic_version`` table are unchanged as well,
a new suite of features opens up in the case where multiple version
files refer to the same parent, or to the "base". Merging of
branches, operating across distinct named heads, and multiple
independent bases are now all supported. The feature incurs radical
changes to the internals of versioning and traversal, and should be
treated as "beta mode" for the next several subsequent releases
within 0.7.
.. seealso::
:ref:`branches`
.. change::
:tags: feature, versioning
:tickets: 124
In conjunction with support for multiple independent bases, the
specific version directories are now also configurable to include
multiple, user-defined directories. When multiple directories exist,
the creation of a revision file with no down revision requires
that the starting directory is indicated; the creation of subsequent
revisions along that lineage will then automatically use that
directory for new files.
.. seealso::
:ref:`multiple_version_directories`
.. change::
:tags: feature, operations, sqlite
:tickets: 21
Added "move and copy" workflow, where a table to be altered is copied to
a new one with the new structure and the old one dropped, is now
implemented for SQLite as well as all database backends in general
using the new :meth:`.Operations.batch_alter_table` system. This
directive provides a table-specific operations context which gathers
column- and constraint-level mutations specific to that table, and
at the end of the context creates a new table combining the structure
of the old one with the given changes, copies data from old table to new,
and finally drops the old table,
renaming the new one to the existing name. This is required for
fully featured SQLite migrations, as SQLite has very little support for the
traditional ALTER directive. The batch directive
is intended to produce code that is still compatible with other databases,
in that the "move and copy" process only occurs for SQLite by default,
while still providing some level of sanity to SQLite's
requirement by allowing multiple table mutation operations to
proceed within one "move and copy" as well as providing explicit
control over when this operation actually occurs. The "move and copy"
feature may be optionally applied to other backends as well, however
dealing with referential integrity constraints from other tables must
still be handled explicitly.
.. seealso::
:ref:`batch_migrations`
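A brief hypothetical example of the directive as used within a migration script (table and column names are made up)::

    import sqlalchemy as sa

    from alembic import op

    # all mutations within this block are applied via "move and copy"
    # on SQLite; other backends emit plain ALTER statements by default
    with op.batch_alter_table("account") as batch_op:
        batch_op.add_column(sa.Column("last_seen", sa.DateTime()))
        batch_op.drop_column("obsolete_flag")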
.. change::
:tags: feature, commands
Relative revision identifiers as used with ``alembic upgrade``,
``alembic downgrade`` and ``alembic history`` can be combined with
specific revisions as well, e.g. ``alembic upgrade ae10+3``, to produce
a migration target relative to the given exact version.
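The same target may be passed programmatically; e.g., assuming a :class:`.Config` named ``cfg`` and a hypothetical revision id ``ae10``::

    from alembic import command

    # upgrade to three steps above revision ae10
    command.upgrade(cfg, "ae10+3")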
.. change::
:tags: bug, commands
:tickets: 248
The ``alembic revision`` command accepts the ``--sql`` option to
suit some very obscure use case where the ``revision_environment``
flag is set up, so that ``env.py`` is run when ``alembic revision``
is run even though autogenerate isn't specified. As this flag is
otherwise confusing, error messages are now raised if
``alembic revision`` is invoked with both ``--sql`` and
``--autogenerate`` or with ``--sql`` without
``revision_environment`` being set.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 247
Added a rule for Postgresql to not render a "drop unique" and "drop index"
given the same name; for now it is assumed that the "index" is the
implicit one PostgreSQL generates. Future integration with
new SQLAlchemy 1.0 features will improve this to be more
resilient.
.. change::
:tags: bug, autogenerate
:tickets: 247
A change in the ordering when columns and constraints are dropped;
autogenerate will now place the "drop constraint" calls *before*
the "drop column" calls, so that columns involved in those constraints
still exist when the constraint is dropped.
.. change::
:tags: feature, commands
New commands added: ``alembic show``, ``alembic heads`` and
``alembic merge``. Also, a new option ``--verbose`` has been
added to several informational commands, such as ``alembic history``,
``alembic current``, ``alembic branches``, and ``alembic heads``.
``alembic revision`` also contains several new options used
within the new branch management system. The output of commands has
been altered in many cases to support new fields and attributes;
the ``history`` command in particular now returns its "verbose" output
only if ``--verbose`` is sent; without this flag it reverts to its
older behavior of short line items (which was never changed in the docs).
.. change::
:tags: changed, commands
The ``--head_only`` option to the ``alembic current`` command is
deprecated; the ``current`` command now lists just the version numbers
alone by default; use ``--verbose`` to get at additional output.
.. change::
:tags: feature, config
Added new argument :paramref:`.Config.config_args`, allows a dictionary
of replacement variables to be passed which will serve as substitution
values when an API-produced :class:`.Config` consumes the ``.ini``
file. Pull request courtesy Noufal Ibrahim.
.. change::
:tags: bug, oracle
:tickets: 245
The Oracle dialect sets "transactional DDL" to False by default,
as Oracle does not support transactional DDL.
.. change::
:tags: bug, autogenerate
:tickets: 243
Fixed a variety of issues surrounding rendering of Python code that
contains unicode literals. The first is that the "quoted_name" construct
that SQLAlchemy uses to represent table and column names as well
as schema names does not ``repr()`` correctly on Py2K when the value
contains unicode characters; therefore an explicit stringification is
added to these. Additionally, SQL expressions such as server defaults
were not being generated in a unicode-safe fashion leading to decode
errors if server defaults contained non-ascii characters.
.. change::
:tags: bug, operations
:tickets: 174
The :meth:`.Operations.add_column` directive will now additionally emit
the appropriate ``CREATE INDEX`` statement if the
:class:`~sqlalchemy.schema.Column` object specifies ``index=True``.
Pull request courtesy David Szotten.
.. change::
:tags: feature, operations
:tickets: 205
The :class:`~sqlalchemy.schema.Table` object is now returned when
the :meth:`.Operations.create_table` method is used. This ``Table``
is suitable for use in subsequent SQL operations, in particular
the :meth:`.Operations.bulk_insert` operation.
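For example (table and data are hypothetical)::

    import sqlalchemy as sa

    from alembic import op

    accounts = op.create_table(
        "accounts",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )
    op.bulk_insert(
        accounts,
        [{"id": 1, "name": "account one"}, {"id": 2, "name": "account two"}],
    )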
.. change::
:tags: feature, autogenerate
:tickets: 203
Indexes and unique constraints are now included in the
:paramref:`.EnvironmentContext.configure.include_object` hook.
Indexes are sent with type ``"index"`` and unique constraints with
type ``"unique_constraint"``.
.. change::
:tags: bug, autogenerate
:tickets: 219
Bound parameters are now resolved as "literal" values within the
SQL expression inside of a CheckConstraint(), when rendering the SQL
as a text string; supported for SQLAlchemy 0.8.0 and forward.
.. change::
:tags: bug, autogenerate
:tickets: 199
Added a workaround for SQLAlchemy issue #3023 (fixed in 0.9.5) where
a column that's part of an explicit PrimaryKeyConstraint would not
have its "nullable" flag set to False, thus producing a false
autogenerate. Also added a related correction to MySQL which will
correct for MySQL's implicit server default of '0' when a NULL integer
column is turned into a primary key column.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 240
Repaired issue related to the fix for #208 and others; a composite
foreign key reported by MySQL would cause a KeyError as Alembic
attempted to remove MySQL's implicitly generated indexes from the
autogenerate list.
.. change::
:tags: bug, autogenerate
:tickets: 28
If the "alembic_version" table is present in the target metadata,
autogenerate will skip this also. Pull request courtesy
Dj Gilcrease.
.. change::
:tags: bug, autogenerate
:tickets: 77
The :paramref:`.EnvironmentContext.configure.version_table`
and :paramref:`.EnvironmentContext.configure.version_table_schema`
arguments are now honored during the autogenerate process, such that
these names will be used as the "skip" names on both the database
reflection and target metadata sides.
.. change::
:tags: changed, autogenerate
:tickets: 229
The default value of the
:paramref:`.EnvironmentContext.configure.user_module_prefix`
parameter is **no longer the same as the SQLAlchemy prefix**.
When omitted, user-defined types will now use the ``__module__``
attribute of the type class itself when rendering in an
autogenerated module.
.. change::
:tags: bug, templates
:tickets: 234
Revision files are now written out using the ``'wb'`` modifier to
``open()``, since Mako reads the templates with ``'rb'``, thus preventing
CRs from being doubled up as has been observed on windows. The encoding
of the output now defaults to 'utf-8', which can be configured using
a newly added config file parameter ``output_encoding``.
.. change::
:tags: bug, operations
:tickets: 230
Added support for use of the :class:`~sqlalchemy.sql.elements.quoted_name`
construct when using the ``schema`` argument within operations. This
allows a name containing a dot to be fully quoted, as well as to
provide configurable quoting on a per-name basis.
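For example, a dotted schema name can be passed as a single quoted token (names are hypothetical)::

    import sqlalchemy as sa
    from sqlalchemy.sql.elements import quoted_name

    from alembic import op

    op.create_table(
        "measurements",
        sa.Column("id", sa.Integer, primary_key=True),
        schema=quoted_name("some.schema", quote=True),
    )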
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 73
Added a routine by which the Postgresql Alembic dialect inspects
the server default of INTEGER/BIGINT columns as they are reflected
during autogenerate for the pattern ``nextval(<name>...)`` containing
a potential sequence name, then queries ``pg_catalog`` to see if this
sequence is "owned" by the column being reflected; if so, it assumes
this is a SERIAL or BIGSERIAL column and the server default is
omitted from the column reflection as well as any kind of
server_default comparison or rendering, along with an INFO message
in the logs indicating this has taken place. This allows SERIAL/BIGSERIAL
columns to keep the SEQUENCE from being unnecessarily present within
the autogenerate operation.
.. change::
:tags: bug, autogenerate
:tickets: 197, 64, 196
The system by which autogenerate renders expressions within
a :class:`~sqlalchemy.schema.Index`, the ``server_default``
of :class:`~sqlalchemy.schema.Column`, and the
``existing_server_default`` of
:meth:`.Operations.alter_column` has been overhauled to anticipate
arbitrary SQLAlchemy SQL constructs, such as ``func.somefunction()``,
``cast()``, ``desc()``, and others. The system does not, as might
be preferred, render the full-blown Python expression as originally
created within the application's source code, as this would be exceedingly
complex and difficult. Instead, it renders the SQL expression against
the target backend that's subject to the autogenerate, and then
renders that SQL inside of a :func:`~sqlalchemy.sql.expression.text`
construct as a literal SQL string. This approach still has the
downside that the rendered SQL construct may not be backend-agnostic
in all cases, so there is still a need for manual intervention in that
small number of cases, but overall the majority of cases should work
correctly now. Big thanks to Carlos Rivera for pull requests and
support on this.
.. change::
:tags: feature
SQLAlchemy's testing infrastructure is now used to run tests.
This system supports both nose and pytest and opens the way
for Alembic testing to support any number of backends, parallel
testing, and 3rd party dialect testing.
.. change::
:tags: changed, compatibility
Minimum SQLAlchemy version is now 0.7.6, however at least
0.8.4 is strongly recommended. The overhaul of the test suite
allows for fully passing tests on all SQLAlchemy versions
from 0.7.6 on forward.
.. change::
:tags: bug, operations
The "match" keyword is not sent to :class:`.ForeignKeyConstraint`
by :meth:`.Operations.create_foreign_key` when SQLAlchemy 0.7 is in use;
this keyword was added to SQLAlchemy as of 0.8.0.
.. changelog::
:version: 0.6.7
:released: September 9, 2014
.. change::
:tags: bug, mssql
Fixed bug in MSSQL dialect where "rename table" wasn't using
``sp_rename()`` as is required on SQL Server. Pull request courtesy
Łukasz Bołdys.
.. change::
:tags: feature
:tickets: 222
Added support for functional indexes when using the
:meth:`.Operations.create_index` directive. Within the list of columns,
the SQLAlchemy ``text()`` construct can be sent, embedding a literal
SQL expression; the :meth:`.Operations.create_index` will perform some hackery
behind the scenes to get the :class:`.Index` construct to cooperate.
This works around some current limitations in :class:`.Index`
which should be resolved on the SQLAlchemy side at some point.
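E.g., a hypothetical lower-case expression index::

    from sqlalchemy import text

    from alembic import op

    op.create_index("ix_account_name_lower", "account", [text("lower(name)")])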
.. changelog::
:version: 0.6.6
:released: August 7, 2014
.. change::
:tags: bug
:tickets: 95
A file named ``__init__.py`` in the ``versions/`` directory is now
ignored by Alembic when the collection of version files is retrieved.
Pull request courtesy Michael Floering.
.. change::
:tags: bug
Fixed Py3K bug where an attempt would be made to sort None against
string values when autogenerate would detect tables across multiple
schemas, including the default schema. Pull request courtesy
paradoxxxzero.
.. change::
:tags: bug
Autogenerate render will render the arguments within a Table construct
using ``*[...]`` when the number of columns/elements is greater than
255. Pull request courtesy Ryan P. Kelly.
.. change::
:tags: bug
Fixed bug where foreign key constraints would fail to render in
autogenerate when a schema name was present. Pull request courtesy
Andreas Zeidler.
.. change::
:tags: bug
:tickets: 212
Some deep-in-the-weeds fixes to try to get "server default" comparison
working better across platforms and expressions, in particular on
the Postgresql backend, mostly dealing with quoting/not quoting of various
expressions at the appropriate time and on a per-backend basis.
Repaired and tested support for such defaults as Postgresql interval
and array defaults.
.. change::
:tags: enhancement
:tickets: 209
When a run of Alembic command line fails due to ``CommandError``,
the output now prefixes the string with ``"FAILED:"``, and the error
is also written to the log output using ``log.error()``.
.. change::
:tags: bug
:tickets: 208
Liberalized even more the check for MySQL indexes that shouldn't be
counted in autogenerate as "drops"; this time it's been reported
that an implicitly created index might be named the same as a composite
foreign key constraint, and not the actual columns, so we now skip those
when detected as well.
.. change::
:tags: feature
Added a new accessor :attr:`.MigrationContext.config`, when used
in conjunction with a :class:`.EnvironmentContext` and
:class:`.Config`, this config will be returned. Patch
courtesy Marc Abramowitz.
.. changelog::
:version: 0.6.5
:released: May 3, 2014
.. change::
:tags: bug, autogenerate, mysql
:tickets: 202
This release's "autogenerate index detection" bug: when a MySQL table
includes an Index with the same name as a column, autogenerate reported
it as an "add" even though it's not; this is because we ignore reflected
indexes of this nature due to MySQL creating them implicitly. Indexes
that are named the same as a column are now ignored on
MySQL if we see that the backend is reporting that it already exists;
this indicates that we can still detect additions of these indexes
but not drops, as we cannot distinguish a backend index named the same
as the column as one that is user-generated or MySQL-generated.
.. change::
:tags: feature, environment
:tickets: 201
Added new feature :paramref:`.EnvironmentContext.configure.transaction_per_migration`,
which when True causes the BEGIN/COMMIT pair to occur for each migration
individually, rather than for the whole series of migrations. This is
to assist with some database directives that need to be within individual
transactions, without the need to disable transactional DDL entirely.
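Enabled within ``env.py``, roughly as follows (``connection`` and ``target_metadata`` assumed to be set up in the usual way)::

    from alembic import context

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        transaction_per_migration=True,
    )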
.. change::
:tags: bug, autogenerate
:tickets: 200
Fixed bug where the ``include_object()`` filter would not receive
the original :class:`.Column` object when evaluating a database-only
column to be dropped; the object would not include the parent
:class:`.Table` nor other aspects of the column that are important
for generating the "downgrade" case where the column is recreated.
.. change::
:tags: bug, environment
:tickets: 195
Fixed bug where :meth:`.EnvironmentContext.get_x_argument`
would fail if the :class:`.Config` in use didn't actually
originate from a command line call.
.. change::
:tags: bug, autogenerate
:tickets: 194
Fixed another bug regarding naming conventions, continuing
from :ticket:`183`, where add_index() and
drop_index() directives would not correctly render the ``f()``
construct when the index contained a convention-driven name.
.. changelog::
:version: 0.6.4
:released: March 28, 2014
.. change::
:tags: bug, mssql
:tickets: 186
Added quoting to the table name when the special EXEC is run to
drop any existing server defaults or constraints when the
:paramref:`.Operations.drop_column.mssql_drop_check` or
:paramref:`.Operations.drop_column.mssql_drop_default`
arguments are used.
.. change::
:tags: bug, mysql
:tickets: 103
Added/fixed support for MySQL "SET DEFAULT" / "DROP DEFAULT" phrases,
which will now be rendered if only the server default is changing
or being dropped (e.g. specify None to alter_column() to indicate
"DROP DEFAULT"). Also added support for rendering MODIFY rather than
CHANGE when the column name isn't changing.
.. change::
:tags: bug
:tickets: 190
Added support for the ``initially``, ``match`` keyword arguments
as well as dialect-specific keyword arguments to
:meth:`.Operations.create_foreign_key`.
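An illustrative call, with hypothetical table and column names::

    from alembic import op

    op.create_foreign_key(
        "fk_address_user",
        "address",
        "user",
        ["user_id"],
        ["id"],
        initially="DEFERRED",
        deferrable=True,
        match="FULL",
    )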
.. change::
:tags: feature
:tickets: 163
Altered the support for "sourceless" migration files (e.g. only
.pyc or .pyo present) so that the flag "sourceless=true" needs to
be in alembic.ini for this behavior to take effect.
.. change::
:tags: bug, mssql
:tickets: 185
The feature that keeps on giving, index/unique constraint autogenerate
detection, has even more fixes, this time to accommodate database dialects
that don't yet report on unique constraints, but
do report unique constraints as indexes. The logic
Alembic uses to distinguish between "this is an index!" vs.
"this is a unique constraint that is also reported as an index!" has now
been further enhanced to not produce unwanted migrations when the dialect
is observed to not yet implement get_unique_constraints() (e.g. mssql).
Note that such a backend will no longer report index drops for unique
indexes, as these cannot be distinguished from an unreported unique
index.
.. change::
:tags: bug
:tickets: 183
Extensive changes have been made to more fully support SQLAlchemy's new
naming conventions feature. Note that while SQLAlchemy has added this
feature as of 0.9.2, some additional fixes in 0.9.4 are needed to
resolve some of the issues:
1. The :class:`.Operations` object now takes into account the naming
conventions that are present on the :class:`.MetaData` object that's
associated using :paramref:`~.EnvironmentContext.configure.target_metadata`.
When :class:`.Operations` renders a constraint directive like
``ADD CONSTRAINT``, it now will make use of this naming convention
when it produces its own temporary :class:`.MetaData` object.
2. Note however that the autogenerate feature in most cases generates
constraints like foreign keys and unique constraints with the
final names intact; the only exception are the constraints implicit
with a schema-type like Boolean or Enum. In most of these cases,
the naming convention feature will not take effect for these constraints
and will instead use the given name as is, with one exception....
3. Naming conventions which use the ``"%(constraint_name)s"`` token, that
is, produce a new name that uses the original name as a component,
will still be pulled into the naming convention converter and be
converted. The problem arises when autogenerate renders a constraint
with its already-generated name present in the migration file's source
code: the name will be doubled up at render time due to the combination
of #1 and #2. So to work around this, autogenerate now renders these
already-tokenized names using the new :meth:`.Operations.f` component.
This component is only generated if **SQLAlchemy 0.9.4** or greater
is in use.
Therefore it is highly recommended that an upgrade to Alembic 0.6.4
be accompanied by an upgrade of SQLAlchemy 0.9.4, if the new naming
conventions feature is used.
.. seealso::
:ref:`autogen_naming_conventions`
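A rendered directive using :meth:`.Operations.f` looks roughly like the following hypothetical example; the ``f()`` wrapper marks the name as final, bypassing any further naming-convention conversion::

    from alembic import op

    op.create_unique_constraint(op.f("uq_user_name"), "user", ["name"])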
.. change::
:tags: bug
:tickets: 160
Suppressed IOErrors which can raise when program output pipe
is closed under a program like ``head``; however this only
works on Python 2. On Python 3, there is not yet a known way to
suppress the BrokenPipeError warnings without prematurely terminating
the program via signals.
.. change::
:tags: bug
:tickets: 179
Fixed bug where :meth:`.Operations.bulk_insert` would not function
properly when :meth:`.Operations.inline_literal` values were used,
either in --sql or non-sql mode. The values will now render
directly in --sql mode. For compatibility with "online" mode,
a new flag :paramref:`~.Operations.bulk_insert.multiinsert`
can be set to False which will cause each parameter set to be
compiled and executed with individual INSERT statements.
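A sketch of the combination, assuming ``accounts`` is a table object available to the migration::

    from alembic import op

    op.bulk_insert(
        accounts,
        [
            {"id": 1, "name": op.inline_literal("name 1")},
            {"id": 2, "name": op.inline_literal("name 2")},
        ],
        multiinsert=False,
    )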
.. change::
:tags: bug, py3k
:tickets: 175
Fixed a failure of the system that allows "legacy keyword arguments"
to be understood, which arose as of a change in Python 3.4 regarding
decorators. A workaround is applied that allows the code to work
across Python 3 versions.
.. change::
:tags: feature
The :func:`.command.revision` command now returns the :class:`.Script`
object corresponding to the newly generated revision. From this
structure, one can get the revision id, the module documentation,
and everything else, for use in scripts that call upon this command.
Pull request courtesy Robbie Coomber.
.. changelog::
:version: 0.6.3
:released: February 2, 2014
.. change::
:tags: bug
:tickets: 172
Added a workaround for when we call ``fcntl.ioctl()`` to get at
``TERMWIDTH``; if the function returns zero, as is reported to occur
in some pseudo-ttys, the message wrapping system is disabled in the
same way as if ``ioctl()`` failed.
.. change::
:tags: feature
:tickets: 171
Added new argument
:paramref:`.EnvironmentContext.configure.user_module_prefix`.
This prefix is applied when autogenerate renders a user-defined type,
which here is defined as any type that is from a module outside of the
``sqlalchemy.`` hierarchy. This prefix defaults to ``None``, in
which case the :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`
is used, thus preserving the current behavior.
.. change::
:tags: bug
:tickets: 170
Added support for autogenerate covering the use case where :class:`.Table`
objects specified in the metadata have an explicit ``schema`` attribute
whose name matches that of the connection's default schema
(e.g. "public" for Postgresql). Previously, it was assumed that "schema"
was ``None`` when it matched the "default" schema, now the comparison
adjusts for this.
.. change::
:tags: bug
The :func:`.compare_metadata` public API function now takes into
account the settings for
:paramref:`.EnvironmentContext.configure.include_object`,
:paramref:`.EnvironmentContext.configure.include_symbol`,
and :paramref:`.EnvironmentContext.configure.include_schemas`, in the
same way that the ``--autogenerate`` command does. Pull
request courtesy Roman Podoliaka.
.. change::
:tags: bug
:tickets: 168
Calling :func:`.bulk_insert` with an empty list will not emit any
commands on the current connection. This was already the case with
``--sql`` mode, so is now the case with "online" mode.
.. change::
:tags: bug
Enabled schema support for index and unique constraint autodetection;
previously these were non-functional and could in some cases lead to
attribute errors. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug
:tickets: 164
More fixes to index autodetection; indexes created with expressions
like DESC or functional indexes will no longer cause AttributeError
exceptions when attempting to compare the columns.
.. change::
:tags: feature
:tickets: 163
The :class:`.ScriptDirectory` system that loads migration files
from a ``versions/`` directory now supports so-called
"sourceless" operation, where the ``.py`` files are not present
and instead ``.pyc`` or ``.pyo`` files are directly present where
the ``.py`` files should be. Note that while Python 3.3 has a
new system of locating ``.pyc``/``.pyo`` files within a directory
called ``__pycache__`` (e.g. PEP-3147), PEP-3147 maintains
support for the "source-less imports" use case, where the
``.pyc``/``.pyo`` files are present in the "old" location, e.g. next
to the ``.py`` file; this is the usage that's supported even when
running Python 3.3.
.. changelog::
:version: 0.6.2
:released: Fri Dec 27 2013
.. change::
:tags: bug
Autogenerate for ``op.create_table()`` will not include a
``PrimaryKeyConstraint()`` that has no columns.
.. change::
:tags: bug
Fixed bug in the not-internally-used :meth:`.ScriptDirectory.get_base`
method which would fail if called on an empty versions directory.
.. change::
:tags: bug
:tickets: 157
An almost-rewrite of the new unique constraint/index autogenerate
detection, to accommodate a variety of issues. The emphasis is on
not generating false positives for those cases where no net change
is present, as these errors are the ones that impact all autogenerate
runs:
* Fixed an issue with unique constraint autogenerate detection where
a named ``UniqueConstraint`` on both sides with column changes would
render with the "add" operation before the "drop", requiring the
user to reverse the order manually.
* Corrected for MySQL's apparent addition of an implicit index
for a foreign key column, so that it doesn't show up as "removed".
This required that the index/constraint autogen system query the
dialect-specific implementation for special exceptions.
* reworked the "dedupe" logic to accommodate MySQL's bi-directional
duplication of unique indexes as unique constraints, and unique
constraints as unique indexes. Postgresql's slightly different
logic of duplicating unique constraints into unique indexes
continues to be accommodated as well. Note that a unique index
or unique constraint removal on a backend that duplicates these may
show up as a distinct "remove_constraint()" / "remove_index()" pair,
which may need to be corrected in the post-autogenerate if multiple
backends are being supported.
* added another dialect-specific exception to the SQLite backend
when dealing with unnamed unique constraints, as the backend can't
currently report on constraints that were made with this technique,
hence they'd come out as "added" on every run.
* the ``op.create_table()`` directive will be auto-generated with
the ``UniqueConstraint`` objects inline, but will not double them
up with a separate ``create_unique_constraint()`` call, which may
have been occurring. Indexes still get rendered as distinct
``op.create_index()`` calls even when the corresponding table was
created in the same script.
* the inline ``UniqueConstraint`` within ``op.create_table()`` includes
all the options like ``deferrable``, ``initially``, etc. Previously
these weren't rendering.
.. change::
:tags: feature, mssql
Added new argument ``mssql_drop_foreign_key`` to
:meth:`.Operations.drop_column`. Like ``mssql_drop_default``
and ``mssql_drop_check``, will do an inline lookup for a
single foreign key which applies to this column, and drop it.
For a column with more than one FK, you'd still need to explicitly
use :meth:`.Operations.drop_constraint` given the name,
even though only MSSQL has this limitation in the first place.
.. change::
:tags: bug, mssql
The MSSQL backend will add the batch separator (e.g. ``"GO"``)
in ``--sql`` mode after the final ``COMMIT`` statement, to ensure
that statement is also processed in batch mode. Courtesy
Derek Harland.
.. changelog::
:version: 0.6.1
:released: Wed Nov 27 2013
.. change::
:tags: bug, mysql
:tickets: 152
Fixed bug where :func:`.op.alter_column` in the MySQL dialect
would fail to apply quotes to column names that had mixed casing
or spaces.
.. change::
:tags: feature
Expanded the size of the "slug" generated by "revision" to 40
characters, which is also configurable by the new field
``truncate_slug_length``; the slug is also now split on whole words
rather than mid-word; courtesy Frozenball.
.. change::
:tags: bug
:tickets: 135
Fixed the output wrapping for Alembic message output, so that
we either get the terminal width for "pretty printing" with
indentation, or if not we just output the text as is; in any
case the text won't be wrapped too short.
.. change::
:tags: bug
Fixes to Py3k in-place compatibility regarding output encoding and
related issues; the use of the new io.* package introduced some
incompatibilities on Py2k. These should be resolved due to the
introduction of new adapter types for translating from io.* to Py2k
file types, including StringIO types.
Thanks to Javier Santacruz for help with this.
.. change::
:tags: bug
:tickets: 145
Fixed py3k bug where the wrong form of ``next()`` was being called
when using the list_templates command. Courtesy Chris Wilkes.
.. change::
:tags: feature
:tickets: 107
Support for autogeneration detection and rendering of indexes and
unique constraints has been added. The logic goes through some effort
in order to differentiate between true unique constraints and
unique indexes, where there are some quirks on backends like Postgresql.
The effort here in producing the feature and tests is courtesy of IJL.
.. change::
:tags: bug
Fixed bug introduced by new ``include_object`` argument where the
inspected column would be misinterpreted when using a user-defined
type comparison function, causing a KeyError or similar expression-related
error. Fix courtesy Maarten van Schaik.
.. change::
:tags: bug
Added the "deferrable" keyword argument to :func:`.op.create_foreign_key`
so that ``DEFERRABLE`` constraint generation is supported; courtesy
Pedro Romano.
.. change::
:tags: bug
:tickets: 137
Ensured that strings going to stdout go through an encode/decode phase,
so that any non-ASCII characters get to the output stream correctly
in both Py2k and Py3k. Also added source encoding detection using
Mako's parse_encoding() routine in Py2k so that the __doc__ of a
non-ascii revision file can be treated as unicode in Py2k.
.. changelog::
:version: 0.6.0
:released: Fri July 19 2013
.. change::
:tags: feature
:tickets: 101
Added new kw argument to :meth:`.EnvironmentContext.configure`
``include_object``. This is a more flexible version of the
``include_symbol`` argument which allows filtering of columns as well as tables
from the autogenerate process,
and in the future will also work for types, constraints and
other constructs. The fully constructed schema object is passed,
including its name and type as well as a flag indicating if the object
is from the local application metadata or is reflected.
.. change::
:tags: feature
The output of the ``alembic history`` command is now
expanded to show information about each change on multiple
lines, including the full top message,
resembling the formatting of git log.
.. change::
:tags: feature
Added :attr:`alembic.config.Config.cmd_opts` attribute,
allows access to the ``argparse`` options passed to the
``alembic`` runner.
.. change::
:tags: feature
:tickets: 120
Added new command line argument ``-x``, allows extra arguments
to be appended to the command line which can be consumed
within an ``env.py`` script by looking at
``context.config.cmd_opts.x``, or more simply a new
method :meth:`.EnvironmentContext.get_x_argument`.
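E.g., given a hypothetical invocation ``alembic -x db_name=testdb upgrade head``, an ``env.py`` could read the value as::

    from alembic import context

    db_name = context.get_x_argument(as_dictionary=True).get("db_name")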
.. change::
:tags: bug
:tickets: 125
Added support for options like "name" etc. to be rendered
within CHECK constraints in autogenerate. Courtesy
Sok Ann Yap.
.. change::
:tags: misc
Source repository has been moved from Mercurial to Git.
.. change::
:tags: bug
Repaired autogenerate rendering of ForeignKeyConstraint
to include use_alter argument, if present.
.. change::
:tags: feature
Added ``-r`` argument to ``alembic history`` command,
allows specification of ``[start]:[end]`` to view
a slice of history. Accepts revision numbers, symbols
"base", "head", a new symbol "current" representing the
current migration, as well as relative ranges for one
side at a time (i.e. ``-r-5:head``, ``-rcurrent:+3``).
Courtesy Atsushi Odagiri for this feature.
.. change::
:tags: feature
:tickets: 55
Source base is now in-place for Python 2.6 through
3.3, without the need for 2to3. Support for Python 2.5
and below has been dropped. Huge thanks to
Hong Minhee for all the effort on this!
.. changelog::
:version: 0.5.0
:released: Thu Apr 4 2013
.. note::
Alembic 0.5.0 now requires at least
version 0.7.3 of SQLAlchemy to run properly.
Support for 0.6 has been dropped.
.. change::
:tags: feature
:tickets: 76
Added ``version_table_schema`` argument
to :meth:`.EnvironmentContext.configure`,
complements the ``version_table`` argument to
set an optional remote schema for the version
table. Courtesy Christian Blume.
.. change::
:tags: bug, postgresql
:tickets: 32
Fixed format of RENAME for table that includes
schema with Postgresql; the schema name shouldn't
be in the "TO" field.
.. change::
:tags: feature
:tickets: 90
Added ``output_encoding`` option to
:meth:`.EnvironmentContext.configure`,
used with ``--sql`` mode to apply an encoding
to the output stream.
.. change::
:tags: feature
:tickets: 93
Added :meth:`.Operations.create_primary_key`
operation, will generate an ADD CONSTRAINT
for a primary key.
.. change::
:tags: bug, mssql
:tickets: 109
Fixed bug whereby double quoting would be applied
to target column name during an ``sp_rename``
operation.
.. change::
:tags: bug, sqlite, mysql
:tickets: 112
The transactional_ddl flag for the SQLite and MySQL dialects is
now set to False. MySQL doesn't support it;
SQLite does, but the current pysqlite driver does not.
.. change::
:tags: feature
:tickets: 115
upgrade and downgrade commands will list the
first line of the docstring out next to the
version number. Courtesy Hong Minhee.
.. change::
:tags: feature
Added --head-only option to "alembic current",
will print the current version plus the symbol
"(head)" if this version is the head.
Courtesy Charles-Axel Dein.
.. change::
:tags: bug
:tickets: 110
Autogenerate will render additional table keyword
arguments like "mysql_engine" and others within
op.create_table().
.. change::
:tags: feature
:tickets: 108
The rendering of any construct during autogenerate
can be customized, in particular to allow special rendering
for user-defined column, constraint subclasses, using new
``render_item`` argument to
:meth:`.EnvironmentContext.configure`.
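A sketch of such a customization; ``MySpecialType`` and the module path are hypothetical, and ``connection`` / ``target_metadata`` are assumed from the usual ``env.py`` scope::

    from alembic import context

    def render_item(type_, obj, autogen_context):
        # emit custom rendering for a hypothetical user-defined type
        if type_ == "type" and isinstance(obj, MySpecialType):
            return "mypackage.MySpecialType()"
        # fall back to default rendering for everything else
        return False

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        render_item=render_item,
    )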
.. change::
:tags: bug
Fixed bug whereby create_index()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
This is the same issue that was fixed for unique
constraints in version 0.3.2.
.. change::
:tags: bug
Worked around a backwards-incompatible regression in Python 3.3
regarding argparse; running "alembic" with no arguments
now yields an informative error in Python 3.3 as with all previous versions.
Courtesy Andrey Antukh.
.. change::
:tags: change
SQLAlchemy 0.6 is no longer supported by Alembic - minimum version is 0.7.3,
full support is as of 0.7.9.
.. change::
:tags: bug
:tickets: 104
A host of argument name changes within migration
operations for consistency. Keyword arguments
will continue to work on the old name for backwards compatibility,
however required positional arguments will not:
:meth:`.Operations.alter_column` - ``name`` -> ``new_column_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.create_index` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_index` - ``tablename`` -> ``table_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.drop_constraint` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_constraint` - ``type`` -> ``type_`` - old
name will work for backwards compatibility
.. changelog::
:version: 0.4.2
:released: Fri Jan 11 2013
.. change::
:tags: bug, autogenerate
:tickets: 99
Fixed bug where autogenerate would fail if a Column
to be added to a table made use of the ".key" parameter.
.. change::
:tags: bug, sqlite
:tickets: 98
The "implicit" constraint generated by a
type such as Boolean or Enum will not generate an
ALTER statement when run on SQLite, which does not
support ALTER for the purpose of adding/removing
constraints separate from the column def itself.
While SQLite supports adding a CHECK constraint
at the column level, SQLAlchemy would need modification
to support this.
A warning is emitted indicating this
constraint cannot be added in this scenario.
.. change::
:tags: bug
:tickets: 96
Added a workaround to setup.py to prevent
"NoneType" error from occurring when
"setup.py test" is run.
.. change::
:tags: bug
:tickets: 96
Added an append_constraint() step to each
condition within
test_autogenerate:AutogenRenderTest.test_render_fk_constraint_kwarg
if the SQLAlchemy version is less than 0.8, as ForeignKeyConstraint
does not auto-append prior to 0.8.
.. change::
:tags: feature
:tickets: 96
Added a README.unittests with instructions for running the test
suite fully.
.. changelog::
:version: 0.4.1
:released: Sun Dec 9 2012
.. change::
:tags: bug
:tickets: 92
Added support for autogenerate render of
ForeignKeyConstraint options onupdate,
ondelete, initially, and deferred.
.. change::
:tags: bug
:tickets: 94
Autogenerate will include "autoincrement=False"
in the rendered table metadata
if this flag was set to false on the source
:class:`.Column` object.
.. change::
:tags: feature
:tickets: 66
Explicit error message describing the case
when downgrade --sql is used without specifying
specific start/end versions.
.. change::
:tags: bug
:tickets: 81
Removed erroneous "emit_events" attribute
from operations.create_table() documentation.
.. change::
:tags: bug
:tickets:
Fixed the minute component in file_template
which returned the month part of the create date.
.. changelog::
:version: 0.4.0
:released: Mon Oct 01 2012
.. change::
:tags: feature
:tickets: 33
Support for tables in alternate schemas
has been added fully to all operations, as well as to
the autogenerate feature. When using autogenerate,
specifying the flag include_schemas=True to
Environment.configure() will also cause autogenerate
to scan all schemas located by Inspector.get_schema_names(),
which is supported by *some* (but not all)
SQLAlchemy dialects including Postgresql.
*Enormous* thanks to Bruno Binet for a huge effort
in implementing as well as writing tests.
.. change::
:tags: feature
:tickets: 70
The command line runner has been organized
into a reusable CommandLine object, so that other
front-ends can re-use the argument parsing built
in.
.. change::
:tags: feature
:tickets: 43
Added "stdout" option to Config, provides
control over where the "print" output of commands like
"history", "init", "current" etc. are sent.
.. change::
:tags: bug
:tickets: 71
Fixed the "multidb" template which was badly out
of date. It now generates revision files using
the configuration to determine the different
upgrade_<xyz>() methods needed as well, instead of
needing to hardcode these. Huge thanks to
BryceLohr for doing the heavy lifting here.
.. change::
:tags: bug
:tickets: 72
Fixed the regexp that was checking for .py files
in the version directory to allow any .py file through.
Previously it was doing some kind of defensive checking,
probably from some early notions of how this directory
works, that was prohibiting various filename patterns
such as those which begin with numbers.
.. change::
:tags: bug
:tickets:
Fixed MySQL rendering for server_default which
didn't work if the server_default was a generated
SQL expression. Courtesy Moriyoshi Koizumi.
.. change::
:tags: feature
:tickets:
Added support for alteration of MySQL
columns that have AUTO_INCREMENT, as well as enabling
this flag. Courtesy Moriyoshi Koizumi.
.. changelog::
:version: 0.3.6
:released: Wed Aug 15 2012
.. change::
:tags: feature
:tickets: 27
Added include_symbol option to
EnvironmentContext.configure(),
specifies a callable which will include/exclude tables
in their entirety from the autogeneration process
based on name.
.. change::
:tags: feature
:tickets: 59
Added year, month, day, hour, minute, second
variables to file_template.
.. change::
:tags: feature
:tickets:
Added 'primary' to the list of constraint types
recognized for MySQL drop_constraint().
.. change::
:tags: feature
:tickets:
Added --sql argument to the "revision" command,
for the use case where the "revision_environment"
config option is being used but SQL access isn't
desired.
.. change::
:tags: bug
:tickets:
Repaired create_foreign_key() for
self-referential foreign keys, which weren't working
at all.
.. change::
:tags: bug
:tickets: 63
'alembic' command reports an informative
error message when the configuration is missing
the 'script_directory' key.
.. change::
:tags: bug
:tickets: 62
Fixes made to the constraints created/dropped
alongside so-called "schema" types such as
Boolean and Enum. The create/drop constraint logic
does not kick in when using a dialect that doesn't
use constraints for these types, such as postgresql,
even when existing_type is specified to
alter_column(). Additionally, the constraints
are not affected if existing_type is passed but
type\_ is not, i.e. there's no net change
in type.
.. change::
:tags: bug
:tickets: 66
Improved error message when specifying
non-ordered revision identifiers to cover
the case when the "higher" rev is None,
improved message overall.
.. changelog::
:version: 0.3.5
:released: Sun Jul 08 2012
.. change::
:tags: bug
:tickets: 31
Fixed issue whereby reflected server defaults
wouldn't be quoted correctly; uses repr() now.
.. change::
:tags: bug
:tickets: 58
Fixed issue whereby when autogenerate would
render create_table() on the upgrade side for a
table that has a Boolean type, an unnecessary
CheckConstraint() would be generated.
.. change::
:tags: feature
:tickets:
Implemented SQL rendering for
CheckConstraint() within autogenerate upgrade,
including for literal SQL as well as SQL Expression
Language expressions.
.. changelog::
:version: 0.3.4
:released: Sat Jun 02 2012
.. change::
:tags: bug
:tickets:
Fixed command-line bug introduced by the
"revision_environment" feature.
.. changelog::
:version: 0.3.3
:released: Sat Jun 02 2012
.. change::
:tags: feature
:tickets:
New config argument
"revision_environment=true", causes env.py to
be run unconditionally when the "revision" command
is run, to support script.py.mako templates with
dependencies on custom "template_args".
.. change::
:tags: feature
:tickets:
Added "template_args" option to configure()
so that an env.py can add additional arguments
to the template context when running the
"revision" command. This requires either --autogenerate
or the configuration directive "revision_environment=true".
.. change::
:tags: bug
:tickets: 44
Added "type" argument to op.drop_constraint(),
and implemented full constraint drop support for
MySQL. CHECK and undefined raise an error.
MySQL needs the constraint type
in order to emit a DROP CONSTRAINT.
.. change::
:tags: feature
:tickets: 34
Added version_table argument to
EnvironmentContext.configure(), allowing for the
configuration of the version table name.
.. change::
:tags: feature
:tickets:
Added support for "relative" migration
identifiers, i.e. "alembic upgrade +2",
"alembic downgrade -1". Courtesy
Atsushi Odagiri for this feature.
.. change::
:tags: bug
:tickets: 49
Fixed bug whereby directories inside of
the template directories, such as __pycache__
on Pypy, would mistakenly be interpreted as
files which are part of the template.
.. changelog::
:version: 0.3.2
:released: Mon Apr 30 2012
.. change::
:tags: feature
:tickets: 40
Basic support for Oracle added,
courtesy shgoh.
.. change::
:tags: feature
:tickets:
Added support for UniqueConstraint
in autogenerate, courtesy Atsushi Odagiri
.. change::
:tags: bug
:tickets:
Fixed support of schema-qualified
ForeignKey target in column alter operations,
courtesy Alexander Kolov.
.. change::
:tags: bug
:tickets:
Fixed bug whereby create_unique_constraint()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
.. changelog::
:version: 0.3.1
:released: Sat Apr 07 2012
.. change::
:tags: bug
:tickets: 41
bulk_insert() fixes:
1. bulk_insert() operation was
not working most likely since the 0.2 series
when used with an engine.
2. Repaired bulk_insert() to complete when
used against a lower-case-t table and executing
with only one set of parameters, working
around SQLAlchemy bug #2461 in this regard.
3. bulk_insert() uses "inline=True" so that phrases
like RETURNING and such don't get invoked for
single-row bulk inserts.
4. bulk_insert() will check that you're passing
a list of dictionaries in, raises TypeError
if not detected.
.. changelog::
:version: 0.3.0
:released: Thu Apr 05 2012
.. change::
:tags: general
:tickets:
The focus of 0.3 is to clean up
and more fully document the public API of Alembic,
including better accessors on the MigrationContext
and ScriptDirectory objects. Methods that are
not considered to be public on these objects have
been underscored, and methods which should be public
have been cleaned up and documented, including:
MigrationContext.get_current_revision()
ScriptDirectory.iterate_revisions()
ScriptDirectory.get_current_head()
ScriptDirectory.get_heads()
ScriptDirectory.get_base()
ScriptDirectory.generate_revision()
.. change::
:tags: feature
:tickets:
Added a bit of autogenerate to the
public API in the form of the function
alembic.autogenerate.compare_metadata.
.. changelog::
:version: 0.2.2
:released: Mon Mar 12 2012
.. change::
:tags: feature
:tickets:
Informative error message when op.XYZ
directives are invoked at module import time.
.. change::
:tags: bug
:tickets: 35
Fixed inappropriate direct call to
util.err() and therefore sys.exit()
when Config failed to locate the
config file within library usage.
.. change::
:tags: bug
:tickets:
Autogenerate will emit CREATE TABLE
and DROP TABLE directives according to
foreign key dependency order.
.. change::
:tags: bug
:tickets:
implement 'tablename' parameter on
drop_index() as this is needed by some
backends.
.. change::
:tags: feature
:tickets:
Added execution_options parameter
to op.execute(), will call execution_options()
on the Connection before executing.
The immediate use case here is to allow
access to the new no_parameters option
in SQLAlchemy 0.7.6, which allows
some DBAPIs (psycopg2, MySQLdb) to allow
percent signs straight through without
escaping, thus providing cross-compatible
operation with DBAPI execution and
static script generation.
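For instance, a hypothetical statement containing a literal percent sign::

    from alembic import op

    op.execute(
        "UPDATE account SET discount = '50%'",
        execution_options={"no_parameters": True},
    )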
.. change::
:tags: bug
:tickets:
setup.py won't install argparse if on
Python 2.7/3.2
.. change::
:tags: feature
:tickets: 29
script_location can be interpreted
by pkg_resources.resource_filename(), if
it is a non-absolute URI that contains
colons. This scheme is the same
one used by Pyramid.
.. change::
:tags: feature
:tickets:
added missing support for
onupdate/ondelete flags for
ForeignKeyConstraint, courtesy Giacomo Bagnoli
.. change::
:tags: bug
:tickets: 30
fixed a regression regarding an autogenerate
error message, as well as various glitches
in the Pylons sample template. The Pylons sample
template requires that you tell it where to
get the Engine from now. courtesy
Marcin Kuzminski
.. change::
:tags: bug
:tickets:
drop_index() ensures a dummy column
is added when it calls "Index", as SQLAlchemy
0.7.6 will warn on index with no column names.
.. changelog::
:version: 0.2.1
:released: Tue Jan 31 2012
.. change::
:tags: bug
:tickets: 26
Fixed the generation of CHECK constraint,
regression from 0.2.0
.. changelog::
:version: 0.2.0
:released: Mon Jan 30 2012
.. change::
:tags: feature
:tickets: 19
API rearrangement allows everything
Alembic does to be represented by contextual
objects, including EnvironmentContext,
MigrationContext, and Operations. Other
libraries and applications can now use
things like "alembic.op" without relying
upon global configuration variables.
The rearrangement was done such that
existing migrations should be OK,
as long as they use the pattern
of "from alembic import context" and
"from alembic import op", as these
are now contextual objects, not modules.
.. change::
:tags: feature
:tickets: 24
The naming of revision files can
now be customized to be some combination
of "rev id" and "slug", the latter of which
is based on the revision message.
By default, the pattern "<rev>_<slug>"
is used for new files. New script files
should include the "revision" variable
for this to work, which is part of
the newer script.py.mako scripts.
.. change::
:tags: bug
:tickets: 25
env.py templates call
connection.close() to better support
programmatic usage of commands; use
NullPool in conjunction with create_engine()
as well so that no connection resources
remain afterwards.
.. change::
:tags: bug
:tickets: 22
fix the config.main() function to honor
the arguments passed, remove no longer used
"scripts/alembic" as setuptools creates this
for us.
.. change::
:tags: bug
:tickets:
Fixed alteration of column type on
MSSQL to not include the keyword "TYPE".
.. change::
:tags: feature
:tickets: 23
Can create alembic.config.Config
with no filename, use set_main_option()
to add values. Also added set_section_option()
which will add sections.
.. changelog::
:version: 0.1.1
:released: Wed Jan 04 2012
.. change::
:tags: bug
:tickets:
Clean up file write operations so that
file handles are closed.
.. change::
:tags: feature
:tickets:
PyPy is supported.
.. change::
:tags: feature
:tickets:
Python 2.5 is supported, needs
__future__.with_statement
.. change::
:tags: bug
:tickets:
Fix autogenerate so that "pass" is
generated between the two comments
if no net migrations were present.
.. change::
:tags: bug
:tickets: 16
Fix autogenerate bug that prevented
correct reflection of a foreign-key
referenced table in the list of "to remove".
.. change::
:tags: bug
:tickets: 17
Fix bug where create_table() didn't
handle self-referential foreign key
correctly
.. change::
:tags: bug
:tickets: 18
Default prefix for autogenerate
directives is "op.", matching the
mako templates.
.. change::
:tags: feature
:tickets: 18
Add alembic_module_prefix argument
to configure() to complement
sqlalchemy_module_prefix.
.. change::
:tags: bug
:tickets: 14
fix quotes not being rendered in
ForeignKeyConstraint during
autogenerate
.. changelog::
:version: 0.1.0
:released: Wed Nov 30 2011
.. change::
:tags:
:tickets:
Initial release. Status of features:
.. change::
:tags:
:tickets:
Alembic is used in at least one production
environment, but should still be considered
ALPHA LEVEL SOFTWARE as of this release,
particularly in that many features are expected
to be missing / unimplemented. Major API
changes are not anticipated but for the moment
nothing should be assumed.
The author asks that you *please* report all
issues, missing features, workarounds etc.
to the bugtracker.
.. change::
:tags:
:tickets:
Python 3 is supported and has been tested.
.. change::
:tags:
:tickets:
The "Pylons" and "MultiDB" environment templates
have not been directly tested - these should be
considered to be samples to be modified as
needed. Multiple database support itself
is well tested, however.
.. change::
:tags:
:tickets:
Postgresql and MS SQL Server environments
have been tested for several weeks in a production
environment. In particular, some involved workarounds
were implemented to allow fully-automated dropping
of default- or constraint-holding columns with
SQL Server.
.. change::
:tags:
:tickets:
MySQL support has also been implemented to a
basic degree, including MySQL's awkward style
of modifying columns being accommodated.
.. change::
:tags:
:tickets:
Other database environments not included among
those three have *not* been tested, *at all*. This
includes Firebird, Oracle, Sybase. Adding
support for these backends should be
straightforward. Please report all missing/
incorrect behaviors to the bugtracker! Patches
are welcome here but are optional - please just
indicate the exact format expected by the target
database.
.. change::
:tags:
:tickets:
SQLite, as a backend, has almost no support for
schema alterations to existing databases. The author
would strongly recommend that SQLite not be used in
a migration context - just dump your SQLite database
into an intermediary format, then dump it back
into a new schema. For dev environments, the
dev installer should be building the whole DB from
scratch. Or just use Postgresql, which is a much
better database for non-trivial schemas.
Requests for full ALTER support on SQLite should be
reported to SQLite's bug tracker at
http://www.sqlite.org/src/wiki?name=Bug+Reports,
as Alembic will not be implementing the
"rename the table to a temptable then copy the
data into a new table" workaround.
Note that Alembic will at some point offer an
extensible API so that you can implement commands
like this yourself.
.. change::
:tags:
:tickets:
Well-tested directives include add/drop table, add/drop
column, including support for SQLAlchemy "schema"
types which generate additional CHECK
constraints, i.e. Boolean, Enum. Other directives not
included here have *not* been strongly tested
in production, i.e. rename table, etc.
.. change::
:tags:
:tickets:
Both "online" and "offline" migrations, the latter
being generated SQL scripts to hand off to a DBA,
have been strongly production tested against
Postgresql and SQL Server.
.. change::
:tags:
:tickets:
Modify column type, default status, nullable, is
functional and tested across PG, MSSQL, MySQL,
but not yet widely tested in production usage.
.. change::
:tags:
:tickets:
Many migrations are still outright missing, i.e.
create/add sequences, etc. As a workaround,
execute() can be used for those which are missing,
though posting of tickets for new features/missing
behaviors is strongly encouraged.
.. change::
:tags:
:tickets:
Autogenerate feature is implemented and has been
tested, though only a little bit in a production setting.
In particular, detection of type and server
default changes is optional and off by default;
each can also be customized by a callable.
Both features work but can have surprises, particularly
the disparity between BIT/TINYINT and boolean,
which hasn't yet been worked around, as well as
format changes performed by the database on defaults
when it reports back. When enabled, the PG dialect
will execute the two defaults to be compared to
see if they are equivalent. Other backends may
need to do the same thing.
The autogenerate feature only generates
"candidate" commands which must be hand-tailored
in any case, so is still a useful feature and
is safe to use. Please report missing/broken features
of autogenerate! This will be a great feature and
will also improve SQLAlchemy's reflection services.
.. change::
:tags:
:tickets:
Support for non-ASCII table, column and constraint
names is mostly nonexistent. This is also a
straightforward feature add as SQLAlchemy itself
supports unicode identifiers; Alembic itself will
likely need fixes to logging, column identification
by key, etc. for full support here.
| jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | some projects don't like changing changelogs, but here it seems it could be valuable to fix | jsoref | 2 |
sqlalchemy/alembic | 1310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
### Description
The misspellings have been reported at https://github.com/jsoref/alembic/actions/runs/6141700632
The action reports that the changes in this PR would make it happy: https://github.com/jsoref/alembic/actions/runs/6141700754
### Checklist
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | docs/build/changelog.rst |
==========
Changelog
==========
.. changelog::
:version: 1.12.1
:include_notes_from: unreleased
.. changelog::
:version: 1.12.0
:released: August 31, 2023
.. change::
:tags: bug, operations
:tickets: 1300
Added support for ``op.drop_constraint()`` to handle PostgreSQL
``ExcludeConstraint`` objects, as well as other constraint-like objects
that may be present in third party dialects, by resolving the ``type_``
parameter to be ``None`` for this case. Autogenerate has also been
enhanced to exclude the ``type_`` parameter from rendering within this
command when ``type_`` is ``None``. Pull request courtesy David Hills.
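For illustration, a minimal sketch of dropping such a constraint
without passing ``type_`` (the constraint and table names here are
hypothetical)::

    from alembic import op

    def upgrade():
        # type_ is omitted and resolves to None, which covers
        # ExcludeConstraint and other constraint-like objects
        op.drop_constraint("excl_room_no_overlap", "room_booking")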
.. change::
:tags: bug, commands
:tickets: 1299
Fixed issue where the ``revision_environment`` directive in ``alembic.ini``
was ignored by the ``alembic merge`` command, leading to issues when other
configurational elements depend upon ``env.py`` being invoked within the
command.
.. change::
:tags: bug, autogenerate
:tickets: 1302
Fixed issue where the ``ForeignKeyConstraint.match`` parameter would not be
rendered in autogenerated migrations. Pull request courtesy Asib
Kamalsada.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Change the default value of
:paramref:`.EnvironmentContext.configure.compare_type` to ``True``.
As Alembic's autogenerate for types was dramatically improved in
version 1.4 released in 2020, the type comparison feature is now much
more reliable so is now enabled by default.
.. change::
:tags: feature, autogenerate
:tickets: 1275
Added new feature to the "code formatter" function which allows standalone
executable tools to be run against code, without going through the Python
interpreter. Known as the ``exec`` runner, it complements the existing
``console_scripts`` runner by allowing non-Python tools such as ``ruff`` to
be used. Pull request courtesy Mihail Milushev.
.. seealso::
:ref:`post_write_hooks_config`
.. changelog::
:version: 1.11.3
:released: August 16, 2023
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 1270
Improved autogenerate compare of expression based indexes on PostgreSQL
to produce fewer wrong detections.
.. change::
:tags: bug, autogenerate
:tickets: 1291
Fixed issue with ``NULLS NOT DISTINCT`` detection in PostgreSQL that
would keep detecting changes in the index or unique constraint.
.. change::
:tags: bug, commands
:tickets: 1273
Added ``encoding="locale"`` setting to the use of Python's
``ConfigParser.read()``, so that a warning is not generated when using the
recently added Python feature ``PYTHONWARNDEFAULTENCODING`` specified in
:pep:`597`. The encoding is passed as the ``"locale"`` string under Python
3.10 and greater, which indicates that the system-level locale should be
used, as was the case already here. Pull request courtesy Kevin Kirsche.
.. changelog::
:version: 1.11.2
:released: August 4, 2023
.. change::
:tags: usecase, typing
:tickets: 1253
Added typing to the default script mako templates.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Added support in autogenerate for ``NULLS NOT DISTINCT`` in
the PostgreSQL dialect.
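As a hedged sketch, assuming SQLAlchemy 2.0 where the
``postgresql_nulls_not_distinct`` flag is available (the table and
index names are hypothetical)::

    import sqlalchemy as sa

    metadata = sa.MetaData()
    user_table = sa.Table(
        "user", metadata,
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("email", sa.String(255)),
    )

    sa.Index(
        "ix_user_email",
        user_table.c.email,
        unique=True,
        postgresql_nulls_not_distinct=True,  # NULL emails compare as equal
    )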
.. change::
:tags: bug
:tickets: 1261
Fixed the format string logged when running a post write hook.
Pull request courtesy of Mathieu Défosse.
.. change::
:tags: feature, operations
:tickets: 151
Added parameters ``if_exists`` and ``if_not_exists`` for index operations.
Pull request courtesy of Max Adrian.
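A minimal sketch of the new parameters in a migration (the table and
index names are hypothetical)::

    from alembic import op

    def upgrade():
        op.create_index(
            "ix_account_name", "account", ["name"], if_not_exists=True
        )

    def downgrade():
        op.drop_index(
            "ix_account_name", table_name="account", if_exists=True
        )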
.. changelog::
:version: 1.11.1
:released: May 17, 2023
.. change::
:tags: bug, autogenerate, regression
:tickets: 1243, 1245
As Alembic 1.11.0 is considered a major release (Alembic does not use
semver, nor does its parent project SQLAlchemy; this has been
:ref:`clarified <versioning_scheme>` in the documentation), change
:ticket:`1130` modified calling signatures for most operations to consider
all optional keyword parameters to be keyword-only arguments, to match what
was always documented and generated by autogenerate. However, two of these
changes were identified as possibly problematic without a more formal
deprecation warning being emitted which were the ``table_name`` parameter
to :meth:`.Operations.drop_index`, which was generated positionally by
autogenerate prior to version 0.6.3 released in 2014, and ``type_`` in
:meth:`.Operations.drop_constraint` and
:meth:`.BatchOperations.drop_constraint`, which was documented positionally
in one example in the batch documentation.
These two signatures have been
restored to allow those particular parameters to be passed positionally. A
future change will include formal deprecation paths (with warnings) for
these arguments where they will again become keyword-only in a future
"Significant Minor" release.
.. change::
:tags: bug, typing
:tickets: 1246
Fixed typing use of :class:`~sqlalchemy.schema.Column` and other
generic SQLAlchemy classes.
.. change::
:tags: bug, typing, regression
:tickets: 1244
Restored the output type of :meth:`.Config.get_section` to include
``Dict[str, str]`` as a potential return type, which had been changed to
immutable ``Mapping[str, str]``. When a section is returned and the default
is not used, a mutable dictionary is returned.
.. changelog::
:version: 1.11.0
:released: May 15, 2023
.. change::
:tags: bug, batch
:tickets: 1237
Added placeholder classes for :class:`~.sqla.Computed` and
:class:`~.sqla.Identity` when older 1.x SQLAlchemy versions are in use,
namely prior to SQLAlchemy 1.3.11 when the :class:`~.sqla.Computed`
construct was introduced. Previously these were set to None, however this
could cause issues with certain codepaths that were using ``isinstance()``
such as one within "batch mode".
.. change::
:tags: bug, batch
:tickets: 1221
Correctly pass previously ignored arguments ``insert_before`` and
``insert_after`` in ``batch_alter_column``.
.. change::
:tags: change, py3k
:tickets: 1130
Argument signatures of Alembic operations now enforce keyword-only
arguments as passed as keyword and not positionally, such as
:paramref:`.Operations.create_table.schema`,
:paramref:`.Operations.add_column.type_`, etc.
.. change::
:tags: bug, postgresql
:tickets: 1230
Fix autogenerate issue with PostgreSQL :class:`.ExcludeConstraint`
that included sqlalchemy functions. The function text was previously
rendered as a plain string without surrounding with ``text()``.
.. change::
:tags: bug, mysql, regression
:tickets: 1240
Fixed regression caused by :ticket:`1166` released in version 1.10.0 which
caused MySQL unique constraints with multiple columns to not compare
correctly within autogenerate, due to different sorting rules on unique
constraints vs. indexes, which in MySQL are shared constructs.
.. change::
:tags: misc
:tickets: 1220
Update code snippets within docstrings to use ``black`` code formatting.
Pull request courtesy of James Addison.
.. change::
:tags: bug, typing
:tickets: 1093
Updated stub generator script to also add stubs method definitions for the
:class:`.Operations` class and the :class:`.BatchOperations` class obtained
from :meth:`.Operations.batch_alter_table`. As part of this change, the
class hierarchy of :class:`.Operations` and :class:`.BatchOperations` has
been rearranged on top of a common base class :class:`.AbstractOperations`
in order to type correctly, as :class:`.BatchOperations` uses different
method signatures for operations than :class:`.Operations`.
.. change::
:tags: bug, typing
Repaired the return signatures for :class:`.Operations` that mostly
return ``None``, and were erroneously referring to ``Optional[Table]``
in many cases.
.. change::
:tags: usecase, commands
:tickets: 1109
Added quiet option to the command line, using the ``-q/--quiet``
option. This flag will prevent alembic from logging anything
to stdout.
.. change::
:tags: bug, autogenerate
:tickets: 1178
Modified the autogenerate implementation for comparing "server default"
values from user-defined metadata to not apply any quoting to the value
before comparing it to the server-reported default, except for within
dialect-specific routines as needed. This change will affect the format of
the server default as passed to the
:paramref:`.EnvironmentContext.configure.compare_server_default` hook, as
well as for third party dialects that implement a custom
``compare_server_default`` hook in their alembic impl, to be passed "as is"
and not including additional quoting. Custom implementations which rely
on this quoting should adjust their approach based on observed formatting.
.. change::
:tags: bug, api, autogenerate
:tickets: 1235
Fixed issue where :func:`.autogenerate.render_python_code` function did not
provide a default value for the ``user_module_prefix`` variable, leading to
``NoneType`` errors when autogenerate structures included user-defined
types. Added new parameter
:paramref:`.autogenerate.render_python_code.user_module_prefix` to allow
this to be set as well as to default to ``None``. Pull request courtesy
tangkikodo.
.. change::
:tags: usecase, asyncio
:tickets: 1231
Added :meth:`.AbstractOperations.run_async` to the operation module to
allow running async functions in the ``upgrade`` or ``downgrade`` migration
function when running alembic using an async dialect. This function will
receive as first argument an
:class:`~sqlalchemy.ext.asyncio.AsyncConnection` sharing the transaction
used in the migration context.
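A sketch of how this might look inside a migration running under an
async dialect (the helper name ``seed_defaults`` is hypothetical)::

    import sqlalchemy as sa
    from alembic import op
    from sqlalchemy.ext.asyncio import AsyncConnection

    async def seed_defaults(connection: AsyncConnection):
        # receives an AsyncConnection sharing the migration transaction
        await connection.execute(sa.text("UPDATE account SET active = true"))

    def upgrade():
        op.run_async(seed_defaults)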
.. changelog::
:version: 1.10.4
:released: April 24, 2023
.. change::
:tags: postgresql, autogenerate, feature
:tickets: 1213
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL sort option, such as ``ASC`` or ``NULLS FIRST``.
The sort options are correctly detected only when defined using the
sqlalchemy modifier functions, such as ``asc()`` or ``nulls_first()``,
or the equivalent methods.
Passing sort options inside the ``postgresql_ops`` dict is not supported.
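For example, a sort option expressed with the SQLAlchemy modifier
functions, which this comparison can detect (the table and index names
are hypothetical)::

    import sqlalchemy as sa

    metadata = sa.MetaData()
    event = sa.Table(
        "event", metadata,
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("created_at", sa.DateTime),
    )

    sa.Index(
        "ix_event_created",
        sa.nulls_first(event.c.created_at.desc()),
    )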
.. change::
:tags: bug, operations
:tickets: 1215
Fixed issue where using a directive such as ``op.create_foreign_key()`` to
create a self-referential constraint on a single table where the same
column were present on both sides (e.g. within a composite foreign key)
would produce an error under SQLAlchemy 2.0 and a warning under SQLAlchemy
1.4 indicating that a duplicate column were being added to a table.
.. changelog::
:version: 1.10.3
:released: April 5, 2023
.. change::
:tags: bug, typing
:tickets: 1191, 1201
Fixed various typing issues observed with pyright, including issues
involving the combination of :class:`.Function` and
:meth:`.MigrationContext.begin_transaction`.
.. change::
:tags: bug, autogenerate
:tickets: 1212
Fixed error raised by alembic when running autogenerate after removing
a function based index.
.. changelog::
:version: 1.10.2
:released: March 8, 2023
.. change::
:tags: bug, ops
:tickets: 1196
Fixed regression where Alembic would not run with older SQLAlchemy 1.3
versions prior to 1.3.24 due to a missing symbol. Workarounds have been
applied for older 1.3 versions.
.. changelog::
:version: 1.10.1
:released: March 6, 2023
.. change::
:tags: bug, postgresql
:tickets: 1184
Fixed issue regarding PostgreSQL :class:`.ExcludeConstraint`, where
constraint elements which made use of :func:`.literal_column` could not be
rendered for autogenerate. Additionally, using SQLAlchemy 2.0.5 or greater,
:func:`.text()` constructs are also supported within PostgreSQL
:class:`.ExcludeConstraint` objects for autogenerate render. Pull request
courtesy Jan Katins.
.. change::
:tags: bug, batch, regression
:tickets: 1195
Fixed regression for 1.10.0 where :class:`.Constraint` objects were
suddenly required to have non-None name fields when using batch mode, which
was not previously a requirement.
.. changelog::
:version: 1.10.0
:released: March 5, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1166
Fixed issue in index detection where autogenerate change detection would
consider indexes with the same columns but with different order as equal,
while in general they are not equivalent in how a database will use them.
.. change::
:tags: feature, revisioning
:tickets: 760
Recursive traversal of revision files in a particular revision directory is
now supported, by indicating ``recursive_version_locations = true`` in
alembic.ini. Pull request courtesy ostr00000.
.. change::
:tags: bug, autogenerate, sqlite
:tickets: 1165
Fixed issue where indexes on SQLite which include SQL expressions would not
compare correctly, generating false positives under autogenerate. These
indexes are now skipped, generating a warning, in the same way that
expression-based indexes on PostgreSQL are skipped and generate warnings
when SQLAlchemy 1.x installations are in use. Note that reflection of
SQLite expression-based indexes continues to not yet be supported under
SQLAlchemy 2.0, even though PostgreSQL expression-based indexes have now
been implemented.
.. change::
:tags: bug, mssql
:tickets: 1187
Properly escape constraint name on SQL Server when dropping
a column while specifying ``mssql_drop_default=True`` or
``mssql_drop_check=True`` or ``mssql_drop_foreign_key=True``.
.. change::
:tags: usecase, autogenerate, postgresql
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL expressions, when using SQLAlchemy 2.0; the previous warning
that such indexes were skipped are removed when the new functionality
is in use. When using SQLAlchemy versions prior to the 2.0 series,
the indexes continue to be skipped with a warning.
.. changelog::
:version: 1.9.4
:released: February 16, 2023
.. change::
:tags: bug, mssql
:tickets: 1177
Ongoing fixes for SQL Server server default comparisons under autogenerate,
adjusting for SQL Server's collapsing of whitespace between SQL function
arguments when reporting on a function-based server default, as well as its
arbitrary addition of parenthesis within arguments; the approach has now
been made more aggressive by stripping all whitespace, parenthesis, and
quoting characters from the two default strings before comparing them.
.. change::
:tags: bug, postgresql
Fixed PostgreSQL server default comparison to handle SQL expressions
sent as ``text()`` constructs, such as ``text("substring('name', 1, 3)")``,
which previously would raise errors when attempting to run a server-based
comparison.
.. change::
:tags: bug, autogenerate
:tickets: 1180
Removed a mis-use of the
:paramref:`.EnvironmentContext.configure.render_item` callable where the
"server_default" renderer would be erroneously used within the server
default comparison process, which is working against SQL expressions, not
Python code.
.. change::
:tags: bug, commands
Fixed regression introduced in 1.7.0 where the "config" object passed to
the template context when running the :func:`.merge` command
programmatically failed to be correctly populated. Pull request courtesy
Brendan Gann.
.. changelog::
:version: 1.9.3
:released: February 7, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1167
Fixed issue where rendering of user-defined types that then went on to use
the ``.with_variant()`` method would fail to render, if using SQLAlchemy
2.0's version of variants.
.. changelog::
:version: 1.9.2
:released: January 14, 2023
.. change::
:tags: bug, typing
:tickets: 1146, 1147
Fixed typing definitions for :meth:`.EnvironmentContext.get_x_argument`.
Typing stubs are now generated for overloaded proxied methods such as
:meth:`.EnvironmentContext.get_x_argument`.
.. change::
:tags: bug, autogenerate
:tickets: 1152
Fixed regression caused by :ticket:`1145` where the string transformations
applied to server defaults caused expressions such as ``(getdate())`` to no
longer compare as equivalent on SQL Server, among others.
.. changelog::
:version: 1.9.1
:released: December 23, 2022
.. change::
:tags: bug, autogenerate
:tickets: 1145
Fixed issue where server default compare would not work for string defaults
that contained backslashes, due to mis-rendering of these values when
comparing their contents.
.. change::
:tags: bug, oracle
Implemented basic server default comparison for the Oracle backend;
previously, Oracle's formatting of reflected defaults prevented any
matches from occurring.
.. change::
:tags: bug, sqlite
Adjusted SQLite's compare server default implementation to better handle
defaults with or without parens around them, from both the reflected and
the local metadata side.
.. change::
:tags: bug, mssql
Adjusted SQL Server's compare server default implementation to better
handle defaults with or without parens around them, from both the reflected
and the local metadata side.
.. changelog::
:version: 1.9.0
:released: December 15, 2022
.. change::
:tags: feature, commands
:tickets: 724
Added new Alembic command ``alembic check``. This performs the widely
requested feature of running an "autogenerate" comparison between the
current database and the :class:`.MetaData` that's currently set up for
autogenerate, returning an error code if the two do not match, based on
current autogenerate settings. Pull request courtesy Nathan Louie.
.. seealso::
:ref:`alembic_check`
.. change::
:tags: bug, tests
Fixed issue in tox.ini file where changes in the tox 4.0 series to the
format of "passenv" caused tox to not function correctly, in particular
raising an error as of tox 4.0.6.
.. change::
:tags: bug, typing
:tickets: 1110
Fixed typing issue where :paramref:`.revision.process_revision_directives`
was not fully typed; additionally ensured all ``Callable`` and ``Dict``
arguments to :meth:`.EnvironmentContext.configure` include parameters in
the typing declaration.
Additionally updated the codebase for Mypy 0.990 compliance.
.. changelog::
:version: 1.8.1
:released: July 13, 2022
.. change::
:tags: bug, sqlite
:tickets: 1065
Fixed bug where the SQLite implementation of
:meth:`.Operations.rename_table` would render an explicit schema name for
both the old and new table name, which while is the standard ALTER syntax,
is not accepted by SQLite's syntax which doesn't support a rename across
schemas. In particular, the syntax issue would prevent batch mode from
working for SQLite databases that made use of attached databases (which are
treated as "schemas" in SQLAlchemy).
.. change::
:tags: bug, batch
:tickets: 1021
Added an error raise for the condition where
:meth:`.Operations.batch_alter_table` is used in ``--sql`` mode, where the
operation requires table reflection, as is the case when running against
SQLite without giving it a fixed ``Table`` object. Previously the operation
would fail with an internal error. To get a "move and copy" batch
operation as a SQL script without connecting to a database,
a ``Table`` object should be passed to the
:paramref:`.Operations.batch_alter_table.copy_from` parameter so that
reflection may be skipped.
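A sketch of supplying ``copy_from`` so that an offline ``--sql`` run
can proceed without reflection (the columns shown are hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    reference = sa.Table(
        "account",
        sa.MetaData(),
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )

    def upgrade():
        with op.batch_alter_table(
            "account", copy_from=reference
        ) as batch_op:
            batch_op.drop_column("name")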
.. changelog::
:version: 1.8.0
:released: May 31, 2022
.. change::
:tags: feature, typing
:tickets: 764
:pep:`484` typing annotations have been added to the ``env.py`` and
revision template files within migration templates. Pull request by Nikita
Sobolev.
.. change::
:tags: usecase, operations
:tickets: 1037
The ``op.drop_table()`` operation directive will now trigger the
``before_drop()`` and ``after_drop()`` DDL event hooks at the table level,
which is similar to how the ``before_create()`` and ``after_create()``
hooks are triggered by the ``op.create_table()`` directive. Note that as
``op.drop_table()`` accepts only a table name and optional schema name, the
``Table`` object received by the event will not have any information within
it other than the table name and schema name.
.. change::
:tags: installation, changed
:tickets: 1025
Alembic 1.8 now supports Python 3.7 and above.
.. change::
:tags: changed, environment
:tickets: 987
The "Pylons" environment template has been removed as of Alembic 1.8. This
template was based on the very old pre-Pyramid Pylons web framework which
has been long superseded by Pyramid.
.. change::
:tags: bug, revisioning
:tickets: 1026
Fixed issue where a downgrade using a relative revision would
fail in case of multiple branches with a single effectively
head due to interdependencies between revisions.
.. change::
:tags: usecase, commands
:tickets: 1027
Added new token ``epoch`` to the ``file_template`` option, which will
populate the integer epoch as determined by ``int(create_date.timestamp())``.
Pull request courtesy Caio Carvalho.
.. change::
:tags: bug, batch
:tickets: 1034
Fixed issue in batch mode where CREATE INDEX would not use a new column
name in the case of a column rename.
.. changelog::
:version: 1.7.7
:released: March 14, 2022
.. change::
:tags: bug, operations
:tickets: 1004
Fixed issue where using :meth:`.Operations.create_table` in conjunction
with a :class:`.CheckConstraint` that referred to table-bound
:class:`.Column` objects rather than string expressions would be added to
the parent table potentially multiple times, resulting in an incorrect DDL
sequence. Pull request courtesy Nicolas CANIART.
.. change::
:tags: bug, environment
:tickets: 986
The ``logging.fileConfig()`` line in ``env.py`` templates, which is used
to setup Python logging for the migration run, is now conditional on
:attr:`.Config.config_file_name` not being ``None``. Otherwise, the line
is skipped as there is no default logging configuration present.
.. change::
:tags: bug, mssql
:tickets: 977
Fixed bug where an :meth:`.Operations.alter_column` operation would change
a "NOT NULL" column to "NULL" by emitting an ALTER COLUMN statement that
did not specify "NOT NULL". (In the absence of "NOT NULL" T-SQL was
implicitly assuming "NULL"). An :meth:`.Operations.alter_column` operation
that specifies :paramref:`.Operations.alter_column.type` should also
include either :paramref:`.Operations.alter_column.nullable` or
:paramref:`.Operations.alter_column.existing_nullable` to inform Alembic as
to whether the emitted DDL should include "NULL" or "NOT NULL"; a warning
is now emitted if this is missing under this scenario.
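As a hedged example, an ``alter_column()`` that changes the type while
also stating the existing nullability, so that the emitted DDL retains
NOT NULL (the table and column names are hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    op.alter_column(
        "account",
        "name",
        type_=sa.String(100),
        existing_nullable=False,  # renders NOT NULL in the ALTER COLUMN
    )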
.. changelog::
:version: 1.7.6
:released: February 1, 2022
.. change::
:tags: bug, batch, regression
:tickets: 982
Fixed regression where usage of a ``with_variant()`` datatype in
conjunction with the ``existing_type`` option of ``op.alter_column()``
under batch mode would lead to an internal exception.
.. change::
:tags: usecase, commands
:tickets: 964
Add a new command ``alembic ensure_version``, which will ensure that the
Alembic version table is present in the target database, but does not
alter its contents. Pull request courtesy Kai Mueller.
.. change::
:tags: bug, autogenerate
Implemented support for recognizing and rendering SQLAlchemy "variant"
types going forward into SQLAlchemy 2.0, where the architecture of
"variant" datatypes will be changing.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 968
Added a rule to the MySQL impl so that the translation between JSON /
LONGTEXT is accommodated by autogenerate, treating LONGTEXT from the server
as equivalent to an existing JSON in the model.
.. change::
:tags: mssql
Removed a warning raised by SQLAlchemy when dropping constraints
on MSSQL regarding statement caching.
.. changelog::
:version: 1.7.5
:released: November 11, 2021
.. change::
:tags: bug, tests
Adjustments to the test suite to accommodate for error message changes
occurring as of SQLAlchemy 1.4.27.
.. changelog::
:version: 1.7.4
:released: October 6, 2021
.. change::
:tags: bug, regression
:tickets: 934
Fixed a regression that prevented the use of post write hooks
on Python versions lower than 3.9.
.. change::
:tags: bug, environment
:tickets: 944
Fixed issue where the :meth:`.MigrationContext.autocommit_block` feature
would fail to function when using a SQLAlchemy engine using 2.0 future
mode.
.. changelog::
:version: 1.7.3
:released: September 17, 2021
.. change::
:tags: bug, mypy
:tickets: 914
Fixed type annotations for the "constraint_name" argument of operations
``create_primary_key()``, ``create_foreign_key()``. Pull request courtesy
TilmanK.
.. changelog::
:version: 1.7.2
:released: September 17, 2021
.. change::
:tags: bug, typing
:tickets: 900
Added missing attributes from context stubs.
.. change::
:tags: bug, mypy
:tickets: 897
Fixed an import in one of the .pyi files that was triggering an
assertion error in some versions of mypy.
.. change::
:tags: bug, regression, ops
:tickets: 920
Fixed issue where registration of custom ops was prone to failure due to
the registration process running ``exec()`` on generated code that as of
the 1.7 series includes pep-484 annotations, which in the case of end user
code would result in name resolution errors when the exec occurs. The logic
in question has been altered so that the annotations are rendered as
forward references so that the ``exec()`` can proceed.
.. changelog::
:version: 1.7.1
:released: August 30, 2021
.. change::
:tags: bug, installation
:tickets: 893
Corrected "universal wheel" directive in setup.cfg so that building a wheel
does not target Python 2. The PyPi files index for 1.7.0 was corrected
manually. Pull request courtesy layday.
.. change::
:tags: bug, pep484
:tickets: 895
Fixed issue in generated .pyi files where default values for ``Optional``
arguments were missing, thereby causing mypy to consider them as required.
.. change::
:tags: bug, regression, batch
:tickets: 896
Fixed regression in batch mode due to :ticket:`883` where the "auto" mode
of batch would fail to accommodate any additional migration directives
beyond encountering an ``add_column()`` directive, due to a mis-application
of the conditional logic that was added as part of this change, leading to
"recreate" mode not being used in cases where it is required for SQLite
such as for unique constraints.
.. changelog::
:version: 1.7.0
:released: August 30, 2021
.. change::
:tags: bug, operations
:tickets: 879
Fixed regression due to :ticket:`803` where the ``.info`` and ``.comment``
attributes of ``Table`` would be lost inside of the :class:`.DropTableOp`
class, which when "reversed" into a :class:`.CreateTableOp` would then have
lost these elements. Pull request courtesy Nicolas CANIART.
.. change::
:tags: feature, environment
:tickets: 842
Enhance ``version_locations`` parsing to handle paths containing spaces.
The new configuration option ``version_path_separator`` specifies the
character to use when splitting the ``version_locations`` string. The
default for new configurations is ``version_path_separator = os``,
which will use ``os.pathsep`` (e.g., ``;`` on Windows).
.. change::
:tags: installation, changed
Alembic 1.7 now supports Python 3.6 and above; support for prior versions
including Python 2.7 has been dropped.
.. change::
:tags: bug, sqlite, batch
:tickets: 883
Batch "auto" mode will now select for "recreate" if the ``add_column()``
operation is used on SQLite, and the column itself meets the criteria for
SQLite where ADD COLUMN is not allowed, in this case a functional or
parenthesized SQL expression or a ``Computed`` (i.e. generated) column.
.. change::
:tags: changed, installation
:tickets: 674
Make the ``python-dateutil`` library an optional dependency.
This library is only required if the ``timezone`` option
is used in the Alembic configuration.
An extra require named ``tz`` is available with
``pip install alembic[tz]`` to install it.
.. change::
:tags: bug, commands
:tickets: 856
Re-implemented the ``python-editor`` dependency as a small internal
function to avoid the need for external dependencies.
.. change::
:tags: usecase, batch
:tickets: 884
Named CHECK constraints are now supported by batch mode, and will
automatically be part of the recreated table assuming they are named. They
also can be explicitly dropped using ``op.drop_constraint()``. For
"unnamed" CHECK constraints, these are still skipped as they cannot be
distinguished from the CHECK constraints that are generated by the
``Boolean`` and ``Enum`` datatypes.
Note that this change may require adjustments to migrations that drop or
rename columns which feature an associated named check constraint, such
that an additional ``op.drop_constraint()`` directive should be added for
that named constraint as there will no longer be an associated column
for it; for the ``Boolean`` and ``Enum`` datatypes, an ``existing_type``
keyword may be passed to ``BatchOperations.drop_constraint`` as well.
.. seealso::
:ref:`batch_schematype_constraints`
:ref:`batch_check_constraints`
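A minimal sketch of explicitly dropping a named CHECK constraint in
batch mode (the constraint and table names are hypothetical); per the
note above, an ``existing_type`` keyword may also be passed for
``Boolean``/``Enum``-generated constraints::

    from alembic import op

    with op.batch_alter_table("account") as batch_op:
        batch_op.drop_constraint("ck_account_active", type_="check")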
.. change::
:tags: changed, installation
:tickets: 885
The dependency on ``pkg_resources`` which is part of ``setuptools`` has
been removed, so there is no longer any runtime dependency on
``setuptools``. The functionality has been replaced with
``importlib.metadata`` and ``importlib.resources`` which are both part of
Python std.lib, or via the PyPI dependency ``importlib-metadata`` for Python
version < 3.8 and ``importlib-resources`` for Python version < 3.9
(while importlib.resources was added to Python in 3.7, it did not include
the "files" API until 3.9).
.. change::
:tags: feature, tests
:tickets: 855
Created a "test suite" similar to the one for SQLAlchemy, allowing
developers of third-party dialects to test their code against a set of
Alembic tests that have been specially selected to exercise
back-end database operations. At the time of release,
third-party dialects that have adopted the Alembic test suite to verify
compatibility include
`CockroachDB <https://pypi.org/project/sqlalchemy-cockroachdb/>`_ and
`SAP ASE (Sybase) <https://pypi.org/project/sqlalchemy-sybase/>`_.
.. change::
:tags: bug, postgresql
:tickets: 874
Fixed issue where usage of the PostgreSQL ``postgresql_include`` option
within a :meth:`.Operations.create_index` would raise a KeyError, as the
additional column(s) need to be added to the table object used by the
construct internally. The issue is equivalent to the SQL Server issue fixed
in :ticket:`513`. Pull request courtesy Steven Bronson.
.. change::
:tags: feature, general
pep-484 type annotations have been added throughout the library.
Additionally, stub .pyi files have been added for the "dynamically"
generated Alembic modules ``alembic.op`` and ``alembic.config``, which
include complete function signatures and docstrings, so that the functions
in these namespaces will have both IDE support (vscode, pycharm, etc) as
well as support for typing tools like Mypy. The files themselves are
statically generated from their source functions within the source tree.
.. changelog::
:version: 1.6.5
:released: May 27, 2021
.. change::
:tags: bug, autogenerate
:tickets: 849
Fixed issue where dialect-specific keyword arguments within the
:class:`.DropIndex` operation directive would not render in the
autogenerated Python code. As support was improved for adding dialect
specific arguments to directives as part of :ticket:`803`, in particular
arguments such as "postgresql_concurrently" which apply to the actual
create/drop of the index, support was needed for these to render even in a
drop index operation. Pull request courtesy Jet Zhou.
.. changelog::
:version: 1.6.4
:released: May 24, 2021
.. change::
:tags: bug, regression, op directives
:tickets: 848
Fixed regression caused by the just-fixed :ticket:`844`, which scaled back
the filter for ``unique=True/index=True`` too far such that these directives
no longer worked for the ``op.create_table()`` op; this has been fixed.
.. changelog::
:version: 1.6.3
:released: May 21, 2021
.. change::
:tags: bug, regression, autogenerate
:tickets: 844
Fixed 1.6-series regression where ``UniqueConstraint`` and to a lesser
extent ``Index`` objects would be doubled up in the generated model when
the ``unique=True`` / ``index=True`` flags were used.
.. change::
:tags: bug, autogenerate
:tickets: 839
Fixed a bug where paths defined in post-write hook options
would be wrongly escaped in non-POSIX environments (Windows).
.. change::
:tags: bug, regression, versioning
:tickets: 843
Fixed regression where a revision file that contained its own down revision
as a dependency would cause an endless loop in the traversal logic.
.. changelog::
:version: 1.6.2
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 839
Fixed additional regression nearly the same as that of :ticket:`838` just
released in 1.6.1 but within a slightly different codepath, where "alembic
downgrade head" (or equivalent) would fail instead of iterating no
revisions.
.. changelog::
:version: 1.6.1
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 838
Fixed regression in new revisioning traversal where "alembic downgrade
base" would fail if the database itself were clean and unversioned;
additionally repairs the case where downgrade would fail if attempting
to downgrade to the current head that is already present.
.. changelog::
:version: 1.6.0
:released: May 3, 2021
.. change::
:tags: bug, autogenerate
:tickets: 803
Refactored the implementation of :class:`.MigrateOperation` constructs such
as :class:`.CreateIndexOp`, :class:`.CreateTableOp`, etc. so that they no
longer rely upon maintaining a persistent version of each schema object
internally; instead, the state variables of each operation object will be
used to produce the corresponding construct when the operation is invoked.
The rationale is so that environments which make use of
operation-manipulation schemes such as those discussed in
:ref:`autogen_rewriter` are better supported, allowing end-user code to
manipulate the public attributes of these objects which will then be
expressed in the final output, an example is
``some_create_index_op.kw["postgresql_concurrently"] = True``.
Previously, these objects when generated from autogenerate would typically
hold onto the original, reflected element internally without honoring the
other state variables of each construct, preventing the public API from
working.
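For instance, a sketch of such manipulation using the rewriter hook
from the linked section, setting a dialect keyword on generated index
operations::

    from alembic.autogenerate import rewriter
    from alembic.operations import ops

    writer = rewriter.Rewriter()

    @writer.rewrites(ops.CreateIndexOp)
    def add_concurrently(context, revision, op):
        # mutate the op's public state; it is honored when the op is invoked
        op.kw["postgresql_concurrently"] = True
        return op

The writer would then typically be passed to
:paramref:`.EnvironmentContext.configure.process_revision_directives`
in ``env.py``.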
.. change::
:tags: bug, environment
:tickets: 829
Fixed regression caused by the SQLAlchemy 1.4/2.0 compatibility switch
where calling ``.rollback()`` or ``.commit()`` explicitly within the
``context.begin_transaction()`` context manager would cause it to fail when
the block ended, as it did not expect that the transaction was manually
closed.
.. change::
:tags: bug, autogenerate
:tickets: 827
Improved the rendering of ``op.add_column()`` operations when adding
multiple columns to an existing table, so that the order of these
statements matches the order in which the columns were declared in the
application's table metadata. Previously the added columns were being
sorted alphabetically.
.. change::
:tags: feature, autogenerate
:tickets: 819
Fix the documentation regarding the default command-line argument position of
the revision script filename within the post-write hook arguments. Implement a
``REVISION_SCRIPT_FILENAME`` token, enabling the position to be changed. Switch
from ``str.split()`` to ``shlex.split()`` for more robust command-line argument
parsing.
.. change::
:tags: feature
:tickets: 822
Implement a ``.cwd`` (current working directory) suboption for post-write hooks
(of type ``console_scripts``). This is useful for tools like pre-commit, which
rely on the working directory to locate the necessary config files. Add
pre-commit as an example to the documentation. Minor change: rename some variables
from ticket #819 to improve readability.
.. change::
:tags: bug, versioning
:tickets: 765, 464
The algorithm used for calculating downgrades/upgrades/iterating
revisions has been rewritten, to resolve ongoing issues of branches
not being handled consistently particularly within downgrade operations,
as well as for overall clarity and maintainability. This change includes
that a deprecation warning is emitted if an ambiguous command such
as "downgrade -1" when multiple heads are present is given.
In particular, the change implements a long-requested use case of allowing
downgrades of a single branch to a branchpoint.
Huge thanks to Simon Bowly for their impressive efforts in successfully
tackling this very difficult problem.
.. change::
:tags: bug, batch
:tickets: 799
Added missing ``batch_op.create_table_comment()``,
``batch_op.drop_table_comment()`` directives to batch ops.
.. changelog::
:version: 1.5.8
:released: March 23, 2021
.. change::
:tags: bug, environment
:tickets: 816
Fixed regression caused by SQLAlchemy 1.4 where the "alembic current"
command would fail due to changes in the ``URL`` object.
.. changelog::
:version: 1.5.7
:released: March 11, 2021
.. change::
:tags: bug, autogenerate
:tickets: 813
Adjusted the recently added
:paramref:`.EnvironmentContext.configure.include_name` hook to accommodate
for additional object types such as "views" that don't have a parent table,
to support third party recipes and extensions. Pull request courtesy Oliver
Rice.
.. changelog::
:version: 1.5.6
:released: March 5, 2021
.. change::
:tags: bug, mssql, operations
:tickets: 812
Fixed bug where the "existing_type" parameter, which the MSSQL dialect
requires in order to change the nullability of a column in the absence of
also changing the column type, would cause an ALTER COLUMN operation to
incorrectly render a second ALTER statement without the nullability if a
new type were also present, as the MSSQL-specific contract did not
anticipate all three of "nullability", ``"type_"`` and "existing_type" being
sent at the same time.
.. change::
:tags: template
:tickets: 805
Add async template to Alembic to bootstrap environments that use
async DBAPI. Updated the cookbook to include a migration guide
on how to adapt an existing environment for use with DBAPI drivers.
.. changelog::
:version: 1.5.5
:released: February 20, 2021
.. change::
:tags: bug
Adjusted the use of SQLAlchemy's ".copy()" internals to use "._copy()"
for version 1.4.0, as this method is being renamed.
.. change::
:tags: bug, environment
:tickets: 797
Added new config file option ``prepend_sys_path``, which is a series of
paths that will be prepended to sys.path; the default value in newly
generated alembic.ini files is ".". This fixes a long-standing issue
where for some reason running the alembic command line would not place the
local "." path in sys.path, meaning an application locally present in "."
and importable through normal channels, e.g. python interpreter, pytest,
etc. would not be located by Alembic, even though the ``env.py`` file is
loaded relative to the current path when ``alembic.ini`` contains a
relative path. To enable for existing installations, add the option to the
alembic.ini file as follows::
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
.. seealso::
:ref:`installation` - updated documentation reflecting that local
installation of the project is not necessary if running the Alembic cli
from the local path.
.. changelog::
:version: 1.5.4
:released: February 3, 2021
.. change::
:tags: bug, versioning
:tickets: 789
Fixed bug in versioning model where a downgrade across a revision with a
dependency on another branch, yet an ancestor is also dependent on that
branch, would produce an erroneous state in the alembic_version table,
making upgrades impossible without manually repairing the table.
.. changelog::
:version: 1.5.3
:released: January 29, 2021
.. change::
:tags: bug, autogenerate
:tickets: 786
Changed the default ordering of "CREATE" and "DROP" statements indexes and
unique constraints within the autogenerate process, so that for example in
an upgrade() operation, a particular index or constraint that is to be
replaced such as for a casing convention change will not produce any naming
conflicts. For foreign key constraint objects, this is already how
constraints are ordered, and for table objects, users would normally want
to use :meth:`.Operations.rename_table` in any case.
.. change::
:tags: bug, autogenerate, mssql
:tickets: 787
Fixed assorted autogenerate issues with SQL Server:
* ignore default reflected identity on primary_key columns
* improve server default comparison
.. change::
:tags: bug, mysql, autogenerate
:tickets: 788
Fixed issue where autogenerate rendering of ``op.alter_column()`` would
fail to include MySQL ``existing_nullable=False`` if the column were part
of a primary key constraint within the table metadata.
.. changelog::
:version: 1.5.2
:released: January 20, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 784
Fixed regression where new "loop detection" feature introduced in
:ticket:`757` produced false positives for revision names that have
overlapping substrings between revision number and down revision and/or
dependency, if the downrev/dependency were not in sequence form.
.. change::
:tags: bug, environment
:tickets: 782
Fixed regression where Alembic would fail to create a transaction properly
if the :class:`sqlalchemy.engine.Connection` were a so-called "branched"
connection, that is, one where the ``.connect()`` method had been called to
create a "sub" connection.
.. changelog::
:version: 1.5.1
:released: January 19, 2021
.. change::
:tags: bug, installation, commands
:tickets: 780
Fixed installation issue where the "templates" directory was not being
installed, preventing commands like "list_templates" and "init" from
working.
.. changelog::
:version: 1.5.0
:released: January 18, 2021
.. change::
:tags: usecase, operations
:tickets: 730
Added support for rendering of "identity" elements on
:class:`.Column` objects, supported in SQLAlchemy via
the :class:`.Identity` element introduced in version 1.4.
Adding columns with identity is supported on PostgreSQL,
MSSQL and Oracle. Changing the identity options or removing
it is supported only on PostgreSQL and Oracle.
.. change::
:tags: changed, environment
To accommodate SQLAlchemy 1.4 and 2.0, the migration model now no longer
assumes that the SQLAlchemy Connection will autocommit an individual
operation. This essentially means that for databases that use
non-transactional DDL (pysqlite current driver behavior, MySQL), there is
still a BEGIN/COMMIT block that will surround each individual migration.
Databases that support transactional DDL should continue to have the
same flow, either per migration or per-entire run, depending on the
value of the :paramref:`.Environment.configure.transaction_per_migration`
flag.
.. change::
:tags: changed, environment
A :class:`.CommandError` is raised if a ``sqlalchemy.engine.Engine`` is
passed to the :meth:`.MigrationContext.configure` method instead of a
``sqlalchemy.engine.Connection`` object. Previously, this would be a
warning only.
.. change::
:tags: bug, operations
:tickets: 753
Modified the ``add_column()`` operation such that the ``Column`` object in
use is shallow copied to a new instance if that ``Column`` is already
attached to a ``table()`` or ``Table``. This accommodates for the change
made in SQLAlchemy issue #5618 which prohibits a ``Column`` from being
associated with multiple ``table()`` objects. This resumes support for
using a ``Column`` inside of an Alembic operation that already refers to a
parent ``table()`` or ``Table`` as well as allows operation objects just
autogenerated to work.
.. change::
:tags: feature, autogenerate
:tickets: 650
Added new hook :paramref:`.EnvironmentContext.configure.include_name`,
which complements the
:paramref:`.EnvironmentContext.configure.include_object` hook by providing
a means of preventing objects of a certain name from being autogenerated
**before** the SQLAlchemy reflection process takes place, and notably
includes explicit support for passing each schema name when
:paramref:`.EnvironmentContext.configure.include_schemas` is set to True.
This is most important especially for environments that make use of
:paramref:`.EnvironmentContext.configure.include_schemas` where schemas are
actually databases (e.g. MySQL) in order to prevent reflection sweeps of
the entire server.
.. seealso::
:ref:`autogenerate_include_hooks` - new documentation section
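A sketch of the hook following the pattern in the linked section (the
schema names here are hypothetical)::

    from alembic import context

    def include_name(name, type_, parent_names):
        if type_ == "schema":
            # filter schemas up front, before any reflection sweep occurs
            return name in ["public", "reporting"]
        return True

    context.configure(
        # ... other configure arguments ...
        include_schemas=True,
        include_name=include_name,
    )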
.. change::
:tags: removed, autogenerate
The long deprecated
:paramref:`.EnvironmentContext.configure.include_symbol` hook is removed.
The :paramref:`.EnvironmentContext.configure.include_object`
and :paramref:`.EnvironmentContext.configure.include_name`
hooks both achieve the goals of this hook.
.. change::
:tags: bug, autogenerate
:tickets: 721
Added rendering for the ``Table.prefixes`` element to autogenerate so that
the rendered Python code includes these directives. Pull request courtesy
Rodrigo Ce Moretto.
.. change::
:tags: bug, batch
:tickets: 761
Added missing "create comment" feature for columns that are altered in
batch migrations.
.. change::
:tags: changed
:tickets: 748
Alembic 1.5.0 now supports **Python 2.7 and Python 3.6 and above**, as well
as **SQLAlchemy 1.3.0 and above**. Support is removed for Python 3
versions prior to 3.6 and SQLAlchemy versions prior to the 1.3 series.
.. change::
:tags: bug, batch
:tickets: 773
Made an adjustment to the PostgreSQL dialect to allow it to work more
effectively in batch mode, where a datatype like Boolean or non-native Enum
that may have embedded rules to generate CHECK constraints will be more
correctly handled in that these constraints usually will not have been
generated on the PostgreSQL backend; previously it would inadvertently
assume they existed unconditionally in a special PG-only "drop constraint"
step.
.. change::
:tags: feature, versioning
:tickets: 757
The revision tree is now checked for cycles and loops between revision
files when the revision environment is loaded up. Scenarios such as a
revision pointing to itself, or a revision that can reach itself via a
loop, are handled and will raise the :class:`.CycleDetected` exception when
the environment is loaded (expressed from the Alembic commandline as a
failure message and nonzero return code). Previously, these situations were
silently ignored up front, and the behavior of revision traversal would
either be silently incorrect, or would produce errors such as
:class:`.RangeNotAncestorError`. Pull request courtesy Koichiro Den.
.. change::
:tags: usecase, commands
Add ``__main__.py`` file to alembic package to support invocation
with ``python -m alembic``.
.. change::
:tags: removed, commands
Removed deprecated ``--head_only`` option to the ``alembic current``
command
.. change::
:tags: removed, operations
Removed legacy parameter names from operations; these have been emitting
warnings since version 0.8. In the case that legacy version files have not
yet been updated, these can be modified directly in order to maintain
compatibility (a brief before/after sketch follows the list):
* :meth:`.Operations.drop_constraint` - "type" (use ``"type_"``) and "name"
(use "constraint_name")
* :meth:`.Operations.create_primary_key` - "cols" (use "columns") and
"name" (use "constraint_name")
* :meth:`.Operations.create_unique_constraint` - "name" (use
"constraint_name"), "source" (use "table_name") and "local_cols" (use
"columns")
* :meth:`.Operations.batch_create_unique_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_foreign_key` - "name" (use "constraint_name"),
"source" (use "source_table"), "referent" (use "referent_table")
* :meth:`.Operations.batch_create_foreign_key` - "name" (use
"constraint_name"), "referent" (use "referent_table")
* :meth:`.Operations.create_check_constraint` - "name" (use
"constraint_name"), "source" (use "table_name")
* :meth:`.Operations.batch_create_check_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_index` - "name" (use "index_name")
* :meth:`.Operations.drop_index` - "name" (use "index_name"), "tablename"
(use "table_name")
* :meth:`.Operations.batch_drop_index` - "name" (use "index_name"),
* :meth:`.Operations.create_table` - "name" (use "table_name")
* :meth:`.Operations.drop_table` - "name" (use "table_name")
* :meth:`.Operations.alter_column` - "name" (use "new_column_name")
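As referenced above, a hedged before/after sketch for one of these,
``op.drop_constraint()`` (the constraint and table names are
hypothetical)::

    from alembic import op

    # legacy spelling, removed in 1.5.0:
    # op.drop_constraint(name="fk_user_group", type="foreignkey",
    #                    table_name="user")

    # current spelling:
    op.drop_constraint(
        constraint_name="fk_user_group",
        table_name="user",
        type_="foreignkey",
    )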
.. changelog::
:version: 1.4.3
:released: September 11, 2020
.. change::
:tags: bug, sqlite, batch
:tickets: 711
Added support to drop named CHECK constraints that are specified as part of
a column, rather than table wide. Previously, only constraints associated
with the table were considered.
.. change::
:tags: bug, ops, mysql
:tickets: 736
Fixed issue where the MySQL dialect would not correctly render the server
default of a column in an alter operation, if the operation were
programmatically generated from an autogenerate pass as it would not
accommodate for the full structure of the DefaultClause construct.
.. change::
:tags: bug, sqlite, batch
:tickets: 697
Fixed issue where the CAST applied to a JSON column when copying a SQLite
table during batch mode would cause the data to be lost, as SQLite's CAST
with JSON appears to convert the data to the value "0". The CAST is now
skipped in a dialect-specific manner, including for JSON columns on SQLite.
Pull request courtesy Sebastián Ramírez.
.. change::
:tags: bug, commands
:tickets: 694
The ``alembic current`` command no longer creates an ``alembic_version``
table in the database if one does not exist already, returning no version
as the current version. This allows checking for migrations in parallel
without introducing race conditions. Pull request courtesy Nikolay
Edigaryev.
.. change::
:tags: bug, batch
Fixed issue where columns in a foreign-key referenced table would be
replaced with null-type columns during a batch operation; while this did
not generally have any side effects, it could theoretically impact a batch
operation that also targets that table directly and also would interfere
with future changes to the ``.append_column()`` method to disallow implicit
replacement of columns.
.. change::
:tags: bug, mssql
:tickets: 716
Fixed issue where the ``mssql_drop_foreign_key=True`` flag on
``op.drop_column`` would lead to incorrect syntax error due to a typo in the
SQL emitted, same typo was present in the test as well so it was not
detected. Pull request courtesy Oleg Shigorin.
.. changelog::
:version: 1.4.2
:released: March 19, 2020
.. change::
:tags: usecase, autogenerate
:tickets: 669
Adjusted autogen comparison to accommodate for backends that support
computed column reflection, dependent on SQLAlchemy version 1.3.16 or
higher. This emits a warning if the SQL expression inside of a
:class:`.Computed` value changes between the metadata and the database, as
these expressions can't be changed without dropping and recreating the
column.
.. change::
:tags: bug, tests
:tickets: 668
Fixed an issue that prevented the test suite from running with the
recently released py.test 5.4.0.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 671
Fixed more false-positive failures produced by the new "compare type" logic
first added in :ticket:`605`, particularly impacting MySQL string types
regarding flags such as "charset" and "collation".
.. change::
:tags: bug, op directives, oracle
:tickets: 670
Fixed issue in Oracle backend where a table RENAME with a schema-qualified
name would include the schema in the "to" portion, which is rejected by
Oracle.
.. changelog::
:version: 1.4.1
:released: March 1, 2020
.. change::
:tags: bug, autogenerate
:tickets: 661
Fixed regression caused by the new "type comparison" logic introduced in
1.4 as part of :ticket:`605` where comparisons of MySQL "unsigned integer"
datatypes would produce false positives, as the regular expression logic
was not correctly parsing the "unsigned" token when MySQL's default display
width would be returned by the database. Pull request courtesy Paul
Becotte.
.. change::
:tags: bug, environment
:tickets: 663
Error message for "path doesn't exist" when loading up script environment
now displays the absolute path. Pull request courtesy Rowan Hart.
.. change::
:tags: bug, autogenerate
:tickets: 654
Fixed regression in 1.4.0 due to :ticket:`647` where unique constraint
comparison with mixed case constraint names while not using a naming
convention would produce false positives during autogenerate.
.. change::
:tags: bug, environment
The check for matched rowcount when the alembic_version table is updated or
deleted from is now conditional based on whether or not the dialect
supports the concept of "rowcount" for UPDATE or DELETE rows matched. Some
third party dialects do not support this concept. Pull request courtesy Ke
Zhu.
.. change::
:tags: bug, operations
:tickets: 655
Fixed long-standing bug where an inline column CHECK constraint would not
be rendered within an "ADD COLUMN" operation. The DDL compiler is now
consulted for inline constraints within the :meth:`.Operations.add_column`
method as is done for regular CREATE TABLE operations.
.. changelog::
:version: 1.4.0
:released: February 4, 2020
.. change::
:tags: change
The internal inspection routines no longer use SQLAlchemy's
``Inspector.from_engine()`` method, which is expected to be deprecated in
1.4. The ``inspect()`` function is now used.
.. change::
:tags: bug, autogenerate
:tickets: 647
Adjusted the unique constraint comparison logic in a similar manner as that
of :ticket:`421` did for indexes in order to take into account SQLAlchemy's
own truncation of long constraint names when a naming convention is in use.
Without this step, a name that is truncated by SQLAlchemy based on a unique
constraint naming convention or hardcoded name will not compare properly.
.. change::
:tags: feature, batch
:tickets: 640
Added new parameters :paramref:`.BatchOperations.add_column.insert_before`,
:paramref:`.BatchOperations.add_column.insert_after` which provide for
establishing the specific position in which a new column should be placed.
Also added :paramref:`.Operations.batch_alter_table.partial_reordering`
which allows the complete set of columns to be reordered when the new table
is created. Both operations apply only to when batch mode is recreating
the whole table using ``recreate="always"``. Thanks to Marcin Szymanski
for assistance with the implementation.
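For illustration, a minimal sketch of the new placement parameters against a
hypothetical table (all names are illustrative)::

    from sqlalchemy import Column, String
    from alembic import op

    # batch mode must recreate the table for the placement to apply
    with op.batch_alter_table("account", recreate="always") as batch_op:
        batch_op.add_column(
            Column("middle_name", String(50)),
            insert_after="first_name",
        )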
.. change::
:tags: usecase, environment
:tickets: 648
Moved the use of the ``__file__`` attribute at the base of the Alembic
package into the one place that it is specifically needed, which is when
the config attempts to locate the template directory. This helps to allow
Alembic to be fully importable in environments that are using Python
memory-only import schemes. Pull request courtesy layday.
.. change::
:tags: bug, autogenerate
:tickets: 605
A major rework of the "type comparison" logic is in place which changes the
entire approach by which column datatypes are compared. Types are now
compared based on the DDL string generated by the metadata type vs. the
datatype reflected from the database. This means we compare types based on
what would actually render and additionally if elements of the types change
like string length, those changes are detected as well. False positives
like those generated between SQLAlchemy Boolean and MySQL TINYINT should
also be resolved. Thanks very much to Paul Becotte for lots of hard work
and patience on this one.
.. seealso::
:ref:`autogenerate_detects` - updated comments on type comparison
.. changelog::
:version: 1.3.3
:released: January 22, 2020
.. change::
:tags: bug, postgresql
:tickets: 637
Fixed issue where COMMENT directives for PostgreSQL failed to correctly
include an explicit schema name, as well as correct quoting rules for
schema, table, and column names. Pull request courtesy Matthew Sills.
.. change::
:tags: usecase, operations
:tickets: 624
Added support for rendering of "computed" elements on :class:`.Column`
objects, supported in SQLAlchemy via the new :class:`.Computed` element
introduced in version 1.3.11. Pull request courtesy Federico Caselli.
Note that there is currently no support for ALTER COLUMN to add, remove, or
modify the "GENERATED ALWAYS AS" element from a column; at least for
PostgreSQL, it does not seem to be supported by the database. Additionally,
SQLAlchemy does not currently reliably reflect the "GENERATED ALWAYS AS"
phrase from an existing column, so there is also no autogenerate support
for addition or removal of the :class:`.Computed` element to or from an
existing column, there is only support for adding new columns that include
the :class:`.Computed` element. In the case that the :class:`.Computed`
element is removed from the :class:`.Column` object in the table metadata,
PostgreSQL and Oracle currently reflect the "GENERATED ALWAYS AS"
expression as the "server default" which will produce an op that tries to
drop the element as a default.
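As a hedged sketch of the supported case, adding a new column that carries a
:class:`.Computed` element (table, columns, and expression are hypothetical)::

    from sqlalchemy import Column, Integer, Computed
    from alembic import op

    def upgrade():
        # renders e.g. "total INTEGER GENERATED ALWAYS AS (quantity * price)"
        op.add_column(
            "order_line",
            Column("total", Integer, Computed("quantity * price")),
        )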
.. changelog::
:version: 1.3.2
:released: December 16, 2019
.. change::
:tags: bug, api, autogenerate
:tickets: 635
Fixed regression introduced by :ticket:`579` where server default rendering
functions began to require a dialect implementation, however the
:func:`.render_python_code` convenience function did not include one, thus
causing the function to fail when used in a server default context. The
function now accepts a migration context argument and also creates one
against the default dialect if one is not provided.
.. changelog::
:version: 1.3.1
:released: November 13, 2019
.. change::
:tags: bug, mssql
:tickets: 621
Fixed bug in MSSQL dialect where the drop constraint execution steps used
to remove server default or implicit foreign key constraint failed to take
into account the schema name of the target table.
.. changelog::
:version: 1.3.0
:released: October 31, 2019
.. change::
:tags: feature, command
:tickets: 608
Added support for ALEMBIC_CONFIG environment variable,
refers to the location of the alembic configuration script
in lieu of using the -c command line option.
.. change::
:tags: bug, autogenerate
:tickets: 131
Fixed bug in new Variant autogenerate where the order of the arguments to
Variant were mistakenly reversed.
.. change::
:tags: change, compatibility
Some internal modifications have been made to how the names of indexes and
unique constraints work to make use of new functions added in SQLAlchemy
1.4, so that SQLAlchemy has more flexibility over how naming conventions
may be applied to these objects.
.. changelog::
:version: 1.2.1
:released: September 24, 2019
.. change::
:tags: bug, command
:tickets: 601
Reverted the name change of the "revisions" argument to
:func:`.command.stamp` to "revision" as apparently applications are
calling upon this argument as a keyword name. Pull request courtesy
Thomas Bechtold. Special translations are also added to the command
line interface so that it is still known as "revisions" in the CLI.
.. change::
:tags: bug, tests
:tickets: 592
Removed the "test requirements" from "setup.py test", as this command now
only emits a removal error in any case and these requirements are unused.
.. changelog::
:version: 1.2.0
:released: September 20, 2019
.. change::
:tags: feature, command
:tickets: 473
Added new ``--purge`` flag to the ``alembic stamp`` command, which will
unconditionally erase the version table before stamping anything. This is
useful for development where non-existent version identifiers might be left
within the table. Additionally, ``alembic.stamp`` now supports a list of
revision identifiers, which are intended to allow setting up multiple heads
at once. Overall handling of version identifiers within the
``alembic.stamp`` command has been improved with many new tests and
use cases added.
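A sketch of the equivalent Python API usage, assuming an ``alembic.ini`` in
the current directory and hypothetical revision identifiers::

    from alembic import command
    from alembic.config import Config

    cfg = Config("alembic.ini")
    # unconditionally erase the version table, then stamp two heads at once
    command.stamp(cfg, ["aaaa1111", "bbbb2222"], purge=True)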
.. change::
:tags: bug, autogenerate
:tickets: 550
Improved the Python rendering of a series of migration operations such that
a single "pass" is rendered for a :class:`.UpgradeOps` or
:class:`.DowngradeOps` based on if no lines of Python code actually
rendered under the operation, rather than whether or not sub-directives
exist. Removed extra "pass" lines that would generate from the
:class:`.ModifyTableOps` directive so that these aren't duplicated under
operation rewriting scenarios.
.. change::
:tags: feature, runtime
:tickets: 123
Added new feature :meth:`.MigrationContext.autocommit_block`, a special
directive which will provide for a non-transactional block inside of a
migration script. The feature requires that the database driver
(e.g. DBAPI) supports the AUTOCOMMIT isolation mode. The directive
also necessarily needs to COMMIT the existing transaction in progress
in order to enter autocommit mode.
.. seealso::
:meth:`.MigrationContext.autocommit_block`
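A minimal sketch of the directive inside a migration script (the index DDL is
illustrative)::

    from alembic import op

    def upgrade():
        # runs the statement outside of any transaction, via AUTOCOMMIT
        with op.get_context().autocommit_block():
            op.execute(
                "CREATE INDEX CONCURRENTLY ix_account_name ON account (name)"
            )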
.. change::
:tags: change, py3k
Python 3.4 support is dropped, as the upstream tooling (pip, mysqlclient)
etc are already dropping support for Python 3.4, which itself is no longer
maintained.
.. change::
:tags: usecase, autogenerate
:tickets: 518
Added autogenerate support for :class:`.Column` objects that have
dialect-specific ``**kwargs``, support first added in SQLAlchemy 1.3.
This includes SQLite "on conflict" as well as options used by some
third party dialects.
.. change::
:tags: usecase, autogenerate
:tickets: 131
Added rendering for SQLAlchemy ``Variant`` datatypes, which render as the
base type plus one or more ``.with_variant()`` method calls.
.. change::
:tags: usecase, commands
:tickets: 534
Made the command interface revision lookup behavior more strict in that an
Alembic revision number is only resolved based on partial match rules if
it has at least four characters, to prevent simple typographical issues
from inadvertently running migrations.
.. change::
:tags: feature, commands
:tickets: 307
Added "post write hooks" to revision generation. These allow custom logic
to run after a revision Python script is generated, typically for the
purpose of running code formatters such as "Black" or "autopep8", but may
be used for any arbitrary post-render hook as well, including custom Python
functions or scripts. The hooks are enabled by providing a
``[post_write_hooks]`` section in the alembic.ini file. A single hook
is provided which runs an arbitrary Python executable on the newly
generated revision script, which can be configured to run code formatters
such as Black; full examples are included in the documentation.
.. seealso::
:ref:`post_write_hooks`
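A hedged sketch of such a section in ``alembic.ini``, running Black on each
newly generated revision file (the options shown are illustrative)::

    [post_write_hooks]
    hooks = black
    black.type = console_scripts
    black.entrypoint = black
    black.options = -l 79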
.. change::
:tags: feature, environment
:tickets: 463
Added new flag ``--package`` to ``alembic init``. For environments where
the Alembic migration files and such are within the package tree and
importable as modules, this flag can be specified which will add the
additional ``__init__.py`` files in the version location and the
environment location.
.. change::
:tags: bug, autogenerate
:tickets: 549
Fixed bug where rendering of comment text for table-level comments within
:meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` was not properly quote-escaped
within rendered Python code for autogenerate.
.. change::
:tags: bug, autogenerate
:tickets: 505
Modified the logic of the :class:`.Rewriter` object such that it keeps a
memoization of which directives it has processed, so that it can ensure it
processes a particular directive only once, and additionally fixed
:class:`.Rewriter` so that it functions correctly for multiple-pass
autogenerate schemes, such as the one illustrated in the "multidb"
template. By tracking which directives have been processed, a
multiple-pass scheme which calls upon the :class:`.Rewriter` multiple times
for the same structure as elements are added can work without running
duplicate operations on the same elements more than once.
.. changelog::
:version: 1.1.0
:released: August 26, 2019
.. change::
:tags: change
Alembic 1.1 bumps the minimum version of SQLAlchemy to 1.1. As was the
case before, Python requirements remain at Python 2.7, or in the 3.x series
Python 3.4.
.. change::
:tags: change, internals
The test suite for Alembic now makes use of SQLAlchemy's testing framework
directly. Previously, Alembic had its own version of this framework that
was mostly copied from that of SQLAlchemy to enable testing with older
SQLAlchemy versions. The majority of this code is now removed so that both
projects can leverage improvements from a common testing framework.
.. change::
:tags: bug, commands
:tickets: 562
Fixed bug where the double-percent logic applied to some dialects such as
psycopg2 would be rendered in ``--sql`` mode, by allowing dialect options
to be passed through to the dialect used to generate SQL and then providing
``paramstyle="named"`` so that percent signs need not be doubled. For
users having this issue, existing env.py scripts need to add
``dialect_opts={"paramstyle": "named"}`` to their offline
context.configure(). See the ``alembic/templates/generic/env.py`` template
for an example.
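A sketch of the offline block described above, following the generic
``env.py`` template::

    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        dialect_opts={"paramstyle": "named"},
    )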
.. change::
:tags: bug, py3k
Fixed use of the deprecated "imp" module, which is used to detect pep3147
availability as well as to locate .pyc files, which started emitting
deprecation warnings during the test suite. The warnings were not being
emitted earlier during the test suite; the change is possibly due to
changes in py.test itself, but this is not clear. The check for pep3147 is
set to True for any Python version 3.5 or greater now and importlib is used
when available. Note that some dependencies such as distutils may still be
emitting this warning. Tests are adjusted to accommodate for dependencies
that emit the warning as well.
.. change::
:tags: bug, mysql
:tickets: 594
Fixed issue where emitting a change of column name for MySQL did not
preserve the column comment, even if it were specified as existing_comment.
.. change::
:tags: bug, setup
:tickets: 592
Removed the "python setup.py test" feature in favor of a straight run of
"tox". Per Pypa / pytest developers, "setup.py" commands are in general
headed towards deprecation in favor of tox. The tox.ini script has been
updated such that running "tox" with no arguments will perform a single run
of the test suite against the default installed Python interpreter.
.. seealso::
https://github.com/pypa/setuptools/issues/1684
https://github.com/pytest-dev/pytest/issues/5534
.. change::
:tags: usecase, commands
:tickets: 571
The "alembic init" command will now proceed if the target directory exists
as long as it's still empty. Previously, it would not proceed if the
directory existed. The new behavior is modeled from what git does, to
accommodate for container or other deployments where an Alembic target
directory may need to be already mounted instead of being created with
alembic init. Pull request courtesy Aviskar KC.
.. changelog::
:version: 1.0.11
:released: June 25, 2019
.. change::
:tags: bug, sqlite, autogenerate, batch
:tickets: 579
SQLite server default reflection will ensure parentheses surround a
column default expression that is detected as being a non-constant
expression, such as a ``datetime()`` default, to accommodate for the
requirement that SQL expressions have to be parenthesized when being sent
as DDL. Parentheses are not added to constant expressions to allow for
maximum cross-compatibility with other dialects and existing test suites
(such as Alembic's), which necessarily entails scanning the expression to
detect constant numeric and string values. The logic is added to the
two "reflection->DDL round trip" paths which are currently autogenerate and
batch migration. Within autogenerate, the logic is on the rendering side,
whereas in batch the logic is installed as a column reflection hook.
.. change::
:tags: bug, sqlite, autogenerate
:tickets: 579
Improved SQLite server default comparison to accommodate for a ``text()``
construct that added parenthesis directly vs. a construct that relied
upon the SQLAlchemy SQLite dialect to render the parenthesis, as well
as improved support for various forms of constant expressions such as
values that are quoted vs. non-quoted.
.. change::
:tags: bug, autogenerate
Fixed bug where the "literal_binds" flag was not being set when
autogenerate would create a server default value, meaning server default
comparisons would fail for functions that contained literal values.
.. change::
:tags: bug, mysql
:tickets: 554
Added support for MySQL "DROP CHECK", which is added as of MySQL 8.0.16,
separate from MariaDB's "DROP CONSTRAINT" for CHECK constraints. The MySQL
Alembic implementation now checks for "MariaDB" in server_version_info to
decide which one to use.
.. change::
:tags: bug, mysql, operations
:tickets: 564
Fixed issue where MySQL databases need to use CHANGE COLUMN when altering a
server default of CURRENT_TIMESTAMP, NOW() and probably other functions
that are only usable with DATETIME/TIMESTAMP columns. While MariaDB
supports both CHANGE and ALTER COLUMN in this case, MySQL databases only
support CHANGE. So the new logic is that if the server default change is
against a DateTime-oriented column, the CHANGE format is used
unconditionally, as in the vast majority of cases the server default is to
be CURRENT_TIMESTAMP which may also be potentially bundled with an "ON
UPDATE CURRENT_TIMESTAMP" directive, which SQLAlchemy does not currently
support as a distinct field. The fix additionally improves the server
default comparison logic when the "ON UPDATE" clause is present and
there are parenthesis to be adjusted for as is the case on some MariaDB
versions.
.. change::
:tags: bug, environment
Warnings emitted by Alembic now include a default stack level of 2, and in
some cases it's set to 3, in order to help warnings indicate more closely
where they are originating from. Pull request courtesy Ash Berlin-Taylor.
.. change::
:tags: bug, py3k
:tickets: 563
Replaced the Python compatibility routines for ``getargspec()`` with a fully
vendored version based on ``getfullargspec()`` from Python 3.3.
Originally, Python was emitting deprecation warnings for this function in
Python 3.8 alphas. While this change was reverted, it was observed that
Python 3 implementations for ``getfullargspec()`` are an order of magnitude
slower as of the 3.4 series where it was rewritten against ``Signature``.
While Python plans to improve upon this situation, SQLAlchemy projects for
now are using a simple replacement to avoid any future issues.
.. changelog::
:version: 1.0.10
:released: April 28, 2019
.. change::
:tags: bug, commands
:tickets: 552
Fixed bug introduced in release 0.9.0 where the helptext for commands
inadvertently got expanded to include function docstrings from the
command.py module. The logic has been adjusted to only refer to the first
line(s) preceding the first line break within each docstring, as was the
original intent.
.. change::
:tags: bug, operations, mysql
:tickets: 551
Added an assertion in :meth:`.RevisionMap.get_revisions` and other methods
which ensures revision numbers are passed as strings or collections of
strings. Driver issues particularly on MySQL may inadvertently be passing
bytes here which leads to failures later on.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 553
Fixed bug when using the
:paramref:`.EnvironmentContext.configure.compare_server_default` flag set
to ``True`` where a server default that is introduced in the table metadata
on an ``Integer`` column, where there is no existing server default in the
database, would raise a ``TypeError``.
.. changelog::
:version: 1.0.9
:released: April 15, 2019
.. change::
:tags: bug, operations
:tickets: 548
Simplified the internal scheme used to generate the ``alembic.op`` namespace
to no longer attempt to generate full method signatures (e.g. rather than
generic ``*args, **kw``) as this was not working in most cases anyway, while
in rare circumstances it would in fact sporadically have access to the real
argument names and then fail when generating the function due to missing
symbols in the argument signature.
.. changelog::
:version: 1.0.8
:released: March 4, 2019
.. change::
:tags: bug, operations
:tickets: 528
Removed use of deprecated ``force`` parameter for SQLAlchemy quoting
functions as this parameter will be removed in a future release.
Pull request courtesy Parth Shandilya(ParthS007).
.. change::
:tags: bug, autogenerate, postgresql, py3k
:tickets: 541
Fixed issue where server default comparison on the PostgreSQL dialect would
fail for a blank string on Python 3.7 only, due to a change in regular
expression behavior in Python 3.7.
.. changelog::
:version: 1.0.7
:released: January 25, 2019
.. change::
:tags: bug, autogenerate
:tickets: 529
Fixed issue in new comment support where autogenerated Python code
for comments wasn't using ``repr()`` thus causing issues with
quoting. Pull request courtesy Damien Garaud.
.. changelog::
:version: 1.0.6
:released: January 13, 2019
.. change::
:tags: feature, operations
:tickets: 422
Added Table and Column level comments for supported backends.
New methods :meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` are added. New arguments
:paramref:`.Operations.alter_column.comment` and
:paramref:`.Operations.alter_column.existing_comment` are added to
:meth:`.Operations.alter_column`. Autogenerate support is also added
to ensure comment add/drops from tables and columns are generated as well
as that :meth:`.Operations.create_table`, :meth:`.Operations.add_column`
both include the comment field from the source :class:`.Table`
or :class:`.Column` object.
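A hedged sketch of the new directives (table, column, and comment text are
hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    op.create_table_comment("account", "user accounts and login state")
    op.alter_column(
        "account",
        "name",
        existing_type=sa.String(length=50),
        comment="the full display name",
    )
    op.drop_table_comment("account")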
.. changelog::
:version: 1.0.5
:released: November 27, 2018
.. change::
:tags: bug, py3k
:tickets: 507
Resolved remaining Python 3 deprecation warnings, covering
the use of inspect.formatargspec() with a vendored version
copied from the Python standard library, importing
collections.abc above Python 3.3 when testing against abstract
base classes, fixed one occurrence of log.warn(), as well as a few
invalid escape sequences.
.. changelog::
:version: 1.0.4
:released: November 27, 2018
.. change::
:tags: change
Code hosting has been moved to GitHub, at
https://github.com/sqlalchemy/alembic. Additionally, the
main Alembic website documentation URL is now
https://alembic.sqlalchemy.org.
.. changelog::
:version: 1.0.3
:released: November 14, 2018
.. change::
:tags: bug, mssql
:tickets: 516
Fixed regression caused by :ticket:`513`, where the logic to consume
``mssql_include`` was not correctly interpreting the case where the flag
was not present, breaking the ``op.create_index`` directive for SQL Server
as a whole.
.. changelog::
:version: 1.0.2
:released: October 31, 2018
.. change::
:tags: bug, autogenerate
:tickets: 515
The ``system=True`` flag on :class:`.Column`, used primarily in conjunction
with the Postgresql "xmin" column, now renders within the autogenerate
render process, allowing the column to be excluded from DDL. Additionally,
adding a system=True column to a model will produce no autogenerate diff as
this column is implicitly present in the database.
.. change::
:tags: bug, mssql
:tickets: 513
Fixed issue where usage of the SQL Server ``mssql_include`` option within a
:meth:`.Operations.create_index` would raise a KeyError, as the additional
column(s) need to be added to the table object used by the construct
internally.
.. changelog::
:version: 1.0.1
:released: October 17, 2018
.. change::
:tags: bug, commands
:tickets: 497
Fixed an issue where revision descriptions were essentially
being formatted twice. For any revision description that contained
characters like %, writing output to stdout would fail because
the call to config.print_stdout attempted to format any
additional args passed to the function.
The fix now only applies string formatting if args are actually provided
along with the output text.
.. change::
:tags: bug, autogenerate
:tickets: 512
Fixed issue where removed method ``union_update()`` was used when a
customized :class:`.MigrationScript` instance included entries in the
``.imports`` data member, raising an AttributeError.
.. changelog::
:version: 1.0.0
:released: July 13, 2018
.. change::
:tags: feature, general
:tickets: 491
For Alembic 1.0, Python 2.6 / 3.3 support is being dropped, allowing a
fixed setup.py to be built as well as universal wheels. Pull request
courtesy Hugo.
.. change::
:tags: feature, general
With the 1.0 release, Alembic's minimum SQLAlchemy support version
moves to 0.9.0, previously 0.7.9.
.. change::
:tags: bug, batch
:tickets: 502
Fixed issue in batch where dropping a primary key column, then adding it
back under the same name but without the primary_key flag, would not remove
it from the existing PrimaryKeyConstraint. If a new PrimaryKeyConstraint
is added, it is used as-is, as was the case before.
.. changelog::
:version: 0.9.10
:released: June 29, 2018
.. change::
:tags: bug, autogenerate
The "op.drop_constraint()" directive will now render using ``repr()`` for
the schema name, in the same way that "schema" renders for all the other op
directives. Pull request courtesy Denis Kataev.
.. change::
:tags: bug, autogenerate
:tickets: 494
Added basic capabilities for external dialects to support rendering of
"nested" types, like arrays, in a manner similar to that of the Postgresql
dialect.
.. change::
:tags: bug, autogenerate
Fixed issue where "autoincrement=True" would not render for a column that
specified it, since as of SQLAlchemy 1.1 this is no longer the default
value for "autoincrement". Note the behavior only takes effect against the
SQLAlchemy 1.1.0 and higher; for pre-1.1 SQLAlchemy, "autoincrement=True"
does not render as was the case before. Pull request courtesy Elad Almos.
.. changelog::
:version: 0.9.9
:released: March 22, 2018
.. change::
:tags: feature, commands
:tickets: 481
Added new flag ``--indicate-current`` to the ``alembic history`` command.
When listing versions, it will include the token "(current)" to indicate
the given version is a current head in the target database. Pull request
courtesy Kazutaka Mise.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 455
The fix for :ticket:`455` in version 0.9.6 involving MySQL server default
comparison was entirely non-functional, as the test itself was also broken
and didn't reveal that it wasn't working. The regular expression to compare
server default values like CURRENT_TIMESTAMP to current_timestamp() is
repaired.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 483
Fixed bug where MySQL server default comparisons were basically not working
at all due to an incorrect regexp added in :ticket:`455`. Also accommodates
for MariaDB 10.2 quoting differences in reporting integer based server
defaults.
.. change::
:tags: bug, operations, mysql
:tickets: 487
Fixed bug in ``op.drop_constraint()`` for MySQL where
quoting rules would not be applied to the constraint name.
.. changelog::
:version: 0.9.8
:released: February 16, 2018
.. change::
:tags: bug, runtime
:tickets: 482
Fixed bug where the :meth:`.Script.as_revision_number` method
did not accommodate for the 'heads' identifier, which in turn
caused the :meth:`.EnvironmentContext.get_head_revisions`
and :meth:`.EnvironmentContext.get_revision_argument` methods
to be not usable when multiple heads were present.
The :meth:`.EnvironmentContext.get_head_revisions` method returns
a tuple in all cases as documented.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 478
Fixed bug where autogenerate of :class:`.ExcludeConstraint`
would render a raw quoted name for a Column that has case-sensitive
characters, which when invoked as an inline member of the Table
would produce a stack trace that the quoted name is not found.
An incoming Column object is now rendered as ``sa.column('name')``.
.. change::
:tags: bug, autogenerate
:tickets: 468
Fixed bug where the indexes would not be included in a
migration that was dropping the owning table. The fix
now will also emit DROP INDEX for the indexes ahead of time,
but more importantly will include CREATE INDEX in the
downgrade migration.
.. change::
:tags: bug, postgresql
:tickets: 480
Fixed the autogenerate of the module prefix
when rendering the text_type parameter of
postgresql.HSTORE, in much the same way that
we do for ARRAY's type and JSON's text_type.
.. change::
:tags: bug, mysql
:tickets: 479
Added support for DROP CONSTRAINT to the MySQL Alembic
dialect to support MariaDB 10.2 which now has real
CHECK constraints. Note this change does **not**
add autogenerate support, only support for op.drop_constraint()
to work.
.. changelog::
:version: 0.9.7
:released: January 16, 2018
.. change::
:tags: bug, autogenerate
:tickets: 472
Fixed regression caused by :ticket:`421` which would
cause case-sensitive quoting rules to interfere with the
comparison logic for index names, thus causing indexes to show
as added for indexes that have case-sensitive names. Works with
SQLAlchemy 0.9 and later series.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 461
Fixed bug where autogenerate would produce a DROP statement for the index
implicitly created by a Postgresql EXCLUDE constraint, rather than skipping
it as is the case for indexes implicitly generated by unique constraints.
Makes use of SQLAlchemy 1.0.x's improved "duplicates index" metadata and
requires at least SQLAlchemy version 1.0.x to function correctly.
.. changelog::
:version: 0.9.6
:released: October 13, 2017
.. change::
:tags: bug, commands
:tickets: 458
Fixed a few Python3.6 deprecation warnings by replacing ``StopIteration``
with ``return``, as well as using ``getfullargspec()`` instead of
``getargspec()`` under Python 3.
.. change::
:tags: bug, commands
:tickets: 441
In addition to the :ticket:`441` fix in 0.9.5, we forgot to also filter
for the ``+`` sign in migration names, which also breaks due to the relative
migrations feature.
.. change::
:tags: bug, autogenerate
:tickets: 442
Fixed bug expanding upon the fix for
:ticket:`85` which adds the correct module import to the
"inner" type for an ``ARRAY`` type, the fix now accommodates for the
generic ``sqlalchemy.types.ARRAY`` type added in SQLAlchemy 1.1,
rendering the inner type correctly regardless of whether or not the
Postgresql dialect is present.
.. change::
:tags: bug, mysql
:tickets: 455
Fixed bug where server default comparison of CURRENT_TIMESTAMP would fail
on MariaDB 10.2 due to a change in how the function is
represented by the database during reflection.
.. change::
:tags: bug, autogenerate
Fixed bug where comparison of ``Numeric`` types would produce
a difference if the Python-side ``Numeric`` inadvertently specified
a non-None "scale" with a "precision" of None, even though this ``Numeric``
type will pass over the "scale" argument when rendering. Pull request
courtesy Ivan Mmelnychuk.
.. change::
:tags: feature, commands
:tickets: 447
The ``alembic history`` command will now make use of the revision
environment ``env.py`` unconditionally if the ``revision_environment``
configuration flag is set to True. Previously, the environment would
only be invoked if the history specification were against a database-stored
revision token.
.. change::
:tags: bug, batch
:tickets: 457
The name of the temporary table in batch mode is now generated
off of the original table name itself, to avoid conflicts for the
unusual case of multiple batch operations running against the same
database schema at the same time.
.. change::
:tags: bug, autogenerate
:tickets: 456
A :class:`.ForeignKeyConstraint` can now render correctly if the
``link_to_name`` flag is set, as it will not attempt to resolve the name
from a "key" in this case. Additionally, the constraint will render
as-is even if the remote column name isn't present on the referenced
remote table.
.. change::
:tags: bug, runtime, py3k
:tickets: 449
Reworked "sourceless" system to be fully capable of handling any
combination of: Python2/3x, pep3149 or not, PYTHONOPTIMIZE or not,
for locating and loading both env.py files as well as versioning files.
This includes: locating files inside of ``__pycache__`` as well as listing
out version files that might be only in ``versions/__pycache__``, deduplicating
version files that may be in ``versions/__pycache__`` and ``versions/``
at the same time, correctly looking for .pyc or .pyo files based on
if pep488 is present or not. The latest Python3x deprecation warnings
involving importlib are also corrected.
.. changelog::
:version: 0.9.5
:released: August 9, 2017
.. change::
:tags: bug, commands
:tickets: 441
A :class:`.CommandError` is raised if the "--rev-id" passed to the
:func:`.revision` command contains dashes or at-signs, as this interferes
with the command notation used to locate revisions.
.. change::
:tags: bug, postgresql
:tickets: 424
Added support for the dialect-specific keyword arguments
to :meth:`.Operations.drop_index`. This includes support for
``postgresql_concurrently`` and others.
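A sketch of the new keyword support (index and table names are hypothetical);
note that on Postgresql, DROP INDEX CONCURRENTLY cannot run inside a
transaction block::

    from alembic import op

    op.drop_index(
        "ix_account_email",
        table_name="account",
        postgresql_concurrently=True,
    )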
.. change::
:tags: bug, commands
Fixed bug in timezone feature introduced in
:ticket:`425` when the creation
date in a revision file is calculated, to
accommodate for timezone names that contain
mixed-case characters in their name as opposed
to all uppercase. Pull request courtesy Nils
Philippsen.
.. changelog::
:version: 0.9.4
:released: July 31, 2017
.. change::
:tags: bug, runtime
Added an additional attribute to the new
:paramref:`.EnvironmentContext.configure.on_version_apply` API,
:attr:`.MigrationInfo.up_revision_ids`, to accommodate for the uncommon
case of the ``alembic stamp`` command being used to move from multiple
branches down to a common branchpoint; there will be multiple
"up" revisions in this one case.
.. changelog::
:version: 0.9.3
:released: July 6, 2017
.. change::
:tags: feature, runtime
Added a new callback hook
:paramref:`.EnvironmentContext.configure.on_version_apply`,
which allows user-defined code to be invoked each time an individual
upgrade, downgrade, or stamp operation proceeds against a database.
Pull request courtesy John Passaro.
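A hedged sketch of the hook wired up in ``env.py``; the callback body is
illustrative, and the keyword arguments shown (``ctx``, ``step``) are
assumptions based on the hook's documented calling convention::

    def report_step(ctx, step, **kw):
        # step is a MigrationInfo describing the operation just applied
        print("applied %s" % step.up_revision_id)

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        on_version_apply=report_step,
    )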
.. change:: 433
:tags: bug, autogenerate
:tickets: 433
Fixed bug where autogen comparison of a :class:`.Variant` datatype
would not compare to the dialect level type for the "default"
implementation of the :class:`.Variant`, returning the type as changed
between database and table metadata.
.. change:: 431
:tags: bug, tests
:tickets: 431
Fixed unit tests to run correctly under the SQLAlchemy 1.0.x series
prior to version 1.0.10 where a particular bug involving Postgresql
exclude constraints was fixed.
.. changelog::
:version: 0.9.2
:released: May 18, 2017
.. change:: 429
:tags: bug, mssql
:tickets: 429
Repaired :meth:`.Operations.rename_table` for SQL Server when the
target table is in a remote schema; the schema name is now omitted from
the "new name" argument.
.. change:: 425
:tags: feature, commands
:tickets: 425
Added a new configuration option ``timezone``, a string timezone name
that will be applied to the create date timestamp rendered
inside the revision file as made available to the ``file_template`` used
to generate the revision filename. Note this change adds the
``python-dateutil`` package as a dependency.
.. change:: 421
:tags: bug, autogenerate
:tickets: 421
The autogenerate compare scheme now takes into account the name truncation
rules applied by SQLAlchemy's DDL compiler to the names of the
:class:`.Index` object, when these names are dynamically truncated
due to a too-long identifier name. As the identifier truncation is
deterministic, applying the same rule to the metadata name allows
correct comparison to the database-derived name.
.. change:: 419
:tags: bug, environment
:tickets: 419
A warning is emitted when an object that's not a
:class:`~sqlalchemy.engine.Connection` is passed to
:meth:`.EnvironmentContext.configure`. For the case of a
:class:`~sqlalchemy.engine.Engine` passed, the check for "in transaction"
introduced in version 0.9.0 has been relaxed to work in the case of an
attribute error, as some users appear to be passing an
:class:`~sqlalchemy.engine.Engine` and not a
:class:`~sqlalchemy.engine.Connection`.
.. changelog::
:version: 0.9.1
:released: March 1, 2017
.. change:: 417
:tags: bug, commands
:tickets: 417, 369
An adjustment to the bug fix for :ticket:`369` to accommodate for
env.py scripts that use an enclosing transaction distinct from the
one that the context provides, so that the check for "didn't commit
the transaction" doesn't trigger in this scenario.
.. changelog::
:version: 0.9.0
:released: February 28, 2017
.. change:: 38
:tags: feature, autogenerate
:tickets: 38
The :paramref:`.EnvironmentContext.configure.target_metadata` parameter
may now be optionally specified as a sequence of :class:`.MetaData`
objects instead of a single :class:`.MetaData` object. The
autogenerate process will process the sequence of :class:`.MetaData`
objects in order.
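A minimal sketch within ``env.py``, assuming two hypothetical model modules
that each define their own :class:`.MetaData`::

    from myapp import accounts_model, billing_model  # hypothetical modules

    context.configure(
        connection=connection,
        target_metadata=[accounts_model.metadata, billing_model.metadata],
    )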
.. change:: 369
:tags: bug, commands
:tickets: 369
A :class:`.CommandError` is now raised when a migration file opens
a database transaction and does not close/commit/rollback, when
the backend database or environment options also specify transactional_ddl
is False. When transactional_ddl is not in use, Alembic doesn't
close any transaction so a transaction opened by a migration file
will cause the following migrations to fail to apply.
.. change:: 413
:tags: bug, autogenerate, mysql
:tickets: 413
The ``autoincrement=True`` flag is now rendered within the
:meth:`.Operations.alter_column` operation if the source column indicates
that this flag should be set to True. The behavior is sensitive to
the SQLAlchemy version in place, as the "auto" default option is new
in SQLAlchemy 1.1. When the source column indicates autoincrement
as True or "auto", the flag will render as True if the original column
contextually indicates that it should have "autoincrement" keywords,
and when the source column explicitly sets it to False, this is also
rendered. The behavior is intended to preserve the AUTO_INCREMENT flag
on MySQL as the column is fully recreated on this backend. Note that this
flag does **not** support alteration of a column's "autoincrement" status,
as this is not portable across backends.
.. change:: 411
:tags: bug, postgresql
:tickets: 411
Fixed bug where Postgresql JSON/JSONB types rendered on SQLAlchemy
1.1 would render the "astext_type" argument which defaults to
the ``Text()`` type without the module prefix, similarly to the
issue with ARRAY fixed in :ticket:`85`.
.. change:: 85
:tags: bug, postgresql
:tickets: 85
Fixed bug where Postgresql ARRAY type would not render the import prefix
for the inner type; additionally, user-defined renderers take place
for the inner type as well as the outer type. Pull request courtesy
Paul Brackin.
.. change:: process_revision_directives_command
:tags: feature, autogenerate
Added a keyword argument ``process_revision_directives`` to the
:func:`.command.revision` API call. This function acts in the
same role as the environment-level
:paramref:`.EnvironmentContext.configure.process_revision_directives`,
and allows API use of the
command to drop in an ad-hoc directive process function. This
function can be used among other things to place a complete
:class:`.MigrationScript` structure in place.
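A hedged sketch of API use; the hook body is illustrative and assumes the
``UpgradeOps.is_empty()`` helper to suppress empty migrations::

    from alembic import command
    from alembic.config import Config

    config = Config("alembic.ini")

    def process_revision_directives(context, revision, directives):
        script = directives[0]
        # illustrative: emit no revision file at all if nothing was detected
        if script.upgrade_ops.is_empty():
            directives[:] = []

    command.revision(
        config,
        message="my revision",
        autogenerate=True,
        process_revision_directives=process_revision_directives,
    )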
.. change:: 412
:tags: feature, postgresql
:tickets: 412
Added support for Postgresql EXCLUDE constraints, including the
operation directive :meth:`.Operations.create_exclude_constraints`
as well as autogenerate render support for the ``ExcludeConstraint``
object as present in a ``Table``. Autogenerate detection for an EXCLUDE
constraint added or removed to/from an existing table is **not**
implemented as the SQLAlchemy Postgresql dialect does not yet support
reflection of EXCLUDE constraints.
Additionally, unknown constraint types now warn when
encountered within an autogenerate action rather than raise.
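A hedged sketch against a hypothetical bookings table, using a GiST-backed
EXCLUDE constraint::

    from alembic import op

    op.create_exclude_constraint(
        "uq_room_booking_period",
        "room_booking",
        ("room_id", "="),
        ("period", "&&"),
        using="gist",
    )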
.. change:: fk_schema_compare
:tags: bug, operations
Fixed bug in :func:`.ops.create_foreign_key` where the internal table
representation would not be created properly if the foreign key referred
to a table in a different schema of the same name. Pull request
courtesy Konstantin Lebedev.
.. changelog::
:version: 0.8.10
:released: January 17, 2017
.. change:: 406
:tags: bug, versioning
:tickets: 406
The alembic_version table, when initially created, now establishes a
primary key constraint on the "version_num" column, to suit database
engines that don't support tables without primary keys. This behavior
can be controlled using the parameter
:paramref:`.EnvironmentContext.configure.version_table_pk`. Note that
this change only applies to the initial creation of the alembic_version
table; it does not impact any existing alembic_version table already
present.
.. change:: 402
:tags: bug, batch
:tickets: 402
Fixed bug where doing ``batch_op.drop_constraint()`` against the
primary key constraint would fail to remove the "primary_key" flag
from the column, resulting in the constraint being recreated.
.. change:: update_uq_dedupe
:tags: bug, autogenerate, oracle
Adjusted the logic originally added for :ticket:`276` that detects MySQL
unique constraints which are actually unique indexes to be generalized
for any dialect that has this behavior, for SQLAlchemy version 1.0 and
greater. This is to allow for upcoming SQLAlchemy support for unique
constraint reflection for Oracle, which also has no dedicated concept of
"unique constraint" and instead establishes a unique index.
.. change:: 356
:tags: bug, versioning
:tickets: 356
Added a file ignore for Python files of the form ``.#<name>.py``,
which are generated by the Emacs editor. Pull request courtesy
Markus Mattes.
.. changelog::
:version: 0.8.9
:released: November 28, 2016
.. change:: 393
:tags: bug, autogenerate
:tickets: 393
Adjustment to the "please adjust!" comment in the script.py.mako
template so that the generated comment starts with a single pound
sign, appeasing flake8.
.. change::
:tags: bug, batch
:tickets: 391
Batch mode will not use CAST() to copy data if ``type_`` is given but
its basic type affinity matches that of the existing type. This is to
avoid SQLite's CAST of TIMESTAMP, which results in truncation of the
data, in those cases where the user needs to add a redundant ``type_`` for
other reasons.
.. change::
:tags: bug, autogenerate
:tickets: 393
Continued pep8 improvements by adding appropriate whitespace in
the base template for generated migrations. Pull request courtesy
Markus Mattes.
.. change::
:tags: bug, revisioning
Added an additional check when reading in revision files to detect
if the same file is being read twice; this can occur if the same directory
or a symlink equivalent is present more than once in version_locations.
A warning is now emitted and the file is skipped. Pull request courtesy
Jiri Kuncar.
.. change::
:tags: bug, autogenerate
:tickets: 395
Fixed bug where usage of a custom TypeDecorator which returns a
per-dialect type via :meth:`.TypeDecorator.load_dialect_impl` that differs
significantly from the default "impl" for the type decorator would fail
to compare correctly during autogenerate.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 392
Fixed bug in Postgresql "functional index skip" behavior where a
functional index that ended in ASC/DESC wouldn't be detected as something
we can't compare in autogenerate, leading to duplicate definitions
in autogenerated files.
.. change::
:tags: bug, versioning
Fixed bug where the "base" specifier, as in "base:head", could not
be used explicitly when ``--sql`` mode was present.
.. changelog::
:version: 0.8.8
:released: September 12, 2016
.. change::
:tags: autogenerate
The imports in the default script.py.mako are now at the top
so that flake8 editors don't complain by default. PR courtesy
Guilherme Mansur.
.. change::
:tags: feature, operations, postgresql
:tickets: 292
Added support for the USING clause to the ALTER COLUMN operation
for Postgresql. Support is via the
:paramref:`.op.alter_column.postgresql_using`
parameter. Pull request courtesy Frazer McLean.
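A hedged sketch converting a hypothetical string column to an integer via
USING::

    import sqlalchemy as sa
    from alembic import op

    op.alter_column(
        "account",
        "registration_code",
        existing_type=sa.String(length=10),
        type_=sa.Integer(),
        postgresql_using="registration_code::integer",
    )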
.. change::
:tags: feature, autogenerate
Autogenerate with type comparison enabled will pick up on the timezone
setting changing between DateTime types. Pull request courtesy
David Szotten.
.. changelog::
:version: 0.8.7
:released: July 26, 2016
.. change::
:tags: bug, versioning
:tickets: 336
Fixed bug where upgrading to the head of a branch which is already
present would fail, only if that head were also the dependency
of a different branch that is also upgraded, as the revision system
would see this as trying to go in the wrong direction. The check
here has been refined to distinguish between same-branch revisions
out of order vs. movement along sibling branches.
.. change::
:tags: bug, versioning
:tickets: 379
Adjusted the version traversal on downgrade
such that we can downgrade to a version that is a dependency for
a version in a different branch, *without* needing to remove that
dependent version as well. Previously, the target version would be
seen as a "merge point" for it's normal up-revision as well as the
dependency. This integrates with the changes for :ticket:`377`
and :ticket:`378` to improve treatment of branches with dependencies
overall.
.. change::
:tags: bug, versioning
:tickets: 377
Fixed bug where a downgrade to a version that is also a dependency
to a different branch would fail, as the system attempted to treat
this as an "unmerge" of a merge point, when in fact it doesn't have
the other side of the merge point available for update.
.. change::
:tags: bug, versioning
:tickets: 378
Fixed bug where the "alembic current" command wouldn't show a revision
as a current head if it were also a dependency of a version in a
different branch that's also applied. Extra logic is added to
extract "implied" versions of different branches from the top-level
versions listed in the alembic_version table.
.. change::
:tags: bug, versioning
Fixed bug where a repr() or str() of a Script object would fail
if the script had multiple dependencies.
.. change::
:tags: bug, autogenerate
Fixed bug in autogen where if the DB connection sends the default
schema as "None", this "None" would be removed from the list of
schemas to check if include_schemas were set. This could possibly
impact using include_schemas with SQLite.
.. change::
:tags: bug, batch
Small adjustment made to the batch handling for reflected CHECK
constraints to accommodate for SQLAlchemy 1.1 now reflecting these.
Batch mode still does not support CHECK constraints from the reflected
table as these can't be easily differentiated from the ones created
by types such as Boolean.
.. changelog::
:version: 0.8.6
:released: April 14, 2016
.. change::
:tags: bug, commands
:tickets: 367
Errors which occur within the Mako render step are now intercepted
and raised as CommandErrors like other failure cases; the Mako
exception itself is written using template-line formatting to
a temporary file which is named in the exception message.
.. change::
:tags: bug, postgresql
:tickets: 365
Added a fix to Postgresql server default comparison which first checks
if the text of the default is identical to the original, before attempting
to actually run the default. This accommodates for default-generation
functions that generate a new value each time such as a uuid function.
.. change::
:tags: bug, batch
:tickets: 361
Fixed bug introduced by the fix for :ticket:`338` in version 0.8.4
where a server default could no longer be dropped in batch mode.
Pull request courtesy Martin Domke.
.. change::
:tags: bug, batch, mssql
Fixed bug where SQL Server arguments for drop_column() would not
be propagated when running under a batch block. Pull request
courtesy Michal Petrucha.
.. changelog::
:version: 0.8.5
:released: March 9, 2016
.. change::
:tags: bug, autogenerate
:tickets: 335
Fixed bug where the columns rendered in a ``PrimaryKeyConstraint``
in autogenerate would inappropriately render the "key" of the
column, not the name. Pull request courtesy Jesse Dhillon.
.. change::
:tags: bug, batch
:tickets: 354
Repaired batch migration support for "schema" types which generate
constraints, in particular the ``Boolean`` datatype which generates
a CHECK constraint. Previously, an alter column operation with this
type would fail to correctly accommodate for the CHECK constraint
on change both from and to this type. In the former case the operation
would fail entirely, in the latter, the CHECK constraint would
not get generated. Both of these issues are repaired.
.. change::
:tags: bug, mysql
:tickets: 355
Changing a schema type such as ``Boolean`` to a non-schema type would
emit a drop constraint operation which emits ``NotImplementedError`` for
the MySQL dialect. This drop constraint operation is now skipped when
the constraint originates from a schema type.
.. changelog::
:version: 0.8.4
:released: December 15, 2015
.. change::
:tags: feature, versioning
A major improvement to the hash id generation function, which for some
reason used an awkward arithmetic formula against uuid4() that produced
values that tended to start with the digits 1-4. Replaced with a
simple substring approach which provides an even distribution. Pull
request courtesy Antti Haapala.
.. change::
:tags: feature, autogenerate
Added an autogenerate renderer for the :class:`.ExecuteSQLOp` operation
object; only renders if given a plain SQL string, otherwise raises
NotImplementedError. Can be of help with custom autogenerate
sequences that includes straight SQL execution. Pull request courtesy
Jacob Magnusson.
.. change::
:tags: bug, batch
:tickets: 345
Batch mode generates a FOREIGN KEY constraint that is self-referential
using the ultimate table name, rather than ``_alembic_batch_temp``.
When the table is renamed from ``_alembic_batch_temp`` back to the
original name, the FK now points to the right name. This
will **not** work if referential integrity is being enforced (eg. SQLite
"PRAGMA FOREIGN_KEYS=ON") since the original table is dropped and
the new table then renamed to that name, however this is now consistent
with how foreign key constraints on **other** tables already operate
with batch mode; these don't support batch mode if referential integrity
is enabled in any case.
.. change::
:tags: bug, autogenerate
:tickets: 341
Added a type-level comparator that distinguishes :class:`.Integer`,
:class:`.BigInteger`, and :class:`.SmallInteger` types and
dialect-specific types; these all have "Integer" affinity so previously
all compared as the same.
.. change::
:tags: bug, batch
:tickets: 338
Fixed bug where the ``server_default`` parameter of ``alter_column()``
would not function correctly in batch mode.
.. change::
:tags: bug, autogenerate
:tickets: 337
Adjusted the rendering for index expressions such that a :class:`.Column`
object present in the source :class:`.Index` will not be rendered
as table-qualified; e.g. the column name will be rendered alone.
Table-qualified names here were failing on systems such as Postgresql.
.. changelog::
:version: 0.8.3
:released: October 16, 2015
.. change::
:tags: bug, autogenerate
:tickets: 332
Fixed an 0.8 regression whereby the "imports" dictionary member of
the autogen context was removed; this collection is documented in the
"render custom type" documentation as a place to add new imports.
The member is now known as
:attr:`.AutogenContext.imports` and the documentation is repaired.
.. change::
:tags: bug, batch
:tickets: 333
Fixed bug in batch mode where a table that had pre-existing indexes
would create the same index on the new table with the same name,
which on SQLite produces a naming conflict as index names are in a
global namespace on that backend. Batch mode now defers the production
of both existing and new indexes until after the entire table transfer
operation is complete, which also means those indexes no longer take
effect during the INSERT from SELECT section as well; the indexes
are applied in a single step afterwards.
.. change::
:tags: bug, tests
Added "pytest-xdist" as a tox dependency, so that the -n flag
in the test command works if this is not already installed.
Pull request courtesy Julien Danjou.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 324
Fixed issue in PG server default comparison where model-side defaults
configured with Python unicode literals would leak the "u" character
from a ``repr()`` into the SQL used for comparison, creating an invalid
SQL expression, as the server-side comparison feature in PG currently
repurposes the autogenerate Python rendering feature to get a quoted
version of a plain string default.
.. changelog::
:version: 0.8.2
:released: August 25, 2015
.. change::
:tags: bug, autogenerate
:tickets: 321
Added workaround in new foreign key option detection feature for
MySQL's consideration of the "RESTRICT" option being the default,
for which no value is reported from the database; the MySQL impl now
corrects for when the model reports RESTRICT but the database reports
nothing. A similar rule is in the default FK comparison to accommodate
for the default "NO ACTION" setting being present in the model but not
necessarily reported by the database, or vice versa.
.. changelog::
:version: 0.8.1
:released: August 22, 2015
.. change::
:tags: feature, autogenerate
A custom :paramref:`.EnvironmentContext.configure.process_revision_directives`
hook can now generate op directives within the :class:`.UpgradeOps`
and :class:`.DowngradeOps` containers that will be generated as Python
code even when the ``--autogenerate`` flag is False; provided that
``revision_environment=True``, the full render operation will be run
even in "offline" mode.
.. change::
:tags: bug, autogenerate
Repaired the render operation for the :class:`.ops.AlterColumnOp` object
to succeed when the "existing_type" field was not present.
.. change::
:tags: bug, autogenerate
:tickets: 318
Fixed a regression in 0.8 whereby the "multidb" environment template
failed to produce independent migration script segments for the
output template. This was due to the reorganization of the script
rendering system for 0.8. To accommodate this change, the
:class:`.MigrationScript` structure will in the case of multiple
calls to :meth:`.MigrationContext.run_migrations` produce lists
for the :attr:`.MigrationScript.upgrade_ops` and
:attr:`.MigrationScript.downgrade_ops` attributes; each :class:`.UpgradeOps`
and :class:`.DowngradeOps` instance keeps track of its own
``upgrade_token`` and ``downgrade_token``, and each are rendered
individually.
.. seealso::
:ref:`autogen_customizing_multiengine_revision` - additional detail
on the workings of the
:paramref:`.EnvironmentContext.configure.process_revision_directives`
parameter when multiple calls to :meth:`.MigrationContext.run_migrations`
are made.
.. change::
:tags: feature, autogenerate
:tickets: 317
Implemented support for autogenerate detection of changes in the
``ondelete``, ``onupdate``, ``initially`` and ``deferrable``
attributes of :class:`.ForeignKeyConstraint` objects on
SQLAlchemy backends that support these on reflection
(as of SQLAlchemy 1.0.8 currently Postgresql for all four,
MySQL for ``ondelete`` and ``onupdate`` only). A constraint object
that modifies these values will be reported as a "diff" and come out
as a drop/create of the constraint with the modified values.
The fields are ignored for backends which don't reflect these
attributes (as of SQLA 1.0.8 this includes SQLite, Oracle, SQL Server,
others).
.. changelog::
:version: 0.8.0
:released: August 12, 2015
.. change::
:tags: bug, batch
:tickets: 315
Fixed bug in batch mode where the ``batch_op.create_foreign_key()``
directive would be incorrectly rendered with the source table and
schema names in the argument list.
.. change::
:tags: feature, commands
Added new command ``alembic edit``. This command takes the same
arguments as ``alembic show``, however runs the target script
file within $EDITOR. Makes use of the ``python-editor`` library
in order to facilitate the handling of $EDITOR with reasonable
default behaviors across platforms. Pull request courtesy
Michel Albert.
.. change::
:tags: feature, commands
:tickets: 311
Added new multiple-capable argument ``--depends-on`` to the
``alembic revision`` command, allowing ``depends_on`` to be
established at the command line level rather than having to edit
the file after the fact. ``depends_on`` identifiers may also be
specified as branch names at the command line or directly within
the migration file. The values may be specified as partial
revision numbers from the command line which will be resolved to
full revision numbers in the output file.
.. change::
:tags: change, operations
A range of positional argument names have been changed to be
clearer and more consistent across methods within the
:class:`.Operations` namespace. The most prevalent form of name change
is that the descriptive names ``constraint_name`` and ``table_name``
are now used where previously the name ``name`` would be used.
This is in support of the newly modularized and extensible system of
operation objects in :mod:`alembic.operations.ops`.
An argument translation layer is in place
across the ``alembic.op`` namespace that will ensure that named
argument calling styles that use the old names will continue to
function by transparently translating to the new names,
also emitting a warning. This, along with the fact that these
arguments are positional in any case and aren't normally
passed with an explicit name, should ensure that the
overwhelming majority of applications should be unaffected by this
change. The *only* applications that are impacted are those that:
1. use the :class:`.Operations` object directly in some way, rather
than calling upon the ``alembic.op`` namespace, and
2. invoke the methods on :class:`.Operations` using named keyword
arguments for positional arguments like ``table_name``,
``constraint_name``, etc., which commonly were named ``name``
as of 0.7.6.
3. any application that is using named keyword arguments in place
of positional argument for the recently added
:class:`.BatchOperations` object may also be affected.
The naming changes are documented as "versionchanged" for 0.8.0:
* :meth:`.BatchOperations.create_check_constraint`
* :meth:`.BatchOperations.create_foreign_key`
* :meth:`.BatchOperations.create_index`
* :meth:`.BatchOperations.create_unique_constraint`
* :meth:`.BatchOperations.drop_constraint`
* :meth:`.BatchOperations.drop_index`
* :meth:`.Operations.create_check_constraint`
* :meth:`.Operations.create_foreign_key`
* :meth:`.Operations.create_primary_key`
* :meth:`.Operations.create_index`
* :meth:`.Operations.create_table`
* :meth:`.Operations.create_unique_constraint`
* :meth:`.Operations.drop_constraint`
* :meth:`.Operations.drop_index`
* :meth:`.Operations.drop_table`
.. change::
:tags: feature, tests
The default test runner via "python setup.py test" is now py.test.
nose still works via run_tests.py.
.. change::
:tags: feature, operations
:tickets: 302
The internal system for Alembic operations has been reworked to now
build upon an extensible system of operation objects. New operations
can be added to the ``op.`` namespace, including that they are
available in custom autogenerate schemes.
.. seealso::
:ref:`operation_plugins`
.. change::
:tags: feature, autogenerate
:tickets: 301, 306
The internal system for autogenerate has been reworked to build upon
the extensible system of operation objects present in
:ticket:`302`. As part of this change, autogenerate now produces
a full object graph representing a list of migration scripts to
be written as well as operation objects that will render all the
Python code within them; a new hook
:paramref:`.EnvironmentContext.configure.process_revision_directives`
allows end-user code to fully customize what autogenerate will do,
including not just full manipulation of the Python steps to take
but also what file or files will be written and where. Additionally,
autogenerate is now extensible as far as database objects compared
and rendered into scripts; any new operation directive can also be
registered into a series of hooks that allow custom database/model
comparison functions to run as well as to render new operation
directives into autogenerate scripts.
.. seealso::
:ref:`alembic.autogenerate.toplevel`
.. change::
:tags: bug, versioning
:tickets: 314
Fixed bug where in the erroneous case that alembic_version contains
duplicate revisions, some commands would fail to process the
version history correctly and end up with a KeyError. The fix
allows the versioning logic to proceed, however a clear error is
emitted later when attempting to update the alembic_version table.
.. changelog::
:version: 0.7.7
:released: July 22, 2015
.. change::
:tags: bug, versioning
:tickets: 310
Fixed critical issue where a complex series of branches/merges would
bog down the iteration algorithm working over redundant nodes for
millions of cycles. An internal adjustment has been
made so that duplicate nodes are skipped within this iteration.
.. change::
:tags: feature, batch
:tickets: 305
Implemented support for :meth:`.BatchOperations.create_primary_key`
and :meth:`.BatchOperations.create_check_constraint`. Additionally,
table keyword arguments are copied from the original reflected table,
such as the "mysql_engine" keyword argument.
.. change::
:tags: bug, environment
:tickets: 300
The :meth:`.MigrationContext.stamp` method, added as part of the
versioning refactor in 0.7 as a more granular version of
:func:`.command.stamp`, now includes the "create the alembic_version
table if not present" step in the same way as the command version,
which was previously omitted.
.. change::
:tags: bug, autogenerate
:tickets: 298
Fixed bug where foreign key options including "onupdate",
"ondelete" would not render within the ``op.create_foreign_key()``
directive, even though they render within a full
``ForeignKeyConstraint`` directive.
.. change::
:tags: bug, tests
Repaired warnings that occur when running unit tests against
SQLAlchemy 1.0.5 or greater involving the "legacy_schema_aliasing"
flag.
.. changelog::
:version: 0.7.6
:released: May 5, 2015
.. change::
:tags: feature, versioning
:tickets: 297
Fixed bug where the case of multiple mergepoints that all
have the identical set of ancestor revisions would fail to be
upgradable, producing an assertion failure. Merge points were
previously assumed to always require at least an UPDATE in
alembic_revision from one of the previous revs to the new one;
however, in this case, if one of the mergepoints has already
been reached, the remaining mergepoints have no row to UPDATE and
therefore must do an INSERT of their target version.
.. change::
:tags: feature, autogenerate
:tickets: 296
Added support for type comparison functions to be not just per
environment, but also present on the custom types themselves, by
supplying a method ``compare_against_backend``.
Added a new documentation section :ref:`compare_types` describing
type comparison fully.
.. change::
:tags: feature, operations
:tickets: 255
Added a new option
:paramref:`.EnvironmentContext.configure.literal_binds`, which
will pass the ``literal_binds`` flag into the compilation of SQL
constructs when using "offline" mode. This has the effect that
SQL objects like inserts, updates, deletes as well as textual
statements sent using ``text()`` will be compiled such that the dialect
will attempt to render literal values "inline" automatically.
Only a subset of types is typically supported; the
:meth:`.Operations.inline_literal` construct remains as the construct
used to force a specific literal representation of a value.
The :paramref:`.EnvironmentContext.configure.literal_binds` flag
is added to the "offline" section of the ``env.py`` files generated
in new environments.
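A minimal ``env.py``-style sketch of the new flag, assuming ``url`` and
``target_metadata`` are defined as usual::

    from alembic import context

    def run_migrations_offline():
        # literal_binds=True renders literal values inline in --sql mode
        context.configure(
            url=url,
            target_metadata=target_metadata,
            literal_binds=True,
        )
        with context.begin_transaction():
            context.run_migrations()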
.. change::
:tags: bug, batch
:tickets: 289
Fully implemented the
:paramref:`~.Operations.batch_alter_table.copy_from` parameter for
batch mode, which previously was not functioning. This allows
"batch mode" to be usable in conjunction with ``--sql``.
.. change::
:tags: bug, batch
:tickets: 287
Repaired support for the :meth:`.BatchOperations.create_index`
directive, which was mis-named internally such that the operation
within a batch context could not proceed. The create index
operation will proceed as part of a larger "batch table recreate"
operation only if
:paramref:`~.Operations.batch_alter_table.recreate` is set to
"always", or if the batch operation includes other instructions that
require a table recreate.
.. changelog::
:version: 0.7.5
:released: March 19, 2015
.. change::
:tags: bug, autogenerate
:tickets: 266
The ``--autogenerate`` option is not valid when used in conjunction
with "offline" mode, e.g. ``--sql``. This now raises a ``CommandError``,
rather than failing more deeply later on. Pull request courtesy
Johannes Erdfelt.
.. change::
:tags: bug, operations, mssql
:tickets: 284
Fixed bug where the mssql DROP COLUMN directive failed to include
modifiers such as "schema" when emitting the DDL.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 282
Postgresql "functional" indexes are necessarily skipped from the
autogenerate process, as the SQLAlchemy backend currently does not
support reflection of these structures. A warning is emitted
both from the SQLAlchemy backend as well as from the Alembic
backend for Postgresql when such an index is detected.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 276
Fixed bug where MySQL backend would report dropped unique indexes
and/or constraints as both at the same time. This is because
MySQL doesn't actually have a "unique constraint" construct that
reports differently than a "unique index", so it is present in both
lists. The net effect though is that the MySQL backend will report
a dropped unique index/constraint as an index in cases where the object
was first created as a unique constraint, if no other information
is available to make the decision. This differs from other backends
like Postgresql which can report on unique constraints and
unique indexes separately.
.. change::
:tags: bug, commands
:tickets: 269
Fixed bug where using a partial revision identifier as the
"starting revision" in ``--sql`` mode in a downgrade operation
would fail to resolve properly.
As a side effect of this change, the
:meth:`.EnvironmentContext.get_starting_revision_argument`
method will return the "starting" revision in its originally-given
"partial" form in all cases, whereas previously when
running within the :meth:`.command.stamp` command, it would have
been resolved to a full number before passing it to the
:class:`.EnvironmentContext`. The resolution of this value to
a real revision number has basically been moved to a more fundamental
level within the offline migration process.
.. change::
:tags: feature, commands
Added a new feature :attr:`.Config.attributes`, to help with the use
case of sharing state such as engines and connections on the outside
with a series of Alembic API calls; also added a new cookbook section
to describe this simple but pretty important use case.
.. seealso::
:ref:`connection_sharing`
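A minimal sketch of the connection-sharing use case, assuming an
``engine`` already exists::

    from alembic import command
    from alembic.config import Config

    with engine.begin() as connection:
        cfg = Config("alembic.ini")
        # share the live connection with env.py
        cfg.attributes["connection"] = connection
        command.upgrade(cfg, "head")

The ``env.py`` side would then check
``config.attributes.get("connection", None)`` before creating its own
connectable.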
.. change::
:tags: feature, environment
The format of the default ``env.py`` script has been refined a bit;
it now uses context managers not only for the scope of the transaction,
but also for connectivity from the starting engine. The engine is also
now called a "connectable" in support of the use case of an external
connection being passed in.
.. change::
:tags: feature, versioning
:tickets: 267
Added support for "alembic stamp" to work when given "heads" as an
argument, when multiple heads are present.
.. changelog::
:version: 0.7.4
:released: January 12, 2015
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 241
Repaired issue where a server default specified without ``text()``
that represented a numeric or floating point (e.g. with decimal places)
value would fail in the Postgresql-specific check for "compare server
default"; as PG accepts the value with quotes in the table specification,
it's still valid. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug, autogenerate
:tickets: 259
The rendering of a :class:`~sqlalchemy.schema.ForeignKeyConstraint`
will now ensure that the names of the source and target columns are
the database-side name of each column, and not the value of the
``.key`` attribute as may be set only on the Python side.
This is because Alembic generates the DDL for constraints
as standalone objects without the need to actually refer to an in-Python
:class:`~sqlalchemy.schema.Table` object, so there's no step that
would resolve these Python-only key names to database column names.
.. change::
:tags: bug, autogenerate
:tickets: 260
Fixed bug in foreign key autogenerate where if the in-Python table
used custom column keys (e.g. using the ``key='foo'`` kwarg to
``Column``), the comparison of existing foreign keys to those specified
in the metadata would fail, as the reflected table would not have
these keys available with which to match up. Foreign key comparison for
autogenerate now ensures it's looking at the database-side names
of the columns in all cases; this matches the same functionality
within unique constraints and indexes.
.. change::
:tags: bug, autogenerate
:tickets: 261
Fixed issue in autogenerate type rendering where types that belong
to modules that have the name "sqlalchemy" in them would be mistaken
as being part of the ``sqlalchemy.`` namespace. Pull req courtesy
Bartosz Burclaf.
.. changelog::
:version: 0.7.3
:released: December 30, 2014
.. change::
:tags: bug, versioning
:tickets: 258
Fixed regression in new versioning system where upgrade / history
operation would fail on AttributeError if no version files were
present at all.
.. changelog::
:version: 0.7.2
:released: December 18, 2014
.. change::
:tags: bug, sqlite, autogenerate
Adjusted the SQLite backend regarding autogen of unique constraints
to work fully with the current SQLAlchemy 1.0, which now will report
on UNIQUE constraints that have no name.
.. change::
:tags: bug, batch
:tickets: 254
Fixed bug in batch where if the target table contained multiple
foreign keys to the same target table, the batch mechanics would
fail with a "table already exists" error. Thanks for the help
on this from Lucas Kahlert.
.. change::
:tags: bug, mysql
:tickets: 251
Fixed an issue where the MySQL routine to skip foreign-key-implicit
indexes would also catch unnamed unique indexes, as they would be
named after the column and look like the FK indexes. Pull request
courtesy Johannes Erdfelt.
.. change::
:tags: bug, mssql, oracle
:tickets: 253
Repaired a regression in both the MSSQL and Oracle dialects whereby
the overridden ``_exec()`` method failed to return a value, as is
needed now in the 0.7 series.
.. changelog::
:version: 0.7.1
:released: December 3, 2014
.. change::
:tags: bug, batch
The ``render_as_batch`` flag was inadvertently hardcoded to ``True``,
so all autogenerates were spitting out batch mode; this has been
fixed so that batch mode is again used only when selected in ``env.py``.
.. change::
:tags: feature, autogenerate
:tickets: 178
Support for autogenerate of FOREIGN KEY constraints has been added.
These are delivered within the autogenerate process in the same
manner as UNIQUE constraints, including ``include_object`` support.
Big thanks to Ann Kamyshnikova for doing the heavy lifting here.
.. change::
:tags: feature, batch
Added :paramref:`~.Operations.batch_alter_table.naming_convention`
argument to :meth:`.Operations.batch_alter_table`, as this is necessary
in order to drop foreign key constraints; these are often unnamed
on the target database, and in the case that they are named, SQLAlchemy
is as of the 0.9 series not including these names yet.
.. seealso::
:ref:`dropping_sqlite_foreign_keys`
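A sketch of dropping an unnamed SQLite foreign key via a naming
convention; the table, column and constraint names are hypothetical::

    from alembic import op

    naming_convention = {
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    }
    with op.batch_alter_table(
        "address", naming_convention=naming_convention
    ) as batch_op:
        batch_op.drop_constraint(
            "fk_address_user_id_user", type_="foreignkey"
        )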
.. change::
:tags: bug, batch
Fixed bug where the "source_schema" argument was not correctly passed
when calling :meth:`.BatchOperations.create_foreign_key`. Pull
request courtesy Malte Marquarding.
.. change::
:tags: bug, batch
:tickets: 249
Repaired the inspection, copying and rendering of CHECK constraints
and so-called "schema" types such as Boolean, Enum within the batch
copy system; the CHECK constraint will not be "doubled" when the table is
copied, and additionally the inspection of the CHECK constraint for
its member columns will no longer fail with an attribute error.
.. change::
:tags: feature, batch
Added two new arguments
:paramref:`.Operations.batch_alter_table.reflect_args`
and :paramref:`.Operations.batch_alter_table.reflect_kwargs`, so that
arguments may be passed directly to suit the
:class:`~.sqlalchemy.schema.Table`
object that will be reflected.
.. seealso::
:ref:`batch_controlling_table_reflection`
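A minimal sketch, assuming a hypothetical ``user`` table whose ``name``
column should be reflected with an explicit type::

    from sqlalchemy import Column, String
    from alembic import op

    with op.batch_alter_table(
        "user",
        reflect_args=[Column("name", String(length=50))],
    ) as batch_op:
        batch_op.alter_column("name", nullable=False)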
.. changelog::
:version: 0.7.0
:released: November 24, 2014
.. change::
:tags: feature, versioning
:tickets: 167
The "multiple heads / branches" feature has now landed. This is
by far the most significant change Alembic has seen since its inception;
while the workflow of most commands hasn't changed, and the format
of version files and the ``alembic_version`` table are unchanged as well,
a new suite of features opens up in the case where multiple version
files refer to the same parent, or to the "base". Merging of
branches, operating across distinct named heads, and multiple
independent bases are now all supported. The feature incurs radical
changes to the internals of versioning and traversal, and should be
treated as "beta mode" for the next several subsequent releases
within 0.7.
.. seealso::
:ref:`branches`
.. change::
:tags: feature, versioning
:tickets: 124
In conjunction with support for multiple independent bases, the
specific version directories are now also configurable to include
multiple, user-defined directories. When multiple directories exist,
the creation of a revision file with no down revision requires
that the starting directory is indicated; the creation of subsequent
revisions along that lineage will then automatically use that
directory for new files.
.. seealso::
:ref:`multiple_version_directories`
.. change::
:tags: feature, operations, sqlite
:tickets: 21
Added "move and copy" workflow, where a table to be altered is copied to
a new one with the new structure and the old one dropped, is now
implemented for SQLite as well as all database backends in general
using the new :meth:`.Operations.batch_alter_table` system. This
directive provides a table-specific operations context which gathers
column- and constraint-level mutations specific to that table, and
at the end of the context creates a new table combining the structure
of the old one with the given changes, copies data from old table to new,
and finally drops the old table,
renaming the new one to the existing name. This is required for
fully featured SQLite migrations, as SQLite has very little support for the
traditional ALTER directive. The batch directive
is intended to produce code that is still compatible with other databases,
in that the "move and copy" process only occurs for SQLite by default,
while still providing some level of sanity to SQLite's
requirement by allowing multiple table mutation operations to
proceed within one "move and copy" as well as providing explicit
control over when this operation actually occurs. The "move and copy"
feature may be optionally applied to other backends as well, however
dealing with referential integrity constraints from other tables must
still be handled explicitly.
.. seealso::
:ref:`batch_migrations`
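A minimal sketch of the batch directive, with hypothetical table and
column names; on SQLite the accumulated changes are applied via a single
"move and copy"::

    from sqlalchemy import Column, String
    from alembic import op

    with op.batch_alter_table("account") as batch_op:
        batch_op.add_column(Column("last_name", String(length=100)))
        batch_op.drop_column("old_field")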
.. change::
:tags: feature, commands
Relative revision identifiers as used with ``alembic upgrade``,
``alembic downgrade`` and ``alembic history`` can be combined with
specific revisions as well, e.g. ``alembic upgrade ae10+3``, to produce
a migration target relative to the given exact version.
.. change::
:tags: bug, commands
:tickets: 248
The ``alembic revision`` command accepts the ``--sql`` option to
suit some very obscure use case where the ``revision_environment``
flag is set up, so that ``env.py`` is run when ``alembic revision``
is run even though autogenerate isn't specified. As this flag is
otherwise confusing, error messages are now raised if
``alembic revision`` is invoked with both ``--sql`` and
``--autogenerate`` or with ``--sql`` without
``revision_environment`` being set.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 247
Added a rule for Postgresql to not render a "drop unique" and "drop index"
given the same name; for now it is assumed that the "index" is the
implicit one Postgresql generates. Future integration with
new SQLAlchemy 1.0 features will improve this to be more
resilient.
.. change::
:tags: bug, autogenerate
:tickets: 247
A change in the ordering when columns and constraints are dropped;
autogenerate will now place the "drop constraint" calls *before*
the "drop column" calls, so that columns involved in those constraints
still exist when the constraint is dropped.
.. change::
:tags: feature, commands
New commands added: ``alembic show``, ``alembic heads`` and
``alembic merge``. Also, a new option ``--verbose`` has been
added to several informational commands, such as ``alembic history``,
``alembic current``, ``alembic branches``, and ``alembic heads``.
``alembic revision`` also contains several new options used
within the new branch management system. The output of commands has
been altered in many cases to support new fields and attributes;
the ``history`` command in particular now returns its "verbose" output
only if ``--verbose`` is sent; without this flag it reverts to its
older behavior of short line items (which was never changed in the docs).
.. change::
:tags: changed, commands
The ``--head_only`` option to the ``alembic current`` command is
deprecated; the ``current`` command now lists just the version numbers
alone by default; use ``--verbose`` to get at additional output.
.. change::
:tags: feature, config
Added new argument :paramref:`.Config.config_args`, allows a dictionary
of replacement variables to be passed which will serve as substitution
values when an API-produced :class:`.Config` consumes the ``.ini``
file. Pull request courtesy Noufal Ibrahim.
.. change::
:tags: bug, oracle
:tickets: 245
The Oracle dialect sets "transactional DDL" to False by default,
as Oracle does not support transactional DDL.
.. change::
:tags: bug, autogenerate
:tickets: 243
Fixed a variety of issues surrounding rendering of Python code that
contains unicode literals. The first is that the "quoted_name" construct
that SQLAlchemy uses to represent table and column names as well
as schema names does not ``repr()`` correctly on Py2K when the value
contains unicode characters; therefore an explicit stringification is
added to these. Additionally, SQL expressions such as server defaults
were not being generated in a unicode-safe fashion leading to decode
errors if server defaults contained non-ascii characters.
.. change::
:tags: bug, operations
:tickets: 174
The :meth:`.Operations.add_column` directive will now additionally emit
the appropriate ``CREATE INDEX`` statement if the
:class:`~sqlalchemy.schema.Column` object specifies ``index=True``.
Pull request courtesy David Szotten.
.. change::
:tags: feature, operations
:tickets: 205
The :class:`~sqlalchemy.schema.Table` object is now returned when
the :meth:`.Operations.create_table` method is used. This ``Table``
is suitable for use in subsequent SQL operations, in particular
the :meth:`.Operations.bulk_insert` operation.
.. change::
:tags: feature, autogenerate
:tickets: 203
Indexes and unique constraints are now included in the
:paramref:`.EnvironmentContext.configure.include_object` hook.
Indexes are sent with type ``"index"`` and unique constraints with
type ``"unique_constraint"``.
.. change::
:tags: bug, autogenerate
:tickets: 219
Bound parameters are now resolved as "literal" values within the
SQL expression inside of a CheckConstraint(), when rendering the SQL
as a text string; supported for SQLAlchemy 0.8.0 and forward.
.. change::
:tags: bug, autogenerate
:tickets: 199
Added a workaround for SQLAlchemy issue #3023 (fixed in 0.9.5) where
a column that's part of an explicit PrimaryKeyConstraint would not
have its "nullable" flag set to False, thus producing a false
autogenerate. Also added a related correction to MySQL which will
correct for MySQL's implicit server default of '0' when a NULL integer
column is turned into a primary key column.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 240
Repaired issue related to the fix for #208 and others; a composite
foreign key reported by MySQL would cause a KeyError as Alembic
attempted to remove MySQL's implicitly generated indexes from the
autogenerate list.
.. change::
:tags: bug, autogenerate
:tickets: 28
If the "alembic_version" table is present in the target metadata,
autogenerate will skip this also. Pull request courtesy
Dj Gilcrease.
.. change::
:tags: bug, autogenerate
:tickets: 77
The :paramref:`.EnvironmentContext.configure.version_table`
and :paramref:`.EnvironmentContext.configure.version_table_schema`
arguments are now honored during the autogenerate process, such that
these names will be used as the "skip" names on both the database
reflection and target metadata sides.
.. change::
:tags: changed, autogenerate
:tickets: 229
The default value of the
:paramref:`.EnvironmentContext.configure.user_module_prefix`
parameter is **no longer the same as the SQLAlchemy prefix**.
When omitted, user-defined types will now use the ``__module__``
attribute of the type class itself when rendering in an
autogenerated module.
.. change::
:tags: bug, templates
:tickets: 234
Revision files are now written out using the ``'wb'`` modifier to
``open()``, since Mako reads the templates with ``'rb'``, thus preventing
CRs from being doubled up as has been observed on windows. The encoding
of the output now defaults to 'utf-8', which can be configured using
a newly added config file parameter ``output_encoding``.
.. change::
:tags: bug, operations
:tickets: 230
Added support for use of the :class:`~sqlalchemy.sql.elements.quoted_name`
construct when using the ``schema`` argument within operations. This
allows a name containing a dot to be fully quoted, as well as to
provide configurable quoting on a per-name basis.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 73
Added a routine by which the Postgresql Alembic dialect inspects
the server default of INTEGER/BIGINT columns as they are reflected
during autogenerate for the pattern ``nextval(<name>...)`` containing
a potential sequence name, then queries ``pg_catalog`` to see if this
sequence is "owned" by the column being reflected; if so, it assumes
this is a SERIAL or BIGSERIAL column and the server default is
omitted from the column reflection as well as any kind of
server_default comparison or rendering, along with an INFO message
in the logs indicating this has taken place. This allows SERIAL/BIGSERIAL
columns to keep the SEQUENCE from being unnecessarily present within
the autogenerate operation.
.. change::
:tags: bug, autogenerate
:tickets: 197, 64, 196
The system by which autogenerate renders expressions within
a :class:`~sqlalchemy.schema.Index`, the ``server_default``
of :class:`~sqlalchemy.schema.Column`, and the
``existing_server_default`` of
:meth:`.Operations.alter_column` has been overhauled to anticipate
arbitrary SQLAlchemy SQL constructs, such as ``func.somefunction()``,
``cast()``, ``desc()``, and others. The system does not, as might
be preferred, render the full-blown Python expression as originally
created within the application's source code, as this would be exceedingly
complex and difficult. Instead, it renders the SQL expression against
the target backend that's subject to the autogenerate, and then
renders that SQL inside of a :func:`~sqlalchemy.sql.expression.text`
construct as a literal SQL string. This approach still has the
downside that the rendered SQL construct may not be backend-agnostic
in all cases, so there is still a need for manual intervention in that
small number of cases, but overall the majority of cases should work
correctly now. Big thanks to Carlos Rivera for pull requests and
support on this.
.. change::
:tags: feature
SQLAlchemy's testing infrastructure is now used to run tests.
This system supports both nose and pytest and opens the way
for Alembic testing to support any number of backends, parallel
testing, and 3rd party dialect testing.
.. change::
:tags: changed, compatibility
Minimum SQLAlchemy version is now 0.7.6, however at least
0.8.4 is strongly recommended. The overhaul of the test suite
allows for fully passing tests on all SQLAlchemy versions
from 0.7.6 on forward.
.. change::
:tags: bug, operations
The "match" keyword is not sent to :class:`.ForeignKeyConstraint`
by :meth:`.Operations.create_foreign_key` when SQLAlchemy 0.7 is in use;
this keyword was added to SQLAlchemy as of 0.8.0.
.. changelog::
:version: 0.6.7
:released: September 9, 2014
.. change::
:tags: bug, mssql
Fixed bug in MSSQL dialect where "rename table" wasn't using
``sp_rename()`` as is required on SQL Server. Pull request courtesy
Łukasz Bołdys.
.. change::
:tags: feature
:tickets: 222
Added support for functional indexes when using the
:meth:`.Operations.create_index` directive. Within the list of columns,
the SQLAlchemy ``text()`` construct can be sent, embedding a literal
SQL expression; the :meth:`.Operations.create_index` will perform some hackery
behind the scenes to get the :class:`.Index` construct to cooperate.
This works around some current limitations in :class:`.Index`
which should be resolved on the SQLAlchemy side at some point.
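A minimal sketch of a functional index; the index, table and column
names are hypothetical::

    from sqlalchemy import text
    from alembic import op

    op.create_index("ix_user_lower_name", "user", [text("lower(name)")])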
.. changelog::
:version: 0.6.6
:released: August 7, 2014
.. change::
:tags: bug
:tickets: 95
A file named ``__init__.py`` in the ``versions/`` directory is now
ignored by Alembic when the collection of version files is retrieved.
Pull request courtesy Michael Floering.
.. change::
:tags: bug
Fixed Py3K bug where an attempt would be made to sort None against
string values when autogenerate would detect tables across multiple
schemas, including the default schema. Pull request courtesy
paradoxxxzero.
.. change::
:tags: bug
Autogenerate will render the arguments within a Table construct
using ``*[...]`` when the number of columns/elements is greater than
255. Pull request courtesy Ryan P. Kelly.
.. change::
:tags: bug
Fixed bug where foreign key constraints would fail to render in
autogenerate when a schema name was present. Pull request courtesy
Andreas Zeidler.
.. change::
:tags: bug
:tickets: 212
Some deep-in-the-weeds fixes to try to get "server default" comparison
working better across platforms and expressions, in particular on
the Postgresql backend, mostly dealing with quoting/not quoting of various
expressions at the appropriate time and on a per-backend basis.
Repaired and tested support for such defaults as Postgresql interval
and array defaults.
.. change::
:tags: enhancement
:tickets: 209
When a run of Alembic command line fails due to ``CommandError``,
the output now prefixes the string with ``"FAILED:"``, and the error
is also written to the log output using ``log.error()``.
.. change::
:tags: bug
:tickets: 208
Liberalized even more the check for MySQL indexes that shouldn't be
counted in autogenerate as "drops"; this time it's been reported
that an implicitly created index might be named the same as a composite
foreign key constraint, and not the actual columns, so we now skip those
when detected as well.
.. change::
:tags: feature
Added a new accessor :attr:`.MigrationContext.config`, when used
in conjunction with a :class:`.EnvironmentContext` and
:class:`.Config`, this config will be returned. Patch
courtesy Marc Abramowitz.
.. changelog::
:version: 0.6.5
:released: May 3, 2014
.. change::
:tags: bug, autogenerate, mysql
:tickets: 202
This releases' "autogenerate index detection" bug, when a MySQL table
includes an Index with the same name as a column, autogenerate reported
it as an "add" even though its not; this is because we ignore reflected
indexes of this nature due to MySQL creating them implicitly. Indexes
that are named the same as a column are now ignored on
MySQL if we see that the backend is reporting that it already exists;
this indicates that we can still detect additions of these indexes
but not drops, as we cannot distinguish a backend index same-named
as the column as one that is user generated or mysql-generated.
.. change::
:tags: feature, environment
:tickets: 201
Added new feature :paramref:`.EnvironmentContext.configure.transaction_per_migration`,
which when True causes the BEGIN/COMMIT pair to incur for each migration
individually, rather than for the whole series of migrations. This is
to assist with some database directives that need to be within individual
transactions, without the need to disable transactional DDL entirely.
.. change::
:tags: bug, autogenerate
:tickets: 200
Fixed bug where the ``include_object()`` filter would not receive
the original :class:`.Column` object when evaluating a database-only
column to be dropped; the object would not include the parent
:class:`.Table` nor other aspects of the column that are important
for generating the "downgrade" case where the column is recreated.
.. change::
:tags: bug, environment
:tickets: 195
Fixed bug where :meth:`.EnvironmentContext.get_x_argument`
would fail if the :class:`.Config` in use didn't actually
originate from a command line call.
.. change::
:tags: bug, autogenerate
:tickets: 194
Fixed another bug regarding naming conventions, continuing
from :ticket:`183`, where add_index() and
drop_index() directives would not correctly render the ``f()``
construct when the index contained a convention-driven name.
.. changelog::
:version: 0.6.4
:released: March 28, 2014
.. change::
:tags: bug, mssql
:tickets: 186
Added quoting to the table name when the special EXEC is run to
drop any existing server defaults or constraints when the
:paramref:`.Operations.drop_column.mssql_drop_check` or
:paramref:`.Operations.drop_column.mssql_drop_default`
arguments are used.
.. change::
:tags: bug, mysql
:tickets: 103
Added/fixed support for MySQL "SET DEFAULT" / "DROP DEFAULT" phrases,
which will now be rendered if only the server default is changing
or being dropped (e.g. specify None to alter_column() to indicate
"DROP DEFAULT"). Also added support for rendering MODIFY rather than
CHANGE when the column name isn't changing.
.. change::
:tags: bug
:tickets: 190
Added support for the ``initially``, ``match`` keyword arguments
as well as dialect-specific keyword arguments to
:meth:`.Operations.create_foreign_key`.
.. change::
:tags: feature
:tickets: 163
Altered the support for "sourceless" migration files (e.g. only
.pyc or .pyo present) so that the flag "sourceless=true" needs to
be in alembic.ini for this behavior to take effect.
.. change::
:tags: bug, mssql
:tickets: 185
The feature that keeps on giving, index/unique constraint autogenerate
detection, has even more fixes, this time to accommodate database dialects
that don't yet report on unique constraints, but whose backend
does report unique constraints as indexes. The logic
Alembic uses to distinguish between "this is an index!" vs.
"this is a unique constraint that is also reported as an index!" has now
been further enhanced to not produce unwanted migrations when the dialect
is observed to not yet implement get_unique_constraints() (e.g. mssql).
Note that such a backend will no longer report index drops for unique
indexes, as these cannot be distinguished from an unreported unique
index.
.. change::
:tags: bug
:tickets: 183
Extensive changes have been made to more fully support SQLAlchemy's new
naming conventions feature. Note that while SQLAlchemy has added this
feature as of 0.9.2, some additional fixes in 0.9.4 are needed to
resolve some of the issues:
1. The :class:`.Operations` object now takes into account the naming
conventions that are present on the :class:`.MetaData` object that's
associated using :paramref:`~.EnvironmentContext.configure.target_metadata`.
When :class:`.Operations` renders a constraint directive like
``ADD CONSTRAINT``, it now will make use of this naming convention
when it produces its own temporary :class:`.MetaData` object.
2. Note however that the autogenerate feature in most cases generates
constraints like foreign keys and unique constraints with the
final names intact; the only exception are the constraints implicit
with a schema-type like Boolean or Enum. In most of these cases,
the naming convention feature will not take effect for these constraints
and will instead use the given name as is, with one exception....
3. Naming conventions which use the ``"%(constraint_name)s"`` token, that
is, produce a new name that uses the original name as a component,
will still be pulled into the naming convention converter and be
converted. The problem arises when autogenerate renders a constraint
with its already-generated name present in the migration file's source
code; the name will be doubled up at render time due to the combination
of #1 and #2. So to work around this, autogenerate now renders these
already-tokenized names using the new :meth:`.Operations.f` component.
This component is only generated if **SQLAlchemy 0.9.4** or greater
is in use.
Therefore it is highly recommended that an upgrade to Alembic 0.6.4
be accompanied by an upgrade of SQLAlchemy 0.9.4, if the new naming
conventions feature is used.
.. seealso::
:ref:`autogen_naming_conventions`
.. change::
:tags: bug
:tickets: 160
Suppressed IOErrors which can be raised when the program output pipe
is closed under a program like ``head``; however this only
works on Python 2. On Python 3, there is not yet a known way to
suppress the BrokenPipeError warnings without prematurely terminating
the program via signals.
.. change::
:tags: bug
:tickets: 179
Fixed bug where :meth:`.Operations.bulk_insert` would not function
properly when :meth:`.Operations.inline_literal` values were used,
either in --sql or non-sql mode. The values will now render
directly in --sql mode. For compatibility with "online" mode,
a new flag :paramref:`~.Operations.bulk_insert.multiinsert`
can be set to False which will cause each parameter set to be
compiled and executed with individual INSERT statements.
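A minimal sketch combining :meth:`.Operations.inline_literal` with the
new flag; the table definition is hypothetical::

    from sqlalchemy import Integer, String, column, table
    from alembic import op

    accounts = table(
        "accounts",
        column("id", Integer),
        column("name", String),
    )
    op.bulk_insert(
        accounts,
        [{"id": 1, "name": op.inline_literal("account one")}],
        multiinsert=False,  # compile each row as an individual INSERT
    )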
.. change::
:tags: bug, py3k
:tickets: 175
Fixed a failure of the system that allows "legacy keyword arguments"
to be understood, which arose as of a change in Python 3.4 regarding
decorators. A workaround is applied that allows the code to work
across Python 3 versions.
.. change::
:tags: feature
The :func:`.command.revision` command now returns the :class:`.Script`
object corresponding to the newly generated revision. From this
structure, one can get the revision id, the module documentation,
and everything else, for use in scripts that call upon this command.
Pull request courtesy Robbie Coomber.
.. changelog::
:version: 0.6.3
:released: February 2, 2014
.. change::
:tags: bug
:tickets: 172
Added a workaround for when we call ``fcntl.ioctl()`` to get at
``TERMWIDTH``; if the function returns zero, as is reported to occur
in some pseudo-ttys, the message wrapping system is disabled in the
same way as if ``ioctl()`` failed.
.. change::
:tags: feature
:tickets: 171
Added new argument
:paramref:`.EnvironmentContext.configure.user_module_prefix`.
This prefix is applied when autogenerate renders a user-defined type,
which here is defined as any type that is from a module outside of the
``sqlalchemy.`` hierarchy. This prefix defaults to ``None``, in
which case the :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`
is used, thus preserving the current behavior.
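A minimal ``env.py``-style sketch; the prefix value is hypothetical::

    from alembic import context

    def run_migrations_online(connection, target_metadata):
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            # render custom types as e.g. mylib.types.MyType(...)
            user_module_prefix="mylib.types.",
        )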
.. change::
:tags: bug
:tickets: 170
Added support for autogenerate covering the use case where :class:`.Table`
objects specified in the metadata have an explicit ``schema`` attribute
whose name matches that of the connection's default schema
(e.g. "public" for Postgresql). Previously, it was assumed that "schema"
was ``None`` when it matched the "default" schema, now the comparison
adjusts for this.
.. change::
:tags: bug
The :func:`.compare_metadata` public API function now takes into
account the settings for
:paramref:`.EnvironmentContext.configure.include_object`,
:paramref:`.EnvironmentContext.configure.include_symbol`,
and :paramref:`.EnvironmentContext.configure.include_schemas`, in the
same way that the ``--autogenerate`` command does. Pull
request courtesy Roman Podoliaka.
.. change::
:tags: bug
:tickets: 168
Calling :func:`.bulk_insert` with an empty list will not emit any
commands on the current connection. This was already the case with
``--sql`` mode, so is now the case with "online" mode.
.. change::
:tags: bug
Enabled schema support for index and unique constraint autodetection;
previously these were non-functional and could in some cases lead to
attribute errors. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug
:tickets: 164
More fixes to index autodetection; indexes created with expressions
like DESC or functional indexes will no longer cause AttributeError
exceptions when attempting to compare the columns.
.. change::
:tags: feature
:tickets: 163
The :class:`.ScriptDirectory` system that loads migration files
from a ``versions/`` directory now supports so-called
"sourceless" operation, where the ``.py`` files are not present
and instead ``.pyc`` or ``.pyo`` files are directly present where
the ``.py`` files should be. Note that while Python 3.3 has a
new system of locating ``.pyc``/``.pyo`` files within a directory
called ``__pycache__`` (e.g. PEP-3147), PEP-3147 maintains
support for the "source-less imports" use case, where the
``.pyc``/``.pyo`` files are present in the "old" location, e.g. next
to the ``.py`` file; this is the usage that's supported even when
running Python 3.3.
.. changelog::
:version: 0.6.2
:released: Fri Dec 27 2013
.. change::
:tags: bug
Autogenerate for ``op.create_table()`` will not include a
``PrimaryKeyConstraint()`` that has no columns.
.. change::
:tags: bug
Fixed bug in the not-internally-used :meth:`.ScriptDirectory.get_base`
method which would fail if called on an empty versions directory.
.. change::
:tags: bug
:tickets: 157
An almost-rewrite of the new unique constraint/index autogenerate
detection, to accommodate a variety of issues. The emphasis is on
not generating false positives for those cases where no net change
is present, as these errors are the ones that impact all autogenerate
runs:
* Fixed an issue with unique constraint autogenerate detection where
a named ``UniqueConstraint`` on both sides with column changes would
render with the "add" operation before the "drop", requiring the
user to reverse the order manually.
* Corrected for MySQL's apparent addition of an implicit index
for a foreign key column, so that it doesn't show up as "removed".
This required that the index/constraint autogen system query the
dialect-specific implementation for special exceptions.
* reworked the "dedupe" logic to accommodate MySQL's bi-directional
duplication of unique indexes as unique constraints, and unique
constraints as unique indexes. Postgresql's slightly different
logic of duplicating unique constraints into unique indexes
continues to be accommodated as well. Note that a unique index
or unique constraint removal on a backend that duplicates these may
show up as a distinct "remove_constraint()" / "remove_index()" pair,
which may need to be corrected in the post-autogenerate if multiple
backends are being supported.
* added another dialect-specific exception to the SQLite backend
when dealing with unnamed unique constraints, as the backend can't
currently report on constraints that were made with this technique,
hence they'd come out as "added" on every run.
* the ``op.create_table()`` directive will be auto-generated with
the ``UniqueConstraint`` objects inline, but will not double them
up with a separate ``create_unique_constraint()`` call, which may
have been occurring. Indexes still get rendered as distinct
``op.create_index()`` calls even when the corresponding table was
created in the same script.
* the inline ``UniqueConstraint`` within ``op.create_table()`` includes
all the options like ``deferrable``, ``initially``, etc. Previously
these weren't rendering.
.. change::
:tags: feature, mssql
Added new argument ``mssql_drop_foreign_key`` to
:meth:`.Operations.drop_column`. Like ``mssql_drop_default``
and ``mssql_drop_check``, will do an inline lookup for a
single foreign key which applies to this column, and drop it.
For a column with more than one FK, you'd still need to explicitly
use :meth:`.Operations.drop_constraint` given the name,
even though only MSSQL has this limitation in the first place.
.. change::
:tags: bug, mssql
The MSSQL backend will add the batch separator (e.g. ``"GO"``)
in ``--sql`` mode after the final ``COMMIT`` statement, to ensure
that statement is also processed in batch mode. Courtesy
Derek Harland.
.. changelog::
:version: 0.6.1
:released: Wed Nov 27 2013
.. change::
:tags: bug, mysql
:tickets: 152
Fixed bug where :func:`.op.alter_column` in the MySQL dialect
would fail to apply quotes to column names that had mixed casing
or spaces.
.. change::
:tags: feature
Expanded the size of the "slug" generated by "revision" to 40
characters, which is also configurable by new field
``truncate_slug_length``; and also split on the word rather than the
character; courtesy Frozenball.
.. change::
:tags: bug
:tickets: 135
Fixed the output wrapping for Alembic message output, so that
we either get the terminal width for "pretty printing" with
indentation, or if not we just output the text as is; in any
case the text won't be wrapped too short.
.. change::
:tags: bug
Fixes to Py3k in-place compatibility regarding output encoding and
related issues; the use of the new io.* package introduced some
incompatibilities on Py2k. These should be resolved due to the
introduction of new adapter types for translating from io.* to Py2k
file types and StringIO types.
Thanks to Javier Santacruz for help with this.
.. change::
:tags: bug
:tickets: 145
Fixed py3k bug where the wrong form of ``next()`` was being called
when using the list_templates command. Courtesy Chris Wilkes.
.. change::
:tags: feature
:tickets: 107
Support for autogeneration detection and rendering of indexes and
unique constraints has been added. The logic goes through some effort
in order to differentiate between true unique constraints and
unique indexes, where there are some quirks on backends like Postgresql.
The effort here in producing the feature and tests is courtesy of IJL.
.. change::
:tags: bug
Fixed bug introduced by new ``include_object`` argument where the
inspected column would be misinterpreted when using a user-defined
type comparison function, causing a KeyError or similar expression-related
error. Fix courtesy Maarten van Schaik.
.. change::
:tags: bug
Added the "deferrable" keyword argument to :func:`.op.create_foreign_key`
so that ``DEFERRABLE`` constraint generation is supported; courtesy
Pedro Romano.
.. change::
:tags: bug
:tickets: 137
Ensured that strings going to stdout go through an encode/decode phase,
so that any non-ASCII characters get to the output stream correctly
in both Py2k and Py3k. Also added source encoding detection using
Mako's parse_encoding() routine in Py2k so that the __doc__ of a
non-ascii revision file can be treated as unicode in Py2k.
.. changelog::
:version: 0.6.0
:released: Fri July 19 2013
.. change::
:tags: feature
:tickets: 101
Added new kw argument to :meth:`.EnvironmentContext.configure`
``include_object``. This is a more flexible version of the
``include_symbol`` argument which allows filtering of columns as well as tables
from the autogenerate process,
and in the future will also work for types, constraints and
other constructs. The fully constructed schema object is passed,
including its name and type as well as a flag indicating if the object
is from the local application metadata or is reflected.
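A minimal sketch of the hook's signature; the excluded table name is
hypothetical::

    def include_object(object, name, type_, reflected, compare_to):
        # skip a throwaway table entirely during autogenerate
        if type_ == "table" and name == "legacy_tmp":
            return False
        return True

This callable would then be passed as ``include_object=include_object``
to :meth:`.EnvironmentContext.configure`.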
.. change::
:tags: feature
The output of the ``alembic history`` command is now
expanded to show information about each change on multiple
lines, including the full top message,
resembling the formatting of git log.
.. change::
:tags: feature
Added :attr:`alembic.config.Config.cmd_opts` attribute,
allows access to the ``argparse`` options passed to the
``alembic`` runner.
.. change::
:tags: feature
:tickets: 120
Added new command line argument ``-x``, allows extra arguments
to be appended to the command line which can be consumed
within an ``env.py`` script by looking at
``context.config.cmd_opts.x``, or more simply a new
method :meth:`.EnvironmentContext.get_x_argument`.
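A minimal ``env.py`` sketch, assuming an invocation like
``alembic -x dbname=testdb upgrade head``::

    from alembic import context

    x_args = context.get_x_argument(as_dictionary=True)
    db_name = x_args.get("dbname", "default_db")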
.. change::
:tags: bug
:tickets: 125
Added support for options like "name" etc. to be rendered
within CHECK constraints in autogenerate. Courtesy
Sok Ann Yap.
.. change::
:tags: misc
Source repository has been moved from Mercurial to Git.
.. change::
:tags: bug
Repaired autogenerate rendering of ForeignKeyConstraint
to include use_alter argument, if present.
.. change::
:tags: feature
Added ``-r`` argument to ``alembic history`` command,
allows specification of ``[start]:[end]`` to view
a slice of history. Accepts revision numbers, symbols
"base", "head", a new symbol "current" representing the
current migration, as well as relative ranges for one
side at a time (i.e. ``-r-5:head``, ``-rcurrent:+3``).
Courtesy Atsushi Odagiri for this feature.
.. change::
:tags: feature
:tickets: 55
Source base is now in-place for Python 2.6 through
3.3, without the need for 2to3. Support for Python 2.5
and below has been dropped. Huge thanks to
Hong Minhee for all the effort on this!
.. changelog::
:version: 0.5.0
:released: Thu Apr 4 2013
.. note::
Alembic 0.5.0 now requires at least
version 0.7.3 of SQLAlchemy to run properly.
Support for 0.6 has been dropped.
.. change::
:tags: feature
:tickets: 76
Added ``version_table_schema`` argument
to :meth:`.EnvironmentContext.configure`,
complements the ``version_table`` argument to
set an optional remote schema for the version
table. Courtesy Christian Blume.
.. change::
:tags: bug, postgresql
:tickets: 32
Fixed format of RENAME for table that includes
schema with Postgresql; the schema name shouldn't
be in the "TO" field.
.. change::
:tags: feature
:tickets: 90
Added ``output_encoding`` option to
:meth:`.EnvironmentContext.configure`,
used with ``--sql`` mode to apply an encoding
to the output stream.
.. change::
:tags: feature
:tickets: 93
Added :meth:`.Operations.create_primary_key`
operation, will generate an ADD CONSTRAINT
for a primary key.
.. change::
:tags: bug, mssql
:tickets: 109
Fixed bug whereby double quoting would be applied
to target column name during an ``sp_rename``
operation.
.. change::
:tags: bug, sqlite, mysql
:tickets: 112
The transactional_ddl flag for the SQLite and MySQL dialects
is set to False. MySQL doesn't support it;
SQLite does, but the current pysqlite driver does not.
.. change::
:tags: feature
:tickets: 115
upgrade and downgrade commands will list the
first line of the docstring next to the
version number. Courtesy Hong Minhee.
.. change::
:tags: feature
Added --head-only option to "alembic current";
will print the current version plus the symbol
"(head)" indicating whether or not this version is the head.
Courtesy Charles-Axel Dein.
.. change::
:tags: bug
:tickets: 110
Autogenerate will render additional table keyword
arguments like "mysql_engine" and others within
op.create_table().
.. change::
:tags: feature
:tickets: 108
The rendering of any construct during autogenerate
can be customized, in particular to allow special rendering
for user-defined column, constraint subclasses, using new
``render_item`` argument to
:meth:`.EnvironmentContext.configure`.
.. change::
:tags: bug
Fixed bug whereby create_index()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
This is the same issue that was fixed for unique
constraints in version 0.3.2.
.. change::
:tags: bug
Worked around a backwards-incompatible regression in Python3.3
regarding argparse; running "alembic" with no arguments
now yields an informative error in py3.3 as with all previous versions.
Courtesy Andrey Antukh.
.. change::
:tags: change
SQLAlchemy 0.6 is no longer supported by Alembic; the minimum version
is 0.7.3, with full support as of 0.7.9.
.. change::
:tags: bug
:tickets: 104
A host of argument name changes within migration
operations for consistency. Keyword arguments
will continue to work on the old name for backwards compatibility,
however required positional arguments will not:
:meth:`.Operations.alter_column` - ``name`` -> ``new_column_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.create_index` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_index` - ``tablename`` -> ``table_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.drop_constraint` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_constraint` - ``type`` -> ``type_`` - old
name will work for backwards compatibility
.. changelog::
:version: 0.4.2
:released: Fri Jan 11 2013
.. change::
:tags: bug, autogenerate
:tickets: 99
Fixed bug where autogenerate would fail if a Column
to be added to a table made use of the ".key" parameter.
.. change::
:tags: bug, sqlite
:tickets: 98
The "implicit" constraint generated by a
type such as Boolean or Enum will not generate an
ALTER statement when run on SQlite, which does not
support ALTER for the purpose of adding/removing
constraints separate from the column def itself.
While SQLite supports adding a CHECK constraint
at the column level, SQLAlchemy would need modification
to support this.
A warning is emitted indicating this
constraint cannot be added in this scenario.
.. change::
:tags: bug
:tickets: 96
Added a workaround to setup.py to prevent
"NoneType" error from occurring when
"setup.py test" is run.
.. change::
:tags: bug
:tickets: 96
Added an append_constraint() step to each
condition within
test_autogenerate:AutogenRenderTest.test_render_fk_constraint_kwarg
if the SQLAlchemy version is less than 0.8, as ForeignKeyConstraint
does not auto-append prior to 0.8.
.. change::
:tags: feature
:tickets: 96
Added a README.unittests with instructions for running the test
suite fully.
.. changelog::
:version: 0.4.1
:released: Sun Dec 9 2012
.. change::
:tags: bug
:tickets: 92
Added support for autogenerate render of
ForeignKeyConstraint options onupdate,
ondelete, initially, and deferred.
.. change::
:tags: bug
:tickets: 94
Autogenerate will include "autoincrement=False"
in the rendered table metadata
if this flag was set to false on the source
:class:`.Column` object.
.. change::
:tags: feature
:tickets: 66
Explicit error message describing the case
when downgrade --sql is used without specifying
specific start/end versions.
.. change::
:tags: bug
:tickets: 81
Removed erroneous "emit_events" attribute
from operations.create_table() documentation.
.. change::
:tags: bug
:tickets:
Fixed the minute component in file_template
which returned the month part of the create date.
.. changelog::
:version: 0.4.0
:released: Mon Oct 01 2012
.. change::
:tags: feature
:tickets: 33
Support for tables in alternate schemas
has been added fully to all operations, as well as to
the autogenerate feature. When using autogenerate,
specifying the flag include_schemas=True to
Environment.configure() will also cause autogenerate
to scan all schemas located by Inspector.get_schema_names(),
which is supported by *some* (but not all)
SQLAlchemy dialects including Postgresql.
*Enormous* thanks to Bruno Binet for a huge effort
in implementing as well as writing tests.
.. change::
:tags: feature
:tickets: 70
The command line runner has been organized
into a reusable CommandLine object, so that other
front-ends can re-use the argument parsing built
in.
.. change::
:tags: feature
:tickets: 43
Added "stdout" option to Config, provides
control over where the "print" output of commands like
"history", "init", "current" etc. are sent.
.. change::
:tags: bug
:tickets: 71
Fixed the "multidb" template which was badly out
of date. It now generates revision files using
the configuration to determine the different
upgrade_<xyz>() methods needed as well, instead of
needing to hardcode these. Huge thanks to
BryceLohr for doing the heavy lifting here.
.. change::
:tags: bug
:tickets: 72
Fixed the regexp that was checking for .py files
in the version directory to allow any .py file through.
Previously it was doing some kind of defensive checking,
probably from some early notions of how this directory
works, that was prohibiting various filename patterns
such as those which begin with numbers.
.. change::
:tags: bug
:tickets:
Fixed MySQL rendering for server_default which
didn't work if the server_default was a generated
SQL expression. Courtesy Moriyoshi Koizumi.
.. change::
:tags: feature
:tickets:
Added support for alteration of MySQL
columns that have AUTO_INCREMENT, as well as enabling
this flag. Courtesy Moriyoshi Koizumi.
.. changelog::
:version: 0.3.6
:released: Wed Aug 15 2012
.. change::
:tags: feature
:tickets: 27
Added include_symbol option to
EnvironmentContext.configure(),
specifies a callable which will include/exclude tables
in their entirety from the autogeneration process
based on name.
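A minimal sketch of the callable's signature; the schema name is
hypothetical::

    def include_symbol(tablename, schema):
        # exclude all tables in a reporting schema from autogenerate
        return schema != "reporting"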
.. change::
:tags: feature
:tickets: 59
Added year, month, day, hour, minute, second
variables to file_template.
.. change::
:tags: feature
:tickets:
Added 'primary' to the list of constraint types
recognized for MySQL drop_constraint().
.. change::
:tags: feature
:tickets:
Added --sql argument to the "revision" command,
for the use case where the "revision_environment"
config option is being used but SQL access isn't
desired.
.. change::
:tags: bug
:tickets:
Repaired create_foreign_key() for
self-referential foreign keys, which weren't working
at all.
.. change::
:tags: bug
:tickets: 63
'alembic' command reports an informative
error message when the configuration is missing
the 'script_directory' key.
.. change::
:tags: bug
:tickets: 62
Fixes made to the constraints created/dropped
alongside so-called "schema" types such as
Boolean and Enum. The create/drop constraint logic
does not kick in when using a dialect that doesn't
use constraints for these types, such as postgresql,
even when existing_type is specified to
alter_column(). Additionally, the constraints
are not affected if existing_type is passed but
type\_ is not, i.e. there's no net change
in type.
.. change::
:tags: bug
:tickets: 66
Improved error message when specifying
non-ordered revision identifiers to cover
the case when the "higher" rev is None,
improved message overall.
.. changelog::
:version: 0.3.5
:released: Sun Jul 08 2012
.. change::
:tags: bug
:tickets: 31
Fixed issue whereby reflected server defaults
wouldn't be quoted correctly; uses repr() now.
.. change::
:tags: bug
:tickets: 58
Fixed issue whereby when autogenerate would
render create_table() on the upgrade side for a
table that has a Boolean type, an unnecessary
CheckConstraint() would be generated.
.. change::
:tags: feature
:tickets:
Implemented SQL rendering for
CheckConstraint() within autogenerate upgrade,
including for literal SQL as well as SQL Expression
Language expressions.
.. changelog::
:version: 0.3.4
:released: Sat Jun 02 2012
.. change::
:tags: bug
:tickets:
Fixed command-line bug introduced by the
"revision_environment" feature.
.. changelog::
:version: 0.3.3
:released: Sat Jun 02 2012
.. change::
:tags: feature
:tickets:
New config argument
"revision_environment=true" causes env.py to
be run unconditionally when the "revision" command
is run, to support script.py.mako templates with
dependencies on custom "template_args".
.. change::
:tags: feature
:tickets:
Added "template_args" option to configure()
so that an env.py can add additional arguments
to the template context when running the
"revision" command. This requires either --autogenerate
or the configuration directive "revision_environment=true".
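A minimal sketch from an env.py (the argument name and value are invented for illustration)::

    context.configure(
        # ...
        template_args={"migration_author": "platform-team"},
    )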
.. change::
:tags: bug
:tickets: 44
Added "type" argument to op.drop_constraint(),
and implemented full constraint drop support for
MySQL. CHECK and undefined raise an error.
MySQL needs the constraint type
in order to emit a DROP CONSTRAINT.
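For example (note that current releases spell the parameter ``type_``; the constraint and table names are illustrative)::

    op.drop_constraint("fk_user_parent", "user", type_="foreignkey")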
.. change::
:tags: feature
:tickets: 34
Added version_table argument to
EnvironmentContext.configure(), allowing for the
configuration of the version table name.
.. change::
:tags: feature
:tickets:
Added support for "relative" migration
identifiers, i.e. "alembic upgrade +2",
"alembic downgrade -1". Courtesy
Atsushi Odagiri for this feature.
.. change::
:tags: bug
:tickets: 49
Fixed bug whereby directories inside of
the template directories, such as __pycache__
on Pypy, would mistakenly be interpreted as
files which are part of the template.
.. changelog::
:version: 0.3.2
:released: Mon Apr 30 2012
.. change::
:tags: feature
:tickets: 40
Basic support for Oracle added,
courtesy shgoh.
.. change::
:tags: feature
:tickets:
Added support for UniqueConstraint
in autogenerate, courtesy Atsushi Odagiri
.. change::
:tags: bug
:tickets:
Fixed support of schema-qualified
ForeignKey target in column alter operations,
courtesy Alexander Kolov.
.. change::
:tags: bug
:tickets:
Fixed bug whereby create_unique_constraint()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
.. changelog::
:version: 0.3.1
:released: Sat Apr 07 2012
.. change::
:tags: bug
:tickets: 41
bulk_insert() fixes:
1. bulk_insert() operation was
not working, most likely since the 0.2 series,
when used with an engine.
2. Repaired bulk_insert() to complete when
used against a lower-case-t table and executing
with only one set of parameters, working
around SQLAlchemy bug #2461 in this regard.
3. bulk_insert() uses "inline=True" so that phrases
like RETURNING and such don't get invoked for
single-row bulk inserts.
4. bulk_insert() will check that you're passing
a list of dictionaries in, and raises TypeError
if not detected; a sketch of the expected form follows.
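As a reference point, a minimal call of the expected form (table and values are invented)::

    from alembic import op
    import sqlalchemy as sa

    accounts = sa.table(
        "accounts",
        sa.column("id", sa.Integer),
        sa.column("name", sa.String),
    )
    op.bulk_insert(
        accounts,
        [
            {"id": 1, "name": "alice"},
            {"id": 2, "name": "bob"},
        ],
    )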
.. changelog::
:version: 0.3.0
:released: Thu Apr 05 2012
.. change::
:tags: general
:tickets:
The focus of 0.3 is to clean up
and more fully document the public API of Alembic,
including better accessors on the MigrationContext
and ScriptDirectory objects. Methods that are
not considered to be public on these objects have
been underscored, and methods which should be public
have been cleaned up and documented, including:
MigrationContext.get_current_revision()
ScriptDirectory.iterate_revisions()
ScriptDirectory.get_current_head()
ScriptDirectory.get_heads()
ScriptDirectory.get_base()
ScriptDirectory.generate_revision()
.. change::
:tags: feature
:tickets:
Added a bit of autogenerate to the
public API in the form of the function
alembic.autogenerate.compare_metadata.
.. changelog::
:version: 0.2.2
:released: Mon Mar 12 2012
.. change::
:tags: feature
:tickets:
Informative error message when op.XYZ
directives are invoked at module import time.
.. change::
:tags: bug
:tickets: 35
Fixed inappropriate direct call to
util.err() and therefore sys.exit()
when Config failed to locate the
config file within library usage.
.. change::
:tags: bug
:tickets:
Autogenerate will emit CREATE TABLE
and DROP TABLE directives according to
foreign key dependency order.
.. change::
:tags: bug
:tickets:
implement 'tablename' parameter on
drop_index() as this is needed by some
backends.
.. change::
:tags: feature
:tickets:
Added execution_options parameter
to op.execute(), will call execution_options()
on the Connection before executing.
The immediate use case here is to allow
access to the new no_parameters option
in SQLAlchemy 0.7.6, which allows
some DBAPIs (psycopg2, MySQLdb) to allow
percent signs straight through without
escaping, thus providing cross-compatible
operation with DBAPI execution and
static script generation.
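A brief sketch of the intended use (the SQL text is invented)::

    op.execute(
        "UPDATE detail SET note = 'complete 100%'",
        execution_options={"no_parameters": True},
    )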
.. change::
:tags: bug
:tickets:
setup.py won't install argparse if on
Python 2.7/3.2
.. change::
:tags: feature
:tickets: 29
script_location can be interpreted
by pkg_resources.resource_filename(), if
it is a non-absolute URI that contains
colons. This scheme is the same
one used by Pyramid.
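So, assuming a package named ``myapp`` that contains a ``migrations`` directory, alembic.ini could refer to it as::

    script_location = myapp:migrations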
.. change::
:tags: feature
:tickets:
added missing support for
onupdate/ondelete flags for
ForeignKeyConstraint, courtesy Giacomo Bagnoli
.. change::
:tags: bug
:tickets: 30
Fixed a regression regarding an autogenerate
error message, as well as various glitches
in the Pylons sample template. The Pylons sample
template now requires that you tell it where to
get the Engine from. Courtesy Marcin Kuzminski.
.. change::
:tags: bug
:tickets:
drop_index() ensures a dummy column
is added when it calls "Index", as SQLAlchemy
0.7.6 will warn on an index with no column names.
.. changelog::
:version: 0.2.1
:released: Tue Jan 31 2012
.. change::
:tags: bug
:tickets: 26
Fixed the generation of CHECK constraint,
regression from 0.2.0
.. changelog::
:version: 0.2.0
:released: Mon Jan 30 2012
.. change::
:tags: feature
:tickets: 19
API rearrangement allows everything
Alembic does to be represented by contextual
objects, including EnvironmentContext,
MigrationContext, and Operations. Other
libraries and applications can now use
things like "alembic.op" without relying
upon global configuration variables.
The rearrangement was done such that
existing migrations should be OK,
as long as they use the pattern
of "from alembic import context" and
"from alembic import op", as these
are now contextual objects, not modules.
.. change::
:tags: feature
:tickets: 24
The naming of revision files can
now be customized to be some combination
of "rev id" and "slug", the latter of which
is based on the revision message.
By default, the pattern "<rev>_<slug>"
is used for new files. New script files
should include the "revision" variable
for this to work, which is part of
the newer script.py.mako scripts.
.. change::
:tags: bug
:tickets: 25
env.py templates call
connection.close() to better support
programmatic usage of commands; use
NullPool in conjunction with create_engine()
as well so that no connection resources
remain afterwards.
.. change::
:tags: bug
:tickets: 22
Fixed the config.main() function to honor
the arguments passed; removed the no-longer-used
"scripts/alembic" as setuptools creates this
for us.
.. change::
:tags: bug
:tickets:
Fixed alteration of column type on
MSSQL to not include the keyword "TYPE".
.. change::
:tags: feature
:tickets: 23
Can create alembic.config.Config
with no filename, use set_main_option()
to add values. Also added set_section_option()
which will add sections.
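A minimal sketch of programmatic configuration (the option values are illustrative)::

    from alembic.config import Config

    cfg = Config()  # no ini file on disk
    cfg.set_main_option("script_location", "myapp:migrations")
    cfg.set_main_option("sqlalchemy.url", "sqlite:///app.db")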
.. changelog::
:version: 0.1.1
:released: Wed Jan 04 2012
.. change::
:tags: bug
:tickets:
Clean up file write operations so that
file handles are closed.
.. change::
:tags: feature
:tickets:
PyPy is supported.
.. change::
:tags: feature
:tickets:
Python 2.5 is supported, needs
__future__.with_statement
.. change::
:tags: bug
:tickets:
Fix autogenerate so that "pass" is
generated between the two comments
if no net migrations were present.
.. change::
:tags: bug
:tickets: 16
Fix autogenerate bug that prevented
correct reflection of a foreign-key
referenced table in the list of "to remove".
.. change::
:tags: bug
:tickets: 17
Fix bug where create_table() didn't
handle self-referential foreign keys
correctly.
.. change::
:tags: bug
:tickets: 18
Default prefix for autogenerate
directives is "op.", matching the
mako templates.
.. change::
:tags: feature
:tickets: 18
Add alembic_module_prefix argument
to configure() to complement
sqlalchemy_module_prefix.
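A sketch of both prefixes set explicitly to their defaults from an env.py::

    context.configure(
        # ...
        sqlalchemy_module_prefix="sa.",
        alembic_module_prefix="op.",
    )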
.. change::
:tags: bug
:tickets: 14
Fixed quotes not being rendered in
ForeignKeyConstraint during
autogenerate.
.. changelog::
:version: 0.1.0
:released: Wed Nov 30 2011
.. change::
:tags:
:tickets:
Initial release. Status of features:
.. change::
:tags:
:tickets:
Alembic is used in at least one production
environment, but should still be considered
ALPHA LEVEL SOFTWARE as of this release,
particularly in that many features are expected
to be missing / unimplemented. Major API
changes are not anticipated but for the moment
nothing should be assumed.
The author asks that you *please* report all
issues, missing features, workarounds etc.
to the bugtracker.
.. change::
:tags:
:tickets:
Python 3 is supported and has been tested.
.. change::
:tags:
:tickets:
The "Pylons" and "MultiDB" environment templates
have not been directly tested - these should be
considered to be samples to be modified as
needed. Multiple database support itself
is well tested, however.
.. change::
:tags:
:tickets:
Postgresql and MS SQL Server environments
have been tested for several weeks in a production
environment. In particular, some involved workarounds
were implemented to allow fully-automated dropping
of default- or constraint-holding columns with
SQL Server.
.. change::
:tags:
:tickets:
MySQL support has also been implemented to a
basic degree, including accommodation of MySQL's
awkward style of modifying columns.
.. change::
:tags:
:tickets:
Other database environments not included among
those three have *not* been tested, *at all*. This
includes Firebird, Oracle, Sybase. Adding
support for these backends should be
straightforward. Please report all missing/
incorrect behaviors to the bugtracker! Patches
are welcome here but are optional - please just
indicate the exact format expected by the target
database.
.. change::
:tags:
:tickets:
SQLite, as a backend, has almost no support for
schema alterations to existing databases. The author
would strongly recommend that SQLite not be used in
a migration context - just dump your SQLite database
into an intermediary format, then dump it back
into a new schema. For dev environments, the
dev installer should be building the whole DB from
scratch. Or just use Postgresql, which is a much
better database for non-trivial schemas.
Requests for full ALTER support on SQLite should be
reported to SQLite's bug tracker at
http://www.sqlite.org/src/wiki?name=Bug+Reports,
as Alembic will not be implementing the
"rename the table to a temptable then copy the
data into a new table" workaround.
Note that Alembic will at some point offer an
extensible API so that you can implement commands
like this yourself.
.. change::
:tags:
:tickets:
Well-tested directives include add/drop table, add/drop
column, including support for SQLAlchemy "schema"
types which generate additional CHECK
constraints, i.e. Boolean, Enum. Other directives not
included here have *not* been strongly tested
in production, i.e. rename table, etc.
.. change::
:tags:
:tickets:
Both "online" and "offline" migrations, the latter
being generated SQL scripts to hand off to a DBA,
have been strongly production tested against
Postgresql and SQL Server.
.. change::
:tags:
:tickets:
Modification of column type, default status, and
nullability is functional and tested across PG, MSSQL,
and MySQL, but not yet widely tested in production usage.
.. change::
:tags:
:tickets:
Many migration directives are still outright missing, i.e.
create/add sequences, etc. As a workaround,
execute() can be used for those which are missing,
though posting of tickets for new features/missing
behaviors is strongly encouraged.
.. change::
:tags:
:tickets:
Autogenerate feature is implemented and has been
tested, though only a little bit in a production setting.
In particular, detection of type and server
default changes is optional and off by default;
each can also be customized by a callable.
Both features work but can have surprises, particularly
the disparity between BIT/TINYINT and boolean,
which hasn't yet been worked around, as well as
format changes performed by the database on defaults
when it reports back. When enabled, the PG dialect
will execute the two defaults to be compared to
see if they are equivalent. Other backends may
need to do the same thing.
The autogenerate feature only generates
"candidate" commands which must be hand-tailored
in any case, so is still a useful feature and
is safe to use. Please report missing/broken features
of autogenerate! This will be a great feature and
will also improve SQLAlchemy's reflection services.
.. change::
:tags:
:tickets:
Support for non-ASCII table, column and constraint
names is mostly nonexistent. This is also a
straightforward feature add as SQLAlchemy itself
supports unicode identifiers; Alembic itself will
likely need fixes to logging, column identification
by key, etc. for full support here.
==========
Changelog
==========
.. changelog::
:version: 1.12.1
:include_notes_from: unreleased
.. changelog::
:version: 1.12.0
:released: August 31, 2023
.. change::
:tags: bug, operations
:tickets: 1300
Added support for ``op.drop_constraint()`` to support PostgreSQL
``ExcludeConstraint`` objects, as well as other constraint-like objects
that may be present in third party dialects, by resolving the ``type_``
parameter to be ``None`` for this case. Autogenerate has also been
enhanced to exclude the ``type_`` parameter from rendering within this
command when ``type_`` is ``None``. Pull request courtesy David Hills.
.. change::
:tags: bug, commands
:tickets: 1299
Fixed issue where the ``revision_environment`` directive in ``alembic.ini``
was ignored by the ``alembic merge`` command, leading to issues when other
configurational elements depend upon ``env.py`` being invoked within the
command.
.. change::
:tags: bug, autogenerate
:tickets: 1302
Fixed issue where the ``ForeignKeyConstraint.match`` parameter would not be
rendered in autogenerated migrations. Pull request courtesy Asib
Kamalsada.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Change the default value of
:paramref:`.EnvironmentContext.configure.compare_type` to ``True``.
As Alembic's autogenerate for types was dramatically improved in
version 1.4 released in 2020, the type comparison feature is now much
more reliable so is now enabled by default.
.. change::
:tags: feature, autogenerate
:tickets: 1275
Added new feature to the "code formatter" function which allows standalone
executable tools to be run against code, without going through the Python
interpreter. Known as the ``exec`` runner, it complements the existing
``console_scripts`` runner by allowing non-Python tools such as ``ruff`` to
be used. Pull request courtesy Mihail Milushev.
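A hedged alembic.ini sketch of such a hook (the tool and its options are illustrative)::

    [post_write_hooks]
    hooks = ruff
    ruff.type = exec
    ruff.executable = ruff
    ruff.options = check --fix REVISION_SCRIPT_FILENAME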
.. seealso::
:ref:`post_write_hooks_config`
.. changelog::
:version: 1.11.3
:released: August 16, 2023
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 1270
Improved autogenerate compare of expression based indexes on PostgreSQL
to produce fewer wrong detections.
.. change::
:tags: bug, autogenerate
:tickets: 1291
Fixed issue with ``NULLS NOT DISTINCT`` detection in postgresql that
would keep detecting changes in the index or unique constraint.
.. change::
:tags: bug, commands
:tickets: 1273
Added ``encoding="locale"`` setting to the use of Python's
``ConfigParser.read()``, so that a warning is not generated when using the
recently added Python feature ``PYTHONWARNDEFAULTENCODING`` specified in
:pep:`597`. The encoding is passed as the ``"locale"`` string under Python
3.10 and greater, which indicates that the system-level locale should be
used, as was the case already here. Pull request courtesy Kevin Kirsche.
.. changelog::
:version: 1.11.2
:released: August 4, 2023
.. change::
:tags: usecase, typing
:tickets: 1253
Added typing to the default script mako templates.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Added support in autogenerate for ``NULLS NOT DISTINCT`` in
the PostgreSQL dialect.
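On the SQLAlchemy side this corresponds to declarations along these lines (the ``user`` table and index name are invented)::

    sa.Index(
        "uq_user_email",
        user.c.email,
        unique=True,
        postgresql_nulls_not_distinct=True,
    )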
.. change::
:tags: bug
:tickets: 1261
Fixed the format string logged when running a post write hook.
Pull request courtesy of Mathieu Défosse.
.. change::
:tags: feature, operations
:tickets: 151
Added parameters if_exists and if_not_exists for index operations.
Pull request courtesy of Max Adrian.
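A short sketch of the new parameters in use (index and table names are invented)::

    op.create_index("ix_user_email", "user", ["email"], if_not_exists=True)
    op.drop_index("ix_user_email", table_name="user", if_exists=True)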
.. changelog::
:version: 1.11.1
:released: May 17, 2023
.. change::
:tags: bug, autogenerate, regression
:tickets: 1243, 1245
As Alembic 1.11.0 is considered a major release (Alembic does not use
semver, nor does its parent project SQLAlchemy; this has been
:ref:`clarified <versioning_scheme>` in the documentation), change
:ticket:`1130` modified calling signatures for most operations to consider
all optional keyword parameters to be keyword-only arguments, to match what
was always documented and generated by autogenerate. However, two of these
changes were identified as possibly problematic without a more formal
deprecation warning being emitted which were the ``table_name`` parameter
to :meth:`.Operations.drop_index`, which was generated positionally by
autogenerate prior to version 0.6.3 released in 2014, and ``type_`` in
:meth:`.Operations.drop_constraint` and
:meth:`.BatchOperations.drop_constraint`, which was documented positionally
in one example in the batch documentation.
These two signatures have been
restored to allow those particular parameters to be passed positionally. A
future change will include formal deprecation paths (with warnings) for
these arguments where they will again become keyword-only in a future
"Significant Minor" release.
.. change::
:tags: bug, typing
:tickets: 1246
Fixed typing use of :class:`~sqlalchemy.schema.Column` and other
generic SQLAlchemy classes.
.. change::
:tags: bug, typing, regression
:tickets: 1244
Restored the output type of :meth:`.Config.get_section` to include
``Dict[str, str]`` as a potential return type, which had been changed to
immutable ``Mapping[str, str]``. When a section is returned and the default
is not used, a mutable dictionary is returned.
.. changelog::
:version: 1.11.0
:released: May 15, 2023
.. change::
:tags: bug, batch
:tickets: 1237
Added placeholder classes for :class:`~.sqla.Computed` and
:class:`~.sqla.Identity` when older 1.x SQLAlchemy versions are in use,
namely prior to SQLAlchemy 1.3.11 when the :class:`~.sqla.Computed`
construct was introduced. Previously these were set to None, however this
could cause issues with certain codepaths that were using ``isinstance()``
such as one within "batch mode".
.. change::
:tags: bug, batch
:tickets: 1221
Correctly pass previously ignored arguments ``insert_before`` and
``insert_after`` in ``batch_alter_column``
.. change::
:tags: change, py3k
:tickets: 1130
Argument signatures of Alembic operations now enforce keyword-only
arguments as passed as keyword and not positionally, such as
:paramref:`.Operations.create_table.schema`,
:paramref:`.Operations.add_column.type_`, etc.
.. change::
:tags: bug, postgresql
:tickets: 1230
Fix autogenerate issue with PostgreSQL :class:`.ExcludeConstraint`
that included sqlalchemy functions. The function text was previously
rendered as a plain string without surrounding with ``text()``.
.. change::
:tags: bug, mysql, regression
:tickets: 1240
Fixed regression caused by :ticket:`1166` released in version 1.10.0 which
caused MySQL unique constraints with multiple columns to not compare
correctly within autogenerate, due to different sorting rules on unique
constraints vs. indexes, which in MySQL are shared constructs.
.. change::
:tags: misc
:tickets: 1220
Update code snippets within docstrings to use ``black`` code formatting.
Pull request courtesy of James Addison.
.. change::
:tags: bug, typing
:tickets: 1093
Updated stub generator script to also add stubs method definitions for the
:class:`.Operations` class and the :class:`.BatchOperations` class obtained
from :meth:`.Operations.batch_alter_table`. As part of this change, the
class hierarchy of :class:`.Operations` and :class:`.BatchOperations` has
been rearranged on top of a common base class :class:`.AbstractOperations`
in order to type correctly, as :class:`.BatchOperations` uses different
method signatures for operations than :class:`.Operations`.
.. change::
:tags: bug, typing
Repaired the return signatures for :class:`.Operations` that mostly
return ``None``, and were erroneously referring to ``Optional[Table]``
in many cases.
.. change::
:tags: usecase, commands
:tickets: 1109
Added quiet option to the command line, using the ``-q/--quiet``
option. This flag will prevent alembic from logging anything
to stdout.
.. change::
:tags: bug, autogenerate
:tickets: 1178
Modified the autogenerate implementation for comparing "server default"
values from user-defined metadata to not apply any quoting to the value
before comparing it to the server-reported default, except for within
dialect-specific routines as needed. This change will affect the format of
the server default as passed to the
:paramref:`.EnvironmentContext.configure.compare_server_default` hook, as
well as for third party dialects that implement a custom
``compare_server_default`` hook in their alembic impl, to be passed "as is"
and not including additional quoting. Custom implementations which rely
on this quoting should adjust their approach based on observed formatting.
.. change::
:tags: bug, api, autogenerate
:tickets: 1235
Fixed issue where :func:`.autogenerate.render_python_code` function did not
provide a default value for the ``user_module_prefix`` variable, leading to
``NoneType`` errors when autogenerate structures included user-defined
types. Added new parameter
:paramref:`.autogenerate.render_python_code.user_module_prefix` to allow
this to be set as well as to default to ``None``. Pull request courtesy
tangkikodo.
.. change::
:tags: usecase, asyncio
:tickets: 1231
Added :meth:`.AbstractOperations.run_async` to the operation module to
allow running async functions in the ``upgrade`` or ``downgrade`` migration
function when running alembic using an async dialect. This function will
receive as first argument an
:class:`~sqlalchemy.ext.asyncio.AsyncConnection` sharing the transaction
used in the migration context.
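A minimal sketch of the pattern inside a migration file (the helper and SQL are invented)::

    import sqlalchemy as sa
    from alembic import op

    async def _data_fix(connection):
        # receives an AsyncConnection sharing the migration transaction
        await connection.execute(sa.text("UPDATE account SET active = true"))

    def upgrade():
        op.run_async(_data_fix)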
.. changelog::
:version: 1.10.4
:released: April 24, 2023
.. change::
:tags: postgresql, autogenerate, feature
:tickets: 1213
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL sort option, such as ``ASC`` or ``NULLS FIRST``.
The sort options are correctly detected only when defined using the
sqlalchemy modifier functions, such as ``asc()`` or ``nulls_first()``,
or the equivalent methods.
Passing sort options inside the ``postgresql_ops`` dict is not supported.
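That is, autogenerate can now match definitions of roughly this form (the table and index name are invented)::

    sa.Index("ix_account_name", sa.desc(account.c.name))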
.. change::
:tags: bug, operations
:tickets: 1215
Fixed issue where using a directive such as ``op.create_foreign_key()`` to
create a self-referential constraint on a single table where the same
column was present on both sides (e.g. within a composite foreign key)
would produce an error under SQLAlchemy 2.0 and a warning under SQLAlchemy
1.4 indicating that a duplicate column was being added to a table.
.. changelog::
:version: 1.10.3
:released: April 5, 2023
.. change::
:tags: bug, typing
:tickets: 1191, 1201
Fixed various typing issues observed with pyright, including issues
involving the combination of :class:`.Function` and
:meth:`.MigrationContext.begin_transaction`.
.. change::
:tags: bug, autogenerate
:tickets: 1212
Fixed error raised by alembic when running autogenerate after removing
a function based index.
.. changelog::
:version: 1.10.2
:released: March 8, 2023
.. change::
:tags: bug, ops
:tickets: 1196
Fixed regression where Alembic would not run with older SQLAlchemy 1.3
versions prior to 1.3.24 due to a missing symbol. Workarounds have been
applied for older 1.3 versions.
.. changelog::
:version: 1.10.1
:released: March 6, 2023
.. change::
:tags: bug, postgresql
:tickets: 1184
Fixed issue regarding PostgreSQL :class:`.ExcludeConstraint`, where
constraint elements which made use of :func:`.literal_column` could not be
rendered for autogenerate. Additionally, using SQLAlchemy 2.0.5 or greater,
:func:`.text()` constructs are also supported within PostgreSQL
:class:`.ExcludeConstraint` objects for autogenerate render. Pull request
courtesy Jan Katins.
.. change::
:tags: bug, batch, regression
:tickets: 1195
Fixed regression for 1.10.0 where :class:`.Constraint` objects were
suddenly required to have non-None name fields when using batch mode, which
was not previously a requirement.
.. changelog::
:version: 1.10.0
:released: March 5, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1166
Fixed issue in index detection where autogenerate change detection would
consider indexes with the same columns but with different order as equal,
while in general they are not equivalent in how a database will use them.
.. change::
:tags: feature, revisioning
:tickets: 760
Recursive traversal of revision files in a particular revision directory is
now supported, by indicating ``recursive_version_locations = true`` in
alembic.ini. Pull request courtesy ostr00000.
.. change::
:tags: bug, autogenerate, sqlite
:tickets: 1165
Fixed issue where indexes on SQLite which include SQL expressions would not
compare correctly, generating false positives under autogenerate. These
indexes are now skipped, generating a warning, in the same way that
expression-based indexes on PostgreSQL are skipped and generate warnings
when SQLAlchemy 1.x installations are in use. Note that reflection of
SQLite expression-based indexes continues to not yet be supported under
SQLAlchemy 2.0, even though PostgreSQL expression-based indexes have now
been implemented.
.. change::
:tags: bug, mssql
:tickets: 1187
Properly escape constraint name on SQL Server when dropping
a column while specifying ``mssql_drop_default=True`` or
``mssql_drop_check=True`` or ``mssql_drop_foreign_key=True``.
.. change::
:tags: usecase, autogenerate, postgresql
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL expressions, when using SQLAlchemy 2.0; the previous warning
that such indexes were skipped are removed when the new functionality
is in use. When using SQLAlchemy versions prior to the 2.0 series,
the indexes continue to be skipped with a warning.
.. changelog::
:version: 1.9.4
:released: February 16, 2023
.. change::
:tags: bug, mssql
:tickets: 1177
Ongoing fixes for SQL Server server default comparisons under autogenerate,
adjusting for SQL Server's collapsing of whitespace between SQL function
arguments when reporting on a function-based server default, as well as its
arbitrary addition of parentheses within arguments; the approach has now
been made more aggressive by stripping all whitespace, parentheses, and
quoting characters from the two default strings being compared.
.. change::
:tags: bug, postgresql
Fixed PostgreSQL server default comparison to handle SQL expressions
sent as ``text()`` constructs, such as ``text("substring('name', 1, 3)")``,
which previously would raise errors when attempting to run a server-based
comparison.
.. change::
:tags: bug, autogenerate
:tickets: 1180
Removed a mis-use of the
:paramref:`.EnvironmentContext.configure.render_item` callable where the
"server_default" renderer would be erroneously used within the server
default comparison process, which is working against SQL expressions, not
Python code.
.. change::
:tags: bug, commands
Fixed regression introduced in 1.7.0 where the "config" object passed to
the template context when running the :func:`.merge` command
programmatically failed to be correctly populated. Pull request courtesy
Brendan Gann.
.. changelog::
:version: 1.9.3
:released: February 7, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1167
Fixed issue where rendering of user-defined types that then went onto use
the ``.with_variant()`` method would fail to render, if using SQLAlchemy
2.0's version of variants.
.. changelog::
:version: 1.9.2
:released: January 14, 2023
.. change::
:tags: bug, typing
:tickets: 1146, 1147
Fixed typing definitions for :meth:`.EnvironmentContext.get_x_argument`.
Typing stubs are now generated for overloaded proxied methods such as
:meth:`.EnvironmentContext.get_x_argument`.
.. change::
:tags: bug, autogenerate
:tickets: 1152
Fixed regression caused by :ticket:`1145` where the string transformations
applied to server defaults caused expressions such as ``(getdate())`` to no
longer compare as equivalent on SQL Server and other backends.
.. changelog::
:version: 1.9.1
:released: December 23, 2022
.. change::
:tags: bug, autogenerate
:tickets: 1145
Fixed issue where server default compare would not work for string defaults
that contained backslashes, due to mis-rendering of these values when
comparing their contents.
.. change::
:tags: bug, oracle
Implemented basic server default comparison for the Oracle backend;
previously, Oracle's formatting of reflected defaults prevented any
matches from occurring.
.. change::
:tags: bug, sqlite
Adjusted SQLite's compare server default implementation to better handle
defaults with or without parens around them, from both the reflected and
the local metadata side.
.. change::
:tags: bug, mssql
Adjusted SQL Server's compare server default implementation to better
handle defaults with or without parens around them, from both the reflected
and the local metadata side.
.. changelog::
:version: 1.9.0
:released: December 15, 2022
.. change::
:tags: feature, commands
:tickets: 724
Added new Alembic command ``alembic check``. This performs the widely
requested feature of running an "autogenerate" comparison between the
current database and the :class:`.MetaData` that's currently set up for
autogenerate, returning an error code if the two do not match, based on
current autogenerate settings. Pull request courtesy Nathan Louie.
.. seealso::
:ref:`alembic_check`
.. change::
:tags: bug, tests
Fixed issue in tox.ini file where changes in the tox 4.0 series to the
format of "passenv" caused tox to not function correctly, in particular
raising an error as of tox 4.0.6.
.. change::
:tags: bug, typing
:tickets: 1110
Fixed typing issue where :paramref:`.revision.process_revision_directives`
was not fully typed; additionally ensured all ``Callable`` and ``Dict``
arguments to :meth:`.EnvironmentContext.configure` include parameters in
the typing declaration.
Additionally updated the codebase for Mypy 0.990 compliance.
.. changelog::
:version: 1.8.1
:released: July 13, 2022
.. change::
:tags: bug, sqlite
:tickets: 1065
Fixed bug where the SQLite implementation of
:meth:`.Operations.rename_table` would render an explicit schema name for
both the old and new table name, which while is the standard ALTER syntax,
is not accepted by SQLite's syntax which doesn't support a rename across
schemas. In particular, the syntax issue would prevent batch mode from
working for SQLite databases that made use of attached databases (which are
treated as "schemas" in SQLAlchemy).
.. change::
:tags: bug, batch
:tickets: 1021
Added an error raise for the condition where
:meth:`.Operations.batch_alter_table` is used in ``--sql`` mode, where the
operation requires table reflection, as is the case when running against
SQLite without giving it a fixed ``Table`` object. Previously the operation
would fail with an internal error. To get a "move and copy" batch
operation as a SQL script without connecting to a database,
a ``Table`` object should be passed to the
:paramref:`.Operations.batch_alter_table.copy_from` parameter so that
reflection may be skipped.
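A hedged sketch of the ``copy_from`` approach for offline SQL generation (the table definition is invented)::

    import sqlalchemy as sa
    from alembic import op

    account = sa.Table(
        "account",
        sa.MetaData(),
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )

    def upgrade():
        with op.batch_alter_table("account", copy_from=account) as batch_op:
            batch_op.drop_column("name")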
.. changelog::
:version: 1.8.0
:released: May 31, 2022
.. change::
:tags: feature, typing
:tickets: 764
:pep:`484` typing annotations have been added to the ``env.py`` and
revision template files within migration templates. Pull request by Nikita
Sobolev.
.. change::
:tags: usecase, operations
:tickets: 1037
The ``op.drop_table()`` operation directive will now trigger the
``before_drop()`` and ``after_drop()`` DDL event hooks at the table level,
which is similar to how the ``before_create()`` and ``after_create()``
hooks are triggered by the ``op.create_table()`` directive. Note that as
``op.drop_table()`` accepts only a table name and optional schema name, the
``Table`` object received by the event will not have any information within
it other than the table name and schema name.
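For instance, a listener of this form would now also fire for ``op.drop_table()`` (the listener body is invented)::

    import sqlalchemy as sa
    from sqlalchemy import event

    @event.listens_for(sa.Table, "before_drop")
    def _on_before_drop(table, connection, **kw):
        # only the table name and schema are populated on this Table
        print("dropping", table.name)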
.. change::
:tags: installation, changed
:tickets: 1025
Alembic 1.8 now supports Python 3.7 and above.
.. change::
:tags: changed, environment
:tickets: 987
The "Pylons" environment template has been removed as of Alembic 1.8. This
template was based on the very old pre-Pyramid Pylons web framework which
has been long superseded by Pyramid.
.. change::
:tags: bug, revisioning
:tickets: 1026
Fixed issue where a downgrade using a relative revision would
fail in case of multiple branches with a single effectively
head due to interdependencies between revisions.
.. change::
:tags: usecase, commands
:tickets: 1027
Added new token ``epoch`` to the ``file_template`` option, which will
populate the integer epoch as determined by ``int(create_date.timestamp())``.
Pull request courtesy Caio Carvalho.
.. change::
:tags: bug, batch
:tickets: 1034
Fixed issue in batch mode where CREATE INDEX would not use a new column
name in the case of a column rename.
.. changelog::
:version: 1.7.7
:released: March 14, 2022
.. change::
:tags: bug, operations
:tickets: 1004
Fixed issue where using :meth:`.Operations.create_table` in conjunction
with a :class:`.CheckConstraint` that referred to table-bound
:class:`.Column` objects rather than string expressions would be added to
the parent table potentially multiple times, resulting in an incorrect DDL
sequence. Pull request courtesy Nicolas CANIART.
.. change::
:tags: bug, environment
:tickets: 986
The ``logging.fileConfig()`` line in ``env.py`` templates, which is used
to setup Python logging for the migration run, is now conditional on
:attr:`.Config.config_file_name` not being ``None``. Otherwise, the line
is skipped as there is no default logging configuration present.
.. change::
:tags: bug, mssql
:tickets: 977
Fixed bug where an :meth:`.Operations.alter_column` operation would change
a "NOT NULL" column to "NULL" by emitting an ALTER COLUMN statement that
did not specify "NOT NULL". (In the absence of "NOT NULL" T-SQL was
implicitly assuming "NULL"). An :meth:`.Operations.alter_column` operation
that specifies :paramref:`.Operations.alter_column.type` should also
include either :paramref:`.Operations.alter_column.nullable` or
:paramref:`.Operations.alter_column.existing_nullable` to inform Alembic as
to whether the emitted DDL should include "NULL" or "NOT NULL"; a warning
is now emitted if this is missing under this scenario.
.. changelog::
:version: 1.7.6
:released: February 1, 2022
.. change::
:tags: bug, batch, regression
:tickets: 982
Fixed regression where usage of a ``with_variant()`` datatype in
conjunction with the ``existing_type`` option of ``op.alter_column()``
under batch mode would lead to an internal exception.
.. change::
:tags: usecase, commands
:tickets: 964
Add a new command ``alembic ensure_version``, which will ensure that the
Alembic version table is present in the target database, but does not
alter its contents. Pull request courtesy Kai Mueller.
.. change::
:tags: bug, autogenerate
Implemented support for recognizing and rendering SQLAlchemy "variant"
types going forward into SQLAlchemy 2.0, where the architecture of
"variant" datatypes will be changing.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 968
Added a rule to the MySQL impl so that the translation between JSON /
LONGTEXT is accommodated by autogenerate, treating LONGTEXT from the server
as equivalent to an existing JSON in the model.
.. change::
:tags: mssql
Removed a warning raised by SQLAlchemy when dropping constraints
on MSSQL regarding statement caching.
.. changelog::
:version: 1.7.5
:released: November 11, 2021
.. change::
:tags: bug, tests
Adjustments to the test suite to accommodate for error message changes
occurring as of SQLAlchemy 1.4.27.
.. changelog::
:version: 1.7.4
:released: October 6, 2021
.. change::
:tags: bug, regression
:tickets: 934
Fixed a regression that prevented the use of post write hooks
on Python versions lower than 3.9.
.. change::
:tags: bug, environment
:tickets: 944
Fixed issue where the :meth:`.MigrationContext.autocommit_block` feature
would fail to function when using a SQLAlchemy engine using 2.0 future
mode.
.. changelog::
:version: 1.7.3
:released: September 17, 2021
.. change::
:tags: bug, mypy
:tickets: 914
Fixed type annotations for the "constraint_name" argument of operations
``create_primary_key()``, ``create_foreign_key()``. Pull request courtesy
TilmanK.
.. changelog::
:version: 1.7.2
:released: September 17, 2021
.. change::
:tags: bug, typing
:tickets: 900
Added missing attributes from context stubs.
.. change::
:tags: bug, mypy
:tickets: 897
Fixed an import in one of the .pyi files that was triggering an
assertion error in some versions of mypy.
.. change::
:tags: bug, regression, ops
:tickets: 920
Fixed issue where registration of custom ops was prone to failure due to
the registration process running ``exec()`` on generated code that as of
the 1.7 series includes pep-484 annotations, which in the case of end user
code would result in name resolution errors when the exec occurs. The logic
in question has been altered so that the annotations are rendered as
forward references so that the ``exec()`` can proceed.
.. changelog::
:version: 1.7.1
:released: August 30, 2021
.. change::
:tags: bug, installation
:tickets: 893
Corrected "universal wheel" directive in setup.cfg so that building a wheel
does not target Python 2. The PyPI files index for 1.7.0 was corrected
manually. Pull request courtesy layday.
.. change::
:tags: bug, pep484
:tickets: 895
Fixed issue in generated .pyi files where default values for ``Optional``
arguments were missing, thereby causing mypy to consider them as required.
.. change::
:tags: bug, regression, batch
:tickets: 896
Fixed regression in batch mode due to :ticket:`883` where the "auto" mode
of batch would fail to accommodate any additional migration directives
beyond encountering an ``add_column()`` directive, due to a mis-application
of the conditional logic that was added as part of this change, leading to
"recreate" mode not being used in cases where it is required for SQLite
such as for unique constraints.
.. changelog::
:version: 1.7.0
:released: August 30, 2021
.. change::
:tags: bug, operations
:tickets: 879
Fixed regression due to :ticket:`803` where the ``.info`` and ``.comment``
attributes of ``Table`` would be lost inside of the :class:`.DropTableOp`
class, which when "reversed" into a :class:`.CreateTableOp` would then have
lost these elements. Pull request courtesy Nicolas CANIART.
.. change::
:tags: feature, environment
:tickets: 842
Enhance ``version_locations`` parsing to handle paths containing spaces.
The new configuration option ``version_path_separator`` specifies the
character to use when splitting the ``version_locations`` string. The
default for new configurations is ``version_path_separator = os``,
which will use ``os.pathsep`` (e.g., ``;`` on Windows).
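For example, in alembic.ini (the paths are invented)::

    version_locations = alembic/versions;extra migrations/versions
    version_path_separator = ;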
.. change::
:tags: installation, changed
Alembic 1.7 now supports Python 3.6 and above; support for prior versions
including Python 2.7 has been dropped.
.. change::
:tags: bug, sqlite, batch
:tickets: 883
Batch "auto" mode will now select for "recreate" if the ``add_column()``
operation is used on SQLite, and the column itself meets the criteria for
SQLite where ADD COLUMN is not allowed, in this case a functional or
parenthesized SQL expression or a ``Computed`` (i.e. generated) column.
.. change::
:tags: changed, installation
:tickets: 674
Make the ``python-dateutil`` library an optional dependency.
This library is only required if the ``timezone`` option
is used in the Alembic configuration.
An extra require named ``tz`` is available with
``pip install alembic[tz]`` to install it.
.. change::
:tags: bug, commands
:tickets: 856
Re-implemented the ``python-editor`` dependency as a small internal
function to avoid the need for external dependencies.
.. change::
:tags: usecase, batch
:tickets: 884
Named CHECK constraints are now supported by batch mode, and will
automatically be part of the recreated table assuming they are named. They
also can be explicitly dropped using ``op.drop_constraint()``. For
"unnamed" CHECK constraints, these are still skipped as they cannot be
distinguished from the CHECK constraints that are generated by the
``Boolean`` and ``Enum`` datatypes.
Note that this change may require adjustments to migrations that drop or
rename columns which feature an associated named check constraint, such
that an additional ``op.drop_constraint()`` directive should be added for
that named constraint as there will no longer be an associated column
for it; for the ``Boolean`` and ``Enum`` datatypes, an ``existing_type``
keyword may be passed to ``BatchOperations.drop_constraint`` as well.
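A hedged sketch of dropping such a named constraint in batch mode (the names are invented)::

    with op.batch_alter_table("account") as batch_op:
        batch_op.drop_constraint("ck_account_balance_positive", type_="check")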
.. seealso::
:ref:`batch_schematype_constraints`
:ref:`batch_check_constraints`
.. change::
:tags: changed, installation
:tickets: 885
The dependency on ``pkg_resources`` which is part of ``setuptools`` has
been removed, so there is no longer any runtime dependency on
``setuptools``. The functionality has been replaced with
``importlib.metadata`` and ``importlib.resources`` which are both part of
Python std.lib, or via the PyPI dependency ``importlib-metadata`` for Python
version < 3.8 and ``importlib-resources`` for Python version < 3.9
(while importlib.resources was added to Python in 3.7, it did not include
the "files" API until 3.9).
.. change::
:tags: feature, tests
:tickets: 855
Created a "test suite" similar to the one for SQLAlchemy, allowing
developers of third-party dialects to test their code against a set of
Alembic tests that have been specially selected to exercise
back-end database operations. At the time of release,
third-party dialects that have adopted the Alembic test suite to verify
compatibility include
`CockroachDB <https://pypi.org/project/sqlalchemy-cockroachdb/>`_ and
`SAP ASE (Sybase) <https://pypi.org/project/sqlalchemy-sybase/>`_.
.. change::
:tags: bug, postgresql
:tickets: 874
Fixed issue where usage of the PostgreSQL ``postgresql_include`` option
within a :meth:`.Operations.create_index` would raise a KeyError, as the
additional column(s) need to be added to the table object used by the
construct internally. The issue is equivalent to the SQL Server issue fixed
in :ticket:`513`. Pull request courtesy Steven Bronson.
.. change::
:tags: feature, general
pep-484 type annotations have been added throughout the library.
Additionally, stub .pyi files have been added for the "dynamically"
generated Alembic modules ``alembic.op`` and ``alembic.config``, which
include complete function signatures and docstrings, so that the functions
in these namespaces will have both IDE support (vscode, pycharm, etc) as
well as support for typing tools like Mypy. The files themselves are
statically generated from their source functions within the source tree.
.. changelog::
:version: 1.6.5
:released: May 27, 2021
.. change::
:tags: bug, autogenerate
:tickets: 849
Fixed issue where dialect-specific keyword arguments within the
:class:`.DropIndex` operation directive would not render in the
autogenerated Python code. As support was improved for adding dialect
specific arguments to directives as part of :ticket:`803`, in particular
arguments such as "postgresql_concurrently" which apply to the actual
create/drop of the index, support was needed for these to render even in a
drop index operation. Pull request courtesy Jet Zhou.
.. changelog::
:version: 1.6.4
:released: May 24, 2021
.. change::
:tags: bug, regression, op directives
:tickets: 848
Fixed regression caused by just fixed :ticket:`844` that scaled back the
filter for ``unique=True/index=True`` too far such that these directives no
longer worked for the ``op.create_table()`` op, this has been fixed.
.. changelog::
:version: 1.6.3
:released: May 21, 2021
.. change::
:tags: bug, regression, autogenerate
:tickets: 844
Fixed 1.6-series regression where ``UniqueConstraint`` and to a lesser
extent ``Index`` objects would be doubled up in the generated model when
the ``unique=True`` / ``index=True`` flags were used.
.. change::
:tags: bug, autogenerate
:tickets: 839
Fixed a bug where paths defined in post-write hook options
would be wrongly escaped in non posix environment (Windows).
.. change::
:tags: bug, regression, versioning
:tickets: 843
Fixed regression where a revision file that contained its own down revision
as a dependency would cause an endless loop in the traversal logic.
.. changelog::
:version: 1.6.2
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 839
Fixed additional regression nearly the same as that of :ticket:`838` just
released in 1.6.1 but within a slightly different codepath, where "alembic
downgrade head" (or equivalent) would fail instead of iterating no
revisions.
.. changelog::
:version: 1.6.1
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 838
Fixed regression in new revisioning traversal where "alembic downgrade
base" would fail if the database itself were clean and unversioned;
additionally repairs the case where downgrade would fail if attempting
to downgrade to the current head that is already present.
.. changelog::
:version: 1.6.0
:released: May 3, 2021
.. change::
:tags: bug, autogenerate
:tickets: 803
Refactored the implementation of :class:`.MigrateOperation` constructs such
as :class:`.CreateIndexOp`, :class:`.CreateTableOp`, etc. so that they no
longer rely upon maintaining a persistent version of each schema object
internally; instead, the state variables of each operation object will be
used to produce the corresponding construct when the operation is invoked.
The rationale is so that environments which make use of
operation-manipulation schemes such as those discussed in
:ref:`autogen_rewriter` are better supported, allowing end-user code to
manipulate the public attributes of these objects which will then be
expressed in the final output, an example is
``some_create_index_op.kw["postgresql_concurrently"] = True``.
Previously, these objects when generated from autogenerate would typically
hold onto the original, reflected element internally without honoring the
other state variables of each construct, preventing the public API from
working.
.. change::
:tags: bug, environment
:tickets: 829
Fixed regression caused by the SQLAlchemy 1.4/2.0 compatibility switch
where calling ``.rollback()`` or ``.commit()`` explicitly within the
``context.begin_transaction()`` context manager would cause it to fail when
the block ended, as it did not expect that the transaction was manually
closed.
.. change::
:tags: bug, autogenerate
:tickets: 827
Improved the rendering of ``op.add_column()`` operations when adding
multiple columns to an existing table, so that the order of these
statements matches the order in which the columns were declared in the
application's table metadata. Previously the added columns were being
sorted alphabetically.
.. change::
:tags: feature, autogenerate
:tickets: 819
Fix the documentation regarding the default command-line argument position of
the revision script filename within the post-write hook arguments. Implement a
``REVISION_SCRIPT_FILENAME`` token, enabling the position to be changed. Switch
from ``str.split()`` to ``shlex.split()`` for more robust command-line argument
parsing.
.. change::
:tags: feature
:tickets: 822
Implement a ``.cwd`` (current working directory) suboption for post-write hooks
(of type ``console_scripts``). This is useful for tools like pre-commit, which
rely on the working directory to locate the necessary config files. Add
pre-commit as an example to the documentation. Minor change: rename some variables
from ticket #819 to improve readability.
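A hedged alembic.ini sketch combining the hook with the new suboption (hook name and paths are illustrative)::

    [post_write_hooks]
    hooks = black
    black.type = console_scripts
    black.entrypoint = black
    black.options = REVISION_SCRIPT_FILENAME
    black.cwd = %(here)s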
.. change::
:tags: bug, versioning
:tickets: 765, 464
The algorithm used for calculating downgrades/upgrades/iterating
revisions has been rewritten, to resolve ongoing issues of branches
not being handled consistently particularly within downgrade operations,
as well as for overall clarity and maintainability. This change includes
that a deprecation warning is emitted if an ambiguous command such
as "downgrade -1" when multiple heads are present is given.
In particular, the change implements a long-requested use case of allowing
downgrades of a single branch to a branchpoint.
Huge thanks to Simon Bowly for their impressive efforts in successfully
tackling this very difficult problem.
.. change::
:tags: bug, batch
:tickets: 799
Added missing ``batch_op.create_table_comment()``,
``batch_op.drop_table_comment()`` directives to batch ops.
.. changelog::
:version: 1.5.8
:released: March 23, 2021
.. change::
:tags: bug, environment
:tickets: 816
Fixed regression caused by SQLAlchemy 1.4 where the "alembic current"
command would fail due to changes in the ``URL`` object.
.. changelog::
:version: 1.5.7
:released: March 11, 2021
.. change::
:tags: bug, autogenerate
:tickets: 813
Adjusted the recently added
:paramref:`.EnvironmentContext.configure.include_name` hook to accommodate
for additional object types such as "views" that don't have a parent table,
to support third party recipes and extensions. Pull request courtesy Oliver
Rice.
.. changelog::
:version: 1.5.6
:released: March 5, 2021
.. change::
:tags: bug, mssql, operations
:tickets: 812
Fixed bug where the "existing_type" parameter, which the MSSQL dialect
requires in order to change the nullability of a column in the absence of
also changing the column type, would cause an ALTER COLUMN operation to
incorrectly render a second ALTER statement without the nullability if a
new type were also present, as the MSSQL-specific contract did not
anticipate all three of "nullability", ``"type_"`` and "existing_type" being
sent at the same time.
.. change::
:tags: template
:tickets: 805
Add async template to Alembic to bootstrap environments that use
async DBAPI. Updated the cookbook to include a migration guide
on how to adapt an existing environment for use with async DBAPI drivers.
.. changelog::
:version: 1.5.5
:released: February 20, 2021
.. change::
:tags: bug
Adjusted the use of SQLAlchemy's ".copy()" internals to use "._copy()"
for version 1.4.0, as this method is being renamed.
.. change::
:tags: bug, environment
:tickets: 797
Added new config file option ``prepend_sys_path``, which is a series of
paths that will be prepended to sys.path; the default value in newly
generated alembic.ini files is ".". This fixes a long-standing issue
where for some reason running the alembic command line would not place the
local "." path in sys.path, meaning an application locally present in "."
and importable through normal channels, e.g. python interpreter, pytest,
etc. would not be located by Alembic, even though the ``env.py`` file is
loaded relative to the current path when ``alembic.ini`` contains a
relative path. To enable for existing installations, add the option to the
alembic.ini file as follows::
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
.. seealso::
:ref:`installation` - updated documentation reflecting that local
installation of the project is not necessary if running the Alembic cli
from the local path.
.. changelog::
:version: 1.5.4
:released: February 3, 2021
.. change::
:tags: bug, versioning
:tickets: 789
Fixed bug in versioning model where a downgrade across a revision with a
dependency on another branch, yet an ancestor is also dependent on that
branch, would produce an erroneous state in the alembic_version table,
making upgrades impossible without manually repairing the table.
.. changelog::
:version: 1.5.3
:released: January 29, 2021
.. change::
:tags: bug, autogenerate
:tickets: 786
Changed the default ordering of "CREATE" and "DROP" statements for indexes and
unique constraints within the autogenerate process, so that for example in
an upgrade() operation, a particular index or constraint that is to be
replaced such as for a casing convention change will not produce any naming
conflicts. For foreign key constraint objects, this is already how
constraints are ordered, and for table objects, users would normally want
to use :meth:`.Operations.rename_table` in any case.
.. change::
:tags: bug, autogenerate, mssql
:tickets: 787
Fixed assorted autogenerate issues with SQL Server:
* ignore default reflected identity on primary_key columns
* improve server default comparison
.. change::
:tags: bug, mysql, autogenerate
:tickets: 788
Fixed issue where autogenerate rendering of ``op.alter_column()`` would
fail to include MySQL ``existing_nullable=False`` if the column were part
of a primary key constraint within the table metadata.
.. changelog::
:version: 1.5.2
:released: January 20, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 784
Fixed regression where new "loop detection" feature introduced in
:ticket:`757` produced false positives for revision names that have
overlapping substrings between revision number and down revision and/or
dependency, if the downrev/dependency were not in sequence form.
.. change::
:tags: bug, environment
:tickets: 782
Fixed regression where Alembic would fail to create a transaction properly
if the :class:`sqlalchemy.engine.Connection` were a so-called "branched"
connection, that is, one where the ``.connect()`` method had been called to
create a "sub" connection.
.. changelog::
:version: 1.5.1
:released: January 19, 2021
.. change::
:tags: bug, installation, commands
:tickets: 780
Fixed installation issue where the "templates" directory was not being
installed, preventing commands like "list_templates" and "init" from
working.
.. changelog::
:version: 1.5.0
:released: January 18, 2021
.. change::
:tags: usecase, operations
:tickets: 730
Added support for rendering of "identity" elements on
:class:`.Column` objects, supported in SQLAlchemy via
the :class:`.Identity` element introduced in version 1.4.
Adding columns with identity is supported on PostgreSQL,
MSSQL and Oracle. Changing the identity options or removing
it is supported only on PostgreSQL and Oracle.
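On the model side this corresponds to columns declared roughly like (the names are invented)::

    sa.Column("id", sa.Integer, sa.Identity(start=1), primary_key=True)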
.. change::
:tags: changed, environment
To accommodate SQLAlchemy 1.4 and 2.0, the migration model now no longer
assumes that the SQLAlchemy Connection will autocommit an individual
operation. This essentially means that for databases that use
non-transactional DDL (pysqlite current driver behavior, MySQL), there is
still a BEGIN/COMMIT block that will surround each individual migration.
Databases that support transactional DDL should continue to have the
same flow, either per migration or per-entire run, depending on the
value of the :paramref:`.Environment.configure.transaction_per_migration`
flag.
.. change::
:tags: changed, environment
A :class:`.CommandError` is raised if a ``sqlalchemy.engine.Engine`` is
passed to the :meth:`.MigrationContext.configure` method instead of a
``sqlalchemy.engine.Connection`` object. Previously, this would be a
warning only.
.. change::
:tags: bug, operations
:tickets: 753
Modified the ``add_column()`` operation such that the ``Column`` object in
use is shallow copied to a new instance if that ``Column`` is already
attached to a ``table()`` or ``Table``. This accommodates for the change
made in SQLAlchemy issue #5618 which prohibits a ``Column`` from being
associated with multiple ``table()`` objects. This resumes support for
using a ``Column`` inside of an Alembic operation that already refers to a
parent ``table()`` or ``Table`` as well as allows operation objects just
autogenerated to work.
.. change::
:tags: feature, autogenerate
:tickets: 650
Added new hook :paramref:`.EnvironmentContext.configure.include_name`,
which complements the
:paramref:`.EnvironmentContext.configure.include_object` hook by providing
a means of preventing objects of a certain name from being autogenerated
**before** the SQLAlchemy reflection process takes place, and notably
includes explicit support for passing each schema name when
:paramref:`.EnvironmentContext.configure.include_schemas` is set to True.
This is most important especially for environments that make use of
:paramref:`.EnvironmentContext.configure.include_schemas` where schemas are
actually databases (e.g. MySQL) in order to prevent reflection sweeps of
the entire server.
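A minimal sketch of such a hook (the schema names are invented)::

    def include_name(name, type_, parent_names):
        if type_ == "schema":
            return name in ["public", "reporting"]
        return True

    context.configure(
        # ...
        include_schemas=True,
        include_name=include_name,
    )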
.. seealso::
:ref:`autogenerate_include_hooks` - new documentation section
.. change::
:tags: removed, autogenerate
The long deprecated
:paramref:`.EnvironmentContext.configure.include_symbol` hook is removed.
The :paramref:`.EnvironmentContext.configure.include_object`
and :paramref:`.EnvironmentContext.configure.include_name`
hooks both achieve the goals of this hook.
.. change::
:tags: bug, autogenerate
:tickets: 721
Added rendering for the ``Table.prefixes`` element to autogenerate so that
the rendered Python code includes these directives. Pull request courtesy
Rodrigo Ce Moretto.
.. change::
:tags: bug, batch
:tickets: 761
Added missing "create comment" feature for columns that are altered in
batch migrations.
.. change::
:tags: changed
:tickets: 748
Alembic 1.5.0 now supports **Python 2.7 and Python 3.6 and above**, as well
as **SQLAlchemy 1.3.0 and above**. Support is removed for Python 3
versions prior to 3.6 and SQLAlchemy versions prior to the 1.3 series.
.. change::
:tags: bug, batch
:tickets: 773
Made an adjustment to the PostgreSQL dialect to allow it to work more
effectively in batch mode, where a datatype like Boolean or non-native Enum
that may have embedded rules to generate CHECK constraints will be more
correctly handled in that these constraints usually will not have been
generated on the PostgreSQL backend; previously it would inadvertently
assume they existed unconditionally in a special PG-only "drop constraint"
step.
.. change::
:tags: feature, versioning
:tickets: 757
The revision tree is now checked for cycles and loops between revision
files when the revision environment is loaded up. Scenarios such as a
revision pointing to itself, or a revision that can reach itself via a
loop, are handled and will raise the :class:`.CycleDetected` exception when
the environment is loaded (expressed from the Alembic commandline as a
failure message and nonzero return code). Previously, these situations were
silently ignored up front, and the behavior of revision traversal would
either be silently incorrect, or would produce errors such as
:class:`.RangeNotAncestorError`. Pull request courtesy Koichiro Den.
.. change::
:tags: usecase, commands
Add ``__main__.py`` file to alembic package to support invocation
with ``python -m alembic``.
.. change::
:tags: removed, commands
Removed deprecated ``--head_only`` option to the ``alembic current``
command
.. change::
:tags: removed, operations
Removed legacy parameter names from operations, these have been emitting
warnings since version 0.8. In the case that legacy version files have not
yet been updated, these can be modified directly in order to maintain
compatibility (a brief example follows the list below):
* :meth:`.Operations.drop_constraint` - "type" (use ``"type_"``) and "name"
(use "constraint_name")
* :meth:`.Operations.create_primary_key` - "cols" (use "columns") and
"name" (use "constraint_name")
* :meth:`.Operations.create_unique_constraint` - "name" (use
"constraint_name"), "source" (use "table_name") and "local_cols" (use
"columns")
* :meth:`.Operations.batch_create_unique_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_foreign_key` - "name" (use "constraint_name"),
"source" (use "source_table"), "referent" (use "referent_table")
* :meth:`.Operations.batch_create_foreign_key` - "name" (use
"constraint_name"), "referent" (use "referent_table")
* :meth:`.Operations.create_check_constraint` - "name" (use
"constraint_name"), "source" (use "table_name")
* :meth:`.Operations.batch_create_check_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_index` - "name" (use "index_name")
* :meth:`.Operations.drop_index` - "name" (use "index_name"), "tablename"
(use "table_name")
* :meth:`.Operations.batch_drop_index` - "name" (use "index_name")
* :meth:`.Operations.create_table` - "name" (use "table_name")
* :meth:`.Operations.drop_table` - "name" (use "table_name")
* :meth:`.Operations.alter_column` - "name" (use "new_column_name")
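For illustration, one of these renames applied to a hypothetical
migration::

    # removed spelling:
    #   op.drop_constraint(name="fk_addr_user", type="foreignkey", ...)
    op.drop_constraint(
        constraint_name="fk_addr_user",
        table_name="address",
        type_="foreignkey",
    )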
.. changelog::
:version: 1.4.3
:released: September 11, 2020
.. change::
:tags: bug, sqlite, batch
:tickets: 711
Added support to drop named CHECK constraints that are specified as part of
a column, rather than table wide. Previously, only constraints associated
with the table were considered.
.. change::
:tags: bug, ops, mysql
:tickets: 736
Fixed issue where the MySQL dialect would not correctly render the server
default of a column in an alter operation, if the operation were
programmatically generated from an autogenerate pass as it would not
accommodate for the full structure of the DefaultClause construct.
.. change::
:tags: bug, sqlite, batch
:tickets: 697
Fixed issue where the CAST applied to a JSON column when copying a SQLite
table during batch mode would cause the data to be lost, as SQLite's CAST
with JSON appears to convert the data to the value "0". The CAST is now
skipped in a dialect-specific manner, including for JSON columns on SQLite.
Pull request courtesy Sebastián Ramírez.
.. change::
:tags: bug, commands
:tickets: 694
The ``alembic current`` command no longer creates an ``alembic_version``
table in the database if one does not exist already, returning no version
as the current version. This allows checking for migrations in parallel
without introducing race conditions. Pull request courtesy Nikolay
Edigaryev.
.. change::
:tags: bug, batch
Fixed issue where columns in a foreign-key referenced table would be
replaced with null-type columns during a batch operation; while this did
not generally have any side effects, it could theoretically impact a batch
operation that also targets that table directly and also would interfere
with future changes to the ``.append_column()`` method to disallow implicit
replacement of columns.
.. change::
:tags: bug, mssql
:tickets: 716
Fixed issue where the ``mssql_drop_foreign_key=True`` flag on
``op.drop_column`` would lead to incorrect syntax error due to a typo in the
SQL emitted, same typo was present in the test as well so it was not
detected. Pull request courtesy Oleg Shigorin.
.. changelog::
:version: 1.4.2
:released: March 19, 2020
.. change::
:tags: usecase, autogenerate
:tickets: 669
Adjusted autogen comparison to accommodate for backends that support
computed column reflection, dependent on SQLAlchemy version 1.3.16 or
higher. This emits a warning if the SQL expression inside of a
:class:`.Computed` value changes between the metadata and the database, as
these expressions can't be changed without dropping and recreating the
column.
.. change::
:tags: bug, tests
:tickets: 668
Fixed an issue that prevented the test suite from running with the
recently released py.test 5.4.0.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 671
Fixed more false-positive failures produced by the new "compare type" logic
first added in :ticket:`605`, particularly impacting MySQL string types
regarding flags such as "charset" and "collation".
.. change::
:tags: bug, op directives, oracle
:tickets: 670
Fixed issue in Oracle backend where a table RENAME with a schema-qualified
name would include the schema in the "to" portion, which is rejected by
Oracle.
.. changelog::
:version: 1.4.1
:released: March 1, 2020
.. change::
:tags: bug, autogenerate
:tickets: 661
Fixed regression caused by the new "type comparison" logic introduced in
1.4 as part of :ticket:`605` where comparisons of MySQL "unsigned integer"
datatypes would produce false positives, as the regular expression logic
was not correctly parsing the "unsigned" token when MySQL's default display
width would be returned by the database. Pull request courtesy Paul
Becotte.
.. change::
:tags: bug, environment
:tickets: 663
Error message for "path doesn't exist" when loading up script environment
now displays the absolute path. Pull request courtesy Rowan Hart.
.. change::
:tags: bug, autogenerate
:tickets: 654
Fixed regression in 1.4.0 due to :ticket:`647` where unique constraint
comparison with mixed case constraint names while not using a naming
convention would produce false positives during autogenerate.
.. change::
:tags: bug, environment
The check for matched rowcount when the alembic_version table is updated or
deleted from is now conditional based on whether or not the dialect
supports the concept of "rowcount" for UPDATE or DELETE rows matched. Some
third party dialects do not support this concept. Pull request courtesy Ke
Zhu.
.. change::
:tags: bug, operations
:tickets: 655
Fixed long-standing bug where an inline column CHECK constraint would not
be rendered within an "ADD COLUMN" operation. The DDL compiler is now
consulted for inline constraints within the :meth:`.Operations.add_column`
method as is done for regular CREATE TABLE operations.
.. changelog::
:version: 1.4.0
:released: February 4, 2020
.. change::
:tags: change
The internal inspection routines no longer use SQLAlchemy's
``Inspector.from_engine()`` method, which is expected to be deprecated in
1.4. The ``inspect()`` function is now used.
.. change::
:tags: bug, autogenerate
:tickets: 647
Adjusted the unique constraint comparison logic in a similar manner as that
of :ticket:`421` did for indexes in order to take into account SQLAlchemy's
own truncation of long constraint names when a naming convention is in use.
Without this step, a name that is truncated by SQLAlchemy based on a unique
constraint naming convention or hardcoded name will not compare properly.
.. change::
:tags: feature, batch
:tickets: 640
Added new parameters :paramref:`.BatchOperations.add_column.insert_before`,
:paramref:`.BatchOperations.add_column.insert_after` which provide for
establishing the specific position in which a new column should be placed.
Also added :paramref:`.Operations.batch_alter_table.partial_reordering`
which allows the complete set of columns to be reordered when the new table
is created. Both operations apply only when batch mode is recreating
the whole table using ``recreate="always"``. Thanks to Marcin Szymanski
for assistance with the implementation.
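A minimal sketch, assuming a hypothetical ``account`` table::

    import sqlalchemy as sa
    from alembic import op

    # recreate the table and place the new column after "first_name"
    with op.batch_alter_table("account", recreate="always") as batch_op:
        batch_op.add_column(
            sa.Column("middle_name", sa.String(50)),
            insert_after="first_name",
        )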
.. change::
:tags: usecase, environment
:tickets: 648
Moved the use of the ``__file__`` attribute at the base of the Alembic
package into the one place that it is specifically needed, which is when
the config attempts to locate the template directory. This helps to allow
Alembic to be fully importable in environments that are using Python
memory-only import schemes. Pull request courtesy layday.
.. change::
:tags: bug, autogenerate
:tickets: 605
A major rework of the "type comparison" logic is in place which changes the
entire approach by which column datatypes are compared. Types are now
compared based on the DDL string generated by the metadata type vs. the
datatype reflected from the database. This means we compare types based on
what would actually render and additionally if elements of the types change
like string length, those changes are detected as well. False positives
like those generated between SQLAlchemy Boolean and MySQL TINYINT should
also be resolved. Thanks very much to Paul Becotte for lots of hard work
and patience on this one.
.. seealso::
:ref:`autogenerate_detects` - updated comments on type comparison
.. changelog::
:version: 1.3.3
:released: January 22, 2020
.. change::
:tags: bug, postgresql
:tickets: 637
Fixed issue where COMMENT directives for PostgreSQL failed to correctly
include an explicit schema name, as well as correct quoting rules for
schema, table, and column names. Pull request courtesy Matthew Sills.
.. change::
:tags: usecase, operations
:tickets: 624
Added support for rendering of "computed" elements on :class:`.Column`
objects, supported in SQLAlchemy via the new :class:`.Computed` element
introduced in version 1.3.11. Pull request courtesy Federico Caselli.
Note that there is currently no support for ALTER COLUMN to add, remove, or
modify the "GENERATED ALWAYS AS" element from a column; at least for
PostgreSQL, it does not seem to be supported by the database. Additionally,
SQLAlchemy does not currently reliably reflect the "GENERATED ALWAYS AS"
phrase from an existing column, so there is also no autogenerate support
for addition or removal of the :class:`.Computed` element to or from an
existing column, there is only support for adding new columns that include
the :class:`.Computed` element. In the case that the :class:`.Computed`
element is removed from the :class:`.Column` object in the table metadata,
PostgreSQL and Oracle currently reflect the "GENERATED ALWAYS AS"
expression as the "server default" which will produce an op that tries to
drop the element as a default.
.. changelog::
:version: 1.3.2
:released: December 16, 2019
.. change::
:tags: bug, api, autogenerate
:tickets: 635
Fixed regression introduced by :ticket:`579` where server default rendering
functions began to require a dialect implementation, however the
:func:`.render_python_code` convenience function did not include one, thus
causing the function to fail when used in a server default context. The
function now accepts a migration context argument and also creates one
against the default dialect if one is not provided.
.. changelog::
:version: 1.3.1
:released: November 13, 2019
.. change::
:tags: bug, mssql
:tickets: 621
Fixed bug in MSSQL dialect where the drop constraint execution steps used
to remove server default or implicit foreign key constraint failed to take
into account the schema name of the target table.
.. changelog::
:version: 1.3.0
:released: October 31, 2019
.. change::
:tags: feature, command
:tickets: 608
Added support for the ALEMBIC_CONFIG environment variable, which
refers to the location of the alembic configuration script,
in lieu of using the -c command line option.
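For example, assuming a configuration file at a hypothetical path::

    ALEMBIC_CONFIG=/path/to/alembic.ini alembic upgrade head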
.. change::
:tags: bug, autogenerate
:tickets: 131
Fixed bug in new Variant autogenerate where the order of the arguments to
Variant were mistakenly reversed.
.. change::
:tags: change, compatibility
Some internal modifications have been made to how the names of indexes and
unique constraints work to make use of new functions added in SQLAlchemy
1.4, so that SQLAlchemy has more flexibility over how naming conventions
may be applied to these objects.
.. changelog::
:version: 1.2.1
:released: September 24, 2019
.. change::
:tags: bug, command
:tickets: 601
Reverted the name change of the "revisions" argument to
:func:`.command.stamp` to "revision" as apparently applications are
calling upon this argument as a keyword name. Pull request courtesy
Thomas Bechtold. Special translations are also added to the command
line interface so that it is still known as "revisions" in the CLI.
.. change::
:tags: bug, tests
:tickets: 592
Removed the "test requirements" from "setup.py test", as this command now
only emits a removal error in any case and these requirements are unused.
.. changelog::
:version: 1.2.0
:released: September 20, 2019
.. change::
:tags: feature, command
:tickets: 473
Added new ``--purge`` flag to the ``alembic stamp`` command, which will
unconditionally erase the version table before stamping anything. This is
useful for development where non-existent version identifiers might be left
within the table. Additionally, ``alembic.stamp`` now supports a list of
revision identifiers, which are intended to allow setting up multiple heads
at once. Overall handling of version identifiers within the
``alembic.stamp`` command has been improved with many new tests and
use cases added.
.. change::
:tags: bug, autogenerate
:tickets: 550
Improved the Python rendering of a series of migration operations such that
a single "pass" is rendered for a :class:`.UpgradeOps` or
:class:`.DowngradeOps` based on if no lines of Python code actually
rendered under the operation, rather than whether or not sub-directives
exist. Removed extra "pass" lines that would generate from the
:class:`.ModifyTableOps` directive so that these aren't duplicated under
operation rewriting scenarios.
.. change::
:tags: feature, runtime
:tickets: 123
Added new feature :meth:`.MigrationContext.autocommit_block`, a special
directive which will provide for a non-transactional block inside of a
migration script. The feature requires that the database driver
(e.g. DBAPI) supports the AUTOCOMMIT isolation mode. The directive
also necessarily needs to COMMIT the existing transaction in progress
in order to enter autocommit mode.
.. seealso::
:meth:`.MigrationContext.autocommit_block`
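A minimal sketch of the directive inside a migration script, using
PostgreSQL's ``CREATE INDEX CONCURRENTLY`` as the classic use case
(table and index names here are hypothetical)::

    from alembic import op

    def upgrade():
        # statements inside the block run outside of any transaction
        with op.get_context().autocommit_block():
            op.execute(
                "CREATE INDEX CONCURRENTLY ix_account_email "
                "ON account (email)"
            )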
.. change::
:tags: change, py3k
Python 3.4 support is dropped, as the upstream tooling (pip, mysqlclient)
etc. are already dropping support for Python 3.4, which itself is no longer
maintained.
.. change::
:tags: usecase, autogenerate
:tickets: 518
Added autogenerate support for :class:`.Column` objects that have
dialect-specific ``**kwargs``, support first added in SQLAlchemy 1.3.
This includes SQLite "on conflict" as well as options used by some
third party dialects.
.. change::
:tags: usecase, autogenerate
:tickets: 131
Added rendering for SQLAlchemy ``Variant`` datatypes, which render as the
base type plus one or more ``.with_variant()`` method calls.
.. change::
:tags: usecase, commands
:tickets: 534
Made the command interface revision lookup behavior more strict in that an
Alembic revision number is only resolved based on a partial match rules if
it has at least four characters, to prevent simple typographical issues
from inadvertently running migrations.
.. change::
:tags: feature, commands
:tickets: 307
Added "post write hooks" to revision generation. These allow custom logic
to run after a revision Python script is generated, typically for the
purpose of running code formatters such as "Black" or "autopep8", but may
be used for any arbitrary post-render hook as well, including custom Python
functions or scripts. The hooks are enabled by providing a
``[post_write_hooks]`` section in the alembic.ini file. A single hook
is provided which runs an arbitrary Python executable on the newly
generated revision script, which can be configured to run code formatters
such as Black; full examples are included in the documentation.
.. seealso::
:ref:`post_write_hooks`
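A minimal sketch of such a section in ``alembic.ini``, assuming the
"black" formatter is installed::

    [post_write_hooks]
    hooks = black
    black.type = console_scripts
    black.entrypoint = black
    black.options = -l 79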
.. change::
:tags: feature, environment
:tickets: 463
Added new flag ``--package`` to ``alembic init``. For environments where
the Alembic migration files and such are within the package tree and
importable as modules, this flag can be specified which will add the
additional ``__init__.py`` files in the version location and the
environment location.
.. change::
:tags: bug, autogenerate
:tickets: 549
Fixed bug where rendering of comment text for table-level comments within
:meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` was not properly quote-escaped
within rendered Python code for autogenerate.
.. change::
:tags: bug, autogenerate
:tickets: 505
Modified the logic of the :class:`.Rewriter` object such that it keeps a
memoization of which directives it has processed, so that it can ensure it
processes a particular directive only once, and additionally fixed
:class:`.Rewriter` so that it functions correctly for multiple-pass
autogenerate schemes, such as the one illustrated in the "multidb"
template. By tracking which directives have been processed, a
multiple-pass scheme which calls upon the :class:`.Rewriter` multiple times
for the same structure as elements are added can work without running
duplicate operations on the same elements more than once.
.. changelog::
:version: 1.1.0
:released: August 26, 2019
.. change::
:tags: change
Alembic 1.1 bumps the minimum version of SQLAlchemy to 1.1. As was the
case before, Python requirements remain at Python 2.7, or Python 3.4 and
above in the 3.x series.
.. change::
:tags: change, internals
The test suite for Alembic now makes use of SQLAlchemy's testing framework
directly. Previously, Alembic had its own version of this framework that
was mostly copied from that of SQLAlchemy to enable testing with older
SQLAlchemy versions. The majority of this code is now removed so that both
projects can leverage improvements from a common testing framework.
.. change::
:tags: bug, commands
:tickets: 562
Fixed bug where the double-percent logic applied to some dialects such as
psycopg2 would be rendered in ``--sql`` mode, by allowing dialect options
to be passed through to the dialect used to generate SQL and then providing
``paramstyle="named"`` so that percent signs need not be doubled. For
users having this issue, existing env.py scripts need to add
``dialect_opts={"paramstyle": "named"}`` to their offline
context.configure(). See the ``alembic/templates/generic/env.py`` template
for an example.
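A minimal sketch of the relevant portion of an offline ``env.py``,
assuming ``url`` and ``target_metadata`` are already defined::

    context.configure(
        url=url,
        target_metadata=target_metadata,
        literal_binds=True,
        # pass percent signs through without doubling
        dialect_opts={"paramstyle": "named"},
    )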
.. change::
:tags: bug, py3k
Fixed use of the deprecated "imp" module, which is used to detect pep3147
availability as well as to locate .pyc files, which started emitting
deprecation warnings during the test suite. The warnings were not being
emitted earlier during the test suite, the change is possibly due to
changes in py.test itself, but this is not clear. The check for pep3147 is
now set to True for any Python version 3.5 or greater, and importlib is used
when available. Note that some dependencies such as distutils may still be
emitting this warning. Tests are adjusted to accommodate for dependencies
that emit the warning as well.
.. change::
:tags: bug, mysql
:tickets: 594
Fixed issue where emitting a change of column name for MySQL did not
preserve the column comment, even if it were specified as existing_comment.
.. change::
:tags: bug, setup
:tickets: 592
Removed the "python setup.py test" feature in favor of a straight run of
"tox". Per Pypa / pytest developers, "setup.py" commands are in general
headed towards deprecation in favor of tox. The tox.ini script has been
updated such that running "tox" with no arguments will perform a single run
of the test suite against the default installed Python interpreter.
.. seealso::
https://github.com/pypa/setuptools/issues/1684
https://github.com/pytest-dev/pytest/issues/5534
.. change::
:tags: usecase, commands
:tickets: 571
The "alembic init" command will now proceed if the target directory exists
as long as it's still empty. Previously, it would not proceed if the
directory existed. The new behavior is modeled from what git does, to
accommodate for container or other deployments where an Alembic target
directory may need to be already mounted instead of being created with
alembic init. Pull request courtesy Aviskar KC.
.. changelog::
:version: 1.0.11
:released: June 25, 2019
.. change::
:tags: bug, sqlite, autogenerate, batch
:tickets: 579
SQLite server default reflection will ensure parentheses surround a
column default expression that is detected as being a non-constant
expression, such as a ``datetime()`` default, to accommodate for the
requirement that SQL expressions have to be parenthesized when being sent
as DDL. Parentheses are not added to constant expressions, to allow for
maximum cross-compatibility with other dialects and existing test suites
(such as Alembic's); this necessarily entails scanning the expression to
rule out constant numeric and string values. The logic is added to the
two "reflection->DDL round trip" paths which are currently autogenerate and
batch migration. Within autogenerate, the logic is on the rendering side,
whereas in batch the logic is installed as a column reflection hook.
.. change::
:tags: bug, sqlite, autogenerate
:tickets: 579
Improved SQLite server default comparison to accommodate for a ``text()``
construct that added parentheses directly vs. a construct that relied
upon the SQLAlchemy SQLite dialect to render the parenthesis, as well
as improved support for various forms of constant expressions such as
values that are quoted vs. non-quoted.
.. change::
:tags: bug, autogenerate
Fixed bug where the "literal_binds" flag was not being set when
autogenerate would create a server default value, meaning server default
comparisons would fail for functions that contained literal values.
.. change::
:tags: bug, mysql
:tickets: 554
Added support for MySQL "DROP CHECK", which is added as of MySQL 8.0.16,
separate from MariaDB's "DROP CONSTRAINT" for CHECK constraints. The MySQL
Alembic implementation now checks for "MariaDB" in server_version_info to
decide which one to use.
.. change::
:tags: bug, mysql, operations
:tickets: 564
Fixed issue where MySQL databases need to use CHANGE COLUMN when altering a
server default of CURRENT_TIMESTAMP, NOW() and probably other functions
that are only usable with DATETIME/TIMESTAMP columns. While MariaDB
supports both CHANGE and ALTER COLUMN in this case, MySQL databases only
support CHANGE. So the new logic is that if the server default change is
against a DateTime-oriented column, the CHANGE format is used
unconditionally, as in the vast majority of cases the server default is to
be CURRENT_TIMESTAMP which may also be potentially bundled with an "ON
UPDATE CURRENT_TIMESTAMP" directive, which SQLAlchemy does not currently
support as a distinct field. The fix additionally improves the server
default comparison logic when the "ON UPDATE" clause is present and
there are parentheses to be adjusted for, as is the case on some MariaDB
versions.
.. change::
:tags: bug, environment
Warnings emitted by Alembic now include a default stack level of 2, and in
some cases it's set to 3, in order to help warnings indicate more closely
where they are originating from. Pull request courtesy Ash Berlin-Taylor.
.. change::
:tags: bug, py3k
:tickets: 563
Replaced the Python compatibility routines for ``getargspec()`` with a fully
vendored version based on ``getfullargspec()`` from Python 3.3.
Originally, Python was emitting deprecation warnings for this function in
Python 3.8 alphas. While this change was reverted, it was observed that
Python 3 implementations for ``getfullargspec()`` are an order of magnitude
slower as of the 3.4 series where it was rewritten against ``Signature``.
While Python plans to improve upon this situation, SQLAlchemy projects for
now are using a simple replacement to avoid any future issues.
.. changelog::
:version: 1.0.10
:released: April 28, 2019
.. change::
:tags: bug, commands
:tickets: 552
Fixed bug introduced in release 0.9.0 where the helptext for commands
inadvertently got expanded to include function docstrings from the
command.py module. The logic has been adjusted to only refer to the first
line(s) preceding the first line break within each docstring, as was the
original intent.
.. change::
:tags: bug, operations, mysql
:tickets: 551
Added an assertion in :meth:`.RevisionMap.get_revisions` and other methods
which ensures revision numbers are passed as strings or collections of
strings. Drivers, particularly on MySQL, may inadvertently pass bytes
here, which leads to failures later on.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 553
Fixed bug when using the
:paramref:`.EnvironmentContext.configure.compare_server_default` flag set
to ``True`` where a server default that is introduced in the table metadata
on an ``Integer`` column, where there is no existing server default in the
database, would raise a ``TypeError``.
.. changelog::
:version: 1.0.9
:released: April 15, 2019
.. change::
:tags: bug, operations
:tickets: 548
Simplified the internal scheme used to generate the ``alembic.op`` namespace
to no longer attempt to generate full method signatures, using generic
``*args, **kw`` instead, as this was not working in most cases anyway, while
in rare circumstances it would in fact sporadically have access to the real
argument names and then fail when generating the function due to missing
symbols in the argument signature.
.. changelog::
:version: 1.0.8
:released: March 4, 2019
.. change::
:tags: bug, operations
:tickets: 528
Removed use of deprecated ``force`` parameter for SQLAlchemy quoting
functions as this parameter will be removed in a future release.
Pull request courtesy Parth Shandilya (ParthS007).
.. change::
:tags: bug, autogenerate, postgresql, py3k
:tickets: 541
Fixed issue where server default comparison on the PostgreSQL dialect would
fail for a blank string on Python 3.7 only, due to a change in regular
expression behavior in Python 3.7.
.. changelog::
:version: 1.0.7
:released: January 25, 2019
.. change::
:tags: bug, autogenerate
:tickets: 529
Fixed issue in new comment support where autogenerated Python code
for comments wasn't using ``repr()`` thus causing issues with
quoting. Pull request courtesy Damien Garaud.
.. changelog::
:version: 1.0.6
:released: January 13, 2019
.. change::
:tags: feature, operations
:tickets: 422
Added Table and Column level comments for supported backends.
New methods :meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` are added. New arguments
:paramref:`.Operations.alter_column.comment` and
:paramref:`.Operations.alter_column.existing_comment` are added to
:meth:`.Operations.alter_column`. Autogenerate support is also added
to ensure comment add/drops from tables and columns are generated as well
as that :meth:`.Operations.create_table`, :meth:`.Operations.add_column`
both include the comment field from the source :class:`.Table`
or :class:`.Column` object.
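A minimal sketch of the new directives in a migration script (table and
column names here are hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    # set a table-level comment
    op.create_table_comment("account", "user accounts")

    # set a column-level comment via alter_column
    op.alter_column(
        "account",
        "name",
        existing_type=sa.String(50),
        comment="display name",
    )

    # remove the table-level comment again
    op.drop_table_comment("account")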
.. changelog::
:version: 1.0.5
:released: November 27, 2018
.. change::
:tags: bug, py3k
:tickets: 507
Resolved remaining Python 3 deprecation warnings, covering
the use of inspect.formatargspec() with a vendored version
copied from the Python standard library, importing
collections.abc above Python 3.3 when testing against abstract
base classes, fixed one occurrence of log.warn(), as well as a few
invalid escape sequences.
.. changelog::
:version: 1.0.4
:released: November 27, 2018
.. change::
:tags: change
Code hosting has been moved to GitHub, at
https://github.com/sqlalchemy/alembic. Additionally, the
main Alembic website documentation URL is now
https://alembic.sqlalchemy.org.
.. changelog::
:version: 1.0.3
:released: November 14, 2018
.. change::
:tags: bug, mssql
:tickets: 516
Fixed regression caused by :ticket:`513`, where the logic to consume
``mssql_include`` was not correctly interpreting the case where the flag
was not present, breaking the ``op.create_index`` directive for SQL Server
as a whole.
.. changelog::
:version: 1.0.2
:released: October 31, 2018
.. change::
:tags: bug, autogenerate
:tickets: 515
The ``system=True`` flag on :class:`.Column`, used primarily in conjunction
with the Postgresql "xmin" column, now renders within the autogenerate
render process, allowing the column to be excluded from DDL. Additionally,
adding a system=True column to a model will produce no autogenerate diff as
this column is implicitly present in the database.
.. change::
:tags: bug, mssql
:tickets: 513
Fixed issue where usage of the SQL Server ``mssql_include`` option within a
:meth:`.Operations.create_index` would raise a KeyError, as the additional
column(s) need to be added to the table object used by the construct
internally.
.. changelog::
:version: 1.0.1
:released: October 17, 2018
.. change::
:tags: bug, commands
:tickets: 497
Fixed an issue where revision descriptions were essentially
being formatted twice. If a revision description contained
characters like %, writing output to stdout would fail, because
the call to config.print_stdout attempted to format any
additional args passed to the function.
This fix now only applies string formatting if any args are provided
along with the output text.
.. change::
:tags: bug, autogenerate
:tickets: 512
Fixed issue where removed method ``union_update()`` was used when a
customized :class:`.MigrationScript` instance included entries in the
``.imports`` data member, raising an AttributeError.
.. changelog::
:version: 1.0.0
:released: July 13, 2018
.. change::
:tags: feature, general
:tickets: 491
For Alembic 1.0, Python 2.6 / 3.3 support is being dropped, allowing a
fixed setup.py to be built as well as universal wheels. Pull request
courtesy Hugo.
.. change::
:tags: feature, general
With the 1.0 release, Alembic's minimum SQLAlchemy support version
moves to 0.9.0, previously 0.7.9.
.. change::
:tags: bug, batch
:tickets: 502
Fixed issue in batch where dropping a primary key column, then adding it
back under the same name but without the primary_key flag, would not remove
it from the existing PrimaryKeyConstraint. If a new PrimaryKeyConstraint
is added, it is used as-is, as was the case before.
.. changelog::
:version: 0.9.10
:released: June 29, 2018
.. change::
:tags: bug, autogenerate
The "op.drop_constraint()" directive will now render using ``repr()`` for
the schema name, in the same way that "schema" renders for all the other op
directives. Pull request courtesy Denis Kataev.
.. change::
:tags: bug, autogenerate
:tickets: 494
Added basic capabilities for external dialects to support rendering of
"nested" types, like arrays, in a manner similar to that of the Postgresql
dialect.
.. change::
:tags: bug, autogenerate
Fixed issue where "autoincrement=True" would not render for a column that
specified it, since as of SQLAlchemy 1.1 this is no longer the default
value for "autoincrement". Note the behavior only takes effect against the
SQLAlchemy 1.1.0 and higher; for pre-1.1 SQLAlchemy, "autoincrement=True"
does not render as was the case before. Pull request courtesy Elad Almos.
.. changelog::
:version: 0.9.9
:released: March 22, 2018
.. change::
:tags: feature, commands
:tickets: 481
Added new flag ``--indicate-current`` to the ``alembic history`` command.
When listing versions, it will include the token "(current)" to indicate
the given version is a current head in the target database. Pull request
courtesy Kazutaka Mise.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 455
The fix for :ticket:`455` in version 0.9.6 involving MySQL server default
comparison was entirely non-functional, as the test itself was also broken
and didn't reveal that it wasn't working. The regular expression to compare
server default values like CURRENT_TIMESTAMP to current_timestamp() is
repaired.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 483
Fixed bug where MySQL server default comparisons were basically not working
at all due to incorrect regexp added in :ticket:`455`. Also accommodates
for MariaDB 10.2 quoting differences in reporting integer-based server
defaults.
.. change::
:tags: bug, operations, mysql
:tickets: 487
Fixed bug in ``op.drop_constraint()`` for MySQL where
quoting rules would not be applied to the constraint name.
.. changelog::
:version: 0.9.8
:released: February 16, 2018
.. change::
:tags: bug, runtime
:tickets: 482
Fixed bug where the :meth:`.Script.as_revision_number` method
did not accommodate for the 'heads' identifier, which in turn
caused the :meth:`.EnvironmentContext.get_head_revisions`
and :meth:`.EnvironmentContext.get_revision_argument` methods
to be not usable when multiple heads were present.
The :meth:`.EnvironmentContext.get_head_revisions` method returns
a tuple in all cases as documented.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 478
Fixed bug where autogenerate of :class:`.ExcludeConstraint`
would render a raw quoted name for a Column that has case-sensitive
characters, which when invoked as an inline member of the Table
would produce a stack trace that the quoted name is not found.
An incoming Column object is now rendered as ``sa.column('name')``.
.. change::
:tags: bug, autogenerate
:tickets: 468
Fixed bug where the indexes would not be included in a
migration that was dropping the owning table. The fix
now will also emit DROP INDEX for the indexes ahead of time,
but more importantly will include CREATE INDEX in the
downgrade migration.
.. change::
:tags: bug, postgresql
:tickets: 480
Fixed the autogenerate of the module prefix
when rendering the text_type parameter of
postgresql.HSTORE, in much the same way that
we do for ARRAY's type and JSON's text_type.
.. change::
:tags: bug, mysql
:tickets: 479
Added support for DROP CONSTRAINT to the MySQL Alembic
dialect to support MariaDB 10.2 which now has real
CHECK constraints. Note this change does **not**
add autogenerate support, only support for op.drop_constraint()
to work.
.. changelog::
:version: 0.9.7
:released: January 16, 2018
.. change::
:tags: bug, autogenerate
:tickets: 472
Fixed regression caused by :ticket:`421` which would
cause case-sensitive quoting rules to interfere with the
comparison logic for index names, thus causing indexes to show
as added for indexes that have case-sensitive names. Works with
SQLAlchemy 0.9 and later series.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 461
Fixed bug where autogenerate would produce a DROP statement for the index
implicitly created by a Postgresql EXCLUDE constraint, rather than skipping
it as is the case for indexes implicitly generated by unique constraints.
Makes use of SQLAlchemy 1.0.x's improved "duplicates index" metadata and
requires at least SQLAlchemy version 1.0.x to function correctly.
.. changelog::
:version: 0.9.6
:released: October 13, 2017
.. change::
:tags: bug, commands
:tickets: 458
Fixed a few Python3.6 deprecation warnings by replacing ``StopIteration``
with ``return``, as well as using ``getfullargspec()`` instead of
``getargspec()`` under Python 3.
.. change::
:tags: bug, commands
:tickets: 441
As an addition to the fix for :ticket:`441` in 0.9.5, the ``+`` sign in
migration names is now also filtered out, as it also breaks due to the
relative migrations feature.
.. change::
:tags: bug, autogenerate
:tickets: 442
Fixed bug expanding upon the fix for
:ticket:`85` which adds the correct module import to the
"inner" type for an ``ARRAY`` type, the fix now accommodates for the
generic ``sqlalchemy.types.ARRAY`` type added in SQLAlchemy 1.1,
rendering the inner type correctly regardless of whether or not the
Postgresql dialect is present.
.. change::
:tags: bug, mysql
:tickets: 455
Fixed bug where server default comparison of CURRENT_TIMESTAMP would fail
on MariaDB 10.2 due to a change in how the function is
represented by the database during reflection.
.. change::
:tags: bug, autogenerate
Fixed bug where comparison of ``Numeric`` types would produce
a difference if the Python-side ``Numeric`` inadvertently specified
a non-None "scale" with a "precision" of None, even though this ``Numeric``
type will pass over the "scale" argument when rendering. Pull request
courtesy Ivan Mmelnychuk.
.. change::
:tags: feature, commands
:tickets: 447
The ``alembic history`` command will now make use of the revision
environment ``env.py`` unconditionally if the ``revision_environment``
configuration flag is set to True. Previously, the environment would
only be invoked if the history specification were against a database-stored
revision token.
.. change::
:tags: bug, batch
:tickets: 457
The name of the temporary table in batch mode is now generated
off of the original table name itself, to avoid conflicts for the
unusual case of multiple batch operations running against the same
database schema at the same time.
.. change::
:tags: bug, autogenerate
:tickets: 456
A :class:`.ForeignKeyConstraint` can now render correctly if the
``link_to_name`` flag is set, as it will not attempt to resolve the name
from a "key" in this case. Additionally, the constraint will render
as-is even if the remote column name isn't present on the referenced
remote table.
.. change::
:tags: bug, runtime, py3k
:tickets: 449
Reworked "sourceless" system to be fully capable of handling any
combination of: Python2/3x, pep3149 or not, PYTHONOPTIMIZE or not,
for locating and loading both env.py files as well as versioning files.
This includes: locating files inside of ``__pycache__`` as well as listing
out version files that might be only in ``versions/__pycache__``, deduplicating
version files that may be in ``versions/__pycache__`` and ``versions/``
at the same time, correctly looking for .pyc or .pyo files based on
whether pep488 is present. The latest Python 3.x deprecation warnings
involving importlib are also corrected.
.. changelog::
:version: 0.9.5
:released: August 9, 2017
.. change::
:tags: bug, commands
:tickets: 441
A :class:`.CommandError` is raised if the "--rev-id" passed to the
:func:`.revision` command contains dashes or at-signs, as this interferes
with the command notation used to locate revisions.
.. change::
:tags: bug, postgresql
:tickets: 424
Added support for the dialect-specific keyword arguments
to :meth:`.Operations.drop_index`. This includes support for
``postgresql_concurrently`` and others.
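For illustration, a hypothetical index dropped concurrently on
PostgreSQL (note that CONCURRENTLY cannot run inside a transaction
block)::

    op.drop_index(
        "ix_account_email",
        table_name="account",
        postgresql_concurrently=True,
    )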
.. change::
:tags: bug, commands
Fixed bug in timezone feature introduced in
:ticket:`425` when the creation
date in a revision file is calculated, to
accommodate for timezone names that contain
mixed-case characters in their name as opposed
to all uppercase. Pull request courtesy Nils
Philippsen.
.. changelog::
:version: 0.9.4
:released: July 31, 2017
.. change::
:tags: bug, runtime
Added an additional attribute to the new
:paramref:`.EnvironmentContext.configure.on_version_apply` API,
:attr:`.MigrationInfo.up_revision_ids`, to accommodate for the uncommon
case of the ``alembic stamp`` command being used to move from multiple
branches down to a common branchpoint; there will be multiple
"up" revisions in this one case.
.. changelog::
:version: 0.9.3
:released: July 6, 2017
.. change::
:tags: feature, runtime
Added a new callback hook
:paramref:`.EnvironmentContext.configure.on_version_apply`,
which allows user-defined code to be invoked each time an individual
upgrade, downgrade, or stamp operation proceeds against a database.
Pull request courtesy John Passaro.
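A minimal sketch of the hook in an ``env.py``; the hook receives keyword
arguments describing the operation (the body here is hypothetical)::

    def on_version_apply(*, ctx, step, heads, run_args, **kw):
        # e.g. log each migration step as it is applied
        print("applying step: %s" % (step,))

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        on_version_apply=on_version_apply,
    )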
.. change:: 433
:tags: bug, autogenerate
:tickets: 433
Fixed bug where autogen comparison of a :class:`.Variant` datatype
would not compare to the dialect level type for the "default"
implementation of the :class:`.Variant`, returning the type as changed
between database and table metadata.
.. change:: 431
:tags: bug, tests
:tickets: 431
Fixed unit tests to run correctly under the SQLAlchemy 1.0.x series
prior to version 1.0.10 where a particular bug involving Postgresql
exclude constraints was fixed.
.. changelog::
:version: 0.9.2
:released: May 18, 2017
.. change:: 429
:tags: bug, mssql
:tickets: 429
Repaired :meth:`.Operations.rename_table` for SQL Server when the
target table is in a remote schema; the schema name is now omitted from
the "new name" argument.
.. change:: 425
:tags: feature, commands
:tickets: 425
Added a new configuration option ``timezone``, a string timezone name
that will be applied to the create date timestamp rendered
inside the revision file as made available to the ``file_template`` used
to generate the revision filename. Note this change adds the
``python-dateutil`` package as a dependency.
.. change:: 421
:tags: bug, autogenerate
:tickets: 421
The autogenerate compare scheme now takes into account the name truncation
rules applied by SQLAlchemy's DDL compiler to the names of the
:class:`.Index` object, when these names are dynamically truncated
due to a too-long identifier name. As the identifier truncation is
deterministic, applying the same rule to the metadata name allows
correct comparison to the database-derived name.
.. change:: 419
:tags: bug, environment
:tickets: 419
A warning is emitted when an object that's not a
:class:`~sqlalchemy.engine.Connection` is passed to
:meth:`.EnvironmentContext.configure`. For the case of a
:class:`~sqlalchemy.engine.Engine` passed, the check for "in transaction"
introduced in version 0.9.0 has been relaxed to work in the case of an
attribute error, as some users appear to be passing an
:class:`~sqlalchemy.engine.Engine` and not a
:class:`~sqlalchemy.engine.Connection`.
.. changelog::
:version: 0.9.1
:released: March 1, 2017
.. change:: 417
:tags: bug, commands
:tickets: 417, 369
An adjustment to the bug fix for :ticket:`369` to accommodate for
env.py scripts that use an enclosing transaction distinct from the
one that the context provides, so that the check for "didn't commit
the transaction" doesn't trigger in this scenario.
.. changelog::
:version: 0.9.0
:released: February 28, 2017
.. change:: 38
:tags: feature, autogenerate
:tickets: 38
The :paramref:`.EnvironmentContext.configure.target_metadata` parameter
may now be optionally specified as a sequence of :class:`.MetaData`
objects instead of a single :class:`.MetaData` object. The
autogenerate process will process the sequence of :class:`.MetaData`
objects in order.
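For illustration, assuming two hypothetical :class:`.MetaData`
collections defined elsewhere in the application::

    context.configure(
        connection=connection,
        target_metadata=[metadata_one, metadata_two],
    )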
.. change:: 369
:tags: bug, commands
:tickets: 369
A :class:`.CommandError` is now raised when a migration file opens
a database transaction and does not close/commit/rollback, when
the backend database or environment options also specify transactional_ddl
is False. When transactional_ddl is not in use, Alembic doesn't
close any transaction so a transaction opened by a migration file
will cause the following migrations to fail to apply.
.. change:: 413
:tags: bug, autogenerate, mysql
:tickets: 413
The ``autoincrement=True`` flag is now rendered within the
:meth:`.Operations.alter_column` operation if the source column indicates
that this flag should be set to True. The behavior is sensitive to
the SQLAlchemy version in place, as the "auto" default option is new
in SQLAlchemy 1.1. When the source column indicates autoincrement
as True or "auto", the flag will render as True if the original column
contextually indicates that it should have "autoincrement" keywords,
and when the source column explicitly sets it to False, this is also
rendered. The behavior is intended to preserve the AUTO_INCREMENT flag
on MySQL as the column is fully recreated on this backend. Note that this
flag does **not** support alteration of a column's "autoincrement" status,
as this is not portable across backends.
.. change:: 411
:tags: bug, postgresql
:tickets: 411
Fixed bug where Postgresql JSON/JSONB types rendered on SQLAlchemy
1.1 would render the "astext_type" argument which defaults to
the ``Text()`` type without the module prefix, similarly to the
issue with ARRAY fixed in :ticket:`85`.
.. change:: 85
:tags: bug, postgresql
:tickets: 85
Fixed bug where Postgresql ARRAY type would not render the import prefix
for the inner type; additionally, user-defined renderers take place
for the inner type as well as the outer type. Pull request courtesy
Paul Brackin.
.. change:: process_revision_directives_command
:tags: feature, autogenerate
Added a keyword argument ``process_revision_directives`` to the
:func:`.command.revision` API call. This function acts in the
same role as the environment-level
:paramref:`.EnvironmentContext.configure.process_revision_directives`,
and allows API use of the
command to drop in an ad-hoc directive process function. This
function can be used, among other things, to put a complete
:class:`.MigrationScript` structure in place.
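A minimal sketch of the API call, with a hypothetical no-op hook::

    from alembic import command
    from alembic.config import Config

    def process_revision_directives(context, revision, directives):
        # inspect or rewrite the generated MigrationScript structure here
        pass

    cfg = Config("alembic.ini")
    command.revision(
        cfg,
        message="my revision",
        autogenerate=True,
        process_revision_directives=process_revision_directives,
    )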
.. change:: 412
:tags: feature, postgresql
:tickets: 412
Added support for Postgresql EXCLUDE constraints, including the
operation directive :meth:`.Operations.create_exclude_constraint`
as well as autogenerate render support for the ``ExcludeConstraint``
object as present in a ``Table``. Autogenerate detection for an EXCLUDE
constraint added or removed to/from an existing table is **not**
implemented as the SQLAlchemy Postgresql dialect does not yet support
reflection of EXCLUDE constraints.
Additionally, unknown constraint types now warn when
encountered within an autogenerate action rather than raise.
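A minimal sketch of the directive (table, column, and constraint names
here are hypothetical)::

    op.create_exclude_constraint(
        "user_excl",
        "user",
        ("period", "&&"),          # (column, operator) pairs
        where="period != 'empty'",
        using="gist",
    )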
.. change:: fk_schema_compare
:tags: bug, operations
Fixed bug in :func:`.ops.create_foreign_key` where the internal table
representation would not be created properly if the foreign key referred
to a table in a different schema of the same name. Pull request
courtesy Konstantin Lebedev.
.. changelog::
:version: 0.8.10
:released: January 17, 2017
.. change:: 406
:tags: bug, versioning
:tickets: 406
The alembic_version table, when initially created, now establishes a
primary key constraint on the "version_num" column, to suit database
engines that don't support tables without primary keys. This behavior
can be controlled using the parameter
:paramref:`.EnvironmentContext.configure.version_table_pk`. Note that
this change only applies to the initial creation of the alembic_version
table; it does not impact any existing alembic_version table already
present.
.. change:: 402
:tags: bug, batch
:tickets: 402
Fixed bug where doing ``batch_op.drop_constraint()`` against the
primary key constraint would fail to remove the "primary_key" flag
from the column, resulting in the constraint being recreated.
.. change:: update_uq_dedupe
:tags: bug, autogenerate, oracle
Adjusted the logic originally added for :ticket:`276` that detects MySQL
unique constraints which are actually unique indexes to be generalized
for any dialect that has this behavior, for SQLAlchemy version 1.0 and
greater. This is to allow for upcoming SQLAlchemy support for unique
constraint reflection for Oracle, which also has no dedicated concept of
"unique constraint" and instead establishes a unique index.
.. change:: 356
:tags: bug, versioning
:tickets: 356
Added a file ignore for Python files of the form ``.#<name>.py``,
which are generated by the Emacs editor. Pull request courtesy
Markus Mattes.
.. changelog::
:version: 0.8.9
:released: November 28, 2016
.. change:: 393
:tags: bug, autogenerate
:tickets: 393
Adjustment to the "please adjust!" comment in the script.py.mako
template so that the generated comment starts with a single pound
sign, appeasing flake8.
.. change::
:tags: bug, batch
:tickets: 391
Batch mode will not use CAST() to copy data if ``type_`` is given but its
basic type affinity matches that of the existing type. This avoids
SQLite's CAST of TIMESTAMP, which results in truncation of the
data, in those cases where the user needs to add a redundant ``type_`` for
other reasons.
.. change::
:tags: bug, autogenerate
:tickets: 393
Continued pep8 improvements by adding appropriate whitespace in
the base template for generated migrations. Pull request courtesy
Markus Mattes.
.. change::
:tags: bug, revisioning
Added an additional check when reading in revision files to detect
if the same file is being read twice; this can occur if the same directory
or a symlink equivalent is present more than once in version_locations.
A warning is now emitted and the file is skipped. Pull request courtesy
Jiri Kuncar.
.. change::
:tags: bug, autogenerate
:tickets: 395
Fixed bug where usage of a custom TypeDecorator which returns a
per-dialect type via :meth:`.TypeDecorator.load_dialect_impl` that differs
significantly from the default "impl" for the type decorator would fail
to compare correctly during autogenerate.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 392
Fixed bug in Postgresql "functional index skip" behavior where a
functional index that ended in ASC/DESC wouldn't be detected as something
we can't compare in autogenerate, leading to duplicate definitions
in autogenerated files.
.. change::
:tags: bug, versioning
Fixed bug where the "base" specifier, as in "base:head", could not
be used explicitly when ``--sql`` mode was present.
.. changelog::
:version: 0.8.8
:released: September 12, 2016
.. change::
:tags: autogenerate
The imports in the default script.py.mako are now at the top
so that flake8 editors don't complain by default. PR courtesy
Guilherme Mansur.
.. change::
:tags: feature, operations, postgresql
:tickets: 292
Added support for the USING clause to the ALTER COLUMN operation
for Postgresql. Support is via the
:paramref:`.op.alter_column.postgresql_using`
parameter. Pull request courtesy Frazer McLean.
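For illustration, a hypothetical string-to-integer conversion::

    import sqlalchemy as sa
    from alembic import op

    op.alter_column(
        "account",
        "data",
        existing_type=sa.String(50),
        type_=sa.Integer(),
        # expression handed to the USING clause
        postgresql_using="data::integer",
    )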
.. change::
:tags: feature, autogenerate
Autogenerate with type comparison enabled will pick up on the timezone
setting changing between DateTime types. Pull request courtesy
David Szotten.
.. changelog::
:version: 0.8.7
:released: July 26, 2016
.. change::
:tags: bug, versioning
:tickets: 336
Fixed bug where upgrading to the head of a branch which is already
present would fail, only if that head were also the dependency
of a different branch that is also upgraded, as the revision system
would see this as trying to go in the wrong direction. The check
here has been refined to distinguish between same-branch revisions
out of order vs. movement along sibling branches.
.. change::
:tags: bug, versioning
:tickets: 379
Adjusted the version traversal on downgrade
such that we can downgrade to a version that is a dependency for
a version in a different branch, *without* needing to remove that
dependent version as well. Previously, the target version would be
seen as a "merge point" for it's normal up-revision as well as the
dependency. This integrates with the changes for :ticket:`377`
and :ticket:`378` to improve treatment of branches with dependencies
overall.
.. change::
:tags: bug, versioning
:tickets: 377
Fixed bug where a downgrade to a version that is also a dependency
to a different branch would fail, as the system attempted to treat
this as an "unmerge" of a merge point, when in fact it doesn't have
the other side of the merge point available for update.
.. change::
:tags: bug, versioning
:tickets: 378
Fixed bug where the "alembic current" command wouldn't show a revision
as a current head if it were also a dependency of a version in a
different branch that's also applied. Extra logic is added to
extract "implied" versions of different branches from the top-level
versions listed in the alembic_version table.
.. change::
:tags: bug, versioning
Fixed bug where a repr() or str() of a Script object would fail
if the script had multiple dependencies.
.. change::
:tags: bug, autogenerate
Fixed bug in autogen where if the DB connection sends the default
schema as "None", this "None" would be removed from the list of
schemas to check if include_schemas were set. This could possibly
impact using include_schemas with SQLite.
.. change::
:tags: bug, batch
Small adjustment made to the batch handling for reflected CHECK
constraints to accommodate for SQLAlchemy 1.1 now reflecting these.
Batch mode still does not support CHECK constraints from the reflected
table as these can't be easily differentiated from the ones created
by types such as Boolean.
.. changelog::
:version: 0.8.6
:released: April 14, 2016
.. change::
:tags: bug, commands
:tickets: 367
Errors which occur within the Mako render step are now intercepted
and raised as CommandErrors like other failure cases; the Mako
exception itself is written using template-line formatting to
a temporary file which is named in the exception message.
.. change::
:tags: bug, postgresql
:tickets: 365
Added a fix to Postgresql server default comparison which first checks
if the text of the default is identical to the original, before attempting
to actually run the default. This accommodates for default-generation
functions that generate a new value each time such as a uuid function.
.. change::
:tags: bug, batch
:tickets: 361
Fixed bug introduced by the fix for :ticket:`338` in version 0.8.4
where a server default could no longer be dropped in batch mode.
Pull request courtesy Martin Domke.
.. change::
:tags: bug, batch, mssql
Fixed bug where SQL Server arguments for drop_column() would not
be propagated when running under a batch block. Pull request
courtesy Michal Petrucha.
.. changelog::
:version: 0.8.5
:released: March 9, 2016
.. change::
:tags: bug, autogenerate
:tickets: 335
Fixed bug where the columns rendered in a ``PrimaryKeyConstraint``
in autogenerate would inappropriately render the "key" of the
column, not the name. Pull request courtesy Jesse Dhillon.
.. change::
:tags: bug, batch
:tickets: 354
Repaired batch migration support for "schema" types which generate
constraints, in particular the ``Boolean`` datatype which generates
a CHECK constraint. Previously, an alter column operation with this
type would fail to correctly accommodate for the CHECK constraint
when changing either from or to this type. In the former case the operation
would fail entirely, in the latter, the CHECK constraint would
not get generated. Both of these issues are repaired.
.. change::
:tags: bug, mysql
:tickets: 355
Changing a schema type such as ``Boolean`` to a non-schema type would
emit a drop constraint operation which emits ``NotImplementedError`` for
the MySQL dialect. This drop constraint operation is now skipped when
the constraint originates from a schema type.
.. changelog::
:version: 0.8.4
:released: December 15, 2015
.. change::
:tags: feature, versioning
A major improvement to the hash id generation function, which for some
reason used an awkward arithmetic formula against uuid4() that produced
values that tended to start with the digits 1-4. Replaced with a
simple substring approach which provides an even distribution. Pull
request courtesy Antti Haapala.
.. change::
:tags: feature, autogenerate
Added an autogenerate renderer for the :class:`.ExecuteSQLOp` operation
object; only renders if given a plain SQL string, otherwise raises
NotImplementedError. Can be of help with custom autogenerate
sequences that includes straight SQL execution. Pull request courtesy
Jacob Magnusson.
.. change::
:tags: bug, batch
:tickets: 345
Batch mode generates a FOREIGN KEY constraint that is self-referential
using the ultimate table name, rather than ``_alembic_batch_temp``.
When the table is renamed from ``_alembic_batch_temp`` back to the
original name, the FK now points to the right name. This
will **not** work if referential integrity is being enforced (e.g. SQLite
"PRAGMA FOREIGN_KEYS=ON") since the original table is dropped and
the new table then renamed to that name, however this is now consistent
with how foreign key constraints on **other** tables already operate
with batch mode; these don't support batch mode if referential integrity
is enabled in any case.
.. change::
:tags: bug, autogenerate
:tickets: 341
Added a type-level comparator that distinguishes :class:`.Integer`,
:class:`.BigInteger`, and :class:`.SmallInteger` types and
dialect-specific types; these all have "Integer" affinity so previously
all compared as the same.
.. change::
:tags: bug, batch
:tickets: 338
Fixed bug where the ``server_default`` parameter of ``alter_column()``
would not function correctly in batch mode.
.. change::
:tags: bug, autogenerate
:tickets: 337
Adjusted the rendering for index expressions such that a :class:`.Column`
object present in the source :class:`.Index` will not be rendered
as table-qualified; e.g. the column name will be rendered alone.
Table-qualified names here were failing on systems such as Postgresql.
.. changelog::
:version: 0.8.3
:released: October 16, 2015
.. change::
:tags: bug, autogenerate
:tickets: 332
Fixed an 0.8 regression whereby the "imports" dictionary member of
the autogen context was removed; this collection is documented in the
"render custom type" documentation as a place to add new imports.
The member is now known as
:attr:`.AutogenContext.imports` and the documentation is repaired.
.. change::
:tags: bug, batch
:tickets: 333
Fixed bug in batch mode where a table that had pre-existing indexes
would create the same index on the new table with the same name,
which on SQLite produces a naming conflict as index names are in a
global namespace on that backend. Batch mode now defers the production
of both existing and new indexes until after the entire table transfer
operation is complete, which also means those indexes no longer take
effect during the INSERT from SELECT section as well; the indexes
are applied in a single step afterwards.
.. change::
:tags: bug, tests
Added "pytest-xdist" as a tox dependency, so that the -n flag
in the test command works if this is not already installed.
Pull request courtesy Julien Danjou.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 324
Fixed issue in PG server default comparison where model-side defaults
configured with Python unicode literals would leak the "u" character
from a ``repr()`` into the SQL used for comparison, creating an invalid
SQL expression, as the server-side comparison feature in PG currently
repurposes the autogenerate Python rendering feature to get a quoted
version of a plain string default.
.. changelog::
:version: 0.8.2
:released: August 25, 2015
.. change::
:tags: bug, autogenerate
:tickets: 321
Added workaround in new foreign key option detection feature for
MySQL's consideration of the "RESTRICT" option being the default,
for which no value is reported from the database; the MySQL impl now
corrects for when the model reports RESTRICT but the database reports
nothing. A similar rule is in the default FK comparison to accommodate
for the default "NO ACTION" setting being present in the model but not
necessarily reported by the database, or vice versa.
.. changelog::
:version: 0.8.1
:released: August 22, 2015
.. change::
:tags: feature, autogenerate
A custom :paramref:`.EnvironmentContext.configure.process_revision_directives`
hook can now generate op directives within the :class:`.UpgradeOps`
and :class:`.DowngradeOps` containers that will be generated as Python
code even when the ``--autogenerate`` flag is False; provided that
``revision_environment=True``, the full render operation will be run
even in "offline" mode.
.. change::
:tags: bug, autogenerate
Repaired the render operation for the :class:`.ops.AlterColumnOp` object
to succeed when the "existing_type" field was not present.
.. change::
:tags: bug, autogenerate
:tickets: 318
Fixed a regression 0.8 whereby the "multidb" environment template
failed to produce independent migration script segments for the
output template. This was due to the reorganization of the script
rendering system for 0.8. To accommodate this change, the
:class:`.MigrationScript` structure will in the case of multiple
calls to :meth:`.MigrationContext.run_migrations` produce lists
for the :attr:`.MigrationScript.upgrade_ops` and
:attr:`.MigrationScript.downgrade_ops` attributes; each :class:`.UpgradeOps`
and :class:`.DowngradeOps` instance keeps track of its own
``upgrade_token`` and ``downgrade_token``, and each are rendered
individually.
.. seealso::
:ref:`autogen_customizing_multiengine_revision` - additional detail
on the workings of the
:paramref:`.EnvironmentContext.configure.process_revision_directives`
parameter when multiple calls to :meth:`.MigrationContext.run_migrations`
are made.
.. change::
:tags: feature, autogenerate
:tickets: 317
Implemented support for autogenerate detection of changes in the
``ondelete``, ``onupdate``, ``initially`` and ``deferrable``
attributes of :class:`.ForeignKeyConstraint` objects on
SQLAlchemy backends that support these on reflection
(as of SQLAlchemy 1.0.8 currently Postgresql for all four,
MySQL for ``ondelete`` and ``onupdate`` only). A constraint object
that modifies these values will be reported as a "diff" and come out
as a drop/create of the constraint with the modified values.
The fields are ignored for backends which don't reflect these
attributes (as of SQLA 1.0.8 this includes SQLite, Oracle, SQL Server,
others).
.. changelog::
:version: 0.8.0
:released: August 12, 2015
.. change::
:tags: bug, batch
:tickets: 315
Fixed bug in batch mode where the ``batch_op.create_foreign_key()``
directive would be incorrectly rendered with the source table and
schema names in the argument list.
.. change::
:tags: feature, commands
Added new command ``alembic edit``. This command takes the same
arguments as ``alembic show``, however runs the target script
file within $EDITOR. Makes use of the ``python-editor`` library
in order to facilitate the handling of $EDITOR with reasonable
default behaviors across platforms. Pull request courtesy
Michel Albert.
.. change::
:tags: feature, commands
:tickets: 311
Added new multiple-capable argument ``--depends-on`` to the
``alembic revision`` command, allowing ``depends_on`` to be
established at the command line level rather than having to edit
the file after the fact. ``depends_on`` identifiers may also be
specified as branch names at the command line or directly within
the migration file. The values may be specified as partial
revision numbers from the command line which will be resolved to
full revision numbers in the output file.
.. change::
:tags: change, operations
A range of positional argument names have been changed to be
clearer and more consistent across methods within the
:class:`.Operations` namespace. The most prevalent form of name change
is that the descriptive names ``constraint_name`` and ``table_name``
are now used where previously the name ``name`` would be used.
This is in support of the newly modularized and extensible system of
operation objects in :mod:`alembic.operations.ops`.
An argument translation layer is in place
across the ``alembic.op`` namespace that will ensure that named
argument calling styles that use the old names will continue to
function by transparently translating to the new names,
also emitting a warning. This, along with the fact that these
arguments are positional in any case and aren't normally
passed with an explicit name, should ensure that the
overwhelming majority of applications should be unaffected by this
change. The *only* applications that are impacted are those that:
1. use the :class:`.Operations` object directly in some way, rather
than calling upon the ``alembic.op`` namespace, and
2. invoke the methods on :class:`.Operations` using named keyword
arguments for positional arguments like ``table_name``,
``constraint_name``, etc., which commonly were named ``name``
as of 0.7.6.
3. any application that is using named keyword arguments in place
of positional arguments for the recently added
:class:`.BatchOperations` object may also be affected.
The naming changes are documented as "versionchanged" for 0.8.0:
* :meth:`.BatchOperations.create_check_constraint`
* :meth:`.BatchOperations.create_foreign_key`
* :meth:`.BatchOperations.create_index`
* :meth:`.BatchOperations.create_unique_constraint`
* :meth:`.BatchOperations.drop_constraint`
* :meth:`.BatchOperations.drop_index`
* :meth:`.Operations.create_check_constraint`
* :meth:`.Operations.create_foreign_key`
* :meth:`.Operations.create_primary_key`
* :meth:`.Operations.create_index`
* :meth:`.Operations.create_table`
* :meth:`.Operations.create_unique_constraint`
* :meth:`.Operations.drop_constraint`
* :meth:`.Operations.drop_index`
* :meth:`.Operations.drop_table`
.. change::
:tags: feature, tests
The default test runner via "python setup.py test" is now py.test.
nose still works via run_tests.py.
.. change::
:tags: feature, operations
:tickets: 302
The internal system for Alembic operations has been reworked to now
build upon an extensible system of operation objects. New operations
can be added to the ``op.`` namespace, including that they are
available in custom autogenerate schemes.
.. seealso::
:ref:`operation_plugins`
.. change::
:tags: feature, autogenerate
:tickets: 301, 306
The internal system for autogenerate has been reworked to build upon
the extensible system of operation objects present in
:ticket:`302`. As part of this change, autogenerate now produces
a full object graph representing a list of migration scripts to
be written as well as operation objects that will render all the
Python code within them; a new hook
:paramref:`.EnvironmentContext.configure.process_revision_directives`
allows end-user code to fully customize what autogenerate will do,
including not just full manipulation of the Python steps to take
but also what file or files will be written and where. Additionally,
autogenerate is now extensible as far as database objects compared
and rendered into scripts; any new operation directive can also be
registered into a series of hooks that allow custom database/model
comparison functions to run as well as to render new operation
directives into autogenerate scripts.
.. seealso::
:ref:`alembic.autogenerate.toplevel`
.. change::
:tags: bug, versioning
:tickets: 314
Fixed bug where in the erroneous case that alembic_version contains
duplicate revisions, some commands would fail to process the
version history correctly and end up with a KeyError. The fix
allows the versioning logic to proceed, however a clear error is
emitted later when attempting to update the alembic_version table.
.. changelog::
:version: 0.7.7
:released: July 22, 2015
.. change::
:tags: bug, versioning
:tickets: 310
Fixed critical issue where a complex series of branches/merges would
bog down the iteration algorithm working over redundant nodes for
millions of cycles. An internal adjustment has been
made so that duplicate nodes are skipped within this iteration.
.. change::
:tags: feature, batch
:tickets: 305
Implemented support for :meth:`.BatchOperations.create_primary_key`
and :meth:`.BatchOperations.create_check_constraint`. Additionally,
table keyword arguments are copied from the original reflected table,
such as the "mysql_engine" keyword argument.
.. change::
:tags: bug, environment
:tickets: 300
The :meth:`.MigrationContext.stamp` method, added as part of the
versioning refactor in 0.7 as a more granular version of
:func:`.command.stamp`, now includes the "create the alembic_version
table if not present" step in the same way as the command version,
which was previously omitted.
.. change::
:tags: bug, autogenerate
:tickets: 298
Fixed bug where foreign key options including "onupdate",
"ondelete" would not render within the ``op.create_foreign_key()``
directive, even though they render within a full
``ForeignKeyConstraint`` directive.
.. change::
:tags: bug, tests
Repaired warnings that occur when running unit tests against
SQLAlchemy 1.0.5 or greater involving the "legacy_schema_aliasing"
flag.
.. changelog::
:version: 0.7.6
:released: May 5, 2015
.. change::
:tags: feature, versioning
:tickets: 297
Fixed bug where the case of multiple mergepoints that all
have the identical set of ancestor revisions would fail to be
upgradable, producing an assertion failure. Merge points were
previously assumed to always require at least an UPDATE in the
alembic_version table from one of the previous revs to the new one;
however, in this case, if one of the mergepoints has already
been reached, the remaining mergepoints have no row to UPDATE and
therefore must do an INSERT of their target version.
.. change::
:tags: feature, autogenerate
:tickets: 296
Added support for type comparison functions to be not just per
environment, but also present on the custom types themselves, by
supplying a method ``compare_against_backend``.
Added a new documentation section :ref:`compare_types` describing
type comparison fully.
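For illustration, a minimal sketch of such a method on a hypothetical
custom type::

    import sqlalchemy as sa
    from sqlalchemy.types import TypeDecorator

    class EpochType(TypeDecorator):
        """Hypothetical custom type stored as an Integer."""

        impl = sa.Integer

        def compare_against_backend(self, dialect, conn_type):
            # return True if equivalent to the reflected type,
            # False if not, or None to defer to the default comparison
            return isinstance(conn_type, sa.Integer)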
.. change::
:tags: feature, operations
:tickets: 255
Added a new option
:paramref:`.EnvironmentContext.configure.literal_binds`, which
will pass the ``literal_binds`` flag into the compilation of SQL
constructs when using "offline" mode. This has the effect that
SQL objects like inserts, updates, deletes as well as textual
statements sent using ``text()`` will be compiled such that the dialect
will attempt to render literal values "inline" automatically.
Only a subset of types is typically supported; the
:meth:`.Operations.inline_literal` construct remains as the construct
used to force a specific literal representation of a value.
The :paramref:`.EnvironmentContext.configure.literal_binds` flag
is added to the "offline" section of the ``env.py`` files generated
in new environments.
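A minimal sketch of the flag within the offline section of ``env.py``
(URL and metadata names are placeholders)::

    def run_migrations_offline():
        context.configure(
            url=url,
            target_metadata=target_metadata,
            literal_binds=True,  # render literal values inline
        )
        with context.begin_transaction():
            context.run_migrations()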
.. change::
:tags: bug, batch
:tickets: 289
Fully implemented the
:paramref:`~.Operations.batch_alter_table.copy_from` parameter for
batch mode, which previously was not functioning. This allows
"batch mode" to be usable in conjunction with ``--sql``.
.. change::
:tags: bug, batch
:tickets: 287
Repaired support for the :meth:`.BatchOperations.create_index`
directive, which was mis-named internally such that the operation
within a batch context could not proceed. The create index
operation will proceed as part of a larger "batch table recreate"
operation only if
:paramref:`~.Operations.batch_alter_table.recreate` is set to
"always", or if the batch operation includes other instructions that
require a table recreate.
.. changelog::
:version: 0.7.5
:released: March 19, 2015
.. change::
:tags: bug, autogenerate
:tickets: 266
The ``--autogenerate`` option is not valid when used in conjunction
with "offline" mode, e.g. ``--sql``. This now raises a ``CommandError``,
rather than failing more deeply later on. Pull request courtesy
Johannes Erdfelt.
.. change::
:tags: bug, operations, mssql
:tickets: 284
Fixed bug where the mssql DROP COLUMN directive failed to include
modifiers such as "schema" when emitting the DDL.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 282
Postgresql "functional" indexes are necessarily skipped from the
autogenerate process, as the SQLAlchemy backend currently does not
support reflection of these structures. A warning is emitted
both from the SQLAlchemy backend as well as from the Alembic
backend for Postgresql when such an index is detected.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 276
Fixed bug where MySQL backend would report dropped unique indexes
and/or constraints as both at the same time. This is because
MySQL doesn't actually have a "unique constraint" construct that
reports differently than a "unique index", so it is present in both
lists. The net effect though is that the MySQL backend will report
a dropped unique index/constraint as an index in cases where the object
was first created as a unique constraint, if no other information
is available to make the decision. This differs from other backends
like Postgresql which can report on unique constraints and
unique indexes separately.
.. change::
:tags: bug, commands
:tickets: 269
Fixed bug where using a partial revision identifier as the
"starting revision" in ``--sql`` mode in a downgrade operation
would fail to resolve properly.
As a side effect of this change, the
:meth:`.EnvironmentContext.get_starting_revision_argument`
method will return the "starting" revision in its originally-
given "partial" form in all cases, whereas previously when
running within the :meth:`.command.stamp` command, it would have
been resolved to a full number before passing it to the
:class:`.EnvironmentContext`. The resolution of this value to
a real revision number has basically been moved to a more fundamental
level within the offline migration process.
.. change::
:tags: feature, commands
Added a new feature :attr:`.Config.attributes`, to help with the use
case of sharing state such as engines and connections on the outside
with a series of Alembic API calls; also added a new cookbook section
to describe this simple but pretty important use case.
.. seealso::
:ref:`connection_sharing`
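A minimal sketch of the pattern (engine and connection names are
placeholders)::

    from alembic import command
    from alembic.config import Config

    with engine.begin() as connection:
        cfg = Config("alembic.ini")
        # share the live connection with env.py
        cfg.attributes["connection"] = connection
        command.upgrade(cfg, "head")

    # env.py then checks for it, e.g.:
    #     connectable = config.attributes.get("connection", None)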
.. change::
:tags: feature, environment
The format of the default ``env.py`` script has been refined a bit;
it now uses context managers not only for the scope of the transaction,
but also for connectivity from the starting engine. The engine is also
now called a "connectable" in support of the use case of an external
connection being passed in.
.. change::
:tags: feature, versioning
:tickets: 267
Added support for "alembic stamp" to work when given "heads" as an
argument, when multiple heads are present.
.. changelog::
:version: 0.7.4
:released: January 12, 2015
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 241
Repaired issue where a server default specified without ``text()``
that represented a numeric or floating point (e.g. with decimal places)
value would fail in the Postgresql-specific check for "compare server
default"; as PG accepts the value with quotes in the table specification,
it's still valid. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug, autogenerate
:tickets: 259
The rendering of a :class:`~sqlalchemy.schema.ForeignKeyConstraint`
will now ensure that the names of the source and target columns are
the database-side name of each column, and not the value of the
``.key`` attribute as may be set only on the Python side.
This is because Alembic generates the DDL for constraints
as standalone objects without the need to actually refer to an in-Python
:class:`~sqlalchemy.schema.Table` object, so there's no step that
would resolve these Python-only key names to database column names.
.. change::
:tags: bug, autogenerate
:tickets: 260
Fixed bug in foreign key autogenerate where if the in-Python table
used custom column keys (e.g. using the ``key='foo'`` kwarg to
``Column``), the comparison of existing foreign keys to those specified
in the metadata would fail, as the reflected table would not have
these keys available which to match up. Foreign key comparison for
autogenerate now ensures it's looking at the database-side names
of the columns in all cases; this matches the same functionality
within unique constraints and indexes.
.. change::
:tags: bug, autogenerate
:tickets: 261
Fixed issue in autogenerate type rendering where types that belong
to modules that have the name "sqlalchemy" in them would be mistaken
as being part of the ``sqlalchemy.`` namespace. Pull req courtesy
Bartosz Burclaf.
.. changelog::
:version: 0.7.3
:released: December 30, 2014
.. change::
:tags: bug, versioning
:tickets: 258
Fixed regression in new versioning system where upgrade / history
operation would fail on AttributeError if no version files were
present at all.
.. changelog::
:version: 0.7.2
:released: December 18, 2014
.. change::
:tags: bug, sqlite, autogenerate
Adjusted the SQLite backend regarding autogen of unique constraints
to work fully with the current SQLAlchemy 1.0, which now will report
on UNIQUE constraints that have no name.
.. change::
:tags: bug, batch
:tickets: 254
Fixed bug in batch where if the target table contained multiple
foreign keys to the same target table, the batch mechanics would
fail with a "table already exists" error. Thanks for the help
on this from Lucas Kahlert.
.. change::
:tags: bug, mysql
:tickets: 251
Fixed an issue where the MySQL routine to skip foreign-key-implicit
indexes would also catch unnamed unique indexes, as they would be
named after the column and look like the FK indexes. Pull request
courtesy Johannes Erdfelt.
.. change::
:tags: bug, mssql, oracle
:tickets: 253
Repaired a regression in both the MSSQL and Oracle dialects whereby
the overridden ``_exec()`` method failed to return a value, as is
needed now in the 0.7 series.
.. changelog::
:version: 0.7.1
:released: December 3, 2014
.. change::
:tags: bug, batch
The ``render_as_batch`` flag was inadvertently hardcoded to ``True``,
so all autogenerates were emitting batch mode; this has been
fixed so that batch mode is again used only when selected in env.py.
.. change::
:tags: feature, autogenerate
:tickets: 178
Support for autogenerate of FOREIGN KEY constraints has been added.
These are delivered within the autogenerate process in the same
manner as UNIQUE constraints, including ``include_object`` support.
Big thanks to Ann Kamyshnikova for doing the heavy lifting here.
.. change::
:tags: feature, batch
Added :paramref:`~.Operations.batch_alter_table.naming_convention`
argument to :meth:`.Operations.batch_alter_table`, as this is necessary
in order to drop foreign key constraints; these are often unnamed
on the target database, and in the case that they are named, SQLAlchemy
is as of the 0.9 series not including these names yet.
.. seealso::
:ref:`dropping_sqlite_foreign_keys`
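A minimal sketch (the convention and constraint names are
hypothetical)::

    from alembic import op

    naming_convention = {
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    }

    with op.batch_alter_table(
        "bar", naming_convention=naming_convention
    ) as batch_op:
        # the unnamed FK can now be referred to by its conventional name
        batch_op.drop_constraint("fk_bar_foo_id_foo", type_="foreignkey")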
.. change::
:tags: bug, batch
Fixed bug where the "source_schema" argument was not correctly passed
when calling :meth:`.BatchOperations.create_foreign_key`. Pull
request courtesy Malte Marquarding.
.. change::
:tags: bug, batch
:tickets: 249
Repaired the inspection, copying and rendering of CHECK constraints
and so-called "schema" types such as Boolean, Enum within the batch
copy system; the CHECK constraint will not be "doubled" when the table is
copied, and additionally the inspection of the CHECK constraint for
its member columns will no longer fail with an attribute error.
.. change::
:tags: feature, batch
Added two new arguments
:paramref:`.Operations.batch_alter_table.reflect_args`
and :paramref:`.Operations.batch_alter_table.reflect_kwargs`, so that
arguments may be passed directly to suit the
:class:`~.sqlalchemy.schema.Table`
object that will be reflected.
.. seealso::
:ref:`batch_controlling_table_reflection`
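A minimal sketch, overriding how a single column is reflected
(names are hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    with op.batch_alter_table(
        "bar",
        # override reflection of "flag" so no CHECK constraint is
        # generated for the Boolean type
        reflect_args=[sa.Column("flag", sa.Boolean(create_constraint=False))],
    ) as batch_op:
        batch_op.alter_column(
            "flag", new_column_name="bflag", existing_type=sa.Boolean()
        )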
.. changelog::
:version: 0.7.0
:released: November 24, 2014
.. change::
:tags: feature, versioning
:tickets: 167
The "multiple heads / branches" feature has now landed. This is
by far the most significant change Alembic has seen since its inception;
while the workflow of most commands hasn't changed, and the format
of version files and the ``alembic_version`` table are unchanged as well,
a new suite of features opens up in the case where multiple version
files refer to the same parent, or to the "base". Merging of
branches, operating across distinct named heads, and multiple
independent bases are now all supported. The feature incurs radical
changes to the internals of versioning and traversal, and should be
treated as "beta mode" for the next several subsequent releases
within 0.7.
.. seealso::
:ref:`branches`
.. change::
:tags: feature, versioning
:tickets: 124
In conjunction with support for multiple independent bases, the
specific version directories are now also configurable to include
multiple, user-defined directories. When multiple directories exist,
the creation of a revision file with no down revision requires
that the starting directory is indicated; the creation of subsequent
revisions along that lineage will then automatically use that
directory for new files.
.. seealso::
:ref:`multiple_version_directories`
.. change::
:tags: feature, operations, sqlite
:tickets: 21
Added "move and copy" workflow, where a table to be altered is copied to
a new one with the new structure and the old one dropped, is now
implemented for SQLite as well as all database backends in general
using the new :meth:`.Operations.batch_alter_table` system. This
directive provides a table-specific operations context which gathers
column- and constraint-level mutations specific to that table, and
at the end of the context creates a new table combining the structure
of the old one with the given changes, copies data from old table to new,
and finally drops the old table,
renaming the new one to the existing name. This is required for
fully featured SQLite migrations, as SQLite has very little support for the
traditional ALTER directive. The batch directive
is intended to produce code that is still compatible with other databases,
in that the "move and copy" process only occurs for SQLite by default,
while still providing some level of sanity to SQLite's
requirement by allowing multiple table mutation operations to
proceed within one "move and copy" as well as providing explicit
control over when this operation actually occurs. The "move and copy"
feature may be optionally applied to other backends as well, however
dealing with referential integrity constraints from other tables must
still be handled explicitly.
.. seealso::
:ref:`batch_migrations`
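A minimal sketch of the directive within a migration script (names are
hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # all mutations inside the block are applied with a single
        # "move and copy" of the table when running on SQLite
        with op.batch_alter_table("account") as batch_op:
            batch_op.add_column(sa.Column("status", sa.String(20)))
            batch_op.drop_column("obsolete_flag")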
.. change::
:tags: feature, commands
Relative revision identifiers as used with ``alembic upgrade``,
``alembic downgrade`` and ``alembic history`` can be combined with
specific revisions as well, e.g. ``alembic upgrade ae10+3``, to produce
a migration target relative to the given exact version.
.. change::
:tags: bug, commands
:tickets: 248
The ``alembic revision`` command accepts the ``--sql`` option to
suit some very obscure use case where the ``revision_environment``
flag is set up, so that ``env.py`` is run when ``alembic revision``
is run even though autogenerate isn't specified. As this flag is
otherwise confusing, error messages are now raised if
``alembic revision`` is invoked with both ``--sql`` and
``--autogenerate`` or with ``--sql`` without
``revision_environment`` being set.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 247
Added a rule for Postgresql to not render a "drop unique" and "drop index"
given the same name; for now it is assumed that the "index" is the
implicit one PostgreSQL generates. Future integration with
new SQLAlchemy 1.0 features will improve this to be more
resilient.
.. change::
:tags: bug, autogenerate
:tickets: 247
A change in the ordering when columns and constraints are dropped;
autogenerate will now place the "drop constraint" calls *before*
the "drop column" calls, so that columns involved in those constraints
still exist when the constraint is dropped.
.. change::
:tags: feature, commands
New commands added: ``alembic show``, ``alembic heads`` and
``alembic merge``. Also, a new option ``--verbose`` has been
added to several informational commands, such as ``alembic history``,
``alembic current``, ``alembic branches``, and ``alembic heads``.
``alembic revision`` also contains several new options used
within the new branch management system. The output of commands has
been altered in many cases to support new fields and attributes;
the ``history`` command in particular now returns its "verbose" output
only if ``--verbose`` is sent; without this flag it reverts to its
older behavior of short line items (which was never changed in the docs).
.. change::
:tags: changed, commands
The ``--head_only`` option to the ``alembic current`` command is
deprecated; the ``current`` command now lists just the version numbers
alone by default; use ``--verbose`` to get at additional output.
.. change::
:tags: feature, config
Added new argument :paramref:`.Config.config_args`, allows a dictionary
of replacement variables to be passed which will serve as substitution
values when an API-produced :class:`.Config` consumes the ``.ini``
file. Pull request courtesy Noufal Ibrahim.
.. change::
:tags: bug, oracle
:tickets: 245
The Oracle dialect sets "transactional DDL" to False by default,
as Oracle does not support transactional DDL.
.. change::
:tags: bug, autogenerate
:tickets: 243
Fixed a variety of issues surrounding rendering of Python code that
contains unicode literals. The first is that the "quoted_name" construct
that SQLAlchemy uses to represent table and column names as well
as schema names does not ``repr()`` correctly on Py2K when the value
contains unicode characters; therefore an explicit stringification is
added to these. Additionally, SQL expressions such as server defaults
were not being generated in a unicode-safe fashion leading to decode
errors if server defaults contained non-ascii characters.
.. change::
:tags: bug, operations
:tickets: 174
The :meth:`.Operations.add_column` directive will now additionally emit
the appropriate ``CREATE INDEX`` statement if the
:class:`~sqlalchemy.schema.Column` object specifies ``index=True``.
Pull request courtesy David Szotten.
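For instance (table and column names are hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    # emits ALTER TABLE ... ADD COLUMN followed by CREATE INDEX
    op.add_column("user", sa.Column("nickname", sa.String(50), index=True))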
.. change::
:tags: feature, operations
:tickets: 205
The :class:`~sqlalchemy.schema.Table` object is now returned when
the :meth:`.Operations.create_table` method is used. This ``Table``
is suitable for use in subsequent SQL operations, in particular
the :meth:`.Operations.bulk_insert` operation.
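A minimal sketch combining the two operations (names and rows are
hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    accounts = op.create_table(
        "accounts",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )
    # the returned Table is immediately usable for bulk_insert()
    op.bulk_insert(
        accounts,
        [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}],
    )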
.. change::
:tags: feature, autogenerate
:tickets: 203
Indexes and unique constraints are now included in the
:paramref:`.EnvironmentContext.configure.include_object` hook.
Indexes are sent with type ``"index"`` and unique constraints with
type ``"unique_constraint"``.
.. change::
:tags: bug, autogenerate
:tickets: 219
Bound parameters are now resolved as "literal" values within the
SQL expression inside of a CheckConstraint(), when rendering the SQL
as a text string; supported for SQLAlchemy 0.8.0 and forward.
.. change::
:tags: bug, autogenerate
:tickets: 199
Added a workaround for SQLAlchemy issue #3023 (fixed in 0.9.5) where
a column that's part of an explicit PrimaryKeyConstraint would not
have its "nullable" flag set to False, thus producing a false
autogenerate. Also added a related correction to MySQL which will
correct for MySQL's implicit server default of '0' when a NULL integer
column is turned into a primary key column.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 240
Repaired issue related to the fix for #208 and others; a composite
foreign key reported by MySQL would cause a KeyError as Alembic
attempted to remove MySQL's implicitly generated indexes from the
autogenerate list.
.. change::
:tags: bug, autogenerate
:tickets: 28
If the "alembic_version" table is present in the target metadata,
autogenerate will skip this also. Pull request courtesy
Dj Gilcrease.
.. change::
:tags: bug, autogenerate
:tickets: 77
The :paramref:`.EnvironmentContext.configure.version_table`
and :paramref:`.EnvironmentContext.configure.version_table_schema`
arguments are now honored during the autogenerate process, such that
these names will be used as the "skip" names on both the database
reflection and target metadata sides.
.. change::
:tags: changed, autogenerate
:tickets: 229
The default value of the
:paramref:`.EnvironmentContext.configure.user_module_prefix`
parameter is **no longer the same as the SQLAlchemy prefix**.
When omitted, user-defined types will now use the ``__module__``
attribute of the type class itself when rendering in an
autogenerated module.
.. change::
:tags: bug, templates
:tickets: 234
Revision files are now written out using the ``'wb'`` modifier to
``open()``, since Mako reads the templates with ``'rb'``, thus preventing
CRs from being doubled up as has been observed on windows. The encoding
of the output now defaults to 'utf-8', which can be configured using
a newly added config file parameter ``output_encoding``.
.. change::
:tags: bug, operations
:tickets: 230
Added support for use of the :class:`~sqlalchemy.sql.elements.quoted_name`
construct when using the ``schema`` argument within operations. This
allows a name containing a dot to be fully quoted, as well as to
provide configurable quoting on a per-name basis.
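For example, a schema name containing a dot can be quoted as a whole
(the name itself is hypothetical)::

    import sqlalchemy as sa
    from sqlalchemy.sql.elements import quoted_name
    from alembic import op

    op.create_table(
        "widget",
        sa.Column("id", sa.Integer, primary_key=True),
        # quote the entire name rather than splitting on the dot
        schema=quoted_name("my.schema", quote=True),
    )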
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 73
Added a routine by which the Postgresql Alembic dialect inspects
the server default of INTEGER/BIGINT columns as they are reflected
during autogenerate for the pattern ``nextval(<name>...)`` containing
a potential sequence name, then queries ``pg_catalog`` to see if this
sequence is "owned" by the column being reflected; if so, it assumes
this is a SERIAL or BIGSERIAL column and the server default is
omitted from the column reflection as well as any kind of
server_default comparison or rendering, along with an INFO message
in the logs indicating this has taken place. This allows SERIAL/BIGSERIAL
columns to keep the SEQUENCE from being unnecessarily present within
the autogenerate operation.
.. change::
:tags: bug, autogenerate
:tickets: 197, 64, 196
The system by which autogenerate renders expressions within
a :class:`~sqlalchemy.schema.Index`, the ``server_default``
of :class:`~sqlalchemy.schema.Column`, and the
``existing_server_default`` of
:meth:`.Operations.alter_column` has been overhauled to anticipate
arbitrary SQLAlchemy SQL constructs, such as ``func.somefunction()``,
``cast()``, ``desc()``, and others. The system does not, as might
be preferred, render the full-blown Python expression as originally
created within the application's source code, as this would be exceedingly
complex and difficult. Instead, it renders the SQL expression against
the target backend that's subject to the autogenerate, and then
renders that SQL inside of a :func:`~sqlalchemy.sql.expression.text`
construct as a literal SQL string. This approach still has the
downside that the rendered SQL construct may not be backend-agnostic
in all cases, so there is still a need for manual intervention in that
small number of cases, but overall the majority of cases should work
correctly now. Big thanks to Carlos Rivera for pull requests and
support on this.
.. change::
:tags: feature
SQLAlchemy's testing infrastructure is now used to run tests.
This system supports both nose and pytest and opens the way
for Alembic testing to support any number of backends, parallel
testing, and 3rd party dialect testing.
.. change::
:tags: changed, compatibility
Minimum SQLAlchemy version is now 0.7.6, however at least
0.8.4 is strongly recommended. The overhaul of the test suite
allows for fully passing tests on all SQLAlchemy versions
from 0.7.6 on forward.
.. change::
:tags: bug, operations
The "match" keyword is not sent to :class:`.ForeignKeyConstraint`
by :meth:`.Operations.create_foreign_key` when SQLAlchemy 0.7 is in use;
this keyword was added to SQLAlchemy as of 0.8.0.
.. changelog::
:version: 0.6.7
:released: September 9, 2014
.. change::
:tags: bug, mssql
Fixed bug in MSSQL dialect where "rename table" wasn't using
``sp_rename()`` as is required on SQL Server. Pull request courtesy
Łukasz Bołdys.
.. change::
:tags: feature
:tickets: 222
Added support for functional indexes when using the
:meth:`.Operations.create_index` directive. Within the list of columns,
the SQLAlchemy ``text()`` construct can be sent, embedding a literal
SQL expression; the :meth:`.Operations.create_index` will perform some hackery
behind the scenes to get the :class:`.Index` construct to cooperate.
This works around some current limitations in :class:`.Index`
which should be resolved on the SQLAlchemy side at some point.
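For instance (index, table and column names are hypothetical)::

    from sqlalchemy import text
    from alembic import op

    # a literal SQL expression embedded in the column list
    op.create_index("ix_user_lower_name", "user", [text("lower(name)")])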
.. changelog::
:version: 0.6.6
:released: August 7, 2014
.. change::
:tags: bug
:tickets: 95
A file named ``__init__.py`` in the ``versions/`` directory is now
ignored by Alembic when the collection of version files is retrieved.
Pull request courtesy Michael Floering.
.. change::
:tags: bug
Fixed Py3K bug where an attempt would be made to sort None against
string values when autogenerate would detect tables across multiple
schemas, including the default schema. Pull request courtesy
paradoxxxzero.
.. change::
:tags: bug
The autogenerate renderer will render the arguments within a Table construct
using ``*[...]`` when the number of columns/elements is greater than
255. Pull request courtesy Ryan P. Kelly.
.. change::
:tags: bug
Fixed bug where foreign key constraints would fail to render in
autogenerate when a schema name was present. Pull request courtesy
Andreas Zeidler.
.. change::
:tags: bug
:tickets: 212
Some deep-in-the-weeds fixes to try to get "server default" comparison
working better across platforms and expressions, in particular on
the Postgresql backend, mostly dealing with quoting/not quoting of various
expressions at the appropriate time and on a per-backend basis.
Repaired and tested support for such defaults as Postgresql interval
and array defaults.
.. change::
:tags: enhancement
:tickets: 209
When a run of Alembic command line fails due to ``CommandError``,
the output now prefixes the string with ``"FAILED:"``, and the error
is also written to the log output using ``log.error()``.
.. change::
:tags: bug
:tickets: 208
Further liberalized the check for MySQL indexes that shouldn't be
counted in autogenerate as "drops"; this time it's been reported
that an implicitly created index might be named the same as a composite
foreign key constraint, and not the actual columns, so we now skip those
when detected as well.
.. change::
:tags: feature
Added a new accessor :attr:`.MigrationContext.config`, when used
in conjunction with a :class:`.EnvironmentContext` and
:class:`.Config`, this config will be returned. Patch
courtesy Marc Abramowitz.
.. changelog::
:version: 0.6.5
:released: May 3, 2014
.. change::
:tags: bug, autogenerate, mysql
:tickets: 202
This release fixes an "autogenerate index detection" bug: when a MySQL table
includes an Index with the same name as a column, autogenerate reported
it as an "add" even though it's not; this is because we ignore reflected
indexes of this nature due to MySQL creating them implicitly. Indexes
that are named the same as a column are now ignored on
MySQL if we see that the backend is reporting that the index already exists;
this means that we can still detect additions of these indexes
but not drops, as we cannot distinguish whether a backend index with the
same name as the column is user-generated or MySQL-generated.
.. change::
:tags: feature, environment
:tickets: 201
Added new feature :paramref:`.EnvironmentContext.configure.transaction_per_migration`,
which when True causes the BEGIN/COMMIT pair to incur for each migration
individually, rather than for the whole series of migrations. This is
to assist with some database directives that need to be within individual
transactions, without the need to disable transactional DDL entirely.
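A minimal ``env.py`` sketch::

    # within run_migrations_online():
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        # one BEGIN/COMMIT pair per migration file
        transaction_per_migration=True,
    )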
.. change::
:tags: bug, autogenerate
:tickets: 200
Fixed bug where the ``include_object()`` filter would not receive
the original :class:`.Column` object when evaluating a database-only
column to be dropped; the object would not include the parent
:class:`.Table` nor other aspects of the column that are important
for generating the "downgrade" case where the column is recreated.
.. change::
:tags: bug, environment
:tickets: 195
Fixed bug where :meth:`.EnvironmentContext.get_x_argument`
would fail if the :class:`.Config` in use didn't actually
originate from a command line call.
.. change::
:tags: bug, autogenerate
:tickets: 194
Fixed another bug regarding naming conventions, continuing
from :ticket:`183`, where add_index() and
drop_index() directives would not correctly render the ``f()``
construct when the index contained a convention-driven name.
.. changelog::
:version: 0.6.4
:released: March 28, 2014
.. change::
:tags: bug, mssql
:tickets: 186
Added quoting to the table name when the special EXEC is run to
drop any existing server defaults or constraints when the
:paramref:`.Operations.drop_column.mssql_drop_check` or
:paramref:`.Operations.drop_column.mssql_drop_default`
arguments are used.
.. change::
:tags: bug, mysql
:tickets: 103
Added/fixed support for MySQL "SET DEFAULT" / "DROP DEFAULT" phrases,
which will now be rendered if only the server default is changing
or being dropped (e.g. specify None to alter_column() to indicate
"DROP DEFAULT"). Also added support for rendering MODIFY rather than
CHANGE when the column name isn't changing.
.. change::
:tags: bug
:tickets: 190
Added support for the ``initially``, ``match`` keyword arguments
as well as dialect-specific keyword arguments to
:meth:`.Operations.create_foreign_key`.
.. change::
:tags: feature
:tickets: 163
Altered the support for "sourceless" migration files (e.g. only
.pyc or .pyo present) so that the flag "sourceless=true" needs to
be in alembic.ini for this behavior to take effect.
.. change::
:tags: bug, mssql
:tickets: 185
The feature that keeps on giving, index/unique constraint autogenerate
detection, has even more fixes, this time to accommodate database dialects
that don't yet report on unique constraints, but whose backends
do report unique constraints as indexes. The logic
Alembic uses to distinguish between "this is an index!" vs.
"this is a unique constraint that is also reported as an index!" has now
been further enhanced to not produce unwanted migrations when the dialect
is observed to not yet implement get_unique_constraints() (e.g. mssql).
Note that such a backend will no longer report index drops for unique
indexes, as these cannot be distinguished from an unreported unique
index.
.. change::
:tags: bug
:tickets: 183
Extensive changes have been made to more fully support SQLAlchemy's new
naming conventions feature. Note that while SQLAlchemy has added this
feature as of 0.9.2, some additional fixes in 0.9.4 are needed to
resolve some of the issues:
1. The :class:`.Operations` object now takes into account the naming
conventions that are present on the :class:`.MetaData` object that's
associated using :paramref:`~.EnvironmentContext.configure.target_metadata`.
When :class:`.Operations` renders a constraint directive like
``ADD CONSTRAINT``, it now will make use of this naming convention
when it produces its own temporary :class:`.MetaData` object.
2. Note however that the autogenerate feature in most cases generates
constraints like foreign keys and unique constraints with the
final names intact; the only exceptions are the constraints implicit
with a schema-type like Boolean or Enum. In most of these cases,
the naming convention feature will not take effect for these constraints
and will instead use the given name as is, with one exception....
3. Naming conventions which use the ``"%(constraint_name)s"`` token, that
is, produce a new name that uses the original name as a component,
will still be pulled into the naming convention converter and be
converted. The problem arises when autogenerate renders a constraint
with its already-generated name present in the migration file's source
code: the name will be doubled up at render time due to the combination
of #1 and #2. To work around this, autogenerate now renders these
already-tokenized names using the new :meth:`.Operations.f` component.
This component is only generated if **SQLAlchemy 0.9.4** or greater
is in use.
Therefore it is highly recommended that an upgrade to Alembic 0.6.4
be accompanied by an upgrade of SQLAlchemy 0.9.4, if the new naming
conventions feature is used.
.. seealso::
:ref:`autogen_naming_conventions`
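A minimal sketch of the component within a migration (constraint and
table names are hypothetical)::

    from alembic import op

    # op.f() marks the name as already final, so the naming convention
    # is not applied to it a second time at DDL render time
    op.drop_constraint(op.f("ck_account_flag_bool"), "account", type_="check")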
.. change::
:tags: bug
:tickets: 160
Suppressed IOErrors which can be raised when the program output pipe
is closed under a program like ``head``; however this only
works on Python 2. On Python 3, there is not yet a known way to
suppress the BrokenPipeError warnings without prematurely terminating
the program via signals.
.. change::
:tags: bug
:tickets: 179
Fixed bug where :meth:`.Operations.bulk_insert` would not function
properly when :meth:`.Operations.inline_literal` values were used,
either in --sql or non-sql mode. The values will now render
directly in --sql mode. For compatibility with "online" mode,
a new flag :paramref:`~.Operations.bulk_insert.multiinsert`
can be set to False which will cause each parameter set to be
compiled and executed with individual INSERT statements.
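A minimal sketch (table and values are hypothetical)::

    from sqlalchemy.sql import table, column
    from alembic import op

    accounts = table("accounts", column("id"), column("created"))
    op.bulk_insert(
        accounts,
        [
            {"id": 1, "created": op.inline_literal("2014-01-01")},
            {"id": 2, "created": op.inline_literal("2014-02-01")},
        ],
        # compile and execute each parameter set individually
        multiinsert=False,
    )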
.. change::
:tags: bug, py3k
:tickets: 175
Fixed a failure of the system that allows "legacy keyword arguments"
to be understood, which arose as of a change in Python 3.4 regarding
decorators. A workaround is applied that allows the code to work
across Python 3 versions.
.. change::
:tags: feature
The :func:`.command.revision` command now returns the :class:`.Script`
object corresponding to the newly generated revision. From this
structure, one can get the revision id, the module documentation,
and everything else, for use in scripts that call upon this command.
Pull request courtesy Robbie Coomber.
.. changelog::
:version: 0.6.3
:released: February 2, 2014
.. change::
:tags: bug
:tickets: 172
Added a workaround for when we call ``fcntl.ioctl()`` to get at
``TERMWIDTH``; if the function returns zero, as is reported to occur
in some pseudo-ttys, the message wrapping system is disabled in the
same way as if ``ioctl()`` failed.
.. change::
:tags: feature
:tickets: 171
Added new argument
:paramref:`.EnvironmentContext.configure.user_module_prefix`.
This prefix is applied when autogenerate renders a user-defined type,
which here is defined as any type that is from a module outside of the
``sqlalchemy.`` hierarchy. This prefix defaults to ``None``, in
which case the :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`
is used, thus preserving the current behavior.
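A minimal ``env.py`` sketch (the prefix is hypothetical)::

    # render user-defined types as e.g. "mytypes.MyType()" in
    # generated migration files
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        user_module_prefix="mytypes.",
    )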
.. change::
:tags: bug
:tickets: 170
Added support for autogenerate covering the use case where :class:`.Table`
objects specified in the metadata have an explicit ``schema`` attribute
whose name matches that of the connection's default schema
(e.g. "public" for Postgresql). Previously, it was assumed that "schema"
was ``None`` when it matched the "default" schema, now the comparison
adjusts for this.
.. change::
:tags: bug
The :func:`.compare_metadata` public API function now takes into
account the settings for
:paramref:`.EnvironmentContext.configure.include_object`,
:paramref:`.EnvironmentContext.configure.include_symbol`,
and :paramref:`.EnvironmentContext.configure.include_schemas`, in the
same way that the ``--autogenerate`` command does. Pull
request courtesy Roman Podoliaka.
.. change::
:tags: bug
:tickets: 168
Calling :func:`.bulk_insert` with an empty list will not emit any
commands on the current connection. This was already the case with
``--sql`` mode, so is now the case with "online" mode.
.. change::
:tags: bug
Enabled schema support for index and unique constraint autodetection;
previously these were non-functional and could in some cases lead to
attribute errors. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug
:tickets: 164
More fixes to index autodetection; indexes created with expressions
like DESC or functional indexes will no longer cause AttributeError
exceptions when attempting to compare the columns.
.. change::
:tags: feature
:tickets: 163
The :class:`.ScriptDirectory` system that loads migration files
from a ``versions/`` directory now supports so-called
"sourceless" operation, where the ``.py`` files are not present
and instead ``.pyc`` or ``.pyo`` files are directly present where
the ``.py`` files should be. Note that while Python 3.3 has a
new system of locating ``.pyc``/``.pyo`` files within a directory
called ``__pycache__`` (e.g. PEP-3147), PEP-3147 maintains
support for the "source-less imports" use case, where the
``.pyc``/``.pyo`` are present in the "old" location, e.g. next
to the ``.py`` file; this is the usage that's supported even when
running Python3.3.
.. changelog::
:version: 0.6.2
:released: Fri Dec 27 2013
.. change::
:tags: bug
Autogenerate for ``op.create_table()`` will not include a
``PrimaryKeyConstraint()`` that has no columns.
.. change::
:tags: bug
Fixed bug in the not-internally-used :meth:`.ScriptDirectory.get_base`
method which would fail if called on an empty versions directory.
.. change::
:tags: bug
:tickets: 157
An almost-rewrite of the new unique constraint/index autogenerate
detection, to accommodate a variety of issues. The emphasis is on
not generating false positives for those cases where no net change
is present, as these errors are the ones that impact all autogenerate
runs:
* Fixed an issue with unique constraint autogenerate detection where
a named ``UniqueConstraint`` on both sides with column changes would
render with the "add" operation before the "drop", requiring the
user to reverse the order manually.
* Corrected for MySQL's apparent addition of an implicit index
for a foreign key column, so that it doesn't show up as "removed".
This required that the index/constraint autogen system query the
dialect-specific implementation for special exceptions.
* reworked the "dedupe" logic to accommodate MySQL's bi-directional
duplication of unique indexes as unique constraints, and unique
constraints as unique indexes. Postgresql's slightly different
logic of duplicating unique constraints into unique indexes
continues to be accommodated as well. Note that a unique index
or unique constraint removal on a backend that duplicates these may
show up as a distinct "remove_constraint()" / "remove_index()" pair,
which may need to be corrected in the post-autogenerate if multiple
backends are being supported.
* added another dialect-specific exception to the SQLite backend
when dealing with unnamed unique constraints, as the backend can't
currently report on constraints that were made with this technique,
hence they'd come out as "added" on every run.
* the ``op.create_table()`` directive will be auto-generated with
the ``UniqueConstraint`` objects inline, but will not double them
up with a separate ``create_unique_constraint()`` call, which may
have been occurring. Indexes still get rendered as distinct
``op.create_index()`` calls even when the corresponding table was
created in the same script.
* the inline ``UniqueConstraint`` within ``op.create_table()`` includes
all the options like ``deferrable``, ``initially``, etc. Previously
these weren't rendering.
.. change::
:tags: feature, mssql
Added new argument ``mssql_drop_foreign_key`` to
:meth:`.Operations.drop_column`. Like ``mssql_drop_default``
and ``mssql_drop_check``, will do an inline lookup for a
single foreign key which applies to this column, and drop it.
For a column with more than one FK, you'd still need to explicitly
use :meth:`.Operations.drop_constraint` given the name,
even though only MSSQL has this limitation in the first place.
.. change::
:tags: bug, mssql
The MSSQL backend will add the batch separator (e.g. ``"GO"``)
in ``--sql`` mode after the final ``COMMIT`` statement, to ensure
that statement is also processed in batch mode. Courtesy
Derek Harland.
.. changelog::
:version: 0.6.1
:released: Wed Nov 27 2013
.. change::
:tags: bug, mysql
:tickets: 152
Fixed bug where :func:`.op.alter_column` in the MySQL dialect
would fail to apply quotes to column names that had mixed casing
or spaces.
.. change::
:tags: feature
Expanded the size of the "slug" generated by "revision" to 40
characters, which is also configurable by new field
``truncate_slug_length``; and also split on the word rather than the
character; courtesy Frozenball.
.. change::
:tags: bug
:tickets: 135
Fixed the output wrapping for Alembic message output, so that
we either get the terminal width for "pretty printing" with
indentation, or if not we just output the text as is; in any
case the text won't be wrapped too short.
.. change::
:tags: bug
Fixes to Py3k in-place compatibility regarding output encoding and related
issues; the use of the new io.* package introduced some incompatibilities on Py2k.
These should be resolved due to the introduction of new adapter types
for translating from io.* to Py2k file types, including StringIO types.
Thanks to Javier Santacruz for help with this.
.. change::
:tags: bug
:tickets: 145
Fixed py3k bug where the wrong form of ``next()`` was being called
when using the list_templates command. Courtesy Chris Wilkes.
.. change::
:tags: feature
:tickets: 107
Support for autogeneration detection and rendering of indexes and
unique constraints has been added. The logic goes through some effort
in order to differentiate between true unique constraints and
unique indexes, where there are some quirks on backends like Postgresql.
The effort here in producing the feature and tests is courtesy of IJL.
.. change::
:tags: bug
Fixed bug introduced by new ``include_object`` argument where the
inspected column would be misinterpreted when using a user-defined
type comparison function, causing a KeyError or similar expression-related
error. Fix courtesy Maarten van Schaik.
.. change::
:tags: bug
Added the "deferrable" keyword argument to :func:`.op.create_foreign_key`
so that ``DEFERRABLE`` constraint generation is supported; courtesy
Pedro Romano.
.. change::
:tags: bug
:tickets: 137
Ensured that strings going to stdout go through an encode/decode phase,
so that any non-ASCII characters get to the output stream correctly
in both Py2k and Py3k. Also added source encoding detection using
Mako's parse_encoding() routine in Py2k so that the __doc__ of a
non-ascii revision file can be treated as unicode in Py2k.
.. changelog::
:version: 0.6.0
:released: Fri July 19 2013
.. change::
:tags: feature
:tickets: 101
Added new kw argument to :meth:`.EnvironmentContext.configure`
``include_object``. This is a more flexible version of the
``include_symbol`` argument which allows filtering of columns as well as tables
from the autogenerate process,
and in the future will also work for types, constraints and
other constructs. The fully constructed schema object is passed,
including its name and type as well as a flag indicating if the object
is from the local application metadata or is reflected.
.. change::
:tags: feature
The output of the ``alembic history`` command is now
expanded to show information about each change on multiple
lines, including the full top message,
resembling the formatting of git log.
.. change::
:tags: feature
Added :attr:`alembic.config.Config.cmd_opts` attribute,
allows access to the ``argparse`` options passed to the
``alembic`` runner.
.. change::
:tags: feature
:tickets: 120
Added new command line argument ``-x``, allows extra arguments
to be appended to the command line which can be consumed
within an ``env.py`` script by looking at
``context.config.cmd_opts.x``, or more simply a new
method :meth:`.EnvironmentContext.get_x_argument`.
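As a sketch, given an invocation such as
``alembic -x db_url=postgresql://... upgrade head``, ``env.py`` might
consume the value as follows (the ``db_url`` key is hypothetical)::

    # in env.py
    x_args = context.get_x_argument(as_dictionary=True)
    db_url = x_args.get("db_url", config.get_main_option("sqlalchemy.url"))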
.. change::
:tags: bug
:tickets: 125
Added support for options like "name" etc. to be rendered
within CHECK constraints in autogenerate. Courtesy
Sok Ann Yap.
.. change::
:tags: misc
Source repository has been moved from Mercurial to Git.
.. change::
:tags: bug
Repaired autogenerate rendering of ForeignKeyConstraint
to include use_alter argument, if present.
.. change::
:tags: feature
Added ``-r`` argument to ``alembic history`` command,
allows specification of ``[start]:[end]`` to view
a slice of history. Accepts revision numbers, symbols
"base", "head", a new symbol "current" representing the
current migration, as well as relative ranges for one
side at a time (i.e. ``-r-5:head``, ``-rcurrent:+3``).
Courtesy Atsushi Odagiri for this feature.
.. change::
:tags: feature
:tickets: 55
Source base is now in-place for Python 2.6 through
3.3, without the need for 2to3. Support for Python 2.5
and below has been dropped. Huge thanks to
Hong Minhee for all the effort on this!
.. changelog::
:version: 0.5.0
:released: Thu Apr 4 2013
.. note::
Alembic 0.5.0 now requires at least
version 0.7.3 of SQLAlchemy to run properly.
Support for 0.6 has been dropped.
.. change::
:tags: feature
:tickets: 76
Added ``version_table_schema`` argument
to :meth:`.EnvironmentContext.configure`,
complements the ``version_table`` argument to
set an optional remote schema for the version
table. Courtesy Christian Blume.
.. change::
:tags: bug, postgresql
:tickets: 32
Fixed format of RENAME for table that includes
schema with Postgresql; the schema name shouldn't
be in the "TO" field.
.. change::
:tags: feature
:tickets: 90
Added ``output_encoding`` option to
:meth:`.EnvironmentContext.configure`,
used with ``--sql`` mode to apply an encoding
to the output stream.
.. change::
:tags: feature
:tickets: 93
Added :meth:`.Operations.create_primary_key`
operation, will generate an ADD CONSTRAINT
for a primary key.
.. change::
:tags: bug, mssql
:tickets: 109
Fixed bug whereby double quoting would be applied
to target column name during an ``sp_rename``
operation.
.. change::
:tags: bug, sqlite, mysql
:tickets: 112
transactional_ddl flag for SQLite, MySQL dialects
set to False. MySQL doesn't support it; SQLite does,
but the current pysqlite driver does not.

.. change::
:tags: feature
:tickets: 115
upgrade and downgrade commands will list the
first line of the docstring out next to the
version number. Courtesy Hong Minhee.
.. change::
:tags: feature
Added --head-only option to "alembic current",
will print the current version, plus the symbol
"(head)" if this version is the head.
Courtesy Charles-Axel Dein.
.. change::
:tags: bug
:tickets: 110
Autogenerate will render additional table keyword
arguments like "mysql_engine" and others within
op.create_table().
.. change::
:tags: feature
:tickets: 108
The rendering of any construct during autogenerate
can be customized, in particular to allow special rendering
for user-defined column, constraint subclasses, using new
``render_item`` argument to
:meth:`.EnvironmentContext.configure`.
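A sketch of such a callable, using the present-day calling convention
(``MySpecialType`` and the import paths are illustrative)::

    from alembic import context
    from myapp.types import MySpecialType  # illustrative custom type

    def render_item(type_, obj, autogen_context):
        if type_ == "type" and isinstance(obj, MySpecialType):
            autogen_context.imports.add("from myapp import types")
            return "types.MySpecialType()"
        # fall back to the default rendering
        return False

    context.configure(
        # ...
        render_item=render_item,
    )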
.. change::
:tags: bug
Fixed bug whereby create_index()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
This is the same issue that was fixed for unique
constraints in version 0.3.2.
.. change::
:tags: bug
Worked around a backwards-incompatible regression in Python3.3
regarding argparse; running "alembic" with no arguments
now yields an informative error in py3.3 as with all previous versions.
Courtesy Andrey Antukh.
.. change::
:tags: change
SQLAlchemy 0.6 is no longer supported by Alembic - minimum version is 0.7.3,
full support is as of 0.7.9.
.. change::
:tags: bug
:tickets: 104
A host of argument name changes within migration
operations for consistency. Keyword arguments
will continue to work on the old name for backwards compatibility,
however required positional arguments will not:
:meth:`.Operations.alter_column` - ``name`` -> ``new_column_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.create_index` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_index` - ``tablename`` -> ``table_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.drop_constraint` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_constraint` - ``type`` -> ``type_`` - old
name will work for backwards compatibility
.. changelog::
:version: 0.4.2
:released: Fri Jan 11 2013
.. change::
:tags: bug, autogenerate
:tickets: 99
Fixed bug where autogenerate would fail if a Column
to be added to a table made use of the ".key" parameter.
.. change::
:tags: bug, sqlite
:tickets: 98
The "implicit" constraint generated by a
type such as Boolean or Enum will not generate an
ALTER statement when run on SQLite, which does not
support ALTER for the purpose of adding/removing
constraints separate from the column def itself.
While SQLite supports adding a CHECK constraint
at the column level, SQLAlchemy would need modification
to support this.
A warning is emitted indicating this
constraint cannot be added in this scenario.
.. change::
:tags: bug
:tickets: 96
Added a workaround to setup.py to prevent
"NoneType" error from occurring when
"setup.py test" is run.
.. change::
:tags: bug
:tickets: 96
Added an append_constraint() step to each
condition within
test_autogenerate:AutogenRenderTest.test_render_fk_constraint_kwarg
if the SQLAlchemy version is less than 0.8, as ForeignKeyConstraint
does not auto-append prior to 0.8.
.. change::
:tags: feature
:tickets: 96
Added a README.unittests with instructions for running the test
suite fully.
.. changelog::
:version: 0.4.1
:released: Sun Dec 9 2012
.. change::
:tags: bug
:tickets: 92
Added support for autogenerate render of
ForeignKeyConstraint options onupdate,
ondelete, initially, and deferred.
.. change::
:tags: bug
:tickets: 94
Autogenerate will include "autoincrement=False"
in the rendered table metadata
if this flag was set to false on the source
:class:`.Column` object.
.. change::
:tags: feature
:tickets: 66
Explicit error message describing the case
when downgrade --sql is used without specifying
specific start/end versions.
.. change::
:tags: bug
:tickets: 81
Removed erroneous "emit_events" attribute
from operations.create_table() documentation.
.. change::
:tags: bug
:tickets:
Fixed the minute component in file_template
which returned the month part of the create date.
.. changelog::
:version: 0.4.0
:released: Mon Oct 01 2012
.. change::
:tags: feature
:tickets: 33
Support for tables in alternate schemas
has been added fully to all operations, as well as to
the autogenerate feature. When using autogenerate,
specifying the flag include_schemas=True to
Environment.configure() will also cause autogenerate
to scan all schemas located by Inspector.get_schema_names(),
which is supported by *some* (but not all)
SQLAlchemy dialects including Postgresql.
*Enormous* thanks to Bruno Binet for a huge effort
in implementing as well as writing tests.
.. change::
:tags: feature
:tickets: 70
The command line runner has been organized
into a reusable CommandLine object, so that other
front-ends can re-use the argument parsing built
in.
.. change::
:tags: feature
:tickets: 43
Added "stdout" option to Config, provides
control over where the "print" output of commands like
"history", "init", "current" etc. are sent.
.. change::
:tags: bug
:tickets: 71
Fixed the "multidb" template which was badly out
of date. It now generates revision files using
the configuration to determine the different
upgrade_<xyz>() methods needed as well, instead of
needing to hardcode these. Huge thanks to
BryceLohr for doing the heavy lifting here.
.. change::
:tags: bug
:tickets: 72
Fixed the regexp that was checking for .py files
in the version directory to allow any .py file through.
Previously it was doing some kind of defensive checking,
probably from some early notions of how this directory
works, that was prohibiting various filename patterns
such as those which begin with numbers.
.. change::
:tags: bug
:tickets:
Fixed MySQL rendering for server_default which
didn't work if the server_default was a generated
SQL expression. Courtesy Moriyoshi Koizumi.
.. change::
:tags: feature
:tickets:
Added support for alteration of MySQL
columns that have AUTO_INCREMENT, as well as enabling
this flag. Courtesy Moriyoshi Koizumi.
.. changelog::
:version: 0.3.6
:released: Wed Aug 15 2012
.. change::
:tags: feature
:tickets: 27
Added include_symbol option to
EnvironmentContext.configure(),
specifies a callable which will include/exclude tables
in their entirety from the autogeneration process
based on name.
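A minimal sketch of such a callable in an ``env.py`` (the filtering rule
is illustrative)::

    from alembic import context

    def include_symbol(tablename, schema):
        return not tablename.startswith("tmp_")

    context.configure(
        # ...
        include_symbol=include_symbol,
    )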
.. change::
:tags: feature
:tickets: 59
Added year, month, day, hour, minute, second
variables to file_template.
.. change::
:tags: feature
:tickets:
Added 'primary' to the list of constraint types
recognized for MySQL drop_constraint().
.. change::
:tags: feature
:tickets:
Added --sql argument to the "revision" command,
for the use case where the "revision_environment"
config option is being used but SQL access isn't
desired.
.. change::
:tags: bug
:tickets:
Repaired create_foreign_key() for
self-referential foreign keys, which weren't working
at all.
.. change::
:tags: bug
:tickets: 63
'alembic' command reports an informative
error message when the configuration is missing
the 'script_directory' key.
.. change::
:tags: bug
:tickets: 62
Fixes made to the constraints created/dropped
alongside so-called "schema" types such as
Boolean and Enum. The create/drop constraint logic
does not kick in when using a dialect that doesn't
use constraints for these types, such as postgresql,
even when existing_type is specified to
alter_column(). Additionally, the constraints
are not affected if existing_type is passed but
type\_ is not, i.e. there's no net change
in type.
.. change::
:tags: bug
:tickets: 66
Improved error message when specifying
non-ordered revision identifiers to cover
the case when the "higher" rev is None,
improved message overall.
.. changelog::
:version: 0.3.5
:released: Sun Jul 08 2012
.. change::
:tags: bug
:tickets: 31
Fixed issue whereby reflected server defaults
wouldn't be quoted correctly; uses repr() now.
.. change::
:tags: bug
:tickets: 58
Fixed issue whereby when autogenerate would
render create_table() on the upgrade side for a
table that has a Boolean type, an unnecessary
CheckConstraint() would be generated.
.. change::
:tags: feature
:tickets:
Implemented SQL rendering for
CheckConstraint() within autogenerate upgrade,
including for literal SQL as well as SQL Expression
Language expressions.
.. changelog::
:version: 0.3.4
:released: Sat Jun 02 2012
.. change::
:tags: bug
:tickets:
Fixed command-line bug introduced by the
"revision_environment" feature.
.. changelog::
:version: 0.3.3
:released: Sat Jun 02 2012
.. change::
:tags: feature
:tickets:
New config argument
"revision_environment=true", causes env.py to
be run unconditionally when the "revision" command
is run, to support script.py.mako templates with
dependencies on custom "template_args".
.. change::
:tags: feature
:tickets:
Added "template_args" option to configure()
so that an env.py can add additional arguments
to the template context when running the
"revision" command. This requires either --autogenerate
or the configuration directive "revision_environment=true".
.. change::
:tags: bug
:tickets: 44
Added "type" argument to op.drop_constraint(),
and implemented full constraint drop support for
MySQL. CHECK and undefined raise an error.
MySQL needs the constraint type
in order to emit a DROP CONSTRAINT.
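A sketch using the present-day keyword spelling ``type_`` (the argument
was originally named ``type``; see the 0.5.0 notes above; names are
illustrative)::

    from alembic import op

    def upgrade():
        op.drop_constraint("uq_account_name", "account", type_="unique")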
.. change::
:tags: feature
:tickets: 34
Added version_table argument to
EnvironmentContext.configure(), allowing for the
configuration of the version table name.
.. change::
:tags: feature
:tickets:
Added support for "relative" migration
identifiers, i.e. "alembic upgrade +2",
"alembic downgrade -1". Courtesy
Atsushi Odagiri for this feature.
.. change::
:tags: bug
:tickets: 49
Fixed bug whereby directories inside of
the template directories, such as __pycache__
on Pypy, would mistakenly be interpreted as
files which are part of the template.
.. changelog::
:version: 0.3.2
:released: Mon Apr 30 2012
.. change::
:tags: feature
:tickets: 40
Basic support for Oracle added,
courtesy shgoh.
.. change::
:tags: feature
:tickets:
Added support for UniqueConstraint
in autogenerate, courtesy Atsushi Odagiri
.. change::
:tags: bug
:tickets:
Fixed support of schema-qualified
ForeignKey target in column alter operations,
courtesy Alexander Kolov.
.. change::
:tags: bug
:tickets:
Fixed bug whereby create_unique_constraint()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
.. changelog::
:version: 0.3.1
:released: Sat Apr 07 2012
.. change::
:tags: bug
:tickets: 41
bulk_insert() fixes:
1. The bulk_insert() operation had likely not been
working since the 0.2 series
when used with an engine.
2. Repaired bulk_insert() to complete when
used against a lower-case-t table and executing
with only one set of parameters, working
around SQLAlchemy bug #2461 in this regard.
3. bulk_insert() uses "inline=True" so that phrases
like RETURNING and such don't get invoked for
single-row bulk inserts.
4. bulk_insert() will check that you're passing
a list of dictionaries in, raises TypeError
if not detected.
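A sketch of the expected calling style, inside a migration's
``upgrade()`` (the table and rows are illustrative)::

    import sqlalchemy as sa
    from alembic import op

    accounts = sa.table(
        "accounts",
        sa.column("id", sa.Integer),
        sa.column("name", sa.String),
    )

    def upgrade():
        op.bulk_insert(
            accounts,
            [
                {"id": 1, "name": "alice"},
                {"id": 2, "name": "bob"},
            ],
        )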
.. changelog::
:version: 0.3.0
:released: Thu Apr 05 2012
.. change::
:tags: general
:tickets:
The focus of 0.3 is to clean up
and more fully document the public API of Alembic,
including better accessors on the MigrationContext
and ScriptDirectory objects. Methods that are
not considered to be public on these objects have
been underscored, and methods which should be public
have been cleaned up and documented, including:
MigrationContext.get_current_revision()
ScriptDirectory.iterate_revisions()
ScriptDirectory.get_current_head()
ScriptDirectory.get_heads()
ScriptDirectory.get_base()
ScriptDirectory.generate_revision()
.. change::
:tags: feature
:tickets:
Added a bit of autogenerate to the
public API in the form of the function
alembic.autogenerate.compare_metadata.
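A sketch of the new function in use (the database URL is illustrative)::

    import sqlalchemy as sa
    from alembic.migration import MigrationContext
    from alembic.autogenerate import compare_metadata

    engine = sa.create_engine("sqlite://")
    metadata = sa.MetaData()  # the application's target metadata

    with engine.connect() as conn:
        mc = MigrationContext.configure(conn)
        diffs = compare_metadata(mc, metadata)  # list of diff tuples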
.. changelog::
:version: 0.2.2
:released: Mon Mar 12 2012
.. change::
:tags: feature
:tickets:
Informative error message when op.XYZ
directives are invoked at module import time.
.. change::
:tags: bug
:tickets: 35
Fixed inappropriate direct call to
util.err() and therefore sys.exit()
when Config failed to locate the
config file within library usage.
.. change::
:tags: bug
:tickets:
Autogenerate will emit CREATE TABLE
and DROP TABLE directives according to
foreign key dependency order.
.. change::
:tags: bug
:tickets:
implement 'tablename' parameter on
drop_index() as this is needed by some
backends.
.. change::
:tags: feature
:tickets:
Added execution_options parameter
to op.execute(), will call execution_options()
on the Connection before executing.
The immediate use case here is to allow
access to the new no_parameters option
in SQLAlchemy 0.7.6, which allows
some DBAPIs (psycopg2, MySQLdb) to allow
percent signs straight through without
escaping, thus providing cross-compatible
operation with DBAPI execution and
static script generation.
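A sketch of the new parameter in use (the statement text is
illustrative)::

    from alembic import op

    def upgrade():
        op.execute(
            "UPDATE account SET discount = '10%'",
            execution_options={"no_parameters": True},
        )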
.. change::
:tags: bug
:tickets:
setup.py won't install argparse if on
Python 2.7/3.2
.. change::
:tags: feature
:tickets: 29
script_location can be interpreted
by pkg_resources.resource_filename(), if
it is a non-absolute URI that contains
colons. This scheme is the same
one used by Pyramid.
.. change::
:tags: feature
:tickets:
added missing support for
onupdate/ondelete flags for
ForeignKeyConstraint, courtesy Giacomo Bagnoli
.. change::
:tags: bug
:tickets: 30
fixed a regression regarding an autogenerate
error message, as well as various glitches
in the Pylons sample template. The Pylons sample
template requires that you tell it where to
get the Engine from now. courtesy
Marcin Kuzminski
.. change::
:tags: bug
:tickets:
drop_index() ensures a dummy column
is added when it calls "Index", as SQLAlchemy
0.7.6 will warn on index with no column names.
.. changelog::
:version: 0.2.1
:released: Tue Jan 31 2012
.. change::
:tags: bug
:tickets: 26
Fixed the generation of CHECK constraint,
regression from 0.2.0
.. changelog::
:version: 0.2.0
:released: Mon Jan 30 2012
.. change::
:tags: feature
:tickets: 19
API rearrangement allows everything
Alembic does to be represented by contextual
objects, including EnvironmentContext,
MigrationContext, and Operations. Other
libraries and applications can now use
things like "alembic.op" without relying
upon global configuration variables.
The rearrangement was done such that
existing migrations should be OK,
as long as they use the pattern
of "from alembic import context" and
"from alembic import op", as these
are now contextual objects, not modules.
.. change::
:tags: feature
:tickets: 24
The naming of revision files can
now be customized to be some combination
of "rev id" and "slug", the latter of which
is based on the revision message.
By default, the pattern "<rev>_<slug>"
is used for new files. New script files
should include the "revision" variable
for this to work, which is part of
the newer script.py.mako scripts.
.. change::
:tags: bug
:tickets: 25
env.py templates call
connection.close() to better support
programmatic usage of commands; use
NullPool in conjunction with create_engine()
as well so that no connection resources
remain afterwards.
.. change::
:tags: bug
:tickets: 22
fix the config.main() function to honor
the arguments passed, remove no longer used
"scripts/alembic" as setuptools creates this
for us.
.. change::
:tags: bug
:tickets:
Fixed alteration of column type on
MSSQL to not include the keyword "TYPE".
.. change::
:tags: feature
:tickets: 23
Can create alembic.config.Config
with no filename, use set_main_option()
to add values. Also added set_section_option()
which will add sections.
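A sketch of building a configuration entirely in code (the option
values are illustrative)::

    from alembic.config import Config

    cfg = Config()  # no .ini file required
    cfg.set_main_option("script_location", "myapp:migrations")
    cfg.set_section_option("logger_alembic", "level", "WARN")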
.. changelog::
:version: 0.1.1
:released: Wed Jan 04 2012
.. change::
:tags: bug
:tickets:
Clean up file write operations so that
file handles are closed.
.. change::
:tags: feature
:tickets:
PyPy is supported.
.. change::
:tags: feature
:tickets:
Python 2.5 is supported, needs
__future__.with_statement
.. change::
:tags: bug
:tickets:
Fix autogenerate so that "pass" is
generated between the two comments
if no net migrations were present.
.. change::
:tags: bug
:tickets: 16
Fix autogenerate bug that prevented
correct reflection of a foreign-key
referenced table in the list of "to remove".
.. change::
:tags: bug
:tickets: 17
Fix bug where create_table() didn't
handle self-referential foreign key
correctly
.. change::
:tags: bug
:tickets: 18
Default prefix for autogenerate
directives is "op.", matching the
mako templates.
.. change::
:tags: feature
:tickets: 18
Add alembic_module_prefix argument
to configure() to complement
sqlalchemy_module_prefix.
.. change::
:tags: bug
:tickets: 14
fix quotes not being rendered in
ForeignKeyConstraint during
autogenerate
.. changelog::
:version: 0.1.0
:released: Wed Nov 30 2011
.. change::
:tags:
:tickets:
Initial release. Status of features:
.. change::
:tags:
:tickets:
Alembic is used in at least one production
environment, but should still be considered
ALPHA LEVEL SOFTWARE as of this release,
particularly in that many features are expected
to be missing / unimplemented. Major API
changes are not anticipated but for the moment
nothing should be assumed.
The author asks that you *please* report all
issues, missing features, workarounds etc.
to the bugtracker.
.. change::
:tags:
:tickets:
Python 3 is supported and has been tested.
.. change::
:tags:
:tickets:
The "Pylons" and "MultiDB" environment templates
have not been directly tested - these should be
considered to be samples to be modified as
needed. Multiple database support itself
is well tested, however.
.. change::
:tags:
:tickets:
Postgresql and MS SQL Server environments
have been tested for several weeks in a production
environment. In particular, some involved workarounds
were implemented to allow fully-automated dropping
of default- or constraint-holding columns with
SQL Server.
.. change::
:tags:
:tickets:
MySQL support has also been implemented to a
basic degree, including MySQL's awkward style
of modifying columns being accommodated.
.. change::
:tags:
:tickets:
Other database environments not included among
those three have *not* been tested, *at all*. This
includes Firebird, Oracle, Sybase. Adding
support for these backends should be
straightforward. Please report all missing/
incorrect behaviors to the bugtracker! Patches
are welcome here but are optional - please just
indicate the exact format expected by the target
database.
.. change::
:tags:
:tickets:
SQLite, as a backend, has almost no support for
schema alterations to existing databases. The author
would strongly recommend that SQLite not be used in
a migration context - just dump your SQLite database
into an intermediary format, then dump it back
into a new schema. For dev environments, the
dev installer should be building the whole DB from
scratch. Or just use Postgresql, which is a much
better database for non-trivial schemas.
Requests for full ALTER support on SQLite should be
reported to SQLite's bug tracker at
http://www.sqlite.org/src/wiki?name=Bug+Reports,
as Alembic will not be implementing the
"rename the table to a temptable then copy the
data into a new table" workaround.
Note that Alembic will at some point offer an
extensible API so that you can implement commands
like this yourself.
.. change::
:tags:
:tickets:
Well-tested directives include add/drop table, add/drop
column, including support for SQLAlchemy "schema"
types which generate additional CHECK
constraints, i.e. Boolean, Enum. Other directives not
included here have *not* been strongly tested
in production, i.e. rename table, etc.
.. change::
:tags:
:tickets:
Both "online" and "offline" migrations, the latter
being generated SQL scripts to hand off to a DBA,
have been strongly production tested against
Postgresql and SQL Server.
.. change::
:tags:
:tickets:
Modify column type, default status, nullable, is
functional and tested across PG, MSSQL, MySQL,
but not yet widely tested in production usage.
.. change::
:tags:
:tickets:
Many migrations are still outright missing, i.e.
create/add sequences, etc. As a workaround,
execute() can be used for those which are missing,
though posting of tickets for new features/missing
behaviors is strongly encouraged.
.. change::
:tags:
:tickets:
Autogenerate feature is implemented and has been
tested, though only a little bit in a production setting.
In particular, detection of type and server
default changes are optional and are off by default;
they can also be customized by a callable.
Both features work but can have surprises, particularly
the disparity between BIT/TINYINT and boolean,
which hasn't yet been worked around, as well as
format changes performed by the database on defaults
when it reports back. When enabled, the PG dialect
will execute the two defaults to be compared to
see if they are equivalent. Other backends may
need to do the same thing.
The autogenerate feature only generates
"candidate" commands which must be hand-tailored
in any case, so is still a useful feature and
is safe to use. Please report missing/broken features
of autogenerate! This will be a great feature and
will also improve SQLAlchemy's reflection services.
.. change::
:tags:
:tickets:
Support for non-ASCII table, column and constraint
names is mostly nonexistent. This is also a
straightforward feature add as SQLAlchemy itself
supports unicode identifiers; Alembic itself will
likely need fixes to logging, column identification
by key, etc. for full support here.
| jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | Should be ok | CaselIT | 3 |
sqlalchemy/alembic | 1,310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
<!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
The misspellings have been reported at https://github.com/jsoref/alembic/actions/runs/6141700632
The action reports that the changes in this PR would make it happy: https://github.com/jsoref/alembic/actions/runs/6141700754
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | docs/build/changelog.rst |
==========
Changelog
==========
.. changelog::
:version: 1.12.1
:include_notes_from: unreleased
.. changelog::
:version: 1.12.0
:released: August 31, 2023
.. change::
:tags: bug, operations
:tickets: 1300
Added support for ``op.drop_constraint()`` to support PostgreSQL
``ExcludeConstraint`` objects, as well as other constraint-like objects
that may be present in third party dialects, by resolving the ``type_``
parameter to be ``None`` for this case. Autogenerate has also been
enhanced to exclude the ``type_`` parameter from rendering within this
command when ``type_`` is ``None``. Pull request courtesy David Hills.
.. change::
:tags: bug, commands
:tickets: 1299
Fixed issue where the ``revision_environment`` directive in ``alembic.ini``
was ignored by the ``alembic merge`` command, leading to issues when other
configurational elements depend upon ``env.py`` being invoked within the
command.
.. change::
:tags: bug, autogenerate
:tickets: 1302
Fixed issue where the ``ForeignKeyConstraint.match`` parameter would not be
rendered in autogenerated migrations. Pull request courtesy Asib
Kamalsada.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Change the default value of
:paramref:`.EnvironmentContext.configure.compare_type` to ``True``.
As Alembic's autogenerate for types was dramatically improved in
version 1.4, released in 2020, the type comparison feature is now much
more reliable and is enabled by default.
.. change::
:tags: feature, autogenerate
:tickets: 1275
Added new feature to the "code formatter" function which allows standalone
executable tools to be run against code, without going through the Python
interpreter. Known as the ``exec`` runner, it complements the existing
``console_scripts`` runner by allowing non-Python tools such as ``ruff`` to
be used. Pull request courtesy Mihail Milushev.
.. seealso::
:ref:`post_write_hooks_config`
.. changelog::
:version: 1.11.3
:released: August 16, 2023
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 1270
Improved autogenerate compare of expression based indexes on PostgreSQL
to produce fewer wrong detections.
.. change::
:tags: bug, autogenerate
:tickets: 1291
Fixed issue with ``NULLS NOT DISTINCT`` detection in postgresql that
would keep detecting changes in the index or unique constraint.
.. change::
:tags: bug, commands
:tickets: 1273
Added ``encoding="locale"`` setting to the use of Python's
``ConfigParser.read()``, so that a warning is not generated when using the
recently added Python feature ``PYTHONWARNDEFAULTENCODING`` specified in
:pep:`597`. The encoding is passed as the ``"locale"`` string under Python
3.10 and greater, which indicates that the system-level locale should be
used, as was the case already here. Pull request courtesy Kevin Kirsche.
.. changelog::
:version: 1.11.2
:released: August 4, 2023
.. change::
:tags: usecase, typing
:tickets: 1253
Added typing to the default script mako templates.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Added support in autogenerate for ``NULLS NOT DISTINCT`` in
the PostgreSQL dialect.
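A sketch of a model-side declaration that autogenerate can now detect,
assuming SQLAlchemy 2.0 and PostgreSQL 15 or greater (the names are
illustrative)::

    import sqlalchemy as sa

    metadata = sa.MetaData()
    account = sa.Table(
        "account",
        metadata,
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("email", sa.String(100)),
    )

    # NULL values are treated as not-distinct in the unique index
    sa.Index(
        "uq_account_email",
        account.c.email,
        unique=True,
        postgresql_nulls_not_distinct=True,
    )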
.. change::
:tags: bug
:tickets: 1261
Fixed the format string logged when running a post write hook.
Pull request courtesy of Mathieu Défosse.
.. change::
:tags: feature, operations
:tickets: 151
Added parameters if_exists and if_not_exists for index operations.
Pull request courtesy of Max Adrian.
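A sketch of the new parameters in use (the names are illustrative)::

    from alembic import op

    def upgrade():
        op.create_index(
            "ix_account_name", "account", ["name"], if_not_exists=True
        )

    def downgrade():
        op.drop_index(
            "ix_account_name", table_name="account", if_exists=True
        )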
.. changelog::
:version: 1.11.1
:released: May 17, 2023
.. change::
:tags: bug, autogenerate, regression
:tickets: 1243, 1245
As Alembic 1.11.0 is considered a major release (Alembic does not use
semver, nor does its parent project SQLAlchemy; this has been
:ref:`clarified <versioning_scheme>` in the documentation), change
:ticket:`1130` modified calling signatures for most operations to consider
all optional keyword parameters to be keyword-only arguments, to match what
was always documented and generated by autogenerate. However, two of these
changes were identified as possibly problematic without a more formal
deprecation warning being emitted which were the ``table_name`` parameter
to :meth:`.Operations.drop_index`, which was generated positionally by
autogenerate prior to version 0.6.3 released in 2014, and ``type_`` in
:meth:`.Operations.drop_constraint` and
:meth:`.BatchOperations.drop_constraint`, which was documented positionally
in one example in the batch documentation.
These two signatures have been
restored to allow those particular parameters to be passed positionally. A
future change will include formal deprecation paths (with warnings) for
these arguments where they will again become keyword-only in a future
"Significant Minor" release.
.. change::
:tags: bug, typing
:tickets: 1246
Fixed typing use of :class:`~sqlalchemy.schema.Column` and other
generic SQLAlchemy classes.
.. change::
:tags: bug, typing, regression
:tickets: 1244
Restored the output type of :meth:`.Config.get_section` to include
``Dict[str, str]`` as a potential return type, which had been changed to
immutable ``Mapping[str, str]``. When a section is returned and the default
is not used, a mutable dictionary is returned.
.. changelog::
:version: 1.11.0
:released: May 15, 2023
.. change::
:tags: bug, batch
:tickets: 1237
Added placeholder classes for :class:`~.sqla.Computed` and
:class:`~.sqla.Identity` when older 1.x SQLAlchemy versions are in use,
namely prior to SQLAlchemy 1.3.11 when the :class:`~.sqla.Computed`
construct was introduced. Previously these were set to None, however this
could cause issues with certain codepaths that were using ``isinstance()``
such as one within "batch mode".
.. change::
:tags: bug, batch
:tickets: 1221
Correctly pass previously ignored arguments ``insert_before`` and
``insert_after`` in ``batch_alter_column``
.. change::
:tags: change, py3k
:tickets: 1130
Argument signatures of Alembic operations now enforce keyword-only
arguments as passed as keyword and not positionally, such as
:paramref:`.Operations.create_table.schema`,
:paramref:`.Operations.add_column.type_`, etc.
.. change::
:tags: bug, postgresql
:tickets: 1230
Fix autogenerate issue with PostgreSQL :class:`.ExcludeConstraint`
that included sqlalchemy functions. The function text was previously
rendered as a plain string without surrounding with ``text()``.
.. change::
:tags: bug, mysql, regression
:tickets: 1240
Fixed regression caused by :ticket:`1166` released in version 1.10.0 which
caused MySQL unique constraints with multiple columns to not compare
correctly within autogenerate, due to different sorting rules on unique
constraints vs. indexes, which in MySQL are shared constructs.
.. change::
:tags: misc
:tickets: 1220
Update code snippets within docstrings to use ``black`` code formatting.
Pull request courtesy of James Addison.
.. change::
:tags: bug, typing
:tickets: 1093
Updated stub generator script to also add stubs method definitions for the
:class:`.Operations` class and the :class:`.BatchOperations` class obtained
from :meth:`.Operations.batch_alter_table`. As part of this change, the
class hierarchy of :class:`.Operations` and :class:`.BatchOperations` has
been rearranged on top of a common base class :class:`.AbstractOperations`
in order to type correctly, as :class:`.BatchOperations` uses different
method signatures for operations than :class:`.Operations`.
.. change::
:tags: bug, typing
Repaired the return signatures for :class:`.Operations` that mostly
return ``None``, and were erroneously referring to ``Optional[Table]``
in many cases.
.. change::
:tags: usecase, commands
:tickets: 1109
Added quiet option to the command line, using the ``-q/--quiet``
option. This flag will prevent alembic from logging anything
to stdout.
.. change::
:tags: bug, autogenerate
:tickets: 1178
Modified the autogenerate implementation for comparing "server default"
values from user-defined metadata to not apply any quoting to the value
before comparing it to the server-reported default, except for within
dialect-specific routines as needed. This change will affect the format of
the server default as passed to the
:paramref:`.EnvironmentContext.configure.compare_server_default` hook, as
well as for third party dialects that implement a custom
``compare_server_default`` hook in their alembic impl, to be passed "as is"
and not including additional quoting. Custom implementations which rely
on this quoting should adjust their approach based on observed formatting.
.. change::
:tags: bug, api, autogenerate
:tickets: 1235
Fixed issue where :func:`.autogenerate.render_python_code` function did not
provide a default value for the ``user_module_prefix`` variable, leading to
``NoneType`` errors when autogenerate structures included user-defined
types. Added new parameter
:paramref:`.autogenerate.render_python_code.user_module_prefix` to allow
this to be set as well as to default to ``None``. Pull request courtesy
tangkikodo.
.. change::
:tags: usecase, asyncio
:tickets: 1231
Added :meth:`.AbstractOperations.run_async` to the operation module to
allow running async functions in the ``upgrade`` or ``downgrade`` migration
function when running alembic using an async dialect. This function will
receive as first argument an
:class:`~sqlalchemy.ext.asyncio.AsyncConnection` sharing the transaction
used in the migration context.
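A sketch of the new method in use within a migration (the statement is
illustrative)::

    import sqlalchemy as sa
    from alembic import op

    async def seed_data(connection):
        # "connection" is an AsyncConnection sharing the migration
        # transaction
        await connection.execute(
            sa.text("UPDATE account SET status = 'active'")
        )

    def upgrade():
        op.run_async(seed_data)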
.. changelog::
:version: 1.10.4
:released: April 24, 2023
.. change::
:tags: postgresql, autogenerate, feature
:tickets: 1213
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL sort option, such as ``ASC`` or ``NULLS FIRST``.
The sort options are correctly detected only when defined using the
sqlalchemy modifier functions, such as ``asc()`` or ``nulls_first()``,
or the equivalent methods.
Passing sort options inside the ``postgresql_ops`` dict is not supported.
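A sketch of a declaration that the comparison can now detect (the names
are illustrative)::

    import sqlalchemy as sa

    metadata = sa.MetaData()
    account = sa.Table(
        "account",
        metadata,
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("created_at", sa.DateTime),
    )

    # sort options expressed via the modifier functions are detected
    sa.Index(
        "ix_account_created",
        sa.nulls_first(sa.desc(account.c.created_at)),
    )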
.. change::
:tags: bug, operations
:tickets: 1215
Fixed issue where using a directive such as ``op.create_foreign_key()`` to
create a self-referential constraint on a single table where the same
column were present on both sides (e.g. within a composite foreign key)
would produce an error under SQLAlchemy 2.0 and a warning under SQLAlchemy
1.4 indicating that a duplicate column were being added to a table.
.. changelog::
:version: 1.10.3
:released: April 5, 2023
.. change::
:tags: bug, typing
:tickets: 1191, 1201
Fixed various typing issues observed with pyright, including issues
involving the combination of :class:`.Function` and
:meth:`.MigrationContext.begin_transaction`.
.. change::
:tags: bug, autogenerate
:tickets: 1212
Fixed error raised by alembic when running autogenerate after removing
a function based index.
.. changelog::
:version: 1.10.2
:released: March 8, 2023
.. change::
:tags: bug, ops
:tickets: 1196
Fixed regression where Alembic would not run with older SQLAlchemy 1.3
versions prior to 1.3.24 due to a missing symbol. Workarounds have been
applied for older 1.3 versions.
.. changelog::
:version: 1.10.1
:released: March 6, 2023
.. change::
:tags: bug, postgresql
:tickets: 1184
Fixed issue regarding PostgreSQL :class:`.ExcludeConstraint`, where
constraint elements which made use of :func:`.literal_column` could not be
rendered for autogenerate. Additionally, using SQLAlchemy 2.0.5 or greater,
:func:`.text()` constructs are also supported within PostgreSQL
:class:`.ExcludeConstraint` objects for autogenerate render. Pull request
courtesy Jan Katins.
.. change::
:tags: bug, batch, regression
:tickets: 1195
Fixed regression for 1.10.0 where :class:`.Constraint` objects were
suddenly required to have non-None name fields when using batch mode, which
was not previously a requirement.
.. changelog::
:version: 1.10.0
:released: March 5, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1166
Fixed issue in index detection where autogenerate change detection would
consider indexes with the same columns but with different order as equal,
while in general they are not equivalent in how a database will use them.
.. change::
:tags: feature, revisioning
:tickets: 760
Recursive traversal of revision files in a particular revision directory is
now supported, by indicating ``recursive_version_locations = true`` in
alembic.ini. Pull request courtesy ostr00000.
.. change::
:tags: bug, autogenerate, sqlite
:tickets: 1165
Fixed issue where indexes on SQLite which include SQL expressions would not
compare correctly, generating false positives under autogenerate. These
indexes are now skipped, generating a warning, in the same way that
expression-based indexes on PostgreSQL are skipped and generate warnings
when SQLAlchemy 1.x installations are in use. Note that reflection of
SQLite expression-based indexes continues to not yet be supported under
SQLAlchemy 2.0, even though PostgreSQL expression-based indexes have now
been implemented.
.. change::
:tags: bug, mssql
:tickets: 1187
Properly escape constraint name on SQL Server when dropping
a column while specifying ``mssql_drop_default=True`` or
``mssql_drop_check=True`` or ``mssql_drop_foreign_key=True``.
.. change::
:tags: usecase, autogenerate, postgresql
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL expressions, when using SQLAlchemy 2.0; the previous warning
that such indexes were skipped are removed when the new functionality
is in use. When using SQLAlchemy versions prior to the 2.0 series,
the indexes continue to be skipped with a warning.
.. changelog::
:version: 1.9.4
:released: February 16, 2023
.. change::
:tags: bug, mssql
:tickets: 1177
Ongoing fixes for SQL Server server default comparisons under autogenerate,
adjusting for SQL Server's collapsing of whitespace between SQL function
arguments when reporting on a function-based server default, as well as its
arbitrary addition of parenthesis within arguments; the approach has now
been made more aggressive by stripping the two default strings to compare
of all whitespace, parenthesis, and quoting characters.
.. change::
:tags: bug, postgresql
Fixed PostgreSQL server default comparison to handle SQL expressions
sent as ``text()`` constructs, such as ``text("substring('name', 1, 3)")``,
which previously would raise errors when attempting to run a server-based
comparison.
.. change::
:tags: bug, autogenerate
:tickets: 1180
Removed a mis-use of the
:paramref:`.EnvironmentContext.configure.render_item` callable where the
"server_default" renderer would be erroneously used within the server
default comparison process, which is working against SQL expressions, not
Python code.
.. change::
:tags: bug, commands
Fixed regression introduced in 1.7.0 where the "config" object passed to
the template context when running the :func:`.merge` command
programmatically failed to be correctly populated. Pull request courtesy
Brendan Gann.
.. changelog::
:version: 1.9.3
:released: February 7, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1167
Fixed issue where rendering of user-defined types that then went onto use
the ``.with_variant()`` method would fail to render, if using SQLAlchemy
2.0's version of variants.
.. changelog::
:version: 1.9.2
:released: January 14, 2023
.. change::
:tags: bug, typing
:tickets: 1146, 1147
Fixed typing definitions for :meth:`.EnvironmentContext.get_x_argument`.
Typing stubs are now generated for overloaded proxied methods such as
:meth:`.EnvironmentContext.get_x_argument`.
.. change::
:tags: bug, autogenerate
:tickets: 1152
Fixed regression caused by :ticket:`1145` where the string transformations
applied to server defaults caused expressions such as ``(getdate())`` to no
longer compare as equivalent on SQL Server, others.
.. changelog::
:version: 1.9.1
:released: December 23, 2022
.. change::
:tags: bug, autogenerate
:tickets: 1145
Fixed issue where server default compare would not work for string defaults
that contained backslashes, due to mis-rendering of these values when
comparing their contents.
.. change::
:tags: bug, oracle
Implemented basic server default comparison for the Oracle backend;
previously, Oracle's formatting of reflected defaults prevented any
matches from occurring.
.. change::
:tags: bug, sqlite
Adjusted SQLite's compare server default implementation to better handle
defaults with or without parens around them, from both the reflected and
the local metadata side.
.. change::
:tags: bug, mssql
Adjusted SQL Server's compare server default implementation to better
handle defaults with or without parens around them, from both the reflected
and the local metadata side.
.. changelog::
:version: 1.9.0
:released: December 15, 2022
.. change::
:tags: feature, commands
:tickets: 724
Added new Alembic command ``alembic check``. This performs the widely
requested feature of running an "autogenerate" comparison between the
current database and the :class:`.MetaData` that's currently set up for
autogenerate, returning an error code if the two do not match, based on
current autogenerate settings. Pull request courtesy Nathan Louie.
.. seealso::
:ref:`alembic_check`
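The command can also be invoked programmatically; a sketch (the config
path is illustrative)::

    from alembic import command, util
    from alembic.config import Config

    cfg = Config("alembic.ini")
    try:
        command.check(cfg)
    except util.AutogenerateDiffsDetected as err:
        print(f"pending model changes: {err}")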
.. change::
:tags: bug, tests
Fixed issue in tox.ini file where changes in the tox 4.0 series to the
format of "passenv" caused tox to not function correctly, in particular
raising an error as of tox 4.0.6.
.. change::
:tags: bug, typing
:tickets: 1110
Fixed typing issue where :paramref:`.revision.process_revision_directives`
was not fully typed; additionally ensured all ``Callable`` and ``Dict``
arguments to :meth:`.EnvironmentContext.configure` include parameters in
the typing declaration.
Additionally updated the codebase for Mypy 0.990 compliance.
.. changelog::
:version: 1.8.1
:released: July 13, 2022
.. change::
:tags: bug, sqlite
:tickets: 1065
Fixed bug where the SQLite implementation of
:meth:`.Operations.rename_table` would render an explicit schema name for
both the old and new table name, which while is the standard ALTER syntax,
is not accepted by SQLite's syntax which doesn't support a rename across
schemas. In particular, the syntax issue would prevent batch mode from
working for SQLite databases that made use of attached databases (which are
treated as "schemas" in SQLAlchemy).
.. change::
:tags: bug, batch
:tickets: 1021
Added an error raise for the condition where
:meth:`.Operations.batch_alter_table` is used in ``--sql`` mode, where the
operation requires table reflection, as is the case when running against
SQLite without giving it a fixed ``Table`` object. Previously the operation
would fail with an internal error. To get a "move and copy" batch
operation as a SQL script without connecting to a database,
a ``Table`` object should be passed to the
:paramref:`.Operations.batch_alter_table.copy_from` parameter so that
reflection may be skipped.
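A sketch of the "offline-safe" calling style (the table definition is
illustrative)::

    import sqlalchemy as sa
    from alembic import op

    referenced = sa.Table(
        "account",
        sa.MetaData(),
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )

    def upgrade():
        with op.batch_alter_table(
            "account", copy_from=referenced
        ) as batch_op:
            batch_op.add_column(sa.Column("status", sa.String(20)))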
.. changelog::
:version: 1.8.0
:released: May 31, 2022
.. change::
:tags: feature, typing
:tickets: 764
:pep:`484` typing annotations have been added to the ``env.py`` and
revision template files within migration templates. Pull request by Nikita
Sobolev.
.. change::
:tags: usecase, operations
:tickets: 1037
The ``op.drop_table()`` operation directive will now trigger the
``before_drop()`` and ``after_drop()`` DDL event hooks at the table level,
which is similar to how the ``before_create()`` and ``after_create()``
hooks are triggered by the ``op.create_table()`` directive. Note that as
``op.drop_table()`` accepts only a table name and optional schema name, the
``Table`` object received by the event will not have any information within
it other than the table name and schema name.
.. change::
:tags: installation, changed
:tickets: 1025
Alembic 1.8 now supports Python 3.7 and above.
.. change::
:tags: changed, environment
:tickets: 987
The "Pylons" environment template has been removed as of Alembic 1.8. This
template was based on the very old pre-Pyramid Pylons web framework which
has been long superseded by Pyramid.
.. change::
:tags: bug, revisioning
:tickets: 1026
Fixed issue where a downgrade using a relative revision would
fail in case of multiple branches with a single effectively
head due to interdependencies between revisions.
.. change::
:tags: usecase, commands
:tickets: 1027
Added new token ``epoch`` to the ``file_template`` option, which will
populate the integer epoch as determined by ``int(create_date.timestamp())``.
Pull request courtesy Caio Carvalho.
.. change::
:tags: bug, batch
:tickets: 1034
Fixed issue in batch mode where CREATE INDEX would not use a new column
name in the case of a column rename.
.. changelog::
:version: 1.7.7
:released: March 14, 2022
.. change::
:tags: bug, operations
:tickets: 1004
Fixed issue where using :meth:`.Operations.create_table` in conjunction
with a :class:`.CheckConstraint` that referred to table-bound
:class:`.Column` objects rather than string expressions would be added to
the parent table potentially multiple times, resulting in an incorrect DDL
sequence. Pull request courtesy Nicolas CANIART.
.. change::
:tags: bug, environment
:tickets: 986
The ``logging.fileConfig()`` line in ``env.py`` templates, which is used
to setup Python logging for the migration run, is now conditional on
:attr:`.Config.config_file_name` not being ``None``. Otherwise, the line
is skipped as there is no default logging configuration present.
.. change::
:tags: bug, mssql
:tickets: 977
Fixed bug where an :meth:`.Operations.alter_column` operation would change
a "NOT NULL" column to "NULL" by emitting an ALTER COLUMN statement that
did not specify "NOT NULL". (In the absence of "NOT NULL" T-SQL was
implicitly assuming "NULL"). An :meth:`.Operations.alter_column` operation
that specifies :paramref:`.Operations.alter_column.type` should also
include either :paramref:`.Operations.alter_column.nullable` or
:paramref:`.Operations.alter_column.existing_nullable` to inform Alembic as
to whether the emitted DDL should include "NULL" or "NOT NULL"; a warning
is now emitted if this is missing under this scenario.
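A sketch of a directive that satisfies the new check (the names are
illustrative)::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.alter_column(
            "account",
            "name",
            type_=sa.String(100),
            existing_nullable=False,  # render NOT NULL in the ALTER
        )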
.. changelog::
:version: 1.7.6
:released: February 1, 2022
.. change::
:tags: bug, batch, regression
:tickets: 982
Fixed regression where usage of a ``with_variant()`` datatype in
conjunction with the ``existing_type`` option of ``op.alter_column()``
under batch mode would lead to an internal exception.
.. change::
:tags: usecase, commands
:tickets: 964
Add a new command ``alembic ensure_version``, which will ensure that the
Alembic version table is present in the target database, but does not
alter its contents. Pull request courtesy Kai Mueller.
.. change::
:tags: bug, autogenerate
Implemented support for recognizing and rendering SQLAlchemy "variant"
types going forward into SQLAlchemy 2.0, where the architecture of
"variant" datatypes will be changing.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 968
Added a rule to the MySQL impl so that the translation between JSON /
LONGTEXT is accommodated by autogenerate, treating LONGTEXT from the server
as equivalent to an existing JSON in the model.
.. change::
:tags: mssql
Removed a warning raised by SQLAlchemy when dropping constraints
on MSSQL regarding statement caching.
.. changelog::
:version: 1.7.5
:released: November 11, 2021
.. change::
:tags: bug, tests
Adjustments to the test suite to accommodate for error message changes
occurring as of SQLAlchemy 1.4.27.
.. changelog::
:version: 1.7.4
:released: October 6, 2021
.. change::
:tags: bug, regression
:tickets: 934
Fixed a regression that prevented the use of post write hooks
on python version lower than 3.9
.. change::
:tags: bug, environment
:tickets: 944
Fixed issue where the :meth:`.MigrationContext.autocommit_block` feature
would fail to function when using a SQLAlchemy engine using 2.0 future
mode.
.. changelog::
:version: 1.7.3
:released: September 17, 2021
.. change::
:tags: bug, mypy
:tickets: 914
Fixed type annotations for the "constraint_name" argument of operations
``create_primary_key()``, ``create_foreign_key()``. Pull request courtesy
TilmanK.
.. changelog::
:version: 1.7.2
:released: September 17, 2021
.. change::
:tags: bug, typing
:tickets: 900
Added missing attributes from context stubs.
.. change::
:tags: bug, mypy
:tickets: 897
Fixed an import in one of the .pyi files that was triggering an
assertion error in some versions of mypy.
.. change::
:tags: bug, regression, ops
:tickets: 920
Fixed issue where registration of custom ops was prone to failure due to
the registration process running ``exec()`` on generated code that as of
the 1.7 series includes pep-484 annotations, which in the case of end user
code would result in name resolution errors when the exec occurs. The logic
in question has been altered so that the annotations are rendered as
forward references so that the ``exec()`` can proceed.
.. changelog::
:version: 1.7.1
:released: August 30, 2021
.. change::
:tags: bug, installation
:tickets: 893
Corrected "universal wheel" directive in setup.cfg so that building a wheel
does not target Python 2. The PyPI files index for 1.7.0 was corrected
manually. Pull request courtesy layday.
.. change::
:tags: bug, pep484
:tickets: 895
Fixed issue in generated .pyi files where default values for ``Optional``
arguments were missing, thereby causing mypy to consider them as required.
.. change::
:tags: bug, regression, batch
:tickets: 896
Fixed regression in batch mode due to :ticket:`883` where the "auto" mode
of batch would fail to accommodate any additional migration directives
beyond encountering an ``add_column()`` directive, due to a mis-application
of the conditional logic that was added as part of this change, leading to
"recreate" mode not being used in cases where it is required for SQLite
such as for unique constraints.
.. changelog::
:version: 1.7.0
:released: August 30, 2021
.. change::
:tags: bug, operations
:tickets: 879
Fixed regression due to :ticket:`803` where the ``.info`` and ``.comment``
attributes of ``Table`` would be lost inside of the :class:`.DropTableOp`
class, which when "reversed" into a :class:`.CreateTableOp` would then have
lost these elements. Pull request courtesy Nicolas CANIART.
.. change::
:tags: feature, environment
:tickets: 842
Enhance ``version_locations`` parsing to handle paths containing spaces.
The new configuration option ``version_path_separator`` specifies the
character to use when splitting the ``version_locations`` string. The
default for new configurations is ``version_path_separator = os``,
which will use ``os.pathsep`` (e.g., ``;`` on Windows).
.. change::
:tags: installation, changed
Alembic 1.7 now supports Python 3.6 and above; support for prior versions
including Python 2.7 has been dropped.
.. change::
:tags: bug, sqlite, batch
:tickets: 883
Batch "auto" mode will now select for "recreate" if the ``add_column()``
operation is used on SQLite, and the column itself meets the criteria for
SQLite where ADD COLUMN is not allowed, in this case a functional or
parenthesized SQL expression or a ``Computed`` (i.e. generated) column.
.. change::
:tags: changed, installation
:tickets: 674
Make the ``python-dateutil`` library an optional dependency.
This library is only required if the ``timezone`` option
is used in the Alembic configuration.
An extra require named ``tz`` is available with
``pip install alembic[tz]`` to install it.
.. change::
:tags: bug, commands
:tickets: 856
Re-implemented the ``python-editor`` dependency as a small internal
function to avoid the need for external dependencies.
.. change::
:tags: usecase, batch
:tickets: 884
Named CHECK constraints are now supported by batch mode, and will
automatically be part of the recreated table assuming they are named. They
also can be explicitly dropped using ``op.drop_constraint()``. For
"unnamed" CHECK constraints, these are still skipped as they cannot be
distinguished from the CHECK constraints that are generated by the
``Boolean`` and ``Enum`` datatypes.
Note that this change may require adjustments to migrations that drop or
rename columns which feature an associated named check constraint, such
that an additional ``op.drop_constraint()`` directive should be added for
that named constraint as there will no longer be an associated column
for it; for the ``Boolean`` and ``Enum`` datatypes, an ``existing_type``
keyword may be passed to ``BatchOperations.drop_constraint`` as well.
.. seealso::
:ref:`batch_schematype_constraints`
:ref:`batch_check_constraints`
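A sketch of dropping such a named constraint in batch mode (the names
are illustrative)::

    from alembic import op

    def upgrade():
        with op.batch_alter_table("account") as batch_op:
            # the constraint must have been given an explicit name
            batch_op.drop_constraint("ck_account_balance_positive")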
.. change::
:tags: changed, installation
:tickets: 885
The dependency on ``pkg_resources`` which is part of ``setuptools`` has
been removed, so there is no longer any runtime dependency on
``setuptools``. The functionality has been replaced with
``importlib.metadata`` and ``importlib.resources`` which are both part of
Python std.lib, or via the pypi dependency ``importlib-metadata`` for Python
version < 3.8 and ``importlib-resources`` for Python version < 3.9
(while importlib.resources was added to Python in 3.7, it did not include
the "files" API until 3.9).
.. change::
:tags: feature, tests
:tickets: 855
Created a "test suite" similar to the one for SQLAlchemy, allowing
developers of third-party dialects to test their code against a set of
Alembic tests that have been specially selected to exercise
back-end database operations. At the time of release,
third-party dialects that have adopted the Alembic test suite to verify
compatibility include
`CockroachDB <https://pypi.org/project/sqlalchemy-cockroachdb/>`_ and
`SAP ASE (Sybase) <https://pypi.org/project/sqlalchemy-sybase/>`_.
.. change::
:tags: bug, postgresql
:tickets: 874
Fixed issue where usage of the PostgreSQL ``postgresql_include`` option
within a :meth:`.Operations.create_index` would raise a KeyError, as the
additional column(s) need to be added to the table object used by the
construct internally. The issue is equivalent to the SQL Server issue fixed
in :ticket:`513`. Pull request courtesy Steven Bronson.
.. change::
:tags: feature, general
pep-484 type annotations have been added throughout the library.
Additionally, stub .pyi files have been added for the "dynamically"
generated Alembic modules ``alembic.op`` and ``alembic.config``, which
include complete function signatures and docstrings, so that the functions
in these namespaces will have both IDE support (vscode, pycharm, etc) as
well as support for typing tools like Mypy. The files themselves are
statically generated from their source functions within the source tree.
.. changelog::
:version: 1.6.5
:released: May 27, 2021
.. change::
:tags: bug, autogenerate
:tickets: 849
Fixed issue where dialect-specific keyword arguments within the
:class:`.DropIndex` operation directive would not render in the
autogenerated Python code. As support was improved for adding dialect
specific arguments to directives as part of :ticket:`803`, in particular
arguments such as "postgresql_concurrently" which apply to the actual
create/drop of the index, support was needed for these to render even in a
drop index operation. Pull request courtesy Jet Zhou.
.. changelog::
:version: 1.6.4
:released: May 24, 2021
.. change::
:tags: bug, regression, op directives
:tickets: 848
Fixed regression caused by just fixed :ticket:`844` that scaled back the
filter for ``unique=True/index=True`` too far such that these directives no
longer worked for the ``op.create_table()`` op, this has been fixed.
.. changelog::
:version: 1.6.3
:released: May 21, 2021
.. change::
:tags: bug, regression, autogenerate
:tickets: 844
Fixed 1.6-series regression where ``UniqueConstraint`` and to a lesser
extent ``Index`` objects would be doubled up in the generated model when
the ``unique=True`` / ``index=True`` flags were used.
.. change::
:tags: bug, autogenerate
:tickets: 839
Fixed a bug where paths defined in post-write hook options
would be wrongly escaped in non posix environment (Windows).
.. change::
:tags: bug, regression, versioning
:tickets: 843
Fixed regression where a revision file that contained its own down revision
as a dependency would cause an endless loop in the traversal logic.
.. changelog::
:version: 1.6.2
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 839
Fixed additional regression nearly the same as that of :ticket:`838` just
released in 1.6.1 but within a slightly different codepath, where "alembic
downgrade head" (or equivalent) would fail instead of iterating no
revisions.
.. changelog::
:version: 1.6.1
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 838
Fixed regression in new revisioning traversal where "alembic downgrade
base" would fail if the database itself were clean and unversioned;
additionally repairs the case where downgrade would fail if attempting
to downgrade to the current head that is already present.
.. changelog::
:version: 1.6.0
:released: May 3, 2021
.. change::
:tags: bug, autogenerate
:tickets: 803
Refactored the implementation of :class:`.MigrateOperation` constructs such
as :class:`.CreateIndexOp`, :class:`.CreateTableOp`, etc. so that they no
longer rely upon maintaining a persistent version of each schema object
internally; instead, the state variables of each operation object will be
used to produce the corresponding construct when the operation is invoked.
The rationale is so that environments which make use of
operation-manipulation schemes such as those discussed in
:ref:`autogen_rewriter` are better supported, allowing end-user code to
manipulate the public attributes of these objects which will then be
expressed in the final output, an example is
``some_create_index_op.kw["postgresql_concurrently"] = True``.
Previously, these objects when generated from autogenerate would typically
hold onto the original, reflected element internally without honoring the
other state variables of each construct, preventing the public API from
working.
.. change::
:tags: bug, environment
:tickets: 829
Fixed regression caused by the SQLAlchemy 1.4/2.0 compatibility switch
where calling ``.rollback()`` or ``.commit()`` explicitly within the
``context.begin_transaction()`` context manager would cause it to fail when
the block ended, as it did not expect that the transaction was manually
closed.
.. change::
:tags: bug, autogenerate
:tickets: 827
Improved the rendering of ``op.add_column()`` operations when adding
multiple columns to an existing table, so that the order of these
statements matches the order in which the columns were declared in the
application's table metadata. Previously the added columns were being
sorted alphabetically.
.. change::
:tags: feature, autogenerate
:tickets: 819
Fixed the documentation regarding the default command-line argument position of
the revision script filename within the post-write hook arguments. Implemented a
``REVISION_SCRIPT_FILENAME`` token, enabling the position to be changed. Switched
from ``str.split()`` to ``shlex.split()`` for more robust command-line argument
parsing.
.. change::
:tags: feature
:tickets: 822
Implemented a ``.cwd`` (current working directory) suboption for post-write hooks
(of type ``console_scripts``). This is useful for tools like pre-commit, which
rely on the working directory to locate the necessary config files. Added
pre-commit as an example to the documentation. Minor change: renamed some
variables from ticket #819 to improve readability.
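For example, a minimal alembic.ini sketch (the hook name and options are
illustrative; ``pre-commit.cwd`` assumes the config file lives at the
project root)::

    [post_write_hooks]
    hooks = pre-commit
    pre-commit.type = console_scripts
    pre-commit.entrypoint = pre-commit
    pre-commit.options = run --files REVISION_SCRIPT_FILENAME
    pre-commit.cwd = %(here)s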
.. change::
:tags: bug, versioning
:tickets: 765, 464
The algorithm used for calculating downgrades/upgrades/iterating
revisions has been rewritten, to resolve ongoing issues of branches
not being handled consistently particularly within downgrade operations,
as well as for overall clarity and maintainability. This change includes
that a deprecation warning is emitted if an ambiguous command such
as "downgrade -1" is given when multiple heads are present.
In particular, the change implements a long-requested use case of allowing
downgrades of a single branch to a branchpoint.
Huge thanks to Simon Bowly for their impressive efforts in successfully
tackling this very difficult problem.
.. change::
:tags: bug, batch
:tickets: 799
Added missing ``batch_op.create_table_comment()``,
``batch_op.drop_table_comment()`` directives to batch ops.
.. changelog::
:version: 1.5.8
:released: March 23, 2021
.. change::
:tags: bug, environment
:tickets: 816
Fixed regression caused by SQLAlchemy 1.4 where the "alembic current"
command would fail due to changes in the ``URL`` object.
.. changelog::
:version: 1.5.7
:released: March 11, 2021
.. change::
:tags: bug, autogenerate
:tickets: 813
Adjusted the recently added
:paramref:`.EnvironmentContext.configure.include_name` hook to accommodate
for additional object types such as "views" that don't have a parent table,
to support third party recipes and extensions. Pull request courtesy Oliver
Rice.
.. changelog::
:version: 1.5.6
:released: March 5, 2021
.. change::
:tags: bug, mssql, operations
:tickets: 812
Fixed bug where the "existing_type" parameter, which the MSSQL dialect
requires in order to change the nullability of a column in the absence of
also changing the column type, would cause an ALTER COLUMN operation to
incorrectly render a second ALTER statement without the nullability if a
new type were also present, as the MSSQL-specific contract did not
anticipate all three of "nullability", ``"type_"`` and "existing_type" being
sent at the same time.
.. change::
:tags: template
:tickets: 805
Added an async template to Alembic to bootstrap environments that use an
async DBAPI. Updated the cookbook to include a migration guide
on how to adapt an existing environment for use with async DBAPI drivers.
.. changelog::
:version: 1.5.5
:released: February 20, 2021
.. change::
:tags: bug
Adjusted the use of SQLAlchemy's ".copy()" internals to use "._copy()"
for version 1.4.0, as this method is being renamed.
.. change::
:tags: bug, environment
:tickets: 797
Added new config file option ``prepend_sys_path``, which is a series of
paths that will be prepended to sys.path; the default value in newly
generated alembic.ini files is ".". This fixes a long-standing issue
where for some reason running the alembic command line would not place the
local "." path in sys.path, meaning an application locally present in "."
and importable through normal channels, e.g. python interpreter, pytest,
etc. would not be located by Alembic, even though the ``env.py`` file is
loaded relative to the current path when ``alembic.ini`` contains a
relative path. To enable for existing installations, add the option to the
alembic.ini file as follows::
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
.. seealso::
:ref:`installation` - updated documentation reflecting that local
installation of the project is not necessary if running the Alembic cli
from the local path.
.. changelog::
:version: 1.5.4
:released: February 3, 2021
.. change::
:tags: bug, versioning
:tickets: 789
Fixed bug in versioning model where a downgrade across a revision with a
dependency on another branch, yet an ancestor is also dependent on that
branch, would produce an erroneous state in the alembic_version table,
making upgrades impossible without manually repairing the table.
.. changelog::
:version: 1.5.3
:released: January 29, 2021
.. change::
:tags: bug, autogenerate
:tickets: 786
Changed the default ordering of "CREATE" and "DROP" statements indexes and
unique constraints within the autogenerate process, so that for example in
an upgrade() operation, a particular index or constraint that is to be
replaced such as for a casing convention change will not produce any naming
conflicts. For foreign key constraint objects, this is already how
constraints are ordered, and for table objects, users would normally want
to use :meth:`.Operations.rename_table` in any case.
.. change::
:tags: bug, autogenerate, mssql
:tickets: 787
Fixed assorted autogenerate issues with SQL Server:
* ignore default reflected identity on primary_key columns
* improve server default comparison
.. change::
:tags: bug, mysql, autogenerate
:tickets: 788
Fixed issue where autogenerate rendering of ``op.alter_column()`` would
fail to include MySQL ``existing_nullable=False`` if the column were part
of a primary key constraint within the table metadata.
.. changelog::
:version: 1.5.2
:released: January 20, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 784
Fixed regression where new "loop detection" feature introduced in
:ticket:`757` produced false positives for revision names that have
overlapping substrings between revision number and down revision and/or
dependency, if the downrev/dependency were not in sequence form.
.. change::
:tags: bug, environment
:tickets: 782
Fixed regression where Alembic would fail to create a transaction properly
if the :class:`sqlalchemy.engine.Connection` were a so-called "branched"
connection, that is, one where the ``.connect()`` method had been called to
create a "sub" connection.
.. changelog::
:version: 1.5.1
:released: January 19, 2021
.. change::
:tags: bug, installation, commands
:tickets: 780
Fixed installation issue where the "templates" directory was not being
installed, preventing commands like "list_templates" and "init" from
working.
.. changelog::
:version: 1.5.0
:released: January 18, 2021
.. change::
:tags: usecase, operations
:tickets: 730
Added support for rendering of "identity" elements on
:class:`.Column` objects, supported in SQLAlchemy via
the :class:`.Identity` element introduced in version 1.4.
Adding columns with identity is supported on PostgreSQL,
MSSQL and Oracle. Changing the identity options or removing
it is supported only on PostgreSQL and Oracle.
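For example, a minimal sketch of a migration that renders the new element
(table and column names are hypothetical)::

    from alembic import op
    from sqlalchemy import Column, Integer, Identity

    def upgrade():
        op.add_column(
            "account",
            Column("account_id", Integer, Identity(start=1), nullable=False),
        )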
.. change::
:tags: changed, environment
To accommodate SQLAlchemy 1.4 and 2.0, the migration model now no longer
assumes that the SQLAlchemy Connection will autocommit an individual
operation. This essentially means that for databases that use
non-transactional DDL (pysqlite current driver behavior, MySQL), there is
still a BEGIN/COMMIT block that will surround each individual migration.
Databases that support transactional DDL should continue to have the
same flow, either per migration or per-entire run, depending on the
value of the :paramref:`.Environment.configure.transaction_per_migration`
flag.
.. change::
:tags: changed, environment
A :class:`.CommandError` is raised if a ``sqlalchemy.engine.Engine`` is
passed to the :meth:`.MigrationContext.configure` method instead of a
``sqlalchemy.engine.Connection`` object. Previously, this would be a
warning only.
.. change::
:tags: bug, operations
:tickets: 753
Modified the ``add_column()`` operation such that the ``Column`` object in
use is shallow copied to a new instance if that ``Column`` is already
attached to a ``table()`` or ``Table``. This accommodates for the change
made in SQLAlchemy issue #5618 which prohibits a ``Column`` from being
associated with multiple ``table()`` objects. This resumes support for
using a ``Column`` inside of an Alembic operation that already refers to a
parent ``table()`` or ``Table`` as well as allows operation objects just
autogenerated to work.
.. change::
:tags: feature, autogenerate
:tickets: 650
Added new hook :paramref:`.EnvironmentContext.configure.include_name`,
which complements the
:paramref:`.EnvironmentContext.configure.include_object` hook by providing
a means of preventing objects of a certain name from being autogenerated
**before** the SQLAlchemy reflection process takes place, and notably
includes explicit support for passing each schema name when
:paramref:`.EnvironmentContext.configure.include_schemas` is set to True.
This is most important especially for environments that make use of
:paramref:`.EnvironmentContext.configure.include_schemas` where schemas are
actually databases (e.g. MySQL) in order to prevent reflection sweeps of
the entire server.
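A minimal sketch of the hook as used in ``env.py`` (the schema names are
hypothetical)::

    def include_name(name, type_, parent_names):
        if type_ == "schema":
            # only include these schemas / databases in autogenerate
            return name in ["inventory", "billing"]
        else:
            return True

    context.configure(
        # ...
        include_schemas=True,
        include_name=include_name,
    )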
.. seealso::
:ref:`autogenerate_include_hooks` - new documentation section
.. change::
:tags: removed, autogenerate
The long deprecated
:paramref:`.EnvironmentContext.configure.include_symbol` hook is removed.
The :paramref:`.EnvironmentContext.configure.include_object`
and :paramref:`.EnvironmentContext.configure.include_name`
hooks both achieve the goals of this hook.
.. change::
:tags: bug, autogenerate
:tickets: 721
Added rendering for the ``Table.prefixes`` element to autogenerate so that
the rendered Python code includes these directives. Pull request courtesy
Rodrigo Ce Moretto.
.. change::
:tags: bug, batch
:tickets: 761
Added missing "create comment" feature for columns that are altered in
batch migrations.
.. change::
:tags: changed
:tickets: 748
Alembic 1.5.0 now supports **Python 2.7 and Python 3.6 and above**, as well
as **SQLAlchemy 1.3.0 and above**. Support is removed for Python 3
versions prior to 3.6 and SQLAlchemy versions prior to the 1.3 series.
.. change::
:tags: bug, batch
:tickets: 773
Made an adjustment to the PostgreSQL dialect to allow it to work more
effectively in batch mode, where a datatype like Boolean or non-native Enum
that may have embedded rules to generate CHECK constraints will be more
correctly handled in that these constraints usually will not have been
generated on the PostgreSQL backend; previously it would inadvertently
assume they existed unconditionally in a special PG-only "drop constraint"
step.
.. change::
:tags: feature, versioning
:tickets: 757
The revision tree is now checked for cycles and loops between revision
files when the revision environment is loaded up. Scenarios such as a
revision pointing to itself, or a revision that can reach itself via a
loop, are handled and will raise the :class:`.CycleDetected` exception when
the environment is loaded (expressed from the Alembic commandline as a
failure message and nonzero return code). Previously, these situations were
silently ignored up front, and the behavior of revision traversal would
either be silently incorrect, or would produce errors such as
:class:`.RangeNotAncestorError`. Pull request courtesy Koichiro Den.
.. change::
:tags: usecase, commands
Add ``__main__.py`` file to alembic package to support invocation
with ``python -m alembic``.
.. change::
:tags: removed, commands
Removed deprecated ``--head_only`` option to the ``alembic current``
command
.. change::
:tags: removed, operations
Removed legacy parameter names from operations; these have been emitting
warnings since version 0.8. In the case that legacy version files have not
yet been updated, these can be modified directly in order to maintain
compatibility (an updated example follows the list below):
* :meth:`.Operations.drop_constraint` - "type" (use ``"type_"``) and "name"
(use "constraint_name")
* :meth:`.Operations.create_primary_key` - "cols" (use "columns") and
"name" (use "constraint_name")
* :meth:`.Operations.create_unique_constraint` - "name" (use
"constraint_name"), "source" (use "table_name") and "local_cols" (use
"columns")
* :meth:`.Operations.batch_create_unique_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_foreign_key` - "name" (use "constraint_name"),
"source" (use "source_table"), "referent" (use "referent_table")
* :meth:`.Operations.batch_create_foreign_key` - "name" (use
"constraint_name"), "referent" (use "referent_table")
* :meth:`.Operations.create_check_constraint` - "name" (use
"constraint_name"), "source" (use "table_name")
* :meth:`.Operations.batch_create_check_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_index` - "name" (use "index_name")
* :meth:`.Operations.drop_index` - "name" (use "index_name"), "tablename"
(use "table_name")
* :meth:`.Operations.batch_drop_index` - "name" (use "index_name"),
* :meth:`.Operations.create_table` - "name" (use "table_name")
* :meth:`.Operations.drop_table` - "name" (use "table_name")
* :meth:`.Operations.alter_column` - "name" (use "new_column_name")
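For example, a hypothetical legacy call and its updated form::

    # legacy keyword names, now removed:
    # op.drop_constraint(name="fk_user_address", table_name="address",
    #                    type="foreignkey")

    # current form:
    op.drop_constraint(
        constraint_name="fk_user_address",
        table_name="address",
        type_="foreignkey",
    )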
.. changelog::
:version: 1.4.3
:released: September 11, 2020
.. change::
:tags: bug, sqlite, batch
:tickets: 711
Added support to drop named CHECK constraints that are specified as part of
a column, rather than table wide. Previously, only constraints associated
with the table were considered.
.. change::
:tags: bug, ops, mysql
:tickets: 736
Fixed issue where the MySQL dialect would not correctly render the server
default of a column in an alter operation, if the operation were
programmatically generated from an autogenerate pass as it would not
accommodate for the full structure of the DefaultClause construct.
.. change::
:tags: bug, sqlite, batch
:tickets: 697
Fixed issue where the CAST applied to a JSON column when copying a SQLite
table during batch mode would cause the data to be lost, as SQLite's CAST
with JSON appears to convert the data to the value "0". The CAST is now
skipped in a dialect-specific manner, including for JSON columns on SQLite.
Pull request courtesy Sebastián Ramírez.
.. change::
:tags: bug, commands
:tickets: 694
The ``alembic current`` command no longer creates an ``alembic_version``
table in the database if one does not exist already, returning no version
as the current version. This allows checking for migrations in parallel
without introducing race conditions. Pull request courtesy Nikolay
Edigaryev.
.. change::
:tags: bug, batch
Fixed issue where columns in a foreign-key referenced table would be
replaced with null-type columns during a batch operation; while this did
not generally have any side effects, it could theoretically impact a batch
operation that also targets that table directly and also would interfere
with future changes to the ``.append_column()`` method to disallow implicit
replacement of columns.
.. change::
:tags: bug, mssql
:tickets: 716
Fixed issue where the ``mssql_drop_foreign_key=True`` flag on
``op.drop_column`` would lead to an incorrect syntax error due to a typo in the
SQL emitted; the same typo was present in the test as well, so it was not
detected. Pull request courtesy Oleg Shigorin.
.. changelog::
:version: 1.4.2
:released: March 19, 2020
.. change::
:tags: usecase, autogenerate
:tickets: 669
Adjusted autogen comparison to accommodate for backends that support
computed column reflection, dependent on SQLAlchemy version 1.3.16 or
higher. This emits a warning if the SQL expression inside of a
:class:`.Computed` value changes between the metadata and the database, as
these expressions can't be changed without dropping and recreating the
column.
.. change::
:tags: bug, tests
:tickets: 668
Fixed an issue that prevented the test suite from running with the
recently released py.test 5.4.0.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 671
Fixed more false-positive failures produced by the new "compare type" logic
first added in :ticket:`605`, particularly impacting MySQL string types
regarding flags such as "charset" and "collation".
.. change::
:tags: bug, op directives, oracle
:tickets: 670
Fixed issue in Oracle backend where a table RENAME with a schema-qualified
name would include the schema in the "to" portion, which is rejected by
Oracle.
.. changelog::
:version: 1.4.1
:released: March 1, 2020
.. change::
:tags: bug, autogenerate
:tickets: 661
Fixed regression caused by the new "type comparison" logic introduced in
1.4 as part of :ticket:`605` where comparisons of MySQL "unsigned integer"
datatypes would produce false positives, as the regular expression logic
was not correctly parsing the "unsigned" token when MySQL's default display
width would be returned by the database. Pull request courtesy Paul
Becotte.
.. change::
:tags: bug, environment
:tickets: 663
Error message for "path doesn't exist" when loading up script environment
now displays the absolute path. Pull request courtesy Rowan Hart.
.. change::
:tags: bug, autogenerate
:tickets: 654
Fixed regression in 1.4.0 due to :ticket:`647` where unique constraint
comparison with mixed case constraint names while not using a naming
convention would produce false positives during autogenerate.
.. change::
:tags: bug, environment
The check for matched rowcount when the alembic_version table is updated or
deleted from is now conditional based on whether or not the dialect
supports the concept of "rowcount" for UPDATE or DELETE rows matched. Some
third party dialects do not support this concept. Pull request courtesy Ke
Zhu.
.. change::
:tags: bug, operations
:tickets: 655
Fixed long-standing bug where an inline column CHECK constraint would not
be rendered within an "ADD COLUMN" operation. The DDL compiler is now
consulted for inline constraints within the :meth:`.Operations.add_column`
method as is done for regular CREATE TABLE operations.
.. changelog::
:version: 1.4.0
:released: February 4, 2020
.. change::
:tags: change
The internal inspection routines no longer use SQLAlchemy's
``Inspector.from_engine()`` method, which is expected to be deprecated in
1.4. The ``inspect()`` function is now used.
.. change::
:tags: bug, autogenerate
:tickets: 647
Adjusted the unique constraint comparison logic in a similar manner as that
of :ticket:`421` did for indexes in order to take into account SQLAlchemy's
own truncation of long constraint names when a naming convention is in use.
Without this step, a name that is truncated by SQLAlchemy based on a unique
constraint naming convention or hardcoded name will not compare properly.
.. change::
:tags: feature, batch
:tickets: 640
Added new parameters :paramref:`.BatchOperations.add_column.insert_before`,
:paramref:`.BatchOperations.add_column.insert_after` which provide for
establishing the specific position in which a new column should be placed.
Also added :paramref:`.Operations.batch_alter_table.partial_reordering`
which allows the complete set of columns to be reordered when the new table
is created. Both operations apply only to when batch mode is recreating
the whole table using ``recreate="always"``. Thanks to Marcin Szymanski
for assistance with the implementation.
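A minimal sketch (table and column names are hypothetical); note the use of
``recreate="always"`` per the caveat above::

    from alembic import op
    from sqlalchemy import Column, String

    with op.batch_alter_table("account", recreate="always") as batch_op:
        batch_op.add_column(
            Column("middle_name", String(50)), insert_after="first_name"
        )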
.. change::
:tags: usecase, environment
:tickets: 648
Moved the use of the ``__file__`` attribute at the base of the Alembic
package into the one place that it is specifically needed, which is when
the config attempts to locate the template directory. This helps to allow
Alembic to be fully importable in environments that are using Python
memory-only import schemes. Pull request courtesy layday.
.. change::
:tags: bug, autogenerate
:tickets: 605
A major rework of the "type comparison" logic is in place which changes the
entire approach by which column datatypes are compared. Types are now
compared based on the DDL string generated by the metadata type vs. the
datatype reflected from the database. This means we compare types based on
what would actually render and additionally if elements of the types change
like string length, those changes are detected as well. False positives
like those generated between SQLAlchemy Boolean and MySQL TINYINT should
also be resolved. Thanks very much to Paul Becotte for lots of hard work
and patience on this one.
.. seealso::
:ref:`autogenerate_detects` - updated comments on type comparison
.. changelog::
:version: 1.3.3
:released: January 22, 2020
.. change::
:tags: bug, postgresql
:tickets: 637
Fixed issue where COMMENT directives for PostgreSQL failed to correctly
include an explicit schema name, as well as correct quoting rules for
schema, table, and column names. Pull request courtesy Matthew Sills.
.. change::
:tags: usecase, operations
:tickets: 624
Added support for rendering of "computed" elements on :class:`.Column`
objects, supported in SQLAlchemy via the new :class:`.Computed` element
introduced in version 1.3.11. Pull request courtesy Federico Caselli.
Note that there is currently no support for ALTER COLUMN to add, remove, or
modify the "GENERATED ALWAYS AS" element from a column; at least for
PostgreSQL, it does not seem to be supported by the database. Additionally,
SQLAlchemy does not currently reliably reflect the "GENERATED ALWAYS AS"
phrase from an existing column, so there is also no autogenerate support
for addition or removal of the :class:`.Computed` element to or from an
existing column; there is only support for adding new columns that include
the :class:`.Computed` element. In the case that the :class:`.Computed`
element is removed from the :class:`.Column` object in the table metadata,
PostgreSQL and Oracle currently reflect the "GENERATED ALWAYS AS"
expression as the "server default" which will produce an op that tries to
drop the element as a default.
.. changelog::
:version: 1.3.2
:released: December 16, 2019
.. change::
:tags: bug, api, autogenerate
:tickets: 635
Fixed regression introduced by :ticket:`579` where server default rendering
functions began to require a dialect implementation, however the
:func:`.render_python_code` convenience function did not include one, thus
causing the function to fail when used in a server default context. The
function now accepts a migration context argument and also creates one
against the default dialect if one is not provided.
.. changelog::
:version: 1.3.1
:released: November 13, 2019
.. change::
:tags: bug, mssql
:tickets: 621
Fixed bug in MSSQL dialect where the drop constraint execution steps used
to remove server default or implicit foreign key constraint failed to take
into account the schema name of the target table.
.. changelog::
:version: 1.3.0
:released: October 31, 2019
.. change::
:tags: feature, command
:tickets: 608
Added support for ALEMBIC_CONFIG environment variable,
refers to the location of the alembic configuration script
in lieu of using the -c command line option.
.. change::
:tags: bug, autogenerate
:tickets: 131
Fixed bug in new Variant autogenerate where the order of the arguments to
Variant were mistakenly reversed.
.. change::
:tags: change, compatibility
Some internal modifications have been made to how the names of indexes and
unique constraints work to make use of new functions added in SQLAlchemy
1.4, so that SQLAlchemy has more flexibility over how naming conventions
may be applied to these objects.
.. changelog::
:version: 1.2.1
:released: September 24, 2019
.. change::
:tags: bug, command
:tickets: 601
Reverted the name change of the "revisions" argument to
:func:`.command.stamp` to "revision" as apparently applications are
calling upon this argument as a keyword name. Pull request courtesy
Thomas Bechtold. Special translations are also added to the command
line interface so that it is still known as "revisions" in the CLI.
.. change::
:tags: bug, tests
:tickets: 592
Removed the "test requirements" from "setup.py test", as this command now
only emits a removal error in any case and these requirements are unused.
.. changelog::
:version: 1.2.0
:released: September 20, 2019
.. change::
:tags: feature, command
:tickets: 473
Added new ``--purge`` flag to the ``alembic stamp`` command, which will
unconditionally erase the version table before stamping anything. This is
useful for development where non-existent version identifiers might be left
within the table. Additionally, ``alembic.stamp`` now supports a list of
revision identifiers, which are intended to allow setting up multiple heads
at once. Overall handling of version identifiers within the
``alembic.stamp`` command has been improved with many new tests and
use cases added.
.. change::
:tags: bug, autogenerate
:tickets: 550
Improved the Python rendering of a series of migration operations such that
a single "pass" is rendered for a :class:`.UpgradeOps` or
:class:`.DowngradeOps` based on if no lines of Python code actually
rendered under the operation, rather than whether or not sub-directives
exist. Removed extra "pass" lines that would generate from the
:class:`.ModifyTableOps` directive so that these aren't duplicated under
operation rewriting scenarios.
.. change::
:tags: feature, runtime
:tickets: 123
Added new feature :meth:`.MigrationContext.autocommit_block`, a special
directive which will provide for a non-transactional block inside of a
migration script. The feature requires that the database driver
(e.g. DBAPI) supports the AUTOCOMMIT isolation mode. The directive
also necessarily needs to COMMIT the existing transaction in progress
in order to enter autocommit mode.
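A minimal sketch of the directive in use inside a migration script (the DDL
shown is illustrative)::

    from alembic import op

    def upgrade():
        with op.get_context().autocommit_block():
            op.execute("ALTER TYPE mood ADD VALUE 'soso'")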
.. seealso::
:meth:`.MigrationContext.autocommit_block`
.. change::
:tags: change, py3k
Python 3.4 support is dropped, as the upstream tooling (pip, mysqlclient)
etc are already dropping support for Python 3.4, which itself is no longer
maintained.
.. change::
:tags: usecase, autogenerate
:tickets: 518
Added autogenerate support for :class:`.Column` objects that have
dialect-specific ``**kwargs``, support first added in SQLAlchemy 1.3.
This includes SQLite "on conflict" as well as options used by some
third party dialects.
.. change::
:tags: usecase, autogenerate
:tickets: 131
Added rendering for SQLAlchemy ``Variant`` datatypes, which render as the
base type plus one or more ``.with_variant()`` method calls.
.. change::
:tags: usecase, commands
:tickets: 534
Made the command interface revision lookup behavior more strict in that an
Alembic revision number is only resolved based on partial match rules if
it has at least four characters, to prevent simple typographical issues
from inadvertently running migrations.
.. change::
:tags: feature, commands
:tickets: 307
Added "post write hooks" to revision generation. These allow custom logic
to run after a revision Python script is generated, typically for the
purpose of running code formatters such as "Black" or "autopep8", but may
be used for any arbitrary post-render hook as well, including custom Python
functions or scripts. The hooks are enabled by providing a
``[post_write_hooks]`` section in the alembic.ini file. A single hook
is provided which runs an arbitrary Python executable on the newly
generated revision script, which can be configured to run code formatters
such as Black; full examples are included in the documentation.
.. seealso::
:ref:`post_write_hooks`
.. change::
:tags: feature, environment
:tickets: 463
Added new flag ``--package`` to ``alembic init``. For environments where
the Alembic migration files and such are within the package tree and
importable as modules, this flag can be specified which will add the
additional ``__init__.py`` files in the version location and the
environment location.
.. change::
:tags: bug, autogenerate
:tickets: 549
Fixed bug where rendering of comment text for table-level comments within
:meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` was not properly quote-escaped
within rendered Python code for autogenerate.
.. change::
:tags: bug, autogenerate
:tickets: 505
Modified the logic of the :class:`.Rewriter` object such that it keeps a
memoization of which directives it has processed, so that it can ensure it
processes a particular directive only once, and additionally fixed
:class:`.Rewriter` so that it functions correctly for multiple-pass
autogenerate schemes, such as the one illustrated in the "multidb"
template. By tracking which directives have been processed, a
multiple-pass scheme which calls upon the :class:`.Rewriter` multiple times
for the same structure as elements are added can work without running
duplicate operations on the same elements more than once.
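A minimal sketch of a :class:`.Rewriter` wired into ``env.py`` (the rewrite
rule shown is illustrative)::

    from alembic.autogenerate import rewriter
    from alembic.operations import ops

    writer = rewriter.Rewriter()

    @writer.rewrites(ops.AddColumnOp)
    def add_column(context, revision, op):
        # e.g. force newly added columns to be nullable
        op.column.nullable = True
        return op

    context.configure(
        # ...
        process_revision_directives=writer,
    )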
.. changelog::
:version: 1.1.0
:released: August 26, 2019
.. change::
:tags: change
Alembic 1.1 bumps the minimum version of SQLAlchemy to 1.1. As was the
case before, Python requirements remain at Python 2.7, or in the 3.x series
Python 3.4.
.. change::
:tags: change, internals
The test suite for Alembic now makes use of SQLAlchemy's testing framework
directly. Previously, Alembic had its own version of this framework that
was mostly copied from that of SQLAlchemy to enable testing with older
SQLAlchemy versions. The majority of this code is now removed so that both
projects can leverage improvements from a common testing framework.
.. change::
:tags: bug, commands
:tickets: 562
Fixed bug where the double-percent logic applied to some dialects such as
psycopg2 would be rendered in ``--sql`` mode, by allowing dialect options
to be passed through to the dialect used to generate SQL and then providing
``paramstyle="named"`` so that percent signs need not be doubled. For
users having this issue, existing env.py scripts need to add
``dialect_opts={"paramstyle": "named"}`` to their offline
context.configure(). See the ``alembic/templates/generic/env.py`` template
for an example.
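A sketch of the updated offline configuration, mirroring the generic
template::

    def run_migrations_offline():
        context.configure(
            url=url,
            target_metadata=target_metadata,
            literal_binds=True,
            dialect_opts={"paramstyle": "named"},
        )
        with context.begin_transaction():
            context.run_migrations()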
.. change::
:tags: bug, py3k
Fixed use of the deprecated "imp" module, which is used to detect pep3147
availability as well as to locate .pyc files, which started emitting
deprecation warnings during the test suite. The warnings were not being
emitted earlier during the test suite, the change is possibly due to
changes in py.test itself but this is not clear. The check for pep3147 is
set to True for any Python version 3.5 or greater now and importlib is used
when available. Note that some dependencies such as distutils may still be
emitting this warning. Tests are adjusted to accommodate for dependencies
that emit the warning as well.
.. change::
:tags: bug, mysql
:tickets: 594
Fixed issue where emitting a change of column name for MySQL did not
preserve the column comment, even if it were specified as existing_comment.
.. change::
:tags: bug, setup
:tickets: 592
Removed the "python setup.py test" feature in favor of a straight run of
"tox". Per Pypa / pytest developers, "setup.py" commands are in general
headed towards deprecation in favor of tox. The tox.ini script has been
updated such that running "tox" with no arguments will perform a single run
of the test suite against the default installed Python interpreter.
.. seealso::
https://github.com/pypa/setuptools/issues/1684
https://github.com/pytest-dev/pytest/issues/5534
.. change::
:tags: usecase, commands
:tickets: 571
The "alembic init" command will now proceed if the target directory exists
as long as it's still empty. Previously, it would not proceed if the
directory existed. The new behavior is modeled from what git does, to
accommodate for container or other deployments where an Alembic target
directory may need to be already mounted instead of being created with
alembic init. Pull request courtesy Aviskar KC.
.. changelog::
:version: 1.0.11
:released: June 25, 2019
.. change::
:tags: bug, sqlite, autogenerate, batch
:tickets: 579
SQLite server default reflection will ensure parentheses surround a
column default expression that is detected as being a non-constant
expression, such as a ``datetime()`` default, to accommodate for the
requirement that SQL expressions have to be parenthesized when being sent
as DDL. Parentheses are not added to constant expressions to allow for
maximum cross-compatibility with other dialects and existing test suites
(such as Alembic's), which necessarily entails scanning the expression to
check for constant numeric and string values. The logic is added to the
two "reflection->DDL round trip" paths which are currently autogenerate and
batch migration. Within autogenerate, the logic is on the rendering side,
whereas in batch the logic is installed as a column reflection hook.
.. change::
:tags: bug, sqlite, autogenerate
:tickets: 579
Improved SQLite server default comparison to accommodate for a ``text()``
construct that added parenthesis directly vs. a construct that relied
upon the SQLAlchemy SQLite dialect to render the parenthesis, as well
as improved support for various forms of constant expressions such as
values that are quoted vs. non-quoted.
.. change::
:tags: bug, autogenerate
Fixed bug where the "literal_binds" flag was not being set when
autogenerate would create a server default value, meaning server default
comparisons would fail for functions that contained literal values.
.. change::
:tags: bug, mysql
:tickets: 554
Added support for MySQL "DROP CHECK", which is added as of MySQL 8.0.16,
separate from MariaDB's "DROP CONSTRAINT" for CHECK constraints. The MySQL
Alembic implementation now checks for "MariaDB" in server_version_info to
decide which one to use.
.. change::
:tags: bug, mysql, operations
:tickets: 564
Fixed issue where MySQL databases need to use CHANGE COLUMN when altering a
server default of CURRENT_TIMESTAMP, NOW() and probably other functions
that are only usable with DATETIME/TIMESTAMP columns. While MariaDB
supports both CHANGE and ALTER COLUMN in this case, MySQL databases only
support CHANGE. So the new logic is that if the server default change is
against a DateTime-oriented column, the CHANGE format is used
unconditionally, as in the vast majority of cases the server default is to
be CURRENT_TIMESTAMP which may also be potentially bundled with an "ON
UPDATE CURRENT_TIMESTAMP" directive, which SQLAlchemy does not currently
support as a distinct field. The fix additionally improves the server
default comparison logic when the "ON UPDATE" clause is present and
there are parentheses to be adjusted for, as is the case on some MariaDB
versions.
.. change::
:tags: bug, environment
Warnings emitted by Alembic now include a default stack level of 2, and in
some cases it's set to 3, in order to help warnings indicate more closely
where they are originating from. Pull request courtesy Ash Berlin-Taylor.
.. change::
:tags: bug, py3k
:tickets: 563
Replaced the Python compatibility routines for ``getargspec()`` with a fully
vendored version based on ``getfullargspec()`` from Python 3.3.
Originally, Python was emitting deprecation warnings for this function in
Python 3.8 alphas. While this change was reverted, it was observed that
Python 3 implementations for ``getfullargspec()`` are an order of magnitude
slower as of the 3.4 series where it was rewritten against ``Signature``.
While Python plans to improve upon this situation, SQLAlchemy projects for
now are using a simple replacement to avoid any future issues.
.. changelog::
:version: 1.0.10
:released: April 28, 2019
.. change::
:tags: bug, commands
:tickets: 552
Fixed bug introduced in release 0.9.0 where the helptext for commands
inadvertently got expanded to include function docstrings from the
command.py module. The logic has been adjusted to only refer to the first
line(s) preceding the first line break within each docstring, as was the
original intent.
.. change::
:tags: bug, operations, mysql
:tickets: 551
Added an assertion in :meth:`.RevisionMap.get_revisions` and other methods
which ensures revision numbers are passed as strings or collections of
strings. Driver issues particularly on MySQL may inadvertently be passing
bytes here which leads to failures later on.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 553
Fixed bug when using the
:paramref:`.EnvironmentContext.configure.compare_server_default` flag set
to ``True`` where a server default that is introduced in the table metadata
on an ``Integer`` column, where there is no existing server default in the
database, would raise a ``TypeError``.
.. changelog::
:version: 1.0.9
:released: April 15, 2019
.. change::
:tags: bug, operations
:tickets: 548
Simplified the internal scheme used to generate the ``alembic.op`` namespace
to no longer attempt to generate full method signatures (e.g. rather than
generic ``*args, **kw``) as this was not working in most cases anyway, while
in rare circumstances it would in fact sporadically have access to the real
argument names and then fail when generating the function due to missing
symbols in the argument signature.
.. changelog::
:version: 1.0.8
:released: March 4, 2019
.. change::
:tags: bug, operations
:tickets: 528
Removed use of deprecated ``force`` parameter for SQLAlchemy quoting
functions as this parameter will be removed in a future release.
Pull request courtesy Parth Shandilya (ParthS007).
.. change::
:tags: bug, autogenerate, postgresql, py3k
:tickets: 541
Fixed issue where server default comparison on the PostgreSQL dialect would
fail for a blank string on Python 3.7 only, due to a change in regular
expression behavior in Python 3.7.
.. changelog::
:version: 1.0.7
:released: January 25, 2019
.. change::
:tags: bug, autogenerate
:tickets: 529
Fixed issue in new comment support where autogenerated Python code
for comments wasn't using ``repr()`` thus causing issues with
quoting. Pull request courtesy Damien Garaud.
.. changelog::
:version: 1.0.6
:released: January 13, 2019
.. change::
:tags: feature, operations
:tickets: 422
Added Table and Column level comments for supported backends.
New methods :meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` are added. New arguments
:paramref:`.Operations.alter_column.comment` and
:paramref:`.Operations.alter_column.existing_comment` are added to
:meth:`.Operations.alter_column`. Autogenerate support is also added
to ensure comment add/drops from tables and columns are generated as well
as that :meth:`.Operations.create_table`, :meth:`.Operations.add_column`
both include the comment field from the source :class:`.Table`
or :class:`.Column` object.
.. changelog::
:version: 1.0.5
:released: November 27, 2018
.. change::
:tags: bug, py3k
:tickets: 507
Resolved remaining Python 3 deprecation warnings, covering
the use of inspect.formatargspec() with a vendored version
copied from the Python standard library, importing
collections.abc above Python 3.3 when testing against abstract
base classes, fixed one occurrence of log.warn(), as well as a few
invalid escape sequences.
.. changelog::
:version: 1.0.4
:released: November 27, 2018
.. change::
:tags: change
Code hosting has been moved to GitHub, at
https://github.com/sqlalchemy/alembic. Additionally, the
main Alembic website documentation URL is now
https://alembic.sqlalchemy.org.
.. changelog::
:version: 1.0.3
:released: November 14, 2018
.. change::
:tags: bug, mssql
:tickets: 516
Fixed regression caused by :ticket:`513`, where the logic to consume
``mssql_include`` was not correctly interpreting the case where the flag
was not present, breaking the ``op.create_index`` directive for SQL Server
as a whole.
.. changelog::
:version: 1.0.2
:released: October 31, 2018
.. change::
:tags: bug, autogenerate
:tickets: 515
The ``system=True`` flag on :class:`.Column`, used primarily in conjunction
with the Postgresql "xmin" column, now renders within the autogenerate
render process, allowing the column to be excluded from DDL. Additionally,
adding a system=True column to a model will produce no autogenerate diff as
this column is implicitly present in the database.
.. change::
:tags: bug, mssql
:tickets: 513
Fixed issue where usage of the SQL Server ``mssql_include`` option within a
:meth:`.Operations.create_index` would raise a KeyError, as the additional
column(s) need to be added to the table object used by the construct
internally.
.. changelog::
:version: 1.0.1
:released: October 17, 2018
.. change::
:tags: bug, commands
:tickets: 497
Fixed an issue where revision descriptions were essentially
being formatted twice. For any revision description that contained
characters like %, writing output to stdout would fail because
the call to config.print_stdout attempted to format any
additional args passed to the function.
This fix now only applies string formatting if any args are provided
along with the output text.
.. change::
:tags: bug, autogenerate
:tickets: 512
Fixed issue where removed method ``union_update()`` was used when a
customized :class:`.MigrationScript` instance included entries in the
``.imports`` data member, raising an AttributeError.
.. changelog::
:version: 1.0.0
:released: July 13, 2018
.. change::
:tags: feature, general
:tickets: 491
For Alembic 1.0, Python 2.6 / 3.3 support is being dropped, allowing a
fixed setup.py to be built as well as universal wheels. Pull request
courtesy Hugo.
.. change::
:tags: feature, general
With the 1.0 release, Alembic's minimum SQLAlchemy support version
moves to 0.9.0, previously 0.7.9.
.. change::
:tags: bug, batch
:tickets: 502
Fixed issue in batch where dropping a primary key column, then adding it
back under the same name but without the primary_key flag, would not remove
it from the existing PrimaryKeyConstraint. If a new PrimaryKeyConstraint
is added, it is used as-is, as was the case before.
.. changelog::
:version: 0.9.10
:released: June 29, 2018
.. change::
:tags: bug, autogenerate
The "op.drop_constraint()" directive will now render using ``repr()`` for
the schema name, in the same way that "schema" renders for all the other op
directives. Pull request courtesy Denis Kataev.
.. change::
:tags: bug, autogenerate
:tickets: 494
Added basic capabilities for external dialects to support rendering of
"nested" types, like arrays, in a manner similar to that of the Postgresql
dialect.
.. change::
:tags: bug, autogenerate
Fixed issue where "autoincrement=True" would not render for a column that
specified it, since as of SQLAlchemy 1.1 this is no longer the default
value for "autoincrement". Note the behavior only takes effect against the
SQLAlchemy 1.1.0 and higher; for pre-1.1 SQLAlchemy, "autoincrement=True"
does not render as was the case before. Pull request courtesy Elad Almos.
.. changelog::
:version: 0.9.9
:released: March 22, 2018
.. change::
:tags: feature, commands
:tickets: 481
Added new flag ``--indicate-current`` to the ``alembic history`` command.
When listing versions, it will include the token "(current)" to indicate
the given version is a current head in the target database. Pull request
courtesy Kazutaka Mise.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 455
The fix for :ticket:`455` in version 0.9.6 involving MySQL server default
comparison was entirely non functional, as the test itself was also broken
and didn't reveal that it wasn't working. The regular expression to compare
server default values like CURRENT_TIMESTAMP to current_timestamp() is
repaired.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 483
Fixed bug where MySQL server default comparisons were basically not working
at all due to incorrect regexp added in :ticket:`455`. Also accommodates
for MariaDB 10.2 quoting differences in reporting integer based server
defaults.
.. change::
:tags: bug, operations, mysql
:tickets: 487
Fixed bug in ``op.drop_constraint()`` for MySQL where
quoting rules would not be applied to the constraint name.
.. changelog::
:version: 0.9.8
:released: February 16, 2018
.. change::
:tags: bug, runtime
:tickets: 482
Fixed bug where the :meth:`.Script.as_revision_number` method
did not accommodate for the 'heads' identifier, which in turn
caused the :meth:`.EnvironmentContext.get_head_revisions`
and :meth:`.EnvironmentContext.get_revision_argument` methods
to be unusable when multiple heads were present.
The :meth:`.EnvironmentContext.get_head_revisions` method returns
a tuple in all cases as documented.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 478
Fixed bug where autogenerate of :class:`.ExcludeConstraint`
would render a raw quoted name for a Column that has case-sensitive
characters, which when invoked as an inline member of the Table
would produce a stack trace that the quoted name is not found.
An incoming Column object is now rendered as ``sa.column('name')``.
.. change::
:tags: bug, autogenerate
:tickets: 468
Fixed bug where the indexes would not be included in a
migration that was dropping the owning table. The fix
now will also emit DROP INDEX for the indexes ahead of time,
but more importantly will include CREATE INDEX in the
downgrade migration.
.. change::
:tags: bug, postgresql
:tickets: 480
Fixed the autogenerate of the module prefix
when rendering the text_type parameter of
postgresql.HSTORE, in much the same way that
we do for ARRAY's type and JSON's text_type.
.. change::
:tags: bug, mysql
:tickets: 479
Added support for DROP CONSTRAINT to the MySQL Alembic
dialect to support MariaDB 10.2 which now has real
CHECK constraints. Note this change does **not**
add autogenerate support, only support for op.drop_constraint()
to work.
.. changelog::
:version: 0.9.7
:released: January 16, 2018
.. change::
:tags: bug, autogenerate
:tickets: 472
Fixed regression caused by :ticket:`421` which would
cause case-sensitive quoting rules to interfere with the
comparison logic for index names, thus causing indexes to show
as added for indexes that have case-sensitive names. Works with
SQLAlchemy 0.9 and later series.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 461
Fixed bug where autogenerate would produce a DROP statement for the index
implicitly created by a Postgresql EXCLUDE constraint, rather than skipping
it as is the case for indexes implicitly generated by unique constraints.
Makes use of SQLAlchemy 1.0.x's improved "duplicates index" metadata and
requires at least SQLAlchemy version 1.0.x to function correctly.
.. changelog::
:version: 0.9.6
:released: October 13, 2017
.. change::
:tags: bug, commands
:tickets: 458
Fixed a few Python3.6 deprecation warnings by replacing ``StopIteration``
with ``return``, as well as using ``getfullargspec()`` instead of
``getargspec()`` under Python 3.
.. change::
:tags: bug, commands
:tickets: 441
As an addition to :ticket:`441` fixed in 0.9.5, we forgot to also filter
for the ``+`` sign in migration names, which also breaks due to the relative
migrations feature.
.. change::
:tags: bug, autogenerate
:tickets: 442
Fixed bug expanding upon the fix for
:ticket:`85` which adds the correct module import to the
"inner" type for an ``ARRAY`` type, the fix now accommodates for the
generic ``sqlalchemy.types.ARRAY`` type added in SQLAlchemy 1.1,
rendering the inner type correctly regardless of whether or not the
Postgresql dialect is present.
.. change::
:tags: bug, mysql
:tickets: 455
Fixed bug where server default comparison of CURRENT_TIMESTAMP would fail
on MariaDB 10.2 due to a change in how the function is
represented by the database during reflection.
.. change::
:tags: bug, autogenerate
Fixed bug where comparison of ``Numeric`` types would produce
a difference if the Python-side ``Numeric`` inadvertently specified
a non-None "scale" with a "precision" of None, even though this ``Numeric``
type will pass over the "scale" argument when rendering. Pull request
courtesy Ivan Mmelnychuk.
.. change::
:tags: feature, commands
:tickets: 447
The ``alembic history`` command will now make use of the revision
environment ``env.py`` unconditionally if the ``revision_environment``
configuration flag is set to True. Previously, the environment would
only be invoked if the history specification were against a database-stored
revision token.
.. change::
:tags: bug, batch
:tickets: 457
The name of the temporary table in batch mode is now generated
off of the original table name itself, to avoid conflicts for the
unusual case of multiple batch operations running against the same
database schema at the same time.
.. change::
:tags: bug, autogenerate
:tickets: 456
A :class:`.ForeignKeyConstraint` can now render correctly if the
``link_to_name`` flag is set, as it will not attempt to resolve the name
from a "key" in this case. Additionally, the constraint will render
as-is even if the remote column name isn't present on the referenced
remote table.
.. change::
:tags: bug, runtime, py3k
:tickets: 449
Reworked "sourceless" system to be fully capable of handling any
combination of: Python2/3x, pep3149 or not, PYTHONOPTIMIZE or not,
for locating and loading both env.py files as well as versioning files.
This includes: locating files inside of ``__pycache__`` as well as listing
out version files that might be only in ``versions/__pycache__``, deduplicating
version files that may be in ``versions/__pycache__`` and ``versions/``
at the same time, correctly looking for .pyc or .pyo files based on
if pep488 is present or not. The latest Python3x deprecation warnings
involving importlib are also corrected.
.. changelog::
:version: 0.9.5
:released: August 9, 2017
.. change::
:tags: bug, commands
:tickets: 441
A :class:`.CommandError` is raised if the "--rev-id" passed to the
:func:`.revision` command contains dashes or at-signs, as this interferes
with the command notation used to locate revisions.
.. change::
:tags: bug, postgresql
:tickets: 424
Added support for the dialect-specific keyword arguments
to :meth:`.Operations.drop_index`. This includes support for
``postgresql_concurrently`` and others.
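For example (index and table names are hypothetical)::

    op.drop_index(
        "ix_user_email",
        table_name="user",
        postgresql_concurrently=True,
    )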
.. change::
:tags: bug, commands
Fixed bug in timezone feature introduced in
:ticket:`425` when the creation
date in a revision file is calculated, to
accommodate for timezone names that contain
mixed-case characters in their name as opposed
to all uppercase. Pull request courtesy Nils
Philippsen.
.. changelog::
:version: 0.9.4
:released: July 31, 2017
.. change::
:tags: bug, runtime
Added an additional attribute to the new
:paramref:`.EnvironmentContext.configure.on_version_apply` API,
:attr:`.MigrationInfo.up_revision_ids`, to accommodate for the uncommon
case of the ``alembic stamp`` command being used to move from multiple
branches down to a common branchpoint; there will be multiple
"up" revisions in this one case.
.. changelog::
:version: 0.9.3
:released: July 6, 2017
.. change::
:tags: feature, runtime
Added a new callback hook
:paramref:`.EnvironmentContext.configure.on_version_apply`,
which allows user-defined code to be invoked each time an individual
upgrade, downgrade, or stamp operation proceeds against a database.
Pull request courtesy John Passaro.
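A minimal sketch of the hook; it is invoked with keyword arguments such as
``ctx``, ``step``, ``heads`` and ``run_args``, and the logging body shown is
illustrative::

    import logging

    log = logging.getLogger(__name__)

    def on_version_apply(ctx, step, heads, run_args, **kw):
        # log each upgrade/downgrade/stamp step as it proceeds
        log.info("applying step: %s", step)

    context.configure(
        # ...
        on_version_apply=on_version_apply,
    )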
.. change:: 433
:tags: bug, autogenerate
:tickets: 433
Fixed bug where autogen comparison of a :class:`.Variant` datatype
would not compare to the dialect level type for the "default"
implementation of the :class:`.Variant`, returning the type as changed
between database and table metadata.
.. change:: 431
:tags: bug, tests
:tickets: 431
Fixed unit tests to run correctly under the SQLAlchemy 1.0.x series
prior to version 1.0.10 where a particular bug involving Postgresql
exclude constraints was fixed.
.. changelog::
:version: 0.9.2
:released: May 18, 2017
.. change:: 429
:tags: bug, mssql
:tickets: 429
Repaired :meth:`.Operations.rename_table` for SQL Server when the
target table is in a remote schema, the schema name is omitted from
the "new name" argument.
.. change:: 425
:tags: feature, commands
:tickets: 425
Added a new configuration option ``timezone``, a string timezone name
that will be applied to the create date timestamp rendered
inside the revision file as made available to the ``file_template`` used
to generate the revision filename. Note this change adds the
``python-dateutil`` package as a dependency.
.. change:: 421
:tags: bug, autogenerate
:tickets: 421
The autogenerate compare scheme now takes into account the name truncation
rules applied by SQLAlchemy's DDL compiler to the names of the
:class:`.Index` object, when these names are dynamically truncated
due to a too-long identifier name. As the identifier truncation is
deterministic, applying the same rule to the metadata name allows
correct comparison to the database-derived name.
.. change:: 419
:tags: bug, environment
:tickets: 419
A warning is emitted when an object that's not a
:class:`~sqlalchemy.engine.Connection` is passed to
:meth:`.EnvironmentContext.configure`. For the case of a
:class:`~sqlalchemy.engine.Engine` passed, the check for "in transaction"
introduced in version 0.9.0 has been relaxed to work in the case of an
attribute error, as some users appear to be passing an
:class:`~sqlalchemy.engine.Engine` and not a
:class:`~sqlalchemy.engine.Connection`.
.. changelog::
:version: 0.9.1
:released: March 1, 2017
.. change:: 417
:tags: bug, commands
:tickets: 417, 369
An adjustment to the bug fix for :ticket:`369` to accommodate for
env.py scripts that use an enclosing transaction distinct from the
one that the context provides, so that the check for "didn't commit
the transaction" doesn't trigger in this scenario.
.. changelog::
:version: 0.9.0
:released: February 28, 2017
.. change:: 38
:tags: feature, autogenerate
:tickets: 38
The :paramref:`.EnvironmentContext.configure.target_metadata` parameter
may now be optionally specified as a sequence of :class:`.MetaData`
objects instead of a single :class:`.MetaData` object. The
autogenerate process will process the sequence of :class:`.MetaData`
objects in order.
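For example, in ``env.py`` (the module names are hypothetical)::

    from myapp import model_one, model_two

    target_metadata = [model_one.metadata, model_two.metadata]

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
    )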
.. change:: 369
:tags: bug, commands
:tickets: 369
A :class:`.CommandError` is now raised when a migration file opens
a database transaction and does not close/commit/rollback, when
the backend database or environment options also specify transactional_ddl
is False. When transactional_ddl is not in use, Alembic doesn't
close any transaction so a transaction opened by a migration file
will cause the following migrations to fail to apply.
.. change:: 413
:tags: bug, autogenerate, mysql
:tickets: 413
The ``autoincrement=True`` flag is now rendered within the
:meth:`.Operations.alter_column` operation if the source column indicates
that this flag should be set to True. The behavior is sensitive to
the SQLAlchemy version in place, as the "auto" default option is new
in SQLAlchemy 1.1. When the source column indicates autoincrement
as True or "auto", the flag will render as True if the original column
contextually indicates that it should have "autoincrement" keywords,
and when the source column explicitly sets it to False, this is also
rendered. The behavior is intended to preserve the AUTO_INCREMENT flag
on MySQL as the column is fully recreated on this backend. Note that this
flag does **not** support alteration of a column's "autoincrement" status,
as this is not portable across backends.
.. change:: 411
:tags: bug, postgresql
:tickets: 411
Fixed bug where Postgresql JSON/JSONB types rendered on SQLAlchemy
1.1 would render the "astext_type" argument which defaults to
the ``Text()`` type without the module prefix, similarly to the
issue with ARRAY fixed in :ticket:`85`.
.. change:: 85
:tags: bug, postgresql
:tickets: 85
Fixed bug where Postgresql ARRAY type would not render the import prefix
for the inner type; additionally, user-defined renderers take place
for the inner type as well as the outer type. Pull request courtesy
Paul Brackin.
.. change:: process_revision_directives_command
:tags: feature, autogenerate
Added a keyword argument ``process_revision_directives`` to the
:func:`.command.revision` API call. This function acts in the
same role as the environment-level
:paramref:`.EnvironmentContext.configure.process_revision_directives`,
and allows API use of the
command to drop in an ad-hoc directive process function. This
function can be used among other things to place a complete
:class:`.MigrationScript` structure in place.
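A minimal sketch of this usage; the hook name ``process`` and the
message edit are illustrative only::

    from alembic import command
    from alembic.config import Config

    def process(context, revision, directives):
        # directives contains a MigrationScript structure which may
        # be inspected or rewritten here before it is written out
        script = directives[0]
        script.message = (script.message or "") + " [reviewed]"

    cfg = Config("alembic.ini")
    command.revision(
        cfg, message="add accounts",
        process_revision_directives=process)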
.. change:: 412
:tags: feature, postgresql
:tickets: 412
Added support for Postgresql EXCLUDE constraints, including the
operation directive :meth:`.Operations.create_exclude_constraints`
as well as autogenerate render support for the ``ExcludeConstraint``
object as present in a ``Table``. Autogenerate detection for an EXCLUDE
constraint added or removed to/from an existing table is **not**
implemented as the SQLAlchemy Postgresql dialect does not yet support
reflection of EXCLUDE constraints.
Additionally, unknown constraint types now warn when
encountered within an autogenerate action rather than raise.
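A sketch of the new directive; the table, column, and operator choices
here are illustrative::

    from alembic import op

    op.create_exclude_constraint(
        "user_excl",
        "user",
        ("period", "&&"),
        ("group", "="),
        where=("group != 'some group'"),
    )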
.. change:: fk_schema_compare
:tags: bug, operations
Fixed bug in :func:`.ops.create_foreign_key` where the internal table
representation would not be created properly if the foreign key referred
to a table in a different schema of the same name. Pull request
courtesy Konstantin Lebedev.
.. changelog::
:version: 0.8.10
:released: January 17, 2017
.. change:: 406
:tags: bug, versioning
:tickets: 406
The alembic_version table, when initially created, now establishes a
primary key constraint on the "version_num" column, to suit database
engines that don't support tables without primary keys. This behavior
can be controlled using the parameter
:paramref:`.EnvironmentContext.configure.version_table_pk`. Note that
this change only applies to the initial creation of the alembic_version
table; it does not impact any existing alembic_version table already
present.
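A sketch of disabling the constraint within an ``env.py``;
``connection`` and ``target_metadata`` are assumed to be set up as in
the standard template::

    from alembic import context

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        # omit the primary key on alembic_version, as before
        version_table_pk=False,
    )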
.. change:: 402
:tags: bug, batch
:tickets: 402
Fixed bug where doing ``batch_op.drop_constraint()`` against the
primary key constraint would fail to remove the "primary_key" flag
from the column, resulting in the constraint being recreated.
.. change:: update_uq_dedupe
:tags: bug, autogenerate, oracle
Adjusted the logic originally added for :ticket:`276` that detects MySQL
unique constraints which are actually unique indexes to be generalized
for any dialect that has this behavior, for SQLAlchemy version 1.0 and
greater. This is to allow for upcoming SQLAlchemy support for unique
constraint reflection for Oracle, which also has no dedicated concept of
"unique constraint" and instead establishes a unique index.
.. change:: 356
:tags: bug, versioning
:tickets: 356
Added a file ignore for Python files of the form ``.#<name>.py``,
which are generated by the Emacs editor. Pull request courtesy
Markus Mattes.
.. changelog::
:version: 0.8.9
:released: November 28, 2016
.. change:: 393
:tags: bug, autogenerate
:tickets: 393
Adjustment to the "please adjust!" comment in the script.py.mako
template so that the generated comment starts with a single pound
sign, appeasing flake8.
.. change::
:tags: bug, batch
:tickets: 391
Batch mode will not use CAST() to copy data if ``type_`` is given, provided
the basic type affinity matches that of the existing type. This is to
avoid SQLite's CAST of TIMESTAMP, which results in truncation of the
data, in those cases where the user needs to add redundant ``type_`` for
other reasons.
.. change::
:tags: bug, autogenerate
:tickets: 393
Continued pep8 improvements by adding appropriate whitespace in
the base template for generated migrations. Pull request courtesy
Markus Mattes.
.. change::
:tags: bug, revisioning
Added an additional check when reading in revision files to detect
if the same file is being read twice; this can occur if the same directory
or a symlink equivalent is present more than once in version_locations.
A warning is now emitted and the file is skipped. Pull request courtesy
Jiri Kuncar.
.. change::
:tags: bug, autogenerate
:tickets: 395
Fixed bug where usage of a custom TypeDecorator which returns a
per-dialect type via :meth:`.TypeDecorator.load_dialect_impl` that differs
significantly from the default "impl" for the type decorator would fail
to compare correctly during autogenerate.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 392
Fixed bug in Postgresql "functional index skip" behavior where a
functional index that ended in ASC/DESC wouldn't be detected as something
we can't compare in autogenerate, leading to duplicate definitions
in autogenerated files.
.. change::
:tags: bug, versioning
Fixed bug where the "base" specifier, as in "base:head", could not
be used explicitly when ``--sql`` mode was present.
.. changelog::
:version: 0.8.8
:released: September 12, 2016
.. change::
:tags: autogenerate
The imports in the default script.py.mako are now at the top
so that flake8 editors don't complain by default. PR courtesy
Guilherme Mansur.
.. change::
:tags: feature, operations, postgresql
:tickets: 292
Added support for the USING clause to the ALTER COLUMN operation
for Postgresql. Support is via the
:paramref:`.op.alter_column.postgresql_using`
parameter. Pull request courtesy Frazer McLean.
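A sketch of the parameter in use; the table, column, and USING
expression are illustrative::

    import sqlalchemy as sa
    from alembic import op

    op.alter_column(
        "account",
        "registered",
        type_=sa.DateTime(),
        # passed through verbatim into ALTER COLUMN ... USING
        postgresql_using="registered::timestamp",
    )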
.. change::
:tags: feature, autogenerate
Autogenerate with type comparison enabled will pick up on the timezone
setting changing between DateTime types. Pull request courtesy
David Szotten.
.. changelog::
:version: 0.8.7
:released: July 26, 2016
.. change::
:tags: bug, versioning
:tickets: 336
Fixed bug where upgrading to the head of a branch which is already
present would fail, only if that head were also the dependency
of a different branch that is also upgraded, as the revision system
would see this as trying to go in the wrong direction. The check
here has been refined to distinguish between same-branch revisions
out of order vs. movement along sibling branches.
.. change::
:tags: bug, versioning
:tickets: 379
Adjusted the version traversal on downgrade
such that we can downgrade to a version that is a dependency for
a version in a different branch, *without* needing to remove that
dependent version as well. Previously, the target version would be
seen as a "merge point" for its normal up-revision as well as the
dependency. This integrates with the changes for :ticket:`377`
and :ticket:`378` to improve treatment of branches with dependencies
overall.
.. change::
:tags: bug, versioning
:tickets: 377
Fixed bug where a downgrade to a version that is also a dependency
to a different branch would fail, as the system attempted to treat
this as an "unmerge" of a merge point, when in fact it doesn't have
the other side of the merge point available for update.
.. change::
:tags: bug, versioning
:tickets: 378
Fixed bug where the "alembic current" command wouldn't show a revision
as a current head if it were also a dependency of a version in a
different branch that's also applied. Extra logic is added to
extract "implied" versions of different branches from the top-level
versions listed in the alembic_version table.
.. change::
:tags: bug, versioning
Fixed bug where a repr() or str() of a Script object would fail
if the script had multiple dependencies.
.. change::
:tags: bug, autogenerate
Fixed bug in autogen where if the DB connection sends the default
schema as "None", this "None" would be removed from the list of
schemas to check if include_schemas were set. This could possibly
impact using include_schemas with SQLite.
.. change::
:tags: bug, batch
Small adjustment made to the batch handling for reflected CHECK
constraints to accommodate for SQLAlchemy 1.1 now reflecting these.
Batch mode still does not support CHECK constraints from the reflected
table as these can't be easily differentiated from the ones created
by types such as Boolean.
.. changelog::
:version: 0.8.6
:released: April 14, 2016
.. change::
:tags: bug, commands
:tickets: 367
Errors which occur within the Mako render step are now intercepted
and raised as CommandErrors like other failure cases; the Mako
exception itself is written using template-line formatting to
a temporary file which is named in the exception message.
.. change::
:tags: bug, postgresql
:tickets: 365
Added a fix to Postgresql server default comparison which first checks
if the text of the default is identical to the original, before attempting
to actually run the default. This accommodates for default-generation
functions that generate a new value each time such as a uuid function.
.. change::
:tags: bug, batch
:tickets: 361
Fixed bug introduced by the fix for :ticket:`338` in version 0.8.4
where a server default could no longer be dropped in batch mode.
Pull request courtesy Martin Domke.
.. change::
:tags: bug, batch, mssql
Fixed bug where SQL Server arguments for drop_column() would not
be propagated when running under a batch block. Pull request
courtesy Michal Petrucha.
.. changelog::
:version: 0.8.5
:released: March 9, 2016
.. change::
:tags: bug, autogenerate
:tickets: 335
Fixed bug where the columns rendered in a ``PrimaryKeyConstraint``
in autogenerate would inappropriately render the "key" of the
column, not the name. Pull request courtesy Jesse Dhillon.
.. change::
:tags: bug, batch
:tickets: 354
Repaired batch migration support for "schema" types which generate
constraints, in particular the ``Boolean`` datatype which generates
a CHECK constraint. Previously, an alter column operation with this
type would fail to correctly accommodate for the CHECK constraint
on change both from and to this type. In the former case the operation
would fail entirely, in the latter, the CHECK constraint would
not get generated. Both of these issues are repaired.
.. change::
:tags: bug, mysql
:tickets: 355
Changing a schema type such as ``Boolean`` to a non-schema type would
emit a drop constraint operation which emits ``NotImplementedError`` for
the MySQL dialect. This drop constraint operation is now skipped when
the constraint originates from a schema type.
.. changelog::
:version: 0.8.4
:released: December 15, 2015
.. change::
:tags: feature, versioning
A major improvement to the hash id generation function, which for some
reason used an awkward arithmetic formula against uuid4() that produced
values that tended to start with the digits 1-4. Replaced with a
simple substring approach which provides an even distribution. Pull
request courtesy Antti Haapala.
.. change::
:tags: feature, autogenerate
Added an autogenerate renderer for the :class:`.ExecuteSQLOp` operation
object; only renders if given a plain SQL string, otherwise raises
NotImplementedError. Can be of help with custom autogenerate
sequences that includes straight SQL execution. Pull request courtesy
Jacob Magnusson.
.. change::
:tags: bug, batch
:tickets: 345
Batch mode generates a FOREIGN KEY constraint that is self-referential
using the ultimate table name, rather than ``_alembic_batch_temp``.
When the table is renamed from ``_alembic_batch_temp`` back to the
original name, the FK now points to the right name. This
will **not** work if referential integrity is being enforced (eg. SQLite
"PRAGMA FOREIGN_KEYS=ON") since the original table is dropped and
the new table then renamed to that name, however this is now consistent
with how foreign key constraints on **other** tables already operate
with batch mode; these don't support batch mode if referential integrity
is enabled in any case.
.. change::
:tags: bug, autogenerate
:tickets: 341
Added a type-level comparator that distinguishes :class:`.Integer`,
:class:`.BigInteger`, and :class:`.SmallInteger` types and
dialect-specific types; these all have "Integer" affinity so previously
all compared as the same.
.. change::
:tags: bug, batch
:tickets: 338
Fixed bug where the ``server_default`` parameter of ``alter_column()``
would not function correctly in batch mode.
.. change::
:tags: bug, autogenerate
:tickets: 337
Adjusted the rendering for index expressions such that a :class:`.Column`
object present in the source :class:`.Index` will not be rendered
as table-qualified; e.g. the column name will be rendered alone.
Table-qualified names here were failing on systems such as Postgresql.
.. changelog::
:version: 0.8.3
:released: October 16, 2015
.. change::
:tags: bug, autogenerate
:tickets: 332
Fixed an 0.8 regression whereby the "imports" dictionary member of
the autogen context was removed; this collection is documented in the
"render custom type" documentation as a place to add new imports.
The member is now known as
:attr:`.AutogenContext.imports` and the documentation is repaired.
.. change::
:tags: bug, batch
:tickets: 333
Fixed bug in batch mode where a table that had pre-existing indexes
would create the same index on the new table with the same name,
which on SQLite produces a naming conflict as index names are in a
global namespace on that backend. Batch mode now defers the production
of both existing and new indexes until after the entire table transfer
operation is complete, which also means those indexes no longer take
effect during the INSERT from SELECT section as well; the indexes
are applied in a single step afterwards.
.. change::
:tags: bug, tests
Added "pytest-xdist" as a tox dependency, so that the -n flag
in the test command works if this is not already installed.
Pull request courtesy Julien Danjou.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 324
Fixed issue in PG server default comparison where model-side defaults
configured with Python unicode literals would leak the "u" character
from a ``repr()`` into the SQL used for comparison, creating an invalid
SQL expression, as the server-side comparison feature in PG currently
repurposes the autogenerate Python rendering feature to get a quoted
version of a plain string default.
.. changelog::
:version: 0.8.2
:released: August 25, 2015
.. change::
:tags: bug, autogenerate
:tickets: 321
Added workaround in new foreign key option detection feature for
MySQL's consideration of the "RESTRICT" option being the default,
for which no value is reported from the database; the MySQL impl now
corrects for when the model reports RESTRICT but the database reports
nothing. A similar rule is in the default FK comparison to accommodate
for the default "NO ACTION" setting being present in the model but not
necessarily reported by the database, or vice versa.
.. changelog::
:version: 0.8.1
:released: August 22, 2015
.. change::
:tags: feature, autogenerate
A custom :paramref:`.EnvironmentContext.configure.process_revision_directives`
hook can now generate op directives within the :class:`.UpgradeOps`
and :class:`.DowngradeOps` containers that will be generated as Python
code even when the ``--autogenerate`` flag is False; provided that
``revision_environment=True``, the full render operation will be run
even in "offline" mode.
.. change::
:tags: bug, autogenerate
Repaired the render operation for the :class:`.ops.AlterColumnOp` object
to succeed when the "existing_type" field was not present.
.. change::
:tags: bug, autogenerate
:tickets: 318
Fixed a regression in 0.8 whereby the "multidb" environment template
failed to produce independent migration script segments for the
output template. This was due to the reorganization of the script
rendering system for 0.8. To accommodate this change, the
:class:`.MigrationScript` structure will in the case of multiple
calls to :meth:`.MigrationContext.run_migrations` produce lists
for the :attr:`.MigrationScript.upgrade_ops` and
:attr:`.MigrationScript.downgrade_ops` attributes; each :class:`.UpgradeOps`
and :class:`.DowngradeOps` instance keeps track of its own
``upgrade_token`` and ``downgrade_token``, and each are rendered
individually.
.. seealso::
:ref:`autogen_customizing_multiengine_revision` - additional detail
on the workings of the
:paramref:`.EnvironmentContext.configure.process_revision_directives`
parameter when multiple calls to :meth:`.MigrationContext.run_migrations`
are made.
.. change::
:tags: feature, autogenerate
:tickets: 317
Implemented support for autogenerate detection of changes in the
``ondelete``, ``onupdate``, ``initially`` and ``deferrable``
attributes of :class:`.ForeignKeyConstraint` objects on
SQLAlchemy backends that support these on reflection
(as of SQLAlchemy 1.0.8 currently Postgresql for all four,
MySQL for ``ondelete`` and ``onupdate`` only). A constraint object
that modifies these values will be reported as a "diff" and come out
as a drop/create of the constraint with the modified values.
The fields are ignored for backends which don't reflect these
attributes (as of SQLA 1.0.8 this includes SQLite, Oracle, SQL Server,
others).
.. changelog::
:version: 0.8.0
:released: August 12, 2015
.. change::
:tags: bug, batch
:tickets: 315
Fixed bug in batch mode where the ``batch_op.create_foreign_key()``
directive would be incorrectly rendered with the source table and
schema names in the argument list.
.. change::
:tags: feature, commands
Added new command ``alembic edit``. This command takes the same
arguments as ``alembic show``, however runs the target script
file within $EDITOR. Makes use of the ``python-editor`` library
in order to facilitate the handling of $EDITOR with reasonable
default behaviors across platforms. Pull request courtesy
Michel Albert.
.. change::
:tags: feature, commands
:tickets: 311
Added new multiple-capable argument ``--depends-on`` to the
``alembic revision`` command, allowing ``depends_on`` to be
established at the command line level rather than having to edit
the file after the fact. ``depends_on`` identifiers may also be
specified as branch names at the command line or directly within
the migration file. The values may be specified as partial
revision numbers from the command line which will be resolved to
full revision numbers in the output file.
.. change::
:tags: change, operations
A range of positional argument names have been changed to be
clearer and more consistent across methods within the
:class:`.Operations` namespace. The most prevalent form of name change
is that the descriptive names ``constraint_name`` and ``table_name``
are now used where previously the name ``name`` would be used.
This is in support of the newly modularized and extensible system of
operation objects in :mod:`alembic.operations.ops`.
An argument translation layer is in place
across the ``alembic.op`` namespace that will ensure that named
argument calling styles that use the old names will continue to
function by transparently translating to the new names,
also emitting a warning. This, along with the fact that these
arguments are positional in any case and aren't normally
passed with an explicit name, should ensure that the
overwhelming majority of applications should be unaffected by this
change. The *only* applications that are impacted are those that:
1. use the :class:`.Operations` object directly in some way, rather
than calling upon the ``alembic.op`` namespace, and
2. invoke the methods on :class:`.Operations` using named keyword
arguments for positional arguments like ``table_name``,
``constraint_name``, etc., which commonly were named ``name``
as of 0.7.6.
3. any application that is using named keyword arguments in place
of positional argument for the recently added
:class:`.BatchOperations` object may also be affected.
The naming changes are documented as "versionchanged" for 0.8.0:
* :meth:`.BatchOperations.create_check_constraint`
* :meth:`.BatchOperations.create_foreign_key`
* :meth:`.BatchOperations.create_index`
* :meth:`.BatchOperations.create_unique_constraint`
* :meth:`.BatchOperations.drop_constraint`
* :meth:`.BatchOperations.drop_index`
* :meth:`.Operations.create_check_constraint`
* :meth:`.Operations.create_foreign_key`
* :meth:`.Operations.create_primary_key`
* :meth:`.Operations.create_index`
* :meth:`.Operations.create_table`
* :meth:`.Operations.create_unique_constraint`
* :meth:`.Operations.drop_constraint`
* :meth:`.Operations.drop_index`
* :meth:`.Operations.drop_table`
.. change::
:tags: feature, tests
The default test runner via "python setup.py test" is now py.test.
nose still works via run_tests.py.
.. change::
:tags: feature, operations
:tickets: 302
The internal system for Alembic operations has been reworked to now
build upon an extensible system of operation objects. New operations
can be added to the ``op.`` namespace, including making them
available in custom autogenerate schemes.
.. seealso::
:ref:`operation_plugins`
.. change::
:tags: feature, autogenerate
:tickets: 301, 306
The internal system for autogenerate has been reworked to build upon
the extensible system of operation objects present in
:ticket:`302`. As part of this change, autogenerate now produces
a full object graph representing a list of migration scripts to
be written as well as operation objects that will render all the
Python code within them; a new hook
:paramref:`.EnvironmentContext.configure.process_revision_directives`
allows end-user code to fully customize what autogenerate will do,
including not just full manipulation of the Python steps to take
but also what file or files will be written and where. Additionally,
autogenerate is now extensible as far as database objects compared
and rendered into scripts; any new operation directive can also be
registered into a series of hooks that allow custom database/model
comparison functions to run as well as to render new operation
directives into autogenerate scripts.
.. seealso::
:ref:`alembic.autogenerate.toplevel`
.. change::
:tags: bug, versioning
:tickets: 314
Fixed bug where in the erroneous case that alembic_version contains
duplicate revisions, some commands would fail to process the
version history correctly and end up with a KeyError. The fix
allows the versioning logic to proceed, however a clear error is
emitted later when attempting to update the alembic_version table.
.. changelog::
:version: 0.7.7
:released: July 22, 2015
.. change::
:tags: bug, versioning
:tickets: 310
Fixed critical issue where a complex series of branches/merges would
bog down the iteration algorithm working over redundant nodes for
millions of cycles. An internal adjustment has been
made so that duplicate nodes are skipped within this iteration.
.. change::
:tags: feature, batch
:tickets: 305
Implemented support for :meth:`.BatchOperations.create_primary_key`
and :meth:`.BatchOperations.create_check_constraint`. Additionally,
table keyword arguments are copied from the original reflected table,
such as the "mysql_engine" keyword argument.
.. change::
:tags: bug, environment
:tickets: 300
The :meth:`.MigrationContext.stamp` method, added as part of the
versioning refactor in 0.7 as a more granular version of
:func:`.command.stamp`, now includes the "create the alembic_version
table if not present" step in the same way as the command version,
which was previously omitted.
.. change::
:tags: bug, autogenerate
:tickets: 298
Fixed bug where foreign key options including "onupdate",
"ondelete" would not render within the ``op.create_foreign_key()``
directive, even though they render within a full
``ForeignKeyConstraint`` directive.
.. change::
:tags: bug, tests
Repaired warnings that occur when running unit tests against
SQLAlchemy 1.0.5 or greater involving the "legacy_schema_aliasing"
flag.
.. changelog::
:version: 0.7.6
:released: May 5, 2015
.. change::
:tags: feature, versioning
:tickets: 297
Fixed bug where the case of multiple mergepoints that all
have the identical set of ancestor revisions would fail to be
upgradable, producing an assertion failure. Merge points were
previously assumed to always require at least an UPDATE in
alembic_version from one of the previous revs to the new one;
however in this case, if one of the mergepoints has already
been reached, the remaining mergepoints have no row to UPDATE, and
therefore must do an INSERT of their target version.
.. change::
:tags: feature, autogenerate
:tickets: 296
Added support for type comparison functions to be not just per
environment, but also present on the custom types themselves, by
supplying a method ``compare_against_backend``.
Added a new documentation section :ref:`compare_types` describing
type comparison fully.
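A sketch of supplying the method on a custom type; the comparison rule
shown is illustrative::

    import sqlalchemy as sa
    from sqlalchemy.types import TypeDecorator

    class MySpecialType(TypeDecorator):
        impl = sa.String

        def compare_against_backend(self, dialect, conn_type):
            # True -> types match; False -> they differ;
            # None -> fall back to the default comparison
            return isinstance(conn_type, sa.String)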
.. change::
:tags: feature, operations
:tickets: 255
Added a new option
:paramref:`.EnvironmentContext.configure.literal_binds`, which
will pass the ``literal_binds`` flag into the compilation of SQL
constructs when using "offline" mode. This has the effect that
SQL objects like inserts, updates, deletes as well as textual
statements sent using ``text()`` will be compiled such that the dialect
will attempt to render literal values "inline" automatically.
Only a subset of types is typically supported; the
:meth:`.Operations.inline_literal` construct remains as the construct
used to force a specific literal representation of a value.
The :paramref:`.EnvironmentContext.configure.literal_binds` flag
is added to the "offline" section of the ``env.py`` files generated
in new environments.
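In the offline section of a generated ``env.py``, this looks
approximately like the following; ``config`` and ``target_metadata``
are the usual template-level names::

    from alembic import context

    def run_migrations_offline():
        url = config.get_main_option("sqlalchemy.url")
        context.configure(
            url=url,
            target_metadata=target_metadata,
            # render bound parameters inline in the emitted SQL
            literal_binds=True,
        )
        with context.begin_transaction():
            context.run_migrations()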
.. change::
:tags: bug, batch
:tickets: 289
Fully implemented the
:paramref:`~.Operations.batch_alter_table.copy_from` parameter for
batch mode, which previously was not functioning. This allows
"batch mode" to be usable in conjunction with ``--sql``.
.. change::
:tags: bug, batch
:tickets: 287
Repaired support for the :meth:`.BatchOperations.create_index`
directive, which was mis-named internally such that the operation
within a batch context could not proceed. The create index
operation will proceed as part of a larger "batch table recreate"
operation only if
:paramref:`~.Operations.batch_alter_table.recreate` is set to
"always", or if the batch operation includes other instructions that
require a table recreate.
.. changelog::
:version: 0.7.5
:released: March 19, 2015
.. change::
:tags: bug, autogenerate
:tickets: 266
The ``--autogenerate`` option is not valid when used in conjunction
with "offline" mode, e.g. ``--sql``. This now raises a ``CommandError``,
rather than failing more deeply later on. Pull request courtesy
Johannes Erdfelt.
.. change::
:tags: bug, operations, mssql
:tickets: 284
Fixed bug where the mssql DROP COLUMN directive failed to include
modifiers such as "schema" when emitting the DDL.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 282
Postgresql "functional" indexes are necessarily skipped from the
autogenerate process, as the SQLAlchemy backend currently does not
support reflection of these structures. A warning is emitted
both from the SQLAlchemy backend as well as from the Alembic
backend for Postgresql when such an index is detected.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 276
Fixed bug where MySQL backend would report dropped unique indexes
and/or constraints as both at the same time. This is because
MySQL doesn't actually have a "unique constraint" construct that
reports differently than a "unique index", so it is present in both
lists. The net effect though is that the MySQL backend will report
a dropped unique index/constraint as an index in cases where the object
was first created as a unique constraint, if no other information
is available to make the decision. This differs from other backends
like Postgresql which can report on unique constraints and
unique indexes separately.
.. change::
:tags: bug, commands
:tickets: 269
Fixed bug where using a partial revision identifier as the
"starting revision" in ``--sql`` mode in a downgrade operation
would fail to resolve properly.
As a side effect of this change, the
:meth:`.EnvironmentContext.get_starting_revision_argument`
method will return the "starting" revision in its originally-
given "partial" form in all cases, whereas previously when
running within the :meth:`.command.stamp` command, it would have
been resolved to a full number before passing it to the
:class:`.EnvironmentContext`. The resolution of this value to
a real revision number has basically been moved to a more fundamental
level within the offline migration process.
.. change::
:tags: feature, commands
Added a new feature :attr:`.Config.attributes`, to help with the use
case of sharing state such as engines and connections on the outside
with a series of Alembic API calls; also added a new cookbook section
to describe this simple but pretty important use case.
.. seealso::
:ref:`connection_sharing`
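A sketch of the connection-sharing pattern; the database URL and the
attribute key ``"connection"`` are illustrative, with the ``env.py``
expected to retrieve the connection via
``config.attributes.get("connection")``::

    from alembic import command
    from alembic.config import Config
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    with engine.begin() as connection:
        cfg = Config("alembic.ini")
        cfg.attributes["connection"] = connection
        command.upgrade(cfg, "head")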
.. change::
:tags: feature, environment
The format of the default ``env.py`` script has been refined a bit;
it now uses context managers not only for the scope of the transaction,
but also for connectivity from the starting engine. The engine is also
now called a "connectable" in support of the use case of an external
connection being passed in.
.. change::
:tags: feature, versioning
:tickets: 267
Added support for "alembic stamp" to work when given "heads" as an
argument, when multiple heads are present.
.. changelog::
:version: 0.7.4
:released: January 12, 2015
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 241
Repaired issue where a server default specified without ``text()``
that represented a numeric or floating point (e.g. with decimal places)
value would fail in the Postgresql-specific check for "compare server
default"; as PG accepts the value with quotes in the table specification,
it's still valid. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug, autogenerate
:tickets: 259
The rendering of a :class:`~sqlalchemy.schema.ForeignKeyConstraint`
will now ensure that the names of the source and target columns are
the database-side name of each column, and not the value of the
``.key`` attribute as may be set only on the Python side.
This is because Alembic generates the DDL for constraints
as standalone objects without the need to actually refer to an in-Python
:class:`~sqlalchemy.schema.Table` object, so there's no step that
would resolve these Python-only key names to database column names.
.. change::
:tags: bug, autogenerate
:tickets: 260
Fixed bug in foreign key autogenerate where if the in-Python table
used custom column keys (e.g. using the ``key='foo'`` kwarg to
``Column``), the comparison of existing foreign keys to those specified
in the metadata would fail, as the reflected table would not have
these keys available which to match up. Foreign key comparison for
autogenerate now ensures it's looking at the database-side names
of the columns in all cases; this matches the same functionality
within unique constraints and indexes.
.. change::
:tags: bug, autogenerate
:tickets: 261
Fixed issue in autogenerate type rendering where types that belong
to modules that have the name "sqlalchemy" in them would be mistaken
as being part of the ``sqlalchemy.`` namespace. Pull req courtesy
Bartosz Burclaf.
.. changelog::
:version: 0.7.3
:released: December 30, 2014
.. change::
:tags: bug, versioning
:tickets: 258
Fixed regression in new versioning system where upgrade / history
operation would fail on AttributeError if no version files were
present at all.
.. changelog::
:version: 0.7.2
:released: December 18, 2014
.. change::
:tags: bug, sqlite, autogenerate
Adjusted the SQLite backend regarding autogen of unique constraints
to work fully with the current SQLAlchemy 1.0, which now will report
on UNIQUE constraints that have no name.
.. change::
:tags: bug, batch
:tickets: 254
Fixed bug in batch where if the target table contained multiple
foreign keys to the same target table, the batch mechanics would
fail with a "table already exists" error. Thanks for the help
on this from Lucas Kahlert.
.. change::
:tags: bug, mysql
:tickets: 251
Fixed an issue where the MySQL routine to skip foreign-key-implicit
indexes would also catch unnamed unique indexes, as they would be
named after the column and look like the FK indexes. Pull request
courtesy Johannes Erdfelt.
.. change::
:tags: bug, mssql, oracle
:tickets: 253
Repaired a regression in both the MSSQL and Oracle dialects whereby
the overridden ``_exec()`` method failed to return a value, as is
needed now in the 0.7 series.
.. changelog::
:version: 0.7.1
:released: December 3, 2014
.. change::
:tags: bug, batch
The ``render_as_batch`` flag was inadvertently hardcoded to ``True``,
so all autogenerate runs were emitting batch-mode scripts; this has been
fixed so that batch mode is again used only when selected in ``env.py``.
.. change::
:tags: feature, autogenerate
:tickets: 178
Support for autogenerate of FOREIGN KEY constraints has been added.
These are delivered within the autogenerate process in the same
manner as UNIQUE constraints, including ``include_object`` support.
Big thanks to Ann Kamyshnikova for doing the heavy lifting here.
.. change::
:tags: feature, batch
Added :paramref:`~.Operations.batch_alter_table.naming_convention`
argument to :meth:`.Operations.batch_alter_table`, as this is necessary
in order to drop foreign key constraints; these are often unnamed
on the target database, and in the case that they are named, SQLAlchemy
is as of the 0.9 series not including these names yet.
.. seealso::
:ref:`dropping_sqlite_foreign_keys`
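A sketch of the argument in use; the convention and constraint name are
illustrative::

    from alembic import op

    naming_convention = {
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    }
    with op.batch_alter_table(
        "bar", naming_convention=naming_convention
    ) as batch_op:
        # the convention supplies the name needed to drop the
        # otherwise-unnamed foreign key on the SQLite side
        batch_op.drop_constraint(
            "fk_bar_foo_id_foo", type_="foreignkey")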
.. change::
:tags: bug, batch
Fixed bug where the "source_schema" argument was not correctly passed
when calling :meth:`.BatchOperations.create_foreign_key`. Pull
request courtesy Malte Marquarding.
.. change::
:tags: bug, batch
:tickets: 249
Repaired the inspection, copying and rendering of CHECK constraints
and so-called "schema" types such as Boolean, Enum within the batch
copy system; the CHECK constraint will not be "doubled" when the table is
copied, and additionally the inspection of the CHECK constraint for
its member columns will no longer fail with an attribute error.
.. change::
:tags: feature, batch
Added two new arguments
:paramref:`.Operations.batch_alter_table.reflect_args`
and :paramref:`.Operations.batch_alter_table.reflect_kwargs`, so that
arguments may be passed directly to suit the
:class:`~.sqlalchemy.schema.Table`
object that will be reflected.
.. seealso::
:ref:`batch_controlling_table_reflection`
.. changelog::
:version: 0.7.0
:released: November 24, 2014
.. change::
:tags: feature, versioning
:tickets: 167
The "multiple heads / branches" feature has now landed. This is
by far the most significant change Alembic has seen since its inception;
while the workflow of most commands hasn't changed, and the format
of version files and the ``alembic_version`` table are unchanged as well,
a new suite of features opens up in the case where multiple version
files refer to the same parent, or to the "base". Merging of
branches, operating across distinct named heads, and multiple
independent bases are now all supported. The feature incurs radical
changes to the internals of versioning and traversal, and should be
treated as "beta mode" for the next several releases
within 0.7.
.. seealso::
:ref:`branches`
.. change::
:tags: feature, versioning
:tickets: 124
In conjunction with support for multiple independent bases, the
specific version directories are now also configurable to include
multiple, user-defined directories. When multiple directories exist,
the creation of a revision file with no down revision requires
that the starting directory is indicated; the creation of subsequent
revisions along that lineage will then automatically use that
directory for new files.
.. seealso::
:ref:`multiple_version_directories`
.. change::
:tags: feature, operations, sqlite
:tickets: 21
Added "move and copy" workflow, where a table to be altered is copied to
a new one with the new structure and the old one dropped, is now
implemented for SQLite as well as all database backends in general
using the new :meth:`.Operations.batch_alter_table` system. This
directive provides a table-specific operations context which gathers
column- and constraint-level mutations specific to that table, and
at the end of the context creates a new table combining the structure
of the old one with the given changes, copies data from old table to new,
and finally drops the old table,
renaming the new one to the existing name. This is required for
fully featured SQLite migrations, as SQLite has very little support for the
traditional ALTER directive. The batch directive
is intended to produce code that is still compatible with other databases,
in that the "move and copy" process only occurs for SQLite by default,
while still providing some level of sanity to SQLite's
requirement by allowing multiple table mutation operations to
proceed within one "move and copy" as well as providing explicit
control over when this operation actually occurs. The "move and copy"
feature may be optionally applied to other backends as well, however
dealing with referential integrity constraints from other tables must
still be handled explicitly.
.. seealso::
:ref:`batch_migrations`
.. change::
:tags: feature, commands
Relative revision identifiers as used with ``alembic upgrade``,
``alembic downgrade`` and ``alembic history`` can be combined with
specific revisions as well, e.g. ``alembic upgrade ae10+3``, to produce
a migration target relative to the given exact version.
.. change::
:tags: bug, commands
:tickets: 248
The ``alembic revision`` command accepts the ``--sql`` option to
suit some very obscure use case where the ``revision_environment``
flag is set up, so that ``env.py`` is run when ``alembic revision``
is run even though autogenerate isn't specified. As this flag is
otherwise confusing, error messages are now raised if
``alembic revision`` is invoked with both ``--sql`` and
``--autogenerate`` or with ``--sql`` without
``revision_environment`` being set.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 247
Added a rule for Postgresql to not render a "drop unique" and "drop index"
given the same name; for now it is assumed that the "index" is the
implicit one Postgresql generates. Future integration with
new SQLAlchemy 1.0 features will improve this to be more
resilient.
.. change::
:tags: bug, autogenerate
:tickets: 247
A change in the ordering when columns and constraints are dropped;
autogenerate will now place the "drop constraint" calls *before*
the "drop column" calls, so that columns involved in those constraints
still exist when the constraint is dropped.
.. change::
:tags: feature, commands
New commands added: ``alembic show``, ``alembic heads`` and
``alembic merge``. Also, a new option ``--verbose`` has been
added to several informational commands, such as ``alembic history``,
``alembic current``, ``alembic branches``, and ``alembic heads``.
``alembic revision`` also contains several new options used
within the new branch management system. The output of commands has
been altered in many cases to support new fields and attributes;
the ``history`` command in particular now returns its "verbose" output
only if ``--verbose`` is sent; without this flag it reverts to its
older behavior of short line items (which was never changed in the docs).
.. change::
:tags: changed, commands
The ``--head_only`` option to the ``alembic current`` command is
deprecated; the ``current`` command now lists just the version numbers
alone by default; use ``--verbose`` to get at additional output.
.. change::
:tags: feature, config
Added new argument :paramref:`.Config.config_args`, allows a dictionary
of replacement variables to be passed which will serve as substitution
values when an API-produced :class:`.Config` consumes the ``.ini``
file. Pull request courtesy Noufal Ibrahim.
.. change::
:tags: bug, oracle
:tickets: 245
The Oracle dialect sets "transactional DDL" to False by default,
as Oracle does not support transactional DDL.
.. change::
:tags: bug, autogenerate
:tickets: 243
Fixed a variety of issues surrounding rendering of Python code that
contains unicode literals. The first is that the "quoted_name" construct
that SQLAlchemy uses to represent table and column names as well
as schema names does not ``repr()`` correctly on Py2K when the value
contains unicode characters; therefore an explicit stringification is
added to these. Additionally, SQL expressions such as server defaults
were not being generated in a unicode-safe fashion leading to decode
errors if server defaults contained non-ascii characters.
.. change::
:tags: bug, operations
:tickets: 174
The :meth:`.Operations.add_column` directive will now additionally emit
the appropriate ``CREATE INDEX`` statement if the
:class:`~sqlalchemy.schema.Column` object specifies ``index=True``.
Pull request courtesy David Szotten.
.. change::
:tags: feature, operations
:tickets: 205
The :class:`~sqlalchemy.schema.Table` object is now returned when
the :meth:`.Operations.create_table` method is used. This ``Table``
is suitable for use in subsequent SQL operations, in particular
the :meth:`.Operations.bulk_insert` operation.
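A sketch combining the two; the table and rows are illustrative::

    import sqlalchemy as sa
    from alembic import op

    accounts = op.create_table(
        "accounts",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )
    op.bulk_insert(
        accounts,
        [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}],
    )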
.. change::
:tags: feature, autogenerate
:tickets: 203
Indexes and unique constraints are now included in the
:paramref:`.EnvironmentContext.configure.include_object` hook.
Indexes are sent with type ``"index"`` and unique constraints with
type ``"unique_constraint"``.
.. change::
:tags: bug, autogenerate
:tickets: 219
Bound parameters are now resolved as "literal" values within the
SQL expression inside of a CheckConstraint(), when rendering the SQL
as a text string; supported for SQLAlchemy 0.8.0 and forward.
.. change::
:tags: bug, autogenerate
:tickets: 199
Added a workaround for SQLAlchemy issue #3023 (fixed in 0.9.5) where
a column that's part of an explicit PrimaryKeyConstraint would not
have its "nullable" flag set to False, thus producing a false
autogenerate. Also added a related correction to MySQL which will
correct for MySQL's implicit server default of '0' when a NULL integer
column is turned into a primary key column.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 240
Repaired issue related to the fix for #208 and others; a composite
foreign key reported by MySQL would cause a KeyError as Alembic
attempted to remove MySQL's implicitly generated indexes from the
autogenerate list.
.. change::
:tags: bug, autogenerate
:tickets: 28
If the "alembic_version" table is present in the target metadata,
autogenerate will skip this also. Pull request courtesy
Dj Gilcrease.
.. change::
:tags: bug, autogenerate
:tickets: 77
The :paramref:`.EnvironmentContext.configure.version_table`
and :paramref:`.EnvironmentContext.configure.version_table_schema`
arguments are now honored during the autogenerate process, such that
these names will be used as the "skip" names on both the database
reflection and target metadata sides.
.. change::
:tags: changed, autogenerate
:tickets: 229
The default value of the
:paramref:`.EnvironmentContext.configure.user_module_prefix`
parameter is **no longer the same as the SQLAlchemy prefix**.
When omitted, user-defined types will now use the ``__module__``
attribute of the type class itself when rendering in an
autogenerated module.
.. change::
:tags: bug, templates
:tickets: 234
Revision files are now written out using the ``'wb'`` modifier to
``open()``, since Mako reads the templates with ``'rb'``, thus preventing
CRs from being doubled up as has been observed on Windows. The encoding
of the output now defaults to 'utf-8', which can be configured using
a newly added config file parameter ``output_encoding``.
.. change::
:tags: bug, operations
:tickets: 230
Added support for use of the :class:`~sqlalchemy.sql.elements.quoted_name`
construct when using the ``schema`` argument within operations. This
allows a name containing a dot to be fully quoted, as well as to
provide configurable quoting on a per-name basis.
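A sketch of the construct with the ``schema`` argument; the dotted
schema name is illustrative::

    import sqlalchemy as sa
    from alembic import op
    from sqlalchemy.sql.elements import quoted_name

    op.add_column(
        "account",
        sa.Column("email", sa.String(100)),
        # quote the whole name, dot included, rather than treating
        # the dot as a schema/table separator
        schema=quoted_name("remote.bank", quote=True),
    )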
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 73
Added a routine by which the Postgresql Alembic dialect inspects
the server default of INTEGER/BIGINT columns as they are reflected
during autogenerate for the pattern ``nextval(<name>...)`` containing
a potential sequence name, then queries ``pg_catalog`` to see if this
sequence is "owned" by the column being reflected; if so, it assumes
this is a SERIAL or BIGSERIAL column and the server default is
omitted from the column reflection as well as any kind of
server_default comparison or rendering, along with an INFO message
in the logs indicating this has taken place. This allows SERIAL/BIGSERIAL
columns to keep the SEQUENCE from being unnecessarily present within
the autogenerate operation.
.. change::
:tags: bug, autogenerate
:tickets: 197, 64, 196
The system by which autogenerate renders expressions within
a :class:`~sqlalchemy.schema.Index`, the ``server_default``
of :class:`~sqlalchemy.schema.Column`, and the
``existing_server_default`` of
:meth:`.Operations.alter_column` has been overhauled to anticipate
arbitrary SQLAlchemy SQL constructs, such as ``func.somefunction()``,
``cast()``, ``desc()``, and others. The system does not, as might
be preferred, render the full-blown Python expression as originally
created within the application's source code, as this would be exceedingly
complex and difficult. Instead, it renders the SQL expression against
the target backend that's subject to the autogenerate, and then
renders that SQL inside of a :func:`~sqlalchemy.sql.expression.text`
construct as a literal SQL string. This approach still has the
downside that the rendered SQL construct may not be backend-agnostic
in all cases, so there is still a need for manual intervention in that
small number of cases, but overall the majority of cases should work
correctly now. Big thanks to Carlos Rivera for pull requests and
support on this.
.. change::
:tags: feature
SQLAlchemy's testing infrastructure is now used to run tests.
This system supports both nose and pytest and opens the way
for Alembic testing to support any number of backends, parallel
testing, and 3rd party dialect testing.
.. change::
:tags: changed, compatibility
Minimum SQLAlchemy version is now 0.7.6, however at least
0.8.4 is strongly recommended. The overhaul of the test suite
allows for fully passing tests on all SQLAlchemy versions
from 0.7.6 on forward.
.. change::
:tags: bug, operations
The "match" keyword is not sent to :class:`.ForeignKeyConstraint`
by :meth:`.Operations.create_foreign_key` when SQLAlchemy 0.7 is in use;
this keyword was added to SQLAlchemy as of 0.8.0.
.. changelog::
:version: 0.6.7
:released: September 9, 2014
.. change::
:tags: bug, mssql
Fixed bug in MSSQL dialect where "rename table" wasn't using
``sp_rename()`` as is required on SQL Server. Pull request courtesy
Łukasz Bołdys.
.. change::
:tags: feature
:tickets: 222
Added support for functional indexes when using the
:meth:`.Operations.create_index` directive. Within the list of columns,
the SQLAlchemy ``text()`` construct can be sent, embedding a literal
SQL expression; the :meth:`.Operations.create_index` will perform some hackery
behind the scenes to get the :class:`.Index` construct to cooperate.
This works around some current limitations in :class:`.Index`
which should be resolved on the SQLAlchemy side at some point.
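A sketch of the directive with an embedded expression; the index,
table, and column names are illustrative::

    from alembic import op
    from sqlalchemy import text

    op.create_index(
        "ix_account_email_lower", "account", [text("lower(email)")])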
.. changelog::
:version: 0.6.6
:released: August 7, 2014
.. change::
:tags: bug
:tickets: 95
A file named ``__init__.py`` in the ``versions/`` directory is now
ignored by Alembic when the collection of version files is retrieved.
Pull request courtesy Michael Floering.
.. change::
:tags: bug
Fixed Py3K bug where an attempt would be made to sort None against
string values when autogenerate would detect tables across multiple
schemas, including the default schema. Pull request courtesy
paradoxxxzero.
.. change::
:tags: bug
Autogenerate render will render the arguments within a Table construct
using ``*[...]`` when the number of columns/elements is greater than
255. Pull request courtesy Ryan P. Kelly.
.. change::
:tags: bug
Fixed bug where foreign key constraints would fail to render in
autogenerate when a schema name was present. Pull request courtesy
Andreas Zeidler.
.. change::
:tags: bug
:tickets: 212
Some deep-in-the-weeds fixes to try to get "server default" comparison
working better across platforms and expressions, in particular on
the Postgresql backend, mostly dealing with quoting/not quoting of various
expressions at the appropriate time and on a per-backend basis.
Repaired and tested support for such defaults as Postgresql interval
and array defaults.
.. change::
:tags: enhancement
:tickets: 209
When a run of Alembic command line fails due to ``CommandError``,
the output now prefixes the string with ``"FAILED:"``, and the error
is also written to the log output using ``log.error()``.
.. change::
:tags: bug
:tickets: 208
Liberalized even more the check for MySQL indexes that shouldn't be
counted in autogenerate as "drops"; this time it's been reported
that an implicitly created index might be named the same as a composite
foreign key constraint, and not the actual columns, so we now skip those
when detected as well.
.. change::
:tags: feature
Added a new accessor :attr:`.MigrationContext.config`, when used
in conjunction with a :class:`.EnvironmentContext` and
:class:`.Config`, this config will be returned. Patch
courtesy Marc Abramowitz.
.. changelog::
:version: 0.6.5
:released: May 3, 2014
.. change::
:tags: bug, autogenerate, mysql
:tickets: 202
Fixed an "autogenerate index detection" bug: when a MySQL table
includes an Index with the same name as a column, autogenerate reported
it as an "add" even though it's not; this is because we ignore reflected
indexes of this nature due to MySQL creating them implicitly. Indexes
that are named the same as a column are now ignored on
MySQL if we see that the backend is reporting that it already exists;
this indicates that we can still detect additions of these indexes
but not drops, as we cannot distinguish whether a backend index with the
same name as the column was user-generated or MySQL-generated.
.. change::
:tags: feature, environment
:tickets: 201
Added new feature :paramref:`.EnvironmentContext.configure.transaction_per_migration`,
which when True causes the BEGIN/COMMIT pair to incur for each migration
individually, rather than for the whole series of migrations. This is
to assist with some database directives that need to be within individual
transactions, without the need to disable transactional DDL entirely.
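In an ``env.py``, this looks approximately like the following, with
``connection`` and ``target_metadata`` set up as in the standard
template::

    from alembic import context

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        # BEGIN/COMMIT around each migration, not the whole run
        transaction_per_migration=True,
    )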
.. change::
:tags: bug, autogenerate
:tickets: 200
Fixed bug where the ``include_object()`` filter would not receive
the original :class:`.Column` object when evaluating a database-only
column to be dropped; the object would not include the parent
:class:`.Table` nor other aspects of the column that are important
for generating the "downgrade" case where the column is recreated.
.. change::
:tags: bug, environment
:tickets: 195
Fixed bug where :meth:`.EnvironmentContext.get_x_argument`
would fail if the :class:`.Config` in use didn't actually
originate from a command line call.
.. change::
:tags: bug, autogenerate
:tickets: 194
Fixed another bug regarding naming conventions, continuing
from :ticket:`183`, where add_index() and
drop_index() directives would not correctly render the ``f()``
construct when the index contained a convention-driven name.
.. changelog::
:version: 0.6.4
:released: March 28, 2014
.. change::
:tags: bug, mssql
:tickets: 186
Added quoting to the table name when the special EXEC is run to
drop any existing server defaults or constraints when the
:paramref:`.Operations.drop_column.mssql_drop_check` or
:paramref:`.Operations.drop_column.mssql_drop_default`
arguments are used.
.. change::
:tags: bug, mysql
:tickets: 103
Added/fixed support for MySQL "SET DEFAULT" / "DROP DEFAULT" phrases,
which will now be rendered if only the server default is changing
or being dropped (e.g. specify None to alter_column() to indicate
"DROP DEFAULT"). Also added support for rendering MODIFY rather than
CHANGE when the column name isn't changing.
.. change::
:tags: bug
:tickets: 190
Added support for the ``initially``, ``match`` keyword arguments
as well as dialect-specific keyword arguments to
:meth:`.Operations.create_foreign_key`.
.. change::
:tags: feature
:tickets: 163
Altered the support for "sourceless" migration files (e.g. only
.pyc or .pyo present) so that the flag "sourceless=true" needs to
be in alembic.ini for this behavior to take effect.
.. change::
:tags: bug, mssql
:tickets: 185
The feature that keeps on giving, index/unique constraint autogenerate
detection, has even more fixes, this time to accommodate database dialects
that don't yet report on unique constraints, but whose backends do report
unique constraints as indexes. The logic
Alembic uses to distinguish between "this is an index!" vs.
"this is a unique constraint that is also reported as an index!" has now
been further enhanced to not produce unwanted migrations when the dialect
is observed to not yet implement get_unique_constraints() (e.g. mssql).
Note that such a backend will no longer report index drops for unique
indexes, as these cannot be distinguished from an unreported unique
index.
.. change::
:tags: bug
:tickets: 183
Extensive changes have been made to more fully support SQLAlchemy's new
naming conventions feature. Note that while SQLAlchemy has added this
feature as of 0.9.2, some additional fixes in 0.9.4 are needed to
resolve some of the issues:
1. The :class:`.Operations` object now takes into account the naming
conventions that are present on the :class:`.MetaData` object that's
associated using :paramref:`~.EnvironmentContext.configure.target_metadata`.
When :class:`.Operations` renders a constraint directive like
``ADD CONSTRAINT``, it now will make use of this naming convention
when it produces its own temporary :class:`.MetaData` object.
2. Note however that the autogenerate feature in most cases generates
constraints like foreign keys and unique constraints with the
final names intact; the only exception are the constraints implicit
with a schema-type like Boolean or Enum. In most of these cases,
the naming convention feature will not take effect for these constraints
and will instead use the given name as is, with one exception....
3. Naming conventions which use the ``"%(constraint_name)s"`` token, that
is, produce a new name that uses the original name as a component,
will still be pulled into the naming convention converter and be
converted. The problem arises when autogenerate renders a constraint
with its already-generated name present in the migration file's source
code, the name will be doubled up at render time due to the combination
of #1 and #2. So to work around this, autogenerate now renders these
already-tokenized names using the new :meth:`.Operations.f` component.
This component is only generated if **SQLAlchemy 0.9.4** or greater
is in use.
Therefore it is highly recommended that an upgrade to Alembic 0.6.4
be accompanied by an upgrade of SQLAlchemy 0.9.4, if the new naming
conventions feature is used.
.. seealso::
:ref:`autogen_naming_conventions`
.. change::
:tags: bug
:tickets: 160
Suppressed IOErrors which can raise when program output pipe
is closed under a program like ``head``; however this only
works on Python 2. On Python 3, there is not yet a known way to
suppress the BrokenPipeError warnings without prematurely terminating
the program via signals.
.. change::
:tags: bug
:tickets: 179
Fixed bug where :meth:`.Operations.bulk_insert` would not function
properly when :meth:`.Operations.inline_literal` values were used,
either in --sql or non-sql mode. The values will now render
directly in --sql mode. For compatibility with "online" mode,
a new flag :paramref:`~.Operations.bulk_insert.multiinsert`
can be set to False which will cause each parameter set to be
compiled and executed with individual INSERT statements.
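A sketch of the flag together with ``inline_literal()``; the table and
rows are illustrative::

    from alembic import op

    op.bulk_insert(
        accounts,  # a Table, e.g. as returned by op.create_table()
        [
            {"id": 1, "name": op.inline_literal("alice")},
            {"id": 2, "name": op.inline_literal("bob")},
        ],
        # compile and execute each parameter set individually
        multiinsert=False,
    )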
.. change::
:tags: bug, py3k
:tickets: 175
Fixed a failure of the system that allows "legacy keyword arguments"
to be understood, which arose as of a change in Python 3.4 regarding
decorators. A workaround is applied that allows the code to work
across Python 3 versions.
.. change::
:tags: feature
The :func:`.command.revision` command now returns the :class:`.Script`
object corresponding to the newly generated revision. From this
structure, one can get the revision id, the module documentation,
and everything else, for use in scripts that call upon this command.
Pull request courtesy Robbie Coomber.
.. changelog::
:version: 0.6.3
:released: February 2, 2014
.. change::
:tags: bug
:tickets: 172
Added a workaround for when we call ``fcntl.ioctl()`` to get at
``TERMWIDTH``; if the function returns zero, as is reported to occur
in some pseudo-ttys, the message wrapping system is disabled in the
same way as if ``ioctl()`` failed.
.. change::
:tags: feature
:tickets: 171
Added new argument
:paramref:`.EnvironmentContext.configure.user_module_prefix`.
This prefix is applied when autogenerate renders a user-defined type,
which here is defined as any type that is from a module outside of the
``sqlalchemy.`` hierarchy. This prefix defaults to ``None``, in
which case the :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`
is used, thus preserving the current behavior.
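A sketch of setting the prefix; the module path is illustrative, and a
user-defined type would then render as ``myapp.types.MyType()``::

    from alembic import context

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        user_module_prefix="myapp.types.",
    )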
.. change::
:tags: bug
:tickets: 170
Added support for autogenerate covering the use case where :class:`.Table`
objects specified in the metadata have an explicit ``schema`` attribute
whose name matches that of the connection's default schema
(e.g. "public" for Postgresql). Previously, it was assumed that "schema"
was ``None`` when it matched the "default" schema, now the comparison
adjusts for this.
.. change::
:tags: bug
The :func:`.compare_metadata` public API function now takes into
account the settings for
:paramref:`.EnvironmentContext.configure.include_object`,
:paramref:`.EnvironmentContext.configure.include_symbol`,
and :paramref:`.EnvironmentContext.configure.include_schemas`, in the
same way that the ``--autogenerate`` command does. Pull
request courtesy Roman Podoliaka.
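A minimal sketch, assuming an existing ``connection`` and ``target_metadata`` (the filter is illustrative)::

    from alembic.autogenerate import compare_metadata
    from alembic.migration import MigrationContext

    def include_object(obj, name, type_, reflected, compare_to):
        # skip a hypothetical legacy table
        return not (type_ == "table" and name == "legacy_table")

    mc = MigrationContext.configure(
        connection, opts={"include_object": include_object}
    )
    diff = compare_metadata(mc, target_metadata)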
.. change::
:tags: bug
:tickets: 168
Calling :func:`.bulk_insert` with an empty list will not emit any
commands on the current connection. This was already the case with
``--sql`` mode, so is now the case with "online" mode.
.. change::
:tags: bug
Enabled schema support for index and unique constraint autodetection;
previously these were non-functional and could in some cases lead to
attribute errors. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug
:tickets: 164
More fixes to index autodetection; indexes created with expressions
like DESC or functional indexes will no longer cause AttributeError
exceptions when attempting to compare the columns.
.. change::
:tags: feature
:tickets: 163
The :class:`.ScriptDirectory` system that loads migration files
from a ``versions/`` directory now supports so-called
"sourceless" operation, where the ``.py`` files are not present
and instead ``.pyc`` or ``.pyo`` files are directly present where
the ``.py`` files should be. Note that while Python 3.3 has a
new system of locating ``.pyc``/``.pyo`` files within a directory
called ``__pycache__`` (e.g. PEP-3147), PEP-3147 maintains
support for the "source-less imports" use case, where the
``.pyc``/``.pyo`` are present in the "old" location, e.g. next
to the ``.py`` file; this usage is supported even when
running Python 3.3.
.. changelog::
:version: 0.6.2
:released: Fri Dec 27 2013
.. change::
:tags: bug
Autogenerate for ``op.create_table()`` will not include a
``PrimaryKeyConstraint()`` that has no columns.
.. change::
:tags: bug
Fixed bug in the not-internally-used :meth:`.ScriptDirectory.get_base`
method which would fail if called on an empty versions directory.
.. change::
:tags: bug
:tickets: 157
An almost-rewrite of the new unique constraint/index autogenerate
detection, to accommodate a variety of issues. The emphasis is on
not generating false positives for those cases where no net change
is present, as these errors are the ones that impact all autogenerate
runs:
* Fixed an issue with unique constraint autogenerate detection where
a named ``UniqueConstraint`` on both sides with column changes would
render with the "add" operation before the "drop", requiring the
user to reverse the order manually.
* Corrected for MySQL's apparent addition of an implicit index
for a foreign key column, so that it doesn't show up as "removed".
This required that the index/constraint autogen system query the
dialect-specific implementation for special exceptions.
* Reworked the "dedupe" logic to accommodate MySQL's bi-directional
duplication of unique indexes as unique constraints, and unique
constraints as unique indexes. Postgresql's slightly different
logic of duplicating unique constraints into unique indexes
continues to be accommodated as well. Note that a unique index
or unique constraint removal on a backend that duplicates these may
show up as a distinct "remove_constraint()" / "remove_index()" pair,
which may need to be corrected in the post-autogenerate if multiple
backends are being supported.
* Added another dialect-specific exception to the SQLite backend
when dealing with unnamed unique constraints, as the backend can't
currently report on constraints that were made with this technique,
hence they'd come out as "added" on every run.
* The ``op.create_table()`` directive will be auto-generated with
the ``UniqueConstraint`` objects inline, but will not double them
up with a separate ``create_unique_constraint()`` call, which may
have been occurring. Indexes still get rendered as distinct
``op.create_index()`` calls even when the corresponding table was
created in the same script.
* The inline ``UniqueConstraint`` within ``op.create_table()`` includes
all the options like ``deferrable``, ``initially``, etc. Previously
these weren't rendered.
.. change::
:tags: feature, mssql
Added new argument ``mssql_drop_foreign_key`` to
:meth:`.Operations.drop_column`. Like ``mssql_drop_default``
and ``mssql_drop_check``, will do an inline lookup for a
single foreign key which applies to this column, and drop it.
For a column with more than one FK, you'd still need to explicitly
use :meth:`.Operations.drop_constraint` given the name,
even though only MSSQL has this limitation in the first place.
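A minimal sketch (table and column names illustrative)::

    op.drop_column(
        "account",
        "owner_id",
        mssql_drop_foreign_key=True,  # drop the single FK on this column
    )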
.. change::
:tags: bug, mssql
The MSSQL backend will add the batch separator (e.g. ``"GO"``)
in ``--sql`` mode after the final ``COMMIT`` statement, to ensure
that statement is also processed in batch mode. Courtesy
Derek Harland.
.. changelog::
:version: 0.6.1
:released: Wed Nov 27 2013
.. change::
:tags: bug, mysql
:tickets: 152
Fixed bug where :func:`.op.alter_column` in the MySQL dialect
would fail to apply quotes to column names that had mixed casing
or spaces.
.. change::
:tags: feature
Expanded the size of the "slug" generated by "revision" to 40
characters, which is also configurable by the new field
``truncate_slug_length``; the slug is now also truncated at word
boundaries rather than mid-word; courtesy Frozenball.
.. change::
:tags: bug
:tickets: 135
Fixed the output wrapping for Alembic message output, so that
we either get the terminal width for "pretty printing" with
indentation, or if not we just output the text as is; in any
case the text won't be wrapped too short.
.. change::
:tags: bug
Fixes to Py3k in-place compatibility regarding output encoding and related
issues; the use of the new io.* package introduced some incompatibilities on
Py2k. These should be resolved, due to the introduction of new adapter types
for translating from io.* to Py2k file and StringIO types.
Thanks to Javier Santacruz for help with this.
.. change::
:tags: bug
:tickets: 145
Fixed py3k bug where the wrong form of ``next()`` was being called
when using the list_templates command. Courtesy Chris Wilkes.
.. change::
:tags: feature
:tickets: 107
Support for autogeneration detection and rendering of indexes and
unique constraints has been added. The logic goes through some effort
in order to differentiate between true unique constraints and
unique indexes, where there are some quirks on backends like Postgresql.
The effort here in producing the feature and tests is courtesy of IJL.
.. change::
:tags: bug
Fixed bug introduced by new ``include_object`` argument where the
inspected column would be misinterpreted when using a user-defined
type comparison function, causing a KeyError or similar expression-related
error. Fix courtesy Maarten van Schaik.
.. change::
:tags: bug
Added the "deferrable" keyword argument to :func:`.op.create_foreign_key`
so that ``DEFERRABLE`` constraint generation is supported; courtesy
Pedro Romano.
.. change::
:tags: bug
:tickets: 137
Ensured that strings going to stdout go through an encode/decode phase,
so that any non-ASCII characters get to the output stream correctly
in both Py2k and Py3k. Also added source encoding detection using
Mako's parse_encoding() routine in Py2k so that the __doc__ of a
non-ascii revision file can be treated as unicode in Py2k.
.. changelog::
:version: 0.6.0
:released: Fri July 19 2013
.. change::
:tags: feature
:tickets: 101
Added new kw argument to :meth:`.EnvironmentContext.configure`
``include_object``. This is a more flexible version of the
``include_symbol`` argument which allows filtering of columns as well as tables
from the autogenerate process,
and in the future will also work for types, constraints and
other constructs. The fully constructed schema object is passed,
including its name and type as well as a flag indicating if the object
is from the local application metadata or is reflected.
.. change::
:tags: feature
The output of the ``alembic history`` command is now
expanded to show information about each change on multiple
lines, including the full top message,
resembling the formatting of git log.
.. change::
:tags: feature
Added :attr:`alembic.config.Config.cmd_opts` attribute,
allows access to the ``argparse`` options passed to the
``alembic`` runner.
.. change::
:tags: feature
:tickets: 120
Added new command line argument ``-x``, allows extra arguments
to be appended to the command line which can be consumed
within an ``env.py`` script by looking at
``context.config.cmd_opts.x``, or more simply a new
method :meth:`.EnvironmentContext.get_x_argument`.
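A minimal sketch inside ``env.py`` (the ``data`` key is illustrative)::

    # invoked e.g. as: alembic -x data=true upgrade head
    from alembic import context

    x_args = context.get_x_argument(as_dictionary=True)
    if x_args.get("data") == "true":
        ...  # run data migrations as well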
.. change::
:tags: bug
:tickets: 125
Added support for options like "name" etc. to be rendered
within CHECK constraints in autogenerate. Courtesy
Sok Ann Yap.
.. change::
:tags: misc
Source repository has been moved from Mercurial to Git.
.. change::
:tags: bug
Repaired autogenerate rendering of ForeignKeyConstraint
to include use_alter argument, if present.
.. change::
:tags: feature
Added ``-r`` argument to ``alembic history`` command,
allows specification of ``[start]:[end]`` to view
a slice of history. Accepts revision numbers, symbols
"base", "head", a new symbol "current" representing the
current migration, as well as relative ranges for one
side at a time (i.e. ``-r-5:head``, ``-rcurrent:+3``).
Courtesy Atsushi Odagiri for this feature.
.. change::
:tags: feature
:tickets: 55
Source base is now in-place for Python 2.6 through
3.3, without the need for 2to3. Support for Python 2.5
and below has been dropped. Huge thanks to
Hong Minhee for all the effort on this!
.. changelog::
:version: 0.5.0
:released: Thu Apr 4 2013
.. note::
Alembic 0.5.0 now requires at least
version 0.7.3 of SQLAlchemy to run properly.
Support for 0.6 has been dropped.
.. change::
:tags: feature
:tickets: 76
Added ``version_table_schema`` argument
to :meth:`.EnvironmentContext.configure`,
complements the ``version_table`` argument to
set an optional remote schema for the version
table. Courtesy Christian Blume.
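A minimal sketch inside ``env.py`` (schema name illustrative)::

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        version_table="alembic_version",
        version_table_schema="audit",  # hypothetical remote schema
    )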
.. change::
:tags: bug, postgresql
:tickets: 32
Fixed format of RENAME for table that includes
schema with Postgresql; the schema name shouldn't
be in the "TO" field.
.. change::
:tags: feature
:tickets: 90
Added ``output_encoding`` option to
:meth:`.EnvironmentContext.configure`,
used with ``--sql`` mode to apply an encoding
to the output stream.
.. change::
:tags: feature
:tickets: 93
Added :meth:`.Operations.create_primary_key`
operation, which will generate an ADD CONSTRAINT
for a primary key.
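A minimal sketch (names illustrative)::

    op.create_primary_key("pk_account", "account", ["id"])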
.. change::
:tags: bug, mssql
:tickets: 109
Fixed bug whereby double quoting would be applied
to target column name during an ``sp_rename``
operation.
.. change::
:tags: bug, sqlite, mysql
:tickets: 112
transactional_ddl flag for SQLite, MySQL dialects
set to False. MySQL doesn't support it;
SQLite does, but the current pysqlite driver does not.
.. change::
:tags: feature
:tickets: 115
The upgrade and downgrade commands will list the
first line of the docstring next to the
version number. Courtesy Hong Minhee.
.. change::
:tags: feature
Added --head-only option to "alembic current",
will print the current version, plus the symbol
"(head)" if this version is the head.
Courtesy Charles-Axel Dein.
.. change::
:tags: bug
:tickets: 110
Autogenerate will render additional table keyword
arguments like "mysql_engine" and others within
op.create_table().
.. change::
:tags: feature
:tickets: 108
The rendering of any construct during autogenerate
can be customized, in particular to allow special rendering
for user-defined column, constraint subclasses, using new
``render_item`` argument to
:meth:`.EnvironmentContext.configure`.
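A minimal sketch using the modern form of the hook (``MyCustomType`` is hypothetical)::

    def render_item(type_, obj, autogen_context):
        if type_ == "type" and isinstance(obj, MyCustomType):
            autogen_context.imports.add("from myapp import types")
            return "types.MyCustomType()"
        return False  # fall back to default rendering

    context.configure(
        # ...
        render_item=render_item,
    )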
.. change::
:tags: bug
Fixed bug whereby create_index()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
This is the same issue that was fixed for unique
constraints in version 0.3.2.
.. change::
:tags: bug
Worked around a backwards-incompatible regression in Python 3.3
regarding argparse; running "alembic" with no arguments
now yields an informative error in py3.3 as with all previous versions.
Courtesy Andrey Antukh.
.. change::
:tags: change
SQLAlchemy 0.6 is no longer supported by Alembic - minimum version is 0.7.3,
full support is as of 0.7.9.
.. change::
:tags: bug
:tickets: 104
A host of argument name changes within migration
operations for consistency. Keyword arguments
will continue to work on the old name for backwards compatibility,
however required positional arguments will not:
:meth:`.Operations.alter_column` - ``name`` -> ``new_column_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.create_index` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_index` - ``tablename`` -> ``table_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.drop_constraint` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_constraint` - ``type`` -> ``type_`` - old
name will work for backwards compatibility
.. changelog::
:version: 0.4.2
:released: Fri Jan 11 2013
.. change::
:tags: bug, autogenerate
:tickets: 99
Fixed bug where autogenerate would fail if a Column
to be added to a table made use of the ".key" parameter.
.. change::
:tags: bug, sqlite
:tickets: 98
The "implicit" constraint generated by a
type such as Boolean or Enum will not generate an
ALTER statement when run on SQLite, which does not
support ALTER for the purpose of adding/removing
constraints separate from the column def itself.
While SQLite supports adding a CHECK constraint
at the column level, SQLAlchemy would need modification
to support this.
A warning is emitted indicating this
constraint cannot be added in this scenario.
.. change::
:tags: bug
:tickets: 96
Added a workaround to setup.py to prevent
"NoneType" error from occurring when
"setup.py test" is run.
.. change::
:tags: bug
:tickets: 96
Added an append_constraint() step to each
condition within
test_autogenerate:AutogenRenderTest.test_render_fk_constraint_kwarg
if the SQLAlchemy version is less than 0.8, as ForeignKeyConstraint
does not auto-append prior to 0.8.
.. change::
:tags: feature
:tickets: 96
Added a README.unittests with instructions for running the test
suite fully.
.. changelog::
:version: 0.4.1
:released: Sun Dec 9 2012
.. change::
:tags: bug
:tickets: 92
Added support for autogenerate render of
ForeignKeyConstraint options onupdate,
ondelete, initially, and deferred.
.. change::
:tags: bug
:tickets: 94
Autogenerate will include "autoincrement=False"
in the rendered table metadata
if this flag was set to false on the source
:class:`.Column` object.
.. change::
:tags: feature
:tickets: 66
Explicit error message describing the case
when downgrade --sql is used without specifying
specific start/end versions.
.. change::
:tags: bug
:tickets: 81
Removed erroneous "emit_events" attribute
from operations.create_table() documentation.
.. change::
:tags: bug
:tickets:
Fixed the minute component in file_template
which returned the month part of the create date.
.. changelog::
:version: 0.4.0
:released: Mon Oct 01 2012
.. change::
:tags: feature
:tickets: 33
Support for tables in alternate schemas
has been added fully to all operations, as well as to
the autogenerate feature. When using autogenerate,
specifying the flag include_schemas=True to
Environment.configure() will also cause autogenerate
to scan all schemas located by Inspector.get_schema_names(),
which is supported by *some* (but not all)
SQLAlchemy dialects including Postgresql.
*Enormous* thanks to Bruno Binet for a huge effort
in implementing as well as writing tests.
.. change::
:tags: feature
:tickets: 70
The command line runner has been organized
into a reusable CommandLine object, so that other
front-ends can re-use the argument parsing built
in.
.. change::
:tags: feature
:tickets: 43
Added "stdout" option to Config, provides
control over where the "print" output of commands like
"history", "init", "current" etc. are sent.
.. change::
:tags: bug
:tickets: 71
Fixed the "multidb" template which was badly out
of date. It now generates revision files using
the configuration to determine the different
upgrade_<xyz>() methods needed as well, instead of
needing to hardcode these. Huge thanks to
BryceLohr for doing the heavy lifting here.
.. change::
:tags: bug
:tickets: 72
Fixed the regexp that was checking for .py files
in the version directory to allow any .py file through.
Previously it was doing some kind of defensive checking,
probably from some early notions of how this directory
works, that was prohibiting various filename patterns
such as those which begin with numbers.
.. change::
:tags: bug
:tickets:
Fixed MySQL rendering for server_default which
didn't work if the server_default was a generated
SQL expression. Courtesy Moriyoshi Koizumi.
.. change::
:tags: feature
:tickets:
Added support for alteration of MySQL
columns that have AUTO_INCREMENT, as well as enabling
this flag. Courtesy Moriyoshi Koizumi.
.. changelog::
:version: 0.3.6
:released: Wed Aug 15 2012
.. change::
:tags: feature
:tickets: 27
Added include_symbol option to
EnvironmentContext.configure(),
specifies a callable which will include/exclude tables
in their entirety from the autogeneration process
based on name.
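A minimal sketch inside ``env.py`` (filter is illustrative)::

    def include_symbol(tablename, schema):
        return tablename not in ("legacy_table",)

    context.configure(
        # ...
        include_symbol=include_symbol,
    )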
.. change::
:tags: feature
:tickets: 59
Added year, month, day, hour, minute, second
variables to file_template.
.. change::
:tags: feature
:tickets:
Added 'primary' to the list of constraint types
recognized for MySQL drop_constraint().
.. change::
:tags: feature
:tickets:
Added --sql argument to the "revision" command,
for the use case where the "revision_environment"
config option is being used but SQL access isn't
desired.
.. change::
:tags: bug
:tickets:
Repaired create_foreign_key() for
self-referential foreign keys, which weren't working
at all.
.. change::
:tags: bug
:tickets: 63
The 'alembic' command reports an informative
error message when the configuration is missing
the 'script_directory' key.
.. change::
:tags: bug
:tickets: 62
Fixes made to the constraints created/dropped
alongside so-called "schema" types such as
Boolean and Enum. The create/drop constraint logic
does not kick in when using a dialect that doesn't
use constraints for these types, such as Postgresql,
even when existing_type is specified to
alter_column(). Additionally, the constraints
are not affected if existing_type is passed but
type\_ is not, i.e. there's no net change
in type.
.. change::
:tags: bug
:tickets: 66
Improved error message when specifying
non-ordered revision identifiers to cover
the case when the "higher" rev is None,
improved message overall.
.. changelog::
:version: 0.3.5
:released: Sun Jul 08 2012
.. change::
:tags: bug
:tickets: 31
Fixed issue whereby reflected server defaults
wouldn't be quoted correctly; uses repr() now.
.. change::
:tags: bug
:tickets: 58
Fixed issue whereby when autogenerate would
render create_table() on the upgrade side for a
table that has a Boolean type, an unnecessary
CheckConstraint() would be generated.
.. change::
:tags: feature
:tickets:
Implemented SQL rendering for
CheckConstraint() within autogenerate upgrade,
including for literal SQL as well as SQL Expression
Language expressions.
.. changelog::
:version: 0.3.4
:released: Sat Jun 02 2012
.. change::
:tags: bug
:tickets:
Fixed command-line bug introduced by the
"revision_environment" feature.
.. changelog::
:version: 0.3.3
:released: Sat Jun 02 2012
.. change::
:tags: feature
:tickets:
New config argument
"revision_environment=true", causes env.py to
be run unconditionally when the "revision" command
is run, to support script.py.mako templates with
dependencies on custom "template_args".
.. change::
:tags: feature
:tickets:
Added "template_args" option to configure()
so that an env.py can add additional arguments
to the template context when running the
"revision" command. This requires either --autogenerate
or the configuration directive "revision_environment=true".
.. change::
:tags: bug
:tickets: 44
Added "type" argument to op.drop_constraint(),
and implemented full constraint drop support for
MySQL. CHECK and undefined constraint types raise an error.
MySQL needs the constraint type
in order to emit a DROP CONSTRAINT.
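A minimal sketch using the modern keyword spelling ``type_`` (names illustrative)::

    op.drop_constraint("fk_account_user_id", "account", type_="foreignkey")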
.. change::
:tags: feature
:tickets: 34
Added version_table argument to
EnvironmentContext.configure(), allowing for the
configuration of the version table name.
.. change::
:tags: feature
:tickets:
Added support for "relative" migration
identifiers, i.e. "alembic upgrade +2",
"alembic downgrade -1". Courtesy
Atsushi Odagiri for this feature.
.. change::
:tags: bug
:tickets: 49
Fixed bug whereby directories inside of
the template directories, such as __pycache__
on PyPy, would mistakenly be interpreted as
files which are part of the template.
.. changelog::
:version: 0.3.2
:released: Mon Apr 30 2012
.. change::
:tags: feature
:tickets: 40
Basic support for Oracle added,
courtesy shgoh.
.. change::
:tags: feature
:tickets:
Added support for UniqueConstraint
in autogenerate, courtesy Atsushi Odagiri
.. change::
:tags: bug
:tickets:
Fixed support of schema-qualified
ForeignKey target in column alter operations,
courtesy Alexander Kolov.
.. change::
:tags: bug
:tickets:
Fixed bug whereby create_unique_constraint()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
.. changelog::
:version: 0.3.1
:released: Sat Apr 07 2012
.. change::
:tags: bug
:tickets: 41
bulk_insert() fixes:
1. The bulk_insert() operation was
not working, most likely since the 0.2 series,
when used with an engine.
2. Repaired bulk_insert() to complete when
used against a lower-case-t table and executing
with only one set of parameters, working
around SQLAlchemy bug #2461 in this regard.
3. bulk_insert() uses "inline=True" so that phrases
like RETURNING and such don't get invoked for
single-row bulk inserts.
4. bulk_insert() will check that you're passing
a list of dictionaries in, raising TypeError
if not.
.. changelog::
:version: 0.3.0
:released: Thu Apr 05 2012
.. change::
:tags: general
:tickets:
The focus of 0.3 is to clean up
and more fully document the public API of Alembic,
including better accessors on the MigrationContext
and ScriptDirectory objects. Methods that are
not considered to be public on these objects have
been underscored, and methods which should be public
have been cleaned up and documented, including:
MigrationContext.get_current_revision()
ScriptDirectory.iterate_revisions()
ScriptDirectory.get_current_head()
ScriptDirectory.get_heads()
ScriptDirectory.get_base()
ScriptDirectory.generate_revision()
.. change::
:tags: feature
:tickets:
Added a bit of autogenerate to the
public API in the form of the function
alembic.autogenerate.compare_metadata.
.. changelog::
:version: 0.2.2
:released: Mon Mar 12 2012
.. change::
:tags: feature
:tickets:
Informative error message when op.XYZ
directives are invoked at module import time.
.. change::
:tags: bug
:tickets: 35
Fixed inappropriate direct call to
util.err() and therefore sys.exit()
when Config failed to locate the
config file within library usage.
.. change::
:tags: bug
:tickets:
Autogenerate will emit CREATE TABLE
and DROP TABLE directives according to
foreign key dependency order.
.. change::
:tags: bug
:tickets:
Implemented the 'tablename' parameter on
drop_index(), as this is needed by some
backends.
.. change::
:tags: feature
:tickets:
Added execution_options parameter
to op.execute(), will call execution_options()
on the Connection before executing.
The immediate use case here is to allow
access to the new no_parameters option
in SQLAlchemy 0.7.6, which allows
some DBAPIs (psycopg2, MySQLdb) to allow
percent signs straight through without
escaping, thus providing cross-compatible
operation with DBAPI execution and
static script generation.
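A minimal sketch of this usage (statement illustrative)::

    op.execute(
        "INSERT INTO t (pct) VALUES ('100%')",
        execution_options={"no_parameters": True},
    )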
.. change::
:tags: bug
:tickets:
setup.py won't install argparse if on
Python 2.7/3.2
.. change::
:tags: feature
:tickets: 29
script_location can be interpreted
by pkg_resources.resource_filename(), if
it is a non-absolute URI that contains
colons. This scheme is the same
one used by Pyramid.
.. change::
:tags: feature
:tickets:
Added missing support for
onupdate/ondelete flags for
ForeignKeyConstraint, courtesy Giacomo Bagnoli
.. change::
:tags: bug
:tickets: 30
Fixed a regression regarding an autogenerate
error message, as well as various glitches
in the Pylons sample template. The Pylons sample
template now requires that you tell it where to
get the Engine. Courtesy
Marcin Kuzminski
.. change::
:tags: bug
:tickets:
drop_index() ensures a dummy column
is added when it calls "Index", as SQLAlchemy
0.7.6 will warn on index with no column names.
.. changelog::
:version: 0.2.1
:released: Tue Jan 31 2012
.. change::
:tags: bug
:tickets: 26
Fixed the generation of CHECK constraint,
regression from 0.2.0
.. changelog::
:version: 0.2.0
:released: Mon Jan 30 2012
.. change::
:tags: feature
:tickets: 19
API rearrangement allows everything
Alembic does to be represented by contextual
objects, including EnvironmentContext,
MigrationContext, and Operations. Other
libraries and applications can now use
things like "alembic.op" without relying
upon global configuration variables.
The rearrangement was done such that
existing migrations should be OK,
as long as they use the pattern
of "from alembic import context" and
"from alembic import op", as these
are now contextual objects, not modules.
.. change::
:tags: feature
:tickets: 24
The naming of revision files can
now be customized to be some combination
of "rev id" and "slug", the latter of which
is based on the revision message.
By default, the pattern "<rev>_<slug>"
is used for new files. New script files
should include the "revision" variable
for this to work, which is part of
the newer script.py.mako scripts.
.. change::
:tags: bug
:tickets: 25
env.py templates call
connection.close() to better support
programmatic usage of commands; use
NullPool in conjunction with create_engine()
as well so that no connection resources
remain afterwards.
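A minimal sketch of the pattern used in the templates (URL handling illustrative)::

    from sqlalchemy import create_engine
    from sqlalchemy.pool import NullPool

    engine = create_engine(url, poolclass=NullPool)
    connection = engine.connect()
    try:
        context.configure(connection=connection)
        with context.begin_transaction():
            context.run_migrations()
    finally:
        connection.close()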
.. change::
:tags: bug
:tickets: 22
Fixed the config.main() function to honor
the arguments passed; removed the no-longer-used
"scripts/alembic", as setuptools creates this
for us.
.. change::
:tags: bug
:tickets:
Fixed alteration of column type on
MSSQL to not include the keyword "TYPE".
.. change::
:tags: feature
:tickets: 23
Can create alembic.config.Config
with no filename, use set_main_option()
to add values. Also added set_section_option()
which will add sections.
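A minimal sketch (values illustrative)::

    from alembic.config import Config

    cfg = Config()  # no filename
    cfg.set_main_option("script_location", "myapp:migrations")
    cfg.set_section_option("app", "custom_option", "value")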
.. changelog::
:version: 0.1.1
:released: Wed Jan 04 2012
.. change::
:tags: bug
:tickets:
Clean up file write operations so that
file handles are closed.
.. change::
:tags: feature
:tickets:
PyPy is supported.
.. change::
:tags: feature
:tickets:
Python 2.5 is supported; needs
``__future__.with_statement``.
.. change::
:tags: bug
:tickets:
Fix autogenerate so that "pass" is
generated between the two comments
if no net migrations were present.
.. change::
:tags: bug
:tickets: 16
Fix autogenerate bug that prevented
correct reflection of a foreign-key
referenced table in the list of "to remove".
.. change::
:tags: bug
:tickets: 17
Fixed bug where create_table() didn't
handle self-referential foreign keys
correctly.
.. change::
:tags: bug
:tickets: 18
Default prefix for autogenerate
directives is "op.", matching the
mako templates.
.. change::
:tags: feature
:tickets: 18
Add alembic_module_prefix argument
to configure() to complement
sqlalchemy_module_prefix.
.. change::
:tags: bug
:tickets: 14
Fixed quotes not being rendered in
ForeignKeyConstraint during
autogenerate.
.. changelog::
:version: 0.1.0
:released: Wed Nov 30 2011
.. change::
:tags:
:tickets:
Initial release. Status of features:
.. change::
:tags:
:tickets:
Alembic is used in at least one production
environment, but should still be considered
ALPHA LEVEL SOFTWARE as of this release,
particularly in that many features are expected
to be missing / unimplemented. Major API
changes are not anticipated but for the moment
nothing should be assumed.
The author asks that you *please* report all
issues, missing features, workarounds etc.
to the bugtracker.
.. change::
:tags:
:tickets:
Python 3 is supported and has been tested.
.. change::
:tags:
:tickets:
The "Pylons" and "MultiDB" environment templates
have not been directly tested - these should be
considered to be samples to be modified as
needed. Multiple database support itself
is well tested, however.
.. change::
:tags:
:tickets:
Postgresql and MS SQL Server environments
have been tested for several weeks in a production
environment. In particular, some involved workarounds
were implemented to allow fully-automated dropping
of default- or constraint-holding columns with
SQL Server.
.. change::
:tags:
:tickets:
MySQL support has also been implemented to a
basic degree, including MySQL's awkward style
of modifying columns being accommodated.
.. change::
:tags:
:tickets:
Other database environments not included among
those three have *not* been tested, *at all*. This
includes Firebird, Oracle, Sybase. Adding
support for these backends should be
straightforward. Please report all missing/
incorrect behaviors to the bugtracker! Patches
are welcome here but are optional - please just
indicate the exact format expected by the target
database.
.. change::
:tags:
:tickets:
SQLite, as a backend, has almost no support for
schema alterations to existing databases. The author
would strongly recommend that SQLite not be used in
a migration context - just dump your SQLite database
into an intermediary format, then dump it back
into a new schema. For dev environments, the
dev installer should be building the whole DB from
scratch. Or just use Postgresql, which is a much
better database for non-trivial schemas.
Requests for full ALTER support on SQLite should be
reported to SQLite's bug tracker at
http://www.sqlite.org/src/wiki?name=Bug+Reports,
as Alembic will not be implementing the
"rename the table to a temptable then copy the
data into a new table" workaround.
Note that Alembic will at some point offer an
extensible API so that you can implement commands
like this yourself.
.. change::
:tags:
:tickets:
Well-tested directives include add/drop table, add/drop
column, including support for SQLAlchemy "schema"
types which generate additional CHECK
constraints, i.e. Boolean, Enum. Other directives not
included here have *not* been strongly tested
in production, i.e. rename table, etc.
.. change::
:tags:
:tickets:
Both "online" and "offline" migrations, the latter
being generated SQL scripts to hand off to a DBA,
have been strongly production tested against
Postgresql and SQL Server.
.. change::
:tags:
:tickets:
Modifying column type, default status, and nullability is
functional and tested across PG, MSSQL, and MySQL,
but not yet widely tested in production usage.
.. change::
:tags:
:tickets:
Many migrations are still outright missing, i.e.
create/add sequences, etc. As a workaround,
execute() can be used for those which are missing,
though posting of tickets for new features/missing
behaviors is strongly encouraged.
.. change::
:tags:
:tickets:
Autogenerate feature is implemented and has been
tested, though only a little bit in a production setting.
In particular, detection of type and server
default changes is optional and off by default;
both can also be customized by a callable.
Both features work but can have surprises, particularly
the disparity between BIT/TINYINT and boolean,
which hasn't yet been worked around, as well as
format changes performed by the database on defaults
when it reports back. When enabled, the PG dialect
will execute the two defaults to be compared to
see if they are equivalent. Other backends may
need to do the same thing.
The autogenerate feature only generates
"candidate" commands which must be hand-tailored
in any case, so is still a useful feature and
is safe to use. Please report missing/broken features
of autogenerate! This will be a great feature and
will also improve SQLAlchemy's reflection services.
.. change::
:tags:
:tickets:
Support for non-ASCII table, column and constraint
names is mostly nonexistent. This is also a
straightforward feature add as SQLAlchemy itself
supports unicode identifiers; Alembic itself will
likely need fixes to logging, column identification
by key, etc. for full support here.
.. changelog::
:version: 1.12.1
:include_notes_from: unreleased
.. changelog::
:version: 1.12.0
:released: August 31, 2023
.. change::
:tags: bug, operations
:tickets: 1300
Added support for ``op.drop_constraint()`` to support PostgreSQL
``ExcludeConstraint`` objects, as well as other constraint-like objects
that may be present in third party dialects, by resolving the ``type_``
parameter to be ``None`` for this case. Autogenerate has also been
enhanced to exclude the ``type_`` parameter from rendering within this
command when ``type_`` is ``None``. Pull request courtesy David Hills.
.. change::
:tags: bug, commands
:tickets: 1299
Fixed issue where the ``revision_environment`` directive in ``alembic.ini``
was ignored by the ``alembic merge`` command, leading to issues when other
configurational elements depend upon ``env.py`` being invoked within the
command.
.. change::
:tags: bug, autogenerate
:tickets: 1302
Fixed issue where the ``ForeignKeyConstraint.match`` parameter would not be
rendered in autogenerated migrations. Pull request courtesy Asib
Kamalsada.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Change the default value of
:paramref:`.EnvironmentContext.configure.compare_type` to ``True``.
As Alembic's autogenerate for types was dramatically improved in
version 1.4 released in 2020, the type comparison feature is now much
more reliable so is now enabled by default.
.. change::
:tags: feature, autogenerate
:tickets: 1275
Added new feature to the "code formatter" function which allows standalone
executable tools to be run against code, without going through the Python
interpreter. Known as the ``exec`` runner, it complements the existing
``console_scripts`` runner by allowing non-Python tools such as ``ruff`` to
be used. Pull request courtesy Mihail Milushev.
.. seealso::
:ref:`post_write_hooks_config`
.. changelog::
:version: 1.11.3
:released: August 16, 2023
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 1270
Improved autogenerate compare of expression based indexes on PostgreSQL
to produce fewer wrong detections.
.. change::
:tags: bug, autogenerate
:tickets: 1291
Fixed issue with ``NULLS NOT DISTINCT`` detection in PostgreSQL that
would keep detecting changes in the index or unique constraint.
.. change::
:tags: bug, commands
:tickets: 1273
Added ``encoding="locale"`` setting to the use of Python's
``ConfigParser.read()``, so that a warning is not generated when using the
recently added Python feature ``PYTHONWARNDEFAULTENCODING`` specified in
:pep:`597`. The encoding is passed as the ``"locale"`` string under Python
3.10 and greater, which indicates that the system-level locale should be
used, as was already the case here. Pull request courtesy Kevin Kirsche.
.. changelog::
:version: 1.11.2
:released: August 4, 2023
.. change::
:tags: usecase, typing
:tickets: 1253
Added typing to the default script mako templates.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Added support in autogenerate for ``NULLS NOT DISTINCT`` in
the PostgreSQL dialect.
.. change::
:tags: bug
:tickets: 1261
Fixed the format string logged when running a post write hook.
Pull request courtesy of Mathieu Défosse.
.. change::
:tags: feature, operations
:tickets: 151
Added parameters ``if_exists`` and ``if_not_exists`` for index operations.
Pull request courtesy of Max Adrian.
.. changelog::
:version: 1.11.1
:released: May 17, 2023
.. change::
:tags: bug, autogenerate, regression
:tickets: 1243, 1245
As Alembic 1.11.0 is considered a major release (Alembic does not use
semver, nor does its parent project SQLAlchemy; this has been
:ref:`clarified <versioning_scheme>` in the documentation), change
:ticket:`1130` modified calling signatures for most operations to consider
all optional keyword parameters to be keyword-only arguments, to match what
was always documented and generated by autogenerate. However, two of these
changes were identified as possibly problematic without a more formal
deprecation warning being emitted which were the ``table_name`` parameter
to :meth:`.Operations.drop_index`, which was generated positionally by
autogenerate prior to version 0.6.3 released in 2014, and ``type_`` in
:meth:`.Operations.drop_constraint` and
:meth:`.BatchOperations.drop_constraint`, which was documented positionally
in one example in the batch documentation.
These two signatures have been
restored to allow those particular parameters to be passed positionally. A
future change will include formal deprecation paths (with warnings) for
these arguments where they will again become keyword-only in a future
"Significant Minor" release.
.. change::
:tags: bug, typing
:tickets: 1246
Fixed typing use of :class:`~sqlalchemy.schema.Column` and other
generic SQLAlchemy classes.
.. change::
:tags: bug, typing, regression
:tickets: 1244
Restored the output type of :meth:`.Config.get_section` to include
``Dict[str, str]`` as a potential return type, which had been changed to
immutable ``Mapping[str, str]``. When a section is returned and the default
is not used, a mutable dictionary is returned.
.. changelog::
:version: 1.11.0
:released: May 15, 2023
.. change::
:tags: bug, batch
:tickets: 1237
Added placeholder classes for :class:`~.sqla.Computed` and
:class:`~.sqla.Identity` when older 1.x SQLAlchemy versions are in use,
namely prior to SQLAlchemy 1.3.11 when the :class:`~.sqla.Computed`
construct was introduced. Previously these were set to None, however this
could cause issues with certain codepaths that were using ``isinstance()``
such as one within "batch mode".
.. change::
:tags: bug, batch
:tickets: 1221
Correctly pass previously ignored arguments ``insert_before`` and
``insert_after`` in ``batch_alter_column``.
.. change::
:tags: change, py3k
:tickets: 1130
Argument signatures of Alembic operations now enforce keyword-only
arguments as passed as keyword and not positionally, such as
:paramref:`.Operations.create_table.schema`,
:paramref:`.Operations.add_column.type_`, etc.
.. change::
:tags: bug, postgresql
:tickets: 1230
Fix autogenerate issue with PostgreSQL :class:`.ExcludeConstraint`
that included sqlalchemy functions. The function text was previously
rendered as a plain string without surrounding with ``text()``.
.. change::
:tags: bug, mysql, regression
:tickets: 1240
Fixed regression caused by :ticket:`1166` released in version 1.10.0 which
caused MySQL unique constraints with multiple columns to not compare
correctly within autogenerate, due to different sorting rules on unique
constraints vs. indexes, which in MySQL are shared constructs.
.. change::
:tags: misc
:tickets: 1220
Update code snippets within docstrings to use ``black`` code formatting.
Pull request courtesy of James Addison.
.. change::
:tags: bug, typing
:tickets: 1093
Updated stub generator script to also add stubs method definitions for the
:class:`.Operations` class and the :class:`.BatchOperations` class obtained
from :meth:`.Operations.batch_alter_table`. As part of this change, the
class hierarchy of :class:`.Operations` and :class:`.BatchOperations` has
been rearranged on top of a common base class :class:`.AbstractOperations`
in order to type correctly, as :class:`.BatchOperations` uses different
method signatures for operations than :class:`.Operations`.
.. change::
:tags: bug, typing
Repaired the return signatures for :class:`.Operations` methods that mostly
return ``None``, and were erroneously referring to ``Optional[Table]``
in many cases.
.. change::
:tags: usecase, commands
:tickets: 1109
Added quiet option to the command line, using the ``-q/--quiet``
option. This flag will prevent alembic from logging anything
to stdout.
.. change::
:tags: bug, autogenerate
:tickets: 1178
Modified the autogenerate implementation for comparing "server default"
values from user-defined metadata to not apply any quoting to the value
before comparing it to the server-reported default, except for within
dialect-specific routines as needed. This change will affect the format of
the server default as passed to the
:paramref:`.EnvironmentContext.configure.compare_server_default` hook, as
well as for third party dialects that implement a custom
``compare_server_default`` hook in their alembic impl, to be passed "as is"
and not including additional quoting. Custom implementations which rely
on this quoting should adjust their approach based on observed formatting.
.. change::
:tags: bug, api, autogenerate
:tickets: 1235
Fixed issue where :func:`.autogenerate.render_python_code` function did not
provide a default value for the ``user_module_prefix`` variable, leading to
``NoneType`` errors when autogenerate structures included user-defined
types. Added new parameter
:paramref:`.autogenerate.render_python_code.user_module_prefix` to allow
this to be set as well as to default to ``None``. Pull request courtesy
tangkikodo.
.. change::
:tags: usecase, asyncio
:tickets: 1231
Added :meth:`.AbstractOperations.run_async` to the operation module to
allow running async functions in the ``upgrade`` or ``downgrade`` migration
function when running alembic using an async dialect. This function will
receive as first argument an
:class:`~sqlalchemy.ext.asyncio.AsyncConnection` sharing the transaction
used in the migration context.
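A minimal sketch inside a migration file (statement illustrative)::

    import sqlalchemy as sa
    from alembic import op

    async def _update_rows(connection):
        # connection is an AsyncConnection sharing the migration transaction
        await connection.execute(sa.text("UPDATE account SET active = true"))

    def upgrade():
        op.run_async(_update_rows)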
.. changelog::
:version: 1.10.4
:released: April 24, 2023
.. change::
:tags: postgresql, autogenerate, feature
:tickets: 1213
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL sort option, such as ``ASC`` or ``NULLS FIRST``.
The sort options are correctly detected only when defined using the
sqlalchemy modifier functions, such as ``asc()`` or ``nulls_first()``,
or the equivalent methods.
Passing sort options inside the ``postgresql_ops`` dict is not supported.
.. change::
:tags: bug, operations
:tickets: 1215
Fixed issue where using a directive such as ``op.create_foreign_key()`` to
create a self-referential constraint on a single table where the same
column was present on both sides (e.g. within a composite foreign key)
would produce an error under SQLAlchemy 2.0 and a warning under SQLAlchemy
1.4 indicating that a duplicate column was being added to a table.
.. changelog::
:version: 1.10.3
:released: April 5, 2023
.. change::
:tags: bug, typing
:tickets: 1191, 1201
Fixed various typing issues observed with pyright, including issues
involving the combination of :class:`.Function` and
:meth:`.MigrationContext.begin_transaction`.
.. change::
:tags: bug, autogenerate
:tickets: 1212
Fixed error raised by alembic when running autogenerate after removing
a function based index.
.. changelog::
:version: 1.10.2
:released: March 8, 2023
.. change::
:tags: bug, ops
:tickets: 1196
Fixed regression where Alembic would not run with older SQLAlchemy 1.3
versions prior to 1.3.24 due to a missing symbol. Workarounds have been
applied for older 1.3 versions.
.. changelog::
:version: 1.10.1
:released: March 6, 2023
.. change::
:tags: bug, postgresql
:tickets: 1184
Fixed issue regarding PostgreSQL :class:`.ExcludeConstraint`, where
constraint elements which made use of :func:`.literal_column` could not be
rendered for autogenerate. Additionally, using SQLAlchemy 2.0.5 or greater,
:func:`.text()` constructs are also supported within PostgreSQL
:class:`.ExcludeConstraint` objects for autogenerate render. Pull request
courtesy Jan Katins.
.. change::
:tags: bug, batch, regression
:tickets: 1195
Fixed regression for 1.10.0 where :class:`.Constraint` objects were
suddenly required to have non-None name fields when using batch mode, which
was not previously a requirement.
.. changelog::
:version: 1.10.0
:released: March 5, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1166
Fixed issue in index detection where autogenerate change detection would
consider indexes with the same columns but with different order as equal,
while in general they are not equivalent in how a database will use them.
.. change::
:tags: feature, revisioning
:tickets: 760
Recursive traversal of revision files in a particular revision directory is
now supported, by indicating ``recursive_version_locations = true`` in
alembic.ini. Pull request courtesy ostr00000.
.. change::
:tags: bug, autogenerate, sqlite
:tickets: 1165
Fixed issue where indexes on SQLite which include SQL expressions would not
compare correctly, generating false positives under autogenerate. These
indexes are now skipped, generating a warning, in the same way that
expression-based indexes on PostgreSQL are skipped and generate warnings
when SQLAlchemy 1.x installations are in use. Note that reflection of
SQLite expression-based indexes is still not supported under
SQLAlchemy 2.0, even though reflection of PostgreSQL expression-based
indexes has now been implemented.
.. change::
:tags: bug, mssql
:tickets: 1187
Properly escape constraint name on SQL Server when dropping
a column while specifying ``mssql_drop_default=True`` or
``mssql_drop_check=True`` or ``mssql_drop_foreign_key=True``.
.. change::
:tags: usecase, autogenerate, postgresql
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL expressions, when using SQLAlchemy 2.0; the previous warning
that such indexes were skipped are removed when the new functionality
is in use. When using SQLAlchemy versions prior to the 2.0 series,
the indexes continue to be skipped with a warning.
.. changelog::
:version: 1.9.4
:released: February 16, 2023
.. change::
:tags: bug, mssql
:tickets: 1177
Ongoing fixes for SQL Server server default comparisons under autogenerate,
adjusting for SQL Server's collapsing of whitespace between SQL function
arguments when reporting on a function-based server default, as well as its
arbitrary addition of parentheses within arguments; the approach has now
been made more aggressive, stripping all whitespace, parentheses, and
quoting characters from the two default strings being compared.
.. change::
:tags: bug, postgresql
Fixed PostgreSQL server default comparison to handle SQL expressions
sent as ``text()`` constructs, such as ``text("substring('name', 1, 3)")``,
which previously would raise errors when attempting to run a server-based
comparison.
.. change::
:tags: bug, autogenerate
:tickets: 1180
Removed a mis-use of the
:paramref:`.EnvironmentContext.configure.render_item` callable where the
"server_default" renderer would be erroneously used within the server
default comparison process, which is working against SQL expressions, not
Python code.
.. change::
:tags: bug, commands
Fixed regression introduced in 1.7.0 where the "config" object passed to
the template context when running the :func:`.merge` command
programmatically failed to be correctly populated. Pull request courtesy
Brendan Gann.
.. changelog::
:version: 1.9.3
:released: February 7, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1167
Fixed issue where rendering of user-defined types that then went onto use
the ``.with_variant()`` method would fail to render, if using SQLAlchemy
2.0's version of variants.
.. changelog::
:version: 1.9.2
:released: January 14, 2023
.. change::
:tags: bug, typing
:tickets: 1146, 1147
Fixed typing definitions for :meth:`.EnvironmentContext.get_x_argument`.
Typing stubs are now generated for overloaded proxied methods such as
:meth:`.EnvironmentContext.get_x_argument`.
.. change::
:tags: bug, autogenerate
:tickets: 1152
Fixed regression caused by :ticket:`1145` where the string transformations
applied to server defaults caused expressions such as ``(getdate())`` to no
longer compare as equivalent on SQL Server, among others.
.. changelog::
:version: 1.9.1
:released: December 23, 2022
.. change::
:tags: bug, autogenerate
:tickets: 1145
Fixed issue where server default compare would not work for string defaults
that contained backslashes, due to mis-rendering of these values when
comparing their contents.
.. change::
:tags: bug, oracle
Implemented basic server default comparison for the Oracle backend;
previously, Oracle's formatting of reflected defaults prevented any
matches from occurring.
.. change::
:tags: bug, sqlite
Adjusted SQLite's compare server default implementation to better handle
defaults with or without parens around them, from both the reflected and
the local metadata side.
.. change::
:tags: bug, mssql
Adjusted SQL Server's compare server default implementation to better
handle defaults with or without parens around them, from both the reflected
and the local metadata side.
.. changelog::
:version: 1.9.0
:released: December 15, 2022
.. change::
:tags: feature, commands
:tickets: 724
Added new Alembic command ``alembic check``. This performs the widely
requested feature of running an "autogenerate" comparison between the
current database and the :class:`.MetaData` that's currently set up for
autogenerate, returning an error code if the two do not match, based on
current autogenerate settings. Pull request courtesy Nathan Louie.
.. seealso::
:ref:`alembic_check`
.. change::
:tags: bug, tests
Fixed issue in tox.ini file where changes in the tox 4.0 series to the
format of "passenv" caused tox to not function correctly, in particular
raising an error as of tox 4.0.6.
.. change::
:tags: bug, typing
:tickets: 1110
Fixed typing issue where :paramref:`.revision.process_revision_directives`
was not fully typed; additionally ensured all ``Callable`` and ``Dict``
arguments to :meth:`.EnvironmentContext.configure` include parameters in
the typing declaration.
Additionally updated the codebase for Mypy 0.990 compliance.
.. changelog::
:version: 1.8.1
:released: July 13, 2022
.. change::
:tags: bug, sqlite
:tickets: 1065
Fixed bug where the SQLite implementation of
:meth:`.Operations.rename_table` would render an explicit schema name for
both the old and new table name, which, while standard ALTER syntax,
is not accepted by SQLite, which doesn't support a rename across
schemas. In particular, the syntax issue would prevent batch mode from
working for SQLite databases that made use of attached databases (which are
treated as "schemas" in SQLAlchemy).
.. change::
:tags: bug, batch
:tickets: 1021
Added an error raise for the condition where
:meth:`.Operations.batch_alter_table` is used in ``--sql`` mode, where the
operation requires table reflection, as is the case when running against
SQLite without giving it a fixed ``Table`` object. Previously the operation
would fail with an internal error. To get a "move and copy" batch
operation as a SQL script without connecting to a database,
a ``Table`` object should be passed to the
:paramref:`.Operations.batch_alter_table.copy_from` parameter so that
reflection may be skipped.
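A minimal sketch of passing ``copy_from`` so that ``--sql`` mode works without reflection (table definition illustrative)::

    import sqlalchemy as sa
    from alembic import op

    account = sa.Table(
        "account",
        sa.MetaData(),
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )

    def upgrade():
        with op.batch_alter_table("account", copy_from=account) as batch_op:
            batch_op.add_column(sa.Column("status", sa.String(20)))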
.. changelog::
:version: 1.8.0
:released: May 31, 2022
.. change::
:tags: feature, typing
:tickets: 764
:pep:`484` typing annotations have been added to the ``env.py`` and
revision template files within migration templates. Pull request by Nikita
Sobolev.
.. change::
:tags: usecase, operations
:tickets: 1037
The ``op.drop_table()`` operation directive will now trigger the
``before_drop()`` and ``after_drop()`` DDL event hooks at the table level,
which is similar to how the ``before_create()`` and ``after_create()``
hooks are triggered by the ``op.create_table()`` directive. Note that as
``op.drop_table()`` accepts only a table name and optional schema name, the
``Table`` object received by the event will not have any information within
it other than the table name and schema name.
.. change::
:tags: installation, changed
:tickets: 1025
Alembic 1.8 now supports Python 3.7 and above.
.. change::
:tags: changed, environment
:tickets: 987
The "Pylons" environment template has been removed as of Alembic 1.8. This
template was based on the very old pre-Pyramid Pylons web framework which
has been long superseded by Pyramid.
.. change::
:tags: bug, revisioning
:tickets: 1026
Fixed issue where a downgrade using a relative revision would
fail in case of multiple branches with a single effective
head, due to interdependencies between revisions.
.. change::
:tags: usecase, commands
:tickets: 1027
Added new token ``epoch`` to the ``file_template`` option, which will
populate the integer epoch as determined by ``int(create_date.timestamp())``.
Pull request courtesy Caio Carvalho.
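For example, in ``alembic.ini`` (pattern illustrative; percent signs are doubled for ConfigParser)::

    file_template = %%(epoch)s_%%(rev)s_%%(slug)s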
.. change::
:tags: bug, batch
:tickets: 1034
Fixed issue in batch mode where CREATE INDEX would not use a new column
name in the case of a column rename.
.. changelog::
:version: 1.7.7
:released: March 14, 2022
.. change::
:tags: bug, operations
:tickets: 1004
Fixed issue where using :meth:`.Operations.create_table` in conjunction
with a :class:`.CheckConstraint` that referred to table-bound
:class:`.Column` objects rather than string expressions would be added to
the parent table potentially multiple times, resulting in an incorrect DDL
sequence. Pull request courtesy Nicolas CANIART.
.. change::
:tags: bug, environment
:tickets: 986
The ``logging.fileConfig()`` line in ``env.py`` templates, which is used
to setup Python logging for the migration run, is now conditional on
:attr:`.Config.config_file_name` not being ``None``. Otherwise, the line
is skipped as there is no default logging configuration present.
.. change::
:tags: bug, mssql
:tickets: 977
Fixed bug where an :meth:`.Operations.alter_column` operation would change
a "NOT NULL" column to "NULL" by emitting an ALTER COLUMN statement that
did not specify "NOT NULL". (In the absence of "NOT NULL" T-SQL was
implicitly assuming "NULL"). An :meth:`.Operations.alter_column` operation
that specifies :paramref:`.Operations.alter_column.type` should also
include either :paramref:`.Operations.alter_column.nullable` or
:paramref:`.Operations.alter_column.existing_nullable` to inform Alembic as
to whether the emitted DDL should include "NULL" or "NOT NULL"; a warning
is now emitted if this is missing under this scenario.
.. changelog::
:version: 1.7.6
:released: February 1, 2022
.. change::
:tags: bug, batch, regression
:tickets: 982
Fixed regression where usage of a ``with_variant()`` datatype in
conjunction with the ``existing_type`` option of ``op.alter_column()``
under batch mode would lead to an internal exception.
.. change::
:tags: usecase, commands
:tickets: 964
Add a new command ``alembic ensure_version``, which will ensure that the
Alembic version table is present in the target database, but does not
alter its contents. Pull request courtesy Kai Mueller.
.. change::
:tags: bug, autogenerate
Implemented support for recognizing and rendering SQLAlchemy "variant"
types going forward into SQLAlchemy 2.0, where the architecture of
"variant" datatypes will be changing.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 968
Added a rule to the MySQL impl so that the translation between JSON /
LONGTEXT is accommodated by autogenerate, treating LONGTEXT from the server
as equivalent to an existing JSON in the model.
.. change::
:tags: mssql
Removed a warning raised by SQLAlchemy when dropping constraints
on MSSQL regarding statement caching.
.. changelog::
:version: 1.7.5
:released: November 11, 2021
.. change::
:tags: bug, tests
Adjustments to the test suite to accommodate for error message changes
occurring as of SQLAlchemy 1.4.27.
.. changelog::
:version: 1.7.4
:released: October 6, 2021
.. change::
:tags: bug, regression
:tickets: 934
Fixed a regression that prevented the use of post write hooks
on Python versions lower than 3.9.
.. change::
:tags: bug, environment
:tickets: 944
Fixed issue where the :meth:`.MigrationContext.autocommit_block` feature
would fail to function when using a SQLAlchemy engine using 2.0 future
mode.
.. changelog::
:version: 1.7.3
:released: September 17, 2021
.. change::
:tags: bug, mypy
:tickets: 914
Fixed type annotations for the "constraint_name" argument of operations
``create_primary_key()``, ``create_foreign_key()``. Pull request courtesy
TilmanK.
.. changelog::
:version: 1.7.2
:released: September 17, 2021
.. change::
:tags: bug, typing
:tickets: 900
Added missing attributes from context stubs.
.. change::
:tags: bug, mypy
:tickets: 897
Fixed an import in one of the .pyi files that was triggering an
assertion error in some versions of mypy.
.. change::
:tags: bug, regression, ops
:tickets: 920
Fixed issue where registration of custom ops was prone to failure due to
the registration process running ``exec()`` on generated code that as of
the 1.7 series includes pep-484 annotations, which in the case of end user
code would result in name resolution errors when the exec occurs. The logic
in question has been altered so that the annotations are rendered as
forward references so that the ``exec()`` can proceed.
.. changelog::
:version: 1.7.1
:released: August 30, 2021
.. change::
:tags: bug, installation
:tickets: 893
Corrected "universal wheel" directive in setup.cfg so that building a wheel
does not target Python 2. The PyPI files index for 1.7.0 was corrected
manually. Pull request courtesy layday.
.. change::
:tags: bug, pep484
:tickets: 895
Fixed issue in generated .pyi files where default values for ``Optional``
arguments were missing, thereby causing mypy to consider them as required.
.. change::
:tags: bug, regression, batch
:tickets: 896
Fixed regression in batch mode due to :ticket:`883` where the "auto" mode
of batch would fail to accommodate any additional migration directives
beyond encountering an ``add_column()`` directive, due to a mis-application
of the conditional logic that was added as part of this change, leading to
"recreate" mode not being used in cases where it is required for SQLite
such as for unique constraints.
.. changelog::
:version: 1.7.0
:released: August 30, 2021
.. change::
:tags: bug, operations
:tickets: 879
Fixed regression due to :ticket:`803` where the ``.info`` and ``.comment``
attributes of ``Table`` would be lost inside of the :class:`.DropTableOp`
class, which when "reversed" into a :class:`.CreateTableOp` would then have
lost these elements. Pull request courtesy Nicolas CANIART.
.. change::
:tags: feature, environment
:tickets: 842
Enhance ``version_locations`` parsing to handle paths containing spaces.
The new configuration option ``version_path_separator`` specifies the
character to use when splitting the ``version_locations`` string. The
default for new configurations is ``version_path_separator = os``,
which will use ``os.pathsep`` (e.g., ``;`` on Windows).
.. change::
:tags: installation, changed
Alembic 1.7 now supports Python 3.6 and above; support for prior versions
including Python 2.7 has been dropped.
.. change::
:tags: bug, sqlite, batch
:tickets: 883
Batch "auto" mode will now select for "recreate" if the ``add_column()``
operation is used on SQLite, and the column itself meets the criteria for
SQLite where ADD COLUMN is not allowed, in this case a functional or
parenthesized SQL expression or a ``Computed`` (i.e. generated) column.
.. change::
:tags: changed, installation
:tickets: 674
Make the ``python-dateutil`` library an optional dependency.
This library is only required if the ``timezone`` option
is used in the Alembic configuration.
An extra require named ``tz`` is available with
``pip install alembic[tz]`` to install it.
.. change::
:tags: bug, commands
:tickets: 856
Re-implemented the ``python-editor`` dependency as a small internal
function to avoid the need for external dependencies.
.. change::
:tags: usecase, batch
:tickets: 884
Named CHECK constraints are now supported by batch mode, and will
automatically be part of the recreated table assuming they are named. They
also can be explicitly dropped using ``op.drop_constraint()``. For
"unnamed" CHECK constraints, these are still skipped as they cannot be
distinguished from the CHECK constraints that are generated by the
``Boolean`` and ``Enum`` datatypes.
Note that this change may require adjustments to migrations that drop or
rename columns which feature an associated named check constraint, such
that an additional ``op.drop_constraint()`` directive should be added for
that named constraint as there will no longer be an associated column
for it; for the ``Boolean`` and ``Enum`` datatypes, an ``existing_type``
keyword may be passed to ``BatchOperations.drop_constraint`` as well.
.. seealso::
:ref:`batch_schematype_constraints`
:ref:`batch_check_constraints`
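A minimal sketch of such a migration (the constraint, table, and column
names are hypothetical)::

    from alembic import op

    def upgrade():
        with op.batch_alter_table("account") as batch_op:
            # the named CHECK constraint must be dropped explicitly
            # before the column it refers to is removed
            batch_op.drop_constraint("ck_account_positive", type_="check")
            batch_op.drop_column("balance")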
.. change::
:tags: changed, installation
:tickets: 885
The dependency on ``pkg_resources`` which is part of ``setuptools`` has
been removed, so there is no longer any runtime dependency on
``setuptools``. The functionality has been replaced with
``importlib.metadata`` and ``importlib.resources`` which are both part of
Python std.lib, or via the PyPI dependencies ``importlib-metadata`` for
Python version < 3.8 and ``importlib-resources`` for Python version < 3.9
(while importlib.resources was added to Python in 3.7, it did not include
the "files" API until 3.9).
.. change::
:tags: feature, tests
:tickets: 855
Created a "test suite" similar to the one for SQLAlchemy, allowing
developers of third-party dialects to test their code against a set of
Alembic tests that have been specially selected to exercise
back-end database operations. At the time of release,
third-party dialects that have adopted the Alembic test suite to verify
compatibility include
`CockroachDB <https://pypi.org/project/sqlalchemy-cockroachdb/>`_ and
`SAP ASE (Sybase) <https://pypi.org/project/sqlalchemy-sybase/>`_.
.. change::
:tags: bug, postgresql
:tickets: 874
Fixed issue where usage of the PostgreSQL ``postgresql_include`` option
within a :meth:`.Operations.create_index` would raise a KeyError, as the
additional column(s) need to be added to the table object used by the
construct internally. The issue is equivalent to the SQL Server issue fixed
in :ticket:`513`. Pull request courtesy Steven Bronson.
.. change::
:tags: feature, general
pep-484 type annotations have been added throughout the library.
Additionally, stub .pyi files have been added for the "dynamically"
generated Alembic modules ``alembic.op`` and ``alembic.config``, which
include complete function signatures and docstrings, so that the functions
in these namespaces will have both IDE support (vscode, pycharm, etc) as
well as support for typing tools like Mypy. The files themselves are
statically generated from their source functions within the source tree.
.. changelog::
:version: 1.6.5
:released: May 27, 2021
.. change::
:tags: bug, autogenerate
:tickets: 849
Fixed issue where dialect-specific keyword arguments within the
:class:`.DropIndex` operation directive would not render in the
autogenerated Python code. As support was improved for adding dialect
specific arguments to directives as part of :ticket:`803`, in particular
arguments such as "postgresql_concurrently" which apply to the actual
create/drop of the index, support was needed for these to render even in a
drop index operation. Pull request courtesy Jet Zhou.
.. changelog::
:version: 1.6.4
:released: May 24, 2021
.. change::
:tags: bug, regression, op directives
:tickets: 848
Fixed regression caused by the just-released fix for :ticket:`844`, which
scaled back the filter for ``unique=True`` / ``index=True`` too far, such that
these directives no longer worked for the ``op.create_table()`` op.
.. changelog::
:version: 1.6.3
:released: May 21, 2021
.. change::
:tags: bug, regression, autogenerate
:tickets: 844
Fixed 1.6-series regression where ``UniqueConstraint`` and to a lesser
extent ``Index`` objects would be doubled up in the generated model when
the ``unique=True`` / ``index=True`` flags were used.
.. change::
:tags: bug, autogenerate
:tickets: 839
Fixed a bug where paths defined in post-write hook options
would be wrongly escaped in non-POSIX environments (Windows).
.. change::
:tags: bug, regression, versioning
:tickets: 843
Fixed regression where a revision file that contained its own down revision
as a dependency would cause an endless loop in the traversal logic.
.. changelog::
:version: 1.6.2
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 839
Fixed additional regression nearly the same as that of :ticket:`838` just
released in 1.6.1 but within a slightly different codepath, where "alembic
downgrade head" (or equivalent) would fail instead of iterating no
revisions.
.. changelog::
:version: 1.6.1
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 838
Fixed regression in new revisioning traversal where "alembic downgrade
base" would fail if the database itself were clean and unversioned;
additionally repairs the case where downgrade would fail if attempting
to downgrade to the current head that is already present.
.. changelog::
:version: 1.6.0
:released: May 3, 2021
.. change::
:tags: bug, autogenerate
:tickets: 803
Refactored the implementation of :class:`.MigrateOperation` constructs such
as :class:`.CreateIndexOp`, :class:`.CreateTableOp`, etc. so that they no
longer rely upon maintaining a persistent version of each schema object
internally; instead, the state variables of each operation object will be
used to produce the corresponding construct when the operation is invoked.
The rationale is so that environments which make use of
operation-manipulation schemes such as those discussed in
:ref:`autogen_rewriter` are better supported, allowing end-user code to
manipulate the public attributes of these objects which will then be
expressed in the final output, an example is
``some_create_index_op.kw["postgresql_concurrently"] = True``.
Previously, these objects when generated from autogenerate would typically
hold onto the original, reflected element internally without honoring the
other state variables of each construct, preventing the public API from
working.
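For example, a rewriter hook in the style of :ref:`autogen_rewriter` can now
mutate these public attributes directly (a sketch; the writer would be passed
to ``process_revision_directives`` in ``env.py``)::

    from alembic.autogenerate import rewriter
    from alembic.operations import ops

    writer = rewriter.Rewriter()

    @writer.rewrites(ops.CreateIndexOp)
    def set_concurrently(context, revision, op_):
        # state set here is what produces the final construct, since
        # the reflected schema object is no longer held persistently
        op_.kw["postgresql_concurrently"] = True
        return op_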
.. change::
:tags: bug, environment
:tickets: 829
Fixed regression caused by the SQLAlchemy 1.4/2.0 compatibility switch
where calling ``.rollback()`` or ``.commit()`` explicitly within the
``context.begin_transaction()`` context manager would cause it to fail when
the block ended, as it did not expect that the transaction was manually
closed.
.. change::
:tags: bug, autogenerate
:tickets: 827
Improved the rendering of ``op.add_column()`` operations when adding
multiple columns to an existing table, so that the order of these
statements matches the order in which the columns were declared in the
application's table metadata. Previously the added columns were being
sorted alphabetically.
.. change::
:tags: feature, autogenerate
:tickets: 819
Fix the documentation regarding the default command-line argument position of
the revision script filename within the post-write hook arguments. Implement a
``REVISION_SCRIPT_FILENAME`` token, enabling the position to be changed. Switch
from ``str.split()`` to ``shlex.split()`` for more robust command-line argument
parsing.
.. change::
:tags: feature
:tickets: 822
Implement a ``.cwd`` (current working directory) suboption for post-write hooks
(of type ``console_scripts``). This is useful for tools like pre-commit, which
rely on the working directory to locate the necessary config files. Add
pre-commit as an example to the documentation. Minor change: rename some variables
from ticket #819 to improve readability.
.. change::
:tags: bug, versioning
:tickets: 765, 464
The algorithm used for calculating downgrades/upgrades/iterating
revisions has been rewritten, to resolve ongoing issues of branches
not being handled consistently particularly within downgrade operations,
as well as for overall clarity and maintainability. This change includes
that a deprecation warning is emitted if an ambiguous command such
as "downgrade -1" when multiple heads are present is given.
In particular, the change implements a long-requested use case of allowing
downgrades of a single branch to a branchpoint.
Huge thanks to Simon Bowly for their impressive efforts in successfully
tackling this very difficult problem.
.. change::
:tags: bug, batch
:tickets: 799
Added missing ``batch_op.create_table_comment()``,
``batch_op.drop_table_comment()`` directives to batch ops.
.. changelog::
:version: 1.5.8
:released: March 23, 2021
.. change::
:tags: bug, environment
:tickets: 816
Fixed regression caused by SQLAlchemy 1.4 where the "alembic current"
command would fail due to changes in the ``URL`` object.
.. changelog::
:version: 1.5.7
:released: March 11, 2021
.. change::
:tags: bug, autogenerate
:tickets: 813
Adjusted the recently added
:paramref:`.EnvironmentContext.configure.include_name` hook to accommodate
for additional object types such as "views" that don't have a parent table,
to support third party recipes and extensions. Pull request courtesy Oliver
Rice.
.. changelog::
:version: 1.5.6
:released: March 5, 2021
.. change::
:tags: bug, mssql, operations
:tickets: 812
Fixed bug where the "existing_type" parameter, which the MSSQL dialect
requires in order to change the nullability of a column in the absence of
also changing the column type, would cause an ALTER COLUMN operation to
incorrectly render a second ALTER statement without the nullability if a
new type were also present, as the MSSQL-specific contract did not
anticipate all three of "nullability", ``"type_"`` and "existing_type" being
sent at the same time.
.. change::
:tags: template
:tickets: 805
Add async template to Alembic to bootstrap environments that use
async DBAPI. Updated the cookbook to include a migration guide
on how to adapt an existing environment for use with DBAPI drivers.
.. changelog::
:version: 1.5.5
:released: February 20, 2021
.. change::
:tags: bug
Adjusted the use of SQLAlchemy's ".copy()" internals to use "._copy()"
for version 1.4.0, as this method is being renamed.
.. change::
:tags: bug, environment
:tickets: 797
Added new config file option ``prepend_sys_path``, which is a series of
paths that will be prepended to sys.path; the default value in newly
generated alembic.ini files is ".". This fixes a long-standing issue
where for some reason running the alembic command line would not place the
local "." path in sys.path, meaning an application locally present in "."
and importable through normal channels, e.g. python interpreter, pytest,
etc. would not be located by Alembic, even though the ``env.py`` file is
loaded relative to the current path when ``alembic.ini`` contains a
relative path. To enable for existing installations, add the option to the
alembic.ini file as follows::
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
.. seealso::
:ref:`installation` - updated documentation reflecting that local
installation of the project is not necessary if running the Alembic cli
from the local path.
.. changelog::
:version: 1.5.4
:released: February 3, 2021
.. change::
:tags: bug, versioning
:tickets: 789
Fixed bug in versioning model where a downgrade across a revision with a
dependency on another branch, yet an ancestor is also dependent on that
branch, would produce an erroneous state in the alembic_version table,
making upgrades impossible without manually repairing the table.
.. changelog::
:version: 1.5.3
:released: January 29, 2021
.. change::
:tags: bug, autogenerate
:tickets: 786
Changed the default ordering of "CREATE" and "DROP" statements indexes and
unique constraints within the autogenerate process, so that for example in
an upgrade() operation, a particular index or constraint that is to be
replaced such as for a casing convention change will not produce any naming
conflicts. For foreign key constraint objects, this is already how
constraints are ordered, and for table objects, users would normally want
to use :meth:`.Operations.rename_table` in any case.
.. change::
:tags: bug, autogenerate, mssql
:tickets: 787
Fixed assorted autogenerate issues with SQL Server:
* ignore default reflected identity on primary_key columns
* improve server default comparison
.. change::
:tags: bug, mysql, autogenerate
:tickets: 788
Fixed issue where autogenerate rendering of ``op.alter_column()`` would
fail to include MySQL ``existing_nullable=False`` if the column were part
of a primary key constraint within the table metadata.
.. changelog::
:version: 1.5.2
:released: January 20, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 784
Fixed regression where new "loop detection" feature introduced in
:ticket:`757` produced false positives for revision names that have
overlapping substrings between revision number and down revision and/or
dependency, if the downrev/dependency were not in sequence form.
.. change::
:tags: bug, environment
:tickets: 782
Fixed regression where Alembic would fail to create a transaction properly
if the :class:`sqlalchemy.engine.Connection` were a so-called "branched"
connection, that is, one where the ``.connect()`` method had been called to
create a "sub" connection.
.. changelog::
:version: 1.5.1
:released: January 19, 2021
.. change::
:tags: bug, installation, commands
:tickets: 780
Fixed installation issue where the "templates" directory was not being
installed, preventing commands like "list_templates" and "init" from
working.
.. changelog::
:version: 1.5.0
:released: January 18, 2021
.. change::
:tags: usecase, operations
:tickets: 730
Added support for rendering of "identity" elements on
:class:`.Column` objects, supported in SQLAlchemy via
the :class:`.Identity` element introduced in version 1.4.
Adding columns with identity is supported on PostgreSQL,
MSSQL and Oracle. Changing the identity options or removing
it is supported only on PostgreSQL and Oracle.
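A sketch of adding such a column in a migration (the table name is
hypothetical; requires SQLAlchemy 1.4 for the ``Identity`` construct)::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.add_column(
            "account",
            sa.Column("id", sa.Integer(), sa.Identity(start=1),
                      nullable=False),
        )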
.. change::
:tags: changed, environment
To accommodate SQLAlchemy 1.4 and 2.0, the migration model now no longer
assumes that the SQLAlchemy Connection will autocommit an individual
operation. This essentially means that for databases that use
non-transactional DDL (pysqlite current driver behavior, MySQL), there is
still a BEGIN/COMMIT block that will surround each individual migration.
Databases that support transactional DDL should continue to have the
same flow, either per migration or per-entire run, depending on the
value of the :paramref:`.Environment.configure.transaction_per_migration`
flag.
.. change::
:tags: changed, environment
A :class:`.CommandError` is raised if a ``sqlalchemy.engine.Engine`` is
passed to the :meth:`.MigrationContext.configure` method instead of a
``sqlalchemy.engine.Connection`` object. Previously, this would be a
warning only.
.. change::
:tags: bug, operations
:tickets: 753
Modified the ``add_column()`` operation such that the ``Column`` object in
use is shallow copied to a new instance if that ``Column`` is already
attached to a ``table()`` or ``Table``. This accommodates for the change
made in SQLAlchemy issue #5618 which prohibits a ``Column`` from being
associated with multiple ``table()`` objects. This resumes support for
using a ``Column`` inside of an Alembic operation that already refers to a
parent ``table()`` or ``Table`` as well as allows operation objects just
autogenerated to work.
.. change::
:tags: feature, autogenerate
:tickets: 650
Added new hook :paramref:`.EnvironmentContext.configure.include_name`,
which complements the
:paramref:`.EnvironmentContext.configure.include_object` hook by providing
a means of preventing objects of a certain name from being autogenerated
**before** the SQLAlchemy reflection process takes place, and notably
includes explicit support for passing each schema name when
:paramref:`.EnvironmentContext.configure.include_schemas` is set to True.
This is most important especially for environments that make use of
:paramref:`.EnvironmentContext.configure.include_schemas` where schemas are
actually databases (e.g. MySQL) in order to prevent reflection sweeps of
the entire server.
.. seealso::
:ref:`autogenerate_include_hooks` - new documentation section
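A sketch of such a hook inside ``env.py`` (the schema names are
hypothetical; ``connection`` and ``target_metadata`` are assumed to be
defined elsewhere in the script)::

    def include_name(name, type_, parent_names):
        # runs before reflection, so unlisted schemas are never swept
        if type_ == "schema":
            return name in ["schema_one", "schema_two"]
        return True

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        include_schemas=True,
        include_name=include_name,
    )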
.. change::
:tags: removed, autogenerate
The long deprecated
:paramref:`.EnvironmentContext.configure.include_symbol` hook is removed.
The :paramref:`.EnvironmentContext.configure.include_object`
and :paramref:`.EnvironmentContext.configure.include_name`
hooks both achieve the goals of this hook.
.. change::
:tags: bug, autogenerate
:tickets: 721
Added rendering for the ``Table.prefixes`` element to autogenerate so that
the rendered Python code includes these directives. Pull request courtesy
Rodrigo Ce Moretto.
.. change::
:tags: bug, batch
:tickets: 761
Added missing "create comment" feature for columns that are altered in
batch migrations.
.. change::
:tags: changed
:tickets: 748
Alembic 1.5.0 now supports **Python 2.7 and Python 3.6 and above**, as well
as **SQLAlchemy 1.3.0 and above**. Support is removed for Python 3
versions prior to 3.6 and SQLAlchemy versions prior to the 1.3 series.
.. change::
:tags: bug, batch
:tickets: 773
Made an adjustment to the PostgreSQL dialect to allow it to work more
effectively in batch mode, where a datatype like Boolean or non-native Enum
that may have embedded rules to generate CHECK constraints will be more
correctly handled in that these constraints usually will not have been
generated on the PostgreSQL backend; previously it would inadvertently
assume they existed unconditionally in a special PG-only "drop constraint"
step.
.. change::
:tags: feature, versioning
:tickets: 757
The revision tree is now checked for cycles and loops between revision
files when the revision environment is loaded up. Scenarios such as a
revision pointing to itself, or a revision that can reach itself via a
loop, are handled and will raise the :class:`.CycleDetected` exception when
the environment is loaded (expressed from the Alembic commandline as a
failure message and nonzero return code). Previously, these situations were
silently ignored up front, and the behavior of revision traversal would
either be silently incorrect, or would produce errors such as
:class:`.RangeNotAncestorError`. Pull request courtesy Koichiro Den.
.. change::
:tags: usecase, commands
Add ``__main__.py`` file to alembic package to support invocation
with ``python -m alembic``.
.. change::
:tags: removed, commands
Removed deprecated ``--head_only`` option to the ``alembic current``
command
.. change::
:tags: removed, operations
Removed legacy parameter names from operations, these have been emitting
warnings since version 0.8. In the case that legacy version files have not
yet been updated, these can be modified directly in order to maintain
compatibility (see the example after this list):
* :meth:`.Operations.drop_constraint` - "type" (use ``"type_"``) and "name"
(use "constraint_name")
* :meth:`.Operations.create_primary_key` - "cols" (use "columns") and
"name" (use "constraint_name")
* :meth:`.Operations.create_unique_constraint` - "name" (use
"constraint_name"), "source" (use "table_name") and "local_cols" (use
"columns")
* :meth:`.Operations.batch_create_unique_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_foreign_key` - "name" (use "constraint_name"),
"source" (use "source_table"), "referent" (use "referent_table")
* :meth:`.Operations.batch_create_foreign_key` - "name" (use
"constraint_name"), "referent" (use "referent_table")
* :meth:`.Operations.create_check_constraint` - "name" (use
"constraint_name"), "source" (use "table_name")
* :meth:`.Operations.batch_create_check_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_index` - "name" (use "index_name")
* :meth:`.Operations.drop_index` - "name" (use "index_name"), "tablename"
(use "table_name")
* :meth:`.Operations.batch_drop_index` - "name" (use "index_name")
* :meth:`.Operations.create_table` - "name" (use "table_name")
* :meth:`.Operations.drop_table` - "name" (use "table_name")
* :meth:`.Operations.alter_column` - "name" (use "new_column_name")
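For example, a legacy ``drop_constraint()`` call would be updated as follows
(the constraint and table names are hypothetical)::

    # before (legacy argument names, now removed)
    op.drop_constraint(name="fk_user_address", table_name="address",
                       type="foreignkey")

    # after
    op.drop_constraint(
        constraint_name="fk_user_address",
        table_name="address",
        type_="foreignkey",
    )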
.. changelog::
:version: 1.4.3
:released: September 11, 2020
.. change::
:tags: bug, sqlite, batch
:tickets: 711
Added support to drop named CHECK constraints that are specified as part of
a column, rather than table wide. Previously, only constraints associated
with the table were considered.
.. change::
:tags: bug, ops, mysql
:tickets: 736
Fixed issue where the MySQL dialect would not correctly render the server
default of a column in an alter operation, if the operation were
programmatically generated from an autogenerate pass as it would not
accommodate for the full structure of the DefaultClause construct.
.. change::
:tags: bug, sqlite, batch
:tickets: 697
Fixed issue where the CAST applied to a JSON column when copying a SQLite
table during batch mode would cause the data to be lost, as SQLite's CAST
with JSON appears to convert the data to the value "0". The CAST is now
skipped in a dialect-specific manner, including for JSON columns on SQLite.
Pull request courtesy Sebastián Ramírez.
.. change::
:tags: bug, commands
:tickets: 694
The ``alembic current`` command no longer creates an ``alembic_version``
table in the database if one does not exist already, returning no version
as the current version. This allows checking for migrations in parallel
without introducing race conditions. Pull request courtesy Nikolay
Edigaryev.
.. change::
:tags: bug, batch
Fixed issue where columns in a foreign-key referenced table would be
replaced with null-type columns during a batch operation; while this did
not generally have any side effects, it could theoretically impact a batch
operation that also targets that table directly and also would interfere
with future changes to the ``.append_column()`` method to disallow implicit
replacement of columns.
.. change::
:tags: bug, mssql
:tickets: 716
Fixed issue where the ``mssql_drop_foreign_key=True`` flag on
``op.drop_column`` would lead to an incorrect syntax error due to a typo in
the emitted SQL; the same typo was present in the test as well, so it was not
detected. Pull request courtesy Oleg Shigorin.
.. changelog::
:version: 1.4.2
:released: March 19, 2020
.. change::
:tags: usecase, autogenerate
:tickets: 669
Adjusted autogen comparison to accommodate for backends that support
computed column reflection, dependent on SQLAlchemy version 1.3.16 or
higher. This emits a warning if the SQL expression inside of a
:class:`.Computed` value changes between the metadata and the database, as
these expressions can't be changed without dropping and recreating the
column.
.. change::
:tags: bug, tests
:tickets: 668
Fixed an issue that prevented the test suite from running with the
recently released py.test 5.4.0.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 671
Fixed more false-positive failures produced by the new "compare type" logic
first added in :ticket:`605`, particularly impacting MySQL string types
regarding flags such as "charset" and "collation".
.. change::
:tags: bug, op directives, oracle
:tickets: 670
Fixed issue in Oracle backend where a table RENAME with a schema-qualified
name would include the schema in the "to" portion, which is rejected by
Oracle.
.. changelog::
:version: 1.4.1
:released: March 1, 2020
.. change::
:tags: bug, autogenerate
:tickets: 661
Fixed regression caused by the new "type comparison" logic introduced in
1.4 as part of :ticket:`605` where comparisons of MySQL "unsigned integer"
datatypes would produce false positives, as the regular expression logic
was not correctly parsing the "unsigned" token when MySQL's default display
width would be returned by the database. Pull request courtesy Paul
Becotte.
.. change::
:tags: bug, environment
:tickets: 663
Error message for "path doesn't exist" when loading up script environment
now displays the absolute path. Pull request courtesy Rowan Hart.
.. change::
:tags: bug, autogenerate
:tickets: 654
Fixed regression in 1.4.0 due to :ticket:`647` where unique constraint
comparison with mixed case constraint names while not using a naming
convention would produce false positives during autogenerate.
.. change::
:tags: bug, environment
The check for matched rowcount when the alembic_version table is updated or
deleted from is now conditional based on whether or not the dialect
supports the concept of "rowcount" for UPDATE or DELETE rows matched. Some
third party dialects do not support this concept. Pull request courtesy Ke
Zhu.
.. change::
:tags: bug, operations
:tickets: 655
Fixed long-standing bug where an inline column CHECK constraint would not
be rendered within an "ADD COLUMN" operation. The DDL compiler is now
consulted for inline constraints within the :meth:`.Operations.add_column`
method as is done for regular CREATE TABLE operations.
.. changelog::
:version: 1.4.0
:released: February 4, 2020
.. change::
:tags: change
The internal inspection routines no longer use SQLAlchemy's
``Inspector.from_engine()`` method, which is expected to be deprecated in
1.4. The ``inspect()`` function is now used.
.. change::
:tags: bug, autogenerate
:tickets: 647
Adjusted the unique constraint comparison logic in a similar manner as that
of :ticket:`421` did for indexes in order to take into account SQLAlchemy's
own truncation of long constraint names when a naming convention is in use.
Without this step, a name that is truncated by SQLAlchemy based on a unique
constraint naming convention or hardcoded name will not compare properly.
.. change::
:tags: feature, batch
:tickets: 640
Added new parameters :paramref:`.BatchOperations.add_column.insert_before`,
:paramref:`.BatchOperations.add_column.insert_after` which provide for
establishing the specific position in which a new column should be placed.
Also added :paramref:`.Operations.batch_alter_table.partial_reordering`
which allows the complete set of columns to be reordered when the new table
is created. Both operations apply only to when batch mode is recreating
the whole table using ``recreate="always"``. Thanks to Marcin Szymanski
for assistance with the implementation.
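A sketch of the new parameters in use (the table and column names are
hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # recreate="always" is required for positioning to take effect
        with op.batch_alter_table("account", recreate="always") as batch_op:
            batch_op.add_column(
                sa.Column("middle_name", sa.String(50)),
                insert_after="first_name",
            )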
.. change::
:tags: usecase, environment
:tickets: 648
Moved the use of the ``__file__`` attribute at the base of the Alembic
package into the one place that it is specifically needed, which is when
the config attempts to locate the template directory. This helps to allow
Alembic to be fully importable in environments that are using Python
memory-only import schemes. Pull request courtesy layday.
.. change::
:tags: bug, autogenerate
:tickets: 605
A major rework of the "type comparison" logic is in place which changes the
entire approach by which column datatypes are compared. Types are now
compared based on the DDL string generated by the metadata type vs. the
datatype reflected from the database. This means we compare types based on
what would actually render and additionally if elements of the types change
like string length, those changes are detected as well. False positives
like those generated between SQLAlchemy Boolean and MySQL TINYINT should
also be resolved. Thanks very much to Paul Becotte for lots of hard work
and patience on this one.
.. seealso::
:ref:`autogenerate_detects` - updated comments on type comparison
.. changelog::
:version: 1.3.3
:released: January 22, 2020
.. change::
:tags: bug, postgresql
:tickets: 637
Fixed issue where COMMENT directives for PostgreSQL failed to correctly
include an explicit schema name, as well as correct quoting rules for
schema, table, and column names. Pull request courtesy Matthew Sills.
.. change::
:tags: usecase, operations
:tickets: 624
Added support for rendering of "computed" elements on :class:`.Column`
objects, supported in SQLAlchemy via the new :class:`.Computed` element
introduced in version 1.3.11. Pull request courtesy Federico Caselli.
Note that there is currently no support for ALTER COLUMN to add, remove, or
modify the "GENERATED ALWAYS AS" element from a column; at least for
PostgreSQL, it does not seem to be supported by the database. Additionally,
SQLAlchemy does not currently reliably reflect the "GENERATED ALWAYS AS"
phrase from an existing column, so there is also no autogenerate support
for addition or removal of the :class:`.Computed` element to or from an
existing column, there is only support for adding new columns that include
the :class:`.Computed` element. In the case that the :class:`.Computed`
element is removed from the :class:`.Column` object in the table metadata,
PostgreSQL and Oracle currently reflect the "GENERATED ALWAYS AS"
expression as the "server default" which will produce an op that tries to
drop the element as a default.
.. changelog::
:version: 1.3.2
:released: December 16, 2019
.. change::
:tags: bug, api, autogenerate
:tickets: 635
Fixed regression introduced by :ticket:`579` where server default rendering
functions began to require a dialect implementation, however the
:func:`.render_python_code` convenience function did not include one, thus
causing the function to fail when used in a server default context. The
function now accepts a migration context argument and also creates one
against the default dialect if one is not provided.
.. changelog::
:version: 1.3.1
:released: November 13, 2019
.. change::
:tags: bug, mssql
:tickets: 621
Fixed bug in MSSQL dialect where the drop constraint execution steps used
to remove server default or implicit foreign key constraint failed to take
into account the schema name of the target table.
.. changelog::
:version: 1.3.0
:released: October 31, 2019
.. change::
:tags: feature, command
:tickets: 608
Added support for the ALEMBIC_CONFIG environment variable, which
refers to the location of the alembic configuration script
in lieu of using the -c command line option.
.. change::
:tags: bug, autogenerate
:tickets: 131
Fixed bug in new Variant autogenerate where the order of the arguments to
Variant was mistakenly reversed.
.. change::
:tags: change, compatibility
Some internal modifications have been made to how the names of indexes and
unique constraints work to make use of new functions added in SQLAlchemy
1.4, so that SQLAlchemy has more flexibility over how naming conventions
may be applied to these objects.
.. changelog::
:version: 1.2.1
:released: September 24, 2019
.. change::
:tags: bug, command
:tickets: 601
Reverted the name change of the "revisions" argument to
:func:`.command.stamp` to "revision" as apparently applications are
calling upon this argument as a keyword name. Pull request courtesy
Thomas Bechtold. Special translations are also added to the command
line interface so that it is still known as "revisions" in the CLI.
.. change::
:tags: bug, tests
:tickets: 592
Removed the "test requirements" from "setup.py test", as this command now
only emits a removal error in any case and these requirements are unused.
.. changelog::
:version: 1.2.0
:released: September 20, 2019
.. change::
:tags: feature, command
:tickets: 473
Added new ``--purge`` flag to the ``alembic stamp`` command, which will
unconditionally erase the version table before stamping anything. This is
useful for development where non-existent version identifiers might be left
within the table. Additionally, ``alembic.stamp`` now supports a list of
revision identifiers, which are intended to allow setting up multiple heads
at once. Overall handling of version identifiers within the
``alembic.stamp`` command has been improved with many new tests and
use cases added.
.. change::
:tags: bug, autogenerate
:tickets: 550
Improved the Python rendering of a series of migration operations such that
a single "pass" is rendered for a :class:`.UpgradeOps` or
:class:`.DowngradeOps` based on if no lines of Python code actually
rendered under the operation, rather than whether or not sub-directives
exist. Removed extra "pass" lines that would generate from the
:class:`.ModifyTableOps` directive so that these aren't duplicated under
operation rewriting scenarios.
.. change::
:tags: feature, runtime
:tickets: 123
Added new feature :meth:`.MigrationContext.autocommit_block`, a special
directive which will provide for a non-transactional block inside of a
migration script. The feature requires that the database driver
(e.g. DBAPI) supports the AUTOCOMMIT isolation mode. The directive
also necessarily needs to COMMIT the existing transaction in progress
in order to enter autocommit mode.
.. seealso::
:meth:`.MigrationContext.autocommit_block`
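A sketch of the directive in use, e.g. for a statement the backend refuses
to run inside a transaction (the DDL here is hypothetical)::

    from alembic import op

    def upgrade():
        with op.get_context().autocommit_block():
            op.execute("ALTER TYPE mood ADD VALUE 'soso'")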
.. change::
:tags: change, py3k
Python 3.4 support is dropped, as the upstream tooling (pip, mysqlclient)
etc are already dropping support for Python 3.4, which itself is no longer
maintained.
.. change::
:tags: usecase, autogenerate
:tickets: 518
Added autogenerate support for :class:`.Column` objects that have
dialect-specific ``**kwargs``, support first added in SQLAlchemy 1.3.
This includes SQLite "on conflict" as well as options used by some
third party dialects.
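For example (assuming SQLAlchemy 1.3's SQLite "on conflict" column keywords;
the column name is hypothetical)::

    import sqlalchemy as sa

    # the dialect-specific kwarg now round-trips through autogenerate
    sa.Column("code", sa.String(20), nullable=False,
              sqlite_on_conflict_not_null="FAIL")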
.. change::
:tags: usecase, autogenerate
:tickets: 131
Added rendering for SQLAlchemy ``Variant`` datatypes, which render as the
base type plus one or more ``.with_variant()`` method calls.
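For example, a model type such as the following now renders in generated
migrations as the base type plus the variant call::

    import sqlalchemy as sa
    from sqlalchemy.dialects import mysql

    sa.String(255).with_variant(mysql.LONGTEXT(), "mysql")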
.. change::
:tags: usecase, commands
:tickets: 534
Made the command interface revision lookup behavior more strict in that an
Alembic revision number is only resolved based on partial match rules if
it has at least four characters, to prevent simple typographical issues
from inadvertently running migrations.
.. change::
:tags: feature, commands
:tickets: 307
Added "post write hooks" to revision generation. These allow custom logic
to run after a revision Python script is generated, typically for the
purpose of running code formatters such as "Black" or "autopep8", but may
be used for any arbitrary post-render hook as well, including custom Python
functions or scripts. The hooks are enabled by providing a
``[post_write_hooks]`` section in the alembic.ini file. A single hook
is provided which runs an arbitrary Python executable on the newly
generated revision script, which can be configured to run code formatters
such as Black; full examples are included in the documentation.
.. seealso::
:ref:`post_write_hooks`
.. change::
:tags: feature, environment
:tickets: 463
Added new flag ``--package`` to ``alembic init``. For environments where
the Alembic migration files and such are within the package tree and
importable as modules, this flag can be specified which will add the
additional ``__init__.py`` files in the version location and the
environment location.
.. change::
:tags: bug, autogenerate
:tickets: 549
Fixed bug where rendering of comment text for table-level comments within
:meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` was not properly quote-escaped
within rendered Python code for autogenerate.
.. change::
:tags: bug, autogenerate
:tickets: 505
Modified the logic of the :class:`.Rewriter` object such that it keeps a
memoization of which directives it has processed, so that it can ensure it
processes a particular directive only once, and additionally fixed
:class:`.Rewriter` so that it functions correctly for multiple-pass
autogenerate schemes, such as the one illustrated in the "multidb"
template. By tracking which directives have been processed, a
multiple-pass scheme which calls upon the :class:`.Rewriter` multiple times
for the same structure as elements are added can work without running
duplicate operations on the same elements more than once.
.. changelog::
:version: 1.1.0
:released: August 26, 2019
.. change::
:tags: change
Alembic 1.1 bumps the minimum version of SQLAlchemy to 1.1. As was the
case before, Python requirements remain at Python 2.7 or, in the 3.x series,
Python 3.4.
.. change::
:tags: change, internals
The test suite for Alembic now makes use of SQLAlchemy's testing framework
directly. Previously, Alembic had its own version of this framework that
was mostly copied from that of SQLAlchemy to enable testing with older
SQLAlchemy versions. The majority of this code is now removed so that both
projects can leverage improvements from a common testing framework.
.. change::
:tags: bug, commands
:tickets: 562
Fixed bug where the double-percent logic applied to some dialects such as
psycopg2 would be rendered in ``--sql`` mode, by allowing dialect options
to be passed through to the dialect used to generate SQL and then providing
``paramstyle="named"`` so that percent signs need not be doubled. For
users having this issue, existing env.py scripts need to add
``dialect_opts={"paramstyle": "named"}`` to their offline
context.configure(). See the ``alembic/templates/generic/env.py`` template
for an example.
.. change::
:tags: bug, py3k
Fixed use of the deprecated "imp" module, which is used to detect pep3147
availability as well as to locate .pyc files, which started emitting
deprecation warnings during the test suite. The warnings were not being
emitted earlier during the test suite, the change is possibly due to
changes in py.test itself but this is not clear. The check for pep3147 is
set to True for any Python version 3.5 or greater now and importlib is used
when available. Note that some dependencies such as distutils may still be
emitting this warning. Tests are adjusted to accommodate for dependencies
that emit the warning as well.
.. change::
:tags: bug, mysql
:tickets: 594
Fixed issue where emitting a change of column name for MySQL did not
preserve the column comment, even if it were specified as existing_comment.
.. change::
:tags: bug, setup
:tickets: 592
Removed the "python setup.py test" feature in favor of a straight run of
"tox". Per Pypa / pytest developers, "setup.py" commands are in general
headed towards deprecation in favor of tox. The tox.ini script has been
updated such that running "tox" with no arguments will perform a single run
of the test suite against the default installed Python interpreter.
.. seealso::
https://github.com/pypa/setuptools/issues/1684
https://github.com/pytest-dev/pytest/issues/5534
.. change::
:tags: usecase, commands
:tickets: 571
The "alembic init" command will now proceed if the target directory exists
as long as it's still empty. Previously, it would not proceed if the
directory existed. The new behavior is modeled from what git does, to
accommodate for container or other deployments where an Alembic target
directory may need to be already mounted instead of being created with
alembic init. Pull request courtesy Aviskar KC.
.. changelog::
:version: 1.0.11
:released: June 25, 2019
.. change::
:tags: bug, sqlite, autogenerate, batch
:tickets: 579
SQLite server default reflection will ensure parentheses surround a
column default expression that is detected as being a non-constant
expression, such as a ``datetime()`` default, to accommodate for the
requirement that SQL expressions have to be parenthesized when being sent
as DDL. Parentheses are not added to constant expressions to allow for
maximum cross-compatibility with other dialects and existing test suites
(such as Alembic's), which necessarily entails scanning the expression to
check for constant numeric and string values. The logic is added to the
two "reflection->DDL round trip" paths which are currently autogenerate and
batch migration. Within autogenerate, the logic is on the rendering side,
whereas in batch the logic is installed as a column reflection hook.
.. change::
:tags: bug, sqlite, autogenerate
:tickets: 579
Improved SQLite server default comparison to accommodate for a ``text()``
construct that added parentheses directly vs. a construct that relied
upon the SQLAlchemy SQLite dialect to render the parentheses, as well
as improved support for various forms of constant expressions such as
values that are quoted vs. non-quoted.
.. change::
:tags: bug, autogenerate
Fixed bug where the "literal_binds" flag was not being set when
autogenerate would create a server default value, meaning server default
comparisons would fail for functions that contained literal values.
.. change::
:tags: bug, mysql
:tickets: 554
Added support for MySQL "DROP CHECK", which is added as of MySQL 8.0.16,
separate from MariaDB's "DROP CONSTRAINT" for CHECK constraints. The MySQL
Alembic implementation now checks for "MariaDB" in server_version_info to
decide which one to use.
.. change::
:tags: bug, mysql, operations
:tickets: 564
Fixed issue where MySQL databases need to use CHANGE COLUMN when altering a
server default of CURRENT_TIMESTAMP, NOW() and probably other functions
that are only usable with DATETIME/TIMESTAMP columns. While MariaDB
supports both CHANGE and ALTER COLUMN in this case, MySQL databases only
support CHANGE. So the new logic is that if the server default change is
against a DateTime-oriented column, the CHANGE format is used
unconditionally, as in the vast majority of cases the server default is to
be CURRENT_TIMESTAMP which may also be potentially bundled with an "ON
UPDATE CURRENT_TIMESTAMP" directive, which SQLAlchemy does not currently
support as a distinct field. The fix additionally improves the server
default comparison logic when the "ON UPDATE" clause is present and
there are parentheses to be adjusted for, as is the case on some MariaDB
versions.
.. change::
:tags: bug, environment
Warnings emitted by Alembic now include a default stack level of 2, and in
some cases it's set to 3, in order to help warnings indicate more closely
where they are originating from. Pull request courtesy Ash Berlin-Taylor.
.. change::
:tags: bug, py3k
:tickets: 563
Replaced the Python compatibility routines for ``getargspec()`` with a fully
vendored version based on ``getfullargspec()`` from Python 3.3.
Originally, Python was emitting deprecation warnings for this function in
Python 3.8 alphas. While this change was reverted, it was observed that
Python 3 implementations for ``getfullargspec()`` are an order of magnitude
slower as of the 3.4 series where it was rewritten against ``Signature``.
While Python plans to improve upon this situation, SQLAlchemy projects for
now are using a simple replacement to avoid any future issues.
.. changelog::
:version: 1.0.10
:released: April 28, 2019
.. change::
:tags: bug, commands
:tickets: 552
Fixed bug introduced in release 0.9.0 where the helptext for commands
inadvertently got expanded to include function docstrings from the
command.py module. The logic has been adjusted to only refer to the first
line(s) preceding the first line break within each docstring, as was the
original intent.
.. change::
:tags: bug, operations, mysql
:tickets: 551
Added an assertion in :meth:`.RevisionMap.get_revisions` and other methods
which ensures revision numbers are passed as strings or collections of
strings. Driver issues particularly on MySQL may inadvertently be passing
bytes here which leads to failures later on.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 553
Fixed bug when using the
:paramref:`.EnvironmentContext.configure.compare_server_default` flag set
to ``True`` where a server default that is introduced in the table metadata
on an ``Integer`` column, where there is no existing server default in the
database, would raise a ``TypeError``.
.. changelog::
:version: 1.0.9
:released: April 15, 2019
.. change::
:tags: bug, operations
:tickets: 548
Simplified the internal scheme used to generate the ``alembic.op`` namespace
to no longer attempt to generate full method signatures (e.g. rather than
generic ``*args, **kw``) as this was not working in most cases anyway, while
in rare circumstances it would in fact sporadically have access to the real
argument names and then fail when generating the function due to missing
symbols in the argument signature.
.. changelog::
:version: 1.0.8
:released: March 4, 2019
.. change::
:tags: bug, operations
:tickets: 528
Removed use of deprecated ``force`` parameter for SQLAlchemy quoting
functions as this parameter will be removed in a future release.
Pull request courtesy Parth Shandilya (ParthS007).
.. change::
:tags: bug, autogenerate, postgresql, py3k
:tickets: 541
Fixed issue where server default comparison on the PostgreSQL dialect would
fail for a blank string on Python 3.7 only, due to a change in regular
expression behavior in Python 3.7.
.. changelog::
:version: 1.0.7
:released: January 25, 2019
.. change::
:tags: bug, autogenerate
:tickets: 529
Fixed issue in new comment support where autogenerated Python code
for comments wasn't using ``repr()`` thus causing issues with
quoting. Pull request courtesy Damien Garaud.
.. changelog::
:version: 1.0.6
:released: January 13, 2019
.. change::
:tags: feature, operations
:tickets: 422
Added Table and Column level comments for supported backends.
New methods :meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` are added. New arguments
:paramref:`.Operations.alter_column.comment` and
:paramref:`.Operations.alter_column.existing_comment` are added to
:meth:`.Operations.alter_column`. Autogenerate support is also added
to ensure comment add/drops from tables and columns are generated as well
as that :meth:`.Operations.create_table`, :meth:`.Operations.add_column`
both include the comment field from the source :class:`.Table`
or :class:`.Column` object.
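A sketch of the new methods and arguments in use (the table, column, and
comment text are hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.create_table_comment("account", "user accounts")
        op.alter_column(
            "account",
            "balance",
            existing_type=sa.Numeric(10, 2),
            comment="current balance in USD",
        )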
.. changelog::
:version: 1.0.5
:released: November 27, 2018
.. change::
:tags: bug, py3k
:tickets: 507
Resolved remaining Python 3 deprecation warnings, covering
the use of inspect.formatargspec() with a vendored version
copied from the Python standard library, importing
collections.abc above Python 3.3 when testing against abstract
base classes, fixed one occurrence of log.warn(), as well as a few
invalid escape sequences.
.. changelog::
:version: 1.0.4
:released: November 27, 2018
.. change::
:tags: change
Code hosting has been moved to GitHub, at
https://github.com/sqlalchemy/alembic. Additionally, the
main Alembic website documentation URL is now
https://alembic.sqlalchemy.org.
.. changelog::
:version: 1.0.3
:released: November 14, 2018
.. change::
:tags: bug, mssql
:tickets: 516
Fixed regression caused by :ticket:`513`, where the logic to consume
``mssql_include`` was not correctly interpreting the case where the flag
was not present, breaking the ``op.create_index`` directive for SQL Server
as a whole.
.. changelog::
:version: 1.0.2
:released: October 31, 2018
.. change::
:tags: bug, autogenerate
:tickets: 515
The ``system=True`` flag on :class:`.Column`, used primarily in conjunction
with the Postgresql "xmin" column, now renders within the autogenerate
render process, allowing the column to be excluded from DDL. Additionally,
adding a system=True column to a model will produce no autogenerate diff as
this column is implicitly present in the database.
.. change::
:tags: bug, mssql
:tickets: 513
Fixed issue where usage of the SQL Server ``mssql_include`` option within a
:meth:`.Operations.create_index` would raise a KeyError, as the additional
column(s) need to be added to the table object used by the construct
internally.
.. changelog::
:version: 1.0.1
:released: October 17, 2018
.. change::
:tags: bug, commands
:tickets: 497
Fixed an issue where revision descriptions were essentially
being formatted twice. If a revision description contained
characters like ``%``, writing output to stdout would fail because
the call to ``config.print_stdout`` attempted to format any
additional args passed to the function.
The fix now only applies string formatting if any args are provided
along with the output text.
.. change::
:tags: bug, autogenerate
:tickets: 512
Fixed issue where removed method ``union_update()`` was used when a
customized :class:`.MigrationScript` instance included entries in the
``.imports`` data member, raising an AttributeError.
.. changelog::
:version: 1.0.0
:released: July 13, 2018
.. change::
:tags: feature, general
:tickets: 491
For Alembic 1.0, Python 2.6 / 3.3 support is being dropped, allowing a
fixed setup.py to be built as well as universal wheels. Pull request
courtesy Hugo.
.. change::
:tags: feature, general
With the 1.0 release, Alembic's minimum SQLAlchemy support version
moves to 0.9.0, previously 0.7.9.
.. change::
:tags: bug, batch
:tickets: 502
Fixed issue in batch where dropping a primary key column, then adding it
back under the same name but without the primary_key flag, would not remove
it from the existing PrimaryKeyConstraint. If a new PrimaryKeyConstraint
is added, it is used as-is, as was the case before.
.. changelog::
:version: 0.9.10
:released: June 29, 2018
.. change::
:tags: bug, autogenerate
The "op.drop_constraint()" directive will now render using ``repr()`` for
the schema name, in the same way that "schema" renders for all the other op
directives. Pull request courtesy Denis Kataev.
.. change::
:tags: bug, autogenerate
:tickets: 494
Added basic capabilities for external dialects to support rendering of
"nested" types, like arrays, in a manner similar to that of the Postgresql
dialect.
.. change::
:tags: bug, autogenerate
Fixed issue where "autoincrement=True" would not render for a column that
specified it, since as of SQLAlchemy 1.1 this is no longer the default
value for "autoincrement". Note the behavior only takes effect against the
SQLAlchemy 1.1.0 and higher; for pre-1.1 SQLAlchemy, "autoincrement=True"
does not render as was the case before. Pull request courtesy Elad Almos.
.. changelog::
:version: 0.9.9
:released: March 22, 2018
.. change::
:tags: feature, commands
:tickets: 481
Added new flag ``--indicate-current`` to the ``alembic history`` command.
When listing versions, it will include the token "(current)" to indicate
the given version is a current head in the target database. Pull request
courtesy Kazutaka Mise.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 455
The fix for :ticket:`455` in version 0.9.6 involving MySQL server default
comparison was entirely non-functional, as the test itself was also broken
and didn't reveal that it wasn't working. The regular expression to compare
server default values like CURRENT_TIMESTAMP to current_timestamp() is
repaired.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 483
Fixed bug where MySQL server default comparisons were basically not working
at all due to incorrect regexp added in :ticket:`455`. Also accommodates
for MariaDB 10.2 quoting differences in reporting integer based server
defaults.
.. change::
:tags: bug, operations, mysql
:tickets: 487
Fixed bug in ``op.drop_constraint()`` for MySQL where
quoting rules would not be applied to the constraint name.
.. changelog::
:version: 0.9.8
:released: February 16, 2018
.. change::
:tags: bug, runtime
:tickets: 482
Fixed bug where the :meth:`.Script.as_revision_number` method
did not accommodate for the 'heads' identifier, which in turn
caused the :meth:`.EnvironmentContext.get_head_revisions`
and :meth:`.EnvironmentContext.get_revision_argument` methods
to be not usable when multiple heads were present.
The :meth:`.EnvironmentContext.get_head_revisions` method returns
a tuple in all cases as documented.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 478
Fixed bug where autogenerate of :class:`.ExcludeConstraint`
would render a raw quoted name for a Column that has case-sensitive
characters, which when invoked as an inline member of the Table
would produce a stack trace that the quoted name is not found.
An incoming Column object is now rendered as ``sa.column('name')``.
.. change::
:tags: bug, autogenerate
:tickets: 468
Fixed bug where the indexes would not be included in a
migration that was dropping the owning table. The fix
now will also emit DROP INDEX for the indexes ahead of time,
but more importantly will include CREATE INDEX in the
downgrade migration.
.. change::
:tags: bug, postgresql
:tickets: 480
Fixed the autogenerate of the module prefix
when rendering the text_type parameter of
postgresql.HSTORE, in much the same way that
we do for ARRAY's type and JSON's text_type.
.. change::
:tags: bug, mysql
:tickets: 479
Added support for DROP CONSTRAINT to the MySQL Alembic
dialect to support MariaDB 10.2 which now has real
CHECK constraints. Note this change does **not**
add autogenerate support, only support for op.drop_constraint()
to work.
.. changelog::
:version: 0.9.7
:released: January 16, 2018
.. change::
:tags: bug, autogenerate
:tickets: 472
Fixed regression caused by :ticket:`421` which would
cause case-sensitive quoting rules to interfere with the
comparison logic for index names, thus causing indexes to show
as added for indexes that have case-sensitive names. Works with
SQLAlchemy 0.9 and later series.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 461
Fixed bug where autogenerate would produce a DROP statement for the index
implicitly created by a Postgresql EXCLUDE constraint, rather than skipping
it as is the case for indexes implicitly generated by unique constraints.
Makes use of SQLAlchemy 1.0.x's improved "duplicates index" metadata and
requires at least SQLAlchemy version 1.0.x to function correctly.
.. changelog::
:version: 0.9.6
:released: October 13, 2017
.. change::
:tags: bug, commands
:tickets: 458
Fixed a few Python3.6 deprecation warnings by replacing ``StopIteration``
with ``return``, as well as using ``getfullargspec()`` instead of
``getargspec()`` under Python 3.
.. change::
:tags: bug, commands
:tickets: 441
In addition to the fix for :ticket:`441` in 0.9.5, we had forgotten to also
filter for the ``+`` sign in migration names, which likewise breaks due to the
relative migrations feature.
.. change::
:tags: bug, autogenerate
:tickets: 442
Fixed bug expanding upon the fix for
:ticket:`85` which adds the correct module import to the
"inner" type for an ``ARRAY`` type, the fix now accommodates for the
generic ``sqlalchemy.types.ARRAY`` type added in SQLAlchemy 1.1,
rendering the inner type correctly regardless of whether or not the
Postgresql dialect is present.
.. change::
:tags: bug, mysql
:tickets: 455
Fixed bug where server default comparison of CURRENT_TIMESTAMP would fail
on MariaDB 10.2 due to a change in how the function is
represented by the database during reflection.
.. change::
:tags: bug, autogenerate
Fixed bug where comparison of ``Numeric`` types would produce
a difference if the Python-side ``Numeric`` inadvertently specified
a non-None "scale" with a "precision" of None, even though this ``Numeric``
type will pass over the "scale" argument when rendering. Pull request
courtesy Ivan Mmelnychuk.
.. change::
:tags: feature, commands
:tickets: 447
The ``alembic history`` command will now make use of the revision
environment ``env.py`` unconditionally if the ``revision_environment``
configuration flag is set to True. Previously, the environment would
only be invoked if the history specification were against a database-stored
revision token.
.. change::
:tags: bug, batch
:tickets: 457
The name of the temporary table in batch mode is now generated
off of the original table name itself, to avoid conflicts for the
unusual case of multiple batch operations running against the same
database schema at the same time.
.. change::
:tags: bug, autogenerate
:tickets: 456
A :class:`.ForeignKeyConstraint` can now render correctly if the
``link_to_name`` flag is set, as it will not attempt to resolve the name
from a "key" in this case. Additionally, the constraint will render
as-is even if the remote column name isn't present on the referenced
remote table.
.. change::
:tags: bug, runtime, py3k
:tickets: 449
Reworked "sourceless" system to be fully capable of handling any
combination of: Python2/3x, pep3149 or not, PYTHONOPTIMIZE or not,
for locating and loading both env.py files as well as versioning files.
This includes: locating files inside of ``__pycache__`` as well as listing
out version files that might be only in ``versions/__pycache__``, deduplicating
version files that may be in ``versions/__pycache__`` and ``versions/``
at the same time, correctly looking for .pyc or .pyo files based on
if pep488 is present or not. The latest Python3x deprecation warnings
involving importlib are also corrected.
.. changelog::
:version: 0.9.5
:released: August 9, 2017
.. change::
:tags: bug, commands
:tickets: 441
A :class:`.CommandError` is raised if the "--rev-id" passed to the
:func:`.revision` command contains dashes or at-signs, as this interferes
with the command notation used to locate revisions.
.. change::
:tags: bug, postgresql
:tickets: 424
Added support for the dialect-specific keyword arguments
to :meth:`.Operations.drop_index`. This includes support for
``postgresql_concurrently`` and others.
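For example, a sketch of dropping an index concurrently on Postgresql;
the index and table names are hypothetical, and CONCURRENTLY generally
must also run outside of a transaction block::

    from alembic import op

    op.drop_index(
        "ix_user_email",
        table_name="user",
        postgresql_concurrently=True,
    )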
.. change::
:tags: bug, commands
Fixed bug in timezone feature introduced in
:ticket:`425` when the creation
date in a revision file is calculated, to
accommodate for timezone names that contain
mixed-case characters in their name as opposed
to all uppercase. Pull request courtesy Nils
Philippsen.
.. changelog::
:version: 0.9.4
:released: July 31, 2017
.. change::
:tags: bug, runtime
Added an additional attribute to the new
:paramref:`.EnvironmentContext.configure.on_version_apply` API,
:attr:`.MigrationInfo.up_revision_ids`, to accommodate for the uncommon
case of the ``alembic stamp`` command being used to move from multiple
branches down to a common branchpoint; there will be multiple
"up" revisions in this one case.
.. changelog::
:version: 0.9.3
:released: July 6, 2017
.. change::
:tags: feature, runtime
Added a new callback hook
:paramref:`.EnvironmentContext.configure.on_version_apply`,
which allows user-defined code to be invoked each time an individual
upgrade, downgrade, or stamp operation proceeds against a database.
Pull request courtesy John Passaro.
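A sketch of wiring the hook up in ``env.py``; the callable body is
illustrative only, and the hook receives named arguments including
``ctx``, ``step``, ``heads`` and ``run_args``::

    def report_version_apply(ctx, step, heads, run_args, **kw):
        # invoked once per individual upgrade/downgrade/stamp operation
        print("migration step applied:", step)

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        on_version_apply=report_version_apply,
    )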
.. change:: 433
:tags: bug, autogenerate
:tickets: 433
Fixed bug where autogen comparison of a :class:`.Variant` datatype
would not compare to the dialect level type for the "default"
implementation of the :class:`.Variant`, returning the type as changed
between database and table metadata.
.. change:: 431
:tags: bug, tests
:tickets: 431
Fixed unit tests to run correctly under the SQLAlchemy 1.0.x series
prior to version 1.0.10 where a particular bug involving Postgresql
exclude constraints was fixed.
.. changelog::
:version: 0.9.2
:released: May 18, 2017
.. change:: 429
:tags: bug, mssql
:tickets: 429
Repaired :meth:`.Operations.rename_table` for SQL Server when the
target table is in a remote schema; the schema name is now omitted from
the "new name" argument.
.. change:: 425
:tags: feature, commands
:tickets: 425
Added a new configuration option ``timezone``, a string timezone name
that will be applied to the create date timestamp rendered
inside the revision file as made available to the ``file_template`` used
to generate the revision filename. Note this change adds the
``python-dateutil`` package as a dependency.
.. change:: 421
:tags: bug, autogenerate
:tickets: 421
The autogenerate compare scheme now takes into account the name truncation
rules applied by SQLAlchemy's DDL compiler to the names of the
:class:`.Index` object, when these names are dynamically truncated
due to a too-long identifier name. As the identifier truncation is
deterministic, applying the same rule to the metadata name allows
correct comparison to the database-derived name.
.. change:: 419
:tags: bug, environment
:tickets: 419
A warning is emitted when an object that's not a
:class:`~sqlalchemy.engine.Connection` is passed to
:meth:`.EnvironmentContext.configure`. For the case of a
:class:`~sqlalchemy.engine.Engine` passed, the check for "in transaction"
introduced in version 0.9.0 has been relaxed to work in the case of an
attribute error, as some users appear to be passing an
:class:`~sqlalchemy.engine.Engine` and not a
:class:`~sqlalchemy.engine.Connection`.
.. changelog::
:version: 0.9.1
:released: March 1, 2017
.. change:: 417
:tags: bug, commands
:tickets: 417, 369
An adjustment to the bug fix for :ticket:`369` to accommodate for
env.py scripts that use an enclosing transaction distinct from the
one that the context provides, so that the check for "didn't commit
the transaction" doesn't trigger in this scenario.
.. changelog::
:version: 0.9.0
:released: February 28, 2017
.. change:: 38
:tags: feature, autogenerate
:tickets: 38
The :paramref:`.EnvironmentContext.configure.target_metadata` parameter
may now be optionally specified as a sequence of :class:`.MetaData`
objects instead of a single :class:`.MetaData` object. The
autogenerate process will process the sequence of :class:`.MetaData`
objects in order.
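A sketch of this in ``env.py``, assuming two hypothetical ``MetaData``
collections imported from an application::

    from myapp.core.models import metadata as core_metadata      # hypothetical
    from myapp.plugins.models import metadata as plugin_metadata  # hypothetical

    context.configure(
        connection=connection,
        target_metadata=[core_metadata, plugin_metadata],
    )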
.. change:: 369
:tags: bug, commands
:tickets: 369
A :class:`.CommandError` is now raised when a migration file opens
a database transaction and does not close/commit/rollback, when
the backend database or environment options also specify transactional_ddl
is False. When transactional_ddl is not in use, Alembic doesn't
close any transaction so a transaction opened by a migration file
will cause the following migrations to fail to apply.
.. change:: 413
:tags: bug, autogenerate, mysql
:tickets: 413
The ``autoincrement=True`` flag is now rendered within the
:meth:`.Operations.alter_column` operation if the source column indicates
that this flag should be set to True. The behavior is sensitive to
the SQLAlchemy version in place, as the "auto" default option is new
in SQLAlchemy 1.1. When the source column indicates autoincrement
as True or "auto", the flag will render as True if the original column
contextually indicates that it should have "autoincrement" keywords,
and when the source column explicitly sets it to False, this is also
rendered. The behavior is intended to preserve the AUTO_INCREMENT flag
on MySQL as the column is fully recreated on this backend. Note that this
flag does **not** support alteration of a column's "autoincrement" status,
as this is not portable across backends.
.. change:: 411
:tags: bug, postgresql
:tickets: 411
Fixed bug where Postgresql JSON/JSONB types rendered on SQLAlchemy
1.1 would render the "astext_type" argument which defaults to
the ``Text()`` type without the module prefix, similarly to the
issue with ARRAY fixed in :ticket:`85`.
.. change:: 85
:tags: bug, postgresql
:tickets: 85
Fixed bug where Postgresql ARRAY type would not render the import prefix
for the inner type; additionally, user-defined renderers take place
for the inner type as well as the outer type. Pull request courtesy
Paul Brackin.
.. change:: process_revision_directives_command
:tags: feature, autogenerate
Added a keyword argument ``process_revision_directives`` to the
:func:`.command.revision` API call. This function acts in the
same role as the environment-level
:paramref:`.EnvironmentContext.configure.process_revision_directives`,
and allows API use of the
command to drop in an ad-hoc directive process function. This
function can be used among other things to place a complete
:class:`.MigrationScript` structure in place.
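A sketch of API use with the new keyword; the hook body is illustrative
only, where ``directives[0]`` is the :class:`.MigrationScript`::

    from alembic import command
    from alembic.config import Config

    def process_revision_directives(context, revision, directives):
        script = directives[0]
        script.message += " (reviewed)"  # illustrative adjustment

    cfg = Config("alembic.ini")
    command.revision(
        cfg,
        message="add account table",
        autogenerate=True,
        process_revision_directives=process_revision_directives,
    )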
.. change:: 412
:tags: feature, postgresql
:tickets: 412
Added support for Postgresql EXCLUDE constraints, including the
operation directive :meth:`.Operations.create_exclude_constraint`
as well as autogenerate render support for the ``ExcludeConstraint``
object as present in a ``Table``. Autogenerate detection for an EXCLUDE
constraint added or removed to/from an existing table is **not**
implemented as the SQLAlchemy Postgresql dialect does not yet support
reflection of EXCLUDE constraints.
Additionally, unknown constraint types now warn when
encountered within an autogenerate action rather than raise.
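A minimal sketch of the new directive; the names and expressions here are
hypothetical::

    from alembic import op

    op.create_exclude_constraint(
        "user_excl",
        "user",
        ("period", "&&"),
        where="status != 'archived'",
    )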
.. change:: fk_schema_compare
:tags: bug, operations
Fixed bug in :func:`.ops.create_foreign_key` where the internal table
representation would not be created properly if the foreign key referred
to a table in a different schema of the same name. Pull request
courtesy Konstantin Lebedev.
.. changelog::
:version: 0.8.10
:released: January 17, 2017
.. change:: 406
:tags: bug, versioning
:tickets: 406
The alembic_version table, when initially created, now establishes a
primary key constraint on the "version_num" column, to suit database
engines that don't support tables without primary keys. This behavior
can be controlled using the parameter
:paramref:`.EnvironmentContext.configure.version_table_pk`. Note that
this change only applies to the initial creation of the alembic_version
table; it does not impact any existing alembic_version table already
present.
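A sketch of opting out of the primary key in ``env.py``, should the prior
behavior be desired::

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        version_table_pk=False,  # omit the PK on alembic_version
    )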
.. change:: 402
:tags: bug, batch
:tickets: 402
Fixed bug where doing ``batch_op.drop_constraint()`` against the
primary key constraint would fail to remove the "primary_key" flag
from the column, resulting in the constraint being recreated.
.. change:: update_uq_dedupe
:tags: bug, autogenerate, oracle
Adjusted the logic originally added for :ticket:`276` that detects MySQL
unique constraints which are actually unique indexes to be generalized
for any dialect that has this behavior, for SQLAlchemy version 1.0 and
greater. This is to allow for upcoming SQLAlchemy support for unique
constraint reflection for Oracle, which also has no dedicated concept of
"unique constraint" and instead establishes a unique index.
.. change:: 356
:tags: bug, versioning
:tickets: 356
Added a file ignore for Python files of the form ``.#<name>.py``,
which are generated by the Emacs editor. Pull request courtesy
Markus Mattes.
.. changelog::
:version: 0.8.9
:released: November 28, 2016
.. change:: 393
:tags: bug, autogenerate
:tickets: 393
Adjustment to the "please adjust!" comment in the script.py.mako
template so that the generated comment starts with a single pound
sign, appeasing flake8.
.. change::
:tags: bug, batch
:tickets: 391
Batch mode will not use CAST() to copy data if ``type_`` is given, so long
as the basic type affinity matches that of the existing type. This is to
avoid SQLite's CAST of TIMESTAMP which results in truncation of the
data, in those cases where the user needs to add redundant ``type_`` for
other reasons.
.. change::
:tags: bug, autogenerate
:tickets: 393
Continued pep8 improvements by adding appropriate whitespace in
the base template for generated migrations. Pull request courtesy
Markus Mattes.
.. change::
:tags: bug, revisioning
Added an additional check when reading in revision files to detect
if the same file is being read twice; this can occur if the same directory
or a symlink equivalent is present more than once in version_locations.
A warning is now emitted and the file is skipped. Pull request courtesy
Jiri Kuncar.
.. change::
:tags: bug, autogenerate
:tickets: 395
Fixed bug where usage of a custom TypeDecorator which returns a
per-dialect type via :meth:`.TypeDecorator.load_dialect_impl` that differs
significantly from the default "impl" for the type decorator would fail
to compare correctly during autogenerate.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 392
Fixed bug in Postgresql "functional index skip" behavior where a
functional index that ended in ASC/DESC wouldn't be detected as something
we can't compare in autogenerate, leading to duplicate definitions
in autogenerated files.
.. change::
:tags: bug, versioning
Fixed bug where the "base" specifier, as in "base:head", could not
be used explicitly when ``--sql`` mode was present.
.. changelog::
:version: 0.8.8
:released: September 12, 2016
.. change::
:tags: autogenerate
The imports in the default script.py.mako are now at the top
so that flake8 editors don't complain by default. PR courtesy
Guilherme Mansur.
.. change::
:tags: feature, operations, postgresql
:tickets: 292
Added support for the USING clause to the ALTER COLUMN operation
for Postgresql. Support is via the
:paramref:`.op.alter_column.postgresql_using`
parameter. Pull request courtesy Frazer McLean.
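A sketch of the parameter in use within a migration; the table, column
and USING expression are hypothetical::

    import sqlalchemy as sa
    from alembic import op

    op.alter_column(
        "account",
        "amount",
        existing_type=sa.String(20),
        type_=sa.Numeric(10, 2),
        postgresql_using="amount::numeric(10, 2)",
    )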
.. change::
:tags: feature, autogenerate
Autogenerate with type comparison enabled will pick up on the timezone
setting changing between DateTime types. Pull request courtesy
David Szotten.
.. changelog::
:version: 0.8.7
:released: July 26, 2016
.. change::
:tags: bug, versioning
:tickets: 336
Fixed bug where upgrading to the head of a branch which is already
present would fail, only if that head were also the dependency
of a different branch that is also upgraded, as the revision system
would see this as trying to go in the wrong direction. The check
here has been refined to distinguish between same-branch revisions
out of order vs. movement along sibling branches.
.. change::
:tags: bug, versioning
:tickets: 379
Adjusted the version traversal on downgrade
such that we can downgrade to a version that is a dependency for
a version in a different branch, *without* needing to remove that
dependent version as well. Previously, the target version would be
seen as a "merge point" for it's normal up-revision as well as the
dependency. This integrates with the changes for :ticket:`377`
and :ticket:`378` to improve treatment of branches with dependencies
overall.
.. change::
:tags: bug, versioning
:tickets: 377
Fixed bug where a downgrade to a version that is also a dependency
to a different branch would fail, as the system attempted to treat
this as an "unmerge" of a merge point, when in fact it doesn't have
the other side of the merge point available for update.
.. change::
:tags: bug, versioning
:tickets: 378
Fixed bug where the "alembic current" command wouldn't show a revision
as a current head if it were also a dependency of a version in a
different branch that's also applied. Extra logic is added to
extract "implied" versions of different branches from the top-level
versions listed in the alembic_version table.
.. change::
:tags: bug, versioning
Fixed bug where a repr() or str() of a Script object would fail
if the script had multiple dependencies.
.. change::
:tags: bug, autogenerate
Fixed bug in autogen where if the DB connection sends the default
schema as "None", this "None" would be removed from the list of
schemas to check if include_schemas were set. This could possibly
impact using include_schemas with SQLite.
.. change::
:tags: bug, batch
Small adjustment made to the batch handling for reflected CHECK
constraints to accommodate for SQLAlchemy 1.1 now reflecting these.
Batch mode still does not support CHECK constraints from the reflected
table as these can't be easily differentiated from the ones created
by types such as Boolean.
.. changelog::
:version: 0.8.6
:released: April 14, 2016
.. change::
:tags: bug, commands
:tickets: 367
Errors which occur within the Mako render step are now intercepted
and raised as CommandErrors like other failure cases; the Mako
exception itself is written using template-line formatting to
a temporary file which is named in the exception message.
.. change::
:tags: bug, postgresql
:tickets: 365
Added a fix to Postgresql server default comparison which first checks
if the text of the default is identical to the original, before attempting
to actually run the default. This accommodates for default-generation
functions that generate a new value each time such as a uuid function.
.. change::
:tags: bug, batch
:tickets: 361
Fixed bug introduced by the fix for :ticket:`338` in version 0.8.4
where a server default could no longer be dropped in batch mode.
Pull request courtesy Martin Domke.
.. change::
:tags: bug, batch, mssql
Fixed bug where SQL Server arguments for drop_column() would not
be propagated when running under a batch block. Pull request
courtesy Michal Petrucha.
.. changelog::
:version: 0.8.5
:released: March 9, 2016
.. change::
:tags: bug, autogenerate
:tickets: 335
Fixed bug where the columns rendered in a ``PrimaryKeyConstraint``
in autogenerate would inappropriately render the "key" of the
column, not the name. Pull request courtesy Jesse Dhillon.
.. change::
:tags: bug, batch
:tickets: 354
Repaired batch migration support for "schema" types which generate
constraints, in particular the ``Boolean`` datatype which generates
a CHECK constraint. Previously, an alter column operation with this
type would fail to correctly accommodate for the CHECK constraint
on change both from and to this type. In the former case the operation
would fail entirely, in the latter, the CHECK constraint would
not get generated. Both of these issues are repaired.
.. change::
:tags: bug, mysql
:tickets: 355
Changing a schema type such as ``Boolean`` to a non-schema type would
emit a drop constraint operation which emits ``NotImplementedError`` for
the MySQL dialect. This drop constraint operation is now skipped when
the constraint originates from a schema type.
.. changelog::
:version: 0.8.4
:released: December 15, 2015
.. change::
:tags: feature, versioning
A major improvement to the hash id generation function, which for some
reason used an awkward arithmetic formula against uuid4() that produced
values that tended to start with the digits 1-4. Replaced with a
simple substring approach which provides an even distribution. Pull
request courtesy Antti Haapala.
.. change::
:tags: feature, autogenerate
Added an autogenerate renderer for the :class:`.ExecuteSQLOp` operation
object; only renders if given a plain SQL string, otherwise raises
NotImplementedError. Can be of help with custom autogenerate
sequences that includes straight SQL execution. Pull request courtesy
Jacob Magnusson.
.. change::
:tags: bug, batch
:tickets: 345
Batch mode generates a FOREIGN KEY constraint that is self-referential
using the ultimate table name, rather than ``_alembic_batch_temp``.
When the table is renamed from ``_alembic_batch_temp`` back to the
original name, the FK now points to the right name. This
will **not** work if referential integrity is being enforced (eg. SQLite
"PRAGMA FOREIGN_KEYS=ON") since the original table is dropped and
the new table then renamed to that name, however this is now consistent
with how foreign key constraints on **other** tables already operate
with batch mode; these don't support batch mode if referential integrity
is enabled in any case.
.. change::
:tags: bug, autogenerate
:tickets: 341
Added a type-level comparator that distinguishes :class:`.Integer`,
:class:`.BigInteger`, and :class:`.SmallInteger` types and
dialect-specific types; these all have "Integer" affinity so previously
all compared as the same.
.. change::
:tags: bug, batch
:tickets: 338
Fixed bug where the ``server_default`` parameter of ``alter_column()``
would not function correctly in batch mode.
.. change::
:tags: bug, autogenerate
:tickets: 337
Adjusted the rendering for index expressions such that a :class:`.Column`
object present in the source :class:`.Index` will not be rendered
as table-qualified; e.g. the column name will be rendered alone.
Table-qualified names here were failing on systems such as Postgresql.
.. changelog::
:version: 0.8.3
:released: October 16, 2015
.. change::
:tags: bug, autogenerate
:tickets: 332
Fixed an 0.8 regression whereby the "imports" dictionary member of
the autogen context was removed; this collection is documented in the
"render custom type" documentation as a place to add new imports.
The member is now known as
:attr:`.AutogenContext.imports` and the documentation is repaired.
.. change::
:tags: bug, batch
:tickets: 333
Fixed bug in batch mode where a table that had pre-existing indexes
would create the same index on the new table with the same name,
which on SQLite produces a naming conflict as index names are in a
global namespace on that backend. Batch mode now defers the production
of both existing and new indexes until after the entire table transfer
operation is complete, which also means those indexes no longer take
effect during the INSERT from SELECT section as well; the indexes
are applied in a single step afterwards.
.. change::
:tags: bug, tests
Added "pytest-xdist" as a tox dependency, so that the -n flag
in the test command works if this is not already installed.
Pull request courtesy Julien Danjou.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 324
Fixed issue in PG server default comparison where model-side defaults
configured with Python unicode literals would leak the "u" character
from a ``repr()`` into the SQL used for comparison, creating an invalid
SQL expression, as the server-side comparison feature in PG currently
repurposes the autogenerate Python rendering feature to get a quoted
version of a plain string default.
.. changelog::
:version: 0.8.2
:released: August 25, 2015
.. change::
:tags: bug, autogenerate
:tickets: 321
Added workaround in new foreign key option detection feature for
MySQL's consideration of the "RESTRICT" option being the default,
for which no value is reported from the database; the MySQL impl now
corrects for when the model reports RESTRICT but the database reports
nothing. A similar rule is in the default FK comparison to accommodate
for the default "NO ACTION" setting being present in the model but not
necessarily reported by the database, or vice versa.
.. changelog::
:version: 0.8.1
:released: August 22, 2015
.. change::
:tags: feature, autogenerate
A custom :paramref:`.EnvironmentContext.configure.process_revision_directives`
hook can now generate op directives within the :class:`.UpgradeOps`
and :class:`.DowngradeOps` containers that will be generated as Python
code even when the ``--autogenerate`` flag is False; provided that
``revision_environment=True``, the full render operation will be run
even in "offline" mode.
.. change::
:tags: bug, autogenerate
Repaired the render operation for the :class:`.ops.AlterColumnOp` object
to succeed when the "existing_type" field was not present.
.. change::
:tags: bug, autogenerate
:tickets: 318
Fixed a regression in 0.8 whereby the "multidb" environment template
failed to produce independent migration script segments for the
output template. This was due to the reorganization of the script
rendering system for 0.8. To accommodate this change, the
:class:`.MigrationScript` structure will in the case of multiple
calls to :meth:`.MigrationContext.run_migrations` produce lists
for the :attr:`.MigrationScript.upgrade_ops` and
:attr:`.MigrationScript.downgrade_ops` attributes; each :class:`.UpgradeOps`
and :class:`.DowngradeOps` instance keeps track of its own
``upgrade_token`` and ``downgrade_token``, and each are rendered
individually.
.. seealso::
:ref:`autogen_customizing_multiengine_revision` - additional detail
on the workings of the
:paramref:`.EnvironmentContext.configure.process_revision_directives`
parameter when multiple calls to :meth:`.MigrationContext.run_migrations`
are made.
.. change::
:tags: feature, autogenerate
:tickets: 317
Implemented support for autogenerate detection of changes in the
``ondelete``, ``onupdate``, ``initially`` and ``deferrable``
attributes of :class:`.ForeignKeyConstraint` objects on
SQLAlchemy backends that support these on reflection
(as of SQLAlchemy 1.0.8 currently Postgresql for all four,
MySQL for ``ondelete`` and ``onupdate`` only). A constraint object
that modifies these values will be reported as a "diff" and come out
as a drop/create of the constraint with the modified values.
The fields are ignored for backends which don't reflect these
attributes (as of SQLA 1.0.8 this includes SQLite, Oracle, SQL Server,
others).
.. changelog::
:version: 0.8.0
:released: August 12, 2015
.. change::
:tags: bug, batch
:tickets: 315
Fixed bug in batch mode where the ``batch_op.create_foreign_key()``
directive would be incorrectly rendered with the source table and
schema names in the argument list.
.. change::
:tags: feature, commands
Added new command ``alembic edit``. This command takes the same
arguments as ``alembic show``, however runs the target script
file within $EDITOR. Makes use of the ``python-editor`` library
in order to facilitate the handling of $EDITOR with reasonable
default behaviors across platforms. Pull request courtesy
Michel Albert.
.. change::
:tags: feature, commands
:tickets: 311
Added new multiple-capable argument ``--depends-on`` to the
``alembic revision`` command, allowing ``depends_on`` to be
established at the command line level rather than having to edit
the file after the fact. ``depends_on`` identifiers may also be
specified as branch names at the command line or directly within
the migration file. The values may be specified as partial
revision numbers from the command line which will be resolved to
full revision numbers in the output file.
.. change::
:tags: change, operations
A range of positional argument names have been changed to be
clearer and more consistent across methods within the
:class:`.Operations` namespace. The most prevalent form of name change
is that the descriptive names ``constraint_name`` and ``table_name``
are now used where previously the name ``name`` would be used.
This is in support of the newly modularized and extensible system of
operation objects in :mod:`alembic.operations.ops`.
An argument translation layer is in place
across the ``alembic.op`` namespace that will ensure that named
argument calling styles that use the old names will continue to
function by transparently translating to the new names,
also emitting a warning. This, along with the fact that these
arguments are positional in any case and aren't normally
passed with an explicit name, should ensure that the
overwhelming majority of applications should be unaffected by this
change. The *only* applications that are impacted are those that:
1. use the :class:`.Operations` object directly in some way, rather
than calling upon the ``alembic.op`` namespace, and
2. invoke the methods on :class:`.Operations` using named keyword
arguments for positional arguments like ``table_name``,
``constraint_name``, etc., which commonly were named ``name``
as of 0.7.6.
3. any application that is using named keyword arguments in place
of positional arguments for the recently added
:class:`.BatchOperations` object may also be affected.
The naming changes are documented as "versionchanged" for 0.8.0:
* :meth:`.BatchOperations.create_check_constraint`
* :meth:`.BatchOperations.create_foreign_key`
* :meth:`.BatchOperations.create_index`
* :meth:`.BatchOperations.create_unique_constraint`
* :meth:`.BatchOperations.drop_constraint`
* :meth:`.BatchOperations.drop_index`
* :meth:`.Operations.create_check_constraint`
* :meth:`.Operations.create_foreign_key`
* :meth:`.Operations.create_primary_key`
* :meth:`.Operations.create_index`
* :meth:`.Operations.create_table`
* :meth:`.Operations.create_unique_constraint`
* :meth:`.Operations.drop_constraint`
* :meth:`.Operations.drop_index`
* :meth:`.Operations.drop_table`
.. change::
:tags: feature, tests
The default test runner via "python setup.py test" is now py.test.
nose still works via run_tests.py.
.. change::
:tags: feature, operations
:tickets: 302
The internal system for Alembic operations has been reworked to now
build upon an extensible system of operation objects. New operations
can be added to the ``op.`` namespace, including that they are
available in custom autogenerate schemes.
.. seealso::
:ref:`operation_plugins`
.. change::
:tags: feature, autogenerate
:tickets: 301, 306
The internal system for autogenerate has been reworked to build upon
the extensible system of operation objects present in
:ticket:`302`. As part of this change, autogenerate now produces
a full object graph representing a list of migration scripts to
be written as well as operation objects that will render all the
Python code within them; a new hook
:paramref:`.EnvironmentContext.configure.process_revision_directives`
allows end-user code to fully customize what autogenerate will do,
including not just full manipulation of the Python steps to take
but also what file or files will be written and where. Additionally,
autogenerate is now extensible as far as database objects compared
and rendered into scripts; any new operation directive can also be
registered into a series of hooks that allow custom database/model
comparison functions to run as well as to render new operation
directives into autogenerate scripts.
.. seealso::
:ref:`alembic.autogenerate.toplevel`
.. change::
:tags: bug, versioning
:tickets: 314
Fixed bug where in the erroneous case that alembic_version contains
duplicate revisions, some commands would fail to process the
version history correctly and end up with a KeyError. The fix
allows the versioning logic to proceed, however a clear error is
emitted later when attempting to update the alembic_version table.
.. changelog::
:version: 0.7.7
:released: July 22, 2015
.. change::
:tags: bug, versioning
:tickets: 310
Fixed critical issue where a complex series of branches/merges would
bog down the iteration algorithm working over redundant nodes for
millions of cycles. An internal adjustment has been
made so that duplicate nodes are skipped within this iteration.
.. change::
:tags: feature, batch
:tickets: 305
Implemented support for :meth:`.BatchOperations.create_primary_key`
and :meth:`.BatchOperations.create_check_constraint`. Additionally,
table keyword arguments are copied from the original reflected table,
such as the "mysql_engine" keyword argument.
.. change::
:tags: bug, environment
:tickets: 300
The :meth:`.MigrationContext.stamp` method, added as part of the
versioning refactor in 0.7 as a more granular version of
:func:`.command.stamp`, now includes the "create the alembic_version
table if not present" step in the same way as the command version,
which was previously omitted.
.. change::
:tags: bug, autogenerate
:tickets: 298
Fixed bug where foreign key options including "onupdate",
"ondelete" would not render within the ``op.create_foreign_key()``
directive, even though they render within a full
``ForeignKeyConstraint`` directive.
.. change::
:tags: bug, tests
Repaired warnings that occur when running unit tests against
SQLAlchemy 1.0.5 or greater involving the "legacy_schema_aliasing"
flag.
.. changelog::
:version: 0.7.6
:released: May 5, 2015
.. change::
:tags: feature, versioning
:tickets: 297
Fixed bug where the case of multiple mergepoints that all
have the identical set of ancestor revisions would fail to be
upgradable, producing an assertion failure. Merge points were
previously assumed to always require at least an UPDATE in
alembic_revision from one of the previous revs to the new one,
however in this case, if one of the mergepoints has already
been reached, the remaining mergepoints have no row to UPDATE; therefore
they must do an INSERT of their target version.
.. change::
:tags: feature, autogenerate
:tickets: 296
Added support for type comparison functions to be not just per
environment, but also present on the custom types themselves, by
supplying a method ``compare_against_backend``.
Added a new documentation section :ref:`compare_types` describing
type comparison fully.
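A minimal sketch of a custom type supplying the new method; the type
itself is hypothetical, and returning ``None`` defers to the default
comparison::

    from sqlalchemy.types import UserDefinedType

    class MySpecialType(UserDefinedType):
        def get_col_spec(self):
            return "MYTYPE"

        def compare_against_backend(self, dialect, conn_type):
            # True: types match; False: report a change; None: use default
            return isinstance(conn_type, MySpecialType)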
.. change::
:tags: feature, operations
:tickets: 255
Added a new option
:paramref:`.EnvironmentContext.configure.literal_binds`, which
will pass the ``literal_binds`` flag into the compilation of SQL
constructs when using "offline" mode. This has the effect that
SQL objects like inserts, updates, deletes as well as textual
statements sent using ``text()`` will be compiled such that the dialect
will attempt to render literal values "inline" automatically.
Only a subset of types is typically supported; the
:meth:`.Operations.inline_literal` construct remains as the construct
used to force a specific literal representation of a value.
The :paramref:`.EnvironmentContext.configure.literal_binds` flag
is added to the "offline" section of the ``env.py`` files generated
in new environments.
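A sketch of the flag as it appears in the offline section of ``env.py``::

    def run_migrations_offline():
        context.configure(
            url=url,
            target_metadata=target_metadata,
            literal_binds=True,  # render bound values inline in --sql mode
        )
        with context.begin_transaction():
            context.run_migrations()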
.. change::
:tags: bug, batch
:tickets: 289
Fully implemented the
:paramref:`~.Operations.batch_alter_table.copy_from` parameter for
batch mode, which previously was not functioning. This allows
"batch mode" to be usable in conjunction with ``--sql``.
.. change::
:tags: bug, batch
:tickets: 287
Repaired support for the :meth:`.BatchOperations.create_index`
directive, which was mis-named internally such that the operation
within a batch context could not proceed. The create index
operation will proceed as part of a larger "batch table recreate"
operation only if
:paramref:`~.Operations.batch_alter_table.recreate` is set to
"always", or if the batch operation includes other instructions that
require a table recreate.
.. changelog::
:version: 0.7.5
:released: March 19, 2015
.. change::
:tags: bug, autogenerate
:tickets: 266
The ``--autogenerate`` option is not valid when used in conjunction
with "offline" mode, e.g. ``--sql``. This now raises a ``CommandError``,
rather than failing more deeply later on. Pull request courtesy
Johannes Erdfelt.
.. change::
:tags: bug, operations, mssql
:tickets: 284
Fixed bug where the mssql DROP COLUMN directive failed to include
modifiers such as "schema" when emitting the DDL.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 282
Postgresql "functional" indexes are necessarily skipped from the
autogenerate process, as the SQLAlchemy backend currently does not
support reflection of these structures. A warning is emitted
both from the SQLAlchemy backend as well as from the Alembic
backend for Postgresql when such an index is detected.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 276
Fixed bug where MySQL backend would report dropped unique indexes
and/or constraints as both at the same time. This is because
MySQL doesn't actually have a "unique constraint" construct that
reports differently than a "unique index", so it is present in both
lists. The net effect though is that the MySQL backend will report
a dropped unique index/constraint as an index in cases where the object
was first created as a unique constraint, if no other information
is available to make the decision. This differs from other backends
like Postgresql which can report on unique constraints and
unique indexes separately.
.. change::
:tags: bug, commands
:tickets: 269
Fixed bug where using a partial revision identifier as the
"starting revision" in ``--sql`` mode in a downgrade operation
would fail to resolve properly.
As a side effect of this change, the
:meth:`.EnvironmentContext.get_starting_revision_argument`
method will return the "starting" revision in its originally-given
"partial" form in all cases, whereas previously when
running within the :meth:`.command.stamp` command, it would have
been resolved to a full number before passing it to the
:class:`.EnvironmentContext`. The resolution of this value to
a real revision number has basically been moved to a more fundamental
level within the offline migration process.
.. change::
:tags: feature, commands
Added a new feature :attr:`.Config.attributes`, to help with the use
case of sharing state such as engines and connections on the outside
with a series of Alembic API calls; also added a new cookbook section
to describe this simple but pretty important use case.
.. seealso::
:ref:`connection_sharing`
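A minimal sketch of the pattern, assuming ``connection`` is an existing
SQLAlchemy ``Connection`` and that ``env.py`` checks ``config.attributes``
for it (the ``"connection"`` key is a convention, not a built-in)::

    from alembic import command
    from alembic.config import Config

    cfg = Config("alembic.ini")
    cfg.attributes["connection"] = connection
    command.upgrade(cfg, "head")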
.. change::
:tags: feature, environment
The format of the default ``env.py`` script has been refined a bit;
it now uses context managers not only for the scope of the transaction,
but also for connectivity from the starting engine. The engine is also
now called a "connectable" in support of the use case of an external
connection being passed in.
.. change::
:tags: feature, versioning
:tickets: 267
Added support for "alembic stamp" to work when given "heads" as an
argument, when multiple heads are present.
.. changelog::
:version: 0.7.4
:released: January 12, 2015
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 241
Repaired issue where a server default specified without ``text()``
that represented a numeric or floating point (e.g. with decimal places)
value would fail in the Postgresql-specific check for "compare server
default"; as PG accepts the value with quotes in the table specification,
it's still valid. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug, autogenerate
:tickets: 259
The rendering of a :class:`~sqlalchemy.schema.ForeignKeyConstraint`
will now ensure that the names of the source and target columns are
the database-side name of each column, and not the value of the
``.key`` attribute as may be set only on the Python side.
This is because Alembic generates the DDL for constraints
as standalone objects without the need to actually refer to an in-Python
:class:`~sqlalchemy.schema.Table` object, so there's no step that
would resolve these Python-only key names to database column names.
.. change::
:tags: bug, autogenerate
:tickets: 260
Fixed bug in foreign key autogenerate where if the in-Python table
used custom column keys (e.g. using the ``key='foo'`` kwarg to
``Column``), the comparison of existing foreign keys to those specified
in the metadata would fail, as the reflected table would not have
these keys available which to match up. Foreign key comparison for
autogenerate now ensures it's looking at the database-side names
of the columns in all cases; this matches the same functionality
within unique constraints and indexes.
.. change::
:tags: bug, autogenerate
:tickets: 261
Fixed issue in autogenerate type rendering where types that belong
to modules that have the name "sqlalchemy" in them would be mistaken
as being part of the ``sqlalchemy.`` namespace. Pull req courtesy
Bartosz Burclaf.
.. changelog::
:version: 0.7.3
:released: December 30, 2014
.. change::
:tags: bug, versioning
:tickets: 258
Fixed regression in new versioning system where upgrade / history
operation would fail on AttributeError if no version files were
present at all.
.. changelog::
:version: 0.7.2
:released: December 18, 2014
.. change::
:tags: bug, sqlite, autogenerate
Adjusted the SQLite backend regarding autogen of unique constraints
to work fully with the current SQLAlchemy 1.0, which now will report
on UNIQUE constraints that have no name.
.. change::
:tags: bug, batch
:tickets: 254
Fixed bug in batch where if the target table contained multiple
foreign keys to the same target table, the batch mechanics would
fail with a "table already exists" error. Thanks for the help
on this from Lucas Kahlert.
.. change::
:tags: bug, mysql
:tickets: 251
Fixed an issue where the MySQL routine to skip foreign-key-implicit
indexes would also catch unnamed unique indexes, as they would be
named after the column and look like the FK indexes. Pull request
courtesy Johannes Erdfelt.
.. change::
:tags: bug, mssql, oracle
:tickets: 253
Repaired a regression in both the MSSQL and Oracle dialects whereby
the overridden ``_exec()`` method failed to return a value, as is
needed now in the 0.7 series.
.. changelog::
:version: 0.7.1
:released: December 3, 2014
.. change::
:tags: bug, batch
The ``render_as_batch`` flag was inadvertently hardcoded to ``True``,
so all autogenerates were emitting batch mode; this has been fixed so
that batch mode is again used only when selected in env.py.
.. change::
:tags: feature, autogenerate
:tickets: 178
Support for autogenerate of FOREIGN KEY constraints has been added.
These are delivered within the autogenerate process in the same
manner as UNIQUE constraints, including ``include_object`` support.
Big thanks to Ann Kamyshnikova for doing the heavy lifting here.
.. change::
:tags: feature, batch
Added :paramref:`~.Operations.batch_alter_table.naming_convention`
argument to :meth:`.Operations.batch_alter_table`, as this is necessary
in order to drop foreign key constraints; these are often unnamed
on the target database, and in the case that they are named, SQLAlchemy
is as of the 0.9 series not including these names yet.
.. seealso::
:ref:`dropping_sqlite_foreign_keys`
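A sketch of supplying a naming convention so that an unnamed foreign key
may be dropped; the convention and names here are hypothetical::

    from alembic import op

    naming_convention = {
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    }
    with op.batch_alter_table(
        "address", naming_convention=naming_convention
    ) as batch_op:
        batch_op.drop_constraint(
            "fk_address_user_id_user", type_="foreignkey")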
.. change::
:tags: bug, batch
Fixed bug where the "source_schema" argument was not correctly passed
when calling :meth:`.BatchOperations.create_foreign_key`. Pull
request courtesy Malte Marquarding.
.. change::
:tags: bug, batch
:tickets: 249
Repaired the inspection, copying and rendering of CHECK constraints
and so-called "schema" types such as Boolean, Enum within the batch
copy system; the CHECK constraint will not be "doubled" when the table is
copied, and additionally the inspection of the CHECK constraint for
its member columns will no longer fail with an attribute error.
.. change::
:tags: feature, batch
Added two new arguments
:paramref:`.Operations.batch_alter_table.reflect_args`
and :paramref:`.Operations.batch_alter_table.reflect_kwargs`, so that
arguments may be passed directly to suit the
:class:`~.sqlalchemy.schema.Table`
object that will be reflected.
.. seealso::
:ref:`batch_controlling_table_reflection`
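A sketch of overriding how one column is reflected via the new arguments;
the column override shown is hypothetical::

    import sqlalchemy as sa
    from alembic import op

    with op.batch_alter_table(
        "user",
        reflect_args=[sa.Column("flag", sa.Boolean(create_constraint=False))],
    ) as batch_op:
        batch_op.alter_column("flag", new_column_name="is_active")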
.. changelog::
:version: 0.7.0
:released: November 24, 2014
.. change::
:tags: feature, versioning
:tickets: 167
The "multiple heads / branches" feature has now landed. This is
by far the most significant change Alembic has seen since its inception;
while the workflow of most commands hasn't changed, and the format
of version files and the ``alembic_version`` table are unchanged as well,
a new suite of features opens up in the case where multiple version
files refer to the same parent, or to the "base". Merging of
branches, operating across distinct named heads, and multiple
independent bases are now all supported. The feature incurs radical
changes to the internals of versioning and traversal, and should be
treated as "beta mode" for the next several subsequent releases
within 0.7.
.. seealso::
:ref:`branches`
.. change::
:tags: feature, versioning
:tickets: 124
In conjunction with support for multiple independent bases, the
specific version directories are now also configurable to include
multiple, user-defined directories. When multiple directories exist,
the creation of a revision file with no down revision requires
that the starting directory is indicated; the creation of subsequent
revisions along that lineage will then automatically use that
directory for new files.
.. seealso::
:ref:`multiple_version_directories`
.. change::
:tags: feature, operations, sqlite
:tickets: 21
Added "move and copy" workflow, where a table to be altered is copied to
a new one with the new structure and the old one dropped, is now
implemented for SQLite as well as all database backends in general
using the new :meth:`.Operations.batch_alter_table` system. This
directive provides a table-specific operations context which gathers
column- and constraint-level mutations specific to that table, and
at the end of the context creates a new table combining the structure
of the old one with the given changes, copies data from old table to new,
and finally drops the old table,
renaming the new one to the existing name. This is required for
fully featured SQLite migrations, as SQLite has very little support for the
traditional ALTER directive. The batch directive
is intended to produce code that is still compatible with other databases,
in that the "move and copy" process only occurs for SQLite by default,
while still providing some level of sanity to SQLite's
requirement by allowing multiple table mutation operations to
proceed within one "move and copy" as well as providing explicit
control over when this operation actually occurs. The "move and copy"
feature may be optionally applied to other backends as well, however
dealing with referential integrity constraints from other tables must
still be handled explicitly.
.. seealso::
:ref:`batch_migrations`
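A minimal sketch of the directive; the table and column names are
hypothetical::

    import sqlalchemy as sa
    from alembic import op

    with op.batch_alter_table("account") as batch_op:
        batch_op.add_column(sa.Column("last_login", sa.DateTime()))
        batch_op.drop_column("legacy_flag")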
.. change::
:tags: feature, commands
Relative revision identifiers as used with ``alembic upgrade``,
``alembic downgrade`` and ``alembic history`` can be combined with
specific revisions as well, e.g. ``alembic upgrade ae10+3``, to produce
a migration target relative to the given exact version.
.. change::
:tags: bug, commands
:tickets: 248
The ``alembic revision`` command accepts the ``--sql`` option to
suit some very obscure use case where the ``revision_environment``
flag is set up, so that ``env.py`` is run when ``alembic revision``
is run even though autogenerate isn't specified. As this flag is
otherwise confusing, error messages are now raised if
``alembic revision`` is invoked with both ``--sql`` and
``--autogenerate`` or with ``--sql`` without
``revision_environment`` being set.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 247
Added a rule for Postgresql to not render a "drop unique" and "drop index"
given the same name; for now it is assumed that the "index" is the
implicit one PostgreSQL generates. Future integration with
new SQLAlchemy 1.0 features will improve this to be more
resilient.
.. change::
:tags: bug, autogenerate
:tickets: 247
A change in the ordering when columns and constraints are dropped;
autogenerate will now place the "drop constraint" calls *before*
the "drop column" calls, so that columns involved in those constraints
still exist when the constraint is dropped.
.. change::
:tags: feature, commands
New commands added: ``alembic show``, ``alembic heads`` and
``alembic merge``. Also, a new option ``--verbose`` has been
added to several informational commands, such as ``alembic history``,
``alembic current``, ``alembic branches``, and ``alembic heads``.
``alembic revision`` also contains several new options used
within the new branch management system. The output of commands has
been altered in many cases to support new fields and attributes;
the ``history`` command in particular now returns its "verbose" output
only if ``--verbose`` is sent; without this flag it reverts to its
older behavior of short line items (which was never changed in the docs).
.. change::
:tags: changed, commands
The ``--head_only`` option to the ``alembic current`` command is
deprecated; the ``current`` command now lists just the version numbers
alone by default; use ``--verbose`` to get at additional output.
.. change::
:tags: feature, config
Added new argument :paramref:`.Config.config_args`, allows a dictionary
of replacement variables to be passed which will serve as substitution
values when an API-produced :class:`.Config` consumes the ``.ini``
file. Pull request courtesy Noufal Ibrahim.
.. change::
:tags: bug, oracle
:tickets: 245
The Oracle dialect sets "transactional DDL" to False by default,
as Oracle does not support transactional DDL.
.. change::
:tags: bug, autogenerate
:tickets: 243
Fixed a variety of issues surrounding rendering of Python code that
contains unicode literals. The first is that the "quoted_name" construct
that SQLAlchemy uses to represent table and column names as well
as schema names does not ``repr()`` correctly on Py2K when the value
contains unicode characters; therefore an explicit stringification is
added to these. Additionally, SQL expressions such as server defaults
were not being generated in a unicode-safe fashion leading to decode
errors if server defaults contained non-ascii characters.
.. change::
:tags: bug, operations
:tickets: 174
The :meth:`.Operations.add_column` directive will now additionally emit
the appropriate ``CREATE INDEX`` statement if the
:class:`~sqlalchemy.schema.Column` object specifies ``index=True``.
Pull request courtesy David Szotten.
.. change::
:tags: feature, operations
:tickets: 205
The :class:`~sqlalchemy.schema.Table` object is now returned when
the :meth:`.Operations.create_table` method is used. This ``Table``
is suitable for use in subsequent SQL operations, in particular
the :meth:`.Operations.bulk_insert` operation.
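A sketch of the pattern this enables; the table and rows are hypothetical::

    import sqlalchemy as sa
    from alembic import op

    accounts = op.create_table(
        "accounts",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )
    op.bulk_insert(accounts, [{"id": 1, "name": "default"}])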
.. change::
:tags: feature, autogenerate
:tickets: 203
Indexes and unique constraints are now included in the
:paramref:`.EnvironmentContext.configure.include_object` hook.
Indexes are sent with type ``"index"`` and unique constraints with
type ``"unique_constraint"``.
.. change::
:tags: bug, autogenerate
:tickets: 219
Bound parameters are now resolved as "literal" values within the
SQL expression inside of a CheckConstraint(), when rendering the SQL
as a text string; supported for SQLAlchemy 0.8.0 and forward.
.. change::
:tags: bug, autogenerate
:tickets: 199
Added a workaround for SQLAlchemy issue #3023 (fixed in 0.9.5) where
a column that's part of an explicit PrimaryKeyConstraint would not
have its "nullable" flag set to False, thus producing a false
autogenerate. Also added a related correction to MySQL which will
correct for MySQL's implicit server default of '0' when a NULL integer
column is turned into a primary key column.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 240
Repaired issue related to the fix for :ticket:`208` and others; a composite
foreign key reported by MySQL would cause a KeyError as Alembic
attempted to remove MySQL's implicitly generated indexes from the
autogenerate list.
.. change::
:tags: bug, autogenerate
:tickets: 28
If the "alembic_version" table is present in the target metadata,
autogenerate will skip this also. Pull request courtesy
Dj Gilcrease.
.. change::
:tags: bug, autogenerate
:tickets: 77
The :paramref:`.EnvironmentContext.configure.version_table`
and :paramref:`.EnvironmentContext.configure.version_table_schema`
arguments are now honored during the autogenerate process, such that
these names will be used as the "skip" names on both the database
reflection and target metadata sides.
.. change::
:tags: changed, autogenerate
:tickets: 229
The default value of the
:paramref:`.EnvironmentContext.configure.user_module_prefix`
parameter is **no longer the same as the SQLAlchemy prefix**.
When omitted, user-defined types will now use the ``__module__``
attribute of the type class itself when rendering in an
autogenerated module.
.. change::
:tags: bug, templates
:tickets: 234
Revision files are now written out using the ``'wb'`` modifier to
``open()``, since Mako reads the templates with ``'rb'``, thus preventing
CRs from being doubled up as has been observed on Windows. The encoding
of the output now defaults to 'utf-8', which can be configured using
a newly added config file parameter ``output_encoding``.
.. change::
:tags: bug, operations
:tickets: 230
Added support for use of the :class:`~sqlalchemy.sql.elements.quoted_name`
construct when using the ``schema`` argument within operations. This
allows a name containing a dot to be fully quoted, as well as to
provide configurable quoting on a per-name basis.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 73
Added a routine by which the Postgresql Alembic dialect inspects
the server default of INTEGER/BIGINT columns as they are reflected
during autogenerate for the pattern ``nextval(<name>...)`` containing
a potential sequence name, then queries ``pg_catalog`` to see if this
sequence is "owned" by the column being reflected; if so, it assumes
this is a SERIAL or BIGSERIAL column and the server default is
omitted from the column reflection as well as any kind of
server_default comparison or rendering, along with an INFO message
in the logs indicating this has taken place. This allows SERIAL/BIGSERIAL
columns to keep the SEQUENCE from being unnecessarily present within
the autogenerate operation.
.. change::
:tags: bug, autogenerate
:tickets: 197, 64, 196
The system by which autogenerate renders expressions within
a :class:`~sqlalchemy.schema.Index`, the ``server_default``
of :class:`~sqlalchemy.schema.Column`, and the
``existing_server_default`` of
:meth:`.Operations.alter_column` has been overhauled to anticipate
arbitrary SQLAlchemy SQL constructs, such as ``func.somefunction()``,
``cast()``, ``desc()``, and others. The system does not, as might
be preferred, render the full-blown Python expression as originally
created within the application's source code, as this would be exceedingly
complex and difficult. Instead, it renders the SQL expression against
the target backend that's subject to the autogenerate, and then
renders that SQL inside of a :func:`~sqlalchemy.sql.expression.text`
construct as a literal SQL string. This approach still has the
downside that the rendered SQL construct may not be backend-agnostic
in all cases, so there is still a need for manual intervention in that
small number of cases, but overall the majority of cases should work
correctly now. Big thanks to Carlos Rivera for pull requests and
support on this.
.. change::
:tags: feature
SQLAlchemy's testing infrastructure is now used to run tests.
This system supports both nose and pytest and opens the way
for Alembic testing to support any number of backends, parallel
testing, and 3rd party dialect testing.
.. change::
:tags: changed, compatibility
Minimum SQLAlchemy version is now 0.7.6, however at least
0.8.4 is strongly recommended. The overhaul of the test suite
allows for fully passing tests on all SQLAlchemy versions
from 0.7.6 on forward.
.. change::
:tags: bug, operations
The "match" keyword is not sent to :class:`.ForeignKeyConstraint`
by :meth:`.Operations.create_foreign_key` when SQLAlchemy 0.7 is in use;
this keyword was added to SQLAlchemy as of 0.8.0.
.. changelog::
:version: 0.6.7
:released: September 9, 2014
.. change::
:tags: bug, mssql
Fixed bug in MSSQL dialect where "rename table" wasn't using
``sp_rename()`` as is required on SQL Server. Pull request courtesy
Łukasz Bołdys.
.. change::
:tags: feature
:tickets: 222
Added support for functional indexes when using the
:meth:`.Operations.create_index` directive. Within the list of columns,
the SQLAlchemy ``text()`` construct can be sent, embedding a literal
SQL expression; the :meth:`.Operations.create_index` will perform some hackery
behind the scenes to get the :class:`.Index` construct to cooperate.
This works around some current limitations in :class:`.Index`
which should be resolved on the SQLAlchemy side at some point.
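A minimal sketch of a functional index via ``text()``; the names and SQL
expression are hypothetical::

    from sqlalchemy import text
    from alembic import op

    op.create_index("ix_user_email_lower", "user", [text("lower(email)")])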
.. changelog::
:version: 0.6.6
:released: August 7, 2014
.. change::
:tags: bug
:tickets: 95
A file named ``__init__.py`` in the ``versions/`` directory is now
ignored by Alembic when the collection of version files is retrieved.
Pull request courtesy Michael Floering.
.. change::
:tags: bug
Fixed Py3K bug where an attempt would be made to sort None against
string values when autogenerate would detect tables across multiple
schemas, including the default schema. Pull request courtesy
paradoxxxzero.
.. change::
:tags: bug
Autogenerate render will render the arguments within a Table construct
using ``*[...]`` when the number of columns/elements is greater than
255. Pull request courtesy Ryan P. Kelly.
.. change::
:tags: bug
Fixed bug where foreign key constraints would fail to render in
autogenerate when a schema name was present. Pull request courtesy
Andreas Zeidler.
.. change::
:tags: bug
:tickets: 212
Some deep-in-the-weeds fixes to try to get "server default" comparison
working better across platforms and expressions, in particular on
the Postgresql backend, mostly dealing with quoting/not quoting of various
expressions at the appropriate time and on a per-backend basis.
Repaired and tested support for such defaults as Postgresql interval
and array defaults.
.. change::
:tags: enhancement
:tickets: 209
When a run of Alembic command line fails due to ``CommandError``,
the output now prefixes the string with ``"FAILED:"``, and the error
is also written to the log output using ``log.error()``.
.. change::
:tags: bug
:tickets: 208
Further liberalized the check for MySQL indexes that shouldn't be
counted in autogenerate as "drops"; this time it's been reported
that an implicitly created index might be named the same as a composite
foreign key constraint, and not the actual columns, so we now skip those
when detected as well.
.. change::
:tags: feature
Added a new accessor :attr:`.MigrationContext.config`, when used
in conjunction with a :class:`.EnvironmentContext` and
:class:`.Config`, this config will be returned. Patch
courtesy Marc Abramowitz.
.. changelog::
:version: 0.6.5
:released: May 3, 2014
.. change::
:tags: bug, autogenerate, mysql
:tickets: 202
This release's "autogenerate index detection" bug: when a MySQL table
includes an Index with the same name as a column, autogenerate reported
it as an "add" even though it's not; this is because we ignore reflected
indexes of this nature due to MySQL creating them implicitly. Indexes
that are named the same as a column are now ignored on
MySQL if we see that the backend is reporting that it already exists;
this indicates that we can still detect additions of these indexes
but not drops, as we cannot distinguish a backend index with the same
name as the column as being user-generated or MySQL-generated.
.. change::
:tags: feature, environment
:tickets: 201
Added new feature :paramref:`.EnvironmentContext.configure.transaction_per_migration`,
which when True causes the BEGIN/COMMIT pair to incur for each migration
individually, rather than for the whole series of migrations. This is
to assist with some database directives that need to be within individual
transactions, without the need to disable transactional DDL entirely.
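A sketch of how this might look within ``env.py``; the ``connectable``
and ``target_metadata`` names are assumed to be set up as in a standard
environment script::

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            # one BEGIN/COMMIT pair per migration, not per run
            transaction_per_migration=True,
        )
        with context.begin_transaction():
            context.run_migrations()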
.. change::
:tags: bug, autogenerate
:tickets: 200
Fixed bug where the ``include_object()`` filter would not receive
the original :class:`.Column` object when evaluating a database-only
column to be dropped; the object would not include the parent
:class:`.Table` nor other aspects of the column that are important
for generating the "downgrade" case where the column is recreated.
.. change::
:tags: bug, environment
:tickets: 195
Fixed bug where :meth:`.EnvironmentContext.get_x_argument`
would fail if the :class:`.Config` in use didn't actually
originate from a command line call.
.. change::
:tags: bug, autogenerate
:tickets: 194
Fixed another bug regarding naming conventions, continuing
from :ticket:`183`, where add_index() and
drop_index() directives would not correctly render the ``f()``
construct when the index contained a convention-driven name.
.. changelog::
:version: 0.6.4
:released: March 28, 2014
.. change::
:tags: bug, mssql
:tickets: 186
Added quoting to the table name when the special EXEC is run to
drop any existing server defaults or constraints when the
:paramref:`.Operations.drop_column.mssql_drop_check` or
:paramref:`.Operations.drop_column.mssql_drop_default`
arguments are used.
.. change::
:tags: bug, mysql
:tickets: 103
Added/fixed support for MySQL "SET DEFAULT" / "DROP DEFAULT" phrases,
which will now be rendered if only the server default is changing
or being dropped (e.g. specify None to alter_column() to indicate
"DROP DEFAULT"). Also added support for rendering MODIFY rather than
CHANGE when the column name isn't changing.
.. change::
:tags: bug
:tickets: 190
Added support for the ``initially``, ``match`` keyword arguments
as well as dialect-specific keyword arguments to
:meth:`.Operations.create_foreign_key`.
.. change::
:tags: feature
:tickets: 163
Altered the support for "sourceless" migration files (e.g. only
.pyc or .pyo present) so that the flag "sourceless=true" needs to
be in alembic.ini for this behavior to take effect.
.. change::
:tags: bug, mssql
:tickets: 185
The feature that keeps on giving, index/unique constraint autogenerate
detection, has even more fixes, this time to accommodate database dialects
that don't yet report on unique constraints, but do report
unique constraints as indexes. The logic
Alembic uses to distinguish between "this is an index!" vs.
"this is a unique constraint that is also reported as an index!" has now
been further enhanced to not produce unwanted migrations when the dialect
is observed to not yet implement get_unique_constraints() (e.g. mssql).
Note that such a backend will no longer report index drops for unique
indexes, as these cannot be distinguished from an unreported unique
index.
.. change::
:tags: bug
:tickets: 183
Extensive changes have been made to more fully support SQLAlchemy's new
naming conventions feature. Note that while SQLAlchemy has added this
feature as of 0.9.2, some additional fixes in 0.9.4 are needed to
resolve some of the issues:
1. The :class:`.Operations` object now takes into account the naming
conventions that are present on the :class:`.MetaData` object that's
associated using :paramref:`~.EnvironmentContext.configure.target_metadata`.
When :class:`.Operations` renders a constraint directive like
``ADD CONSTRAINT``, it now will make use of this naming convention
when it produces its own temporary :class:`.MetaData` object.
2. Note however that the autogenerate feature in most cases generates
constraints like foreign keys and unique constraints with the
final names intact; the only exception are the constraints implicit
with a schema-type like Boolean or Enum. In most of these cases,
the naming convention feature will not take effect for these constraints
and will instead use the given name as is, with one exception....
3. Naming conventions which use the ``"%(constraint_name)s"`` token, that
is, produce a new name that uses the original name as a component,
will still be pulled into the naming convention converter and be
converted. The problem arises when autogenerate renders a constraint
with its already-generated name present in the migration file's source
code; the name will be doubled up at render time due to the combination
of #1 and #2. So to work around this, autogenerate now renders these
already-tokenized names using the new :meth:`.Operations.f` component.
This component is only generated if **SQLAlchemy 0.9.4** or greater
is in use.
Therefore it is highly recommended that an upgrade to Alembic 0.6.4
be accompanied by an upgrade to SQLAlchemy 0.9.4, if the new naming
conventions feature is used.
.. seealso::
:ref:`autogen_naming_conventions`
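As an illustration, autogenerate may now emit directives along these
lines (the index and table names here are hypothetical)::

    from alembic import op

    def upgrade():
        # op.f() marks the name as already final; a naming convention
        # using %(constraint_name)s will not be applied to it again
        op.create_index(op.f("ix_account_email"), "account", ["email"])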
.. change::
:tags: bug
:tickets: 160
Suppressed IOErrors which can raise when program output pipe
is closed under a program like ``head``; however this only
works on Python 2. On Python 3, there is not yet a known way to
suppress the BrokenPipeError warnings without prematurely terminating
the program via signals.
.. change::
:tags: bug
:tickets: 179
Fixed bug where :meth:`.Operations.bulk_insert` would not function
properly when :meth:`.Operations.inline_literal` values were used,
either in --sql or non-sql mode. The values will now render
directly in --sql mode. For compatibility with "online" mode,
a new flag :paramref:`~.Operations.bulk_insert.multiinsert`
can be set to False which will cause each parameter set to be
compiled and executed with individual INSERT statements.
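A sketch of the new flag in use, with a hypothetical ``account``
table::

    from alembic import op
    from sqlalchemy.sql import table, column
    from sqlalchemy import Integer, String

    account = table(
        "account",
        column("id", Integer),
        column("name", String),
    )

    def upgrade():
        op.bulk_insert(
            account,
            [{"id": 1, "name": op.inline_literal("some name")}],
            # compile and execute each parameter set individually so
            # that inline_literal values work in "online" mode as well
            multiinsert=False,
        )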
.. change::
:tags: bug, py3k
:tickets: 175
Fixed a failure of the system that allows "legacy keyword arguments"
to be understood, which arose as of a change in Python 3.4 regarding
decorators. A workaround is applied that allows the code to work
across Python 3 versions.
.. change::
:tags: feature
The :func:`.command.revision` command now returns the :class:`.Script`
object corresponding to the newly generated revision. From this
structure, one can get the revision id, the module documentation,
and everything else, for use in scripts that call upon this command.
Pull request courtesy Robbie Coomber.
.. changelog::
:version: 0.6.3
:released: February 2, 2014
.. change::
:tags: bug
:tickets: 172
Added a workaround for when we call ``fcntl.ioctl()`` to get at
``TERMWIDTH``; if the function returns zero, as is reported to occur
in some pseudo-ttys, the message wrapping system is disabled in the
same way as if ``ioctl()`` failed.
.. change::
:tags: feature
:tickets: 171
Added new argument
:paramref:`.EnvironmentContext.configure.user_module_prefix`.
This prefix is applied when autogenerate renders a user-defined type,
which here is defined as any type that is from a module outside of the
``sqlalchemy.`` hierarchy. This prefix defaults to ``None``, in
which case the :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`
is used, thus preserving the current behavior.
.. change::
:tags: bug
:tickets: 170
Added support for autogenerate covering the use case where :class:`.Table`
objects specified in the metadata have an explicit ``schema`` attribute
whose name matches that of the connection's default schema
(e.g. "public" for Postgresql). Previously, it was assumed that "schema"
was ``None`` when it matched the "default" schema, now the comparison
adjusts for this.
.. change::
:tags: bug
The :func:`.compare_metadata` public API function now takes into
account the settings for
:paramref:`.EnvironmentContext.configure.include_object`,
:paramref:`.EnvironmentContext.configure.include_symbol`,
and :paramref:`.EnvironmentContext.configure.include_schemas`, in the
same way that the ``--autogenerate`` command does. Pull
request courtesy Roman Podoliaka.
.. change::
:tags: bug
:tickets: 168
Calling :func:`.bulk_insert` with an empty list will not emit any
commands on the current connection. This was already the case with
``--sql`` mode, so is now the case with "online" mode.
.. change::
:tags: bug
Enabled schema support for index and unique constraint autodetection;
previously these were non-functional and could in some cases lead to
attribute errors. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug
:tickets: 164
More fixes to index autodetection; indexes created with expressions
like DESC or functional indexes will no longer cause AttributeError
exceptions when attempting to compare the columns.
.. change::
:tags: feature
:tickets: 163
The :class:`.ScriptDirectory` system that loads migration files
from a ``versions/`` directory now supports so-called
"sourceless" operation, where the ``.py`` files are not present
and instead ``.pyc`` or ``.pyo`` files are directly present where
the ``.py`` files should be. Note that while Python 3.3 has a
new system of locating ``.pyc``/``.pyo`` files within a directory
called ``__pycache__`` (e.g. PEP-3147), PEP-3147 maintains
support for the "source-less imports" use case, where the
``.pyc``/``.pyo`` are present in the "old" location, e.g. next
to the ``.py`` file; this is the usage that's supported even when
running Python 3.3.
.. changelog::
:version: 0.6.2
:released: Fri Dec 27 2013
.. change::
:tags: bug
Autogenerate for ``op.create_table()`` will not include a
``PrimaryKeyConstraint()`` that has no columns.
.. change::
:tags: bug
Fixed bug in the not-internally-used :meth:`.ScriptDirectory.get_base`
method which would fail if called on an empty versions directory.
.. change::
:tags: bug
:tickets: 157
An almost-rewrite of the new unique constraint/index autogenerate
detection, to accommodate a variety of issues. The emphasis is on
not generating false positives for those cases where no net change
is present, as these errors are the ones that impact all autogenerate
runs:
* Fixed an issue with unique constraint autogenerate detection where
a named ``UniqueConstraint`` on both sides with column changes would
render with the "add" operation before the "drop", requiring the
user to reverse the order manually.
* Corrected for MySQL's apparent addition of an implicit index
for a foreign key column, so that it doesn't show up as "removed".
This required that the index/constraint autogen system query the
dialect-specific implementation for special exceptions.
* reworked the "dedupe" logic to accommodate MySQL's bi-directional
duplication of unique indexes as unique constraints, and unique
constraints as unique indexes. Postgresql's slightly different
logic of duplicating unique constraints into unique indexes
continues to be accommodated as well. Note that a unique index
or unique constraint removal on a backend that duplicates these may
show up as a distinct "remove_constraint()" / "remove_index()" pair,
which may need to be corrected in the post-autogenerate if multiple
backends are being supported.
* added another dialect-specific exception to the SQLite backend
when dealing with unnamed unique constraints, as the backend can't
currently report on constraints that were made with this technique,
hence they'd come out as "added" on every run.
* the ``op.create_table()`` directive will be auto-generated with
the ``UniqueConstraint`` objects inline, but will not double them
up with a separate ``create_unique_constraint()`` call, which may
have been occurring. Indexes still get rendered as distinct
``op.create_index()`` calls even when the corresponding table was
created in the same script.
* the inline ``UniqueConstraint`` within ``op.create_table()`` includes
all the options like ``deferrable``, ``initially``, etc. Previously
these weren't rendering.
.. change::
:tags: feature, mssql
Added new argument ``mssql_drop_foreign_key`` to
:meth:`.Operations.drop_column`. Like ``mssql_drop_default``
and ``mssql_drop_check``, will do an inline lookup for a
single foreign key which applies to this column, and drop it.
For a column with more than one FK, you'd still need to explicitly
use :meth:`.Operations.drop_constraint` given the name,
even though only MSSQL has this limitation in the first place.
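For example (table and column names are hypothetical)::

    from alembic import op

    def upgrade():
        # looks up the single FK constraint on this column and drops
        # it before dropping the column itself; SQL Server only
        op.drop_column("account", "profile_id", mssql_drop_foreign_key=True)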
.. change::
:tags: bug, mssql
The MSSQL backend will add the batch separator (e.g. ``"GO"``)
in ``--sql`` mode after the final ``COMMIT`` statement, to ensure
that statement is also processed in batch mode. Courtesy
Derek Harland.
.. changelog::
:version: 0.6.1
:released: Wed Nov 27 2013
.. change::
:tags: bug, mysql
:tickets: 152
Fixed bug where :func:`.op.alter_column` in the MySQL dialect
would fail to apply quotes to column names that had mixed casing
or spaces.
.. change::
:tags: feature
Expanded the size of the "slug" generated by "revision" to 40
characters, which is also configurable by new field
``truncate_slug_length``; and also split on the word rather than the
character; courtesy Frozenball.
.. change::
:tags: bug
:tickets: 135
Fixed the output wrapping for Alembic message output, so that
we either get the terminal width for "pretty printing" with
indentation, or if not we just output the text as is; in any
case the text won't be wrapped too short.
.. change::
:tags: bug
Fixes to Py3k in-place compatibility regarding output encoding and related
issues; the use of the new io.* package introduced some incompatibilities on
Py2k. These should be resolved due to the introduction of new adapter types
for translating from io.* to Py2k file types and StringIO types.
Thanks to Javier Santacruz for help with this.
.. change::
:tags: bug
:tickets: 145
Fixed py3k bug where the wrong form of ``next()`` was being called
when using the list_templates command. Courtesy Chris Wilkes.
.. change::
:tags: feature
:tickets: 107
Support for autogeneration detection and rendering of indexes and
unique constraints has been added. The logic goes through some effort
in order to differentiate between true unique constraints and
unique indexes, where there are some quirks on backends like Postgresql.
The effort here in producing the feature and tests is courtesy of IJL.
.. change::
:tags: bug
Fixed bug introduced by new ``include_object`` argument where the
inspected column would be misinterpreted when using a user-defined
type comparison function, causing a KeyError or similar expression-related
error. Fix courtesy Maarten van Schaik.
.. change::
:tags: bug
Added the "deferrable" keyword argument to :func:`.op.create_foreign_key`
so that ``DEFERRABLE`` constraint generation is supported; courtesy
Pedro Romano.
.. change::
:tags: bug
:tickets: 137
Ensured that strings going to stdout go through an encode/decode phase,
so that any non-ASCII characters get to the output stream correctly
in both Py2k and Py3k. Also added source encoding detection using
Mako's parse_encoding() routine in Py2k so that the __doc__ of a
non-ascii revision file can be treated as unicode in Py2k.
.. changelog::
:version: 0.6.0
:released: Fri July 19 2013
.. change::
:tags: feature
:tickets: 101
Added new kw argument to :meth:`.EnvironmentContext.configure`
``include_object``. This is a more flexible version of the
``include_symbol`` argument which allows filtering of columns as well as tables
from the autogenerate process,
and in the future will also work for types, constraints and
other constructs. The fully constructed schema object is passed,
including its name and type as well as a flag indicating if the object
is from the local application metadata or is reflected.
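A sketch of a filter using this hook, assuming a hypothetical
``legacy_id`` column to be skipped::

    def include_object(object, name, type_, reflected, compare_to):
        # skip any column named "legacy_id"; include everything else
        if type_ == "column" and name == "legacy_id":
            return False
        return True

    context.configure(
        # ... other configuration as usual ...
        include_object=include_object,
    )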
.. change::
:tags: feature
The output of the ``alembic history`` command is now
expanded to show information about each change on multiple
lines, including the full top message,
resembling the formatting of git log.
.. change::
:tags: feature
Added :attr:`alembic.config.Config.cmd_opts` attribute,
allows access to the ``argparse`` options passed to the
``alembic`` runner.
.. change::
:tags: feature
:tickets: 120
Added new command line argument ``-x``, allows extra arguments
to be appended to the command line which can be consumed
within an ``env.py`` script by looking at
``context.config.cmd_opts.x``, or more simply a new
method :meth:`.EnvironmentContext.get_x_argument`.
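A sketch of consuming such an argument within ``env.py``; the
``tenant`` key is hypothetical::

    from alembic import context

    # invoked as e.g.: alembic -x tenant=acme upgrade head
    tenant = context.get_x_argument(as_dictionary=True).get("tenant")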
.. change::
:tags: bug
:tickets: 125
Added support for options like "name" etc. to be rendered
within CHECK constraints in autogenerate. Courtesy
Sok Ann Yap.
.. change::
:tags: misc
Source repository has been moved from Mercurial to Git.
.. change::
:tags: bug
Repaired autogenerate rendering of ForeignKeyConstraint
to include use_alter argument, if present.
.. change::
:tags: feature
Added ``-r`` argument to ``alembic history`` command,
allows specification of ``[start]:[end]`` to view
a slice of history. Accepts revision numbers, symbols
"base", "head", a new symbol "current" representing the
current migration, as well as relative ranges for one
side at a time (i.e. ``-r-5:head``, ``-rcurrent:+3``).
Courtesy Atsushi Odagiri for this feature.
.. change::
:tags: feature
:tickets: 55
Source base is now in-place for Python 2.6 through
3.3, without the need for 2to3. Support for Python 2.5
and below has been dropped. Huge thanks to
Hong Minhee for all the effort on this!
.. changelog::
:version: 0.5.0
:released: Thu Apr 4 2013
.. note::
Alembic 0.5.0 now requires at least
version 0.7.3 of SQLAlchemy to run properly.
Support for 0.6 has been dropped.
.. change::
:tags: feature
:tickets: 76
Added ``version_table_schema`` argument
to :meth:`.EnvironmentContext.configure`,
complements the ``version_table`` argument to
set an optional remote schema for the version
table. Courtesy Christian Blume.
.. change::
:tags: bug, postgresql
:tickets: 32
Fixed format of RENAME for table that includes
schema with Postgresql; the schema name shouldn't
be in the "TO" field.
.. change::
:tags: feature
:tickets: 90
Added ``output_encoding`` option to
:meth:`.EnvironmentContext.configure`,
used with ``--sql`` mode to apply an encoding
to the output stream.
.. change::
:tags: feature
:tickets: 93
Added :meth:`.Operations.create_primary_key`
operation, will generate an ADD CONSTRAINT
for a primary key.
.. change::
:tags: bug, mssql
:tickets: 109
Fixed bug whereby double quoting would be applied
to target column name during an ``sp_rename``
operation.
.. change::
:tags: bug, sqlite, mysql
:tickets: 112
transactional_ddl flag for SQLite, MySQL dialects
set to False. MySQL doesn't support it,
SQLite does but current pysqlite driver does not.
.. change::
:tags: feature
:tickets: 115
upgrade and downgrade commands will list the
first line of the docstring out next to the
version number. Courtesy Hong Minhee.
.. change::
:tags: feature
Added --head-only option to "alembic current",
will print current version plus the symbol
"(head)" if this version is the head or not.
Courtesy Charles-Axel Dein.
.. change::
:tags: bug
:tickets: 110
Autogenerate will render additional table keyword
arguments like "mysql_engine" and others within
op.create_table().
.. change::
:tags: feature
:tickets: 108
The rendering of any construct during autogenerate
can be customized, in particular to allow special rendering
for user-defined column, constraint subclasses, using new
``render_item`` argument to
:meth:`.EnvironmentContext.configure`.
.. change::
:tags: bug
Fixed bug whereby create_index()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
This is the same issue that was fixed for unique
constraints in version 0.3.2.
.. change::
:tags: bug
Worked around a backwards-incompatible regression in Python3.3
regarding argparse; running "alembic" with no arguments
now yields an informative error in py3.3 as with all previous versions.
Courtesy Andrey Antukh.
.. change::
:tags: change
SQLAlchemy 0.6 is no longer supported by Alembic - minimum version is 0.7.3,
full support is as of 0.7.9.
.. change::
:tags: bug
:tickets: 104
A host of argument name changes within migration
operations for consistency. Keyword arguments
will continue to work under the old name for backwards compatibility;
however, required positional arguments will not:
:meth:`.Operations.alter_column` - ``name`` -> ``new_column_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.create_index` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_index` - ``tablename`` -> ``table_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.drop_constraint` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_constraint` - ``type`` -> ``type_`` - old
name will work for backwards compatibility
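For example, with :meth:`.Operations.alter_column` (names are
hypothetical)::

    from alembic import op

    def upgrade():
        # old spelling, still accepted as a keyword:
        # op.alter_column("account", "name", name="full_name")
        # new spelling:
        op.alter_column("account", "name", new_column_name="full_name")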
.. changelog::
:version: 0.4.2
:released: Fri Jan 11 2013
.. change::
:tags: bug, autogenerate
:tickets: 99
Fixed bug where autogenerate would fail if a Column
to be added to a table made use of the ".key" parameter.
.. change::
:tags: bug, sqlite
:tickets: 98
The "implicit" constraint generated by a
type such as Boolean or Enum will not generate an
ALTER statement when run on SQlite, which does not
support ALTER for the purpose of adding/removing
constraints separate from the column def itself.
While SQLite supports adding a CHECK constraint
at the column level, SQLAlchemy would need modification
to support this.
A warning is emitted indicating this
constraint cannot be added in this scenario.
.. change::
:tags: bug
:tickets: 96
Added a workaround to setup.py to prevent
"NoneType" error from occurring when
"setup.py test" is run.
.. change::
:tags: bug
:tickets: 96
Added an append_constraint() step to each
condition within
test_autogenerate:AutogenRenderTest.test_render_fk_constraint_kwarg
if the SQLAlchemy version is less than 0.8, as ForeignKeyConstraint
does not auto-append prior to 0.8.
.. change::
:tags: feature
:tickets: 96
Added a README.unittests with instructions for running the test
suite fully.
.. changelog::
:version: 0.4.1
:released: Sun Dec 9 2012
.. change::
:tags: bug
:tickets: 92
Added support for autogenerate render of
ForeignKeyConstraint options onupdate,
ondelete, initially, and deferred.
.. change::
:tags: bug
:tickets: 94
Autogenerate will include "autoincrement=False"
in the rendered table metadata
if this flag was set to false on the source
:class:`.Column` object.
.. change::
:tags: feature
:tickets: 66
Explicit error message describing the case
when downgrade --sql is used without specifying
specific start/end versions.
.. change::
:tags: bug
:tickets: 81
Removed erroneous "emit_events" attribute
from operations.create_table() documentation.
.. change::
:tags: bug
:tickets:
Fixed the minute component in file_template
which returned the month part of the create date.
.. changelog::
:version: 0.4.0
:released: Mon Oct 01 2012
.. change::
:tags: feature
:tickets: 33
Support for tables in alternate schemas
has been added fully to all operations, as well as to
the autogenerate feature. When using autogenerate,
specifying the flag include_schemas=True to
Environment.configure() will also cause autogenerate
to scan all schemas located by Inspector.get_schema_names(),
which is supported by *some* (but not all)
SQLAlchemy dialects including Postgresql.
*Enormous* thanks to Bruno Binet for a huge effort
in implementing as well as writing tests.
.. change::
:tags: feature
:tickets: 70
The command line runner has been organized
into a reusable CommandLine object, so that other
front-ends can re-use the argument parsing built
in.
.. change::
:tags: feature
:tickets: 43
Added "stdout" option to Config, provides
control over where the "print" output of commands like
"history", "init", "current" etc. are sent.
.. change::
:tags: bug
:tickets: 71
Fixed the "multidb" template which was badly out
of date. It now generates revision files using
the configuration to determine the different
upgrade_<xyz>() methods needed as well, instead of
needing to hardcode these. Huge thanks to
BryceLohr for doing the heavy lifting here.
.. change::
:tags: bug
:tickets: 72
Fixed the regexp that was checking for .py files
in the version directory to allow any .py file through.
Previously it was doing some kind of defensive checking,
probably from some early notions of how this directory
works, that was prohibiting various filename patterns
such as those which begin with numbers.
.. change::
:tags: bug
:tickets:
Fixed MySQL rendering for server_default which
didn't work if the server_default was a generated
SQL expression. Courtesy Moriyoshi Koizumi.
.. change::
:tags: feature
:tickets:
Added support for alteration of MySQL
columns that have AUTO_INCREMENT, as well as enabling
this flag. Courtesy Moriyoshi Koizumi.
.. changelog::
:version: 0.3.6
:released: Wed Aug 15 2012
.. change::
:tags: feature
:tickets: 27
Added include_symbol option to
EnvironmentContext.configure(),
specifies a callable which will include/exclude tables
in their entirety from the autogeneration process
based on name.
.. change::
:tags: feature
:tickets: 59
Added year, month, day, hour, minute, second
variables to file_template.
.. change::
:tags: feature
:tickets:
Added 'primary' to the list of constraint types
recognized for MySQL drop_constraint().
.. change::
:tags: feature
:tickets:
Added --sql argument to the "revision" command,
for the use case where the "revision_environment"
config option is being used but SQL access isn't
desired.
.. change::
:tags: bug
:tickets:
Repaired create_foreign_key() for
self-referential foreign keys, which weren't working
at all.
.. change::
:tags: bug
:tickets: 63
'alembic' command reports an informative
error message when the configuration is missing
the 'script_directory' key.
.. change::
:tags: bug
:tickets: 62
Fixes made to the constraints created/dropped
alongside so-called "schema" types such as
Boolean and Enum. The create/drop constraint logic
does not kick in when using a dialect that doesn't
use constraints for these types, such as postgresql,
even when existing_type is specified to
alter_column(). Additionally, the constraints
are not affected if existing_type is passed but
type\_ is not, i.e. there's no net change
in type.
.. change::
:tags: bug
:tickets: 66
Improved error message when specifying
non-ordered revision identifiers to cover
the case when the "higher" rev is None,
improved message overall.
.. changelog::
:version: 0.3.5
:released: Sun Jul 08 2012
.. change::
:tags: bug
:tickets: 31
Fixed issue whereby reflected server defaults
wouldn't be quoted correctly; uses repr() now.
.. change::
:tags: bug
:tickets: 58
Fixed issue whereby when autogenerate would
render create_table() on the upgrade side for a
table that has a Boolean type, an unnecessary
CheckConstraint() would be generated.
.. change::
:tags: feature
:tickets:
Implemented SQL rendering for
CheckConstraint() within autogenerate upgrade,
including for literal SQL as well as SQL Expression
Language expressions.
.. changelog::
:version: 0.3.4
:released: Sat Jun 02 2012
.. change::
:tags: bug
:tickets:
Fixed command-line bug introduced by the
"revision_environment" feature.
.. changelog::
:version: 0.3.3
:released: Sat Jun 02 2012
.. change::
:tags: feature
:tickets:
New config argument
"revision_environment=true", causes env.py to
be run unconditionally when the "revision" command
is run, to support script.py.mako templates with
dependencies on custom "template_args".
.. change::
:tags: feature
:tickets:
Added "template_args" option to configure()
so that an env.py can add additional arguments
to the template context when running the
"revision" command. This requires either --autogenerate
or the configuration directive "revision_environment=true".
.. change::
:tags: bug
:tickets: 44
Added "type" argument to op.drop_constraint(),
and implemented full constraint drop support for
MySQL. CHECK and undefined constraint types raise an error.
MySQL needs the constraint type
in order to emit a DROP CONSTRAINT.
.. change::
:tags: feature
:tickets: 34
Added version_table argument to
EnvironmentContext.configure(), allowing for the
configuration of the version table name.
.. change::
:tags: feature
:tickets:
Added support for "relative" migration
identifiers, i.e. "alembic upgrade +2",
"alembic downgrade -1". Courtesy
Atsushi Odagiri for this feature.
.. change::
:tags: bug
:tickets: 49
Fixed bug whereby directories inside of
the template directories, such as __pycache__
on Pypy, would mistakenly be interpreted as
files which are part of the template.
.. changelog::
:version: 0.3.2
:released: Mon Apr 30 2012
.. change::
:tags: feature
:tickets: 40
Basic support for Oracle added,
courtesy shgoh.
.. change::
:tags: feature
:tickets:
Added support for UniqueConstraint
in autogenerate, courtesy Atsushi Odagiri
.. change::
:tags: bug
:tickets:
Fixed support of schema-qualified
ForeignKey target in column alter operations,
courtesy Alexander Kolov.
.. change::
:tags: bug
:tickets:
Fixed bug whereby create_unique_constraint()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
.. changelog::
:version: 0.3.1
:released: Sat Apr 07 2012
.. change::
:tags: bug
:tickets: 41
bulk_insert() fixes:
1. bulk_insert() operation was
not working most likely since the 0.2 series
when used with an engine.
2. Repaired bulk_insert() to complete when
used against a lower-case-t table and executing
with only one set of parameters, working
around SQLAlchemy bug #2461 in this regard.
3. bulk_insert() uses "inline=True" so that phrases
like RETURNING and such don't get invoked for
single-row bulk inserts.
4. bulk_insert() will check that you're passing
a list of dictionaries in, raises TypeError
if not detected.
.. changelog::
:version: 0.3.0
:released: Thu Apr 05 2012
.. change::
:tags: general
:tickets:
The focus of 0.3 is to clean up
and more fully document the public API of Alembic,
including better accessors on the MigrationContext
and ScriptDirectory objects. Methods that are
not considered to be public on these objects have
been underscored, and methods which should be public
have been cleaned up and documented, including:
MigrationContext.get_current_revision()
ScriptDirectory.iterate_revisions()
ScriptDirectory.get_current_head()
ScriptDirectory.get_heads()
ScriptDirectory.get_base()
ScriptDirectory.generate_revision()
.. change::
:tags: feature
:tickets:
Added a bit of autogenerate to the
public API in the form of the function
alembic.autogenerate.compare_metadata.
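A sketch of this function in use; the ``engine`` and
``target_metadata`` objects here are assumed to exist already::

    from alembic.migration import MigrationContext
    from alembic.autogenerate import compare_metadata

    with engine.connect() as conn:
        mc = MigrationContext.configure(conn)
        # list of difference tuples between database and model
        diffs = compare_metadata(mc, target_metadata)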
.. changelog::
:version: 0.2.2
:released: Mon Mar 12 2012
.. change::
:tags: feature
:tickets:
Informative error message when op.XYZ
directives are invoked at module import time.
.. change::
:tags: bug
:tickets: 35
Fixed inappropriate direct call to
util.err() and therefore sys.exit()
when Config failed to locate the
config file within library usage.
.. change::
:tags: bug
:tickets:
Autogenerate will emit CREATE TABLE
and DROP TABLE directives according to
foreign key dependency order.
.. change::
:tags: bug
:tickets:
implement 'tablename' parameter on
drop_index() as this is needed by some
backends.
.. change::
:tags: feature
:tickets:
Added execution_options parameter
to op.execute(), will call execution_options()
on the Connection before executing.
The immediate use case here is to allow
access to the new no_parameters option
in SQLAlchemy 0.7.6, which allows
some DBAPIs (psycopg2, MySQLdb) to allow
percent signs straight through without
escaping, thus providing cross-compatible
operation with DBAPI execution and
static script generation.
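For example (the statement itself is hypothetical)::

    from alembic import op

    def upgrade():
        op.execute(
            "UPDATE account SET note = 'literal % sign'",
            execution_options={"no_parameters": True},
        )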
.. change::
:tags: bug
:tickets:
setup.py won't install argparse if on
Python 2.7/3.2
.. change::
:tags: feature
:tickets: 29
script_location can be interpreted
by pkg_resources.resource_filename(), if
it is a non-absolute URI that contains
colons. This scheme is the same
one used by Pyramid.
.. change::
:tags: feature
:tickets:
added missing support for
onupdate/ondelete flags for
ForeignKeyConstraint, courtesy Giacomo Bagnoli
.. change::
:tags: bug
:tickets: 30
fixed a regression regarding an autogenerate
error message, as well as various glitches
in the Pylons sample template. The Pylons sample
template requires that you tell it where to
get the Engine from now. Courtesy
Marcin Kuzminski.
.. change::
:tags: bug
:tickets:
drop_index() ensures a dummy column
is added when it calls "Index", as SQLAlchemy
0.7.6 will warn on index with no column names.
.. changelog::
:version: 0.2.1
:released: Tue Jan 31 2012
.. change::
:tags: bug
:tickets: 26
Fixed the generation of CHECK constraint,
regression from 0.2.0
.. changelog::
:version: 0.2.0
:released: Mon Jan 30 2012
.. change::
:tags: feature
:tickets: 19
API rearrangement allows everything
Alembic does to be represented by contextual
objects, including EnvironmentContext,
MigrationContext, and Operations. Other
libraries and applications can now use
things like "alembic.op" without relying
upon global configuration variables.
The rearrangement was done such that
existing migrations should be OK,
as long as they use the pattern
of "from alembic import context" and
"from alembic import op", as these
are now contextual objects, not modules.
.. change::
:tags: feature
:tickets: 24
The naming of revision files can
now be customized to be some combination
of "rev id" and "slug", the latter of which
is based on the revision message.
By default, the pattern "<rev>_<slug>"
is used for new files. New script files
should include the "revision" variable
for this to work, which is part of
the newer script.py.mako scripts.
.. change::
:tags: bug
:tickets: 25
env.py templates call
connection.close() to better support
programmatic usage of commands; use
NullPool in conjunction with create_engine()
as well so that no connection resources
remain afterwards.
.. change::
:tags: bug
:tickets: 22
fix the config.main() function to honor
the arguments passed, remove no longer used
"scripts/alembic" as setuptools creates this
for us.
.. change::
:tags: bug
:tickets:
Fixed alteration of column type on
MSSQL to not include the keyword "TYPE".
.. change::
:tags: feature
:tickets: 23
Can create alembic.config.Config
with no filename, use set_main_option()
to add values. Also added set_section_option()
which will add sections.
.. changelog::
:version: 0.1.1
:released: Wed Jan 04 2012
.. change::
:tags: bug
:tickets:
Clean up file write operations so that
file handles are closed.
.. change::
:tags: feature
:tickets:
PyPy is supported.
.. change::
:tags: feature
:tickets:
Python 2.5 is supported, needs
__future__.with_statement
.. change::
:tags: bug
:tickets:
Fix autogenerate so that "pass" is
generated between the two comments
if no net migrations were present.
.. change::
:tags: bug
:tickets: 16
Fix autogenerate bug that prevented
correct reflection of a foreign-key
referenced table in the list of "to remove".
.. change::
:tags: bug
:tickets: 17
Fix bug where create_table() didn't
handle self-referential foreign key
correctly
.. change::
:tags: bug
:tickets: 18
Default prefix for autogenerate
directives is "op.", matching the
mako templates.
.. change::
:tags: feature
:tickets: 18
Add alembic_module_prefix argument
to configure() to complement
sqlalchemy_module_prefix.
.. change::
:tags: bug
:tickets: 14
fix quotes not being rendered in
ForeignKeyConstraint during
autogenerate
.. changelog::
:version: 0.1.0
:released: Wed Nov 30 2011
.. change::
:tags:
:tickets:
Initial release. Status of features:
.. change::
:tags:
:tickets:
Alembic is used in at least one production
environment, but should still be considered
ALPHA LEVEL SOFTWARE as of this release,
particularly in that many features are expected
to be missing / unimplemented. Major API
changes are not anticipated but for the moment
nothing should be assumed.
The author asks that you *please* report all
issues, missing features, workarounds etc.
to the bugtracker.
.. change::
:tags:
:tickets:
Python 3 is supported and has been tested.
.. change::
:tags:
:tickets:
The "Pylons" and "MultiDB" environment templates
have not been directly tested - these should be
considered to be samples to be modified as
needed. Multiple database support itself
is well tested, however.
.. change::
:tags:
:tickets:
Postgresql and MS SQL Server environments
have been tested for several weeks in a production
environment. In particular, some involved workarounds
were implemented to allow fully-automated dropping
of default- or constraint-holding columns with
SQL Server.
.. change::
:tags:
:tickets:
MySQL support has also been implemented to a
basic degree, including MySQL's awkward style
of modifying columns being accommodated.
.. change::
:tags:
:tickets:
Other database environments not included among
those three have *not* been tested, *at all*. This
includes Firebird, Oracle, Sybase. Adding
support for these backends should be
straightforward. Please report all missing/
incorrect behaviors to the bugtracker! Patches
are welcome here but are optional - please just
indicate the exact format expected by the target
database.
.. change::
:tags:
:tickets:
SQLite, as a backend, has almost no support for
schema alterations to existing databases. The author
would strongly recommend that SQLite not be used in
a migration context - just dump your SQLite database
into an intermediary format, then dump it back
into a new schema. For dev environments, the
dev installer should be building the whole DB from
scratch. Or just use Postgresql, which is a much
better database for non-trivial schemas.
Requests for full ALTER support on SQLite should be
reported to SQLite's bug tracker at
http://www.sqlite.org/src/wiki?name=Bug+Reports,
as Alembic will not be implementing the
"rename the table to a temptable then copy the
data into a new table" workaround.
Note that Alembic will at some point offer an
extensible API so that you can implement commands
like this yourself.
.. change::
:tags:
:tickets:
Well-tested directives include add/drop table, add/drop
column, including support for SQLAlchemy "schema"
types which generate additional CHECK
constraints, i.e. Boolean, Enum. Other directives not
included here have *not* been strongly tested
in production, i.e. rename table, etc.
.. change::
:tags:
:tickets:
Both "online" and "offline" migrations, the latter
being generated SQL scripts to hand off to a DBA,
have been strongly production tested against
Postgresql and SQL Server.
.. change::
:tags:
:tickets:
Modifying column type, default status, and nullability is
functional and tested across PG, MSSQL, and MySQL,
but not yet widely tested in production usage.
.. change::
:tags:
:tickets:
Many migrations are still outright missing, i.e.
create/add sequences, etc. As a workaround,
execute() can be used for those which are missing,
though posting of tickets for new features/missing
behaviors is strongly encouraged.
.. change::
:tags:
:tickets:
Autogenerate feature is implemented and has been
tested, though only a little bit in a production setting.
In particular, detection of type and server
default changes is optional and off by default;
both can also be customized by a callable.
Both features work but can have surprises, particularly
the disparity between BIT/TINYINT and boolean,
which hasn't yet been worked around, as well as
format changes performed by the database on defaults
when it reports back. When enabled, the PG dialect
will execute the two defaults to be compared to
see if they are equivalent. Other backends may
need to do the same thing.
The autogenerate feature only generates
"candidate" commands which must be hand-tailored
in any case, so is still a useful feature and
is safe to use. Please report missing/broken features
of autogenerate! This will be a great feature and
will also improve SQLAlchemy's reflection services.
.. change::
:tags:
:tickets:
Support for non-ASCII table, column and constraint
names is mostly nonexistent. This is also a
straightforward feature add as SQLAlchemy itself
supports unicode identifiers; Alembic itself will
likely need fixes to logging, column identification
by key, etc. for full support here.
| jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | Here also it seems better with - | CaselIT | 4 |
sqlalchemy/alembic | 1,310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
<!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
The misspellings have been reported at https://github.com/jsoref/alembic/actions/runs/6141700632
The action reports that the changes in this PR would make it happy: https://github.com/jsoref/alembic/actions/runs/6141700754
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | docs/build/changelog.rst |
==========
Changelog
==========
.. changelog::
:version: 1.12.1
:include_notes_from: unreleased
.. changelog::
:version: 1.12.0
:released: August 31, 2023
.. change::
:tags: bug, operations
:tickets: 1300
Added support for ``op.drop_constraint()`` to support PostgreSQL
``ExcludeConstraint`` objects, as well as other constraint-like objects
that may be present in third party dialects, by resolving the ``type_``
parameter to be ``None`` for this case. Autogenerate has also been
enhanced to exclude the ``type_`` parameter from rendering within this
command when ``type_`` is ``None``. Pull request courtesy David Hills.
.. change::
:tags: bug, commands
:tickets: 1299
Fixed issue where the ``revision_environment`` directive in ``alembic.ini``
was ignored by the ``alembic merge`` command, leading to issues when other
configurational elements depend upon ``env.py`` being invoked within the
command.
.. change::
:tags: bug, autogenerate
:tickets: 1302
Fixed issue where the ``ForeignKeyConstraint.match`` parameter would not be
rendered in autogenerated migrations. Pull request courtesy Asib
Kamalsada.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Change the default value of
:paramref:`.EnvironmentContext.configure.compare_type` to ``True``.
As Alembic's autogenerate for types was dramatically improved in
version 1.4 released in 2020, the type comparison feature is now much
more reliable so is now enabled by default.
.. change::
:tags: feature, autogenerate
:tickets: 1275
Added new feature to the "code formatter" function which allows standalone
executable tools to be run against code, without going through the Python
interpreter. Known as the ``exec`` runner, it complements the existing
``console_scripts`` runner by allowing non-Python tools such as ``ruff`` to
be used. Pull request courtesy Mihail Milushev.
.. seealso::
:ref:`post_write_hooks_config`
.. changelog::
:version: 1.11.3
:released: August 16, 2023
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 1270
Improved autogenerate compare of expression based indexes on PostgreSQL
to produce fewer wrong detections.
.. change::
:tags: bug, autogenerate
:tickets: 1291
Fixed issue with ``NULLS NOT DISTINCT`` detection in PostgreSQL that
would keep detecting changes in the index or unique constraint.
.. change::
:tags: bug, commands
:tickets: 1273
Added ``encoding="locale"`` setting to the use of Python's
``ConfigParser.read()``, so that a warning is not generated when using the
recently added Python feature ``PYTHONWARNDEFAULTENCODING`` specified in
:pep:`597`. The encoding is passed as the ``"locale"`` string under Python
3.10 and greater, which indicates that the system-level locale should be
used, as was the case already here. Pull request courtesy Kevin Kirsche.
.. changelog::
:version: 1.11.2
:released: August 4, 2023
.. change::
:tags: usecase, typing
:tickets: 1253
Added typing to the default script mako templates.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Added support in autogenerate for ``NULLS NOT DISTINCT`` in
the PostgreSQL dialect.
.. change::
:tags: bug
:tickets: 1261
Fixed format string logged when running a post write hook.
Pull request courtesy of Mathieu Défosse.
.. change::
:tags: feature, operations
:tickets: 151
Added parameters if_exists and if_not_exists for index operations.
Pull request courtesy of Max Adrian.
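A sketch of the new parameters in use, with hypothetical names::

    from alembic import op

    def upgrade():
        op.create_index(
            "ix_account_email", "account", ["email"], if_not_exists=True
        )

    def downgrade():
        op.drop_index("ix_account_email", table_name="account", if_exists=True)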
.. changelog::
:version: 1.11.1
:released: May 17, 2023
.. change::
:tags: bug, autogenerate, regression
:tickets: 1243, 1245
As Alembic 1.11.0 is considered a major release (Alembic does not use
semver, nor does its parent project SQLAlchemy; this has been
:ref:`clarified <versioning_scheme>` in the documentation), change
:ticket:`1130` modified calling signatures for most operations to consider
all optional keyword parameters to be keyword-only arguments, to match what
was always documented and generated by autogenerate. However, two of these
changes were identified as possibly problematic without a more formal
deprecation warning being emitted which were the ``table_name`` parameter
to :meth:`.Operations.drop_index`, which was generated positionally by
autogenerate prior to version 0.6.3 released in 2014, and ``type_`` in
:meth:`.Operations.drop_constraint` and
:meth:`.BatchOperations.drop_constraint`, which was documented positionally
in one example in the batch documentation.
These two signatures have been
restored to allow those particular parameters to be passed positionally. A
future change will include formal deprecation paths (with warnings) for
these arguments where they will again become keyword-only in a future
"Significant Minor" release.
.. change::
:tags: bug, typing
:tickets: 1246
Fixed typing use of :class:`~sqlalchemy.schema.Column` and other
generic SQLAlchemy classes.
.. change::
:tags: bug, typing, regression
:tickets: 1244
Restored the output type of :meth:`.Config.get_section` to include
``Dict[str, str]`` as a potential return type, which had been changed to
immutable ``Mapping[str, str]``. When a section is returned and the default
is not used, a mutable dictionary is returned.
.. changelog::
:version: 1.11.0
:released: May 15, 2023
.. change::
:tags: bug, batch
:tickets: 1237
Added placeholder classes for :class:`~.sqla.Computed` and
:class:`~.sqla.Identity` when older 1.x SQLAlchemy versions are in use,
namely prior to SQLAlchemy 1.3.11 when the :class:`~.sqla.Computed`
construct was introduced. Previously these were set to None, however this
could cause issues with certain codepaths that were using ``isinstance()``
such as one within "batch mode".
.. change::
:tags: bug, batch
:tickets: 1221
Correctly pass previously ignored arguments ``insert_before`` and
``insert_after`` in ``batch_alter_column``
.. change::
:tags: change, py3k
:tickets: 1130
Argument signatures of Alembic operations now enforce keyword-only
arguments as passed as keyword and not positionally, such as
:paramref:`.Operations.create_table.schema`,
:paramref:`.Operations.add_column.type_`, etc.
.. change::
:tags: bug, postgresql
:tickets: 1230
Fix autogenerate issue with PostgreSQL :class:`.ExcludeConstraint`
that included sqlalchemy functions. The function text was previously
rendered as a plain string without surrounding with ``text()``.
.. change::
:tags: bug, mysql, regression
:tickets: 1240
Fixed regression caused by :ticket:`1166` released in version 1.10.0 which
caused MySQL unique constraints with multiple columns to not compare
correctly within autogenerate, due to different sorting rules on unique
constraints vs. indexes, which in MySQL are shared constructs.
.. change::
:tags: misc
:tickets: 1220
Update code snippets within docstrings to use ``black`` code formatting.
Pull request courtesy of James Addison.
.. change::
:tags: bug, typing
:tickets: 1093
Updated stub generator script to also add stubs method definitions for the
:class:`.Operations` class and the :class:`.BatchOperations` class obtained
from :meth:`.Operations.batch_alter_table`. As part of this change, the
class hierarchy of :class:`.Operations` and :class:`.BatchOperations` has
been rearranged on top of a common base class :class:`.AbstractOperations`
in order to type correctly, as :class:`.BatchOperations` uses different
method signatures for operations than :class:`.Operations`.
.. change::
:tags: bug, typing
Repaired the return signatures for :class:`.Operations` that mostly
return ``None``, and were erroneously referring to ``Optional[Table]``
in many cases.
.. change::
:tags: usecase, commands
:tickets: 1109
Added quiet option to the command line, using the ``-q/--quiet``
option. This flag will prevent alembic from logging anything
to stdout.
.. change::
:tags: bug, autogenerate
:tickets: 1178
Modified the autogenerate implementation for comparing "server default"
values from user-defined metadata to not apply any quoting to the value
before comparing it to the server-reported default, except for within
dialect-specific routines as needed. This change will affect the format of
the server default as passed to the
:paramref:`.EnvironmentContext.configure.compare_server_default` hook, as
well as for third party dialects that implement a custom
``compare_server_default`` hook in their alembic impl, to be passed "as is"
and not including additional quoting. Custom implementations which rely
on this quoting should adjust their approach based on observed formatting.
.. change::
:tags: bug, api, autogenerate
:tickets: 1235
Fixed issue where :func:`.autogenerate.render_python_code` function did not
provide a default value for the ``user_module_prefix`` variable, leading to
``NoneType`` errors when autogenerate structures included user-defined
types. Added new parameter
:paramref:`.autogenerate.render_python_code.user_module_prefix` to allow
this to be set as well as to default to ``None``. Pull request courtesy
tangkikodo.
.. change::
:tags: usecase, asyncio
:tickets: 1231
Added :meth:`.AbstractOperations.run_async` to the operation module to
allow running async functions in the ``upgrade`` or ``downgrade`` migration
function when running alembic using an async dialect. This function will
receive as first argument an
:class:`~sqlalchemy.ext.asyncio.AsyncConnection` sharing the transaction
used in the migration context.
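A sketch of the new hook within a migration, assuming an async
dialect is in use; the UPDATE statement is hypothetical::

    import sqlalchemy as sa
    from alembic import op

    async def _data_fix(connection):
        # "connection" is an AsyncConnection sharing the migration
        # transaction
        await connection.execute(sa.text("UPDATE account SET flag = true"))

    def upgrade():
        op.run_async(_data_fix)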
.. changelog::
:version: 1.10.4
:released: April 24, 2023
.. change::
:tags: postgresql, autogenerate, feature
:tickets: 1213
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL sort option, such as ``ASC`` or ``NULLS FIRST``.
The sort options are correctly detected only when defined using the
sqlalchemy modifier functions, such as ``asc()`` or ``nulls_first()``,
or the equivalent methods.
Passing sort options inside the ``postgresql_ops`` dict is not supported.
.. change::
:tags: bug, operations
:tickets: 1215
Fixed issue where using a directive such as ``op.create_foreign_key()`` to
create a self-referential constraint on a single table where the same
column was present on both sides (e.g. within a composite foreign key)
would produce an error under SQLAlchemy 2.0 and a warning under SQLAlchemy
1.4 indicating that a duplicate column was being added to a table.
.. changelog::
:version: 1.10.3
:released: April 5, 2023
.. change::
:tags: bug, typing
:tickets: 1191, 1201
Fixed various typing issues observed with pyright, including issues
involving the combination of :class:`.Function` and
:meth:`.MigrationContext.begin_transaction`.
.. change::
:tags: bug, autogenerate
:tickets: 1212
Fixed error raised by alembic when running autogenerate after removing
a function based index.
.. changelog::
:version: 1.10.2
:released: March 8, 2023
.. change::
:tags: bug, ops
:tickets: 1196
Fixed regression where Alembic would not run with older SQLAlchemy 1.3
versions prior to 1.3.24 due to a missing symbol. Workarounds have been
applied for older 1.3 versions.
.. changelog::
:version: 1.10.1
:released: March 6, 2023
.. change::
:tags: bug, postgresql
:tickets: 1184
Fixed issue regarding PostgreSQL :class:`.ExcludeConstraint`, where
constraint elements which made use of :func:`.literal_column` could not be
rendered for autogenerate. Additionally, using SQLAlchemy 2.0.5 or greater,
:func:`.text()` constructs are also supported within PostgreSQL
:class:`.ExcludeConstraint` objects for autogenerate render. Pull request
courtesy Jan Katins.
.. change::
:tags: bug, batch, regression
:tickets: 1195
Fixed regression for 1.10.0 where :class:`.Constraint` objects were
suddenly required to have non-None name fields when using batch mode, which
was not previously a requirement.
.. changelog::
:version: 1.10.0
:released: March 5, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1166
Fixed issue in index detection where autogenerate change detection would
consider indexes with the same columns but with different order as equal,
while in general they are not equivalent in how a database will use them.
.. change::
:tags: feature, revisioning
:tickets: 760
Recursive traversal of revision files in a particular revision directory is
now supported, by indicating ``recursive_version_locations = true`` in
alembic.ini. Pull request courtesy ostr00000.
.. change::
:tags: bug, autogenerate, sqlite
:tickets: 1165
Fixed issue where indexes on SQLite which include SQL expressions would not
compare correctly, generating false positives under autogenerate. These
indexes are now skipped, generating a warning, in the same way that
expression-based indexes on PostgreSQL are skipped and generate warnings
when SQLAlchemy 1.x installations are in use. Note that reflection of
SQLite expression-based indexes continues to not yet be supported under
SQLAlchemy 2.0, even though PostgreSQL expression-based indexes have now
been implemented.
.. change::
:tags: bug, mssql
:tickets: 1187
Properly escape constraint name on SQL Server when dropping
a column while specifying ``mssql_drop_default=True`` or
``mssql_drop_check=True`` or ``mssql_drop_foreign_key=True``.
.. change::
:tags: usecase, autogenerate, postgresql
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL expressions, when using SQLAlchemy 2.0; the previous warning
that such indexes were skipped are removed when the new functionality
is in use. When using SQLAlchemy versions prior to the 2.0 series,
the indexes continue to be skipped with a warning.
.. changelog::
:version: 1.9.4
:released: February 16, 2023
.. change::
:tags: bug, mssql
:tickets: 1177
Ongoing fixes for SQL Server server default comparisons under autogenerate,
adjusting for SQL Server's collapsing of whitespace between SQL function
arguments when reporting on a function-based server default, as well as its
arbitrary addition of parentheses within arguments; the approach has now
been made more aggressive by stripping all whitespace, parentheses, and
quoting characters from the two default strings before comparing them.
.. change::
:tags: bug, postgresql
Fixed PostgreSQL server default comparison to handle SQL expressions
sent as ``text()`` constructs, such as ``text("substring('name', 1, 3)")``,
which previously would raise errors when attempting to run a server-based
comparison.
.. change::
:tags: bug, autogenerate
:tickets: 1180
Removed a mis-use of the
:paramref:`.EnvironmentContext.configure.render_item` callable where the
"server_default" renderer would be erroneously used within the server
default comparison process, which works against SQL expressions, not
Python code.
.. change::
:tags: bug, commands
Fixed regression introduced in 1.7.0 where the "config" object passed to
the template context when running the :func:`.merge` command
programmatically failed to be correctly populated. Pull request courtesy
Brendan Gann.
.. changelog::
:version: 1.9.3
:released: February 7, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1167
Fixed issue where rendering of user-defined types that then went onto use
the ``.with_variant()`` method would fail to render, if using SQLAlchemy
2.0's version of variants.
.. changelog::
:version: 1.9.2
:released: January 14, 2023
.. change::
:tags: bug, typing
:tickets: 1146, 1147
Fixed typing definitions for :meth:`.EnvironmentContext.get_x_argument`.
Typing stubs are now generated for overloaded proxied methods such as
:meth:`.EnvironmentContext.get_x_argument`.
.. change::
:tags: bug, autogenerate
:tickets: 1152
Fixed regression caused by :ticket:`1145` where the string transformations
applied to server defaults caused expressions such as ``(getdate())`` to no
longer compare as equivalent on SQL Server, among others.
.. changelog::
:version: 1.9.1
:released: December 23, 2022
.. change::
:tags: bug, autogenerate
:tickets: 1145
Fixed issue where server default compare would not work for string defaults
that contained backslashes, due to mis-rendering of these values when
comparing their contents.
.. change::
:tags: bug, oracle
Implemented basic server default comparison for the Oracle backend;
previously, Oracle's formatting of reflected defaults prevented any
matches from occurring.
.. change::
:tags: bug, sqlite
Adjusted SQLite's compare server default implementation to better handle
defaults with or without parens around them, from both the reflected and
the local metadata side.
.. change::
:tags: bug, mssql
Adjusted SQL Server's compare server default implementation to better
handle defaults with or without parens around them, from both the reflected
and the local metadata side.
.. changelog::
:version: 1.9.0
:released: December 15, 2022
.. change::
:tags: feature, commands
:tickets: 724
Added new Alembic command ``alembic check``. This performs the widely
requested feature of running an "autogenerate" comparison between the
current database and the :class:`.MetaData` that's currently set up for
autogenerate, returning an error code if the two do not match, based on
current autogenerate settings. Pull request courtesy Nathan Louie.
.. seealso::
:ref:`alembic_check`
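As a sketch of programmatic use (the config path here is hypothetical), the
same check can be run via the command API, raising an error when the
database and the model metadata differ::
from alembic import command
from alembic.config import Config
cfg = Config("alembic.ini")  # hypothetical config location
# errors (nonzero exit status at the CLI) if autogenerate would
# produce any new operations against the current database
command.check(cfg)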
.. change::
:tags: bug, tests
Fixed issue in tox.ini file where changes in the tox 4.0 series to the
format of "passenv" caused tox to not function correctly, in particular
raising an error as of tox 4.0.6.
.. change::
:tags: bug, typing
:tickets: 1110
Fixed typing issue where :paramref:`.revision.process_revision_directives`
was not fully typed; additionally ensured all ``Callable`` and ``Dict``
arguments to :meth:`.EnvironmentContext.configure` include parameters in
the typing declaration.
Additionally updated the codebase for Mypy 0.990 compliance.
.. changelog::
:version: 1.8.1
:released: July 13, 2022
.. change::
:tags: bug, sqlite
:tickets: 1065
Fixed bug where the SQLite implementation of
:meth:`.Operations.rename_table` would render an explicit schema name for
both the old and new table name, which, while standard ALTER syntax, is
not accepted by SQLite, as SQLite doesn't support a rename across
schemas. In particular, the syntax issue would prevent batch mode from
working for SQLite databases that made use of attached databases (which are
treated as "schemas" in SQLAlchemy).
.. change::
:tags: bug, batch
:tickets: 1021
Added an error raise for the condition where
:meth:`.Operations.batch_alter_table` is used in ``--sql`` mode, where the
operation requires table reflection, as is the case when running against
SQLite without giving it a fixed ``Table`` object. Previously the operation
would fail with an internal error. To get a "move and copy" batch
operation as a SQL script without connecting to a database,
a ``Table`` object should be passed to the
:paramref:`.Operations.batch_alter_table.copy_from` parameter so that
reflection may be skipped.
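A hedged sketch of the ``copy_from`` approach for ``--sql`` mode; the table
definition is hypothetical::
import sqlalchemy as sa
from alembic import op
account = sa.Table(
    "account",
    sa.MetaData(),
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String(50)),
)
def upgrade():
    # reflection is skipped since the Table is supplied explicitly,
    # so the "move and copy" SQL can be emitted offline
    with op.batch_alter_table("account", copy_from=account) as batch_op:
        batch_op.add_column(sa.Column("email", sa.String(100)))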
.. changelog::
:version: 1.8.0
:released: May 31, 2022
.. change::
:tags: feature, typing
:tickets: 764
:pep:`484` typing annotations have been added to the ``env.py`` and
revision template files within migration templates. Pull request by Nikita
Sobolev.
.. change::
:tags: usecase, operations
:tickets: 1037
The ``op.drop_table()`` operation directive will now trigger the
``before_drop()`` and ``after_drop()`` DDL event hooks at the table level,
which is similar to how the ``before_create()`` and ``after_create()``
hooks are triggered by the ``op.create_table()`` directive. Note that as
``op.drop_table()`` accepts only a table name and optional schema name, the
``Table`` object received by the event will not have any information within
it other than the table name and schema name.
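As an illustrative sketch (listener body hypothetical), a class-level
SQLAlchemy DDL event hook that would now also fire for ``op.drop_table()``::
import sqlalchemy as sa
from sqlalchemy import event
@event.listens_for(sa.Table, "before_drop")
def before_drop(target, connection, **kw):
    # for op.drop_table(), ``target`` carries only the table name
    # and schema name; no column information is present
    print("dropping table:", target.name)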
.. change::
:tags: installation, changed
:tickets: 1025
Alembic 1.8 now supports Python 3.7 and above.
.. change::
:tags: changed, environment
:tickets: 987
The "Pylons" environment template has been removed as of Alembic 1.8. This
template was based on the very old pre-Pyramid Pylons web framework which
has been long superseded by Pyramid.
.. change::
:tags: bug, revisioning
:tickets: 1026
Fixed issue where a downgrade using a relative revision would
fail in case of multiple branches with a single effective
head, due to interdependencies between revisions.
.. change::
:tags: usecase, commands
:tickets: 1027
Added new token ``epoch`` to the ``file_template`` option, which will
populate the integer epoch as determined by ``int(create_date.timestamp())``.
Pull request courtesy Caio Carvalho.
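A minimal alembic.ini sketch using the new token alongside the existing ones
(percent signs are doubled for configparser escaping)::
[alembic]
file_template = %%(epoch)s_%%(rev)s_%%(slug)s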
.. change::
:tags: bug, batch
:tickets: 1034
Fixed issue in batch mode where CREATE INDEX would not use a new column
name in the case of a column rename.
.. changelog::
:version: 1.7.7
:released: March 14, 2022
.. change::
:tags: bug, operations
:tickets: 1004
Fixed issue where using :meth:`.Operations.create_table` in conjunction
with a :class:`.CheckConstraint` that referred to table-bound
:class:`.Column` objects rather than string expressions would be added to
the parent table potentially multiple times, resulting in an incorrect DDL
sequence. Pull request courtesy Nicolas CANIART.
.. change::
:tags: bug, environment
:tickets: 986
The ``logging.fileConfig()`` line in ``env.py`` templates, which is used
to setup Python logging for the migration run, is now conditional on
:attr:`.Config.config_file_name` not being ``None``. Otherwise, the line
is skipped as there is no default logging configuration present.
.. change::
:tags: bug, mssql
:tickets: 977
Fixed bug where an :meth:`.Operations.alter_column` operation would change
a "NOT NULL" column to "NULL" by emitting an ALTER COLUMN statement that
did not specify "NOT NULL". (In the absence of "NOT NULL" T-SQL was
implicitly assuming "NULL"). An :meth:`.Operations.alter_column` operation
that specifies :paramref:`.Operations.alter_column.type` should also
include either :paramref:`.Operations.alter_column.nullable` or
:paramref:`.Operations.alter_column.existing_nullable` to inform Alembic as
to whether the emitted DDL should include "NULL" or "NOT NULL"; a warning
is now emitted if this is missing under this scenario.
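A hedged sketch of a type change that avoids the new warning by stating
nullability explicitly; the table and column names are hypothetical::
import sqlalchemy as sa
from alembic import op
op.alter_column(
    "account",
    "name",
    type_=sa.String(100),
    existing_nullable=False,  # tells Alembic to emit "NOT NULL"
)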
.. changelog::
:version: 1.7.6
:released: February 1, 2022
.. change::
:tags: bug, batch, regression
:tickets: 982
Fixed regression where usage of a ``with_variant()`` datatype in
conjunction with the ``existing_type`` option of ``op.alter_column()``
under batch mode would lead to an internal exception.
.. change::
:tags: usecase, commands
:tickets: 964
Added a new command ``alembic ensure_version``, which will ensure that the
Alembic version table is present in the target database, but does not
alter its contents. Pull request courtesy Kai Mueller.
.. change::
:tags: bug, autogenerate
Implemented support for recognizing and rendering SQLAlchemy "variant"
types going forward into SQLAlchemy 2.0, where the architecture of
"variant" datatypes will be changing.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 968
Added a rule to the MySQL impl so that the translation between JSON /
LONGTEXT is accommodated by autogenerate, treating LONGTEXT from the server
as equivalent to an existing JSON in the model.
.. change::
:tags: mssql
Removed a warning raised by SQLAlchemy when dropping constraints
on MSSQL regarding statement caching.
.. changelog::
:version: 1.7.5
:released: November 11, 2021
.. change::
:tags: bug, tests
Adjustments to the test suite to accommodate for error message changes
occurring as of SQLAlchemy 1.4.27.
.. changelog::
:version: 1.7.4
:released: October 6, 2021
.. change::
:tags: bug, regression
:tickets: 934
Fixed a regression that prevented the use of post-write hooks
on Python versions lower than 3.9.
.. change::
:tags: bug, environment
:tickets: 944
Fixed issue where the :meth:`.MigrationContext.autocommit_block` feature
would fail to function when using a SQLAlchemy engine using 2.0 future
mode.
.. changelog::
:version: 1.7.3
:released: September 17, 2021
.. change::
:tags: bug, mypy
:tickets: 914
Fixed type annotations for the "constraint_name" argument of operations
``create_primary_key()``, ``create_foreign_key()``. Pull request courtesy
TilmanK.
.. changelog::
:version: 1.7.2
:released: September 17, 2021
.. change::
:tags: bug, typing
:tickets: 900
Added attributes that were previously missing from the context stubs.
.. change::
:tags: bug, mypy
:tickets: 897
Fixed an import in one of the .pyi files that was triggering an
assertion error in some versions of mypy.
.. change::
:tags: bug, regression, ops
:tickets: 920
Fixed issue where registration of custom ops was prone to failure due to
the registration process running ``exec()`` on generated code that as of
the 1.7 series includes pep-484 annotations, which in the case of end user
code would result in name resolution errors when the exec occurs. The logic
in question has been altered so that the annotations are rendered as
forward references so that the ``exec()`` can proceed.
.. changelog::
:version: 1.7.1
:released: August 30, 2021
.. change::
:tags: bug, installation
:tickets: 893
Corrected "universal wheel" directive in setup.cfg so that building a wheel
does not target Python 2. The PyPI files index for 1.7.0 was corrected
manually. Pull request courtesy layday.
.. change::
:tags: bug, pep484
:tickets: 895
Fixed issue in generated .pyi files where default values for ``Optional``
arguments were missing, thereby causing mypy to consider them as required.
.. change::
:tags: bug, regression, batch
:tickets: 896
Fixed regression in batch mode due to :ticket:`883` where the "auto" mode
of batch would fail to accommodate any additional migration directives
beyond encountering an ``add_column()`` directive, due to a mis-application
of the conditional logic that was added as part of this change, leading to
"recreate" mode not being used in cases where it is required for SQLite
such as for unique constraints.
.. changelog::
:version: 1.7.0
:released: August 30, 2021
.. change::
:tags: bug, operations
:tickets: 879
Fixed regression due to :ticket:`803` where the ``.info`` and ``.comment``
attributes of ``Table`` would be lost inside of the :class:`.DropTableOp`
class, which when "reversed" into a :class:`.CreateTableOp` would then have
lost these elements. Pull request courtesy Nicolas CANIART.
.. change::
:tags: feature, environment
:tickets: 842
Enhance ``version_locations`` parsing to handle paths containing spaces.
The new configuration option ``version_path_separator`` specifies the
character to use when splitting the ``version_locations`` string. The
default for new configurations is ``version_path_separator = os``,
which will use ``os.pathsep`` (e.g., ``;`` on Windows).
.. change::
:tags: installation, changed
Alembic 1.7 now supports Python 3.6 and above; support for prior versions
including Python 2.7 has been dropped.
.. change::
:tags: bug, sqlite, batch
:tickets: 883
Batch "auto" mode will now select for "recreate" if the ``add_column()``
operation is used on SQLite, and the column itself meets the criteria for
SQLite where ADD COLUMN is not allowed: in this case, a functional or
parenthesized SQL expression or a ``Computed`` (i.e. generated) column.
.. change::
:tags: changed, installation
:tickets: 674
Make the ``python-dateutil`` library an optional dependency.
This library is only required if the ``timezone`` option
is used in the Alembic configuration.
An extra require named ``tz`` is available with
``pip install alembic[tz]`` to install it.
.. change::
:tags: bug, commands
:tickets: 856
Re-implemented the ``python-editor`` dependency as a small internal
function to avoid the need for external dependencies.
.. change::
:tags: usecase, batch
:tickets: 884
Named CHECK constraints are now supported by batch mode, and will
automatically be part of the recreated table assuming they are named. They
also can be explicitly dropped using ``op.drop_constraint()``. For
"unnamed" CHECK constraints, these are still skipped as they cannot be
distinguished from the CHECK constraints that are generated by the
``Boolean`` and ``Enum`` datatypes.
Note that this change may require adjustments to migrations that drop or
rename columns which feature an associated named check constraint, such
that an additional ``op.drop_constraint()`` directive should be added for
that named constraint as there will no longer be an associated column
for it; for the ``Boolean`` and ``Enum`` datatypes, an ``existing_type``
keyword may be passed to ``BatchOperations.drop_constraint`` as well.
.. seealso::
:ref:`batch_schematype_constraints`
:ref:`batch_check_constraints`
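A hedged sketch of dropping named CHECK constraints under batch mode; all
names are hypothetical, and the second call shows the ``existing_type``
keyword mentioned above for schema-type constraints::
import sqlalchemy as sa
from alembic import op
with op.batch_alter_table("account") as batch_op:
    # an explicitly named CHECK constraint
    batch_op.drop_constraint("ck_account_balance_positive", type_="check")
    # for Boolean/Enum-generated constraints, pass existing_type
    batch_op.drop_constraint(
        "ck_account_is_active", type_="check", existing_type=sa.Boolean()
    )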
.. change::
:tags: changed, installation
:tickets: 885
The dependency on ``pkg_resources`` which is part of ``setuptools`` has
been removed, so there is no longer any runtime dependency on
``setuptools``. The functionality has been replaced with
``importlib.metadata`` and ``importlib.resources`` which are both part of
Python std.lib, or via the PyPI dependencies ``importlib-metadata`` for
Python versions < 3.8 and ``importlib-resources`` for Python versions < 3.9
(while importlib.resources was added to Python in 3.7, it did not include
the "files" API until 3.9).
.. change::
:tags: feature, tests
:tickets: 855
Created a "test suite" similar to the one for SQLAlchemy, allowing
developers of third-party dialects to test their code against a set of
Alembic tests that have been specially selected to exercise
back-end database operations. At the time of release,
third-party dialects that have adopted the Alembic test suite to verify
compatibility include
`CockroachDB <https://pypi.org/project/sqlalchemy-cockroachdb/>`_ and
`SAP ASE (Sybase) <https://pypi.org/project/sqlalchemy-sybase/>`_.
.. change::
:tags: bug, postgresql
:tickets: 874
Fixed issue where usage of the PostgreSQL ``postgresql_include`` option
within a :meth:`.Operations.create_index` would raise a KeyError, as the
additional column(s) need to be added to the table object used by the
construct internally. The issue is equivalent to the SQL Server issue fixed
in :ticket:`513`. Pull request courtesy Steven Bronson.
.. change::
:tags: feature, general
pep-484 type annotations have been added throughout the library.
Additionally, stub .pyi files have been added for the "dynamically"
generated Alembic modules ``alembic.op`` and ``alembic.config``, which
include complete function signatures and docstrings, so that the functions
in these namespaces will have both IDE support (vscode, pycharm, etc) as
well as support for typing tools like Mypy. The files themselves are
statically generated from their source functions within the source tree.
.. changelog::
:version: 1.6.5
:released: May 27, 2021
.. change::
:tags: bug, autogenerate
:tickets: 849
Fixed issue where dialect-specific keyword arguments within the
:class:`.DropIndex` operation directive would not render in the
autogenerated Python code. As support was improved for adding dialect
specific arguments to directives as part of :ticket:`803`, in particular
arguments such as "postgresql_concurrently" which apply to the actual
create/drop of the index, support was needed for these to render even in a
drop index operation. Pull request courtesy Jet Zhou.
.. changelog::
:version: 1.6.4
:released: May 24, 2021
.. change::
:tags: bug, regression, op directives
:tickets: 848
Fixed regression caused by just fixed :ticket:`844` that scaled back the
filter for ``unique=True/index=True`` too far such that these directives no
longer worked for the ``op.create_table()`` op, this has been fixed.
.. changelog::
:version: 1.6.3
:released: May 21, 2021
.. change::
:tags: bug, regression, autogenerate
:tickets: 844
Fixed 1.6-series regression where ``UniqueConstraint`` and to a lesser
extent ``Index`` objects would be doubled up in the generated model when
the ``unique=True`` / ``index=True`` flags were used.
.. change::
:tags: bug, autogenerate
:tickets: 839
Fixed a bug where paths defined in post-write hook options
would be wrongly escaped in non-POSIX environments (Windows).
.. change::
:tags: bug, regression, versioning
:tickets: 843
Fixed regression where a revision file that contained its own down revision
as a dependency would cause an endless loop in the traversal logic.
.. changelog::
:version: 1.6.2
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 839
Fixed additional regression nearly the same as that of :ticket:`838` just
released in 1.6.1 but within a slightly different codepath, where "alembic
downgrade head" (or equivalent) would fail instead of iterating no
revisions.
.. changelog::
:version: 1.6.1
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 838
Fixed regression in new revisioning traversal where "alembic downgrade
base" would fail if the database itself were clean and unversioned;
additionally repairs the case where downgrade would fail if attempting
to downgrade to the current head that is already present.
.. changelog::
:version: 1.6.0
:released: May 3, 2021
.. change::
:tags: bug, autogenerate
:tickets: 803
Refactored the implementation of :class:`.MigrateOperation` constructs such
as :class:`.CreateIndexOp`, :class:`.CreateTableOp`, etc. so that they no
longer rely upon maintaining a persistent version of each schema object
internally; instead, the state variables of each operation object will be
used to produce the corresponding construct when the operation is invoked.
The rationale is so that environments which make use of
operation-manipulation schemes such as those discussed in
:ref:`autogen_rewriter` are better supported, allowing end-user code to
manipulate the public attributes of these objects which will then be
expressed in the final output, an example is
``some_create_index_op.kw["postgresql_concurrently"] = True``.
Previously, these objects when generated from autogenerate would typically
hold onto the original, reflected element internally without honoring the
other state variables of each construct, preventing the public API from
working.
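As a sketch of the kind of operation-manipulation scheme this change
supports (the hook body is hypothetical), public attributes of the op
objects may now be altered directly::
from alembic.autogenerate import rewriter
from alembic.operations import ops
writer = rewriter.Rewriter()
@writer.rewrites(ops.CreateIndexOp)
def add_concurrently(context, revision, op):
    # state set here is honored when the operation is invoked
    op.kw["postgresql_concurrently"] = True
    return op
The writer would then be passed as ``process_revision_directives`` to
``context.configure()``, per the rewriter documentation.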
.. change::
:tags: bug, environment
:tickets: 829
Fixed regression caused by the SQLAlchemy 1.4/2.0 compatibility switch
where calling ``.rollback()`` or ``.commit()`` explicitly within the
``context.begin_transaction()`` context manager would cause it to fail when
the block ended, as it did not expect that the transaction was manually
closed.
.. change::
:tags: bug, autogenerate
:tickets: 827
Improved the rendering of ``op.add_column()`` operations when adding
multiple columns to an existing table, so that the order of these
statements matches the order in which the columns were declared in the
application's table metadata. Previously the added columns were being
sorted alphabetically.
.. change::
:tags: feature, autogenerate
:tickets: 819
Fixed the documentation regarding the default command-line argument position
of the revision script filename within the post-write hook arguments.
Implemented a ``REVISION_SCRIPT_FILENAME`` token, enabling the position to be
changed. Switched from ``str.split()`` to ``shlex.split()`` for more robust
command-line argument parsing.
.. change::
:tags: feature
:tickets: 822
Implemented a ``.cwd`` (current working directory) suboption for post-write
hooks (of type ``console_scripts``). This is useful for tools like
pre-commit, which rely on the working directory to locate the necessary
config files. Added pre-commit as an example to the documentation. Minor
change: renamed some variables from ticket #819 to improve readability.
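A hedged alembic.ini sketch wiring pre-commit through the new suboption; the
hook name and options are illustrative::
[post_write_hooks]
hooks = pre_commit
pre_commit.type = console_scripts
pre_commit.entrypoint = pre-commit
pre_commit.options = run --files REVISION_SCRIPT_FILENAME
pre_commit.cwd = %(here)s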
.. change::
:tags: bug, versioning
:tickets: 765, 464
The algorithm used for calculating downgrades/upgrades/iterating
revisions has been rewritten, to resolve ongoing issues of branches
not being handled consistently particularly within downgrade operations,
as well as for overall clarity and maintainability. This change includes
that a deprecation warning is emitted if an ambiguous command such
as "downgrade -1" when multiple heads are present is given.
In particular, the change implements a long-requested use case of allowing
downgrades of a single branch to a branchpoint.
Huge thanks to Simon Bowly for their impressive efforts in successfully
tackling this very difficult problem.
.. change::
:tags: bug, batch
:tickets: 799
Added missing ``batch_op.create_table_comment()``,
``batch_op.drop_table_comment()`` directives to batch ops.
.. changelog::
:version: 1.5.8
:released: March 23, 2021
.. change::
:tags: bug, environment
:tickets: 816
Fixed regression caused by SQLAlchemy 1.4 where the "alembic current"
command would fail due to changes in the ``URL`` object.
.. changelog::
:version: 1.5.7
:released: March 11, 2021
.. change::
:tags: bug, autogenerate
:tickets: 813
Adjusted the recently added
:paramref:`.EnvironmentContext.configure.include_name` hook to accommodate
for additional object types such as "views" that don't have a parent table,
to support third party recipes and extensions. Pull request courtesy Oliver
Rice.
.. changelog::
:version: 1.5.6
:released: March 5, 2021
.. change::
:tags: bug, mssql, operations
:tickets: 812
Fixed bug where the "existing_type" parameter, which the MSSQL dialect
requires in order to change the nullability of a column in the absence of
also changing the column type, would cause an ALTER COLUMN operation to
incorrectly render a second ALTER statement without the nullability if a
new type were also present, as the MSSQL-specific contract did not
anticipate all three of "nullability", ``"type_"`` and "existing_type" being
sent at the same time.
.. change::
:tags: template
:tickets: 805
Added an async template to Alembic to bootstrap environments that use
async DBAPI. Updated the cookbook to include a migration guide
on how to adapt an existing environment for use with async DBAPI drivers.
.. changelog::
:version: 1.5.5
:released: February 20, 2021
.. change::
:tags: bug
Adjusted the use of SQLAlchemy's ".copy()" internals to use "._copy()"
for version 1.4.0, as this method is being renamed.
.. change::
:tags: bug, environment
:tickets: 797
Added new config file option ``prepend_sys_path``, which is a series of
paths that will be prepended to sys.path; the default value in newly
generated alembic.ini files is ".". This fixes a long-standing issue
where for some reason running the alembic command line would not place the
local "." path in sys.path, meaning an application locally present in "."
and importable through normal channels, e.g. python interpreter, pytest,
etc. would not be located by Alembic, even though the ``env.py`` file is
loaded relative to the current path when ``alembic.ini`` contains a
relative path. To enable for existing installations, add the option to the
alembic.ini file as follows::
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
.. seealso::
:ref:`installation` - updated documentation reflecting that local
installation of the project is not necessary if running the Alembic cli
from the local path.
.. changelog::
:version: 1.5.4
:released: February 3, 2021
.. change::
:tags: bug, versioning
:tickets: 789
Fixed bug in versioning model where a downgrade across a revision with a
dependency on another branch, yet an ancestor is also dependent on that
branch, would produce an erroneous state in the alembic_version table,
making upgrades impossible without manually repairing the table.
.. changelog::
:version: 1.5.3
:released: January 29, 2021
.. change::
:tags: bug, autogenerate
:tickets: 786
Changed the default ordering of "CREATE" and "DROP" statements for indexes and
unique constraints within the autogenerate process, so that for example in
an upgrade() operation, a particular index or constraint that is to be
replaced such as for a casing convention change will not produce any naming
conflicts. For foreign key constraint objects, this is already how
constraints are ordered, and for table objects, users would normally want
to use :meth:`.Operations.rename_table` in any case.
.. change::
:tags: bug, autogenerate, mssql
:tickets: 787
Fixed assorted autogenerate issues with SQL Server:
* ignore default reflected identity on primary_key columns
* improve server default comparison
.. change::
:tags: bug, mysql, autogenerate
:tickets: 788
Fixed issue where autogenerate rendering of ``op.alter_column()`` would
fail to include MySQL ``existing_nullable=False`` if the column were part
of a primary key constraint within the table metadata.
.. changelog::
:version: 1.5.2
:released: January 20, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 784
Fixed regression where new "loop detection" feature introduced in
:ticket:`757` produced false positives for revision names that have
overlapping substrings between revision number and down revision and/or
dependency, if the downrev/dependency were not in sequence form.
.. change::
:tags: bug, environment
:tickets: 782
Fixed regression where Alembic would fail to create a transaction properly
if the :class:`sqlalchemy.engine.Connection` were a so-called "branched"
connection, that is, one where the ``.connect()`` method had been called to
create a "sub" connection.
.. changelog::
:version: 1.5.1
:released: January 19, 2021
.. change::
:tags: bug, installation, commands
:tickets: 780
Fixed installation issue where the "templates" directory was not being
installed, preventing commands like "list_templates" and "init" from
working.
.. changelog::
:version: 1.5.0
:released: January 18, 2021
.. change::
:tags: usecase, operations
:tickets: 730
Added support for rendering of "identity" elements on
:class:`.Column` objects, supported in SQLAlchemy via
the :class:`.Identity` element introduced in version 1.4.
Adding columns with identity is supported on PostgreSQL,
MSSQL and Oracle. Changing the identity options or removing
it is supported only on PostgreSQL and Oracle.
.. change::
:tags: changed, environment
To accommodate SQLAlchemy 1.4 and 2.0, the migration model now no longer
assumes that the SQLAlchemy Connection will autocommit an individual
operation. This essentially means that for databases that use
non-transactional DDL (pysqlite current driver behavior, MySQL), there is
still a BEGIN/COMMIT block that will surround each individual migration.
Databases that support transactional DDL should continue to have the
same flow, either per migration or per-entire run, depending on the
value of the :paramref:`.Environment.configure.transaction_per_migration`
flag.
.. change::
:tags: changed, environment
A :class:`.CommandError` is raised if a ``sqlalchemy.engine.Engine`` is
passed to the :meth:`.MigrationContext.configure` method instead of a
``sqlalchemy.engine.Connection`` object. Previously, this would be a
warning only.
.. change::
:tags: bug, operations
:tickets: 753
Modified the ``add_column()`` operation such that the ``Column`` object in
use is shallow copied to a new instance if that ``Column`` is already
attached to a ``table()`` or ``Table``. This accommodates for the change
made in SQLAlchemy issue #5618 which prohibits a ``Column`` from being
associated with multiple ``table()`` objects. This resumes support for
using a ``Column`` inside of an Alembic operation that already refers to a
parent ``table()`` or ``Table`` as well as allows operation objects just
autogenerated to work.
.. change::
:tags: feature, autogenerate
:tickets: 650
Added new hook :paramref:`.EnvironmentContext.configure.include_name`,
which complements the
:paramref:`.EnvironmentContext.configure.include_object` hook by providing
a means of preventing objects of a certain name from being autogenerated
**before** the SQLAlchemy reflection process takes place, and notably
includes explicit support for passing each schema name when
:paramref:`.EnvironmentContext.configure.include_schemas` is set to True.
This is especially important for environments that make use of
:paramref:`.EnvironmentContext.configure.include_schemas` where schemas are
actually databases (e.g. MySQL) in order to prevent reflection sweeps of
the entire server.
.. seealso::
:ref:`autogenerate_include_hooks` - new documentation section
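An env.py sketch of the new hook restricting reflection to specific schemas;
the schema names are hypothetical::
def include_name(name, type_, parent_names):
    if type_ == "schema":
        # with include_schemas=True, each schema name arrives here
        # before any reflection of its contents takes place
        return name in ("public", "reporting")
    return True
context.configure(
    # ... connection / target_metadata as usual ...
    include_schemas=True,
    include_name=include_name,
)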
.. change::
:tags: removed, autogenerate
The long deprecated
:paramref:`.EnvironmentContext.configure.include_symbol` hook is removed.
The :paramref:`.EnvironmentContext.configure.include_object`
and :paramref:`.EnvironmentContext.configure.include_name`
hooks both achieve the goals of this hook.
.. change::
:tags: bug, autogenerate
:tickets: 721
Added rendering for the ``Table.prefixes`` element to autogenerate so that
the rendered Python code includes these directives. Pull request courtesy
Rodrigo Ce Moretto.
.. change::
:tags: bug, batch
:tickets: 761
Added missing "create comment" feature for columns that are altered in
batch migrations.
.. change::
:tags: changed
:tickets: 748
Alembic 1.5.0 now supports **Python 2.7 and Python 3.6 and above**, as well
as **SQLAlchemy 1.3.0 and above**. Support is removed for Python 3
versions prior to 3.6 and SQLAlchemy versions prior to the 1.3 series.
.. change::
:tags: bug, batch
:tickets: 773
Made an adjustment to the PostgreSQL dialect to allow it to work more
effectively in batch mode, where a datatype like Boolean or non-native Enum
that may have embedded rules to generate CHECK constraints will be more
correctly handled in that these constraints usually will not have been
generated on the PostgreSQL backend; previously it would inadvertently
assume they existed unconditionally in a special PG-only "drop constraint"
step.
.. change::
:tags: feature, versioning
:tickets: 757
The revision tree is now checked for cycles and loops between revision
files when the revision environment is loaded up. Scenarios such as a
revision pointing to itself, or a revision that can reach itself via a
loop, are handled and will raise the :class:`.CycleDetected` exception when
the environment is loaded (expressed from the Alembic commandline as a
failure message and nonzero return code). Previously, these situations were
silently ignored up front, and the behavior of revision traversal would
either be silently incorrect, or would produce errors such as
:class:`.RangeNotAncestorError`. Pull request courtesy Koichiro Den.
.. change::
:tags: usecase, commands
Added a ``__main__.py`` file to the alembic package to support invocation
with ``python -m alembic``.
.. change::
:tags: removed, commands
Removed deprecated ``--head_only`` option to the ``alembic current``
command.
.. change::
:tags: removed, operations
Removed legacy parameter names from operations; these have been emitting
warnings since version 0.8. In the case that legacy version files have not
yet been updated, these can be modified directly in order to maintain
compatibility (a before/after sketch follows the list below):
* :meth:`.Operations.drop_constraint` - "type" (use ``"type_"``) and "name"
(use "constraint_name")
* :meth:`.Operations.create_primary_key` - "cols" (use "columns") and
"name" (use "constraint_name")
* :meth:`.Operations.create_unique_constraint` - "name" (use
"constraint_name"), "source" (use "table_name") and "local_cols" (use
"columns")
* :meth:`.Operations.batch_create_unique_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_foreign_key` - "name" (use "constraint_name"),
"source" (use "source_table"), "referent" (use "referent_table")
* :meth:`.Operations.batch_create_foreign_key` - "name" (use
"constraint_name"), "referent" (use "referent_table")
* :meth:`.Operations.create_check_constraint` - "name" (use
"constraint_name"), "source" (use "table_name")
* :meth:`.Operations.batch_create_check_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_index` - "name" (use "index_name")
* :meth:`.Operations.drop_index` - "name" (use "index_name"), "tablename"
(use "table_name")
* :meth:`.Operations.batch_drop_index` - "name" (use "index_name"),
* :meth:`.Operations.create_table` - "name" (use "table_name")
* :meth:`.Operations.drop_table` - "name" (use "table_name")
* :meth:`.Operations.alter_column` - "name" (use "new_column_name")
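A before/after sketch for one of the renames; the constraint and table names
are hypothetical::
from alembic import op
# legacy spelling, removed in 1.5.0:
# op.drop_constraint(name="fk_account_user", table_name="account",
#                    type="foreignkey")
# current spelling:
op.drop_constraint(
    constraint_name="fk_account_user",
    table_name="account",
    type_="foreignkey",
)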
.. changelog::
:version: 1.4.3
:released: September 11, 2020
.. change::
:tags: bug, sqlite, batch
:tickets: 711
Added support to drop named CHECK constraints that are specified as part of
a column, rather than table wide. Previously, only constraints associated
with the table were considered.
.. change::
:tags: bug, ops, mysql
:tickets: 736
Fixed issue where the MySQL dialect would not correctly render the server
default of a column in an alter operation, if the operation were
programmatically generated from an autogenerate pass as it would not
accommodate for the full structure of the DefaultClause construct.
.. change::
:tags: bug, sqlite, batch
:tickets: 697
Fixed issue where the CAST applied to a JSON column when copying a SQLite
table during batch mode would cause the data to be lost, as SQLite's CAST
with JSON appears to convert the data to the value "0". The CAST is now
skipped in a dialect-specific manner, including for JSON columns on SQLite.
Pull request courtesy Sebastián Ramírez.
.. change::
:tags: bug, commands
:tickets: 694
The ``alembic current`` command no longer creates an ``alembic_version``
table in the database if one does not exist already, returning no version
as the current version. This allows checking for migrations in parallel
without introducing race conditions. Pull request courtesy Nikolay
Edigaryev.
.. change::
:tags: bug, batch
Fixed issue where columns in a foreign-key referenced table would be
replaced with null-type columns during a batch operation; while this did
not generally have any side effects, it could theoretically impact a batch
operation that also targets that table directly and also would interfere
with future changes to the ``.append_column()`` method to disallow implicit
replacement of columns.
.. change::
:tags: bug, mssql
:tickets: 716
Fixed issue where the ``mssql_drop_foreign_key=True`` flag on
``op.drop_column`` would lead to an incorrect syntax error due to a typo in
the emitted SQL; the same typo was present in the test as well, so it went
undetected. Pull request courtesy Oleg Shigorin.
.. changelog::
:version: 1.4.2
:released: March 19, 2020
.. change::
:tags: usecase, autogenerate
:tickets: 669
Adjusted autogen comparison to accommodate for backends that support
computed column reflection, dependent on SQLAlchemy version 1.3.16 or
higher. This emits a warning if the SQL expression inside of a
:class:`.Computed` value changes between the metadata and the database, as
these expressions can't be changed without dropping and recreating the
column.
.. change::
:tags: bug, tests
:tickets: 668
Fixed an issue that prevented the test suite from running with the
recently released py.test 5.4.0.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 671
Fixed more false-positive failures produced by the new "compare type" logic
first added in :ticket:`605`, particularly impacting MySQL string types
regarding flags such as "charset" and "collation".
.. change::
:tags: bug, op directives, oracle
:tickets: 670
Fixed issue in Oracle backend where a table RENAME with a schema-qualified
name would include the schema in the "to" portion, which is rejected by
Oracle.
.. changelog::
:version: 1.4.1
:released: March 1, 2020
.. change::
:tags: bug, autogenerate
:tickets: 661
Fixed regression caused by the new "type comparison" logic introduced in
1.4 as part of :ticket:`605` where comparisons of MySQL "unsigned integer"
datatypes would produce false positives, as the regular expression logic
was not correctly parsing the "unsigned" token when MySQL's default display
width would be returned by the database. Pull request courtesy Paul
Becotte.
.. change::
:tags: bug, environment
:tickets: 663
Error message for "path doesn't exist" when loading up script environment
now displays the absolute path. Pull request courtesy Rowan Hart.
.. change::
:tags: bug, autogenerate
:tickets: 654
Fixed regression in 1.4.0 due to :ticket:`647` where unique constraint
comparison with mixed case constraint names while not using a naming
convention would produce false positives during autogenerate.
.. change::
:tags: bug, environment
The check for matched rowcount when the alembic_version table is updated or
deleted from is now conditional based on whether or not the dialect
supports the concept of "rowcount" for UPDATE or DELETE rows matched. Some
third party dialects do not support this concept. Pull request courtesy Ke
Zhu.
.. change::
:tags: bug, operations
:tickets: 655
Fixed long-standing bug where an inline column CHECK constraint would not
be rendered within an "ADD COLUMN" operation. The DDL compiler is now
consulted for inline constraints within the :meth:`.Operations.add_column`
method as is done for regular CREATE TABLE operations.
.. changelog::
:version: 1.4.0
:released: February 4, 2020
.. change::
:tags: change
The internal inspection routines no longer use SQLAlchemy's
``Inspector.from_engine()`` method, which is expected to be deprecated in
1.4. The ``inspect()`` function is now used.
.. change::
:tags: bug, autogenerate
:tickets: 647
Adjusted the unique constraint comparison logic in a similar manner as that
of :ticket:`421` did for indexes in order to take into account SQLAlchemy's
own truncation of long constraint names when a naming convention is in use.
Without this step, a name that is truncated by SQLAlchemy based on a unique
constraint naming convention or hardcoded name will not compare properly.
.. change::
:tags: feature, batch
:tickets: 640
Added new parameters :paramref:`.BatchOperations.add_column.insert_before`,
:paramref:`.BatchOperations.add_column.insert_after` which provide for
establishing the specific position in which a new column should be placed.
Also added :paramref:`.Operations.batch_alter_table.partial_reordering`
which allows the complete set of columns to be reordered when the new table
is created. Both operations apply only to when batch mode is recreating
the whole table using ``recreate="always"``. Thanks to Marcin Szymanski
for assistance with the implementation.
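A hedged sketch of positional column placement under full recreate; the
names are hypothetical::
import sqlalchemy as sa
from alembic import op
with op.batch_alter_table("account", recreate="always") as batch_op:
    batch_op.add_column(
        sa.Column("nickname", sa.String(30)),
        insert_after="name",
    )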
.. change::
:tags: usecase, environment
:tickets: 648
Moved the use of the ``__file__`` attribute at the base of the Alembic
package into the one place that it is specifically needed, which is when
the config attempts to locate the template directory. This helps to allow
Alembic to be fully importable in environments that are using Python
memory-only import schemes. Pull request courtesy layday.
.. change::
:tags: bug, autogenerate
:tickets: 605
A major rework of the "type comparison" logic is in place which changes the
entire approach by which column datatypes are compared. Types are now
compared based on the DDL string generated by the metadata type vs. the
datatype reflected from the database. This means we compare types based on
what would actually render and additionally if elements of the types change
like string length, those changes are detected as well. False positives
like those generated between SQLAlchemy Boolean and MySQL TINYINT should
also be resolved. Thanks very much to Paul Becotte for lots of hard work
and patience on this one.
.. seealso::
:ref:`autogenerate_detects` - updated comments on type comparison
.. changelog::
:version: 1.3.3
:released: January 22, 2020
.. change::
:tags: bug, postgresql
:tickets: 637
Fixed issue where COMMENT directives for PostgreSQL failed to correctly
include an explicit schema name, as well as correct quoting rules for
schema, table, and column names. Pull request courtesy Matthew Sills.
.. change::
:tags: usecase, operations
:tickets: 624
Added support for rendering of "computed" elements on :class:`.Column`
objects, supported in SQLAlchemy via the new :class:`.Computed` element
introduced in version 1.3.11. Pull request courtesy Federico Caselli.
Note that there is currently no support for ALTER COLUMN to add, remove, or
modify the "GENERATED ALWAYS AS" element from a column; at least for
PostgreSQL, it does not seem to be supported by the database. Additionally,
SQLAlchemy does not currently reliably reflect the "GENERATED ALWAYS AS"
phrase from an existing column, so there is also no autogenerate support
for addition or removal of the :class:`.Computed` element to or from an
existing column; there is only support for adding new columns that include
the :class:`.Computed` element. In the case that the :class:`.Computed`
element is removed from the :class:`.Column` object in the table metadata,
PostgreSQL and Oracle currently reflect the "GENERATED ALWAYS AS"
expression as the "server default" which will produce an op that tries to
drop the element as a default.
.. changelog::
:version: 1.3.2
:released: December 16, 2019
.. change::
:tags: bug, api, autogenerate
:tickets: 635
Fixed regression introduced by :ticket:`579` where server default rendering
functions began to require a dialect implementation, however the
:func:`.render_python_code` convenience function did not include one, thus
causing the function to fail when used in a server default context. The
function now accepts a migration context argument and also creates one
against the default dialect if one is not provided.
.. changelog::
:version: 1.3.1
:released: November 13, 2019
.. change::
:tags: bug, mssql
:tickets: 621
Fixed bug in MSSQL dialect where the drop constraint execution steps used
to remove server default or implicit foreign key constraint failed to take
into account the schema name of the target table.
.. changelog::
:version: 1.3.0
:released: October 31, 2019
.. change::
:tags: feature, command
:tickets: 608
Added support for the ALEMBIC_CONFIG environment variable,
which refers to the location of the alembic configuration script,
in lieu of using the -c command line option.
.. change::
:tags: bug, autogenerate
:tickets: 131
Fixed bug in new Variant autogenerate where the order of the arguments to
Variant were mistakenly reversed.
.. change::
:tags: change, compatibility
Some internal modifications have been made to how the names of indexes and
unique constraints work to make use of new functions added in SQLAlchemy
1.4, so that SQLAlchemy has more flexibility over how naming conventions
may be applied to these objects.
.. changelog::
:version: 1.2.1
:released: September 24, 2019
.. change::
:tags: bug, command
:tickets: 601
Reverted the name change of the "revisions" argument to
:func:`.command.stamp` to "revision" as apparently applications are
calling upon this argument as a keyword name. Pull request courtesy
Thomas Bechtold. Special translations are also added to the command
line interface so that it is still known as "revisions" in the CLI.
.. change::
:tags: bug, tests
:tickets: 592
Removed the "test requirements" from "setup.py test", as this command now
only emits a removal error in any case and these requirements are unused.
.. changelog::
:version: 1.2.0
:released: September 20, 2019
.. change::
:tags: feature, command
:tickets: 473
Added new ``--purge`` flag to the ``alembic stamp`` command, which will
unconditionally erase the version table before stamping anything. This is
useful for development where non-existent version identifiers might be left
within the table. Additionally, ``alembic.stamp`` now supports a list of
revision identifiers, which are intended to allow setting up multiple heads
at once. Overall handling of version identifiers within the
``alembic.stamp`` command has been improved with many new tests and
use cases added.
.. change::
:tags: bug, autogenerate
:tickets: 550
Improved the Python rendering of a series of migration operations such that
a single "pass" is rendered for a :class:`.UpgradeOps` or
:class:`.DowngradeOps` based on if no lines of Python code actually
rendered under the operation, rather than whether or not sub-directives
exist. Removed extra "pass" lines that would generate from the
:class:`.ModifyTableOps` directive so that these aren't duplicated under
operation rewriting scenarios.
.. change::
:tags: feature, runtime
:tickets: 123
Added new feature :meth:`.MigrationContext.autocommit_block`, a special
directive which will provide for a non-transactional block inside of a
migration script. The feature requires that the database driver
(e.g. DBAPI) supports the AUTOCOMMIT isolation mode. The directive
also necessarily needs to COMMIT the existing transaction in progress
in order to enter autocommit mode.
.. seealso::
:meth:`.MigrationContext.autocommit_block`
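A sketch of the directive in use, assuming a statement that cannot run
inside a transaction (the PostgreSQL statement shown is illustrative)::
from alembic import op
def upgrade():
    # COMMITs the transaction in progress, runs the block under
    # AUTOCOMMIT isolation, then resumes transactional operation
    with op.get_context().autocommit_block():
        op.execute("ALTER TYPE mood ADD VALUE 'soso'")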
.. change::
:tags: change, py3k
Python 3.4 support is dropped, as the upstream tooling (pip, mysqlclient)
etc are already dropping support for Python 3.4, which itself is no longer
maintained.
.. change::
:tags: usecase, autogenerate
:tickets: 518
Added autogenerate support for :class:`.Column` objects that have
dialect-specific ``**kwargs``, support first added in SQLAlchemy 1.3.
This includes SQLite "on conflict" as well as options used by some
third party dialects.
.. change::
:tags: usecase, autogenerate
:tickets: 131
Added rendering for SQLAlchemy ``Variant`` datatypes, which render as the
base type plus one or more ``.with_variant()`` method calls.
.. change::
:tags: usecase, commands
:tickets: 534
Made the command interface revision lookup behavior more strict in that an
Alembic revision number is only resolved based on partial match rules if
it has at least four characters, to prevent simple typographical issues
from inadvertently running migrations.
.. change::
:tags: feature, commands
:tickets: 307
Added "post write hooks" to revision generation. These allow custom logic
to run after a revision Python script is generated, typically for the
purpose of running code formatters such as "Black" or "autopep8", but may
be used for any arbitrary post-render hook as well, including custom Python
functions or scripts. The hooks are enabled by providing a
``[post_write_hooks]`` section in the alembic.ini file. A single hook
is provided which runs an arbitrary Python executable on the newly
generated revision script, which can be configured to run code formatters
such as Black; full examples are included in the documentation.
.. seealso::
:ref:`post_write_hooks`
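A minimal alembic.ini sketch enabling a "black" hook of the kind described
above; the options shown are illustrative::
[post_write_hooks]
hooks = black
black.type = console_scripts
black.entrypoint = black
black.options = -l 79 REVISION_SCRIPT_FILENAME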
.. change::
:tags: feature, environment
:tickets: 463
Added new flag ``--package`` to ``alembic init``. For environments where
the Alembic migration files and such are within the package tree and
importable as modules, this flag can be specified which will add the
additional ``__init__.py`` files in the version location and the
environment location.
.. change::
:tags: bug, autogenerate
:tickets: 549
Fixed bug where rendering of comment text for table-level comments within
:meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` was not properly quote-escaped
within rendered Python code for autogenerate.
.. change::
:tags: bug, autogenerate
:tickets: 505
Modified the logic of the :class:`.Rewriter` object such that it keeps a
memoization of which directives it has processed, so that it can ensure it
processes a particular directive only once, and additionally fixed
:class:`.Rewriter` so that it functions correctly for multiple-pass
autogenerate schemes, such as the one illustrated in the "multidb"
template. By tracking which directives have been processed, a
multiple-pass scheme which calls upon the :class:`.Rewriter` multiple times
for the same structure as elements are added can work without running
duplicate operations on the same elements more than once.
.. changelog::
:version: 1.1.0
:released: August 26, 2019
.. change::
:tags: change
Alembic 1.1 bumps the minimum version of SQLAlchemy to 1.1. As was the
case before, Python requirements remain at Python 2.7, or Python 3.4 and
above in the 3.x series.
.. change::
:tags: change, internals
The test suite for Alembic now makes use of SQLAlchemy's testing framework
directly. Previously, Alembic had its own version of this framework that
was mostly copied from that of SQLAlchemy to enable testing with older
SQLAlchemy versions. The majority of this code is now removed so that both
projects can leverage improvements from a common testing framework.
.. change::
:tags: bug, commands
:tickets: 562
Fixed bug where the double-percent logic applied to some dialects such as
psycopg2 would be rendered in ``--sql`` mode, by allowing dialect options
to be passed through to the dialect used to generate SQL and then providing
``paramstyle="named"`` so that percent signs need not be doubled. For
users having this issue, existing env.py scripts need to add
``dialect_opts={"paramstyle": "named"}`` to their offline
context.configure(). See the ``alembic/templates/generic/env.py`` template
for an example.
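As a sketch, the offline portion of an env.py adjusted per this change
(surrounding setup elided)::
context.configure(
    url=url,
    target_metadata=target_metadata,
    literal_binds=True,
    dialect_opts={"paramstyle": "named"},
)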
.. change::
:tags: bug, py3k
Fixed use of the deprecated "imp" module, which is used to detect pep3147
availability as well as to locate .pyc files, which started emitting
deprecation warnings during the test suite. The warnings were not being
emitted earlier during the test suite, the change is possibly due to
changes in py.test itself but this is not clear. The check for pep3147 is
set to True for any Python version 3.5 or greater now and importlib is used
when available. Note that some dependencies such as distutils may still be
emitting this warning. Tests are adjusted to accommodate for dependencies
that emit the warning as well.
.. change::
:tags: bug, mysql
:tickets: 594
Fixed issue where emitting a change of column name for MySQL did not
preserve the column comment, even if it were specified as existing_comment.
.. change::
:tags: bug, setup
:tickets: 592
Removed the "python setup.py test" feature in favor of a straight run of
"tox". Per Pypa / pytest developers, "setup.py" commands are in general
headed towards deprecation in favor of tox. The tox.ini script has been
updated such that running "tox" with no arguments will perform a single run
of the test suite against the default installed Python interpreter.
.. seealso::
https://github.com/pypa/setuptools/issues/1684
https://github.com/pytest-dev/pytest/issues/5534
.. change::
:tags: usecase, commands
:tickets: 571
The "alembic init" command will now proceed if the target directory exists
as long as it's still empty. Previously, it would not proceed if the
directory existed. The new behavior is modeled from what git does, to
accommodate for container or other deployments where an Alembic target
directory may need to be already mounted instead of being created with
alembic init. Pull request courtesy Aviskar KC.
.. changelog::
:version: 1.0.11
:released: June 25, 2019
.. change::
:tags: bug, sqlite, autogenerate, batch
:tickets: 579
SQLite server default reflection will ensure parentheses surround a
column default expression that is detected as being a non-constant
expression, such as a ``datetime()`` default, to accommodate for the
requirement that SQL expressions have to be parenthesized when being sent
as DDL. Parentheses are not added to constant expressions to allow for
maximum cross-compatibility with other dialects and existing test suites
(such as Alembic's), which necessarily entails scanning the expression to
detect constant numeric and string values. The logic is added to the
two "reflection->DDL round trip" paths which are currently autogenerate and
batch migration. Within autogenerate, the logic is on the rendering side,
whereas in batch the logic is installed as a column reflection hook.
.. change::
:tags: bug, sqlite, autogenerate
:tickets: 579
Improved SQLite server default comparison to accommodate for a ``text()``
construct that added parenthesis directly vs. a construct that relied
upon the SQLAlchemy SQLite dialect to render the parenthesis, as well
as improved support for various forms of constant expressions such as
values that are quoted vs. non-quoted.
.. change::
:tags: bug, autogenerate
Fixed bug where the "literal_binds" flag was not being set when
autogenerate would create a server default value, meaning server default
comparisons would fail for functions that contained literal values.
.. change::
:tags: bug, mysql
:tickets: 554
Added support for MySQL "DROP CHECK", which is added as of MySQL 8.0.16,
separate from MariaDB's "DROP CONSTRAINT" for CHECK constraints. The MySQL
Alembic implementation now checks for "MariaDB" in server_version_info to
decide which one to use.
.. change::
:tags: bug, mysql, operations
:tickets: 564
Fixed issue where MySQL databases need to use CHANGE COLUMN when altering a
server default of CURRENT_TIMESTAMP, NOW() and probably other functions
that are only usable with DATETIME/TIMESTAMP columns. While MariaDB
supports both CHANGE and ALTER COLUMN in this case, MySQL databases only
support CHANGE. So the new logic is that if the server default change is
against a DateTime-oriented column, the CHANGE format is used
unconditionally, as in the vast majority of cases the server default is to
be CURRENT_TIMESTAMP which may also be potentially bundled with an "ON
UPDATE CURRENT_TIMESTAMP" directive, which SQLAlchemy does not currently
support as a distinct field. The fix additionally improves the server
default comparison logic when the "ON UPDATE" clause is present and
there are parentheses to be adjusted for, as is the case on some MariaDB
versions.
.. change::
:tags: bug, environment
Warnings emitted by Alembic now include a default stack level of 2, and in
some cases it's set to 3, in order to help warnings indicate more closely
where they are originating from. Pull request courtesy Ash Berlin-Taylor.
.. change::
:tags: bug, py3k
:tickets: 563
Replaced the Python compatibility routines for ``getargspec()`` with a fully
vendored version based on ``getfullargspec()`` from Python 3.3.
Originally, Python was emitting deprecation warnings for this function in
Python 3.8 alphas. While this change was reverted, it was observed that
Python 3 implementations for ``getfullargspec()`` are an order of magnitude
slower as of the 3.4 series where it was rewritten against ``Signature``.
While Python plans to improve upon this situation, SQLAlchemy projects for
now are using a simple replacement to avoid any future issues.
.. changelog::
:version: 1.0.10
:released: April 28, 2019
.. change::
:tags: bug, commands
:tickets: 552
Fixed bug introduced in release 0.9.0 where the helptext for commands
inadvertently got expanded to include function docstrings from the
command.py module. The logic has been adjusted to only refer to the first
line(s) preceding the first line break within each docstring, as was the
original intent.
.. change::
:tags: bug, operations, mysql
:tickets: 551
Added an assertion in :meth:`.RevisionMap.get_revisions` and other methods
which ensures revision numbers are passed as strings or collections of
strings. Driver issues, particularly on MySQL, may inadvertently pass
bytes here, which leads to failures later on.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 553
Fixed bug when using the
:paramref:`.EnvironmentContext.configure.compare_server_default` flag set
to ``True`` where a server default that is introduced in the table metadata
on an ``Integer`` column, where there is no existing server default in the
database, would raise a ``TypeError``.
.. changelog::
:version: 1.0.9
:released: April 15, 2019
.. change::
:tags: bug, operations
:tickets: 548
Simplified the internal scheme used to generate the ``alembic.op`` namespace
so that it no longer attempts to generate full method signatures (rather than
generic ``*args, **kw``), as this was not working in most cases anyway; in
rare circumstances it would in fact sporadically have access to the real
argument names and then fail when generating the function due to missing
symbols in the argument signature.
.. changelog::
:version: 1.0.8
:released: March 4, 2019
.. change::
:tags: bug, operations
:tickets: 528
Removed use of deprecated ``force`` parameter for SQLAlchemy quoting
functions as this parameter will be removed in a future release.
Pull request courtesy Parth Shandilya(ParthS007).
.. change::
:tags: bug, autogenerate, postgresql, py3k
:tickets: 541
Fixed issue where server default comparison on the PostgreSQL dialect would
fail for a blank string on Python 3.7 only, due to a change in regular
expression behavior in Python 3.7.
.. changelog::
:version: 1.0.7
:released: January 25, 2019
.. change::
:tags: bug, autogenerate
:tickets: 529
Fixed issue in new comment support where autogenerated Python code
for comments wasn't using ``repr()`` thus causing issues with
quoting. Pull request courtesy Damien Garaud.
.. changelog::
:version: 1.0.6
:released: January 13, 2019
.. change::
:tags: feature, operations
:tickets: 422
Added Table and Column level comments for supported backends.
New methods :meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` are added. New arguments
:paramref:`.Operations.alter_column.comment` and
:paramref:`.Operations.alter_column.existing_comment` are added to
:meth:`.Operations.alter_column`. Autogenerate support is also added
to ensure comment add/drops from tables and columns are generated as well
as that :meth:`.Operations.create_table`, :meth:`.Operations.add_column`
both include the comment field from the source :class:`.Table`
or :class:`.Column` object.
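A minimal sketch of the new directives inside a migration; the table and
column names here are hypothetical::

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # set (or replace) the comment on a table
        op.create_table_comment("account", "user account data")

        # add a comment to a column, preserving its existing type
        op.alter_column(
            "account",
            "name",
            existing_type=sa.String(50),
            comment="the user's full name",
            existing_comment=None,
        )

    def downgrade():
        op.drop_table_comment("account")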
.. changelog::
:version: 1.0.5
:released: November 27, 2018
.. change::
:tags: bug, py3k
:tickets: 507
Resolved remaining Python 3 deprecation warnings, covering
the use of inspect.formatargspec() with a vendored version
copied from the Python standard library, importing
collections.abc above Python 3.3 when testing against abstract
base classes, fixed one occurrence of log.warn(), as well as a few
invalid escape sequences.
.. changelog::
:version: 1.0.4
:released: November 27, 2018
.. change::
:tags: change
Code hosting has been moved to GitHub, at
https://github.com/sqlalchemy/alembic. Additionally, the
main Alembic website documentation URL is now
https://alembic.sqlalchemy.org.
.. changelog::
:version: 1.0.3
:released: November 14, 2018
.. change::
:tags: bug, mssql
:tickets: 516
Fixed regression caused by :ticket:`513`, where the logic to consume
``mssql_include`` was not correctly interpreting the case where the flag
was not present, breaking the ``op.create_index`` directive for SQL Server
as a whole.
.. changelog::
:version: 1.0.2
:released: October 31, 2018
.. change::
:tags: bug, autogenerate
:tickets: 515
The ``system=True`` flag on :class:`.Column`, used primarily in conjunction
with the Postgresql "xmin" column, now renders within the autogenerate
render process, allowing the column to be excluded from DDL. Additionally,
adding a system=True column to a model will produce no autogenerate diff as
this column is implicitly present in the database.
.. change::
:tags: bug, mssql
:tickets: 513
Fixed issue where usage of the SQL Server ``mssql_include`` option within a
:meth:`.Operations.create_index` would raise a KeyError, as the additional
column(s) need to be added to the table object used by the construct
internally.
.. changelog::
:version: 1.0.1
:released: October 17, 2018
.. change::
:tags: bug, commands
:tickets: 497
Fixed an issue where revision descriptions were essentially
being formatted twice. Any revision description that contained
characters like %, when written to stdout, would fail because
the call to config.print_stdout attempted to format any
additional args passed to the function.
The fix now only applies string formatting if any args are provided
along with the output text.
.. change::
:tags: bug, autogenerate
:tickets: 512
Fixed issue where removed method ``union_update()`` was used when a
customized :class:`.MigrationScript` instance included entries in the
``.imports`` data member, raising an AttributeError.
.. changelog::
:version: 1.0.0
:released: July 13, 2018
.. change::
:tags: feature, general
:tickets: 491
For Alembic 1.0, Python 2.6 / 3.3 support is being dropped, allowing a
fixed setup.py to be built as well as universal wheels. Pull request
courtesy Hugo.
.. change::
:tags: feature, general
With the 1.0 release, Alembic's minimum SQLAlchemy support version
moves to 0.9.0, previously 0.7.9.
.. change::
:tags: bug, batch
:tickets: 502
Fixed issue in batch where dropping a primary key column, then adding it
back under the same name but without the primary_key flag, would not remove
it from the existing PrimaryKeyConstraint. If a new PrimaryKeyConstraint
is added, it is used as-is, as was the case before.
.. changelog::
:version: 0.9.10
:released: June 29, 2018
.. change::
:tags: bug, autogenerate
The "op.drop_constraint()" directive will now render using ``repr()`` for
the schema name, in the same way that "schema" renders for all the other op
directives. Pull request courtesy Denis Kataev.
.. change::
:tags: bug, autogenerate
:tickets: 494
Added basic capabilities for external dialects to support rendering of
"nested" types, like arrays, in a manner similar to that of the Postgresql
dialect.
.. change::
:tags: bug, autogenerate
Fixed issue where "autoincrement=True" would not render for a column that
specified it, since as of SQLAlchemy 1.1 this is no longer the default
value for "autoincrement". Note the behavior only takes effect against the
SQLAlchemy 1.1.0 and higher; for pre-1.1 SQLAlchemy, "autoincrement=True"
does not render as was the case before. Pull request courtesy Elad Almos.
.. changelog::
:version: 0.9.9
:released: March 22, 2018
.. change::
:tags: feature, commands
:tickets: 481
Added new flag ``--indicate-current`` to the ``alembic history`` command.
When listing versions, it will include the token "(current)" to indicate
the given version is a current head in the target database. Pull request
courtesy Kazutaka Mise.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 455
The fix for :ticket:`455` in version 0.9.6 involving MySQL server default
comparison was entirely non-functional, as the test itself was also broken
and didn't reveal that it wasn't working. The regular expression to compare
server default values like CURRENT_TIMESTAMP to current_timestamp() is
repaired.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 483
Fixed bug where MySQL server default comparisons were basically not working
at all due to incorrect regexp added in :ticket:`455`. Also accommodates
for MariaDB 10.2 quoting differences in reporting integer based server
defaults.
.. change::
:tags: bug, operations, mysql
:tickets: 487
Fixed bug in ``op.drop_constraint()`` for MySQL where
quoting rules would not be applied to the constraint name.
.. changelog::
:version: 0.9.8
:released: February 16, 2018
.. change::
:tags: bug, runtime
:tickets: 482
Fixed bug where the :meth:`.Script.as_revision_number` method
did not accommodate for the 'heads' identifier, which in turn
caused the :meth:`.EnvironmentContext.get_head_revisions`
and :meth:`.EnvironmentContext.get_revision_argument` methods
to be not usable when multiple heads were present.
The :meth:`.EnvironmentContext.get_head_revisions` method returns
a tuple in all cases as documented.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 478
Fixed bug where autogenerate of :class:`.ExcludeConstraint`
would render a raw quoted name for a Column that has case-sensitive
characters, which when invoked as an inline member of the Table
would produce a stack trace that the quoted name is not found.
An incoming Column object is now rendered as ``sa.column('name')``.
.. change::
:tags: bug, autogenerate
:tickets: 468
Fixed bug where the indexes would not be included in a
migration that was dropping the owning table. The fix
now will also emit DROP INDEX for the indexes ahead of time,
but more importantly will include CREATE INDEX in the
downgrade migration.
.. change::
:tags: bug, postgresql
:tickets: 480
Fixed the autogenerate of the module prefix
when rendering the text_type parameter of
postgresql.HSTORE, in much the same way that
we do for ARRAY's type and JSON's text_type.
.. change::
:tags: bug, mysql
:tickets: 479
Added support for DROP CONSTRAINT to the MySQL Alembic
dialect to support MariaDB 10.2 which now has real
CHECK constraints. Note this change does **not**
add autogenerate support, only support for op.drop_constraint()
to work.
.. changelog::
:version: 0.9.7
:released: January 16, 2018
.. change::
:tags: bug, autogenerate
:tickets: 472
Fixed regression caused by :ticket:`421` which would
cause case-sensitive quoting rules to interfere with the
comparison logic for index names, thus causing indexes to show
as added for indexes that have case-sensitive names. Works with
SQLAlchemy 0.9 and later series.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 461
Fixed bug where autogenerate would produce a DROP statement for the index
implicitly created by a Postgresql EXCLUDE constraint, rather than skipping
it as is the case for indexes implicitly generated by unique constraints.
Makes use of SQLAlchemy 1.0.x's improved "duplicates index" metadata and
requires at least SQLAlchemy version 1.0.x to function correctly.
.. changelog::
:version: 0.9.6
:released: October 13, 2017
.. change::
:tags: bug, commands
:tickets: 458
Fixed a few Python3.6 deprecation warnings by replacing ``StopIteration``
with ``return``, as well as using ``getfullargspec()`` instead of
``getargspec()`` under Python 3.
.. change::
:tags: bug, commands
:tickets: 441
As an addition to the fix for :ticket:`441` in 0.9.5, the ``+`` sign is now
also filtered from migration names, as it likewise breaks due to the relative
migrations feature.
.. change::
:tags: bug, autogenerate
:tickets: 442
Fixed bug expanding upon the fix for
:ticket:`85` which adds the correct module import to the
"inner" type for an ``ARRAY`` type, the fix now accommodates for the
generic ``sqlalchemy.types.ARRAY`` type added in SQLAlchemy 1.1,
rendering the inner type correctly regardless of whether or not the
Postgresql dialect is present.
.. change::
:tags: bug, mysql
:tickets: 455
Fixed bug where server default comparison of CURRENT_TIMESTAMP would fail
on MariaDB 10.2 due to a change in how the function is
represented by the database during reflection.
.. change::
:tags: bug, autogenerate
Fixed bug where comparison of ``Numeric`` types would produce
a difference if the Python-side ``Numeric`` inadvertently specified
a non-None "scale" with a "precision" of None, even though this ``Numeric``
type will pass over the "scale" argument when rendering. Pull request
courtesy Ivan Mmelnychuk.
.. change::
:tags: feature, commands
:tickets: 447
The ``alembic history`` command will now make use of the revision
environment ``env.py`` unconditionally if the ``revision_environment``
configuration flag is set to True. Previously, the environment would
only be invoked if the history specification were against a database-stored
revision token.
.. change::
:tags: bug, batch
:tickets: 457
The name of the temporary table in batch mode is now generated
off of the original table name itself, to avoid conflicts for the
unusual case of multiple batch operations running against the same
database schema at the same time.
.. change::
:tags: bug, autogenerate
:tickets: 456
A :class:`.ForeignKeyConstraint` can now render correctly if the
``link_to_name`` flag is set, as it will not attempt to resolve the name
from a "key" in this case. Additionally, the constraint will render
as-is even if the remote column name isn't present on the referenced
remote table.
.. change::
:tags: bug, runtime, py3k
:tickets: 449
Reworked "sourceless" system to be fully capable of handling any
combination of: Python2/3x, pep3149 or not, PYTHONOPTIMIZE or not,
for locating and loading both env.py files as well as versioning files.
This includes: locating files inside of ``__pycache__`` as well as listing
out version files that might be only in ``versions/__pycache__``, deduplicating
version files that may be in ``versions/__pycache__`` and ``versions/``
at the same time, correctly looking for .pyc or .pyo files based on
whether pep488 is in effect or not. The latest Python3x deprecation warnings
involving importlib are also corrected.
.. changelog::
:version: 0.9.5
:released: August 9, 2017
.. change::
:tags: bug, commands
:tickets: 441
A :class:`.CommandError` is raised if the "--rev-id" passed to the
:func:`.revision` command contains dashes or at-signs, as this interferes
with the command notation used to locate revisions.
.. change::
:tags: bug, postgresql
:tickets: 424
Added support for the dialect-specific keyword arguments
to :meth:`.Operations.drop_index`. This includes support for
``postgresql_concurrently`` and others.
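For example, a hypothetical index being dropped concurrently on
Postgresql::

    from alembic import op

    op.drop_index(
        "ix_account_email",
        table_name="account",
        postgresql_concurrently=True,
    )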
.. change::
:tags: bug, commands
Fixed bug in timezone feature introduced in
:ticket:`425` when the creation
date in a revision file is calculated, to
accommodate for timezone names that contain
mixed-case characters in their name as opposed
to all uppercase. Pull request courtesy Nils
Philippsen.
.. changelog::
:version: 0.9.4
:released: July 31, 2017
.. change::
:tags: bug, runtime
Added an additional attribute to the new
:paramref:`.EnvironmentContext.configure.on_version_apply` API,
:attr:`.MigrationInfo.up_revision_ids`, to accommodate for the uncommon
case of the ``alembic stamp`` command being used to move from multiple
branches down to a common branchpoint; there will be multiple
"up" revisions in this one case.
.. changelog::
:version: 0.9.3
:released: July 6, 2017
.. change::
:tags: feature, runtime
Added a new callback hook
:paramref:`.EnvironmentContext.configure.on_version_apply`,
which allows user-defined code to be invoked each time an individual
upgrade, downgrade, or stamp operation proceeds against a database.
Pull request courtesy John Passaro.
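A minimal sketch of wiring the hook up in ``env.py``; the hook is invoked
with keyword arguments, so a generic ``**kw`` signature is used here to stay
conservative about the exact parameter list::

    from alembic import context

    def on_version_apply(**kw):
        # kw includes, among others, the MigrationContext and a
        # MigrationInfo object describing the step being applied
        print("applying:", kw.get("step"))

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        on_version_apply=on_version_apply,
    )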
.. change:: 433
:tags: bug, autogenerate
:tickets: 433
Fixed bug where autogen comparison of a :class:`.Variant` datatype
would not compare to the dialect level type for the "default"
implementation of the :class:`.Variant`, returning the type as changed
between database and table metadata.
.. change:: 431
:tags: bug, tests
:tickets: 431
Fixed unit tests to run correctly under the SQLAlchemy 1.0.x series
prior to version 1.0.10 where a particular bug involving Postgresql
exclude constraints was fixed.
.. changelog::
:version: 0.9.2
:released: May 18, 2017
.. change:: 429
:tags: bug, mssql
:tickets: 429
Repaired :meth:`.Operations.rename_table` for SQL Server when the
target table is in a remote schema; the schema name is now omitted from
the "new name" argument.
.. change:: 425
:tags: feature, commands
:tickets: 425
Added a new configuration option ``timezone``, a string timezone name
that will be applied to the create date timestamp rendered
inside the revision file as made available to the ``file_template`` used
to generate the revision filename. Note this change adds the
``python-dateutil`` package as a dependency.
.. change:: 421
:tags: bug, autogenerate
:tickets: 421
The autogenerate compare scheme now takes into account the name truncation
rules applied by SQLAlchemy's DDL compiler to the names of the
:class:`.Index` object, when these names are dynamically truncated
due to a too-long identifier name. As the identifier truncation is
deterministic, applying the same rule to the metadata name allows
correct comparison to the database-derived name.
.. change:: 419
:tags: bug, environment
:tickets: 419
A warning is emitted when an object that's not a
:class:`~sqlalchemy.engine.Connection` is passed to
:meth:`.EnvironmentContext.configure`. For the case of a
:class:`~sqlalchemy.engine.Engine` passed, the check for "in transaction"
introduced in version 0.9.0 has been relaxed to work in the case of an
attribute error, as some users appear to be passing an
:class:`~sqlalchemy.engine.Engine` and not a
:class:`~sqlalchemy.engine.Connection`.
.. changelog::
:version: 0.9.1
:released: March 1, 2017
.. change:: 417
:tags: bug, commands
:tickets: 417, 369
An adjustment to the bug fix for :ticket:`369` to accommodate for
env.py scripts that use an enclosing transaction distinct from the
one that the context provides, so that the check for "didn't commit
the transaction" doesn't trigger in this scenario.
.. changelog::
:version: 0.9.0
:released: February 28, 2017
.. change:: 38
:tags: feature, autogenerate
:tickets: 38
The :paramref:`.EnvironmentContext.configure.target_metadata` parameter
may now be optionally specified as a sequence of :class:`.MetaData`
objects instead of a single :class:`.MetaData` object. The
autogenerate process will process the sequence of :class:`.MetaData`
objects in order.
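For example, in ``env.py``; the module paths here are hypothetical::

    from alembic import context
    from myapp.core.models import metadata as core_metadata
    from myapp.plugin.models import metadata as plugin_metadata

    context.configure(
        connection=connection,
        target_metadata=[core_metadata, plugin_metadata],
    )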
.. change:: 369
:tags: bug, commands
:tickets: 369
A :class:`.CommandError` is now raised when a migration file opens
a database transaction and does not close/commit/rollback, when
the backend database or environment options also specify transactional_ddl
is False. When transactional_ddl is not in use, Alembic doesn't
close any transaction so a transaction opened by a migration file
will cause the following migrations to fail to apply.
.. change:: 413
:tags: bug, autogenerate, mysql
:tickets: 413
The ``autoincrement=True`` flag is now rendered within the
:meth:`.Operations.alter_column` operation if the source column indicates
that this flag should be set to True. The behavior is sensitive to
the SQLAlchemy version in place, as the "auto" default option is new
in SQLAlchemy 1.1. When the source column indicates autoincrement
as True or "auto", the flag will render as True if the original column
contextually indicates that it should have "autoincrement" keywords,
and when the source column explicitly sets it to False, this is also
rendered. The behavior is intended to preserve the AUTO_INCREMENT flag
on MySQL as the column is fully recreated on this backend. Note that this
flag does **not** support alteration of a column's "autoincrement" status,
as this is not portable across backends.
.. change:: 411
:tags: bug, postgresql
:tickets: 411
Fixed bug where Postgresql JSON/JSONB types rendered on SQLAlchemy
1.1 would render the "astext_type" argument which defaults to
the ``Text()`` type without the module prefix, similarly to the
issue with ARRAY fixed in :ticket:`85`.
.. change:: 85
:tags: bug, postgresql
:tickets: 85
Fixed bug where Postgresql ARRAY type would not render the import prefix
for the inner type; additionally, user-defined renderers take place
for the inner type as well as the outer type. Pull request courtesy
Paul Brackin.
.. change:: process_revision_directives_command
:tags: feature, autogenerate
Added a keyword argument ``process_revision_directives`` to the
:func:`.command.revision` API call. This function acts in the
same role as the environment-level
:paramref:`.EnvironmentContext.configure.process_revision_directives`,
and allows API use of the
command to drop in an ad-hoc directive process function. This
function can be used among other things to place a complete
:class:`.MigrationScript` structure in place.
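A minimal sketch of API use; the hook receives the migration context, the
revision identifier argument, and a list of directives which it may modify
in place::

    from alembic import command
    from alembic.config import Config

    def process_revision_directives(context, revision, directives):
        script = directives[0]
        # e.g. inspect or rewrite script.upgrade_ops here

    config = Config("alembic.ini")
    command.revision(
        config,
        message="my change",
        process_revision_directives=process_revision_directives,
    )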
.. change:: 412
:tags: feature, postgresql
:tickets: 412
Added support for Postgresql EXCLUDE constraints, including the
operation directive :meth:`.Operations.create_exclude_constraints`
as well as autogenerate render support for the ``ExcludeConstraint``
object as present in a ``Table``. Autogenerate detection for an EXCLUDE
constraint added or removed to/from an existing table is **not**
implemented as the SQLAlchemy Postgresql dialect does not yet support
reflection of EXCLUDE constraints.
Additionally, unknown constraint types now warn when
encountered within an autogenerate action rather than raise.
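A sketch of the new directive; the constraint, table and column names here
are hypothetical::

    from alembic import op

    op.create_exclude_constraint(
        "user_excl",
        "user",
        ("period", "&&"),
        where=("period != 'empty'"),
    )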
.. change:: fk_schema_compare
:tags: bug, operations
Fixed bug in :func:`.ops.create_foreign_key` where the internal table
representation would not be created properly if the foreign key referred
to a table in a different schema of the same name. Pull request
courtesy Konstantin Lebedev.
.. changelog::
:version: 0.8.10
:released: January 17, 2017
.. change:: 406
:tags: bug, versioning
:tickets: 406
The alembic_version table, when initially created, now establishes a
primary key constraint on the "version_num" column, to suit database
engines that don't support tables without primary keys. This behavior
can be controlled using the parameter
:paramref:`.EnvironmentContext.configure.version_table_pk`. Note that
this change only applies to the initial creation of the alembic_version
table; it does not impact any existing alembic_version table already
present.
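For example, to disable the new primary key constraint from ``env.py``::

    from alembic import context

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        version_table_pk=False,
    )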
.. change:: 402
:tags: bug, batch
:tickets: 402
Fixed bug where doing ``batch_op.drop_constraint()`` against the
primary key constraint would fail to remove the "primary_key" flag
from the column, resulting in the constraint being recreated.
.. change:: update_uq_dedupe
:tags: bug, autogenerate, oracle
Adjusted the logic originally added for :ticket:`276` that detects MySQL
unique constraints which are actually unique indexes to be generalized
for any dialect that has this behavior, for SQLAlchemy version 1.0 and
greater. This is to allow for upcoming SQLAlchemy support for unique
constraint reflection for Oracle, which also has no dedicated concept of
"unique constraint" and instead establishes a unique index.
.. change:: 356
:tags: bug, versioning
:tickets: 356
Added a file ignore for Python files of the form ``.#<name>.py``,
which are generated by the Emacs editor. Pull request courtesy
Markus Mattes.
.. changelog::
:version: 0.8.9
:released: November 28, 2016
.. change:: 393
:tags: bug, autogenerate
:tickets: 393
Adjustment to the "please adjust!" comment in the script.py.mako
template so that the generated comment starts with a single pound
sign, appeasing flake8.
.. change::
:tags: bug, batch
:tickets: 391
Batch mode will not use CAST() to copy data if ``type_`` is given, however
the basic type affinity matches that of the existing type. This to
avoid SQLite's CAST of TIMESTAMP which results in truncation of the
data, in those cases where the user needs to add redundant ``type_`` for
other reasons.
.. change::
:tags: bug, autogenerate
:tickets: 393
Continued pep8 improvements by adding appropriate whitespace in
the base template for generated migrations. Pull request courtesy
Markus Mattes.
.. change::
:tags: bug, revisioning
Added an additional check when reading in revision files to detect
if the same file is being read twice; this can occur if the same directory
or a symlink equivalent is present more than once in version_locations.
A warning is now emitted and the file is skipped. Pull request courtesy
Jiri Kuncar.
.. change::
:tags: bug, autogenerate
:tickets: 395
Fixed bug where usage of a custom TypeDecorator which returns a
per-dialect type via :meth:`.TypeDecorator.load_dialect_impl` that differs
significantly from the default "impl" for the type decorator would fail
to compare correctly during autogenerate.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 392
Fixed bug in Postgresql "functional index skip" behavior where a
functional index that ended in ASC/DESC wouldn't be detected as something
we can't compare in autogenerate, leading to duplicate definitions
in autogenerated files.
.. change::
:tags: bug, versioning
Fixed bug where the "base" specifier, as in "base:head", could not
be used explicitly when ``--sql`` mode was present.
.. changelog::
:version: 0.8.8
:released: September 12, 2016
.. change::
:tags: autogenerate
The imports in the default script.py.mako are now at the top
so that flake8 editors don't complain by default. PR courtesy
Guilherme Mansur.
.. change::
:tags: feature, operations, postgresql
:tickets: 292
Added support for the USING clause to the ALTER COLUMN operation
for Postgresql. Support is via the
:paramref:`.op.alter_column.postgresql_using`
parameter. Pull request courtesy Frazer McLean.
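For example, converting a string column to an integer; the table and column
names here are hypothetical::

    from alembic import op
    import sqlalchemy as sa

    op.alter_column(
        "account",
        "status",
        type_=sa.Integer(),
        existing_type=sa.String(10),
        postgresql_using="status::integer",
    )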
.. change::
:tags: feature, autogenerate
Autogenerate with type comparison enabled will pick up on the timezone
setting changing between DateTime types. Pull request courtesy
David Szotten.
.. changelog::
:version: 0.8.7
:released: July 26, 2016
.. change::
:tags: bug, versioning
:tickets: 336
Fixed bug where upgrading to the head of a branch which is already
present would fail, only if that head were also the dependency
of a different branch that is also upgraded, as the revision system
would see this as trying to go in the wrong direction. The check
here has been refined to distinguish between same-branch revisions
out of order vs. movement along sibling branches.
.. change::
:tags: bug, versioning
:tickets: 379
Adjusted the version traversal on downgrade
such that we can downgrade to a version that is a dependency for
a version in a different branch, *without* needing to remove that
dependent version as well. Previously, the target version would be
seen as a "merge point" for it's normal up-revision as well as the
dependency. This integrates with the changes for :ticket:`377`
and :ticket:`378` to improve treatment of branches with dependencies
overall.
.. change::
:tags: bug, versioning
:tickets: 377
Fixed bug where a downgrade to a version that is also a dependency
to a different branch would fail, as the system attempted to treat
this as an "unmerge" of a merge point, when in fact it doesn't have
the other side of the merge point available for update.
.. change::
:tags: bug, versioning
:tickets: 378
Fixed bug where the "alembic current" command wouldn't show a revision
as a current head if it were also a dependency of a version in a
different branch that's also applied. Extra logic is added to
extract "implied" versions of different branches from the top-level
versions listed in the alembic_version table.
.. change::
:tags: bug, versioning
Fixed bug where a repr() or str() of a Script object would fail
if the script had multiple dependencies.
.. change::
:tags: bug, autogenerate
Fixed bug in autogen where if the DB connection sends the default
schema as "None", this "None" would be removed from the list of
schemas to check if include_schemas were set. This could possibly
impact using include_schemas with SQLite.
.. change::
:tags: bug, batch
Small adjustment made to the batch handling for reflected CHECK
constraints to accommodate for SQLAlchemy 1.1 now reflecting these.
Batch mode still does not support CHECK constraints from the reflected
table as these can't be easily differentiated from the ones created
by types such as Boolean.
.. changelog::
:version: 0.8.6
:released: April 14, 2016
.. change::
:tags: bug, commands
:tickets: 367
Errors which occur within the Mako render step are now intercepted
and raised as CommandErrors like other failure cases; the Mako
exception itself is written using template-line formatting to
a temporary file which is named in the exception message.
.. change::
:tags: bug, postgresql
:tickets: 365
Added a fix to Postgresql server default comparison which first checks
if the text of the default is identical to the original, before attempting
to actually run the default. This accommodates for default-generation
functions that generate a new value each time such as a uuid function.
.. change::
:tags: bug, batch
:tickets: 361
Fixed bug introduced by the fix for :ticket:`338` in version 0.8.4
where a server default could no longer be dropped in batch mode.
Pull request courtesy Martin Domke.
.. change::
:tags: bug, batch, mssql
Fixed bug where SQL Server arguments for drop_column() would not
be propagated when running under a batch block. Pull request
courtesy Michal Petrucha.
.. changelog::
:version: 0.8.5
:released: March 9, 2016
.. change::
:tags: bug, autogenerate
:tickets: 335
Fixed bug where the columns rendered in a ``PrimaryKeyConstraint``
in autogenerate would inappropriately render the "key" of the
column, not the name. Pull request courtesy Jesse Dhillon.
.. change::
:tags: bug, batch
:tickets: 354
Repaired batch migration support for "schema" types which generate
constraints, in particular the ``Boolean`` datatype which generates
a CHECK constraint. Previously, an alter column operation with this
type would fail to correctly accommodate for the CHECK constraint
on change both from and to this type. In the former case the operation
would fail entirely, in the latter, the CHECK constraint would
not get generated. Both of these issues are repaired.
.. change::
:tags: bug, mysql
:tickets: 355
Changing a schema type such as ``Boolean`` to a non-schema type would
emit a drop constraint operation which emits ``NotImplementedError`` for
the MySQL dialect. This drop constraint operation is now skipped when
the constraint originates from a schema type.
.. changelog::
:version: 0.8.4
:released: December 15, 2015
.. change::
:tags: feature, versioning
A major improvement to the hash id generation function, which for some
reason used an awkward arithmetic formula against uuid4() that produced
values that tended to start with the digits 1-4. Replaced with a
simple substring approach which provides an even distribution. Pull
request courtesy Antti Haapala.
.. change::
:tags: feature, autogenerate
Added an autogenerate renderer for the :class:`.ExecuteSQLOp` operation
object; only renders if given a plain SQL string, otherwise raises
NotImplementedError. Can be of help with custom autogenerate
sequences that includes straight SQL execution. Pull request courtesy
Jacob Magnusson.
.. change::
:tags: bug, batch
:tickets: 345
Batch mode generates a FOREIGN KEY constraint that is self-referential
using the ultimate table name, rather than ``_alembic_batch_temp``.
When the table is renamed from ``_alembic_batch_temp`` back to the
original name, the FK now points to the right name. This
will **not** work if referential integrity is being enforced (e.g. SQLite
"PRAGMA FOREIGN_KEYS=ON") since the original table is dropped and
the new table then renamed to that name, however this is now consistent
with how foreign key constraints on **other** tables already operate
with batch mode; these don't support batch mode if referential integrity
is enabled in any case.
.. change::
:tags: bug, autogenerate
:tickets: 341
Added a type-level comparator that distinguishes :class:`.Integer`,
:class:`.BigInteger`, and :class:`.SmallInteger` types and
dialect-specific types; these all have "Integer" affinity so previously
all compared as the same.
.. change::
:tags: bug, batch
:tickets: 338
Fixed bug where the ``server_default`` parameter of ``alter_column()``
would not function correctly in batch mode.
.. change::
:tags: bug, autogenerate
:tickets: 337
Adjusted the rendering for index expressions such that a :class:`.Column`
object present in the source :class:`.Index` will not be rendered
as table-qualified; e.g. the column name will be rendered alone.
Table-qualified names here were failing on systems such as Postgresql.
.. changelog::
:version: 0.8.3
:released: October 16, 2015
.. change::
:tags: bug, autogenerate
:tickets: 332
Fixed an 0.8 regression whereby the "imports" dictionary member of
the autogen context was removed; this collection is documented in the
"render custom type" documentation as a place to add new imports.
The member is now known as
:attr:`.AutogenContext.imports` and the documentation is repaired.
.. change::
:tags: bug, batch
:tickets: 333
Fixed bug in batch mode where a table that had pre-existing indexes
would create the same index on the new table with the same name,
which on SQLite produces a naming conflict as index names are in a
global namespace on that backend. Batch mode now defers the production
of both existing and new indexes until after the entire table transfer
operation is complete, which also means those indexes no longer take
effect during the INSERT from SELECT section as well; the indexes
are applied in a single step afterwards.
.. change::
:tags: bug, tests
Added "pytest-xdist" as a tox dependency, so that the -n flag
in the test command works if this is not already installed.
Pull request courtesy Julien Danjou.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 324
Fixed issue in PG server default comparison where model-side defaults
configured with Python unicode literals would leak the "u" character
from a ``repr()`` into the SQL used for comparison, creating an invalid
SQL expression, as the server-side comparison feature in PG currently
repurposes the autogenerate Python rendering feature to get a quoted
version of a plain string default.
.. changelog::
:version: 0.8.2
:released: August 25, 2015
.. change::
:tags: bug, autogenerate
:tickets: 321
Added workaround in new foreign key option detection feature for
MySQL's consideration of the "RESTRICT" option being the default,
for which no value is reported from the database; the MySQL impl now
corrects for when the model reports RESTRICT but the database reports
nothing. A similar rule is in the default FK comparison to accommodate
for the default "NO ACTION" setting being present in the model but not
necessarily reported by the database, or vice versa.
.. changelog::
:version: 0.8.1
:released: August 22, 2015
.. change::
:tags: feature, autogenerate
A custom :paramref:`.EnvironmentContext.configure.process_revision_directives`
hook can now generate op directives within the :class:`.UpgradeOps`
and :class:`.DowngradeOps` containers that will be generated as Python
code even when the ``--autogenerate`` flag is False; provided that
``revision_environment=True``, the full render operation will be run
even in "offline" mode.
.. change::
:tags: bug, autogenerate
Repaired the render operation for the :class:`.ops.AlterColumnOp` object
to succeed when the "existing_type" field was not present.
.. change::
:tags: bug, autogenerate
:tickets: 318
Fixed a regression in 0.8 whereby the "multidb" environment template
failed to produce independent migration script segments for the
output template. This was due to the reorganization of the script
rendering system for 0.8. To accommodate this change, the
:class:`.MigrationScript` structure will in the case of multiple
calls to :meth:`.MigrationContext.run_migrations` produce lists
for the :attr:`.MigrationScript.upgrade_ops` and
:attr:`.MigrationScript.downgrade_ops` attributes; each :class:`.UpgradeOps`
and :class:`.DowngradeOps` instance keeps track of its own
``upgrade_token`` and ``downgrade_token``, and each are rendered
individually.
.. seealso::
:ref:`autogen_customizing_multiengine_revision` - additional detail
on the workings of the
:paramref:`.EnvironmentContext.configure.process_revision_directives`
parameter when multiple calls to :meth:`.MigrationContext.run_migrations`
are made.
.. change::
:tags: feature, autogenerate
:tickets: 317
Implemented support for autogenerate detection of changes in the
``ondelete``, ``onupdate``, ``initially`` and ``deferrable``
attributes of :class:`.ForeignKeyConstraint` objects on
SQLAlchemy backends that support these on reflection
(as of SQLAlchemy 1.0.8 currently Postgresql for all four,
MySQL for ``ondelete`` and ``onupdate`` only). A constraint object
that modifies these values will be reported as a "diff" and come out
as a drop/create of the constraint with the modified values.
The fields are ignored for backends which don't reflect these
attributes (as of SQLA 1.0.8 this includes SQLite, Oracle, SQL Server,
others).
.. changelog::
:version: 0.8.0
:released: August 12, 2015
.. change::
:tags: bug, batch
:tickets: 315
Fixed bug in batch mode where the ``batch_op.create_foreign_key()``
directive would be incorrectly rendered with the source table and
schema names in the argument list.
.. change::
:tags: feature, commands
Added new command ``alembic edit``. This command takes the same
arguments as ``alembic show``, however runs the target script
file within $EDITOR. Makes use of the ``python-editor`` library
in order to facilitate the handling of $EDITOR with reasonable
default behaviors across platforms. Pull request courtesy
Michel Albert.
.. change::
:tags: feature, commands
:tickets: 311
Added new multiple-capable argument ``--depends-on`` to the
``alembic revision`` command, allowing ``depends_on`` to be
established at the command line level rather than having to edit
the file after the fact. ``depends_on`` identifiers may also be
specified as branch names at the command line or directly within
the migration file. The values may be specified as partial
revision numbers from the command line which will be resolved to
full revision numbers in the output file.
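Within the generated file, the identifier appears alongside the other
revision identifiers; the values here are hypothetical::

    # revision identifiers, used by Alembic.
    revision = "ae1027a6acf"
    down_revision = "1975ea83b712"
    depends_on = "55af2cb1c267"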
.. change::
:tags: change, operations
A range of positional argument names have been changed to be
clearer and more consistent across methods within the
:class:`.Operations` namespace. The most prevalent form of name change
is that the descriptive names ``constraint_name`` and ``table_name``
are now used where previously the name ``name`` would be used.
This is in support of the newly modularized and extensible system of
operation objects in :mod:`alembic.operations.ops`.
An argument translation layer is in place
across the ``alembic.op`` namespace that will ensure that named
argument calling styles that use the old names will continue to
function by transparently translating to the new names,
also emitting a warning. This, along with the fact that these
arguments are positional in any case and aren't normally
passed with an explicit name, should ensure that the
overwhelming majority of applications should be unaffected by this
change. The *only* applications that are impacted are those that:
1. use the :class:`.Operations` object directly in some way, rather
than calling upon the ``alembic.op`` namespace, and
2. invoke the methods on :class:`.Operations` using named keyword
arguments for positional arguments like ``table_name``,
``constraint_name``, etc., which commonly were named ``name``
as of 0.7.6.
3. any application that is using named keyword arguments in place
of positional argument for the recently added
:class:`.BatchOperations` object may also be affected.
The naming changes are documented as "versionchanged" for 0.8.0:
* :meth:`.BatchOperations.create_check_constraint`
* :meth:`.BatchOperations.create_foreign_key`
* :meth:`.BatchOperations.create_index`
* :meth:`.BatchOperations.create_unique_constraint`
* :meth:`.BatchOperations.drop_constraint`
* :meth:`.BatchOperations.drop_index`
* :meth:`.Operations.create_check_constraint`
* :meth:`.Operations.create_foreign_key`
* :meth:`.Operations.create_primary_key`
* :meth:`.Operations.create_index`
* :meth:`.Operations.create_table`
* :meth:`.Operations.create_unique_constraint`
* :meth:`.Operations.drop_constraint`
* :meth:`.Operations.drop_index`
* :meth:`.Operations.drop_table`
.. change::
:tags: feature, tests
The default test runner via "python setup.py test" is now py.test.
nose still works via run_tests.py.
.. change::
:tags: feature, operations
:tickets: 302
The internal system for Alembic operations has been reworked to now
build upon an extensible system of operation objects. New operations
can be added to the ``op.`` namespace, including that they are
available in custom autogenerate schemes.
.. seealso::
:ref:`operation_plugins`
.. change::
:tags: feature, autogenerate
:tickets: 301, 306
The internal system for autogenerate has been reworked to build upon
the extensible system of operation objects present in
:ticket:`302`. As part of this change, autogenerate now produces
a full object graph representing a list of migration scripts to
be written as well as operation objects that will render all the
Python code within them; a new hook
:paramref:`.EnvironmentContext.configure.process_revision_directives`
allows end-user code to fully customize what autogenerate will do,
including not just full manipulation of the Python steps to take
but also what file or files will be written and where. Additionally,
autogenerate is now extensible as far as database objects compared
and rendered into scripts; any new operation directive can also be
registered into a series of hooks that allow custom database/model
comparison functions to run as well as to render new operation
directives into autogenerate scripts.
.. seealso::
:ref:`alembic.autogenerate.toplevel`
.. change::
:tags: bug, versioning
:tickets: 314
Fixed bug where in the erroneous case that alembic_version contains
duplicate revisions, some commands would fail to process the
version history correctly and end up with a KeyError. The fix
allows the versioning logic to proceed, however a clear error is
emitted later when attempting to update the alembic_version table.
.. changelog::
:version: 0.7.7
:released: July 22, 2015
.. change::
:tags: bug, versioning
:tickets: 310
Fixed critical issue where a complex series of branches/merges would
bog down the iteration algorithm working over redundant nodes for
millions of cycles. An internal adjustment has been
made so that duplicate nodes are skipped within this iteration.
.. change::
:tags: feature, batch
:tickets: 305
Implemented support for :meth:`.BatchOperations.create_primary_key`
and :meth:`.BatchOperations.create_check_constraint`. Additionally,
table keyword arguments are copied from the original reflected table,
such as the "mysql_engine" keyword argument.
.. change::
:tags: bug, environment
:tickets: 300
The :meth:`.MigrationContext.stamp` method, added as part of the
versioning refactor in 0.7 as a more granular version of
:func:`.command.stamp`, now includes the "create the alembic_version
table if not present" step in the same way as the command version,
which was previously omitted.
.. change::
:tags: bug, autogenerate
:tickets: 298
Fixed bug where foreign key options including "onupdate",
"ondelete" would not render within the ``op.create_foreign_key()``
directive, even though they render within a full
``ForeignKeyConstraint`` directive.
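For example; the constraint, table and column names here are hypothetical::

    from alembic import op

    op.create_foreign_key(
        "fk_user_address",
        "user",
        "address",
        ["address_id"],
        ["id"],
        onupdate="CASCADE",
        ondelete="CASCADE",
    )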
.. change::
:tags: bug, tests
Repaired warnings that occur when running unit tests against
SQLAlchemy 1.0.5 or greater involving the "legacy_schema_aliasing"
flag.
.. changelog::
:version: 0.7.6
:released: May 5, 2015
.. change::
:tags: feature, versioning
:tickets: 297
Fixed bug where the case of multiple mergepoints that all
have the identical set of ancestor revisions would fail to be
upgradable, producing an assertion failure. Merge points were
previously assumed to always require at least an UPDATE in
alembic_revision from one of the previous revs to the new one,
however in this case, if one of the mergepoints has already
been reached, the remaining mergepoints have no row to UPDATE therefore
they must do an INSERT of their target version.
.. change::
:tags: feature, autogenerate
:tickets: 296
Added support for type comparison functions to be not just per
environment, but also present on the custom types themselves, by
supplying a method ``compare_against_backend``.
Added a new documentation section :ref:`compare_types` describing
type comparison fully.
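A minimal sketch of the per-type hook; returning ``None`` defers to the
default comparison logic::

    import sqlalchemy.types as types

    class MySpecialType(types.TypeDecorator):
        impl = types.String

        def compare_against_backend(self, dialect, conn_type):
            # return True if equivalent to the reflected conn_type,
            # False if changed, or None to use the default comparison
            return isinstance(conn_type, types.String)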
.. change::
:tags: feature, operations
:tickets: 255
Added a new option
:paramref:`.EnvironmentContext.configure.literal_binds`, which
will pass the ``literal_binds`` flag into the compilation of SQL
constructs when using "offline" mode. This has the effect that
SQL objects like inserts, updates, deletes as well as textual
statements sent using ``text()`` will be compiled such that the dialect
will attempt to render literal values "inline" automatically.
Only a subset of types is typically supported; the
:meth:`.Operations.inline_literal` construct remains as the construct
used to force a specific literal representation of a value.
The :paramref:`.EnvironmentContext.configure.literal_binds` flag
is added to the "offline" section of the ``env.py`` files generated
in new environments.
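As rendered in the offline section of a newly generated ``env.py``::

    def run_migrations_offline():
        context.configure(
            url=url,
            target_metadata=target_metadata,
            literal_binds=True,
        )
        with context.begin_transaction():
            context.run_migrations()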
.. change::
:tags: bug, batch
:tickets: 289
Fully implemented the
:paramref:`~.Operations.batch_alter_table.copy_from` parameter for
batch mode, which previously was not functioning. This allows
"batch mode" to be usable in conjunction with ``--sql``.
.. change::
:tags: bug, batch
:tickets: 287
Repaired support for the :meth:`.BatchOperations.create_index`
directive, which was mis-named internally such that the operation
within a batch context could not proceed. The create index
operation will proceed as part of a larger "batch table recreate"
operation only if
:paramref:`~.Operations.batch_alter_table.recreate` is set to
"always", or if the batch operation includes other instructions that
require a table recreate.
.. changelog::
:version: 0.7.5
:released: March 19, 2015
.. change::
:tags: bug, autogenerate
:tickets: 266
The ``--autogenerate`` option is not valid when used in conjunction
with "offline" mode, e.g. ``--sql``. This now raises a ``CommandError``,
rather than failing more deeply later on. Pull request courtesy
Johannes Erdfelt.
.. change::
:tags: bug, operations, mssql
:tickets: 284
Fixed bug where the mssql DROP COLUMN directive failed to include
modifiers such as "schema" when emitting the DDL.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 282
Postgresql "functional" indexes are necessarily skipped from the
autogenerate process, as the SQLAlchemy backend currently does not
support reflection of these structures. A warning is emitted
both from the SQLAlchemy backend as well as from the Alembic
backend for Postgresql when such an index is detected.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 276
Fixed bug where MySQL backend would report dropped unique indexes
and/or constraints as both at the same time. This is because
MySQL doesn't actually have a "unique constraint" construct that
reports differently than a "unique index", so it is present in both
lists. The net effect though is that the MySQL backend will report
a dropped unique index/constraint as an index in cases where the object
was first created as a unique constraint, if no other information
is available to make the decision. This differs from other backends
like Postgresql which can report on unique constraints and
unique indexes separately.
.. change::
:tags: bug, commands
:tickets: 269
Fixed bug where using a partial revision identifier as the
"starting revision" in ``--sql`` mode in a downgrade operation
would fail to resolve properly.
As a side effect of this change, the
:meth:`.EnvironmentContext.get_starting_revision_argument`
method will return the "starting" revision in its originally-
given "partial" form in all cases, whereas previously when
running within the :meth:`.command.stamp` command, it would have
been resolved to a full number before passing it to the
:class:`.EnvironmentContext`. The resolution of this value to
a real revision number has basically been moved to a more fundamental
level within the offline migration process.
.. change::
:tags: feature, commands
Added a new feature :attr:`.Config.attributes`, to help with the use
case of sharing state such as engines and connections on the outside
with a series of Alembic API calls; also added a new cookbook section
to describe this simple but pretty important use case.
.. seealso::
:ref:`connection_sharing`
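A minimal sketch of the pattern, assuming an ``engine`` created elsewhere by
the application; ``env.py`` would then check
``config.attributes.get("connection", None)`` before creating its own
connectivity::

    from alembic import command
    from alembic.config import Config

    config = Config("alembic.ini")
    with engine.connect() as connection:
        config.attributes["connection"] = connection
        command.upgrade(config, "head")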
.. change::
:tags: feature, environment
The format of the default ``env.py`` script has been refined a bit;
it now uses context managers not only for the scope of the transaction,
but also for connectivity from the starting engine. The engine is also
now called a "connectable" in support of the use case of an external
connection being passed in.
.. change::
:tags: feature, versioning
:tickets: 267
Added support for "alembic stamp" to work when given "heads" as an
argument, when multiple heads are present.
.. changelog::
:version: 0.7.4
:released: January 12, 2015
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 241
Repaired issue where a server default specified without ``text()``
that represented a numeric or floating point (e.g. with decimal places)
value would fail in the Postgresql-specific check for "compare server
default"; as PG accepts the value with quotes in the table specification,
it's still valid. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug, autogenerate
:tickets: 259
The rendering of a :class:`~sqlalchemy.schema.ForeignKeyConstraint`
will now ensure that the names of the source and target columns are
the database-side name of each column, and not the value of the
``.key`` attribute as may be set only on the Python side.
This is because Alembic generates the DDL for constraints
as standalone objects without the need to actually refer to an in-Python
:class:`~sqlalchemy.schema.Table` object, so there's no step that
would resolve these Python-only key names to database column names.
.. change::
:tags: bug, autogenerate
:tickets: 260
Fixed bug in foreign key autogenerate where if the in-Python table
used custom column keys (e.g. using the ``key='foo'`` kwarg to
``Column``), the comparison of existing foreign keys to those specified
in the metadata would fail, as the reflected table would not have
these keys available which to match up. Foreign key comparison for
autogenerate now ensures it's looking at the database-side names
of the columns in all cases; this matches the same functionality
within unique constraints and indexes.
.. change::
:tags: bug, autogenerate
:tickets: 261
Fixed issue in autogenerate type rendering where types that belong
to modules that have the name "sqlalchemy" in them would be mistaken
as being part of the ``sqlalchemy.`` namespace. Pull req courtesy
Bartosz Burclaf.
.. changelog::
:version: 0.7.3
:released: December 30, 2014
.. change::
:tags: bug, versioning
:tickets: 258
Fixed regression in new versioning system where upgrade / history
operation would fail on AttributeError if no version files were
present at all.
.. changelog::
:version: 0.7.2
:released: December 18, 2014
.. change::
:tags: bug, sqlite, autogenerate
Adjusted the SQLite backend regarding autogen of unique constraints
to work fully with the current SQLAlchemy 1.0, which now will report
on UNIQUE constraints that have no name.
.. change::
:tags: bug, batch
:tickets: 254
Fixed bug in batch where if the target table contained multiple
foreign keys to the same target table, the batch mechanics would
fail with a "table already exists" error. Thanks for the help
on this from Lucas Kahlert.
.. change::
:tags: bug, mysql
:tickets: 251
Fixed an issue where the MySQL routine to skip foreign-key-implicit
indexes would also catch unnamed unique indexes, as they would be
named after the column and look like the FK indexes. Pull request
courtesy Johannes Erdfelt.
.. change::
:tags: bug, mssql, oracle
:tickets: 253
Repaired a regression in both the MSSQL and Oracle dialects whereby
the overridden ``_exec()`` method failed to return a value, as is
needed now in the 0.7 series.
.. changelog::
:version: 0.7.1
:released: December 3, 2014
.. change::
:tags: bug, batch
The ``render_as_batch`` flag was inadvertently hardcoded to ``True``,
so all autogenerate runs were emitting batch mode; this has been
fixed so that batch mode is again used only when selected in env.py.
.. change::
:tags: feature, autogenerate
:tickets: 178
Support for autogenerate of FOREIGN KEY constraints has been added.
These are delivered within the autogenerate process in the same
manner as UNIQUE constraints, including ``include_object`` support.
Big thanks to Ann Kamyshnikova for doing the heavy lifting here.
.. change::
:tags: feature, batch
Added :paramref:`~.Operations.batch_alter_table.naming_convention`
argument to :meth:`.Operations.batch_alter_table`, as this is necessary
in order to drop foreign key constraints; these are often unnamed
on the target database, and in the case that they are named, SQLAlchemy
is as of the 0.9 series not including these names yet.
.. seealso::
:ref:`dropping_sqlite_foreign_keys`
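A sketch of the pattern for dropping an unnamed foreign key on SQLite; the
convention and names here are hypothetical::

    from alembic import op

    naming_convention = {
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    }

    with op.batch_alter_table(
        "account", naming_convention=naming_convention
    ) as batch_op:
        batch_op.drop_constraint(
            "fk_account_address_id_address", type_="foreignkey"
        )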
.. change::
:tags: bug, batch
Fixed bug where the "source_schema" argument was not correctly passed
when calling :meth:`.BatchOperations.create_foreign_key`. Pull
request courtesy Malte Marquarding.
.. change::
:tags: bug, batch
:tickets: 249
Repaired the inspection, copying and rendering of CHECK constraints
and so-called "schema" types such as Boolean, Enum within the batch
copy system; the CHECK constraint will not be "doubled" when the table is
copied, and additionally the inspection of the CHECK constraint for
its member columns will no longer fail with an attribute error.
.. change::
:tags: feature, batch
Added two new arguments
:paramref:`.Operations.batch_alter_table.reflect_args`
and :paramref:`.Operations.batch_alter_table.reflect_kwargs`, so that
arguments may be passed directly to suit the
:class:`~.sqlalchemy.schema.Table`
object that will be reflected.
.. seealso::
:ref:`batch_controlling_table_reflection`
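For example, overriding how a column is reflected during the batch copy;
names here are hypothetical::

    from alembic import op
    import sqlalchemy as sa

    with op.batch_alter_table(
        "account",
        reflect_args=[sa.Column("description", sa.String(400))],
    ) as batch_op:
        batch_op.alter_column("description", nullable=False)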
.. changelog::
:version: 0.7.0
:released: November 24, 2014
.. change::
:tags: feature, versioning
:tickets: 167
The "multiple heads / branches" feature has now landed. This is
by far the most significant change Alembic has seen since its inception;
while the workflow of most commands hasn't changed, and the format
of version files and the ``alembic_version`` table are unchanged as well,
a new suite of features opens up in the case where multiple version
files refer to the same parent, or to the "base". Merging of
branches, operating across distinct named heads, and multiple
independent bases are now all supported. The feature incurs radical
changes to the internals of versioning and traversal, and should be
treated as "beta mode" for the next several subsequent releases
within 0.7.
.. seealso::
:ref:`branches`
.. change::
:tags: feature, versioning
:tickets: 124
In conjunction with support for multiple independent bases, the
specific version directories are now also configurable to include
multiple, user-defined directories. When multiple directories exist,
the creation of a revision file with no down revision requires
that the starting directory is indicated; the creation of subsequent
revisions along that lineage will then automatically use that
directory for new files.
.. seealso::
:ref:`multiple_version_directories`
.. change::
:tags: feature, operations, sqlite
:tickets: 21
Added "move and copy" workflow, where a table to be altered is copied to
a new one with the new structure and the old one dropped, is now
implemented for SQLite as well as all database backends in general
using the new :meth:`.Operations.batch_alter_table` system. This
directive provides a table-specific operations context which gathers
column- and constraint-level mutations specific to that table, and
at the end of the context creates a new table combining the structure
of the old one with the given changes, copies data from old table to new,
and finally drops the old table,
renaming the new one to the existing name. This is required for
fully featured SQLite migrations, as SQLite has very little support for the
traditional ALTER directive. The batch directive
is intended to produce code that is still compatible with other databases,
in that the "move and copy" process only occurs for SQLite by default,
while still providing some level of sanity to SQLite's
requirement by allowing multiple table mutation operations to
proceed within one "move and copy" as well as providing explicit
control over when this operation actually occurs. The "move and copy"
feature may be optionally applied to other backends as well, however
dealing with referential integrity constraints from other tables must
still be handled explicitly.
.. seealso::
:ref:`batch_migrations`
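A minimal sketch of the directive inside a migration; the table and column
names here are hypothetical::

    from alembic import op
    import sqlalchemy as sa

    with op.batch_alter_table("account") as batch_op:
        batch_op.add_column(sa.Column("last_name", sa.String(50)))
        batch_op.drop_column("legacy_field")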
.. change::
:tags: feature, commands
Relative revision identifiers as used with ``alembic upgrade``,
``alembic downgrade`` and ``alembic history`` can be combined with
specific revisions as well, e.g. ``alembic upgrade ae10+3``, to produce
a migration target relative to the given exact version.
.. change::
:tags: bug, commands
:tickets: 248
The ``alembic revision`` command accepts the ``--sql`` option to
suit some very obscure use case where the ``revision_environment``
flag is set up, so that ``env.py`` is run when ``alembic revision``
is run even though autogenerate isn't specified. As this flag is
otherwise confusing, error messages are now raised if
``alembic revision`` is invoked with both ``--sql`` and
``--autogenerate`` or with ``--sql`` without
``revision_environment`` being set.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 247
Added a rule for Postgresql to not render a "drop unique" and "drop index"
given the same name; for now it is assumed that the "index" is the
implicit one Postgresql generates. Future integration with
new SQLAlchemy 1.0 features will improve this to be more
resilient.
.. change::
:tags: bug, autogenerate
:tickets: 247
A change in the ordering when columns and constraints are dropped;
autogenerate will now place the "drop constraint" calls *before*
the "drop column" calls, so that columns involved in those constraints
still exist when the constraint is dropped.
.. change::
:tags: feature, commands
New commands added: ``alembic show``, ``alembic heads`` and
``alembic merge``. Also, a new option ``--verbose`` has been
added to several informational commands, such as ``alembic history``,
``alembic current``, ``alembic branches``, and ``alembic heads``.
``alembic revision`` also contains several new options used
within the new branch management system. The output of commands has
been altered in many cases to support new fields and attributes;
the ``history`` command in particular now returns its "verbose" output
only if ``--verbose`` is sent; without this flag it reverts to its
older behavior of short line items (which was never changed in the docs).
.. change::
:tags: changed, commands
The ``--head_only`` option to the ``alembic current`` command is
deprecated; the ``current`` command now lists just the version numbers
alone by default; use ``--verbose`` to get at additional output.
.. change::
:tags: feature, config
Added new argument :paramref:`.Config.config_args`, allows a dictionary
of replacement variables to be passed which will serve as substitution
values when an API-produced :class:`.Config` consumes the ``.ini``
file. Pull request courtesy Noufal Ibrahim.
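As a hedged sketch, such substitution values might be supplied from code
as follows; the ``app_name`` key is hypothetical::

    from alembic.config import Config

    # occurrences of %(app_name)s in alembic.ini are replaced at read time
    cfg = Config("alembic.ini", config_args={"app_name": "myapp"})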
.. change::
:tags: bug, oracle
:tickets: 245
The Oracle dialect sets "transactional DDL" to False by default,
as Oracle does not support transactional DDL.
.. change::
:tags: bug, autogenerate
:tickets: 243
Fixed a variety of issues surrounding rendering of Python code that
contains unicode literals. The first is that the "quoted_name" construct
that SQLAlchemy uses to represent table and column names as well
as schema names does not ``repr()`` correctly on Py2K when the value
contains unicode characters; therefore an explicit stringification is
added to these. Additionally, SQL expressions such as server defaults
were not being generated in a unicode-safe fashion leading to decode
errors if server defaults contained non-ascii characters.
.. change::
:tags: bug, operations
:tickets: 174
The :meth:`.Operations.add_column` directive will now additionally emit
the appropriate ``CREATE INDEX`` statement if the
:class:`~sqlalchemy.schema.Column` object specifies ``index=True``.
Pull request courtesy David Szotten.
.. change::
:tags: feature, operations
:tickets: 205
The :class:`~sqlalchemy.schema.Table` object is now returned when
the :meth:`.Operations.create_table` method is used. This ``Table``
is suitable for use in subsequent SQL operations, in particular
the :meth:`.Operations.bulk_insert` operation.
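For example, a minimal sketch combining the returned ``Table`` with
``bulk_insert()`` (table and column names hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        accounts = op.create_table(
            "accounts",
            sa.Column("id", sa.Integer, primary_key=True),
            sa.Column("name", sa.String(50)),
        )
        # the returned Table is usable directly in subsequent operations
        op.bulk_insert(accounts, [{"id": 1, "name": "default"}])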
.. change::
:tags: feature, autogenerate
:tickets: 203
Indexes and unique constraints are now included in the
:paramref:`.EnvironmentContext.configure.include_object` hook.
Indexes are sent with type ``"index"`` and unique constraints with
type ``"unique_constraint"``.
.. change::
:tags: bug, autogenerate
:tickets: 219
Bound parameters are now resolved as "literal" values within the
SQL expression inside of a CheckConstraint(), when rendering the SQL
as a text string; supported for SQLAlchemy 0.8.0 and forward.
.. change::
:tags: bug, autogenerate
:tickets: 199
Added a workaround for SQLAlchemy issue #3023 (fixed in 0.9.5) where
a column that's part of an explicit PrimaryKeyConstraint would not
have its "nullable" flag set to False, thus producing a false
autogenerate. Also added a related correction to MySQL which will
correct for MySQL's implicit server default of '0' when a NULL integer
column is turned into a primary key column.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 240
Repaired issue related to the fix for #208 and others; a composite
foreign key reported by MySQL would cause a KeyError as Alembic
attempted to remove MySQL's implicitly generated indexes from the
autogenerate list.
.. change::
:tags: bug, autogenerate
:tickets: 28
If the "alembic_version" table is present in the target metadata,
autogenerate will skip this also. Pull request courtesy
Dj Gilcrease.
.. change::
:tags: bug, autogenerate
:tickets: 77
The :paramref:`.EnvironmentContext.configure.version_table`
and :paramref:`.EnvironmentContext.configure.version_table_schema`
arguments are now honored during the autogenerate process, such that
these names will be used as the "skip" names on both the database
reflection and target metadata sides.
.. change::
:tags: changed, autogenerate
:tickets: 229
The default value of the
:paramref:`.EnvironmentContext.configure.user_module_prefix`
parameter is **no longer the same as the SQLAlchemy prefix**.
When omitted, user-defined types will now use the ``__module__``
attribute of the type class itself when rendering in an
autogenerated module.
.. change::
:tags: bug, templates
:tickets: 234
Revision files are now written out using the ``'wb'`` modifier to
``open()``, since Mako reads the templates with ``'rb'``, thus preventing
CRs from being doubled up as has been observed on Windows. The encoding
of the output now defaults to 'utf-8', which can be configured using
a newly added config file parameter ``output_encoding``.
.. change::
:tags: bug, operations
:tickets: 230
Added support for use of the :class:`~sqlalchemy.sql.elements.quoted_name`
construct when using the ``schema`` argument within operations. This
allows a name containing a dot to be fully quoted, as well as to
provide configurable quoting on a per-name basis.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 73
Added a routine by which the Postgresql Alembic dialect inspects
the server default of INTEGER/BIGINT columns as they are reflected
during autogenerate for the pattern ``nextval(<name>...)`` containing
a potential sequence name, then queries ``pg_catalog`` to see if this
sequence is "owned" by the column being reflected; if so, it assumes
this is a SERIAL or BIGSERIAL column and the server default is
omitted from the column reflection as well as any kind of
server_default comparison or rendering, along with an INFO message
in the logs indicating this has taken place. This allows SERIAL/BIGSERIAL
columns to keep the SEQUENCE from being unnecessarily present within
the autogenerate operation.
.. change::
:tags: bug, autogenerate
:tickets: 197, 64, 196
The system by which autogenerate renders expressions within
a :class:`~sqlalchemy.schema.Index`, the ``server_default``
of :class:`~sqlalchemy.schema.Column`, and the
``existing_server_default`` of
:meth:`.Operations.alter_column` has been overhauled to anticipate
arbitrary SQLAlchemy SQL constructs, such as ``func.somefunction()``,
``cast()``, ``desc()``, and others. The system does not, as might
be preferred, render the full-blown Python expression as originally
created within the application's source code, as this would be exceedingly
complex and difficult. Instead, it renders the SQL expression against
the target backend that's subject to the autogenerate, and then
renders that SQL inside of a :func:`~sqlalchemy.sql.expression.text`
construct as a literal SQL string. This approach still has the
downside that the rendered SQL construct may not be backend-agnostic
in all cases, so there is still a need for manual intervention in that
small number of cases, but overall the majority of cases should work
correctly now. Big thanks to Carlos Rivera for pull requests and
support on this.
.. change::
:tags: feature
SQLAlchemy's testing infrastructure is now used to run tests.
This system supports both nose and pytest and opens the way
for Alembic testing to support any number of backends, parallel
testing, and 3rd party dialect testing.
.. change::
:tags: changed, compatibility
Minimum SQLAlchemy version is now 0.7.6, however at least
0.8.4 is strongly recommended. The overhaul of the test suite
allows for fully passing tests on all SQLAlchemy versions
from 0.7.6 on forward.
.. change::
:tags: bug, operations
The "match" keyword is not sent to :class:`.ForeignKeyConstraint`
by :meth:`.Operations.create_foreign_key` when SQLAlchemy 0.7 is in use;
this keyword was added to SQLAlchemy as of 0.8.0.
.. changelog::
:version: 0.6.7
:released: September 9, 2014
.. change::
:tags: bug, mssql
Fixed bug in MSSQL dialect where "rename table" wasn't using
``sp_rename()`` as is required on SQL Server. Pull request courtesy
Łukasz Bołdys.
.. change::
:tags: feature
:tickets: 222
Added support for functional indexes when using the
:meth:`.Operations.create_index` directive. Within the list of columns,
the SQLAlchemy ``text()`` construct can be sent, embedding a literal
SQL expression; the :meth:`.Operations.create_index` will perform some hackery
behind the scenes to get the :class:`.Index` construct to cooperate.
This works around some current limitations in :class:`.Index`
which should be resolved on the SQLAlchemy side at some point.
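A minimal sketch of such a functional index (index, table, and column names
hypothetical)::

    import sqlalchemy as sa
    from alembic import op

    # the text() construct embeds the literal SQL expression in the Index
    op.create_index("ix_user_email_lower", "user", [sa.text("lower(email)")])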
.. changelog::
:version: 0.6.6
:released: August 7, 2014
.. change::
:tags: bug
:tickets: 95
A file named ``__init__.py`` in the ``versions/`` directory is now
ignored by Alembic when the collection of version files is retrieved.
Pull request courtesy Michael Floering.
.. change::
:tags: bug
Fixed Py3K bug where an attempt would be made to sort None against
string values when autogenerate would detect tables across multiple
schemas, including the default schema. Pull request courtesy
paradoxxxzero.
.. change::
:tags: bug
Autogenerate will render the arguments within a Table construct
using ``*[...]`` when the number of columns/elements is greater than
255. Pull request courtesy Ryan P. Kelly.
.. change::
:tags: bug
Fixed bug where foreign key constraints would fail to render in
autogenerate when a schema name was present. Pull request courtesy
Andreas Zeidler.
.. change::
:tags: bug
:tickets: 212
Some deep-in-the-weeds fixes to try to get "server default" comparison
working better across platforms and expressions, in particular on
the Postgresql backend, mostly dealing with quoting/not quoting of various
expressions at the appropriate time and on a per-backend basis.
Repaired and tested support for such defaults as Postgresql interval
and array defaults.
.. change::
:tags: enhancement
:tickets: 209
When a run of Alembic command line fails due to ``CommandError``,
the output now prefixes the string with ``"FAILED:"``, and the error
is also written to the log output using ``log.error()``.
.. change::
:tags: bug
:tickets: 208
Liberalized even more the check for MySQL indexes that shouldn't be
counted in autogenerate as "drops"; this time it's been reported
that an implicitly created index might be named the same as a composite
foreign key constraint, and not the actual columns, so we now skip those
when detected as well.
.. change::
:tags: feature
Added a new accessor :attr:`.MigrationContext.config`, when used
in conjunction with a :class:`.EnvironmentContext` and
:class:`.Config`, this config will be returned. Patch
courtesy Marc Abramowitz.
.. changelog::
:version: 0.6.5
:released: May 3, 2014
.. change::
:tags: bug, autogenerate, mysql
:tickets: 202
This releases' "autogenerate index detection" bug, when a MySQL table
includes an Index with the same name as a column, autogenerate reported
it as an "add" even though its not; this is because we ignore reflected
indexes of this nature due to MySQL creating them implicitly. Indexes
that are named the same as a column are now ignored on
MySQL if we see that the backend is reporting that it already exists;
this indicates that we can still detect additions of these indexes
but not drops, as we cannot distinguish a backend index same-named
as the column as one that is user generated or mysql-generated.
.. change::
:tags: feature, environment
:tickets: 201
Added new feature :paramref:`.EnvironmentContext.configure.transaction_per_migration`,
which when True causes the BEGIN/COMMIT pair to incur for each migration
individually, rather than for the whole series of migrations. This is
to assist with some database directives that need to be within individual
transactions, without the need to disable transactional DDL entirely.
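In ``env.py``, this might look like the following sketch::

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        # one BEGIN/COMMIT pair per migration, rather than per run
        transaction_per_migration=True,
    )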
.. change::
:tags: bug, autogenerate
:tickets: 200
Fixed bug where the ``include_object()`` filter would not receive
the original :class:`.Column` object when evaluating a database-only
column to be dropped; the object would not include the parent
:class:`.Table` nor other aspects of the column that are important
for generating the "downgrade" case where the column is recreated.
.. change::
:tags: bug, environment
:tickets: 195
Fixed bug where :meth:`.EnvironmentContext.get_x_argument`
would fail if the :class:`.Config` in use didn't actually
originate from a command line call.
.. change::
:tags: bug, autogenerate
:tickets: 194
Fixed another bug regarding naming conventions, continuing
from :ticket:`183`, where add_index() and
drop_index() directives would not correctly render the ``f()``
construct when the index contained a convention-driven name.
.. changelog::
:version: 0.6.4
:released: March 28, 2014
.. change::
:tags: bug, mssql
:tickets: 186
Added quoting to the table name when the special EXEC is run to
drop any existing server defaults or constraints when the
:paramref:`.Operations.drop_column.mssql_drop_check` or
:paramref:`.Operations.drop_column.mssql_drop_default`
arguments are used.
.. change::
:tags: bug, mysql
:tickets: 103
Added/fixed support for MySQL "SET DEFAULT" / "DROP DEFAULT" phrases,
which will now be rendered if only the server default is changing
or being dropped (e.g. specify None to alter_column() to indicate
"DROP DEFAULT"). Also added support for rendering MODIFY rather than
CHANGE when the column name isn't changing.
.. change::
:tags: bug
:tickets: 190
Added support for the ``initially``, ``match`` keyword arguments
as well as dialect-specific keyword arguments to
:meth:`.Operations.create_foreign_key`.
.. change::
:tags: feature
:tickets: 163
Altered the support for "sourceless" migration files (e.g. only
.pyc or .pyo present) so that the flag "sourceless=true" needs to
be in alembic.ini for this behavior to take effect.
.. change::
:tags: bug, mssql
:tickets: 185
The feature that keeps on giving, index/unique constraint autogenerate
detection, has even more fixes, this time to accommodate database dialects
that don't yet report on unique constraints, but whose backends
do report unique constraints as indexes. The logic
Alembic uses to distinguish between "this is an index!" vs.
"this is a unique constraint that is also reported as an index!" has now
been further enhanced to not produce unwanted migrations when the dialect
is observed to not yet implement get_unique_constraints() (e.g. mssql).
Note that such a backend will no longer report index drops for unique
indexes, as these cannot be distinguished from an unreported unique
index.
.. change::
:tags: bug
:tickets: 183
Extensive changes have been made to more fully support SQLAlchemy's new
naming conventions feature. Note that while SQLAlchemy has added this
feature as of 0.9.2, some additional fixes in 0.9.4 are needed to
resolve some of the issues:
1. The :class:`.Operations` object now takes into account the naming
conventions that are present on the :class:`.MetaData` object that's
associated using :paramref:`~.EnvironmentContext.configure.target_metadata`.
When :class:`.Operations` renders a constraint directive like
``ADD CONSTRAINT``, it now will make use of this naming convention
when it produces its own temporary :class:`.MetaData` object.
2. Note however that the autogenerate feature in most cases generates
constraints like foreign keys and unique constraints with the
final names intact; the only exception are the constraints implicit
with a schema-type like Boolean or Enum. In most of these cases,
the naming convention feature will not take effect for these constraints
and will instead use the given name as is, with one exception....
3. Naming conventions which use the ``"%(constraint_name)s"`` token, that
is, produce a new name that uses the original name as a component,
will still be pulled into the naming convention converter and be
converted. The problem arises when autogenerate renders a constraint
with its already-generated name present in the migration file's source
code, the name will be doubled up at render time due to the combination
of #1 and #2. So to work around this, autogenerate now renders these
already-tokenized names using the new :meth:`.Operations.f` component.
This component is only generated if **SQLAlchemy 0.9.4** or greater
is in use.
Therefore it is highly recommended that an upgrade to Alembic 0.6.4
be accompanied by an upgrade of SQLAlchemy 0.9.4, if the new naming
conventions feature is used.
.. seealso::
:ref:`autogen_naming_conventions`
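As a sketch, an autogenerated directive using :meth:`.Operations.f` to mark
a name as already final might read as follows (names hypothetical)::

    op.drop_constraint(
        # op.f() indicates the name should bypass the naming convention
        op.f("fk_user_address_id_address"), "user", type_="foreignkey"
    )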
.. change::
:tags: bug
:tickets: 160
Suppressed IOErrors which can be raised when the program output pipe
is closed under a program like ``head``; however this only
works on Python 2. On Python 3, there is not yet a known way to
suppress the BrokenPipeError warnings without prematurely terminating
the program via signals.
.. change::
:tags: bug
:tickets: 179
Fixed bug where :meth:`.Operations.bulk_insert` would not function
properly when :meth:`.Operations.inline_literal` values were used,
either in --sql or non-sql mode. The values will now render
directly in --sql mode. For compatibility with "online" mode,
a new flag :paramref:`~.Operations.bulk_insert.multiinsert`
can be set to False which will cause each parameter set to be
compiled and executed with individual INSERT statements.
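A hedged sketch of this flag in use, assuming ``accounts`` is a table object
obtained elsewhere in the migration (values hypothetical)::

    op.bulk_insert(
        accounts,
        [
            {"id": 1, "name": op.inline_literal("some literal")},
            {"id": 2, "name": "plain value"},
        ],
        # compile and execute each parameter set as an individual INSERT
        multiinsert=False,
    )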
.. change::
:tags: bug, py3k
:tickets: 175
Fixed a failure of the system that allows "legacy keyword arguments"
to be understood, which arose as of a change in Python 3.4 regarding
decorators. A workaround is applied that allows the code to work
across Python 3 versions.
.. change::
:tags: feature
The :func:`.command.revision` command now returns the :class:`.Script`
object corresponding to the newly generated revision. From this
structure, one can get the revision id, the module documentation,
and everything else, for use in scripts that call upon this command.
Pull request courtesy Robbie Coomber.
.. changelog::
:version: 0.6.3
:released: February 2, 2014
.. change::
:tags: bug
:tickets: 172
Added a workaround for when we call ``fcntl.ioctl()`` to get at
``TERMWIDTH``; if the function returns zero, as is reported to occur
in some pseudo-ttys, the message wrapping system is disabled in the
same way as if ``ioctl()`` failed.
.. change::
:tags: feature
:tickets: 171
Added new argument
:paramref:`.EnvironmentContext.configure.user_module_prefix`.
This prefix is applied when autogenerate renders a user-defined type,
which here is defined as any type that is from a module outside of the
``sqlalchemy.`` hierarchy. This prefix defaults to ``None``, in
which case the :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`
is used, thus preserving the current behavior.
.. change::
:tags: bug
:tickets: 170
Added support for autogenerate covering the use case where :class:`.Table`
objects specified in the metadata have an explicit ``schema`` attribute
whose name matches that of the connection's default schema
(e.g. "public" for Postgresql). Previously, it was assumed that "schema"
was ``None`` when it matched the "default" schema, now the comparison
adjusts for this.
.. change::
:tags: bug
The :func:`.compare_metadata` public API function now takes into
account the settings for
:paramref:`.EnvironmentContext.configure.include_object`,
:paramref:`.EnvironmentContext.configure.include_symbol`,
and :paramref:`.EnvironmentContext.configure.include_schemas`, in the
same way that the ``--autogenerate`` command does. Pull
request courtesy Roman Podoliaka.
.. change::
:tags: bug
:tickets: 168
Calling :func:`.bulk_insert` with an empty list will not emit any
commands on the current connection. This was already the case with
``--sql`` mode, so is now the case with "online" mode.
.. change::
:tags: bug
Enabled schema support for index and unique constraint autodetection;
previously these were non-functional and could in some cases lead to
attribute errors. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug
:tickets: 164
More fixes to index autodetection; indexes created with expressions
like DESC or functional indexes will no longer cause AttributeError
exceptions when attempting to compare the columns.
.. change::
:tags: feature
:tickets: 163
The :class:`.ScriptDirectory` system that loads migration files
from a ``versions/`` directory now supports so-called
"sourceless" operation, where the ``.py`` files are not present
and instead ``.pyc`` or ``.pyo`` files are directly present where
the ``.py`` files should be. Note that while Python 3.3 has a
new system of locating ``.pyc``/``.pyo`` files within a directory
called ``__pycache__`` (e.g. PEP-3147), PEP-3147 maintains
support for the "source-less imports" use case, where the
``.pyc``/``.pyo`` files are present in the "old" location, e.g. next
to the ``.py`` file; this is the usage that's supported even when
running Python 3.3.
.. changelog::
:version: 0.6.2
:released: Fri Dec 27 2013
.. change::
:tags: bug
Autogenerate for ``op.create_table()`` will not include a
``PrimaryKeyConstraint()`` that has no columns.
.. change::
:tags: bug
Fixed bug in the not-internally-used :meth:`.ScriptDirectory.get_base`
method which would fail if called on an empty versions directory.
.. change::
:tags: bug
:tickets: 157
An almost-rewrite of the new unique constraint/index autogenerate
detection, to accommodate a variety of issues. The emphasis is on
not generating false positives for those cases where no net change
is present, as these errors are the ones that impact all autogenerate
runs:
* Fixed an issue with unique constraint autogenerate detection where
a named ``UniqueConstraint`` on both sides with column changes would
render with the "add" operation before the "drop", requiring the
user to reverse the order manually.
* Corrected for MySQL's apparent addition of an implicit index
for a foreign key column, so that it doesn't show up as "removed".
This required that the index/constraint autogen system query the
dialect-specific implementation for special exceptions.
* Reworked the "dedupe" logic to accommodate MySQL's bi-directional
duplication of unique indexes as unique constraints, and unique
constraints as unique indexes. Postgresql's slightly different
logic of duplicating unique constraints into unique indexes
continues to be accommodated as well. Note that a unique index
or unique constraint removal on a backend that duplicates these may
show up as a distinct "remove_constraint()" / "remove_index()" pair,
which may need to be corrected in the post-autogenerate if multiple
backends are being supported.
* Added another dialect-specific exception to the SQLite backend
when dealing with unnamed unique constraints, as the backend can't
currently report on constraints that were made with this technique,
hence they'd come out as "added" on every run.
* The ``op.create_table()`` directive will be auto-generated with
the ``UniqueConstraint`` objects inline, but will not double them
up with a separate ``create_unique_constraint()`` call, which may
have been occurring. Indexes still get rendered as distinct
``op.create_index()`` calls even when the corresponding table was
created in the same script.
* The inline ``UniqueConstraint`` within ``op.create_table()`` includes
all the options like ``deferrable``, ``initially``, etc. Previously
these weren't rendering.
.. change::
:tags: feature, mssql
Added new argument ``mssql_drop_foreign_key`` to
:meth:`.Operations.drop_column`. Like ``mssql_drop_default``
and ``mssql_drop_check``, will do an inline lookup for a
single foreign key which applies to this column, and drop it.
For a column with more than one FK, you'd still need to explicitly
use :meth:`.Operations.drop_constraint` given the name,
even though only MSSQL has this limitation in the first place.
.. change::
:tags: bug, mssql
The MSSQL backend will add the batch separator (e.g. ``"GO"``)
in ``--sql`` mode after the final ``COMMIT`` statement, to ensure
that statement is also processed in batch mode. Courtesy
Derek Harland.
.. changelog::
:version: 0.6.1
:released: Wed Nov 27 2013
.. change::
:tags: bug, mysql
:tickets: 152
Fixed bug where :func:`.op.alter_column` in the MySQL dialect
would fail to apply quotes to column names that had mixed casing
or spaces.
.. change::
:tags: feature
Expanded the size of the "slug" generated by "revision" to 40
characters, which is also configurable by new field
``truncate_slug_length``; and also split on the word rather than the
character; courtesy Frozenball.
.. change::
:tags: bug
:tickets: 135
Fixed the output wrapping for Alembic message output, so that
we either get the terminal width for "pretty printing" with
indentation, or if not we just output the text as is; in any
case the text won't be wrapped too short.
.. change::
:tags: bug
Fixes to Py3k in-place compatibility regarding output encoding and related;
the use of the new io.* package introduced some incompatibilities on Py2k.
These should be resolved, due to the introduction of new adapter types
for translating from io.* to Py2k file and StringIO types.
Thanks to Javier Santacruz for help with this.
.. change::
:tags: bug
:tickets: 145
Fixed py3k bug where the wrong form of ``next()`` was being called
when using the list_templates command. Courtesy Chris Wilkes.
.. change::
:tags: feature
:tickets: 107
Support for autogeneration detection and rendering of indexes and
unique constraints has been added. The logic goes through some effort
in order to differentiate between true unique constraints and
unique indexes, where there are some quirks on backends like Postgresql.
The effort here in producing the feature and tests is courtesy of IJL.
.. change::
:tags: bug
Fixed bug introduced by new ``include_object`` argument where the
inspected column would be misinterpreted when using a user-defined
type comparison function, causing a KeyError or similar expression-related
error. Fix courtesy Maarten van Schaik.
.. change::
:tags: bug
Added the "deferrable" keyword argument to :func:`.op.create_foreign_key`
so that ``DEFERRABLE`` constraint generation is supported; courtesy
Pedro Romano.
.. change::
:tags: bug
:tickets: 137
Ensured that strings going to stdout go through an encode/decode phase,
so that any non-ASCII characters get to the output stream correctly
in both Py2k and Py3k. Also added source encoding detection using
Mako's parse_encoding() routine in Py2k so that the __doc__ of a
non-ascii revision file can be treated as unicode in Py2k.
.. changelog::
:version: 0.6.0
:released: Fri July 19 2013
.. change::
:tags: feature
:tickets: 101
Added new kw argument to :meth:`.EnvironmentContext.configure`
``include_object``. This is a more flexible version of the
``include_symbol`` argument which allows filtering of columns as well as tables
from the autogenerate process,
and in the future will also work for types, constraints and
other constructs. The fully constructed schema object is passed,
including its name and type as well as a flag indicating if the object
is from the local application metadata or is reflected.
.. change::
:tags: feature
The output of the ``alembic history`` command is now
expanded to show information about each change on multiple
lines, including the full top message,
resembling the formatting of git log.
.. change::
:tags: feature
Added :attr:`alembic.config.Config.cmd_opts` attribute,
allows access to the ``argparse`` options passed to the
``alembic`` runner.
.. change::
:tags: feature
:tickets: 120
Added new command line argument ``-x``, allows extra arguments
to be appended to the command line which can be consumed
within an ``env.py`` script by looking at
``context.config.cmd_opts.x``, or more simply a new
method :meth:`.EnvironmentContext.get_x_argument`.
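Inside ``env.py``, a minimal sketch (the ``db_name`` key is hypothetical)::

    from alembic import context

    # invoked as e.g.: alembic -x db_name=reporting upgrade head
    db_name = context.get_x_argument(as_dictionary=True).get("db_name")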
.. change::
:tags: bug
:tickets: 125
Added support for options like "name" etc. to be rendered
within CHECK constraints in autogenerate. Courtesy
Sok Ann Yap.
.. change::
:tags: misc
Source repository has been moved from Mercurial to Git.
.. change::
:tags: bug
Repaired autogenerate rendering of ForeignKeyConstraint
to include use_alter argument, if present.
.. change::
:tags: feature
Added ``-r`` argument to ``alembic history`` command,
allows specification of ``[start]:[end]`` to view
a slice of history. Accepts revision numbers, symbols
"base", "head", a new symbol "current" representing the
current migration, as well as relative ranges for one
side at a time (i.e. ``-r-5:head``, ``-rcurrent:+3``).
Courtesy Atsushi Odagiri for this feature.
.. change::
:tags: feature
:tickets: 55
Source base is now in-place for Python 2.6 through
3.3, without the need for 2to3. Support for Python 2.5
and below has been dropped. Huge thanks to
Hong Minhee for all the effort on this!
.. changelog::
:version: 0.5.0
:released: Thu Apr 4 2013
.. note::
Alembic 0.5.0 now requires at least
version 0.7.3 of SQLAlchemy to run properly.
Support for 0.6 has been dropped.
.. change::
:tags: feature
:tickets: 76
Added ``version_table_schema`` argument
to :meth:`.EnvironmentContext.configure`,
complements the ``version_table`` argument to
set an optional remote schema for the version
table. Courtesy Christian Blume.
.. change::
:tags: bug, postgresql
:tickets: 32
Fixed format of RENAME for table that includes
schema with Postgresql; the schema name shouldn't
be in the "TO" field.
.. change::
:tags: feature
:tickets: 90
Added ``output_encoding`` option to
:meth:`.EnvironmentContext.configure`,
used with ``--sql`` mode to apply an encoding
to the output stream.
.. change::
:tags: feature
:tickets: 93
Added :meth:`.Operations.create_primary_key`
operation, will generate an ADD CONSTRAINT
for a primary key.
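For example (constraint, table, and column names hypothetical)::

    from alembic import op

    # renders e.g. ALTER TABLE account ADD CONSTRAINT pk_account PRIMARY KEY (id)
    op.create_primary_key("pk_account", "account", ["id"])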
.. change::
:tags: bug, mssql
:tickets: 109
Fixed bug whereby double quoting would be applied
to target column name during an ``sp_rename``
operation.
.. change::
:tags: bug, sqlite, mysql
:tickets: 112
The transactional_ddl flag for the SQLite and MySQL dialects
is set to False: MySQL doesn't support transactional DDL, and
while SQLite does, the current pysqlite driver does not.
.. change::
:tags: feature
:tickets: 115
upgrade and downgrade commands will list the
first line of the docstring next to the
version number. Courtesy Hong Minhee.
.. change::
:tags: feature
Added --head-only option to "alembic current",
which prints the current version plus the symbol
"(head)" if this version is the head.
Courtesy Charles-Axel Dein.
.. change::
:tags: bug
:tickets: 110
Autogenerate will render additional table keyword
arguments like "mysql_engine" and others within
op.create_table().
.. change::
:tags: feature
:tickets: 108
The rendering of any construct during autogenerate
can be customized, in particular to allow special rendering
for user-defined column, constraint subclasses, using new
``render_item`` argument to
:meth:`.EnvironmentContext.configure`.
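A sketch of a ``render_item`` callable; ``MySpecialType`` stands in for a
hypothetical user-defined type::

    def render_item(type_, obj, autogen_context):
        # return a string to take over rendering; False falls back to default
        if type_ == "type" and isinstance(obj, MySpecialType):
            return "mypackage.MySpecialType()"
        return False

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        render_item=render_item,
    )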
.. change::
:tags: bug
Fixed bug whereby create_index()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
This is the same issue that was fixed for unique
constraints in version 0.3.2.
.. change::
:tags: bug
Worked around a backwards-incompatible regression in Python3.3
regarding argparse; running "alembic" with no arguments
now yields an informative error in py3.3 as with all previous versions.
Courtesy Andrey Antukh.
.. change::
:tags: change
SQLAlchemy 0.6 is no longer supported by Alembic - minimum version is 0.7.3,
full support is as of 0.7.9.
.. change::
:tags: bug
:tickets: 104
A host of argument name changes within migration
operations for consistency. Keyword arguments
will continue to work on the old name for backwards compatibility,
however required positional arguments will not:
* :meth:`.Operations.alter_column` - ``name`` -> ``new_column_name`` - old
  name will work for backwards compatibility.
* :meth:`.Operations.create_index` - ``tablename`` -> ``table_name`` -
  argument is positional.
* :meth:`.Operations.drop_index` - ``tablename`` -> ``table_name`` - old
  name will work for backwards compatibility.
* :meth:`.Operations.drop_constraint` - ``tablename`` -> ``table_name`` -
  argument is positional.
* :meth:`.Operations.drop_constraint` - ``type`` -> ``type_`` - old
  name will work for backwards compatibility.
.. changelog::
:version: 0.4.2
:released: Fri Jan 11 2013
.. change::
:tags: bug, autogenerate
:tickets: 99
Fixed bug where autogenerate would fail if a Column
to be added to a table made use of the ".key" parameter.
.. change::
:tags: bug, sqlite
:tickets: 98
The "implicit" constraint generated by a
type such as Boolean or Enum will not generate an
ALTER statement when run on SQlite, which does not
support ALTER for the purpose of adding/removing
constraints separate from the column def itself.
While SQLite supports adding a CHECK constraint
at the column level, SQLAlchemy would need modification
to support this.
A warning is emitted indicating this
constraint cannot be added in this scenario.
.. change::
:tags: bug
:tickets: 96
Added a workaround to setup.py to prevent
"NoneType" error from occurring when
"setup.py test" is run.
.. change::
:tags: bug
:tickets: 96
Added an append_constraint() step to each
condition within
test_autogenerate:AutogenRenderTest.test_render_fk_constraint_kwarg
if the SQLAlchemy version is less than 0.8, as ForeignKeyConstraint
does not auto-append prior to 0.8.
.. change::
:tags: feature
:tickets: 96
Added a README.unittests with instructions for running the test
suite fully.
.. changelog::
:version: 0.4.1
:released: Sun Dec 9 2012
.. change::
:tags: bug
:tickets: 92
Added support for autogenerate render of
ForeignKeyConstraint options onupdate,
ondelete, initially, and deferred.
.. change::
:tags: bug
:tickets: 94
Autogenerate will include "autoincrement=False"
in the rendered table metadata
if this flag was set to false on the source
:class:`.Column` object.
.. change::
:tags: feature
:tickets: 66
Explicit error message describing the case
when downgrade --sql is used without specifying
specific start/end versions.
.. change::
:tags: bug
:tickets: 81
Removed erroneous "emit_events" attribute
from operations.create_table() documentation.
.. change::
:tags: bug
:tickets:
Fixed the minute component in file_template
which returned the month part of the create date.
.. changelog::
:version: 0.4.0
:released: Mon Oct 01 2012
.. change::
:tags: feature
:tickets: 33
Support for tables in alternate schemas
has been added fully to all operations, as well as to
the autogenerate feature. When using autogenerate,
specifying the flag include_schemas=True to
Environment.configure() will also cause autogenerate
to scan all schemas located by Inspector.get_schema_names(),
which is supported by *some* (but not all)
SQLAlchemy dialects including Postgresql.
*Enormous* thanks to Bruno Binet for a huge effort
in implementing as well as writing tests.
.. change::
:tags: feature
:tickets: 70
The command line runner has been organized
into a reusable CommandLine object, so that other
front-ends can re-use the argument parsing built
in.
.. change::
:tags: feature
:tickets: 43
Added "stdout" option to Config, provides
control over where the "print" output of commands like
"history", "init", "current" etc. are sent.
.. change::
:tags: bug
:tickets: 71
Fixed the "multidb" template which was badly out
of date. It now generates revision files using
the configuration to determine the different
upgrade_<xyz>() methods needed as well, instead of
needing to hardcode these. Huge thanks to
BryceLohr for doing the heavy lifting here.
.. change::
:tags: bug
:tickets: 72
Fixed the regexp that was checking for .py files
in the version directory to allow any .py file through.
Previously it was doing some kind of defensive checking,
probably from some early notions of how this directory
works, that was prohibiting various filename patterns
such as those which begin with numbers.
.. change::
:tags: bug
:tickets:
Fixed MySQL rendering for server_default which
didn't work if the server_default was a generated
SQL expression. Courtesy Moriyoshi Koizumi.
.. change::
:tags: feature
:tickets:
Added support for alteration of MySQL
columns that have AUTO_INCREMENT, as well as enabling
this flag. Courtesy Moriyoshi Koizumi.
.. changelog::
:version: 0.3.6
:released: Wed Aug 15 2012
.. change::
:tags: feature
:tickets: 27
Added include_symbol option to
EnvironmentContext.configure(),
specifies a callable which will include/exclude tables
in their entirety from the autogeneration process
based on name.
.. change::
:tags: feature
:tickets: 59
Added year, month, day, hour, minute, second
variables to file_template.
.. change::
:tags: feature
:tickets:
Added 'primary' to the list of constraint types
recognized for MySQL drop_constraint().
.. change::
:tags: feature
:tickets:
Added --sql argument to the "revision" command,
for the use case where the "revision_environment"
config option is being used but SQL access isn't
desired.
.. change::
:tags: bug
:tickets:
Repaired create_foreign_key() for
self-referential foreign keys, which weren't working
at all.
.. change::
:tags: bug
:tickets: 63
'alembic' command reports an informative
error message when the configuration is missing
the 'script_directory' key.
.. change::
:tags: bug
:tickets: 62
Fixes made to the constraints created/dropped
alongside so-called "schema" types such as
Boolean and Enum. The create/drop constraint logic
does not kick in when using a dialect that doesn't
use constraints for these types, such as postgresql,
even when existing_type is specified to
alter_column(). Additionally, the constraints
are not affected if existing_type is passed but
type\_ is not, i.e. there's no net change
in type.
.. change::
:tags: bug
:tickets: 66
Improved error message when specifying
non-ordered revision identifiers to cover
the case when the "higher" rev is None,
improved message overall.
.. changelog::
:version: 0.3.5
:released: Sun Jul 08 2012
.. change::
:tags: bug
:tickets: 31
Fixed issue whereby reflected server defaults
wouldn't be quoted correctly; uses repr() now.
.. change::
:tags: bug
:tickets: 58
Fixed issue whereby when autogenerate would
render create_table() on the upgrade side for a
table that has a Boolean type, an unnecessary
CheckConstraint() would be generated.
.. change::
:tags: feature
:tickets:
Implemented SQL rendering for
CheckConstraint() within autogenerate upgrade,
including for literal SQL as well as SQL Expression
Language expressions.
.. changelog::
:version: 0.3.4
:released: Sat Jun 02 2012
.. change::
:tags: bug
:tickets:
Fixed command-line bug introduced by the
"revision_environment" feature.
.. changelog::
:version: 0.3.3
:released: Sat Jun 02 2012
.. change::
:tags: feature
:tickets:
New config argument
"revision_environment=true", causes env.py to
be run unconditionally when the "revision" command
is run, to support script.py.mako templates with
dependencies on custom "template_args".
.. change::
:tags: feature
:tickets:
Added "template_args" option to configure()
so that an env.py can add additional arguments
to the template context when running the
"revision" command. This requires either --autogenerate
or the configuration directive "revision_environment=true".
.. change::
:tags: bug
:tickets: 44
Added "type" argument to op.drop_constraint(),
and implemented full constraint drop support for
MySQL. CHECK and undefined raise an error.
MySQL needs the constraint type
in order to emit a DROP CONSTRAINT.
.. change::
:tags: feature
:tickets: 34
Added version_table argument to
EnvironmentContext.configure(), allowing for the
configuration of the version table name.
.. change::
:tags: feature
:tickets:
Added support for "relative" migration
identifiers, i.e. "alembic upgrade +2",
"alembic downgrade -1". Courtesy
Atsushi Odagiri for this feature.
.. change::
:tags: bug
:tickets: 49
Fixed bug whereby directories inside of
the template directories, such as __pycache__
on Pypy, would mistakenly be interpreted as
files which are part of the template.
.. changelog::
:version: 0.3.2
:released: Mon Apr 30 2012
.. change::
:tags: feature
:tickets: 40
Basic support for Oracle added,
courtesy shgoh.
.. change::
:tags: feature
:tickets:
Added support for UniqueConstraint
in autogenerate, courtesy Atsushi Odagiri
.. change::
:tags: bug
:tickets:
Fixed support of schema-qualified
ForeignKey target in column alter operations,
courtesy Alexander Kolov.
.. change::
:tags: bug
:tickets:
Fixed bug whereby create_unique_constraint()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
.. changelog::
:version: 0.3.1
:released: Sat Apr 07 2012
.. change::
:tags: bug
:tickets: 41
bulk_insert() fixes:
1. bulk_insert() operation was
not working most likely since the 0.2 series
when used with an engine.
2. Repaired bulk_insert() to complete when
used against a lower-case-t table and executing
with only one set of parameters, working
around SQLAlchemy bug #2461 in this regard.
3. bulk_insert() uses "inline=True" so that phrases
like RETURNING and such don't get invoked for
single-row bulk inserts.
4. bulk_insert() will check that you're passing
a list of dictionaries in, raises TypeError
if not detected.
.. changelog::
:version: 0.3.0
:released: Thu Apr 05 2012
.. change::
:tags: general
:tickets:
The focus of 0.3 is to clean up
and more fully document the public API of Alembic,
including better accessors on the MigrationContext
and ScriptDirectory objects. Methods that are
not considered to be public on these objects have
been underscored, and methods which should be public
have been cleaned up and documented, including:
MigrationContext.get_current_revision()
ScriptDirectory.iterate_revisions()
ScriptDirectory.get_current_head()
ScriptDirectory.get_heads()
ScriptDirectory.get_base()
ScriptDirectory.generate_revision()
.. change::
:tags: feature
:tickets:
Added a bit of autogenerate to the
public API in the form of the function
alembic.autogenerate.compare_metadata.
.. changelog::
:version: 0.2.2
:released: Mon Mar 12 2012
.. change::
:tags: feature
:tickets:
Informative error message when op.XYZ
directives are invoked at module import time.
.. change::
:tags: bug
:tickets: 35
Fixed inappropriate direct call to
util.err() and therefore sys.exit()
when Config failed to locate the
config file within library usage.
.. change::
:tags: bug
:tickets:
Autogenerate will emit CREATE TABLE
and DROP TABLE directives according to
foreign key dependency order.
.. change::
:tags: bug
:tickets:
Implemented the 'tablename' parameter on
drop_index(), as this is needed by some
backends.
.. change::
:tags: feature
:tickets:
Added execution_options parameter
to op.execute(), will call execution_options()
on the Connection before executing.
The immediate use case here is to allow
access to the new no_parameters option
in SQLAlchemy 0.7.6, which allows
some DBAPIs (psycopg2, MySQLdb) to allow
percent signs straight through without
escaping, thus providing cross-compatible
operation with DBAPI execution and
static script generation.
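A hedged sketch of that use case (SQL and option values illustrative)::

    from alembic import op

    op.execute(
        "UPDATE account SET name = 'x%y'",
        # passed to Connection.execution_options() before executing;
        # no_parameters lets percent signs through unescaped on some DBAPIs
        execution_options={"no_parameters": True},
    )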
.. change::
:tags: bug
:tickets:
setup.py won't install argparse if on
Python 2.7/3.2.
.. change::
:tags: feature
:tickets: 29
script_location can be interpreted
by pkg_resources.resource_filename(), if
it is a non-absolute URI that contains
colons. This scheme is the same
one used by Pyramid.
.. change::
:tags: feature
:tickets:
Added missing support for
onupdate/ondelete flags for
ForeignKeyConstraint, courtesy Giacomo Bagnoli.
.. change::
:tags: bug
:tickets: 30
Fixed a regression regarding an autogenerate
error message, as well as various glitches
in the Pylons sample template. The Pylons sample
template now requires that you tell it where to
get the Engine from. Courtesy
Marcin Kuzminski.
.. change::
:tags: bug
:tickets:
drop_index() ensures a dummy column
is added when it calls "Index", as SQLAlchemy
0.7.6 will warn on an index with no column names.
.. changelog::
:version: 0.2.1
:released: Tue Jan 31 2012
.. change::
:tags: bug
:tickets: 26
Fixed the generation of CHECK constraint,
regression from 0.2.0
.. changelog::
:version: 0.2.0
:released: Mon Jan 30 2012
.. change::
:tags: feature
:tickets: 19
API rearrangement allows everything
Alembic does to be represented by contextual
objects, including EnvironmentContext,
MigrationContext, and Operations. Other
libraries and applications can now use
things like "alembic.op" without relying
upon global configuration variables.
The rearrangement was done such that
existing migrations should be OK,
as long as they use the pattern
of "from alembic import context" and
"from alembic import op", as these
are now contextual objects, not modules.
.. change::
:tags: feature
:tickets: 24
The naming of revision files can
now be customized to be some combination
of "rev id" and "slug", the latter of which
is based on the revision message.
By default, the pattern "<rev>_<slug>"
is used for new files. New script files
should include the "revision" variable
for this to work, which is part of
the newer script.py.mako scripts.
.. change::
:tags: bug
:tickets: 25
env.py templates call
connection.close() to better support
programmatic usage of commands; use
NullPool in conjunction with create_engine()
as well so that no connection resources
remain afterwards.
.. change::
:tags: bug
:tickets: 22
Fixed the config.main() function to honor
the arguments passed; removed the no-longer-used
"scripts/alembic" as setuptools creates this
for us.
.. change::
:tags: bug
:tickets:
Fixed alteration of column type on
MSSQL to not include the keyword "TYPE".
.. change::
:tags: feature
:tickets: 23
Can create alembic.config.Config
with no filename, use set_main_option()
to add values. Also added set_section_option()
which will add sections.
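For example, a minimal programmatic configuration sketch (values
hypothetical)::

    from alembic.config import Config

    cfg = Config()
    cfg.set_main_option("script_location", "myapp:migrations")
    cfg.set_main_option("sqlalchemy.url", "sqlite:///app.db")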
.. changelog::
:version: 0.1.1
:released: Wed Jan 04 2012
.. change::
:tags: bug
:tickets:
Clean up file write operations so that
file handles are closed.
.. change::
:tags: feature
:tickets:
PyPy is supported.
.. change::
:tags: feature
:tickets:
Python 2.5 is supported, needs
__future__.with_statement
.. change::
:tags: bug
:tickets:
Fix autogenerate so that "pass" is
generated between the two comments
if no net migrations were present.
.. change::
:tags: bug
:tickets: 16
Fix autogenerate bug that prevented
correct reflection of a foreign-key
referenced table in the list of "to remove".
.. change::
:tags: bug
:tickets: 17
Fix bug where create_table() didn't
handle self-referential foreign keys
correctly.
.. change::
:tags: bug
:tickets: 18
Default prefix for autogenerate
directives is "op.", matching the
mako templates.
.. change::
:tags: feature
:tickets: 18
Add alembic_module_prefix argument
to configure() to complement
sqlalchemy_module_prefix.
.. change::
:tags: bug
:tickets: 14
fix quotes not being rendered in
ForeignKeyConstraint during
autogenerate
.. changelog::
:version: 0.1.0
:released: Wed Nov 30 2011
.. change::
:tags:
:tickets:
Initial release. Status of features:
.. change::
:tags:
:tickets:
Alembic is used in at least one production
environment, but should still be considered
ALPHA LEVEL SOFTWARE as of this release,
particularly in that many features are expected
to be missing / unimplemented. Major API
changes are not anticipated but for the moment
nothing should be assumed.
The author asks that you *please* report all
issues, missing features, workarounds etc.
to the bugtracker.
.. change::
:tags:
:tickets:
Python 3 is supported and has been tested.
.. change::
:tags:
:tickets:
The "Pylons" and "MultiDB" environment templates
have not been directly tested - these should be
considered to be samples to be modified as
needed. Multiple database support itself
is well tested, however.
.. change::
:tags:
:tickets:
Postgresql and MS SQL Server environments
have been tested for several weeks in a production
environment. In particular, some involved workarounds
were implemented to allow fully-automated dropping
of default- or constraint-holding columns with
SQL Server.
.. change::
:tags:
:tickets:
MySQL support has also been implemented to a
basic degree, including MySQL's awkward style
of modifying columns being accommodated.
.. change::
:tags:
:tickets:
Other database environments not included among
those three have *not* been tested, *at all*. This
includes Firebird, Oracle, Sybase. Adding
support for these backends should be
straightforward. Please report all missing/
incorrect behaviors to the bugtracker! Patches
are welcome here but are optional - please just
indicate the exact format expected by the target
database.
.. change::
:tags:
:tickets:
SQLite, as a backend, has almost no support for
schema alterations to existing databases. The author
would strongly recommend that SQLite not be used in
a migration context - just dump your SQLite database
into an intermediary format, then dump it back
into a new schema. For dev environments, the
dev installer should be building the whole DB from
scratch. Or just use Postgresql, which is a much
better database for non-trivial schemas.
Requests for full ALTER support on SQLite should be
reported to SQLite's bug tracker at
http://www.sqlite.org/src/wiki?name=Bug+Reports,
as Alembic will not be implementing the
"rename the table to a temptable then copy the
data into a new table" workaround.
Note that Alembic will at some point offer an
extensible API so that you can implement commands
like this yourself.
.. change::
:tags:
:tickets:
Well-tested directives include add/drop table, add/drop
column, including support for SQLAlchemy "schema"
types which generate additional CHECK
constraints, i.e. Boolean, Enum. Other directives not
included here have *not* been strongly tested
in production, i.e. rename table, etc.
.. change::
:tags:
:tickets:
Both "online" and "offline" migrations, the latter
being generated SQL scripts to hand off to a DBA,
have been strongly production tested against
Postgresql and SQL Server.
.. change::
:tags:
:tickets:
Modifying column type, default status, and nullable is
functional and tested across PG, MSSQL, and MySQL,
but not yet widely tested in production usage.
.. change::
:tags:
:tickets:
Many migrations are still outright missing, i.e.
create/add sequences, etc. As a workaround,
execute() can be used for those which are missing,
though posting of tickets for new features/missing
behaviors is strongly encouraged.
.. change::
:tags:
:tickets:
Autogenerate feature is implemented and has been
tested, though only a little bit in a production setting.
In particular, detection of type and server
default changes are optional and are off by default;
they can also be customized by a callable.
Both features work but can have surprises particularly
the disparity between BIT/TINYINT and boolean,
which hasn't yet been worked around, as well as
format changes performed by the database on defaults
when it reports back. When enabled, the PG dialect
will execute the two defaults to be compared to
see if they are equivalent. Other backends may
need to do the same thing.
The autogenerate feature only generates
"candidate" commands which must be hand-tailored
in any case, so is still a useful feature and
is safe to use. Please report missing/broken features
of autogenerate! This will be a great feature and
will also improve SQLAlchemy's reflection services.
.. change::
:tags:
:tickets:
Support for non-ASCII table, column and constraint
names is mostly nonexistent. This is also a
straightforward feature add as SQLAlchemy itself
supports unicode identifiers; Alembic itself will
likely need fixes to logging, column identification
by key, etc. for full support here.
==========
Changelog
==========
.. changelog::
:version: 1.12.1
:include_notes_from: unreleased
.. changelog::
:version: 1.12.0
:released: August 31, 2023
.. change::
:tags: bug, operations
:tickets: 1300
Added support for ``op.drop_constraint()`` to support PostgreSQL
``ExcludeConstraint`` objects, as well as other constraint-like objects
that may be present in third party dialects, by resolving the ``type_``
parameter to be ``None`` for this case. Autogenerate has also been
enhanced to exclude the ``type_`` parameter from rendering within this
command when ``type_`` is ``None``. Pull request courtesy David Hills.
.. change::
:tags: bug, commands
:tickets: 1299
Fixed issue where the ``revision_environment`` directive in ``alembic.ini``
was ignored by the ``alembic merge`` command, leading to issues when other
configurational elements depend upon ``env.py`` being invoked within the
command.
.. change::
:tags: bug, autogenerate
:tickets: 1302
Fixed issue where the ``ForeignKeyConstraint.match`` parameter would not be
rendered in autogenerated migrations. Pull request courtesy Asib
Kamalsada.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Change the default value of
:paramref:`.EnvironmentContext.configure.compare_type` to ``True``.
As Alembic's autogenerate for types was dramatically improved in
version 1.4 released in 2020, the type comparison feature is now much
more reliable so is now enabled by default.
.. change::
:tags: feature, autogenerate
:tickets: 1275
Added new feature to the "code formatter" function which allows standalone
executable tools to be run against code, without going through the Python
interpreter. Known as the ``exec`` runner, it complements the existing
``console_scripts`` runner by allowing non-Python tools such as ``ruff`` to
be used. Pull request courtesy Mihail Milushev.
.. seealso::
:ref:`post_write_hooks_config`
.. changelog::
:version: 1.11.3
:released: August 16, 2023
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 1270
Improved autogenerate compare of expression based indexes on PostgreSQL
to produce fewer wrong detections.
.. change::
:tags: bug, autogenerate
:tickets: 1291
Fixed issue with ``NULLS NOT DISTINCT`` detection in postgresql that
would keep detecting changes in the index or unique constraint.
.. change::
:tags: bug, commands
:tickets: 1273
Added ``encoding="locale"`` setting to the use of Python's
``ConfigParser.read()``, so that a warning is not generated when using the
recently added Python feature ``PYTHONWARNDEFAULTENCODING`` specified in
:pep:`597`. The encoding is passed as the ``"locale"`` string under Python
3.10 and greater, which indicates that the system-level locale should be
used, as was the case already here. Pull request courtesy Kevin Kirsche.
.. changelog::
:version: 1.11.2
:released: August 4, 2023
.. change::
:tags: usecase, typing
:tickets: 1253
Added typing to the default script mako templates.
.. change::
:tags: usecase, autogenerate
:tickets: 1248
Added support in autogenerate for ``NULLS NOT DISTINCT`` in
the PostgreSQL dialect.
.. change::
:tags: bug
:tickets: 1261
Fixed format string logged when running a post write hook.
Pull request courtesy of Mathieu Défosse.
.. change::
:tags: feature, operations
:tickets: 151
Added parameters ``if_exists`` and ``if_not_exists`` for index operations.
Pull request courtesy of Max Adrian.
.. changelog::
:version: 1.11.1
:released: May 17, 2023
.. change::
:tags: bug, autogenerate, regression
:tickets: 1243, 1245
As Alembic 1.11.0 is considered a major release (Alembic does not use
semver, nor does its parent project SQLAlchemy; this has been
:ref:`clarified <versioning_scheme>` in the documentation), change
:ticket:`1130` modified calling signatures for most operations to consider
all optional keyword parameters to be keyword-only arguments, to match what
was always documented and generated by autogenerate. However, two of these
changes were identified as possibly problematic without a more formal
deprecation warning being emitted which were the ``table_name`` parameter
to :meth:`.Operations.drop_index`, which was generated positionally by
autogenerate prior to version 0.6.3 released in 2014, and ``type_`` in
:meth:`.Operations.drop_constraint` and
:meth:`.BatchOperations.drop_constraint`, which was documented positionally
in one example in the batch documentation.
These two signatures have been
restored to allow those particular parameters to be passed positionally. A
future change will include formal deprecation paths (with warnings) for
these arguments where they will again become keyword-only in a future
"Significant Minor" release.
.. change::
:tags: bug, typing
:tickets: 1246
Fixed typing use of :class:`~sqlalchemy.schema.Column` and other
generic SQLAlchemy classes.
.. change::
:tags: bug, typing, regression
:tickets: 1244
Restored the output type of :meth:`.Config.get_section` to include
``Dict[str, str]`` as a potential return type, which had been changed to
immutable ``Mapping[str, str]``. When a section is returned and the default
is not used, a mutable dictionary is returned.
.. changelog::
:version: 1.11.0
:released: May 15, 2023
.. change::
:tags: bug, batch
:tickets: 1237
Added placeholder classes for :class:`~.sqla.Computed` and
:class:`~.sqla.Identity` when older 1.x SQLAlchemy versions are in use,
namely prior to SQLAlchemy 1.3.11 when the :class:`~.sqla.Computed`
construct was introduced. Previously these were set to None, however this
could cause issues with certain codepaths that were using ``isinstance()``
such as one within "batch mode".
.. change::
:tags: bug, batch
:tickets: 1221
Correctly pass previously ignored arguments ``insert_before`` and
``insert_after`` in ``batch_alter_column``
.. change::
:tags: change, py3k
:tickets: 1130
Argument signatures of Alembic operations now enforce that keyword-only
arguments are passed as keywords and not positionally, such as
:paramref:`.Operations.create_table.schema`,
:paramref:`.Operations.add_column.type_`, etc.
.. change::
:tags: bug, postgresql
:tickets: 1230
Fix autogenerate issue with PostgreSQL :class:`.ExcludeConstraint`
that included sqlalchemy functions. The function text was previously
rendered as a plain string without surrounding with ``text()``.
.. change::
:tags: bug, mysql, regression
:tickets: 1240
Fixed regression caused by :ticket:`1166` released in version 1.10.0 which
caused MySQL unique constraints with multiple columns to not compare
correctly within autogenerate, due to different sorting rules on unique
constraints vs. indexes, which in MySQL are shared constructs.
.. change::
:tags: misc
:tickets: 1220
Update code snippets within docstrings to use ``black`` code formatting.
Pull request courtesy of James Addison.
.. change::
:tags: bug, typing
:tickets: 1093
Updated stub generator script to also add stubs method definitions for the
:class:`.Operations` class and the :class:`.BatchOperations` class obtained
from :meth:`.Operations.batch_alter_table`. As part of this change, the
class hierarchy of :class:`.Operations` and :class:`.BatchOperations` has
been rearranged on top of a common base class :class:`.AbstractOperations`
in order to type correctly, as :class:`.BatchOperations` uses different
method signatures for operations than :class:`.Operations`.
.. change::
:tags: bug, typing
Repaired the return signatures for :class:`.Operations` that mostly
return ``None``, and were erroneously referring to ``Optional[Table]``
in many cases.
.. change::
:tags: usecase, commands
:tickets: 1109
Added quiet option to the command line, using the ``-q/--quiet``
option. This flag will prevent alembic from logging anything
to stdout.
.. change::
:tags: bug, autogenerate
:tickets: 1178
Modified the autogenerate implementation for comparing "server default"
values from user-defined metadata to not apply any quoting to the value
before comparing it to the server-reported default, except for within
dialect-specific routines as needed. This change will affect the format of
the server default as passed to the
:paramref:`.EnvironmentContext.configure.compare_server_default` hook, as
well as for third party dialects that implement a custom
``compare_server_default`` hook in their alembic impl, to be passed "as is"
and not including additional quoting. Custom implementations which rely
on this quoting should adjust their approach based on observed formatting.
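For reference, a sketch of a hook receiving the now-unquoted values (the
function body and the ``connection`` / ``target_metadata`` names are
illustrative, assuming a typical ``env.py``)::

    from alembic import context

    def my_compare_server_default(
        ctx,
        inspected_column,
        metadata_column,
        inspected_default,
        metadata_default,
        rendered_metadata_default,
    ):
        # both defaults now arrive "as is", without additional quoting applied
        return None  # None falls back to the default comparison

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        compare_server_default=my_compare_server_default,
    )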
.. change::
:tags: bug, api, autogenerate
:tickets: 1235
Fixed issue where :func:`.autogenerate.render_python_code` function did not
provide a default value for the ``user_module_prefix`` variable, leading to
``NoneType`` errors when autogenerate structures included user-defined
types. Added new parameter
:paramref:`.autogenerate.render_python_code.user_module_prefix` to allow
this to be set as well as to default to ``None``. Pull request courtesy
tangkikodo.
.. change::
:tags: usecase, asyncio
:tickets: 1231
Added :meth:`.AbstractOperations.run_async` to the operation module to
allow running async functions in the ``upgrade`` or ``downgrade`` migration
function when running alembic using an async dialect. This function will
receive as first argument an
:class:`~sqlalchemy.ext.asyncio.AsyncConnection` sharing the transaction
used in the migration context.
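A minimal sketch of the new method inside a migration script (the table and
statement are illustrative)::

    import sqlalchemy as sa

    from alembic import op

    async def _data_fixup(connection):
        # "connection" is the AsyncConnection sharing the migration transaction
        await connection.execute(sa.text("UPDATE account SET status = 'active'"))

    def upgrade():
        op.run_async(_data_fixup)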
.. changelog::
:version: 1.10.4
:released: April 24, 2023
.. change::
:tags: postgresql, autogenerate, feature
:tickets: 1213
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL sort option, such as ``ASC`` or ``NULLS FIRST``.
The sort options are correctly detected only when defined using the
sqlalchemy modifier functions, such as ``asc()`` or ``nulls_first()``,
or the equivalent methods.
Passing sort options inside the ``postgresql_ops`` dict is not supported.
.. change::
:tags: bug, operations
:tickets: 1215
Fixed issue where using a directive such as ``op.create_foreign_key()`` to
create a self-referential constraint on a single table where the same
column was present on both sides (e.g. within a composite foreign key)
would produce an error under SQLAlchemy 2.0 and a warning under SQLAlchemy
1.4 indicating that a duplicate column was being added to a table.
.. changelog::
:version: 1.10.3
:released: April 5, 2023
.. change::
:tags: bug, typing
:tickets: 1191, 1201
Fixed various typing issues observed with pyright, including issues
involving the combination of :class:`.Function` and
:meth:`.MigrationContext.begin_transaction`.
.. change::
:tags: bug, autogenerate
:tickets: 1212
Fixed error raised by alembic when running autogenerate after removing
a function based index.
.. changelog::
:version: 1.10.2
:released: March 8, 2023
.. change::
:tags: bug, ops
:tickets: 1196
Fixed regression where Alembic would not run with older SQLAlchemy 1.3
versions prior to 1.3.24 due to a missing symbol. Workarounds have been
applied for older 1.3 versions.
.. changelog::
:version: 1.10.1
:released: March 6, 2023
.. change::
:tags: bug, postgresql
:tickets: 1184
Fixed issue regarding PostgreSQL :class:`.ExcludeConstraint`, where
constraint elements which made use of :func:`.literal_column` could not be
rendered for autogenerate. Additionally, using SQLAlchemy 2.0.5 or greater,
:func:`.text()` constructs are also supported within PostgreSQL
:class:`.ExcludeConstraint` objects for autogenerate render. Pull request
courtesy Jan Katins.
.. change::
:tags: bug, batch, regression
:tickets: 1195
Fixed regression for 1.10.0 where :class:`.Constraint` objects were
suddenly required to have non-None name fields when using batch mode, which
was not previously a requirement.
.. changelog::
:version: 1.10.0
:released: March 5, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1166
Fixed issue in index detection where autogenerate change detection would
consider indexes with the same columns but with different order as equal,
while in general they are not equivalent in how a database will use them.
.. change::
:tags: feature, revisioning
:tickets: 760
Recursive traversal of revision files in a particular revision directory is
now supported, by indicating ``recursive_version_locations = true`` in
alembic.ini. Pull request courtesy ostr00000.
.. change::
:tags: bug, autogenerate, sqlite
:tickets: 1165
Fixed issue where indexes on SQLite which include SQL expressions would not
compare correctly, generating false positives under autogenerate. These
indexes are now skipped, generating a warning, in the same way that
expression-based indexes on PostgreSQL are skipped and generate warnings
when SQLAlchemy 1.x installations are in use. Note that reflection of
SQLite expression-based indexes continues to not yet be supported under
SQLAlchemy 2.0, even though PostgreSQL expression-based indexes have now
been implemented.
.. change::
:tags: bug, mssql
:tickets: 1187
Properly escape constraint name on SQL Server when dropping
a column while specifying ``mssql_drop_default=True`` or
``mssql_drop_check=True`` or ``mssql_drop_foreign_key=True``.
.. change::
:tags: usecase, autogenerate, postgresql
Added support for autogenerate comparison of indexes on PostgreSQL which
include SQL expressions, when using SQLAlchemy 2.0; the previous warning
that such indexes were skipped are removed when the new functionality
is in use. When using SQLAlchemy versions prior to the 2.0 series,
the indexes continue to be skipped with a warning.
.. changelog::
:version: 1.9.4
:released: February 16, 2023
.. change::
:tags: bug, mssql
:tickets: 1177
Ongoing fixes for SQL Server server default comparisons under autogenerate,
adjusting for SQL Server's collapsing of whitespace between SQL function
arguments when reporting on a function-based server default, as well as its
arbitrary addition of parenthesis within arguments; the approach has now
been made more aggressive by stripping the two default strings to compare
of all whitespace, parenthesis, and quoting characters.
.. change::
:tags: bug, postgresql
Fixed PostgreSQL server default comparison to handle SQL expressions
sent as ``text()`` constructs, such as ``text("substring('name', 1, 3)")``,
which previously would raise errors when attempting to run a server-based
comparison.
.. change::
:tags: bug, autogenerate
:tickets: 1180
Removed a mis-use of the
:paramref:`.EnvironmentContext.configure.render_item` callable where the
"server_default" renderer would be erroneously used within the server
default comparison process, which is working against SQL expressions, not
Python code.
.. change::
:tags: bug, commands
Fixed regression introduced in 1.7.0 where the "config" object passed to
the template context when running the :func:`.merge` command
programmatically failed to be correctly populated. Pull request courtesy
Brendan Gann.
.. changelog::
:version: 1.9.3
:released: February 7, 2023
.. change::
:tags: bug, autogenerate
:tickets: 1167
Fixed issue where rendering of user-defined types that then went on to use
the ``.with_variant()`` method would fail to render, if using SQLAlchemy
2.0's version of variants.
.. changelog::
:version: 1.9.2
:released: January 14, 2023
.. change::
:tags: bug, typing
:tickets: 1146, 1147
Fixed typing definitions for :meth:`.EnvironmentContext.get_x_argument`.
Typing stubs are now generated for overloaded proxied methods such as
:meth:`.EnvironmentContext.get_x_argument`.
.. change::
:tags: bug, autogenerate
:tickets: 1152
Fixed regression caused by :ticket:`1145` where the string transformations
applied to server defaults caused expressions such as ``(getdate())`` to no
longer compare as equivalent on SQL Server, among others.
.. changelog::
:version: 1.9.1
:released: December 23, 2022
.. change::
:tags: bug, autogenerate
:tickets: 1145
Fixed issue where server default compare would not work for string defaults
that contained backslashes, due to mis-rendering of these values when
comparing their contents.
.. change::
:tags: bug, oracle
Implemented basic server default comparison for the Oracle backend;
previously, Oracle's formatting of reflected defaults prevented any
matches from occurring.
.. change::
:tags: bug, sqlite
Adjusted SQLite's compare server default implementation to better handle
defaults with or without parens around them, from both the reflected and
the local metadata side.
.. change::
:tags: bug, mssql
Adjusted SQL Server's compare server default implementation to better
handle defaults with or without parens around them, from both the reflected
and the local metadata side.
.. changelog::
:version: 1.9.0
:released: December 15, 2022
.. change::
:tags: feature, commands
:tickets: 724
Added new Alembic command ``alembic check``. This performs the widely
requested feature of running an "autogenerate" comparison between the
current database and the :class:`.MetaData` that's currently set up for
autogenerate, returning an error code if the two do not match, based on
current autogenerate settings. Pull request courtesy Nathan Louie.
.. seealso::
:ref:`alembic_check`
.. change::
:tags: bug, tests
Fixed issue in tox.ini file where changes in the tox 4.0 series to the
format of "passenv" caused tox to not function correctly, in particular
raising an error as of tox 4.0.6.
.. change::
:tags: bug, typing
:tickets: 1110
Fixed typing issue where :paramref:`.revision.process_revision_directives`
was not fully typed; additionally ensured all ``Callable`` and ``Dict``
arguments to :meth:`.EnvironmentContext.configure` include parameters in
the typing declaration.
Additionally updated the codebase for Mypy 0.990 compliance.
.. changelog::
:version: 1.8.1
:released: July 13, 2022
.. change::
:tags: bug, sqlite
:tickets: 1065
Fixed bug where the SQLite implementation of
:meth:`.Operations.rename_table` would render an explicit schema name for
both the old and new table name, which while is the standard ALTER syntax,
is not accepted by SQLite's syntax which doesn't support a rename across
schemas. In particular, the syntax issue would prevent batch mode from
working for SQLite databases that made use of attached databases (which are
treated as "schemas" in SQLAlchemy).
.. change::
:tags: bug, batch
:tickets: 1021
Added an error raise for the condition where
:meth:`.Operations.batch_alter_table` is used in ``--sql`` mode, where the
operation requires table reflection, as is the case when running against
SQLite without giving it a fixed ``Table`` object. Previously the operation
would fail with an internal error. To get a "move and copy" batch
operation as a SQL script without connecting to a database,
a ``Table`` object should be passed to the
:paramref:`.Operations.batch_alter_table.copy_from` parameter so that
reflection may be skipped.
.. changelog::
:version: 1.8.0
:released: May 31, 2022
.. change::
:tags: feature, typing
:tickets: 764
:pep:`484` typing annotations have been added to the ``env.py`` and
revision template files within migration templates. Pull request by Nikita
Sobolev.
.. change::
:tags: usecase, operations
:tickets: 1037
The ``op.drop_table()`` operation directive will now trigger the
``before_drop()`` and ``after_drop()`` DDL event hooks at the table level,
which is similar to how the ``before_create()`` and ``after_create()``
hooks are triggered by the ``op.create_table()`` directive. Note that as
``op.drop_table()`` accepts only a table name and optional schema name, the
``Table`` object received by the event will not have any information within
it other than the table name and schema name.
.. change::
:tags: installation, changed
:tickets: 1025
Alembic 1.8 now supports Python 3.7 and above.
.. change::
:tags: changed, environment
:tickets: 987
The "Pylons" environment template has been removed as of Alembic 1.8. This
template was based on the very old pre-Pyramid Pylons web framework which
has been long superseded by Pyramid.
.. change::
:tags: bug, revisioning
:tickets: 1026
Fixed issue where a downgrade using a relative revision would
fail in case of multiple branches with a single effectively
head due to interdependencies between revisions.
.. change::
:tags: usecase, commands
:tickets: 1027
Added new token ``epoch`` to the ``file_template`` option, which will
populate the integer epoch as determined by ``int(create_date.timestamp())``.
Pull request courtesy Caio Carvalho.
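As a sketch (the surrounding template is illustrative; note the doubled
percent signs required by ini-file interpolation)::

    [alembic]
    file_template = %%(epoch)s_%%(rev)s_%%(slug)s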
.. change::
:tags: bug, batch
:tickets: 1034
Fixed issue in batch mode where CREATE INDEX would not use a new column
name in the case of a column rename.
.. changelog::
:version: 1.7.7
:released: March 14, 2022
.. change::
:tags: bug, operations
:tickets: 1004
Fixed issue where using :meth:`.Operations.create_table` in conjunction
with a :class:`.CheckConstraint` that referred to table-bound
:class:`.Column` objects rather than string expressions would be added to
the parent table potentially multiple times, resulting in an incorrect DDL
sequence. Pull request courtesy Nicolas CANIART.
.. change::
:tags: bug, environment
:tickets: 986
The ``logging.fileConfig()`` line in ``env.py`` templates, which is used
to set up Python logging for the migration run, is now conditional on
:attr:`.Config.config_file_name` not being ``None``. Otherwise, the line
is skipped as there is no default logging configuration present.
.. change::
:tags: bug, mssql
:tickets: 977
Fixed bug where an :meth:`.Operations.alter_column` operation would change
a "NOT NULL" column to "NULL" by emitting an ALTER COLUMN statement that
did not specify "NOT NULL". (In the absence of "NOT NULL" T-SQL was
implicitly assuming "NULL"). An :meth:`.Operations.alter_column` operation
that specifies :paramref:`.Operations.alter_column.type_` should also
include either :paramref:`.Operations.alter_column.nullable` or
:paramref:`.Operations.alter_column.existing_nullable` to inform Alembic as
to whether the emitted DDL should include "NULL" or "NOT NULL"; a warning
is now emitted if this is missing under this scenario.
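For example, a sketch of a type change that includes the nullability hint
(table, column, and type here are hypothetical)::

    import sqlalchemy as sa

    from alembic import op

    op.alter_column(
        "account",
        "name",
        type_=sa.String(100),
        existing_nullable=False,  # informs the emitted DDL to render NOT NULL
    )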
.. changelog::
:version: 1.7.6
:released: February 1, 2022
.. change::
:tags: bug, batch, regression
:tickets: 982
Fixed regression where usage of a ``with_variant()`` datatype in
conjunction with the ``existing_type`` option of ``op.alter_column()``
under batch mode would lead to an internal exception.
.. change::
:tags: usecase, commands
:tickets: 964
Add a new command ``alembic ensure_version``, which will ensure that the
Alembic version table is present in the target database, but does not
alter its contents. Pull request courtesy Kai Mueller.
.. change::
:tags: bug, autogenerate
Implemented support for recognizing and rendering SQLAlchemy "variant"
types going forward into SQLAlchemy 2.0, where the architecture of
"variant" datatypes will be changing.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 968
Added a rule to the MySQL impl so that the translation between JSON /
LONGTEXT is accommodated by autogenerate, treating LONGTEXT from the server
as equivalent to an existing JSON in the model.
.. change::
:tags: mssql
Removed a warning raised by SQLAlchemy when dropping constraints
on MSSQL regarding statement caching.
.. changelog::
:version: 1.7.5
:released: November 11, 2021
.. change::
:tags: bug, tests
Adjustments to the test suite to accommodate for error message changes
occurring as of SQLAlchemy 1.4.27.
.. changelog::
:version: 1.7.4
:released: October 6, 2021
.. change::
:tags: bug, regression
:tickets: 934
Fixed a regression that prevented the use of post write hooks
on Python versions lower than 3.9.
.. change::
:tags: bug, environment
:tickets: 944
Fixed issue where the :meth:`.MigrationContext.autocommit_block` feature
would fail to function when using a SQLAlchemy engine using 2.0 future
mode.
.. changelog::
:version: 1.7.3
:released: September 17, 2021
.. change::
:tags: bug, mypy
:tickets: 914
Fixed type annotations for the "constraint_name" argument of operations
``create_primary_key()``, ``create_foreign_key()``. Pull request courtesy
TilmanK.
.. changelog::
:version: 1.7.2
:released: September 17, 2021
.. change::
:tags: bug, typing
:tickets: 900
Added missing attributes from context stubs.
.. change::
:tags: bug, mypy
:tickets: 897
Fixed an import in one of the .pyi files that was triggering an
assertion error in some versions of mypy.
.. change::
:tags: bug, regression, ops
:tickets: 920
Fixed issue where registration of custom ops was prone to failure due to
the registration process running ``exec()`` on generated code that as of
the 1.7 series includes pep-484 annotations, which in the case of end user
code would result in name resolution errors when the exec occurs. The logic
in question has been altered so that the annotations are rendered as
forward references so that the ``exec()`` can proceed.
.. changelog::
:version: 1.7.1
:released: August 30, 2021
.. change::
:tags: bug, installation
:tickets: 893
Corrected "universal wheel" directive in setup.cfg so that building a wheel
does not target Python 2. The PyPI files index for 1.7.0 was corrected
manually. Pull request courtesy layday.
.. change::
:tags: bug, pep484
:tickets: 895
Fixed issue in generated .pyi files where default values for ``Optional``
arguments were missing, thereby causing mypy to consider them as required.
.. change::
:tags: bug, regression, batch
:tickets: 896
Fixed regression in batch mode due to :ticket:`883` where the "auto" mode
of batch would fail to accommodate any additional migration directives
beyond encountering an ``add_column()`` directive, due to a mis-application
of the conditional logic that was added as part of this change, leading to
"recreate" mode not being used in cases where it is required for SQLite
such as for unique constraints.
.. changelog::
:version: 1.7.0
:released: August 30, 2021
.. change::
:tags: bug, operations
:tickets: 879
Fixed regression due to :ticket:`803` where the ``.info`` and ``.comment``
attributes of ``Table`` would be lost inside of the :class:`.DropTableOp`
class, which when "reversed" into a :class:`.CreateTableOp` would then have
lost these elements. Pull request courtesy Nicolas CANIART.
.. change::
:tags: feature, environment
:tickets: 842
Enhance ``version_locations`` parsing to handle paths containing spaces.
The new configuration option ``version_path_separator`` specifies the
character to use when splitting the ``version_locations`` string. The
default for new configurations is ``version_path_separator = os``,
which will use ``os.pathsep`` (e.g., ``;`` on Windows).
.. change::
:tags: installation, changed
Alembic 1.7 now supports Python 3.6 and above; support for prior versions
including Python 2.7 has been dropped.
.. change::
:tags: bug, sqlite, batch
:tickets: 883
Batch "auto" mode will now select for "recreate" if the ``add_column()``
operation is used on SQLite, and the column itself meets the criteria for
SQLite where ADD COLUMN is not allowed; in this case a functional or
parenthesized SQL expression or a ``Computed`` (i.e. generated) column.
.. change::
:tags: changed, installation
:tickets: 674
Make the ``python-dateutil`` library an optional dependency.
This library is only required if the ``timezone`` option
is used in the Alembic configuration.
An extra require named ``tz`` is available with
``pip install alembic[tz]`` to install it.
.. change::
:tags: bug, commands
:tickets: 856
Re-implemented the ``python-editor`` dependency as a small internal
function to avoid the need for external dependencies.
.. change::
:tags: usecase, batch
:tickets: 884
Named CHECK constraints are now supported by batch mode, and will
automatically be part of the recreated table assuming they are named. They
also can be explicitly dropped using ``op.drop_constraint()``. For
"unnamed" CHECK constraints, these are still skipped as they cannot be
distinguished from the CHECK constraints that are generated by the
``Boolean`` and ``Enum`` datatypes.
Note that this change may require adjustments to migrations that drop or
rename columns which feature an associated named check constraint, such
that an additional ``op.drop_constraint()`` directive should be added for
that named constraint as there will no longer be an associated column
for it; for the ``Boolean`` and ``Enum`` datatypes, an ``existing_type``
keyword may be passed to ``BatchOperations.drop_constraint`` as well.
.. seealso::
:ref:`batch_schematype_constraints`
:ref:`batch_check_constraints`
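A sketch of explicitly dropping a named CHECK constraint in batch mode
(constraint and table names are hypothetical)::

    from alembic import op

    with op.batch_alter_table("account") as batch_op:
        batch_op.drop_constraint("ck_account_positive_balance", type_="check")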
.. change::
:tags: changed, installation
:tickets: 885
The dependency on ``pkg_resources`` which is part of ``setuptools`` has
been removed, so there is no longer any runtime dependency on
``setuptools``. The functionality has been replaced with
``importlib.metadata`` and ``importlib.resources`` which are both part of
Python std.lib, or via the PyPI dependency ``importlib-metadata`` for Python
version < 3.8 and ``importlib-resources`` for Python version < 3.9
(while importlib.resources was added to Python in 3.7, it did not include
the "files" API until 3.9).
.. change::
:tags: feature, tests
:tickets: 855
Created a "test suite" similar to the one for SQLAlchemy, allowing
developers of third-party dialects to test their code against a set of
Alembic tests that have been specially selected to exercise
back-end database operations. At the time of release,
third-party dialects that have adopted the Alembic test suite to verify
compatibility include
`CockroachDB <https://pypi.org/project/sqlalchemy-cockroachdb/>`_ and
`SAP ASE (Sybase) <https://pypi.org/project/sqlalchemy-sybase/>`_.
.. change::
:tags: bug, postgresql
:tickets: 874
Fixed issue where usage of the PostgreSQL ``postgresql_include`` option
within a :meth:`.Operations.create_index` would raise a KeyError, as the
additional column(s) need to be added to the table object used by the
construct internally. The issue is equivalent to the SQL Server issue fixed
in :ticket:`513`. Pull request courtesy Steven Bronson.
.. change::
:tags: feature, general
pep-484 type annotations have been added throughout the library.
Additionally, stub .pyi files have been added for the "dynamically"
generated Alembic modules ``alembic.op`` and ``alembic.config``, which
include complete function signatures and docstrings, so that the functions
in these namespaces will have both IDE support (vscode, pycharm, etc) as
well as support for typing tools like Mypy. The files themselves are
statically generated from their source functions within the source tree.
.. changelog::
:version: 1.6.5
:released: May 27, 2021
.. change::
:tags: bug, autogenerate
:tickets: 849
Fixed issue where dialect-specific keyword arguments within the
:class:`.DropIndex` operation directive would not render in the
autogenerated Python code. As support was improved for adding dialect
specific arguments to directives as part of :ticket:`803`, in particular
arguments such as "postgresql_concurrently" which apply to the actual
create/drop of the index, support was needed for these to render even in a
drop index operation. Pull request courtesy Jet Zhou.
.. changelog::
:version: 1.6.4
:released: May 24, 2021
.. change::
:tags: bug, regression, op directives
:tickets: 848
Fixed regression caused by the just-released fix for :ticket:`844`, which
scaled back the filter for ``unique=True``/``index=True`` too far, such that
these directives no longer worked for the ``op.create_table()`` op.
.. changelog::
:version: 1.6.3
:released: May 21, 2021
.. change::
:tags: bug, regression, autogenerate
:tickets: 844
Fixed 1.6-series regression where ``UniqueConstraint`` and to a lesser
extent ``Index`` objects would be doubled up in the generated model when
the ``unique=True`` / ``index=True`` flags were used.
.. change::
:tags: bug, autogenerate
:tickets: 839
Fixed a bug where paths defined in post-write hook options
would be wrongly escaped in non posix environment (Windows).
.. change::
:tags: bug, regression, versioning
:tickets: 843
Fixed regression where a revision file that contained its own down revision
as a dependency would cause an endless loop in the traversal logic.
.. changelog::
:version: 1.6.2
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 839
Fixed additional regression nearly the same as that of :ticket:`838` just
released in 1.6.1 but within a slightly different codepath, where "alembic
downgrade head" (or equivalent) would fail instead of iterating no
revisions.
.. changelog::
:version: 1.6.1
:released: May 6, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 838
Fixed regression in new revisioning traversal where "alembic downgrade
base" would fail if the database itself were clean and unversioned;
additionally repairs the case where downgrade would fail if attempting
to downgrade to the current head that is already present.
.. changelog::
:version: 1.6.0
:released: May 3, 2021
.. change::
:tags: bug, autogenerate
:tickets: 803
Refactored the implementation of :class:`.MigrateOperation` constructs such
as :class:`.CreateIndexOp`, :class:`.CreateTableOp`, etc. so that they no
longer rely upon maintaining a persistent version of each schema object
internally; instead, the state variables of each operation object will be
used to produce the corresponding construct when the operation is invoked.
The rationale is so that environments which make use of
operation-manipulation schemes such as those discussed in
:ref:`autogen_rewriter` are better supported, allowing end-user code to
manipulate the public attributes of these objects which will then be
expressed in the final output, an example is
``some_create_index_op.kw["postgresql_concurrently"] = True``.
Previously, these objects when generated from autogenerate would typically
hold onto the original, reflected element internally without honoring the
other state variables of each construct, preventing the public API from
working.
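As a sketch of the kind of recipe this change enables (the hook below is
illustrative)::

    from alembic.autogenerate import rewriter
    from alembic.operations import ops

    writer = rewriter.Rewriter()

    @writer.rewrites(ops.CreateIndexOp)
    def add_concurrently(context, revision, op):
        # public attributes of the op may now be manipulated directly
        op.kw["postgresql_concurrently"] = True
        return op

The writer is then passed to
``context.configure(process_revision_directives=writer)`` within ``env.py``.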
.. change::
:tags: bug, environment
:tickets: 829
Fixed regression caused by the SQLAlchemy 1.4/2.0 compatibility switch
where calling ``.rollback()`` or ``.commit()`` explicitly within the
``context.begin_transaction()`` context manager would cause it to fail when
the block ended, as it did not expect that the transaction was manually
closed.
.. change::
:tags: bug, autogenerate
:tickets: 827
Improved the rendering of ``op.add_column()`` operations when adding
multiple columns to an existing table, so that the order of these
statements matches the order in which the columns were declared in the
application's table metadata. Previously the added columns were being
sorted alphabetically.
.. change::
:tags: feature, autogenerate
:tickets: 819
Fixed the documentation regarding the default command-line argument position of
the revision script filename within the post-write hook arguments. Implemented a
``REVISION_SCRIPT_FILENAME`` token, enabling the position to be changed. Switched
from ``str.split()`` to ``shlex.split()`` for more robust command-line argument
parsing.
.. change::
:tags: feature
:tickets: 822
Implemented a ``.cwd`` (current working directory) suboption for post-write hooks
(of type ``console_scripts``). This is useful for tools like pre-commit, which
rely on the working directory to locate the necessary config files. Added
pre-commit as an example to the documentation. Minor change: renamed some variables
from ticket #819 to improve readability.
.. change::
:tags: bug, versioning
:tickets: 765, 464
The algorithm used for calculating downgrades/upgrades/iterating
revisions has been rewritten, to resolve ongoing issues of branches
not being handled consistently particularly within downgrade operations,
as well as for overall clarity and maintainability. This change includes
that a deprecation warning is emitted if an ambiguous command such
as "downgrade -1" when multiple heads are present is given.
In particular, the change implements a long-requested use case of allowing
downgrades of a single branch to a branchpoint.
Huge thanks to Simon Bowly for their impressive efforts in successfully
tackling this very difficult problem.
.. change::
:tags: bug, batch
:tickets: 799
Added missing ``batch_op.create_table_comment()``,
``batch_op.drop_table_comment()`` directives to batch ops.
.. changelog::
:version: 1.5.8
:released: March 23, 2021
.. change::
:tags: bug, environment
:tickets: 816
Fixed regression caused by SQLAlchemy 1.4 where the "alembic current"
command would fail due to changes in the ``URL`` object.
.. changelog::
:version: 1.5.7
:released: March 11, 2021
.. change::
:tags: bug, autogenerate
:tickets: 813
Adjusted the recently added
:paramref:`.EnvironmentContext.configure.include_name` hook to accommodate
for additional object types such as "views" that don't have a parent table,
to support third party recipes and extensions. Pull request courtesy Oliver
Rice.
.. changelog::
:version: 1.5.6
:released: March 5, 2021
.. change::
:tags: bug, mssql, operations
:tickets: 812
Fixed bug where the "existing_type" parameter, which the MSSQL dialect
requires in order to change the nullability of a column in the absence of
also changing the column type, would cause an ALTER COLUMN operation to
incorrectly render a second ALTER statement without the nullability if a
new type were also present, as the MSSQL-specific contract did not
anticipate all three of "nullability", ``"type_"`` and "existing_type" being
sent at the same time.
.. change::
:tags: template
:tickets: 805
Add async template to Alembic to bootstrap environments that use
async DBAPI. Updated the cookbook to include a migration guide
on how to adapt an existing environment for use with DBAPI drivers.
.. changelog::
:version: 1.5.5
:released: February 20, 2021
.. change::
:tags: bug
Adjusted the use of SQLAlchemy's ".copy()" internals to use "._copy()"
for version 1.4.0, as this method is being renamed.
.. change::
:tags: bug, environment
:tickets: 797
Added new config file option ``prepend_sys_path``, which is a series of
paths that will be prepended to sys.path; the default value in newly
generated alembic.ini files is ".". This fixes a long-standing issue
where for some reason running the alembic command line would not place the
local "." path in sys.path, meaning an application locally present in "."
and importable through normal channels, e.g. python interpreter, pytest,
etc. would not be located by Alembic, even though the ``env.py`` file is
loaded relative to the current path when ``alembic.ini`` contains a
relative path. To enable for existing installations, add the option to the
alembic.ini file as follows::
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
.. seealso::
:ref:`installation` - updated documentation reflecting that local
installation of the project is not necessary if running the Alembic cli
from the local path.
.. changelog::
:version: 1.5.4
:released: February 3, 2021
.. change::
:tags: bug, versioning
:tickets: 789
Fixed bug in versioning model where a downgrade across a revision with a
dependency on another branch, yet an ancestor is also dependent on that
branch, would produce an erroneous state in the alembic_version table,
making upgrades impossible without manually repairing the table.
.. changelog::
:version: 1.5.3
:released: January 29, 2021
.. change::
:tags: bug, autogenerate
:tickets: 786
Changed the default ordering of "CREATE" and "DROP" statements for indexes and
unique constraints within the autogenerate process, so that for example in
an upgrade() operation, a particular index or constraint that is to be
replaced such as for a casing convention change will not produce any naming
conflicts. For foreign key constraint objects, this is already how
constraints are ordered, and for table objects, users would normally want
to use :meth:`.Operations.rename_table` in any case.
.. change::
:tags: bug, autogenerate, mssql
:tickets: 787
Fixed assorted autogenerate issues with SQL Server:
* ignore default reflected identity on primary_key columns
* improve server default comparison
.. change::
:tags: bug, mysql, autogenerate
:tickets: 788
Fixed issue where autogenerate rendering of ``op.alter_column()`` would
fail to include MySQL ``existing_nullable=False`` if the column were part
of a primary key constraint within the table metadata.
.. changelog::
:version: 1.5.2
:released: January 20, 2021
.. change::
:tags: bug, versioning, regression
:tickets: 784
Fixed regression where new "loop detection" feature introduced in
:ticket:`757` produced false positives for revision names that have
overlapping substrings between revision number and down revision and/or
dependency, if the downrev/dependency were not in sequence form.
.. change::
:tags: bug, environment
:tickets: 782
Fixed regression where Alembic would fail to create a transaction properly
if the :class:`sqlalchemy.engine.Connection` were a so-called "branched"
connection, that is, one where the ``.connect()`` method had been called to
create a "sub" connection.
.. changelog::
:version: 1.5.1
:released: January 19, 2021
.. change::
:tags: bug, installation, commands
:tickets: 780
Fixed installation issue where the "templates" directory was not being
installed, preventing commands like "list_templates" and "init" from
working.
.. changelog::
:version: 1.5.0
:released: January 18, 2021
.. change::
:tags: usecase, operations
:tickets: 730
Added support for rendering of "identity" elements on
:class:`.Column` objects, supported in SQLAlchemy via
the :class:`.Identity` element introduced in version 1.4.
Adding columns with identity is supported on PostgreSQL,
MSSQL and Oracle. Changing the identity options or removing
it is supported only on PostgreSQL and Oracle.
.. change::
:tags: changed, environment
To accommodate SQLAlchemy 1.4 and 2.0, the migration model now no longer
assumes that the SQLAlchemy Connection will autocommit an individual
operation. This essentially means that for databases that use
non-transactional DDL (pysqlite current driver behavior, MySQL), there is
still a BEGIN/COMMIT block that will surround each individual migration.
Databases that support transactional DDL should continue to have the
same flow, either per migration or per-entire run, depending on the
value of the :paramref:`.Environment.configure.transaction_per_migration`
flag.
.. change::
:tags: changed, environment
A :class:`.CommandError` is raised if a ``sqlalchemy.engine.Engine`` is
passed to the :meth:`.MigrationContext.configure` method instead of a
``sqlalchemy.engine.Connection`` object. Previously, this would be a
warning only.
.. change::
:tags: bug, operations
:tickets: 753
Modified the ``add_column()`` operation such that the ``Column`` object in
use is shallow copied to a new instance if that ``Column`` is already
attached to a ``table()`` or ``Table``. This accommodates for the change
made in SQLAlchemy issue #5618 which prohibits a ``Column`` from being
associated with multiple ``table()`` objects. This resumes support for
using a ``Column`` inside of an Alembic operation that already refers to a
parent ``table()`` or ``Table`` as well as allows operation objects just
autogenerated to work.
.. change::
:tags: feature, autogenerate
:tickets: 650
Added new hook :paramref:`.EnvironmentContext.configure.include_name`,
which complements the
:paramref:`.EnvironmentContext.configure.include_object` hook by providing
a means of preventing objects of a certain name from being autogenerated
**before** the SQLAlchemy reflection process takes place, and notably
includes explicit support for passing each schema name when
:paramref:`.EnvironmentContext.configure.include_schemas` is set to True.
This is most important especially for environments that make use of
:paramref:`.EnvironmentContext.configure.include_schemas` where schemas are
actually databases (e.g. MySQL) in order to prevent reflection sweeps of
the entire server.
.. seealso::
:ref:`autogenerate_include_hooks` - new documentation section
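A sketch of the hook in use (schema names are hypothetical, and the
``connection`` / ``target_metadata`` names assume a typical ``env.py``)::

    from alembic import context

    def include_name(name, type_, parent_names):
        if type_ == "schema":
            # only reflect objects within these schemas/databases
            return name in ["public", "reporting"]
        return True

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        include_schemas=True,
        include_name=include_name,
    )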
.. change::
:tags: removed, autogenerate
The long deprecated
:paramref:`.EnvironmentContext.configure.include_symbol` hook is removed.
The :paramref:`.EnvironmentContext.configure.include_object`
and :paramref:`.EnvironmentContext.configure.include_name`
hooks both achieve the goals of this hook.
.. change::
:tags: bug, autogenerate
:tickets: 721
Added rendering for the ``Table.prefixes`` element to autogenerate so that
the rendered Python code includes these directives. Pull request courtesy
Rodrigo Ce Moretto.
.. change::
:tags: bug, batch
:tickets: 761
Added missing "create comment" feature for columns that are altered in
batch migrations.
.. change::
:tags: changed
:tickets: 748
Alembic 1.5.0 now supports **Python 2.7 and Python 3.6 and above**, as well
as **SQLAlchemy 1.3.0 and above**. Support is removed for Python 3
versions prior to 3.6 and SQLAlchemy versions prior to the 1.3 series.
.. change::
:tags: bug, batch
:tickets: 773
Made an adjustment to the PostgreSQL dialect to allow it to work more
effectively in batch mode, where a datatype like Boolean or non-native Enum
that may have embedded rules to generate CHECK constraints will be more
correctly handled in that these constraints usually will not have been
generated on the PostgreSQL backend; previously it would inadvertently
assume they existed unconditionally in a special PG-only "drop constraint"
step.
.. change::
:tags: feature, versioning
:tickets: 757
The revision tree is now checked for cycles and loops between revision
files when the revision environment is loaded up. Scenarios such as a
revision pointing to itself, or a revision that can reach itself via a
loop, are handled and will raise the :class:`.CycleDetected` exception when
the environment is loaded (expressed from the Alembic commandline as a
failure message and nonzero return code). Previously, these situations were
silently ignored up front, and the behavior of revision traversal would
either be silently incorrect, or would produce errors such as
:class:`.RangeNotAncestorError`. Pull request courtesy Koichiro Den.
.. change::
:tags: usecase, commands
Add ``__main__.py`` file to alembic package to support invocation
with ``python -m alembic``.
.. change::
:tags: removed, commands
Removed deprecated ``--head_only`` option to the ``alembic current``
command
.. change::
:tags: removed, operations
Removed legacy parameter names from operations, these have been emitting
warnings since version 0.8. In the case that legacy version files have not
yet been updated, these can be modified directly in order to maintain
compatibility, as illustrated after the list below:
* :meth:`.Operations.drop_constraint` - "type" (use ``"type_"``) and "name"
(use "constraint_name")
* :meth:`.Operations.create_primary_key` - "cols" (use "columns") and
"name" (use "constraint_name")
* :meth:`.Operations.create_unique_constraint` - "name" (use
"constraint_name"), "source" (use "table_name") and "local_cols" (use
"columns")
* :meth:`.Operations.batch_create_unique_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_foreign_key` - "name" (use "constraint_name"),
"source" (use "source_table"), "referent" (use "referent_table")
* :meth:`.Operations.batch_create_foreign_key` - "name" (use
"constraint_name"), "referent" (use "referent_table")
* :meth:`.Operations.create_check_constraint` - "name" (use
"constraint_name"), "source" (use "table_name")
* :meth:`.Operations.batch_create_check_constraint` - "name" (use
"constraint_name")
* :meth:`.Operations.create_index` - "name" (use "index_name")
* :meth:`.Operations.drop_index` - "name" (use "index_name"), "tablename"
(use "table_name")
* :meth:`.Operations.batch_drop_index` - "name" (use "index_name")
* :meth:`.Operations.create_table` - "name" (use "table_name")
* :meth:`.Operations.drop_table` - "name" (use "table_name")
* :meth:`.Operations.alter_column` - "name" (use "new_column_name")
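As an illustration of the updating required, a legacy spelling and its
modern equivalent (object names here are hypothetical)::

    # legacy spelling, no longer accepted
    op.drop_constraint(name="fk_account_user", table_name="account", type="foreignkey")

    # modern spelling
    op.drop_constraint("fk_account_user", "account", type_="foreignkey")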
.. changelog::
:version: 1.4.3
:released: September 11, 2020
.. change::
:tags: bug, sqlite, batch
:tickets: 711
Added support to drop named CHECK constraints that are specified as part of
a column, rather than table wide. Previously, only constraints associated
with the table were considered.
.. change::
:tags: bug, ops, mysql
:tickets: 736
Fixed issue where the MySQL dialect would not correctly render the server
default of a column in an alter operation, if the operation were
programmatically generated from an autogenerate pass as it would not
accommodate for the full structure of the DefaultClause construct.
.. change::
:tags: bug, sqlite, batch
:tickets: 697
Fixed issue where the CAST applied to a JSON column when copying a SQLite
table during batch mode would cause the data to be lost, as SQLite's CAST
with JSON appears to convert the data to the value "0". The CAST is now
skipped in a dialect-specific manner, including for JSON columns on SQLite.
Pull request courtesy Sebastián Ramírez.
.. change::
:tags: bug, commands
:tickets: 694
The ``alembic current`` command no longer creates an ``alembic_version``
table in the database if one does not exist already, returning no version
as the current version. This allows checking for migrations in parallel
without introducing race conditions. Pull request courtesy Nikolay
Edigaryev.
.. change::
:tags: bug, batch
Fixed issue where columns in a foreign-key referenced table would be
replaced with null-type columns during a batch operation; while this did
not generally have any side effects, it could theoretically impact a batch
operation that also targets that table directly and also would interfere
with future changes to the ``.append_column()`` method to disallow implicit
replacement of columns.
.. change::
:tags: bug, mssql
:tickets: 716
Fixed issue where the ``mssql_drop_foreign_key=True`` flag on
``op.drop_column`` would lead to an incorrect syntax error due to a typo in the
SQL emitted; the same typo was present in the test as well, so it was not
detected. Pull request courtesy Oleg Shigorin.
.. changelog::
:version: 1.4.2
:released: March 19, 2020
.. change::
:tags: usecase, autogenerate
:tickets: 669
Adjusted autogen comparison to accommodate for backends that support
computed column reflection, dependent on SQLAlchemy version 1.3.16 or
higher. This emits a warning if the SQL expression inside of a
:class:`.Computed` value changes between the metadata and the database, as
these expressions can't be changed without dropping and recreating the
column.
.. change::
:tags: bug, tests
:tickets: 668
Fixed an issue that prevented the test suite from running with the
recently released py.test 5.4.0.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 671
Fixed more false-positive failures produced by the new "compare type" logic
first added in :ticket:`605`, particularly impacting MySQL string types
regarding flags such as "charset" and "collation".
.. change::
:tags: bug, op directives, oracle
:tickets: 670
Fixed issue in Oracle backend where a table RENAME with a schema-qualified
name would include the schema in the "to" portion, which is rejected by
Oracle.
.. changelog::
:version: 1.4.1
:released: March 1, 2020
.. change::
:tags: bug, autogenerate
:tickets: 661
Fixed regression caused by the new "type comparison" logic introduced in
1.4 as part of :ticket:`605` where comparisons of MySQL "unsigned integer"
datatypes would produce false positives, as the regular expression logic
was not correctly parsing the "unsigned" token when MySQL's default display
width would be returned by the database. Pull request courtesy Paul
Becotte.
.. change::
:tags: bug, environment
:tickets: 663
Error message for "path doesn't exist" when loading up script environment
now displays the absolute path. Pull request courtesy Rowan Hart.
.. change::
:tags: bug, autogenerate
:tickets: 654
Fixed regression in 1.4.0 due to :ticket:`647` where unique constraint
comparison with mixed case constraint names while not using a naming
convention would produce false positives during autogenerate.
.. change::
:tags: bug, environment
The check for matched rowcount when the alembic_version table is updated or
deleted from is now conditional based on whether or not the dialect
supports the concept of "rowcount" for UPDATE or DELETE rows matched. Some
third party dialects do not support this concept. Pull request courtesy Ke
Zhu.
.. change::
:tags: bug, operations
:tickets: 655
Fixed long-standing bug where an inline column CHECK constraint would not
be rendered within an "ADD COLUMN" operation. The DDL compiler is now
consulted for inline constraints within the :meth:`.Operations.add_column`
method as is done for regular CREATE TABLE operations.
.. changelog::
:version: 1.4.0
:released: February 4, 2020
.. change::
:tags: change
The internal inspection routines no longer use SQLAlchemy's
``Inspector.from_engine()`` method, which is expected to be deprecated in
1.4. The ``inspect()`` function is now used.
.. change::
:tags: bug, autogenerate
:tickets: 647
Adjusted the unique constraint comparison logic in a similar manner as that
of :ticket:`421` did for indexes in order to take into account SQLAlchemy's
own truncation of long constraint names when a naming convention is in use.
Without this step, a name that is truncated by SQLAlchemy based on a unique
constraint naming convention or hardcoded name will not compare properly.
.. change::
:tags: feature, batch
:tickets: 640
Added new parameters :paramref:`.BatchOperations.add_column.insert_before`,
:paramref:`.BatchOperations.add_column.insert_after` which provide for
establishing the specific position in which a new column should be placed.
Also added :paramref:`.Operations.batch_alter_table.partial_reordering`
which allows the complete set of columns to be reordered when the new table
is created. Both operations apply only to when batch mode is recreating
the whole table using ``recreate="always"``. Thanks to Marcin Szymanski
for assistance with the implementation.
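A sketch of the new placement options (table and column names are
hypothetical)::

    import sqlalchemy as sa

    from alembic import op

    with op.batch_alter_table("account", recreate="always") as batch_op:
        batch_op.add_column(
            sa.Column("middle_name", sa.String(50)),
            insert_after="first_name",
        )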
.. change::
:tags: usecase, environment
:tickets: 648
Moved the use of the ``__file__`` attribute at the base of the Alembic
package into the one place that it is specifically needed, which is when
the config attempts to locate the template directory. This helps to allow
Alembic to be fully importable in environments that are using Python
memory-only import schemes. Pull request courtesy layday.
.. change::
:tags: bug, autogenerate
:tickets: 605
A major rework of the "type comparison" logic is in place which changes the
entire approach by which column datatypes are compared. Types are now
compared based on the DDL string generated by the metadata type vs. the
datatype reflected from the database. This means we compare types based on
what would actually render and additionally if elements of the types change
like string length, those changes are detected as well. False positives
like those generated between SQLAlchemy Boolean and MySQL TINYINT should
also be resolved. Thanks very much to Paul Becotte for lots of hard work
and patience on this one.
.. seealso::
:ref:`autogenerate_detects` - updated comments on type comparison
.. changelog::
:version: 1.3.3
:released: January 22, 2020
.. change::
:tags: bug, postgresql
:tickets: 637
Fixed issue where COMMENT directives for PostgreSQL failed to correctly
include an explicit schema name, as well as correct quoting rules for
schema, table, and column names. Pull request courtesy Matthew Sills.
.. change::
:tags: usecase, operations
:tickets: 624
Added support for rendering of "computed" elements on :class:`.Column`
objects, supported in SQLAlchemy via the new :class:`.Computed` element
introduced in version 1.3.11. Pull request courtesy Federico Caselli.
Note that there is currently no support for ALTER COLUMN to add, remove, or
modify the "GENERATED ALWAYS AS" element from a column; at least for
PostgreSQL, it does not seem to be supported by the database. Additionally,
SQLAlchemy does not currently reliably reflect the "GENERATED ALWAYS AS"
phrase from an existing column, so there is also no autogenerate support
for addition or removal of the :class:`.Computed` element to or from an
existing column, there is only support for adding new columns that include
the :class:`.Computed` element. In the case that the :class:`.Computed`
element is removed from the :class:`.Column` object in the table metadata,
PostgreSQL and Oracle currently reflect the "GENERATED ALWAYS AS"
expression as the "server default" which will produce an op that tries to
drop the element as a default.
.. changelog::
:version: 1.3.2
:released: December 16, 2019
.. change::
:tags: bug, api, autogenerate
:tickets: 635
Fixed regression introduced by :ticket:`579` where server default rendering
functions began to require a dialect implementation, however the
:func:`.render_python_code` convenience function did not include one, thus
causing the function to fail when used in a server default context. The
function now accepts a migration context argument and also creates one
against the default dialect if one is not provided.
.. changelog::
:version: 1.3.1
:released: November 13, 2019
.. change::
:tags: bug, mssql
:tickets: 621
Fixed bug in MSSQL dialect where the drop constraint execution steps used
to remove server default or implicit foreign key constraint failed to take
into account the schema name of the target table.
.. changelog::
:version: 1.3.0
:released: October 31, 2019
.. change::
:tags: feature, command
:tickets: 608
Added support for the ALEMBIC_CONFIG environment variable, which
refers to the location of the alembic configuration script
in lieu of using the -c command line option.
.. change::
:tags: bug, autogenerate
:tickets: 131
Fixed bug in new Variant autogenerate where the order of the arguments to
Variant were mistakenly reversed.
.. change::
:tags: change, compatibility
Some internal modifications have been made to how the names of indexes and
unique constraints work to make use of new functions added in SQLAlchemy
1.4, so that SQLAlchemy has more flexibility over how naming conventions
may be applied to these objects.
.. changelog::
:version: 1.2.1
:released: September 24, 2019
.. change::
:tags: bug, command
:tickets: 601
Reverted the name change of the "revisions" argument to
:func:`.command.stamp` to "revision" as apparently applications are
calling upon this argument as a keyword name. Pull request courtesy
Thomas Bechtold. Special translations are also added to the command
line interface so that it is still known as "revisions" in the CLI.
.. change::
:tags: bug, tests
:tickets: 592
Removed the "test requirements" from "setup.py test", as this command now
only emits a removal error in any case and these requirements are unused.
.. changelog::
:version: 1.2.0
:released: September 20, 2019
.. change::
:tags: feature, command
:tickets: 473
Added new ``--purge`` flag to the ``alembic stamp`` command, which will
unconditionally erase the version table before stamping anything. This is
useful for development where non-existent version identifiers might be left
within the table. Additionally, ``alembic.stamp`` now supports a list of
revision identifiers, which are intended to allow setting up multiple heads
at once. Overall handling of version identifiers within the
``alembic.stamp`` command has been improved with many new tests and
use cases added.
.. change::
:tags: bug, autogenerate
:tickets: 550
Improved the Python rendering of a series of migration operations such that
a single "pass" is rendered for a :class:`.UpgradeOps` or
:class:`.DowngradeOps` based on if no lines of Python code actually
rendered under the operation, rather than whether or not sub-directives
exist. Removed extra "pass" lines that would generate from the
:class:`.ModifyTableOps` directive so that these aren't duplicated under
operation rewriting scenarios.
.. change::
:tags: feature, runtime
:tickets: 123
Added new feature :meth:`.MigrationContext.autocommit_block`, a special
directive which will provide for a non-transactional block inside of a
migration script. The feature requires that: the database driver
(e.g. DBAPI) supports the AUTOCOMMIT isolation mode. The directive
also necessarily needs to COMMIT the existing transaction in progress
in order to enter autocommit mode.
.. seealso::
:meth:`.MigrationContext.autocommit_block`
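A minimal sketch within a migration script (the statement is illustrative)::

    from alembic import op

    def upgrade():
        with op.get_context().autocommit_block():
            op.execute("ALTER TYPE mood ADD VALUE 'soso'")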
.. change::
:tags: change, py3k
Python 3.4 support is dropped, as the upstream tooling (pip, mysqlclient)
etc are already dropping support for Python 3.4, which itself is no longer
maintained.
.. change::
:tags: usecase, autogenerate
:tickets: 518
Added autogenerate support for :class:`.Column` objects that have
dialect-specific ``**kwargs``, support first added in SQLAlchemy 1.3.
This includes SQLite "on conflict" as well as options used by some
third party dialects.
.. change::
:tags: usecase, autogenerate
:tickets: 131
Added rendering for SQLAlchemy ``Variant`` datatypes, which render as the
base type plus one or more ``.with_variant()`` method calls.
.. change::
:tags: usecase, commands
:tickets: 534
Made the command interface revision lookup behavior more strict in that an
Alembic revision number is only resolved based on partial match rules if
it has at least four characters, to prevent simple typographical issues
from inadvertently running migrations.
.. change::
:tags: feature, commands
:tickets: 307
Added "post write hooks" to revision generation. These allow custom logic
to run after a revision Python script is generated, typically for the
purpose of running code formatters such as "Black" or "autopep8", but may
be used for any arbitrary post-render hook as well, including custom Python
functions or scripts. The hooks are enabled by providing a
``[post_write_hooks]`` section in the alembic.ini file. A single hook
is provided which runs an arbitrary Python executable on the newly
generated revision script, which can be configured to run code formatters
such as Black; full examples are included in the documentation.
.. seealso::
:ref:`post_write_hooks`
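For illustration, an ``alembic.ini`` section that enables the bundled
console-runner hook to invoke Black (the options shown are an example)::

    [post_write_hooks]
    hooks = black
    black.type = console_scripts
    black.entrypoint = black
    black.options = -l 79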
.. change::
:tags: feature, environment
:tickets: 463
Added new flag ``--package`` to ``alembic init``. For environments where
the Alembic migration files and such are within the package tree and
importable as modules, this flag can be specified which will add the
additional ``__init__.py`` files in the version location and the
environment location.
.. change::
:tags: bug, autogenerate
:tickets: 549
Fixed bug where rendering of comment text for table-level comments within
:meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` was not properly quote-escaped
within rendered Python code for autogenerate.
.. change::
:tags: bug, autogenerate
:tickets: 505
Modified the logic of the :class:`.Rewriter` object such that it keeps a
memoization of which directives it has processed, so that it can ensure it
processes a particular directive only once, and additionally fixed
:class:`.Rewriter` so that it functions correctly for multiple-pass
autogenerate schemes, such as the one illustrated in the "multidb"
template. By tracking which directives have been processed, a
multiple-pass scheme which calls upon the :class:`.Rewriter` multiple times
for the same structure as elements are added can work without running
duplicate operations on the same elements more than once.
.. changelog::
:version: 1.1.0
:released: August 26, 2019
.. change::
:tags: change
Alembic 1.1 bumps the minimum version of SQLAlchemy to 1.1. As was the
case before, Python requirements remain at Python 2.7, or in the 3.x series
Python 3.4.
.. change::
:tags: change, internals
The test suite for Alembic now makes use of SQLAlchemy's testing framework
directly. Previously, Alembic had its own version of this framework that
was mostly copied from that of SQLAlchemy to enable testing with older
SQLAlchemy versions. The majority of this code is now removed so that both
projects can leverage improvements from a common testing framework.
.. change::
:tags: bug, commands
:tickets: 562
Fixed bug where the double-percent logic applied to some dialects such as
psycopg2 would be rendered in ``--sql`` mode, by allowing dialect options
to be passed through to the dialect used to generate SQL and then providing
``paramstyle="named"`` so that percent signs need not be doubled. For
users having this issue, existing env.py scripts need to add
``dialect_opts={"paramstyle": "named"}`` to their offline
context.configure(). See the ``alembic/templates/generic/env.py`` template
for an example.
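A minimal sketch of the adjusted offline section, assuming ``url`` and
``target_metadata`` are set up as in the generic template::

    def run_migrations_offline():
        context.configure(
            url=url,
            target_metadata=target_metadata,
            literal_binds=True,
            dialect_opts={"paramstyle": "named"},
        )

        with context.begin_transaction():
            context.run_migrations()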
.. change::
:tags: bug, py3k
Fixed use of the deprecated "imp" module, which is used to detect pep3147
availability as well as to locate .pyc files, which started emitting
deprecation warnings during the test suite. The warnings were not being
emitted earlier during the test suite, the change is possibly due to
changes in py.test itself but this is not clear. The check for pep3147 is
set to True for any Python version 3.5 or greater now and importlib is used
when available. Note that some dependencies such as distutils may still be
emitting this warning. Tests are adjusted to accommodate for dependencies
that emit the warning as well.
.. change::
:tags: bug, mysql
:tickets: 594
Fixed issue where emitting a change of column name for MySQL did not
preserve the column comment, even if it were specified as existing_comment.
.. change::
:tags: bug, setup
:tickets: 592
Removed the "python setup.py test" feature in favor of a straight run of
"tox". Per Pypa / pytest developers, "setup.py" commands are in general
headed towards deprecation in favor of tox. The tox.ini script has been
updated such that running "tox" with no arguments will perform a single run
of the test suite against the default installed Python interpreter.
.. seealso::
https://github.com/pypa/setuptools/issues/1684
https://github.com/pytest-dev/pytest/issues/5534
.. change::
:tags: usecase, commands
:tickets: 571
The "alembic init" command will now proceed if the target directory exists
as long as it's still empty. Previously, it would not proceed if the
directory existed. The new behavior is modeled from what git does, to
accommodate for container or other deployments where an Alembic target
directory may need to be already mounted instead of being created with
alembic init. Pull request courtesy Aviskar KC.
.. changelog::
:version: 1.0.11
:released: June 25, 2019
.. change::
:tags: bug, sqlite, autogenerate, batch
:tickets: 579
SQLite server default reflection will ensure parenthesis are surrounding a
column default expression that is detected as being a non-constant
expression, such as a ``datetime()`` default, to accommodate for the
requirement that SQL expressions have to be parenthesized when being sent
as DDL. Parenthesis are not added to constant expressions to allow for
maximum cross-compatibility with other dialects and existing test suites
(such as Alembic's), which necessarily entails scanning the expression to
detect constant numeric and string values. The logic is added to the
two "reflection->DDL round trip" paths which are currently autogenerate and
batch migration. Within autogenerate, the logic is on the rendering side,
whereas in batch the logic is installed as a column reflection hook.
.. change::
:tags: bug, sqlite, autogenerate
:tickets: 579
Improved SQLite server default comparison to accommodate for a ``text()``
construct that added parenthesis directly vs. a construct that relied
upon the SQLAlchemy SQLite dialect to render the parenthesis, as well
as improved support for various forms of constant expressions such as
values that are quoted vs. non-quoted.
.. change::
:tags: bug, autogenerate
Fixed bug where the "literal_binds" flag was not being set when
autogenerate would create a server default value, meaning server default
comparisons would fail for functions that contained literal values.
.. change::
:tags: bug, mysql
:tickets: 554
Added support for MySQL "DROP CHECK", which is added as of MySQL 8.0.16,
separate from MariaDB's "DROP CONSTRAINT" for CHECK constraints. The MySQL
Alembic implementation now checks for "MariaDB" in server_version_info to
decide which one to use.
.. change::
:tags: bug, mysql, operations
:tickets: 564
Fixed issue where MySQL databases need to use CHANGE COLUMN when altering a
server default of CURRENT_TIMESTAMP, NOW() and probably other functions
that are only usable with DATETIME/TIMESTAMP columns. While MariaDB
supports both CHANGE and ALTER COLUMN in this case, MySQL databases only
support CHANGE. So the new logic is that if the server default change is
against a DateTime-oriented column, the CHANGE format is used
unconditionally, as in the vast majority of cases the server default is to
be CURRENT_TIMESTAMP which may also be potentially bundled with an "ON
UPDATE CURRENT_TIMESTAMP" directive, which SQLAlchemy does not currently
support as a distinct field. The fix additionally improves the server
default comparison logic when the "ON UPDATE" clause is present and
there are parenthesis to be adjusted for as is the case on some MariaDB
versions.
.. change::
:tags: bug, environment
Warnings emitted by Alembic now include a default stack level of 2, and in
some cases it's set to 3, in order to help warnings indicate more closely
where they are originating from. Pull request courtesy Ash Berlin-Taylor.
.. change::
:tags: bug, py3k
:tickets: 563
Replaced the Python compatibility routines for ``getargspec()`` with a fully
vendored version based on ``getfullargspec()`` from Python 3.3.
Originally, Python was emitting deprecation warnings for this function in
Python 3.8 alphas. While this change was reverted, it was observed that
Python 3 implementations for ``getfullargspec()`` are an order of magnitude
slower as of the 3.4 series where it was rewritten against ``Signature``.
While Python plans to improve upon this situation, SQLAlchemy projects for
now are using a simple replacement to avoid any future issues.
.. changelog::
:version: 1.0.10
:released: April 28, 2019
.. change::
:tags: bug, commands
:tickets: 552
Fixed bug introduced in release 0.9.0 where the helptext for commands
inadvertently got expanded to include function docstrings from the
command.py module. The logic has been adjusted to only refer to the first
line(s) preceding the first line break within each docstring, as was the
original intent.
.. change::
:tags: bug, operations, mysql
:tickets: 551
Added an assertion in :meth:`.RevisionMap.get_revisions` and other methods
which ensures revision numbers are passed as strings or collections of
strings. Driver issues particularly on MySQL may inadvertently be passing
bytes here which leads to failures later on.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 553
Fixed bug when using the
:paramref:`.EnvironmentContext.configure.compare_server_default` flag set
to ``True`` where a server default that is introduced in the table metadata
on an ``Integer`` column, where there is no existing server default in the
database, would raise a ``TypeError``.
.. changelog::
:version: 1.0.9
:released: April 15, 2019
.. change::
:tags: bug, operations
:tickets: 548
Simplified the internal scheme used to generate the ``alembic.op`` namespace
to no longer attempt to generate full method signatures (rather than
generic ``*args, **kw``), as this was not working in most cases anyway, while
in rare circumstances it would in fact sporadically have access to the real
argument names and then fail when generating the function due to missing
symbols in the argument signature.
.. changelog::
:version: 1.0.8
:released: March 4, 2019
.. change::
:tags: bug, operations
:tickets: 528
Removed use of deprecated ``force`` parameter for SQLAlchemy quoting
functions as this parameter will be removed in a future release.
Pull request courtesy Parth Shandilya(ParthS007).
.. change::
:tags: bug, autogenerate, postgresql, py3k
:tickets: 541
Fixed issue where server default comparison on the PostgreSQL dialect would
fail for a blank string on Python 3.7 only, due to a change in regular
expression behavior in Python 3.7.
.. changelog::
:version: 1.0.7
:released: January 25, 2019
.. change::
:tags: bug, autogenerate
:tickets: 529
Fixed issue in new comment support where autogenerated Python code
for comments wasn't using ``repr()`` thus causing issues with
quoting. Pull request courtesy Damien Garaud.
.. changelog::
:version: 1.0.6
:released: January 13, 2019
.. change::
:tags: feature, operations
:tickets: 422
Added Table and Column level comments for supported backends.
New methods :meth:`.Operations.create_table_comment` and
:meth:`.Operations.drop_table_comment` are added. New arguments
:paramref:`.Operations.alter_column.comment` and
:paramref:`.Operations.alter_column.existing_comment` are added to
:meth:`.Operations.alter_column`. Autogenerate support is also added
to ensure comment add/drops from tables and columns are generated as well
as that :meth:`.Operations.create_table`, :meth:`.Operations.add_column`
both include the comment field from the source :class:`.Table`
or :class:`.Column` object.
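A minimal sketch of the new directives inside a migration script (the
table and column names are hypothetical)::

    import sqlalchemy as sa
    from alembic import op


    def upgrade():
        op.create_table_comment("account", "holds user accounts")
        op.alter_column(
            "account",
            "name",
            existing_type=sa.String(length=50),
            comment="the user's full name",
        )


    def downgrade():
        op.drop_table_comment("account", existing_comment="holds user accounts")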
.. changelog::
:version: 1.0.5
:released: November 27, 2018
.. change::
:tags: bug, py3k
:tickets: 507
Resolved remaining Python 3 deprecation warnings, covering
the use of inspect.formatargspec() with a vendored version
copied from the Python standard library, importing
collections.abc above Python 3.3 when testing against abstract
base classes, fixed one occurrence of log.warn(), as well as a few
invalid escape sequences.
.. changelog::
:version: 1.0.4
:released: November 27, 2018
.. change::
:tags: change
Code hosting has been moved to GitHub, at
https://github.com/sqlalchemy/alembic. Additionally, the
main Alembic website documentation URL is now
https://alembic.sqlalchemy.org.
.. changelog::
:version: 1.0.3
:released: November 14, 2018
.. change::
:tags: bug, mssql
:tickets: 516
Fixed regression caused by :ticket:`513`, where the logic to consume
``mssql_include`` was not correctly interpreting the case where the flag
was not present, breaking the ``op.create_index`` directive for SQL Server
as a whole.
.. changelog::
:version: 1.0.2
:released: October 31, 2018
.. change::
:tags: bug, autogenerate
:tickets: 515
The ``system=True`` flag on :class:`.Column`, used primarily in conjunction
with the Postgresql "xmin" column, now renders within the autogenerate
render process, allowing the column to be excluded from DDL. Additionally,
adding a system=True column to a model will produce no autogenerate diff as
this column is implicitly present in the database.
.. change::
:tags: bug, mssql
:tickets: 513
Fixed issue where usage of the SQL Server ``mssql_include`` option within a
:meth:`.Operations.create_index` would raise a KeyError, as the additional
column(s) need to be added to the table object used by the construct
internally.
.. changelog::
:version: 1.0.1
:released: October 17, 2018
.. change::
:tags: bug, commands
:tickets: 497
Fixed an issue where revision descriptions were essentially
being formatted twice. For any revision description that contained
characters like %, writing output to stdout would fail because
the call to config.print_stdout attempted to format any
additional args passed to the function.
The fix now only applies string formatting if any args are provided
along with the output text.
.. change::
:tags: bug, autogenerate
:tickets: 512
Fixed issue where removed method ``union_update()`` was used when a
customized :class:`.MigrationScript` instance included entries in the
``.imports`` data member, raising an AttributeError.
.. changelog::
:version: 1.0.0
:released: July 13, 2018
.. change::
:tags: feature, general
:tickets: 491
For Alembic 1.0, Python 2.6 / 3.3 support is being dropped, allowing a
fixed setup.py to be built as well as universal wheels. Pull request
courtesy Hugo.
.. change::
:tags: feature, general
With the 1.0 release, Alembic's minimum SQLAlchemy support version
moves to 0.9.0, previously 0.7.9.
.. change::
:tags: bug, batch
:tickets: 502
Fixed issue in batch where dropping a primary key column, then adding it
back under the same name but without the primary_key flag, would not remove
it from the existing PrimaryKeyConstraint. If a new PrimaryKeyConstraint
is added, it is used as-is, as was the case before.
.. changelog::
:version: 0.9.10
:released: June 29, 2018
.. change::
:tags: bug, autogenerate
The "op.drop_constraint()" directive will now render using ``repr()`` for
the schema name, in the same way that "schema" renders for all the other op
directives. Pull request courtesy Denis Kataev.
.. change::
:tags: bug, autogenerate
:tickets: 494
Added basic capabilities for external dialects to support rendering of
"nested" types, like arrays, in a manner similar to that of the Postgresql
dialect.
.. change::
:tags: bug, autogenerate
Fixed issue where "autoincrement=True" would not render for a column that
specified it, since as of SQLAlchemy 1.1 this is no longer the default
value for "autoincrement". Note the behavior only takes effect against the
SQLAlchemy 1.1.0 and higher; for pre-1.1 SQLAlchemy, "autoincrement=True"
does not render as was the case before. Pull request courtesy Elad Almos.
.. changelog::
:version: 0.9.9
:released: March 22, 2018
.. change::
:tags: feature, commands
:tickets: 481
Added new flag ``--indicate-current`` to the ``alembic history`` command.
When listing versions, it will include the token "(current)" to indicate
the given version is a current head in the target database. Pull request
courtesy Kazutaka Mise.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 455
The fix for :ticket:`455` in version 0.9.6 involving MySQL server default
comparison was entirely non functional, as the test itself was also broken
and didn't reveal that it wasn't working. The regular expression to compare
server default values like CURRENT_TIMESTAMP to current_timestamp() is
repaired.
.. change::
:tags: bug, mysql, autogenerate
:tickets: 483
Fixed bug where MySQL server default comparisons were basically not working
at all due to incorrect regexp added in :ticket:`455`. Also accommodates
for MariaDB 10.2 quoting differences in reporting integer based server
defaults.
.. change::
:tags: bug, operations, mysql
:tickets: 487
Fixed bug in ``op.drop_constraint()`` for MySQL where
quoting rules would not be applied to the constraint name.
.. changelog::
:version: 0.9.8
:released: February 16, 2018
.. change::
:tags: bug, runtime
:tickets: 482
Fixed bug where the :meth:`.Script.as_revision_number` method
did not accommodate for the 'heads' identifier, which in turn
caused the :meth:`.EnvironmentContext.get_head_revisions`
and :meth:`.EnvironmentContext.get_revision_argument` methods
to be not usable when multiple heads were present.
The :meth:`.EnvironmentContext.get_head_revisions` method returns
a tuple in all cases as documented.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 478
Fixed bug where autogenerate of :class:`.ExcludeConstraint`
would render a raw quoted name for a Column that has case-sensitive
characters, which when invoked as an inline member of the Table
would produce a stack trace that the quoted name is not found.
An incoming Column object is now rendered as ``sa.column('name')``.
.. change::
:tags: bug, autogenerate
:tickets: 468
Fixed bug where the indexes would not be included in a
migration that was dropping the owning table. The fix
now will also emit DROP INDEX for the indexes ahead of time,
but more importantly will include CREATE INDEX in the
downgrade migration.
.. change::
:tags: bug, postgresql
:tickets: 480
Fixed the autogenerate of the module prefix
when rendering the text_type parameter of
postgresql.HSTORE, in much the same way that
we do for ARRAY's type and JSON's text_type.
.. change::
:tags: bug, mysql
:tickets: 479
Added support for DROP CONSTRAINT to the MySQL Alembic
dialect to support MariaDB 10.2 which now has real
CHECK constraints. Note this change does **not**
add autogenerate support, only support for op.drop_constraint()
to work.
.. changelog::
:version: 0.9.7
:released: January 16, 2018
.. change::
:tags: bug, autogenerate
:tickets: 472
Fixed regression caused by :ticket:`421` which would
cause case-sensitive quoting rules to interfere with the
comparison logic for index names, thus causing indexes to show
as added for indexes that have case-sensitive names. Works with
SQLAlchemy 0.9 and later series.
.. change::
:tags: bug, postgresql, autogenerate
:tickets: 461
Fixed bug where autogenerate would produce a DROP statement for the index
implicitly created by a Postgresql EXCLUDE constraint, rather than skipping
it as is the case for indexes implicitly generated by unique constraints.
Makes use of SQLAlchemy 1.0.x's improved "duplicates index" metadata and
requires at least SQLAlchemy version 1.0.x to function correctly.
.. changelog::
:version: 0.9.6
:released: October 13, 2017
.. change::
:tags: bug, commands
:tickets: 458
Fixed a few Python3.6 deprecation warnings by replacing ``StopIteration``
with ``return``, as well as using ``getfullargspec()`` instead of
``getargspec()`` under Python 3.
.. change::
:tags: bug, commands
:tickets: 441
In addition to :ticket:`441` fixed in 0.9.5, we had forgotten to also filter
for the ``+`` sign in migration names, which likewise breaks due to the relative
migrations feature.
.. change::
:tags: bug, autogenerate
:tickets: 442
Fixed bug expanding upon the fix for
:ticket:`85` which adds the correct module import to the
"inner" type for an ``ARRAY`` type, the fix now accommodates for the
generic ``sqlalchemy.types.ARRAY`` type added in SQLAlchemy 1.1,
rendering the inner type correctly regardless of whether or not the
Postgresql dialect is present.
.. change::
:tags: bug, mysql
:tickets: 455
Fixed bug where server default comparison of CURRENT_TIMESTAMP would fail
on MariaDB 10.2 due to a change in how the function is
represented by the database during reflection.
.. change::
:tags: bug, autogenerate
Fixed bug where comparison of ``Numeric`` types would produce
a difference if the Python-side ``Numeric`` inadvertently specified
a non-None "scale" with a "precision" of None, even though this ``Numeric``
type will pass over the "scale" argument when rendering. Pull request
courtesy Ivan Mmelnychuk.
.. change::
:tags: feature, commands
:tickets: 447
The ``alembic history`` command will now make use of the revision
environment ``env.py`` unconditionally if the ``revision_environment``
configuration flag is set to True. Previously, the environment would
only be invoked if the history specification were against a database-stored
revision token.
.. change::
:tags: bug, batch
:tickets: 457
The name of the temporary table in batch mode is now generated
off of the original table name itself, to avoid conflicts for the
unusual case of multiple batch operations running against the same
database schema at the same time.
.. change::
:tags: bug, autogenerate
:tickets: 456
A :class:`.ForeignKeyConstraint` can now render correctly if the
``link_to_name`` flag is set, as it will not attempt to resolve the name
from a "key" in this case. Additionally, the constraint will render
as-is even if the remote column name isn't present on the referenced
remote table.
.. change::
:tags: bug, runtime, py3k
:tickets: 449
Reworked "sourceless" system to be fully capable of handling any
combination of: Python2/3x, pep3149 or not, PYTHONOPTIMIZE or not,
for locating and loading both env.py files as well as versioning files.
This includes: locating files inside of ``__pycache__`` as well as listing
out version files that might be only in ``versions/__pycache__``, deduplicating
version files that may be in ``versions/__pycache__`` and ``versions/``
at the same time, correctly looking for .pyc or .pyo files based on
whether pep488 is present or not. The latest Python3x deprecation warnings
involving importlib are also corrected.
.. changelog::
:version: 0.9.5
:released: August 9, 2017
.. change::
:tags: bug, commands
:tickets: 441
A :class:`.CommandError` is raised if the "--rev-id" passed to the
:func:`.revision` command contains dashes or at-signs, as this interferes
with the command notation used to locate revisions.
.. change::
:tags: bug, postgresql
:tickets: 424
Added support for the dialect-specific keyword arguments
to :meth:`.Operations.drop_index`. This includes support for
``postgresql_concurrently`` and others.
.. change::
:tags: bug, commands
Fixed bug in timezone feature introduced in
:ticket:`425` when the creation
date in a revision file is calculated, to
accommodate for timezone names that contain
mixed-case characters in their name as opposed
to all uppercase. Pull request courtesy Nils
Philippsen.
.. changelog::
:version: 0.9.4
:released: July 31, 2017
.. change::
:tags: bug, runtime
Added an additional attribute to the new
:paramref:`.EnvironmentContext.configure.on_version_apply` API,
:attr:`.MigrationInfo.up_revision_ids`, to accommodate for the uncommon
case of the ``alembic stamp`` command being used to move from multiple
branches down to a common branchpoint; there will be multiple
"up" revisions in this one case.
.. changelog::
:version: 0.9.3
:released: July 6, 2017
.. change::
:tags: feature, runtime
Added a new callback hook
:paramref:`.EnvironmentContext.configure.on_version_apply`,
which allows user-defined code to be invoked each time an individual
upgrade, downgrade, or stamp operation proceeds against a database.
Pull request courtesy John Passaro.
.. change:: 433
:tags: bug, autogenerate
:tickets: 433
Fixed bug where autogen comparison of a :class:`.Variant` datatype
would not compare to the dialect level type for the "default"
implementation of the :class:`.Variant`, returning the type as changed
between database and table metadata.
.. change:: 431
:tags: bug, tests
:tickets: 431
Fixed unit tests to run correctly under the SQLAlchemy 1.0.x series
prior to version 1.0.10 where a particular bug involving Postgresql
exclude constraints was fixed.
.. changelog::
:version: 0.9.2
:released: May 18, 2017
.. change:: 429
:tags: bug, mssql
:tickets: 429
Repaired :meth:`.Operations.rename_table` for SQL Server when the
target table is in a remote schema; the schema name is now omitted from
the "new name" argument.
.. change:: 425
:tags: feature, commands
:tickets: 425
Added a new configuration option ``timezone``, a string timezone name
that will be applied to the create date timestamp rendered
inside the revision file as made available to the ``file_template`` used
to generate the revision filename. Note this change adds the
``python-dateutil`` package as a dependency.
.. change:: 421
:tags: bug, autogenerate
:tickets: 421
The autogenerate compare scheme now takes into account the name truncation
rules applied by SQLAlchemy's DDL compiler to the names of the
:class:`.Index` object, when these names are dynamically truncated
due to a too-long identifier name. As the identifier truncation is
deterministic, applying the same rule to the metadata name allows
correct comparison to the database-derived name.
.. change:: 419
:tags: bug, environment
:tickets: 419
A warning is emitted when an object that's not a
:class:`~sqlalchemy.engine.Connection` is passed to
:meth:`.EnvironmentContext.configure`. For the case of a
:class:`~sqlalchemy.engine.Engine` passed, the check for "in transaction"
introduced in version 0.9.0 has been relaxed to work in the case of an
attribute error, as some users appear to be passing an
:class:`~sqlalchemy.engine.Engine` and not a
:class:`~sqlalchemy.engine.Connection`.
.. changelog::
:version: 0.9.1
:released: March 1, 2017
.. change:: 417
:tags: bug, commands
:tickets: 417, 369
An adjustment to the bug fix for :ticket:`369` to accommodate for
env.py scripts that use an enclosing transaction distinct from the
one that the context provides, so that the check for "didn't commit
the transaction" doesn't trigger in this scenario.
.. changelog::
:version: 0.9.0
:released: February 28, 2017
.. change:: 38
:tags: feature, autogenerate
:tickets: 38
The :paramref:`.EnvironmentContext.configure.target_metadata` parameter
may now be optionally specified as a sequence of :class:`.MetaData`
objects instead of a single :class:`.MetaData` object. The
autogenerate process will process the sequence of :class:`.MetaData`
objects in order.
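A minimal sketch inside ``env.py``, assuming two hypothetical application
packages that each export a :class:`.MetaData` and that ``connectable`` is
set up as in the default template::

    from myapp.core.models import metadata as core_metadata        # hypothetical
    from myapp.billing.models import metadata as billing_metadata  # hypothetical

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=[core_metadata, billing_metadata],
        )

        with context.begin_transaction():
            context.run_migrations()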
.. change:: 369
:tags: bug, commands
:tickets: 369
A :class:`.CommandError` is now raised when a migration file opens
a database transaction and does not close/commit/rollback, when
the backend database or environment options also specify transactional_ddl
is False. When transactional_ddl is not in use, Alembic doesn't
close any transaction so a transaction opened by a migration file
will cause the following migrations to fail to apply.
.. change:: 413
:tags: bug, autogenerate, mysql
:tickets: 413
The ``autoincrement=True`` flag is now rendered within the
:meth:`.Operations.alter_column` operation if the source column indicates
that this flag should be set to True. The behavior is sensitive to
the SQLAlchemy version in place, as the "auto" default option is new
in SQLAlchemy 1.1. When the source column indicates autoincrement
as True or "auto", the flag will render as True if the original column
contextually indicates that it should have "autoincrement" keywords,
and when the source column explicitly sets it to False, this is also
rendered. The behavior is intended to preserve the AUTO_INCREMENT flag
on MySQL as the column is fully recreated on this backend. Note that this
flag does **not** support alteration of a column's "autoincrement" status,
as this is not portable across backends.
.. change:: 411
:tags: bug, postgresql
:tickets: 411
Fixed bug where Postgresql JSON/JSONB types rendered on SQLAlchemy
1.1 would render the "astext_type" argument which defaults to
the ``Text()`` type without the module prefix, similarly to the
issue with ARRAY fixed in :ticket:`85`.
.. change:: 85
:tags: bug, postgresql
:tickets: 85
Fixed bug where Postgresql ARRAY type would not render the import prefix
for the inner type; additionally, user-defined renderers take place
for the inner type as well as the outer type. Pull request courtesy
Paul Brackin.
.. change:: process_revision_directives_command
:tags: feature, autogenerate
Added a keyword argument ``process_revision_directives`` to the
:func:`.command.revision` API call. This function acts in the
same role as the environment-level
:paramref:`.EnvironmentContext.configure.process_revision_directives`,
and allows API use of the
command to drop in an ad-hoc directive process function. This
function can be used among other things to place a complete
:class:`.MigrationScript` structure in place.
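A minimal sketch, using a hypothetical hook that discards the new script
if autogenerate produced no operations::

    from alembic import command
    from alembic.config import Config


    def skip_empty(context, revision, directives):
        script = directives[0]
        if script.upgrade_ops.is_empty():
            # emit no revision file at all
            directives[:] = []


    cfg = Config("alembic.ini")
    command.revision(
        cfg,
        message="add widgets",
        autogenerate=True,
        process_revision_directives=skip_empty,
    )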
.. change:: 412
:tags: feature, postgresql
:tickets: 412
Added support for Postgresql EXCLUDE constraints, including the
operation directive :meth:`.Operations.create_exclude_constraint`
as well as autogenerate render support for the ``ExcludeConstraint``
object as present in a ``Table``. Autogenerate detection for an EXCLUDE
constraint added or removed to/from an existing table is **not**
implemented as the SQLAlchemy Postgresql dialect does not yet support
reflection of EXCLUDE constraints.
Additionally, unknown constraint types now warn when
encountered within an autogenerate action rather than raise.
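A minimal sketch of the new directive (the names and expressions shown
are illustrative)::

    from alembic import op


    def upgrade():
        # each element is a (column name or SQL expression, operator) tuple
        op.create_exclude_constraint(
            "user_excl",
            "user",
            ("period", "&&"),
            where="period != 'empty'",
            using="gist",
        )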
.. change:: fk_schema_compare
:tags: bug, operations
Fixed bug in :func:`.ops.create_foreign_key` where the internal table
representation would not be created properly if the foreign key referred
to a table in a different schema of the same name. Pull request
courtesy Konstantin Lebedev.
.. changelog::
:version: 0.8.10
:released: January 17, 2017
.. change:: 406
:tags: bug, versioning
:tickets: 406
The alembic_version table, when initially created, now establishes a
primary key constraint on the "version_num" column, to suit database
engines that don't support tables without primary keys. This behavior
can be controlled using the parameter
:paramref:`.EnvironmentContext.configure.version_table_pk`. Note that
this change only applies to the initial creation of the alembic_version
table; it does not impact any existing alembic_version table already
present.
.. change:: 402
:tags: bug, batch
:tickets: 402
Fixed bug where doing ``batch_op.drop_constraint()`` against the
primary key constraint would fail to remove the "primary_key" flag
from the column, resulting in the constraint being recreated.
.. change:: update_uq_dedupe
:tags: bug, autogenerate, oracle
Adjusted the logic originally added for :ticket:`276` that detects MySQL
unique constraints which are actually unique indexes to be generalized
for any dialect that has this behavior, for SQLAlchemy version 1.0 and
greater. This is to allow for upcoming SQLAlchemy support for unique
constraint reflection for Oracle, which also has no dedicated concept of
"unique constraint" and instead establishes a unique index.
.. change:: 356
:tags: bug, versioning
:tickets: 356
Added a file ignore for Python files of the form ``.#<name>.py``,
which are generated by the Emacs editor. Pull request courtesy
Markus Mattes.
.. changelog::
:version: 0.8.9
:released: November 28, 2016
.. change:: 393
:tags: bug, autogenerate
:tickets: 393
Adjustment to the "please adjust!" comment in the script.py.mako
template so that the generated comment starts with a single pound
sign, appeasing flake8.
.. change::
:tags: bug, batch
:tickets: 391
Batch mode will not use CAST() to copy data if ``type_`` is given but
the basic type affinity matches that of the existing type. This is to
avoid SQLite's CAST of TIMESTAMP, which results in truncation of the
data, in those cases where the user needs to add a redundant ``type_`` for
other reasons.
.. change::
:tags: bug, autogenerate
:tickets: 393
Continued pep8 improvements by adding appropriate whitespace in
the base template for generated migrations. Pull request courtesy
Markus Mattes.
.. change::
:tags: bug, revisioning
Added an additional check when reading in revision files to detect
if the same file is being read twice; this can occur if the same directory
or a symlink equivalent is present more than once in version_locations.
A warning is now emitted and the file is skipped. Pull request courtesy
Jiri Kuncar.
.. change::
:tags: bug, autogenerate
:tickets: 395
Fixed bug where usage of a custom TypeDecorator which returns a
per-dialect type via :meth:`.TypeDecorator.load_dialect_impl` that differs
significantly from the default "impl" for the type decorator would fail
to compare correctly during autogenerate.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 392
Fixed bug in Postgresql "functional index skip" behavior where a
functional index that ended in ASC/DESC wouldn't be detected as something
we can't compare in autogenerate, leading to duplicate definitions
in autogenerated files.
.. change::
:tags: bug, versioning
Fixed bug where the "base" specifier, as in "base:head", could not
be used explicitly when ``--sql`` mode was present.
.. changelog::
:version: 0.8.8
:released: September 12, 2016
.. change::
:tags: autogenerate
The imports in the default script.py.mako are now at the top
so that flake8 editors don't complain by default. PR courtesy
Guilherme Mansur.
.. change::
:tags: feature, operations, postgresql
:tickets: 292
Added support for the USING clause to the ALTER COLUMN operation
for Postgresql. Support is via the
:paramref:`.op.alter_column.postgresql_using`
parameter. Pull request courtesy Frazer McLean.
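A minimal sketch, converting a string column to an integer while telling
PostgreSQL how to cast the existing rows (the table and column names are
hypothetical)::

    import sqlalchemy as sa
    from alembic import op


    def upgrade():
        op.alter_column(
            "account",
            "score",
            type_=sa.Integer(),
            postgresql_using="score::integer",
        )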
.. change::
:tags: feature, autogenerate
Autogenerate with type comparison enabled will pick up on the timezone
setting changing between DateTime types. Pull request courtesy
David Szotten.
.. changelog::
:version: 0.8.7
:released: July 26, 2016
.. change::
:tags: bug, versioning
:tickets: 336
Fixed bug where upgrading to the head of a branch which is already
present would fail, only if that head were also the dependency
of a different branch that is also upgraded, as the revision system
would see this as trying to go in the wrong direction. The check
here has been refined to distinguish between same-branch revisions
out of order vs. movement along sibling branches.
.. change::
:tags: bug, versioning
:tickets: 379
Adjusted the version traversal on downgrade
such that we can downgrade to a version that is a dependency for
a version in a different branch, *without* needing to remove that
dependent version as well. Previously, the target version would be
seen as a "merge point" for it's normal up-revision as well as the
dependency. This integrates with the changes for :ticket:`377`
and :ticket:`378` to improve treatment of branches with dependencies
overall.
.. change::
:tags: bug, versioning
:tickets: 377
Fixed bug where a downgrade to a version that is also a dependency
to a different branch would fail, as the system attempted to treat
this as an "unmerge" of a merge point, when in fact it doesn't have
the other side of the merge point available for update.
.. change::
:tags: bug, versioning
:tickets: 378
Fixed bug where the "alembic current" command wouldn't show a revision
as a current head if it were also a dependency of a version in a
different branch that's also applied. Extra logic is added to
extract "implied" versions of different branches from the top-level
versions listed in the alembic_version table.
.. change::
:tags: bug, versioning
Fixed bug where a repr() or str() of a Script object would fail
if the script had multiple dependencies.
.. change::
:tags: bug, autogenerate
Fixed bug in autogen where if the DB connection sends the default
schema as "None", this "None" would be removed from the list of
schemas to check if include_schemas were set. This could possibly
impact using include_schemas with SQLite.
.. change::
:tags: bug, batch
Small adjustment made to the batch handling for reflected CHECK
constraints to accommodate for SQLAlchemy 1.1 now reflecting these.
Batch mode still does not support CHECK constraints from the reflected
table as these can't be easily differentiated from the ones created
by types such as Boolean.
.. changelog::
:version: 0.8.6
:released: April 14, 2016
.. change::
:tags: bug, commands
:tickets: 367
Errors which occur within the Mako render step are now intercepted
and raised as CommandErrors like other failure cases; the Mako
exception itself is written using template-line formatting to
a temporary file which is named in the exception message.
.. change::
:tags: bug, postgresql
:tickets: 365
Added a fix to Postgresql server default comparison which first checks
if the text of the default is identical to the original, before attempting
to actually run the default. This accommodates for default-generation
functions that generate a new value each time such as a uuid function.
.. change::
:tags: bug, batch
:tickets: 361
Fixed bug introduced by the fix for :ticket:`338` in version 0.8.4
where a server default could no longer be dropped in batch mode.
Pull request courtesy Martin Domke.
.. change::
:tags: bug, batch, mssql
Fixed bug where SQL Server arguments for drop_column() would not
be propagated when running under a batch block. Pull request
courtesy Michal Petrucha.
.. changelog::
:version: 0.8.5
:released: March 9, 2016
.. change::
:tags: bug, autogenerate
:tickets: 335
Fixed bug where the columns rendered in a ``PrimaryKeyConstraint``
in autogenerate would inappropriately render the "key" of the
column, not the name. Pull request courtesy Jesse Dhillon.
.. change::
:tags: bug, batch
:tickets: 354
Repaired batch migration support for "schema" types which generate
constraints, in particular the ``Boolean`` datatype which generates
a CHECK constraint. Previously, an alter column operation with this
type would fail to correctly accommodate for the CHECK constraint
on change both from and to this type. In the former case the operation
would fail entirely, in the latter, the CHECK constraint would
not get generated. Both of these issues are repaired.
.. change::
:tags: bug, mysql
:tickets: 355
Changing a schema type such as ``Boolean`` to a non-schema type would
emit a drop constraint operation which emits ``NotImplementedError`` for
the MySQL dialect. This drop constraint operation is now skipped when
the constraint originates from a schema type.
.. changelog::
:version: 0.8.4
:released: December 15, 2015
.. change::
:tags: feature, versioning
A major improvement to the hash id generation function, which for some
reason used an awkward arithmetic formula against uuid4() that produced
values that tended to start with the digits 1-4. Replaced with a
simple substring approach which provides an even distribution. Pull
request courtesy Antti Haapala.
.. change::
:tags: feature, autogenerate
Added an autogenerate renderer for the :class:`.ExecuteSQLOp` operation
object; only renders if given a plain SQL string, otherwise raises
NotImplementedError. Can be of help with custom autogenerate
sequences that includes straight SQL execution. Pull request courtesy
Jacob Magnusson.
.. change::
:tags: bug, batch
:tickets: 345
Batch mode generates a FOREIGN KEY constraint that is self-referential
using the ultimate table name, rather than ``_alembic_batch_temp``.
When the table is renamed from ``_alembic_batch_temp`` back to the
original name, the FK now points to the right name. This
will **not** work if referential integrity is being enforced (eg. SQLite
"PRAGMA FOREIGN_KEYS=ON") since the original table is dropped and
the new table then renamed to that name, however this is now consistent
with how foreign key constraints on **other** tables already operate
with batch mode; these don't support batch mode if referential integrity
is enabled in any case.
.. change::
:tags: bug, autogenerate
:tickets: 341
Added a type-level comparator that distinguishes :class:`.Integer`,
:class:`.BigInteger`, and :class:`.SmallInteger` types and
dialect-specific types; these all have "Integer" affinity so previously
all compared as the same.
.. change::
:tags: bug, batch
:tickets: 338
Fixed bug where the ``server_default`` parameter of ``alter_column()``
would not function correctly in batch mode.
.. change::
:tags: bug, autogenerate
:tickets: 337
Adjusted the rendering for index expressions such that a :class:`.Column`
object present in the source :class:`.Index` will not be rendered
as table-qualified; e.g. the column name will be rendered alone.
Table-qualified names here were failing on systems such as Postgresql.
.. changelog::
:version: 0.8.3
:released: October 16, 2015
.. change::
:tags: bug, autogenerate
:tickets: 332
Fixed an 0.8 regression whereby the "imports" dictionary member of
the autogen context was removed; this collection is documented in the
"render custom type" documentation as a place to add new imports.
The member is now known as
:attr:`.AutogenContext.imports` and the documentation is repaired.
.. change::
:tags: bug, batch
:tickets: 333
Fixed bug in batch mode where a table that had pre-existing indexes
would create the same index on the new table with the same name,
which on SQLite produces a naming conflict as index names are in a
global namespace on that backend. Batch mode now defers the production
of both existing and new indexes until after the entire table transfer
operation is complete, which also means those indexes no longer take
effect during the INSERT from SELECT section; the indexes
are applied in a single step afterwards.
.. change::
:tags: bug, tests
Added "pytest-xdist" as a tox dependency, so that the -n flag
in the test command works if this is not already installed.
Pull request courtesy Julien Danjou.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 324
Fixed issue in PG server default comparison where model-side defaults
configured with Python unicode literals would leak the "u" character
from a ``repr()`` into the SQL used for comparison, creating an invalid
SQL expression, as the server-side comparison feature in PG currently
repurposes the autogenerate Python rendering feature to get a quoted
version of a plain string default.
.. changelog::
:version: 0.8.2
:released: August 25, 2015
.. change::
:tags: bug, autogenerate
:tickets: 321
Added workaround in new foreign key option detection feature for
MySQL's consideration of the "RESTRICT" option being the default,
for which no value is reported from the database; the MySQL impl now
corrects for when the model reports RESTRICT but the database reports
nothing. A similar rule is in the default FK comparison to accommodate
for the default "NO ACTION" setting being present in the model but not
necessarily reported by the database, or vice versa.
.. changelog::
:version: 0.8.1
:released: August 22, 2015
.. change::
:tags: feature, autogenerate
A custom :paramref:`.EnvironmentContext.configure.process_revision_directives`
hook can now generate op directives within the :class:`.UpgradeOps`
and :class:`.DowngradeOps` containers that will be generated as Python
code even when the ``--autogenerate`` flag is False; provided that
``revision_environment=True``, the full render operation will be run
even in "offline" mode.
.. change::
:tags: bug, autogenerate
Repaired the render operation for the :class:`.ops.AlterColumnOp` object
to succeed when the "existing_type" field was not present.
.. change::
:tags: bug, autogenerate
:tickets: 318
Fixed a regression 0.8 whereby the "multidb" environment template
failed to produce independent migration script segments for the
output template. This was due to the reorganization of the script
rendering system for 0.8. To accommodate this change, the
:class:`.MigrationScript` structure will in the case of multiple
calls to :meth:`.MigrationContext.run_migrations` produce lists
for the :attr:`.MigrationScript.upgrade_ops` and
:attr:`.MigrationScript.downgrade_ops` attributes; each :class:`.UpgradeOps`
and :class:`.DowngradeOps` instance keeps track of its own
``upgrade_token`` and ``downgrade_token``, and each are rendered
individually.
.. seealso::
:ref:`autogen_customizing_multiengine_revision` - additional detail
on the workings of the
:paramref:`.EnvironmentContext.configure.process_revision_directives`
parameter when multiple calls to :meth:`.MigrationContext.run_migrations`
are made.
.. change::
:tags: feature, autogenerate
:tickets: 317
Implemented support for autogenerate detection of changes in the
``ondelete``, ``onupdate``, ``initially`` and ``deferrable``
attributes of :class:`.ForeignKeyConstraint` objects on
SQLAlchemy backends that support these on reflection
(as of SQLAlchemy 1.0.8 currently Postgresql for all four,
MySQL for ``ondelete`` and ``onupdate`` only). A constraint object
that modifies these values will be reported as a "diff" and come out
as a drop/create of the constraint with the modified values.
The fields are ignored for backends which don't reflect these
attributes (as of SQLA 1.0.8 this includes SQLite, Oracle, SQL Server,
others).
.. changelog::
:version: 0.8.0
:released: August 12, 2015
.. change::
:tags: bug, batch
:tickets: 315
Fixed bug in batch mode where the ``batch_op.create_foreign_key()``
directive would be incorrectly rendered with the source table and
schema names in the argument list.
.. change::
:tags: feature, commands
Added new command ``alembic edit``. This command takes the same
arguments as ``alembic show``, however runs the target script
file within $EDITOR. Makes use of the ``python-editor`` library
in order to facilitate the handling of $EDITOR with reasonable
default behaviors across platforms. Pull request courtesy
Michel Albert.
.. change::
:tags: feature, commands
:tickets: 311
Added new multiple-capable argument ``--depends-on`` to the
``alembic revision`` command, allowing ``depends_on`` to be
established at the command line level rather than having to edit
the file after the fact. ``depends_on`` identifiers may also be
specified as branch names at the command line or directly within
the migration file. The values may be specified as partial
revision numbers from the command line which will be resolved to
full revision numbers in the output file.
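The same capability is available from the Python API via the
``depends_on`` argument to :func:`.command.revision` (the identifiers
shown are hypothetical)::

    from alembic import command
    from alembic.config import Config

    cfg = Config("alembic.ini")
    command.revision(
        cfg,
        message="add reporting tables",
        depends_on=["1a2b3c4d5e6f", "userbranch"],
    )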
.. change::
:tags: change, operations
A range of positional argument names have been changed to be
clearer and more consistent across methods within the
:class:`.Operations` namespace. The most prevalent form of name change
is that the descriptive names ``constraint_name`` and ``table_name``
are now used where previously the name ``name`` would be used.
This is in support of the newly modularized and extensible system of
operation objects in :mod:`alembic.operations.ops`.
An argument translation layer is in place
across the ``alembic.op`` namespace that will ensure that named
argument calling styles that use the old names will continue to
function by transparently translating to the new names,
also emitting a warning. This, along with the fact that these
arguments are positional in any case and aren't normally
passed with an explicit name, should ensure that the
overwhelming majority of applications should be unaffected by this
change. The *only* applications that are impacted are those that:
1. use the :class:`.Operations` object directly in some way, rather
than calling upon the ``alembic.op`` namespace, and
2. invoke the methods on :class:`.Operations` using named keyword
arguments for positional arguments like ``table_name``,
``constraint_name``, etc., which commonly were named ``name``
as of 0.7.6.
3. any application that is using named keyword arguments in place
of positional argument for the recently added
:class:`.BatchOperations` object may also be affected.
The naming changes are documented as "versionchanged" for 0.8.0:
* :meth:`.BatchOperations.create_check_constraint`
* :meth:`.BatchOperations.create_foreign_key`
* :meth:`.BatchOperations.create_index`
* :meth:`.BatchOperations.create_unique_constraint`
* :meth:`.BatchOperations.drop_constraint`
* :meth:`.BatchOperations.drop_index`
* :meth:`.Operations.create_check_constraint`
* :meth:`.Operations.create_foreign_key`
* :meth:`.Operations.create_primary_key`
* :meth:`.Operations.create_index`
* :meth:`.Operations.create_table`
* :meth:`.Operations.create_unique_constraint`
* :meth:`.Operations.drop_constraint`
* :meth:`.Operations.drop_index`
* :meth:`.Operations.drop_table`
.. change::
:tags: feature, tests
The default test runner via "python setup.py test" is now py.test.
nose still works via run_tests.py.
.. change::
:tags: feature, operations
:tickets: 302
The internal system for Alembic operations has been reworked to now
build upon an extensible system of operation objects. New operations
can be added to the ``op.`` namespace, including that they are
available in custom autogenerate schemes.
.. seealso::
:ref:`operation_plugins`
.. change::
:tags: feature, autogenerate
:tickets: 301, 306
The internal system for autogenerate been reworked to build upon
the extensible system of operation objects present in
:ticket:`302`. As part of this change, autogenerate now produces
a full object graph representing a list of migration scripts to
be written as well as operation objects that will render all the
Python code within them; a new hook
:paramref:`.EnvironmentContext.configure.process_revision_directives`
allows end-user code to fully customize what autogenerate will do,
including not just full manipulation of the Python steps to take
but also what file or files will be written and where. Additionally,
autogenerate is now extensible as far as database objects compared
and rendered into scripts; any new operation directive can also be
registered into a series of hooks that allow custom database/model
comparison functions to run as well as to render new operation
directives into autogenerate scripts.
.. seealso::
:ref:`alembic.autogenerate.toplevel`
.. change::
:tags: bug, versioning
:tickets: 314
Fixed bug where in the erroneous case that alembic_version contains
duplicate revisions, some commands would fail to process the
version history correctly and end up with a KeyError. The fix
allows the versioning logic to proceed, however a clear error is
emitted later when attempting to update the alembic_version table.
.. changelog::
:version: 0.7.7
:released: July 22, 2015
.. change::
:tags: bug, versioning
:tickets: 310
Fixed critical issue where a complex series of branches/merges would
bog down the iteration algorithm working over redundant nodes for
millions of cycles. An internal adjustment has been
made so that duplicate nodes are skipped within this iteration.
.. change::
:tags: feature, batch
:tickets: 305
Implemented support for :meth:`.BatchOperations.create_primary_key`
and :meth:`.BatchOperations.create_check_constraint`. Additionally,
table keyword arguments are copied from the original reflected table,
such as the "mysql_engine" keyword argument.
.. change::
:tags: bug, environment
:tickets: 300
The :meth:`.MigrationContext.stamp` method, added as part of the
versioning refactor in 0.7 as a more granular version of
:func:`.command.stamp`, now includes the "create the alembic_version
table if not present" step in the same way as the command version,
which was previously omitted.
.. change::
:tags: bug, autogenerate
:tickets: 298
Fixed bug where foreign key options including "onupdate",
"ondelete" would not render within the ``op.create_foreign_key()``
directive, even though they render within a full
``ForeignKeyConstraint`` directive.
.. change::
:tags: bug, tests
Repaired warnings that occur when running unit tests against
SQLAlchemy 1.0.5 or greater involving the "legacy_schema_aliasing"
flag.
.. changelog::
:version: 0.7.6
:released: May 5, 2015
.. change::
:tags: feature, versioning
:tickets: 297
Fixed bug where the case of multiple mergepoints that all
have the identical set of ancestor revisions would fail to be
upgradable, producing an assertion failure. Merge points were
previously assumed to always require at least an UPDATE in
alembic_revision from one of the previous revs to the new one,
however in this case, if one of the mergepoints has already
been reached, the remaining mergepoints have no row to UPDATE therefore
they must do an INSERT of their target version.
.. change::
:tags: feature, autogenerate
:tickets: 296
Added support for type comparison functions to be not just per
environment, but also present on the custom types themselves, by
supplying a method ``compare_against_backend``.
Added a new documentation section :ref:`compare_types` describing
type comparison fully.
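A minimal sketch of the hook on a hypothetical custom type::

    import sqlalchemy as sa
    from sqlalchemy.types import TypeDecorator


    class EpochTimestamp(TypeDecorator):
        """Hypothetical type stored as a plain integer."""

        impl = sa.Integer

        def compare_against_backend(self, dialect, conn_type):
            # return True/False to declare the reflected type equivalent
            # or not; returning None falls back to the default comparison
            return isinstance(conn_type, sa.Integer)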
.. change::
:tags: feature, operations
:tickets: 255
Added a new option
:paramref:`.EnvironmentContext.configure.literal_binds`, which
will pass the ``literal_binds`` flag into the compilation of SQL
constructs when using "offline" mode. This has the effect that
SQL objects like inserts, updates, deletes as well as textual
statements sent using ``text()`` will be compiled such that the dialect
will attempt to render literal values "inline" automatically.
Only a subset of types is typically supported; the
:meth:`.Operations.inline_literal` construct remains as the construct
used to force a specific literal representation of a value.
The :paramref:`.EnvironmentContext.configure.literal_binds` flag
is added to the "offline" section of the ``env.py`` files generated
in new environments.
.. change::
:tags: bug, batch
:tickets: 289
Fully implemented the
:paramref:`~.Operations.batch_alter_table.copy_from` parameter for
batch mode, which previously was not functioning. This allows
"batch mode" to be usable in conjunction with ``--sql``.
.. change::
:tags: bug, batch
:tickets: 287
Repaired support for the :meth:`.BatchOperations.create_index`
directive, which was mis-named internally such that the operation
within a batch context could not proceed. The create index
operation will proceed as part of a larger "batch table recreate"
operation only if
:paramref:`~.Operations.batch_alter_table.recreate` is set to
"always", or if the batch operation includes other instructions that
require a table recreate.
.. changelog::
:version: 0.7.5
:released: March 19, 2015
.. change::
:tags: bug, autogenerate
:tickets: 266
The ``--autogenerate`` option is not valid when used in conjunction
with "offline" mode, e.g. ``--sql``. This now raises a ``CommandError``,
rather than failing more deeply later on. Pull request courtesy
Johannes Erdfelt.
.. change::
:tags: bug, operations, mssql
:tickets: 284
Fixed bug where the mssql DROP COLUMN directive failed to include
modifiers such as "schema" when emitting the DDL.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 282
Postgresql "functional" indexes are necessarily skipped from the
autogenerate process, as the SQLAlchemy backend currently does not
support reflection of these structures. A warning is emitted
both from the SQLAlchemy backend as well as from the Alembic
backend for Postgresql when such an index is detected.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 276
Fixed bug where MySQL backend would report dropped unique indexes
and/or constraints as both at the same time. This is because
MySQL doesn't actually have a "unique constraint" construct that
reports differently than a "unique index", so it is present in both
lists. The net effect though is that the MySQL backend will report
a dropped unique index/constraint as an index in cases where the object
was first created as a unique constraint, if no other information
is available to make the decision. This differs from other backends
like Postgresql which can report on unique constraints and
unique indexes separately.
.. change::
:tags: bug, commands
:tickets: 269
Fixed bug where using a partial revision identifier as the
"starting revision" in ``--sql`` mode in a downgrade operation
would fail to resolve properly.
As a side effect of this change, the
:meth:`.EnvironmentContext.get_starting_revision_argument`
method will return the "starting" revision in its originally-
given "partial" form in all cases, whereas previously when
running within the :meth:`.command.stamp` command, it would have
been resolved to a full number before passing it to the
:class:`.EnvironmentContext`. The resolution of this value to
a real revision number has basically been moved to a more fundamental
level within the offline migration process.
.. change::
:tags: feature, commands
Added a new feature :attr:`.Config.attributes`, to help with the use
case of sharing state such as engines and connections on the outside
with a series of Alembic API calls; also added a new cookbook section
to describe this simple but pretty important use case.
.. seealso::
:ref:`connection_sharing`
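A minimal sketch of the pattern, sharing a live connection with
``env.py`` (the function name is hypothetical)::

    from alembic import command
    from alembic.config import Config


    def run_upgrade(connection):
        cfg = Config("alembic.ini")
        cfg.attributes["connection"] = connection
        command.upgrade(cfg, "head")

Inside ``env.py``, the connection would then be retrieved with
``config.attributes.get("connection", None)``, falling back to creating
an engine when it is not present.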
.. change::
:tags: feature, environment
The format of the default ``env.py`` script has been refined a bit;
it now uses context managers not only for the scope of the transaction,
but also for connectivity from the starting engine. The engine is also
now called a "connectable" in support of the use case of an external
connection being passed in.
.. change::
:tags: feature, versioning
:tickets: 267
Added support for "alembic stamp" to work when given "heads" as an
argument, when multiple heads are present.
.. changelog::
:version: 0.7.4
:released: January 12, 2015
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 241
Repaired issue where a server default specified without ``text()``
that represented a numeric or floating point (e.g. with decimal places)
value would fail in the Postgresql-specific check for "compare server
default"; as PG accepts the value with quotes in the table specification,
it's still valid. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug, autogenerate
:tickets: 259
The rendering of a :class:`~sqlalchemy.schema.ForeignKeyConstraint`
will now ensure that the names of the source and target columns are
the database-side name of each column, and not the value of the
``.key`` attribute as may be set only on the Python side.
This is because Alembic generates the DDL for constraints
as standalone objects without the need to actually refer to an in-Python
:class:`~sqlalchemy.schema.Table` object, so there's no step that
would resolve these Python-only key names to database column names.
.. change::
:tags: bug, autogenerate
:tickets: 260
Fixed bug in foreign key autogenerate where if the in-Python table
used custom column keys (e.g. using the ``key='foo'`` kwarg to
``Column``), the comparison of existing foreign keys to those specified
in the metadata would fail, as the reflected table would not have
these keys available with which to match up. Foreign key comparison for
autogenerate now ensures it's looking at the database-side names
of the columns in all cases; this matches the same functionality
within unique constraints and indexes.
.. change::
:tags: bug, autogenerate
:tickets: 261
Fixed issue in autogenerate type rendering where types that belong
to modules that have the name "sqlalchemy" in them would be mistaken
as being part of the ``sqlalchemy.`` namespace. Pull req courtesy
Bartosz Burclaf.
.. changelog::
:version: 0.7.3
:released: December 30, 2014
.. change::
:tags: bug, versioning
:tickets: 258
Fixed regression in new versioning system where upgrade / history
operation would fail on AttributeError if no version files were
present at all.
.. changelog::
:version: 0.7.2
:released: December 18, 2014
.. change::
:tags: bug, sqlite, autogenerate
Adjusted the SQLite backend regarding autogen of unique constraints
to work fully with the current SQLAlchemy 1.0, which now will report
on UNIQUE constraints that have no name.
.. change::
:tags: bug, batch
:tickets: 254
Fixed bug in batch where if the target table contained multiple
foreign keys to the same target table, the batch mechanics would
fail with a "table already exists" error. Thanks for the help
on this from Lucas Kahlert.
.. change::
:tags: bug, mysql
:tickets: 251
Fixed an issue where the MySQL routine to skip foreign-key-implicit
indexes would also catch unnamed unique indexes, as they would be
named after the column and look like the FK indexes. Pull request
courtesy Johannes Erdfelt.
.. change::
:tags: bug, mssql, oracle
:tickets: 253
Repaired a regression in both the MSSQL and Oracle dialects whereby
the overridden ``_exec()`` method failed to return a value, as is
needed now in the 0.7 series.
.. changelog::
:version: 0.7.1
:released: December 3, 2014
.. change::
:tags: bug, batch
The ``render_as_batch`` flag was inadvertently hardcoded to ``True``,
so all autogenerates were emitting batch mode; this has been
fixed so that batch mode is again used only when selected in ``env.py``.
.. change::
:tags: feature, autogenerate
:tickets: 178
Support for autogenerate of FOREIGN KEY constraints has been added.
These are delivered within the autogenerate process in the same
manner as UNIQUE constraints, including ``include_object`` support.
Big thanks to Ann Kamyshnikova for doing the heavy lifting here.
.. change::
:tags: feature, batch
Added :paramref:`~.Operations.batch_alter_table.naming_convention`
argument to :meth:`.Operations.batch_alter_table`, as this is necessary
in order to drop foreign key constraints; these are often unnamed
on the target database, and in the case that they are named, SQLAlchemy
is as of the 0.9 series not including these names yet.
.. seealso::
:ref:`dropping_sqlite_foreign_keys`
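A sketch of the intended pattern (table, convention and constraint
names are illustrative)::

    from alembic import op

    naming_convention = {
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    }

    def upgrade():
        with op.batch_alter_table(
                "user", naming_convention=naming_convention) as batch_op:
            # the convention supplies a name for the otherwise-unnamed
            # foreign key so that it can be dropped
            batch_op.drop_constraint(
                "fk_user_address_id_address", type_="foreignkey")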
.. change::
:tags: bug, batch
Fixed bug where the "source_schema" argument was not correctly passed
when calling :meth:`.BatchOperations.create_foreign_key`. Pull
request courtesy Malte Marquarding.
.. change::
:tags: bug, batch
:tickets: 249
Repaired the inspection, copying and rendering of CHECK constraints
and so-called "schema" types such as Boolean, Enum within the batch
copy system; the CHECK constraint will not be "doubled" when the table is
copied, and additionally the inspection of the CHECK constraint for
its member columns will no longer fail with an attribute error.
.. change::
:tags: feature, batch
Added two new arguments
:paramref:`.Operations.batch_alter_table.reflect_args`
and :paramref:`.Operations.batch_alter_table.reflect_kwargs`, so that
arguments may be passed directly to suit the
:class:`~sqlalchemy.schema.Table`
object that will be reflected.
.. seealso::
:ref:`batch_controlling_table_reflection`
.. changelog::
:version: 0.7.0
:released: November 24, 2014
.. change::
:tags: feature, versioning
:tickets: 167
The "multiple heads / branches" feature has now landed. This is
by far the most significant change Alembic has seen since its inception;
while the workflow of most commands hasn't changed, and the format
of version files and the ``alembic_version`` table are unchanged as well,
a new suite of features opens up in the case where multiple version
files refer to the same parent, or to the "base". Merging of
branches, operating across distinct named heads, and multiple
independent bases are now all supported. The feature incurs radical
changes to the internals of versioning and traversal, and should be
treated as "beta mode" for the next several subsequent releases
within 0.7.
.. seealso::
:ref:`branches`
.. change::
:tags: feature, versioning
:tickets: 124
In conjunction with support for multiple independent bases, the
specific version directories are now also configurable to include
multiple, user-defined directories. When multiple directories exist,
the creation of a revision file with no down revision requires
that the starting directory is indicated; the creation of subsequent
revisions along that lineage will then automatically use that
directory for new files.
.. seealso::
:ref:`multiple_version_directories`
.. change::
:tags: feature, operations, sqlite
:tickets: 21
Added "move and copy" workflow, where a table to be altered is copied to
a new one with the new structure and the old one dropped, is now
implemented for SQLite as well as all database backends in general
using the new :meth:`.Operations.batch_alter_table` system. This
directive provides a table-specific operations context which gathers
column- and constraint-level mutations specific to that table, and
at the end of the context creates a new table combining the structure
of the old one with the given changes, copies data from old table to new,
and finally drops the old table,
renaming the new one to the existing name. This is required for
fully featured SQLite migrations, as SQLite has very little support for the
traditional ALTER directive. The batch directive
is intended to produce code that is still compatible with other databases,
in that the "move and copy" process only occurs for SQLite by default,
while still providing some level of sanity to SQLite's
requirement by allowing multiple table mutation operations to
proceed within one "move and copy" as well as providing explicit
control over when this operation actually occurs. The "move and copy"
feature may be optionally applied to other backends as well, however
dealing with referential integrity constraints from other tables must
still be handled explicitly.
.. seealso::
:ref:`batch_migrations`
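A typical invocation, sketched with hypothetical table and column
names::

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # all directives inside the block are gathered and applied via
        # a single "move and copy" of the table on SQLite
        with op.batch_alter_table("account") as batch_op:
            batch_op.add_column(sa.Column("last_name", sa.String(50)))
            batch_op.alter_column("name", new_column_name="first_name")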
.. change::
:tags: feature, commands
Relative revision identifiers as used with ``alembic upgrade``,
``alembic downgrade`` and ``alembic history`` can be combined with
specific revisions as well, e.g. ``alembic upgrade ae10+3``, to produce
a migration target relative to the given exact version.
.. change::
:tags: bug, commands
:tickets: 248
The ``alembic revision`` command accepts the ``--sql`` option to
suit some very obscure use case where the ``revision_environment``
flag is set up, so that ``env.py`` is run when ``alembic revision``
is run even though autogenerate isn't specified. As this flag is
otherwise confusing, error messages are now raised if
``alembic revision`` is invoked with both ``--sql`` and
``--autogenerate`` or with ``--sql`` without
``revision_environment`` being set.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 247
Added a rule for Postgresql to not render a "drop unique" and "drop index"
given the same name; for now it is assumed that the "index" is the
implicit one PostgreSQL generates. Future integration with
new SQLAlchemy 1.0 features will improve this to be more
resilient.
.. change::
:tags: bug, autogenerate
:tickets: 247
A change in the ordering when columns and constraints are dropped;
autogenerate will now place the "drop constraint" calls *before*
the "drop column" calls, so that columns involved in those constraints
still exist when the constraint is dropped.
.. change::
:tags: feature, commands
New commands added: ``alembic show``, ``alembic heads`` and
``alembic merge``. Also, a new option ``--verbose`` has been
added to several informational commands, such as ``alembic history``,
``alembic current``, ``alembic branches``, and ``alembic heads``.
``alembic revision`` also contains several new options used
within the new branch management system. The output of commands has
been altered in many cases to support new fields and attributes;
the ``history`` command in particular now returns it's "verbose" output
only if ``--verbose`` is sent; without this flag it reverts to it's
older behavior of short line items (which was never changed in the docs).
.. change::
:tags: changed, commands
The ``--head_only`` option to the ``alembic current`` command is
deprecated; the ``current`` command now lists just the version numbers
alone by default; use ``--verbose`` to get at additional output.
.. change::
:tags: feature, config
Added new argument :paramref:`.Config.config_args`, allows a dictionary
of replacement variables to be passed which will serve as substitution
values when an API-produced :class:`.Config` consumes the ``.ini``
file. Pull request courtesy Noufal Ibrahim.
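A sketch, assuming the ``.ini`` file refers to a token such as
``%(db_url)s``::

    from alembic.config import Config

    cfg = Config(
        "alembic.ini",
        config_args={"db_url": "postgresql://scott:tiger@localhost/test"},
    )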
.. change::
:tags: bug, oracle
:tickets: 245
The Oracle dialect sets "transactional DDL" to False by default,
as Oracle does not support transactional DDL.
.. change::
:tags: bug, autogenerate
:tickets: 243
Fixed a variety of issues surrounding rendering of Python code that
contains unicode literals. The first is that the "quoted_name" construct
that SQLAlchemy uses to represent table and column names as well
as schema names does not ``repr()`` correctly on Py2K when the value
contains unicode characters; therefore an explicit stringification is
added to these. Additionally, SQL expressions such as server defaults
were not being generated in a unicode-safe fashion leading to decode
errors if server defaults contained non-ascii characters.
.. change::
:tags: bug, operations
:tickets: 174
The :meth:`.Operations.add_column` directive will now additionally emit
the appropriate ``CREATE INDEX`` statement if the
:class:`~sqlalchemy.schema.Column` object specifies ``index=True``.
Pull request courtesy David Szotten.
.. change::
:tags: feature, operations
:tickets: 205
The :class:`~sqlalchemy.schema.Table` object is now returned when
the :meth:`.Operations.create_table` method is used. This ``Table``
is suitable for use in subsequent SQL operations, in particular
the :meth:`.Operations.bulk_insert` operation.
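For example, using the returned table with ``bulk_insert()`` (names are
illustrative)::

    from alembic import op
    import sqlalchemy as sa

    accounts = op.create_table(
        "accounts",
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("name", sa.String(50)),
    )
    op.bulk_insert(
        accounts,
        [
            {"id": 1, "name": "account one"},
            {"id": 2, "name": "account two"},
        ],
    )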
.. change::
:tags: feature, autogenerate
:tickets: 203
Indexes and unique constraints are now included in the
:paramref:`.EnvironmentContext.configure.include_object` hook.
Indexes are sent with type ``"index"`` and unique constraints with
type ``"unique_constraint"``.
.. change::
:tags: bug, autogenerate
:tickets: 219
Bound parameters are now resolved as "literal" values within the
SQL expression inside of a CheckConstraint(), when rendering the SQL
as a text string; supported for SQLAlchemy 0.8.0 and forward.
.. change::
:tags: bug, autogenerate
:tickets: 199
Added a workaround for SQLAlchemy issue #3023 (fixed in 0.9.5) where
a column that's part of an explicit PrimaryKeyConstraint would not
have its "nullable" flag set to False, thus producing a false
autogenerate. Also added a related correction to MySQL which will
correct for MySQL's implicit server default of '0' when a NULL integer
column is turned into a primary key column.
.. change::
:tags: bug, autogenerate, mysql
:tickets: 240
Repaired issue related to the fix for #208 and others; a composite
foreign key reported by MySQL would cause a KeyError as Alembic
attempted to remove MySQL's implicitly generated indexes from the
autogenerate list.
.. change::
:tags: bug, autogenerate
:tickets: 28
If the "alembic_version" table is present in the target metadata,
autogenerate will skip this also. Pull request courtesy
Dj Gilcrease.
.. change::
:tags: bug, autogenerate
:tickets: 77
The :paramref:`.EnvironmentContext.configure.version_table`
and :paramref:`.EnvironmentContext.configure.version_table_schema`
arguments are now honored during the autogenerate process, such that
these names will be used as the "skip" names on both the database
reflection and target metadata sides.
.. change::
:tags: changed, autogenerate
:tickets: 229
The default value of the
:paramref:`.EnvironmentContext.configure.user_module_prefix`
parameter is **no longer the same as the SQLAlchemy prefix**.
When omitted, user-defined types will now use the ``__module__``
attribute of the type class itself when rendering in an
autogenerated module.
.. change::
:tags: bug, templates
:tickets: 234
Revision files are now written out using the ``'wb'`` modifier to
``open()``, since Mako reads the templates with ``'rb'``, thus preventing
CRs from being doubled up as has been observed on windows. The encoding
of the output now defaults to 'utf-8', which can be configured using
a newly added config file parameter ``output_encoding``.
.. change::
:tags: bug, operations
:tickets: 230
Added support for use of the :class:`~sqlalchemy.sql.elements.quoted_name`
construct when using the ``schema`` argument within operations. This
allows a name containing a dot to be fully quoted, as well as to
provide configurable quoting on a per-name basis.
.. change::
:tags: bug, autogenerate, postgresql
:tickets: 73
Added a routine by which the Postgresql Alembic dialect inspects
the server default of INTEGER/BIGINT columns as they are reflected
during autogenerate for the pattern ``nextval(<name>...)`` containing
a potential sequence name, then queries ``pg_catalog`` to see if this
sequence is "owned" by the column being reflected; if so, it assumes
this is a SERIAL or BIGSERIAL column and the server default is
omitted from the column reflection as well as any kind of
server_default comparison or rendering, along with an INFO message
in the logs indicating this has taken place. This allows SERIAL/BIGSERIAL
columns to keep the SEQUENCE from being unnecessarily present within
the autogenerate operation.
.. change::
:tags: bug, autogenerate
:tickets: 197, 64, 196
The system by which autogenerate renders expressions within
a :class:`~sqlalchemy.schema.Index`, the ``server_default``
of :class:`~sqlalchemy.schema.Column`, and the
``existing_server_default`` of
:meth:`.Operations.alter_column` has been overhauled to anticipate
arbitrary SQLAlchemy SQL constructs, such as ``func.somefunction()``,
``cast()``, ``desc()``, and others. The system does not, as might
be preferred, render the full-blown Python expression as originally
created within the application's source code, as this would be exceedingly
complex and difficult. Instead, it renders the SQL expression against
the target backend that's subject to the autogenerate, and then
renders that SQL inside of a :func:`~sqlalchemy.sql.expression.text`
construct as a literal SQL string. This approach still has the
downside that the rendered SQL construct may not be backend-agnostic
in all cases, so there is still a need for manual intervention in that
small number of cases, but overall the majority of cases should work
correctly now. Big thanks to Carlos Rivera for pull requests and
support on this.
.. change::
:tags: feature
SQLAlchemy's testing infrastructure is now used to run tests.
This system supports both nose and pytest and opens the way
for Alembic testing to support any number of backends, parallel
testing, and 3rd party dialect testing.
.. change::
:tags: changed, compatibility
Minimum SQLAlchemy version is now 0.7.6, however at least
0.8.4 is strongly recommended. The overhaul of the test suite
allows for fully passing tests on all SQLAlchemy versions
from 0.7.6 on forward.
.. change::
:tags: bug, operations
The "match" keyword is not sent to :class:`.ForeignKeyConstraint`
by :meth:`.Operations.create_foreign_key` when SQLAlchemy 0.7 is in use;
this keyword was added to SQLAlchemy as of 0.8.0.
.. changelog::
:version: 0.6.7
:released: September 9, 2014
.. change::
:tags: bug, mssql
Fixed bug in MSSQL dialect where "rename table" wasn't using
``sp_rename()`` as is required on SQL Server. Pull request courtesy
Łukasz Bołdys.
.. change::
:tags: feature
:tickets: 222
Added support for functional indexes when using the
:meth:`.Operations.create_index` directive. Within the list of columns,
the SQLAlchemy ``text()`` construct can be sent, embedding a literal
SQL expression; the :meth:`.Operations.create_index` will perform some hackery
behind the scenes to get the :class:`.Index` construct to cooperate.
This works around some current limitations in :class:`.Index`
which should be resolved on the SQLAlchemy side at some point.
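For example::

    from alembic import op
    from sqlalchemy import text

    # embeds a literal SQL expression as an index element
    op.create_index("ix_user_lower_name", "user", [text("lower(name)")])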
.. changelog::
:version: 0.6.6
:released: August 7, 2014
.. change::
:tags: bug
:tickets: 95
A file named ``__init__.py`` in the ``versions/`` directory is now
ignored by Alembic when the collection of version files is retrieved.
Pull request courtesy Michael Floering.
.. change::
:tags: bug
Fixed Py3K bug where an attempt would be made to sort None against
string values when autogenerate would detect tables across multiple
schemas, including the default schema. Pull request courtesy
paradoxxxzero.
.. change::
:tags: bug
Autogenerate render will render the arguments within a Table construct
using ``*[...]`` when the number of columns/elements is greater than
255. Pull request courtesy Ryan P. Kelly.
.. change::
:tags: bug
Fixed bug where foreign key constraints would fail to render in
autogenerate when a schema name was present. Pull request courtesy
Andreas Zeidler.
.. change::
:tags: bug
:tickets: 212
Some deep-in-the-weeds fixes to try to get "server default" comparison
working better across platforms and expressions, in particular on
the Postgresql backend, mostly dealing with quoting/not quoting of various
expressions at the appropriate time and on a per-backend basis.
Repaired and tested support for such defaults as Postgresql interval
and array defaults.
.. change::
:tags: enhancement
:tickets: 209
When a run of Alembic command line fails due to ``CommandError``,
the output now prefixes the string with ``"FAILED:"``, and the error
is also written to the log output using ``log.error()``.
.. change::
:tags: bug
:tickets: 208
Liberalized even more the check for MySQL indexes that shouldn't be
counted in autogenerate as "drops"; this time it's been reported
that an implicitly created index might be named the same as a composite
foreign key constraint, and not the actual columns, so we now skip those
when detected as well.
.. change::
:tags: feature
Added a new accessor :attr:`.MigrationContext.config`, when used
in conjunction with a :class:`.EnvironmentContext` and
:class:`.Config`, this config will be returned. Patch
courtesy Marc Abramowitz.
.. changelog::
:version: 0.6.5
:released: May 3, 2014
.. change::
:tags: bug, autogenerate, mysql
:tickets: 202
This releases' "autogenerate index detection" bug, when a MySQL table
includes an Index with the same name as a column, autogenerate reported
it as an "add" even though its not; this is because we ignore reflected
indexes of this nature due to MySQL creating them implicitly. Indexes
that are named the same as a column are now ignored on
MySQL if we see that the backend is reporting that it already exists;
this indicates that we can still detect additions of these indexes
but not drops, as we cannot distinguish a backend index same-named
as the column as one that is user generated or mysql-generated.
.. change::
:tags: feature, environment
:tickets: 201
Added new feature :paramref:`.EnvironmentContext.configure.transaction_per_migration`,
which when True causes the BEGIN/COMMIT pair to incur for each migration
individually, rather than for the whole series of migrations. This is
to assist with some database directives that need to be within individual
transactions, without the need to disable transactional DDL entirely.
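Enabled within ``env.py``, sketched::

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        transaction_per_migration=True,
    )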
.. change::
:tags: bug, autogenerate
:tickets: 200
Fixed bug where the ``include_object()`` filter would not receive
the original :class:`.Column` object when evaluating a database-only
column to be dropped; the object would not include the parent
:class:`.Table` nor other aspects of the column that are important
for generating the "downgrade" case where the column is recreated.
.. change::
:tags: bug, environment
:tickets: 195
Fixed bug where :meth:`.EnvironmentContext.get_x_argument`
would fail if the :class:`.Config` in use didn't actually
originate from a command line call.
.. change::
:tags: bug, autogenerate
:tickets: 194
Fixed another bug regarding naming conventions, continuing
from :ticket:`183`, where add_index() and
drop_index() directives would not correctly render the ``f()``
construct when the index contained a convention-driven name.
.. changelog::
:version: 0.6.4
:released: March 28, 2014
.. change::
:tags: bug, mssql
:tickets: 186
Added quoting to the table name when the special EXEC is run to
drop any existing server defaults or constraints when the
:paramref:`.Operations.drop_column.mssql_drop_check` or
:paramref:`.Operations.drop_column.mssql_drop_default`
arguments are used.
.. change::
:tags: bug, mysql
:tickets: 103
Added/fixed support for MySQL "SET DEFAULT" / "DROP DEFAULT" phrases,
which will now be rendered if only the server default is changing
or being dropped (e.g. specify None to alter_column() to indicate
"DROP DEFAULT"). Also added support for rendering MODIFY rather than
CHANGE when the column name isn't changing.
.. change::
:tags: bug
:tickets: 190
Added support for the ``initially``, ``match`` keyword arguments
as well as dialect-specific keyword arguments to
:meth:`.Operations.create_foreign_key`.
.. change::
:tags: feature
:tickets: 163
Altered the support for "sourceless" migration files (e.g. only
.pyc or .pyo present) so that the flag "sourceless=true" needs to
be in alembic.ini for this behavior to take effect.
.. change::
:tags: bug, mssql
:tickets: 185
The feature that keeps on giving, index/unique constraint autogenerate
detection, has even more fixes, this time to accommodate database dialects
that don't yet report on unique constraints, but whose backend
does report unique constraints as indexes. The logic
Alembic uses to distinguish between "this is an index!" vs.
"this is a unique constraint that is also reported as an index!" has now
been further enhanced to not produce unwanted migrations when the dialect
is observed to not yet implement get_unique_constraints() (e.g. mssql).
Note that such a backend will no longer report index drops for unique
indexes, as these cannot be distinguished from an unreported unique
index.
.. change::
:tags: bug
:tickets: 183
Extensive changes have been made to more fully support SQLAlchemy's new
naming conventions feature. Note that while SQLAlchemy has added this
feature as of 0.9.2, some additional fixes in 0.9.4 are needed to
resolve some of the issues:
1. The :class:`.Operations` object now takes into account the naming
conventions that are present on the :class:`.MetaData` object that's
associated using :paramref:`~.EnvironmentContext.configure.target_metadata`.
When :class:`.Operations` renders a constraint directive like
``ADD CONSTRAINT``, it now will make use of this naming convention
when it produces its own temporary :class:`.MetaData` object.
2. Note however that the autogenerate feature in most cases generates
constraints like foreign keys and unique constraints with the
final names intact; the only exception are the constraints implicit
with a schema-type like Boolean or Enum. In most of these cases,
the naming convention feature will not take effect for these constraints
and will instead use the given name as is, with one exception....
3. Naming conventions which use the ``"%(constraint_name)s"`` token, that
is, produce a new name that uses the original name as a component,
will still be pulled into the naming convention converter and be
converted. The problem arises when autogenerate renders a constraint
with its already-generated name present in the migration file's source
code; the name will be doubled up at render time due to the combination
of #1 and #2. So to work around this, autogenerate now renders these
already-tokenized names using the new :meth:`.Operations.f` component.
This component is only generated if **SQLAlchemy 0.9.4** or greater
is in use.
Therefore it is highly recommended that an upgrade to Alembic 0.6.4
be accompanied by an upgrade of SQLAlchemy 0.9.4, if the new naming
conventions feature is used.
.. seealso::
:ref:`autogen_naming_conventions`
.. change::
:tags: bug
:tickets: 160
Suppressed IOErrors which can be raised when the program output pipe
is closed under a program like ``head``; however this only
works on Python 2. On Python 3, there is not yet a known way to
suppress the BrokenPipeError warnings without prematurely terminating
the program via signals.
.. change::
:tags: bug
:tickets: 179
Fixed bug where :meth:`.Operations.bulk_insert` would not function
properly when :meth:`.Operations.inline_literal` values were used,
either in --sql or non-sql mode. The values will now render
directly in --sql mode. For compatibility with "online" mode,
a new flag :paramref:`~.Operations.bulk_insert.multiinsert`
can be set to False which will cause each parameter set to be
compiled and executed with individual INSERT statements.
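A sketch combining the two (the ``accounts`` table object is assumed to
exist)::

    from alembic import op

    op.bulk_insert(
        accounts,
        [
            {"id": 1, "name": op.inline_literal("account one")},
            {"id": 2, "name": op.inline_literal("account two")},
        ],
        # compile and execute each parameter set individually, for
        # compatibility with inline_literal() in "online" mode
        multiinsert=False,
    )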
.. change::
:tags: bug, py3k
:tickets: 175
Fixed a failure of the system that allows "legacy keyword arguments"
to be understood, which arose as of a change in Python 3.4 regarding
decorators. A workaround is applied that allows the code to work
across Python 3 versions.
.. change::
:tags: feature
The :func:`.command.revision` command now returns the :class:`.Script`
object corresponding to the newly generated revision. From this
structure, one can get the revision id, the module documentation,
and everything else, for use in scripts that call upon this command.
Pull request courtesy Robbie Coomber.
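For example::

    from alembic import command
    from alembic.config import Config

    cfg = Config("alembic.ini")
    script = command.revision(cfg, message="add accounts table")
    # the Script object exposes the new revision id, docs, etc.
    print(script.revision)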
.. changelog::
:version: 0.6.3
:released: February 2, 2014
.. change::
:tags: bug
:tickets: 172
Added a workaround for when we call ``fcntl.ioctl()`` to get at
``TERMWIDTH``; if the function returns zero, as is reported to occur
in some pseudo-ttys, the message wrapping system is disabled in the
same way as if ``ioctl()`` failed.
.. change::
:tags: feature
:tickets: 171
Added new argument
:paramref:`.EnvironmentContext.configure.user_module_prefix`.
This prefix is applied when autogenerate renders a user-defined type,
which here is defined as any type that is from a module outside of the
``sqlalchemy.`` hierarchy. This prefix defaults to ``None``, in
which case the :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`
is used, thus preserving the current behavior.
.. change::
:tags: bug
:tickets: 170
Added support for autogenerate covering the use case where :class:`.Table`
objects specified in the metadata have an explicit ``schema`` attribute
whose name matches that of the connection's default schema
(e.g. "public" for Postgresql). Previously, it was assumed that "schema"
was ``None`` when it matched the "default" schema, now the comparison
adjusts for this.
.. change::
:tags: bug
The :func:`.compare_metadata` public API function now takes into
account the settings for
:paramref:`.EnvironmentContext.configure.include_object`,
:paramref:`.EnvironmentContext.configure.include_symbol`,
and :paramref:`.EnvironmentContext.configure.include_schemas`, in the
same way that the ``--autogenerate`` command does. Pull
request courtesy Roman Podoliaka.
.. change::
:tags: bug
:tickets: 168
Calling :func:`.bulk_insert` with an empty list will not emit any
commands on the current connection. This was already the case with
``--sql`` mode, so is now the case with "online" mode.
.. change::
:tags: bug
Enabled schema support for index and unique constraint autodetection;
previously these were non-functional and could in some cases lead to
attribute errors. Pull request courtesy Dimitris Theodorou.
.. change::
:tags: bug
:tickets: 164
More fixes to index autodetection; indexes created with expressions
like DESC or functional indexes will no longer cause AttributeError
exceptions when attempting to compare the columns.
.. change::
:tags: feature
:tickets: 163
The :class:`.ScriptDirectory` system that loads migration files
from a ``versions/`` directory now supports so-called
"sourceless" operation, where the ``.py`` files are not present
and instead ``.pyc`` or ``.pyo`` files are directly present where
the ``.py`` files should be. Note that while Python 3.3 has a
new system of locating ``.pyc``/``.pyo`` files within a directory
called ``__pycache__`` (e.g. PEP-3147), PEP-3147 maintains
support for the "source-less imports" use case, where the
``.pyc``/``.pyo`` are in present in the "old" location, e.g. next
to the ``.py`` file; this is the usage that's supported even when
running Python3.3.
.. changelog::
:version: 0.6.2
:released: Fri Dec 27 2013
.. change::
:tags: bug
Autogenerate for ``op.create_table()`` will not include a
``PrimaryKeyConstraint()`` that has no columns.
.. change::
:tags: bug
Fixed bug in the not-internally-used :meth:`.ScriptDirectory.get_base`
method which would fail if called on an empty versions directory.
.. change::
:tags: bug
:tickets: 157
An almost-rewrite of the new unique constraint/index autogenerate
detection, to accommodate a variety of issues. The emphasis is on
not generating false positives for those cases where no net change
is present, as these errors are the ones that impact all autogenerate
runs:
* Fixed an issue with unique constraint autogenerate detection where
a named ``UniqueConstraint`` on both sides with column changes would
render with the "add" operation before the "drop", requiring the
user to reverse the order manually.
* Corrected for MySQL's apparent addition of an implicit index
for a foreign key column, so that it doesn't show up as "removed".
This required that the index/constraint autogen system query the
dialect-specific implementation for special exceptions.
* reworked the "dedupe" logic to accommodate MySQL's bi-directional
duplication of unique indexes as unique constraints, and unique
constraints as unique indexes. Postgresql's slightly different
logic of duplicating unique constraints into unique indexes
continues to be accommodated as well. Note that a unique index
or unique constraint removal on a backend that duplicates these may
show up as a distinct "remove_constraint()" / "remove_index()" pair,
which may need to be corrected in the post-autogenerate if multiple
backends are being supported.
* added another dialect-specific exception to the SQLite backend
when dealing with unnamed unique constraints, as the backend can't
currently report on constraints that were made with this technique,
hence they'd come out as "added" on every run.
* the ``op.create_table()`` directive will be auto-generated with
the ``UniqueConstraint`` objects inline, but will not double them
up with a separate ``create_unique_constraint()`` call, which may
have been occurring. Indexes still get rendered as distinct
``op.create_index()`` calls even when the corresponding table was
created in the same script.
* the inline ``UniqueConstraint`` within ``op.create_table()`` includes
all the options like ``deferrable``, ``initially``, etc. Previously
these weren't rendering.
.. change::
:tags: feature, mssql
Added new argument ``mssql_drop_foreign_key`` to
:meth:`.Operations.drop_column`. Like ``mssql_drop_default``
and ``mssql_drop_check``, will do an inline lookup for a
single foreign key which applies to this column, and drop it.
For a column with more than one FK, you'd still need to explicitly
use :meth:`.Operations.drop_constraint` given the name,
even though only MSSQL has this limitation in the first place.
.. change::
:tags: bug, mssql
The MSSQL backend will add the batch separator (e.g. ``"GO"``)
in ``--sql`` mode after the final ``COMMIT`` statement, to ensure
that statement is also processed in batch mode. Courtesy
Derek Harland.
.. changelog::
:version: 0.6.1
:released: Wed Nov 27 2013
.. change::
:tags: bug, mysql
:tickets: 152
Fixed bug where :func:`.op.alter_column` in the MySQL dialect
would fail to apply quotes to column names that had mixed casing
or spaces.
.. change::
:tags: feature
Expanded the size of the "slug" generated by "revision" to 40
characters, which is also configurable by new field
``truncate_slug_length``; and also split on the word rather than the
character; courtesy Frozenball.
.. change::
:tags: bug
:tickets: 135
Fixed the output wrapping for Alembic message output, so that
we either get the terminal width for "pretty printing" with
indentation, or if not we just output the text as is; in any
case the text won't be wrapped too short.
.. change::
:tags: bug
Fixes to Py3k in-place compatibility regarding output encoding and related;
the use of the new io.* package introduced some incompatibilities on Py2k.
These should be resolved, due to the introduction of new adapter types
for translating from io.* to Py2k file types, including StringIO types.
Thanks to Javier Santacruz for help with this.
.. change::
:tags: bug
:tickets: 145
Fixed py3k bug where the wrong form of ``next()`` was being called
when using the list_templates command. Courtesy Chris Wilkes.
.. change::
:tags: feature
:tickets: 107
Support for autogeneration detection and rendering of indexes and
unique constraints has been added. The logic goes through some effort
in order to differentiate between true unique constraints and
unique indexes, where there are some quirks on backends like Postgresql.
The effort here in producing the feature and tests is courtesy of IJL.
.. change::
:tags: bug
Fixed bug introduced by new ``include_object`` argument where the
inspected column would be misinterpreted when using a user-defined
type comparison function, causing a KeyError or similar expression-related
error. Fix courtesy Maarten van Schaik.
.. change::
:tags: bug
Added the "deferrable" keyword argument to :func:`.op.create_foreign_key`
so that ``DEFERRABLE`` constraint generation is supported; courtesy
Pedro Romano.
.. change::
:tags: bug
:tickets: 137
Ensured that strings going to stdout go through an encode/decode phase,
so that any non-ASCII characters get to the output stream correctly
in both Py2k and Py3k. Also added source encoding detection using
Mako's parse_encoding() routine in Py2k so that the __doc__ of a
non-ascii revision file can be treated as unicode in Py2k.
.. changelog::
:version: 0.6.0
:released: Fri July 19 2013
.. change::
:tags: feature
:tickets: 101
Added new kw argument to :meth:`.EnvironmentContext.configure`
``include_object``. This is a more flexible version of the
``include_symbol`` argument which allows filtering of columns as well as tables
from the autogenerate process,
and in the future will also work for types, constraints and
other constructs. The fully constructed schema object is passed,
including its name and type as well as a flag indicating if the object
is from the local application metadata or is reflected.
.. change::
:tags: feature
The output of the ``alembic history`` command is now
expanded to show information about each change on multiple
lines, including the full top message,
resembling the formatting of git log.
.. change::
:tags: feature
Added :attr:`alembic.config.Config.cmd_opts` attribute,
allows access to the ``argparse`` options passed to the
``alembic`` runner.
.. change::
:tags: feature
:tickets: 120
Added new command line argument ``-x``, allows extra arguments
to be appended to the command line which can be consumed
within an ``env.py`` script by looking at
``context.config.cmd_opts.x``, or more simply a new
method :meth:`.EnvironmentContext.get_x_argument`.
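Within ``env.py``, sketched (the ``db_url`` key is hypothetical)::

    from alembic import context

    # invoked e.g. as: alembic -x db_url=postgresql://... upgrade head
    db_url = context.get_x_argument(as_dictionary=True).get("db_url")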
.. change::
:tags: bug
:tickets: 125
Added support for options like "name" etc. to be rendered
within CHECK constraints in autogenerate. Courtesy
Sok Ann Yap.
.. change::
:tags: misc
Source repository has been moved from Mercurial to Git.
.. change::
:tags: bug
Repaired autogenerate rendering of ForeignKeyConstraint
to include use_alter argument, if present.
.. change::
:tags: feature
Added ``-r`` argument to ``alembic history`` command,
allows specification of ``[start]:[end]`` to view
a slice of history. Accepts revision numbers, symbols
"base", "head", a new symbol "current" representing the
current migration, as well as relative ranges for one
side at a time (i.e. ``-r-5:head``, ``-rcurrent:+3``).
Courtesy Atsushi Odagiri for this feature.
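The equivalent from the Python API, sketched (assuming a ``Config``
object ``cfg``)::

    from alembic import command

    # view a slice of history relative to the current revision
    command.history(cfg, rev_range="current:+3")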
.. change::
:tags: feature
:tickets: 55
Source base is now in-place for Python 2.6 through
3.3, without the need for 2to3. Support for Python 2.5
and below has been dropped. Huge thanks to
Hong Minhee for all the effort on this!
.. changelog::
:version: 0.5.0
:released: Thu Apr 4 2013
.. note::
Alembic 0.5.0 now requires at least
version 0.7.3 of SQLAlchemy to run properly.
Support for 0.6 has been dropped.
.. change::
:tags: feature
:tickets: 76
Added ``version_table_schema`` argument
to :meth:`.EnvironmentContext.configure`,
complements the ``version_table`` argument to
set an optional remote schema for the version
table. Courtesy Christian Blume.
.. change::
:tags: bug, postgresql
:tickets: 32
Fixed format of RENAME for table that includes
schema with Postgresql; the schema name shouldn't
be in the "TO" field.
.. change::
:tags: feature
:tickets: 90
Added ``output_encoding`` option to
:meth:`.EnvironmentContext.configure`,
used with ``--sql`` mode to apply an encoding
to the output stream.
.. change::
:tags: feature
:tickets: 93
Added :meth:`.Operations.create_primary_key`
operation, will generate an ADD CONSTRAINT
for a primary key.
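For example::

    from alembic import op

    op.create_primary_key("pk_account", "account", ["id"])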
.. change::
:tags: bug, mssql
:tickets: 109
Fixed bug whereby double quoting would be applied
to target column name during an ``sp_rename``
operation.
.. change::
:tags: bug, sqlite, mysql
:tickets: 112
transactional_ddl flag for SQLite, MySQL dialects
set to False. MySQL doesn't support it,
SQLite does but current pysqlite driver does not.
.. change::
:tags: feature
:tickets: 115
upgrade and downgrade commands will list the
first line of the docstring next to the
version number. Courtesy Hong Minhee.
.. change::
:tags: feature
Added --head-only option to "alembic current",
will print current version plus the symbol
"(head)" if this version is the head or not.
Courtesy Charles-Axel Dein.
.. change::
:tags: bug
:tickets: 110
Autogenerate will render additional table keyword
arguments like "mysql_engine" and others within
op.create_table().
.. change::
:tags: feature
:tickets: 108
The rendering of any construct during autogenerate
can be customized, in particular to allow special rendering
for user-defined column, constraint subclasses, using new
``render_item`` argument to
:meth:`.EnvironmentContext.configure`.
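A sketch of a custom renderer (``MySpecialType`` and the rendered
string are hypothetical)::

    def render_item(type_, obj, autogen_context):
        # apply custom rendering for a user-defined type
        if type_ == "type" and isinstance(obj, MySpecialType):
            return "mypkg.MySpecialType()"
        # default rendering for everything else
        return False

    context.configure(
        # ...
        render_item=render_item,
    )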
.. change::
:tags: bug
Fixed bug whereby create_index()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
This is the same issue that was fixed for unique
constraints in version 0.3.2.
.. change::
:tags: bug
Worked around a backwards-incompatible regression in Python3.3
regarding argparse; running "alembic" with no arguments
now yields an informative error in py3.3 as with all previous versions.
Courtesy Andrey Antukh.
.. change::
:tags: change
SQLAlchemy 0.6 is no longer supported by Alembic - minimum version is 0.7.3,
full support is as of 0.7.9.
.. change::
:tags: bug
:tickets: 104
A host of argument name changes within migration
operations for consistency. Keyword arguments
will continue to work on the old name for backwards compatibility,
however required positional arguments will not:
:meth:`.Operations.alter_column` - ``name`` -> ``new_column_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.create_index` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_index` - ``tablename`` -> ``table_name`` - old
name will work for backwards compatibility.
:meth:`.Operations.drop_constraint` - ``tablename`` -> ``table_name`` -
argument is positional.
:meth:`.Operations.drop_constraint` - ``type`` -> ``type_`` - old
name will work for backwards compatibility
.. changelog::
:version: 0.4.2
:released: Fri Jan 11 2013
.. change::
:tags: bug, autogenerate
:tickets: 99
Fixed bug where autogenerate would fail if a Column
to be added to a table made use of the ".key" parameter.
.. change::
:tags: bug, sqlite
:tickets: 98
The "implicit" constraint generated by a
type such as Boolean or Enum will not generate an
ALTER statement when run on SQlite, which does not
support ALTER for the purpose of adding/removing
constraints separate from the column def itself.
While SQLite supports adding a CHECK constraint
at the column level, SQLAlchemy would need modification
to support this.
A warning is emitted indicating this
constraint cannot be added in this scenario.
.. change::
:tags: bug
:tickets: 96
Added a workaround to setup.py to prevent
"NoneType" error from occurring when
"setup.py test" is run.
.. change::
:tags: bug
:tickets: 96
Added an append_constraint() step to each
condition within
test_autogenerate:AutogenRenderTest.test_render_fk_constraint_kwarg
if the SQLAlchemy version is less than 0.8, as ForeignKeyConstraint
does not auto-append prior to 0.8.
.. change::
:tags: feature
:tickets: 96
Added a README.unittests with instructions for running the test
suite fully.
.. changelog::
:version: 0.4.1
:released: Sun Dec 9 2012
.. change::
:tags: bug
:tickets: 92
Added support for autogenerate render of
ForeignKeyConstraint options onupdate,
ondelete, initially, and deferred.
.. change::
:tags: bug
:tickets: 94
Autogenerate will include "autoincrement=False"
in the rendered table metadata
if this flag was set to false on the source
:class:`.Column` object.
.. change::
:tags: feature
:tickets: 66
Explicit error message describing the case
when downgrade --sql is used without specifying
specific start/end versions.
.. change::
:tags: bug
:tickets: 81
Removed erroneous "emit_events" attribute
from operations.create_table() documentation.
.. change::
:tags: bug
:tickets:
Fixed the minute component in file_template
which returned the month part of the create date.
.. changelog::
:version: 0.4.0
:released: Mon Oct 01 2012
.. change::
:tags: feature
:tickets: 33
Support for tables in alternate schemas
has been added fully to all operations, as well as to
the autogenerate feature. When using autogenerate,
specifying the flag include_schemas=True to
Environment.configure() will also cause autogenerate
to scan all schemas located by Inspector.get_schema_names(),
which is supported by *some* (but not all)
SQLAlchemy dialects including Postgresql.
*Enormous* thanks to Bruno Binet for a huge effort
in implementing as well as writing tests.
.. change::
:tags: feature
:tickets: 70
The command line runner has been organized
into a reusable CommandLine object, so that other
front-ends can re-use the argument parsing built
in.
.. change::
:tags: feature
:tickets: 43
Added "stdout" option to Config, provides
control over where the "print" output of commands like
"history", "init", "current" etc. are sent.
.. change::
:tags: bug
:tickets: 71
Fixed the "multidb" template which was badly out
of date. It now generates revision files using
the configuration to determine the different
upgrade_<xyz>() methods needed as well, instead of
needing to hardcode these. Huge thanks to
BryceLohr for doing the heavy lifting here.
.. change::
:tags: bug
:tickets: 72
Fixed the regexp that was checking for .py files
in the version directory to allow any .py file through.
Previously it was doing some kind of defensive checking,
probably from some early notions of how this directory
works, that was prohibiting various filename patterns
such as those which begin with numbers.
.. change::
:tags: bug
:tickets:
Fixed MySQL rendering for server_default which
didn't work if the server_default was a generated
SQL expression. Courtesy Moriyoshi Koizumi.
.. change::
:tags: feature
:tickets:
Added support for alteration of MySQL
columns that have AUTO_INCREMENT, as well as enabling
this flag. Courtesy Moriyoshi Koizumi.
.. changelog::
:version: 0.3.6
:released: Wed Aug 15 2012
.. change::
:tags: feature
:tickets: 27
Added include_symbol option to
EnvironmentContext.configure(),
specifies a callable which will include/exclude tables
in their entirety from the autogeneration process
based on name.
.. change::
:tags: feature
:tickets: 59
Added year, month, day, hour, minute, second
variables to file_template.
.. change::
:tags: feature
:tickets:
Added 'primary' to the list of constraint types
recognized for MySQL drop_constraint().
.. change::
:tags: feature
:tickets:
Added --sql argument to the "revision" command,
for the use case where the "revision_environment"
config option is being used but SQL access isn't
desired.
.. change::
:tags: bug
:tickets:
Repaired create_foreign_key() for
self-referential foreign keys, which weren't working
at all.
.. change::
:tags: bug
:tickets: 63
'alembic' command reports an informative
error message when the configuration is missing
the 'script_directory' key.
.. change::
:tags: bug
:tickets: 62
Fixes made to the constraints created/dropped
alongside so-called "schema" types such as
Boolean and Enum. The create/drop constraint logic
does not kick in when using a dialect that doesn't
use constraints for these types, such as postgresql,
even when existing_type is specified to
alter_column(). Additionally, the constraints
are not affected if existing_type is passed but
type\_ is not, i.e. there's no net change
in type.
.. change::
:tags: bug
:tickets: 66
Improved error message when specifying
non-ordered revision identifiers to cover
the case when the "higher" rev is None,
improved message overall.
.. changelog::
:version: 0.3.5
:released: Sun Jul 08 2012
.. change::
:tags: bug
:tickets: 31
Fixed issue whereby reflected server defaults
wouldn't be quoted correctly; uses repr() now.
.. change::
:tags: bug
:tickets: 58
Fixed issue whereby when autogenerate would
render create_table() on the upgrade side for a
table that has a Boolean type, an unnecessary
CheckConstraint() would be generated.
.. change::
:tags: feature
:tickets:
Implemented SQL rendering for
CheckConstraint() within autogenerate upgrade,
including for literal SQL as well as SQL Expression
Language expressions.
.. changelog::
:version: 0.3.4
:released: Sat Jun 02 2012
.. change::
:tags: bug
:tickets:
Fixed command-line bug introduced by the
"revision_environment" feature.
.. changelog::
:version: 0.3.3
:released: Sat Jun 02 2012
.. change::
:tags: feature
:tickets:
New config argument
"revision_environment=true", causes env.py to
be run unconditionally when the "revision" command
is run, to support script.py.mako templates with
dependencies on custom "template_args".
.. change::
:tags: feature
:tickets:
Added "template_args" option to configure()
so that an env.py can add additional arguments
to the template context when running the
"revision" command. This requires either --autogenerate
or the configuration directive "revision_environment=true".
.. change::
:tags: bug
:tickets: 44
Added "type" argument to op.drop_constraint(),
and implemented full constraint drop support for
MySQL. CHECK and undefined raise an error.
MySQL needs the constraint type
in order to emit a DROP CONSTRAINT.
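For example (shown with the modern ``type_`` spelling of the argument;
the constraint name is hypothetical)::

    from alembic import op

    op.drop_constraint("uq_user_name", "user", type_="unique")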
.. change::
:tags: feature
:tickets: 34
Added version_table argument to
EnvironmentContext.configure(), allowing for the
configuration of the version table name.
.. change::
:tags: feature
:tickets:
Added support for "relative" migration
identifiers, i.e. "alembic upgrade +2",
"alembic downgrade -1". Courtesy
Atsushi Odagiri for this feature.
.. change::
:tags: bug
:tickets: 49
Fixed bug whereby directories inside of
the template directories, such as __pycache__
on Pypy, would mistakenly be interpreted as
files which are part of the template.
.. changelog::
:version: 0.3.2
:released: Mon Apr 30 2012
.. change::
:tags: feature
:tickets: 40
Basic support for Oracle added,
courtesy shgoh.
.. change::
:tags: feature
:tickets:
Added support for UniqueConstraint
in autogenerate, courtesy Atsushi Odagiri
.. change::
:tags: bug
:tickets:
Fixed support of schema-qualified
ForeignKey target in column alter operations,
courtesy Alexander Kolov.
.. change::
:tags: bug
:tickets:
Fixed bug whereby create_unique_constraint()
would include in the constraint columns that
are added to all Table objects using events,
externally to the generation of the constraint.
.. changelog::
:version: 0.3.1
:released: Sat Apr 07 2012
.. change::
:tags: bug
:tickets: 41
bulk_insert() fixes:
1. The bulk_insert() operation was
not working, most likely since the 0.2 series,
when used with an engine.
2. Repaired bulk_insert() to complete when
used against a lower-case-t table and executing
with only one set of parameters, working
around SQLAlchemy bug #2461 in this regard.
3. bulk_insert() uses "inline=True" so that phrases
like RETURNING and such don't get invoked for
single-row bulk inserts.
4. bulk_insert() will check that you're passing
a list of dictionaries in, raises TypeError
if not detected.
.. changelog::
:version: 0.3.0
:released: Thu Apr 05 2012
.. change::
:tags: general
:tickets:
The focus of 0.3 is to clean up
and more fully document the public API of Alembic,
including better accessors on the MigrationContext
and ScriptDirectory objects. Methods that are
not considered to be public on these objects have
been underscored, and methods which should be public
have been cleaned up and documented, including:
MigrationContext.get_current_revision()
ScriptDirectory.iterate_revisions()
ScriptDirectory.get_current_head()
ScriptDirectory.get_heads()
ScriptDirectory.get_base()
ScriptDirectory.generate_revision()
.. change::
:tags: feature
:tickets:
Added a bit of autogenerate to the
public API in the form of the function
alembic.autogenerate.compare_metadata.
.. changelog::
:version: 0.2.2
:released: Mon Mar 12 2012
.. change::
:tags: feature
:tickets:
Informative error message when op.XYZ
directives are invoked at module import time.
.. change::
:tags: bug
:tickets: 35
Fixed inappropriate direct call to
util.err() and therefore sys.exit()
when Config failed to locate the
config file within library usage.
.. change::
:tags: bug
:tickets:
Autogenerate will emit CREATE TABLE
and DROP TABLE directives according to
foreign key dependency order.
.. change::
:tags: bug
:tickets:
Implemented the 'tablename' parameter on
drop_index(), as this is needed by some
backends.
.. change::
:tags: feature
:tickets:
Added execution_options parameter
to op.execute(), will call execution_options()
on the Connection before executing.
The immediate use case here is to allow
access to the new no_parameters option
in SQLAlchemy 0.7.6, which allows
some DBAPIs (psycopg2, MySQLdb) to allow
percent signs straight through without
escaping, thus providing cross-compatible
operation with DBAPI execution and
static script generation.
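A sketch of the use case described above::

    from alembic import op

    # pass percent signs through without escaping on supported DBAPIs
    op.execute(
        "UPDATE account SET discount = '10%'",
        execution_options={"no_parameters": True},
    )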
.. change::
:tags: bug
:tickets:
setup.py won't install argparse if on
Python 2.7/3.2
.. change::
:tags: feature
:tickets: 29
script_location can be interpreted
by pkg_resources.resource_filename(), if
it is a non-absolute URI that contains
colons. This scheme is the same
one used by Pyramid.
.. change::
:tags: feature
:tickets:
added missing support for
onupdate/ondelete flags for
ForeignKeyConstraint, courtesy Giacomo Bagnoli
.. change::
:tags: bug
:tickets: 30
fixed a regression regarding an autogenerate
error message, as well as various glitches
in the Pylons sample template. The Pylons sample
template requires that you tell it where to
get the Engine from now. courtesy
Marcin Kuzminski
.. change::
:tags: bug
:tickets:
drop_index() ensures a dummy column
is added when it calls "Index", as SQLAlchemy
0.7.6 will warn on index with no column names.
.. changelog::
:version: 0.2.1
:released: Tue Jan 31 2012
.. change::
:tags: bug
:tickets: 26
Fixed the generation of CHECK constraint,
regression from 0.2.0
.. changelog::
:version: 0.2.0
:released: Mon Jan 30 2012
.. change::
:tags: feature
:tickets: 19
API rearrangement allows everything
Alembic does to be represented by contextual
objects, including EnvironmentContext,
MigrationContext, and Operations. Other
libraries and applications can now use
things like "alembic.op" without relying
upon global configuration variables.
The rearrangement was done such that
existing migrations should be OK,
as long as they use the pattern
of "from alembic import context" and
"from alembic import op", as these
are now contextual objects, not modules.
.. change::
:tags: feature
:tickets: 24
The naming of revision files can
now be customized to be some combination
of "rev id" and "slug", the latter of which
is based on the revision message.
By default, the pattern "<rev>_<slug>"
is used for new files. New script files
should include the "revision" variable
for this to work, which is part of
the newer script.py.mako scripts.
.. change::
:tags: bug
:tickets: 25
env.py templates call
connection.close() to better support
programmatic usage of commands; use
NullPool in conjunction with create_engine()
as well so that no connection resources
remain afterwards.
.. change::
:tags: bug
:tickets: 22
Fixed the config.main() function to honor
the arguments passed; removed the no-longer-used
"scripts/alembic" as setuptools creates this
for us.
.. change::
:tags: bug
:tickets:
Fixed alteration of column type on
MSSQL to not include the keyword "TYPE".
.. change::
:tags: feature
:tickets: 23
Can create alembic.config.Config
with no filename, use set_main_option()
to add values. Also added set_section_option()
which will add sections.
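For example::

    from alembic.config import Config

    cfg = Config()
    cfg.set_main_option("script_location", "myapp:migrations")
    cfg.set_section_option("app", "custom_option", "value")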
.. changelog::
:version: 0.1.1
:released: Wed Jan 04 2012
.. change::
:tags: bug
:tickets:
Clean up file write operations so that
file handles are closed.
.. change::
:tags: feature
:tickets:
PyPy is supported.
.. change::
:tags: feature
:tickets:
Python 2.5 is supported, needs
__future__.with_statement
.. change::
:tags: bug
:tickets:
Fix autogenerate so that "pass" is
generated between the two comments
if no net migrations were present.
.. change::
:tags: bug
:tickets: 16
Fix autogenerate bug that prevented
correct reflection of a foreign-key
referenced table in the list of "to remove".
.. change::
:tags: bug
:tickets: 17
Fix bug where create_table() didn't
handle self-referential foreign key
correctly
.. change::
:tags: bug
:tickets: 18
Default prefix for autogenerate
directives is "op.", matching the
mako templates.
.. change::
:tags: feature
:tickets: 18
Add alembic_module_prefix argument
to configure() to complement
sqlalchemy_module_prefix.
.. change::
:tags: bug
:tickets: 14
Fixed quotes not being rendered in
ForeignKeyConstraint during
autogenerate.
.. changelog::
:version: 0.1.0
:released: Wed Nov 30 2011
.. change::
:tags:
:tickets:
Initial release. Status of features:
.. change::
:tags:
:tickets:
Alembic is used in at least one production
environment, but should still be considered
ALPHA LEVEL SOFTWARE as of this release,
particularly in that many features are expected
to be missing / unimplemented. Major API
changes are not anticipated but for the moment
nothing should be assumed.
The author asks that you *please* report all
issues, missing features, workarounds etc.
to the bugtracker.
.. change::
:tags:
:tickets:
Python 3 is supported and has been tested.
.. change::
:tags:
:tickets:
The "Pylons" and "MultiDB" environment templates
have not been directly tested - these should be
considered to be samples to be modified as
needed. Multiple database support itself
is well tested, however.
.. change::
:tags:
:tickets:
Postgresql and MS SQL Server environments
have been tested for several weeks in a production
environment. In particular, some involved workarounds
were implemented to allow fully-automated dropping
of default- or constraint-holding columns with
SQL Server.
.. change::
:tags:
:tickets:
MySQL support has also been implemented to a
basic degree, including accommodation of MySQL's
awkward style of modifying columns.
.. change::
:tags:
:tickets:
Other database environments not included among
those three have *not* been tested, *at all*. This
includes Firebird, Oracle, Sybase. Adding
support for these backends should be
straightforward. Please report all missing/
incorrect behaviors to the bugtracker! Patches
are welcome here but are optional - please just
indicate the exact format expected by the target
database.
.. change::
:tags:
:tickets:
SQLite, as a backend, has almost no support for
schema alterations to existing databases. The author
would strongly recommend that SQLite not be used in
a migration context - just dump your SQLite database
into an intermediary format, then dump it back
into a new schema. For dev environments, the
dev installer should be building the whole DB from
scratch. Or just use Postgresql, which is a much
better database for non-trivial schemas.
Requests for full ALTER support on SQLite should be
reported to SQLite's bug tracker at
http://www.sqlite.org/src/wiki?name=Bug+Reports,
as Alembic will not be implementing the
"rename the table to a temptable then copy the
data into a new table" workaround.
Note that Alembic will at some point offer an
extensible API so that you can implement commands
like this yourself.
.. change::
:tags:
:tickets:
Well-tested directives include add/drop table and add/drop
column, with support for SQLAlchemy "schema"
types which generate additional CHECK
constraints, e.g. Boolean, Enum. Other directives not
included here have *not* been strongly tested
in production, e.g. rename table.
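For instance, the add/drop column directives mentioned above look
like this in a migration script (names are illustrative)::

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        op.add_column("account", sa.Column("last_seen", sa.DateTime()))

    def downgrade():
        op.drop_column("account", "last_seen")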
.. change::
:tags:
:tickets:
Both "online" and "offline" migrations, the latter
being generated SQL scripts to hand off to a DBA,
have been strongly production tested against
Postgresql and SQL Server.
.. change::
:tags:
:tickets:
Modify column type, default status, nullable, is
functional and tested across PG, MSSQL, MySQL,
but not yet widely tested in production usage.
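A sketch of such an alteration, with illustrative table, column,
and type values::

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        op.alter_column(
            "account",
            "name",
            existing_type=sa.String(50),
            type_=sa.String(100),
            nullable=False,
        )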
.. change::
:tags:
:tickets:
Many migrations are still outright missing, e.g.
create/add sequences. As a workaround,
execute() can be used for those which are missing,
though posting of tickets for new features/missing
behaviors is strongly encouraged.
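For example, a missing sequence directive can be emulated with raw
SQL (the sequence name is illustrative)::

    from alembic import op

    def upgrade():
        op.execute("CREATE SEQUENCE order_id_seq")

    def downgrade():
        op.execute("DROP SEQUENCE order_id_seq")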
.. change::
:tags:
:tickets:
Autogenerate feature is implemented and has been
tested, though only a little bit in a production setting.
In particular, detection of type and server
default changes is optional and off by default;
both can also be customized by a callable.
Both features work but can have surprises, particularly
the disparity between BIT/TINYINT and boolean,
which hasn't yet been worked around, as well as
format changes performed by the database on defaults
when it reports back. When enabled, the PG dialect
will execute the two defaults being compared to
see if they are equivalent. Other backends may
need to do the same thing.
The autogenerate feature only generates
"candidate" commands which must be hand-tailored
in any case, so it is still a useful feature and
is safe to use. Please report missing/broken features
of autogenerate! This will be a great feature and
will also improve SQLAlchemy's reflection services.
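A hedged env.py sketch, assuming the modern ``compare_type`` and
``compare_server_default`` flags are what expose these optional
detections (``connection`` and ``target_metadata`` come from the
surrounding env.py scope)::

    from alembic import context

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        compare_type=True,
        compare_server_default=True,  # on PG, executes both defaults to compare
    )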
.. change::
:tags:
:tickets:
Support for non-ASCII table, column and constraint
names is mostly nonexistent. This is also a
straightforward feature add as SQLAlchemy itself
supports unicode identifiers; Alembic itself will
likely need fixes to logging, column identification
by key, etc. for full support here.
| jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | As above | CaselIT | 5 |
sqlalchemy/alembic | 1,310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
<!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
The misspellings have been reported at https://github.com/jsoref/alembic/actions/runs/6141700632
The action reports that the changes in this PR would make it happy: https://github.com/jsoref/alembic/actions/runs/6141700754
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | tests/test_batch.py | from contextlib import contextmanager
import re
from sqlalchemy import Boolean
from sqlalchemy import CheckConstraint
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import Enum
from sqlalchemy import ForeignKey
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy import func
from sqlalchemy import Index
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy import JSON
from sqlalchemy import MetaData
from sqlalchemy import PrimaryKeyConstraint
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import Text
from sqlalchemy import UniqueConstraint
from sqlalchemy.dialects import sqlite as sqlite_dialect
from sqlalchemy.schema import CreateIndex
from sqlalchemy.schema import CreateTable
from sqlalchemy.sql import column
from sqlalchemy.sql import text
from alembic import command
from alembic import testing
from alembic import util
from alembic.ddl import sqlite
from alembic.operations import Operations
from alembic.operations.batch import ApplyBatchImpl
from alembic.runtime.migration import MigrationContext
from alembic.script import ScriptDirectory
from alembic.testing import assert_raises_message
from alembic.testing import config
from alembic.testing import eq_
from alembic.testing import exclusions
from alembic.testing import expect_raises_message
from alembic.testing import is_
from alembic.testing import mock
from alembic.testing import TestBase
from alembic.testing.env import _no_sql_testing_config
from alembic.testing.env import clear_staging_env
from alembic.testing.env import staging_env
from alembic.testing.env import write_script
from alembic.testing.fixtures import capture_context_buffer
from alembic.testing.fixtures import op_fixture
from alembic.util import CommandError
from alembic.util import exc as alembic_exc
from alembic.util.sqla_compat import _NONE_NAME
from alembic.util.sqla_compat import _safe_commit_connection_transaction
from alembic.util.sqla_compat import _select
from alembic.util.sqla_compat import has_computed
from alembic.util.sqla_compat import has_identity
from alembic.util.sqla_compat import sqla_14
if has_computed:
from alembic.util.sqla_compat import Computed
if has_identity:
from alembic.util.sqla_compat import Identity
class BatchApplyTest(TestBase):
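# These tests exercise ApplyBatchImpl's "copy and move" table-recreate
# logic against a SQLite impl, asserting the generated DDL only --
# no real database connection is used.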
def setUp(self):
self.op = Operations(mock.Mock(opts={}))
self.impl = sqlite.SQLiteImpl(
sqlite_dialect.dialect(), None, False, False, None, {}
)
def _simple_fixture(self, table_args=(), table_kwargs={}, **kw):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String(10)),
Column("y", Integer),
)
return ApplyBatchImpl(
self.impl, t, table_args, table_kwargs, False, **kw
)
def _uq_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
UniqueConstraint("y", name="uq1"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_ck_table_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
CheckConstraint("y > 5", name="ck1"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_ck_col_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer, CheckConstraint("y > 5", name="ck1")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _ix_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
Index("ix1", "y"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _pk_fixture(self):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer),
Column("x", String()),
Column("y", Integer),
PrimaryKeyConstraint("id", name="mypk"),
)
return ApplyBatchImpl(self.impl, t, (), {}, False)
def _literal_ck_fixture(
self, copy_from=None, table_args=(), table_kwargs={}
):
m = MetaData()
if copy_from is not None:
t = copy_from
else:
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
CheckConstraint("email LIKE '%@%'"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _sql_ck_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
)
t.append_constraint(CheckConstraint(t.c.email.like("%@%")))
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id", Integer, ForeignKey("user.id")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _multi_fk_fixture(self, table_args=(), table_kwargs={}, schema=None):
m = MetaData()
if schema:
schemaarg = "%s." % schema
else:
schemaarg = ""
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id_1", Integer, ForeignKey("%suser.id" % schemaarg)),
Column("user_id_2", Integer, ForeignKey("%suser.id" % schemaarg)),
Column("user_id_3", Integer),
Column("user_id_version", Integer),
ForeignKeyConstraint(
["user_id_3", "user_id_version"],
["%suser.id" % schemaarg, "%suser.id_version" % schemaarg],
),
schema=schema,
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id", Integer, ForeignKey("user.id", name="ufk")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _selfref_fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("parent_id", Integer, ForeignKey("tname.id")),
Column("data", String),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _boolean_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _boolean_no_ck_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=False)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _enum_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("thing", Enum("a", "b", "c", create_constraint=True)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _server_default_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("thing", String(), server_default=""),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _assert_impl(
self,
impl,
colnames=None,
ddl_contains=None,
ddl_not_contains=None,
dialect="default",
schema=None,
):
context = op_fixture(dialect=dialect)
impl._create(context.impl)
if colnames is None:
colnames = ["id", "x", "y"]
eq_(impl.new_table.c.keys(), colnames)
pk_cols = [col for col in impl.new_table.c if col.primary_key]
eq_(list(impl.new_table.primary_key), pk_cols)
create_stmt = str(
CreateTable(impl.new_table).compile(dialect=context.dialect)
)
create_stmt = re.sub(r"[\n\t]", "", create_stmt)
idx_stmt = ""
# create indexes; these should be created in terms of the
# final table name
impl.new_table.name = impl.table.name
for idx in impl._gather_indexes_from_both_tables():
idx_stmt += str(CreateIndex(idx).compile(dialect=context.dialect))
idx_stmt = re.sub(r"[\n\t]", "", idx_stmt)
# revert new table name to the temp name, assertions below
# are looking for the temp name
impl.new_table.name = ApplyBatchImpl._calc_temp_name(impl.table.name)
if ddl_contains:
assert ddl_contains in create_stmt + idx_stmt
if ddl_not_contains:
assert ddl_not_contains not in create_stmt + idx_stmt
expected = [create_stmt]
if schema:
args = {"schema": "%s." % schema}
else:
args = {"schema": ""}
args["temp_name"] = impl.new_table.name
args["colnames"] = ", ".join(
[
impl.new_table.c[name].name
for name in colnames
if name in impl.table.c
]
)
args["tname_colnames"] = ", ".join(
"CAST(%(schema)stname.%(name)s AS %(type)s) AS %(cast_label)s"
% {
"schema": args["schema"],
"name": name,
"type": impl.new_table.c[name].type,
"cast_label": name if sqla_14 else "anon_1",
}
if (
impl.new_table.c[name].type._type_affinity
is not impl.table.c[name].type._type_affinity
)
else "%(schema)stname.%(name)s"
% {"schema": args["schema"], "name": name}
for name in colnames
if name in impl.table.c
)
expected.extend(
[
"INSERT INTO %(schema)s%(temp_name)s (%(colnames)s) "
"SELECT %(tname_colnames)s FROM %(schema)stname" % args,
"DROP TABLE %(schema)stname" % args,
"ALTER TABLE %(schema)s%(temp_name)s "
"RENAME TO %(schema)stname" % args,
]
)
if idx_stmt:
expected.append(idx_stmt)
context.assert_(*expected)
return impl.new_table
def test_change_type(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", type_=String)
new_table = self._assert_impl(impl)
assert new_table.c.x.type._type_affinity is String
def test_rename_col(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", name="q")
new_table = self._assert_impl(impl)
eq_(new_table.c.x.name, "q")
def test_rename_col_w_index(self):
impl = self._ix_fixture()
impl.alter_column("tname", "y", name="y2")
new_table = self._assert_impl(
impl, ddl_contains="CREATE INDEX ix1 ON tname (y2)"
)
eq_(new_table.c.y.name, "y2")
def test_rename_col_w_uq(self):
impl = self._uq_fixture()
impl.alter_column("tname", "y", name="y2")
new_table = self._assert_impl(impl, ddl_contains="UNIQUE (y2)")
eq_(new_table.c.y.name, "y2")
def test_alter_column_comment(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", comment="some comment")
new_table = self._assert_impl(impl)
eq_(new_table.c.x.comment, "some comment")
def test_add_column_comment(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("q", Integer, comment="some comment"))
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "q"])
eq_(new_table.c.q.comment, "some comment")
def test_rename_col_boolean(self):
impl = self._boolean_fixture()
impl.alter_column("tname", "flag", name="bflag")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (bflag IN (0, 1)",
colnames=["id", "flag"],
)
eq_(new_table.c.flag.name, "bflag")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
1,
)
def test_change_type_schematype_to_non(self):
impl = self._boolean_fixture()
impl.alter_column("tname", "flag", type_=Integer)
new_table = self._assert_impl(
impl, colnames=["id", "flag"], ddl_not_contains="CHECK"
)
assert new_table.c.flag.type._type_affinity is Integer
# NOTE: we can't do test_change_type_non_to_schematype
# at this level because the "add_constraint" part of this
# comes from toimpl.py, which we aren't testing here
def test_rename_col_boolean_no_ck(self):
impl = self._boolean_no_ck_fixture()
impl.alter_column("tname", "flag", name="bflag")
new_table = self._assert_impl(
impl, ddl_not_contains="CHECK", colnames=["id", "flag"]
)
eq_(new_table.c.flag.name, "bflag")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
0,
)
def test_rename_col_enum(self):
impl = self._enum_fixture()
impl.alter_column("tname", "thing", name="thang")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (thang IN ('a', 'b', 'c')",
colnames=["id", "thing"],
)
eq_(new_table.c.thing.name, "thang")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
1,
)
def test_rename_col_literal_ck(self):
impl = self._literal_ck_fixture()
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
# note this is wrong, we don't dig into the SQL
impl,
ddl_contains="CHECK (email LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_rename_col_literal_ck_workaround(self):
impl = self._literal_ck_fixture(
copy_from=Table(
"tname",
MetaData(),
Column("id", Integer, primary_key=True),
Column("email", String),
),
table_args=[CheckConstraint("emol LIKE '%@%'")],
)
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (emol LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_rename_col_sql_ck(self):
impl = self._sql_ck_fixture()
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (emol LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_add_col(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col)
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "g"])
eq_(new_table.c.g.name, "g")
def test_partial_reordering(self):
impl = self._simple_fixture(partial_reordering=[("x", "id", "y")])
new_table = self._assert_impl(impl, colnames=["x", "id", "y"])
eq_(new_table.c.x.name, "x")
def test_add_col_partial_reordering(self):
impl = self._simple_fixture(partial_reordering=[("id", "x", "g", "y")])
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col)
new_table = self._assert_impl(impl, colnames=["id", "x", "g", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col, insert_before="x")
new_table = self._assert_impl(impl, colnames=["id", "g", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_beginning(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_before="id")
new_table = self._assert_impl(impl, colnames=["g", "id", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_middle(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_before="y")
new_table = self._assert_impl(impl, colnames=["id", "x", "g", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_middle(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
new_table = self._assert_impl(impl, colnames=["id", "g", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_penultimate(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="x")
self._assert_impl(impl, colnames=["id", "x", "g", "y"])
def test_add_col_insert_after_end(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="y")
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "g"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_plus_no_order(self):
impl = self._simple_fixture()
# operations.add_column produces a table
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer))
new_table = self._assert_impl(
impl, colnames=["id", "g", "x", "y", "q"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_no_order_plus_insert_after(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", Column("q", Integer))
impl.add_column("tname", Column("g", Integer), insert_after="id")
new_table = self._assert_impl(
impl, colnames=["id", "g", "x", "y", "q"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_another_insert(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer), insert_after="g")
new_table = self._assert_impl(
impl, colnames=["id", "g", "q", "x", "y"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_another_insert(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer), insert_before="g")
new_table = self._assert_impl(
impl, colnames=["id", "q", "g", "x", "y"]
)
eq_(new_table.c.g.name, "g")
def test_add_server_default(self):
impl = self._simple_fixture()
impl.alter_column("tname", "y", server_default="10")
new_table = self._assert_impl(impl, ddl_contains="DEFAULT '10'")
eq_(new_table.c.y.server_default.arg, "10")
def test_drop_server_default(self):
impl = self._server_default_fixture()
impl.alter_column("tname", "thing", server_default=None)
new_table = self._assert_impl(
impl, colnames=["id", "thing"], ddl_not_contains="DEFAULT"
)
eq_(new_table.c.thing.server_default, None)
def test_rename_col_pk(self):
impl = self._simple_fixture()
impl.alter_column("tname", "id", name="foobar")
new_table = self._assert_impl(
impl, ddl_contains="PRIMARY KEY (foobar)"
)
eq_(new_table.c.id.name, "foobar")
eq_(list(new_table.primary_key), [new_table.c.id])
def test_rename_col_fk(self):
impl = self._fk_fixture()
impl.alter_column("tname", "user_id", name="foobar")
new_table = self._assert_impl(
impl,
colnames=["id", "email", "user_id"],
ddl_contains='FOREIGN KEY(foobar) REFERENCES "user" (id)',
)
eq_(new_table.c.user_id.name, "foobar")
eq_(
list(new_table.c.user_id.foreign_keys)[0]._get_colspec(), "user.id"
)
def test_regen_multi_fk(self):
impl = self._multi_fk_fixture()
self._assert_impl(
impl,
colnames=[
"id",
"email",
"user_id_1",
"user_id_2",
"user_id_3",
"user_id_version",
],
ddl_contains="FOREIGN KEY(user_id_3, user_id_version) "
'REFERENCES "user" (id, id_version)',
)
def test_regen_multi_fk_schema(self):
impl = self._multi_fk_fixture(schema="foo_schema")
self._assert_impl(
impl,
colnames=[
"id",
"email",
"user_id_1",
"user_id_2",
"user_id_3",
"user_id_version",
],
ddl_contains="FOREIGN KEY(user_id_3, user_id_version) "
'REFERENCES foo_schema."user" (id, id_version)',
schema="foo_schema",
)
def test_do_not_add_existing_columns_columns(self):
impl = self._multi_fk_fixture()
meta = impl.table.metadata
cid = Column("id", Integer())
user = Table("user", meta, cid)
fk = [
c
for c in impl.unnamed_constraints
if isinstance(c, ForeignKeyConstraint)
]
impl._setup_referent(meta, fk[0])
is_(user.c.id, cid)
def test_drop_col(self):
impl = self._simple_fixture()
impl.drop_column("tname", column("x"))
new_table = self._assert_impl(impl, colnames=["id", "y"])
assert "y" in new_table.c
assert "x" not in new_table.c
def test_drop_col_remove_pk(self):
impl = self._simple_fixture()
impl.drop_column("tname", column("id"))
new_table = self._assert_impl(
impl, colnames=["x", "y"], ddl_not_contains="PRIMARY KEY"
)
assert "y" in new_table.c
assert "id" not in new_table.c
assert not new_table.primary_key
def test_drop_col_remove_fk(self):
impl = self._fk_fixture()
impl.drop_column("tname", column("user_id"))
new_table = self._assert_impl(
impl, colnames=["id", "email"], ddl_not_contains="FOREIGN KEY"
)
assert "user_id" not in new_table.c
assert not new_table.foreign_keys
def test_drop_col_retain_fk(self):
impl = self._fk_fixture()
impl.drop_column("tname", column("email"))
new_table = self._assert_impl(
impl,
colnames=["id", "user_id"],
ddl_contains='FOREIGN KEY(user_id) REFERENCES "user" (id)',
)
assert "email" not in new_table.c
assert new_table.c.user_id.foreign_keys
def test_drop_col_retain_fk_selfref(self):
impl = self._selfref_fk_fixture()
impl.drop_column("tname", column("data"))
new_table = self._assert_impl(impl, colnames=["id", "parent_id"])
assert "data" not in new_table.c
assert new_table.c.parent_id.foreign_keys
def test_add_fk(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("user_id", Integer))
fk = self.op.schema_obj.foreign_key_constraint(
"fk1", "tname", "user", ["user_id"], ["id"]
)
impl.add_constraint(fk)
new_table = self._assert_impl(
impl,
colnames=["id", "x", "y", "user_id"],
ddl_contains="CONSTRAINT fk1 FOREIGN KEY(user_id) "
'REFERENCES "user" (id)',
)
eq_(
list(new_table.c.user_id.foreign_keys)[0]._get_colspec(), "user.id"
)
def test_drop_fk(self):
impl = self._named_fk_fixture()
fk = ForeignKeyConstraint([], [], name="ufk")
impl.drop_constraint(fk)
new_table = self._assert_impl(
impl,
colnames=["id", "email", "user_id"],
ddl_not_contains="CONSTRANT fk1",
)
eq_(list(new_table.foreign_keys), [])
def test_add_uq(self):
impl = self._simple_fixture()
uq = self.op.schema_obj.unique_constraint("uq1", "tname", ["y"])
impl.add_constraint(uq)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CONSTRAINT uq1 UNIQUE",
)
def test_drop_uq(self):
impl = self._uq_fixture()
uq = self.op.schema_obj.unique_constraint("uq1", "tname", ["y"])
impl.drop_constraint(uq)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT uq1 UNIQUE",
)
def test_add_ck_unnamed(self):
"""test for #1195"""
impl = self._simple_fixture()
ck = self.op.schema_obj.check_constraint(_NONE_NAME, "tname", "y > 5")
impl.add_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CHECK (y > 5)",
)
def test_add_ck(self):
impl = self._simple_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.add_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_drop_ck_table(self):
impl = self._named_ck_table_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.drop_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_drop_ck_col(self):
impl = self._named_ck_col_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.drop_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_create_index(self):
impl = self._simple_fixture()
ix = self.op.schema_obj.index("ix1", "tname", ["y"])
impl.create_index(ix)
self._assert_impl(
impl, colnames=["id", "x", "y"], ddl_contains="CREATE INDEX ix1"
)
def test_drop_index(self):
impl = self._ix_fixture()
ix = self.op.schema_obj.index("ix1", "tname", ["y"])
impl.drop_index(ix)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT uq1 UNIQUE",
)
def test_add_table_opts(self):
impl = self._simple_fixture(table_kwargs={"mysql_engine": "InnoDB"})
self._assert_impl(impl, ddl_contains="ENGINE=InnoDB", dialect="mysql")
def test_drop_pk(self):
impl = self._pk_fixture()
pk = self.op.schema_obj.primary_key_constraint("mypk", "tname", ["id"])
impl.drop_constraint(pk)
new_table = self._assert_impl(impl)
assert not new_table.c.id.primary_key
assert not len(new_table.primary_key)
class BatchAPITest(TestBase):
@contextmanager
def _fixture(self, schema=None):
migration_context = mock.Mock(
opts={},
impl=mock.MagicMock(__dialect__="sqlite", connection=object()),
)
op = Operations(migration_context)
batch = op.batch_alter_table(
"tname", recreate="never", schema=schema
).__enter__()
mock_schema = mock.MagicMock()
with mock.patch("alembic.operations.schemaobj.sa_schema", mock_schema):
yield batch
batch.impl.flush()
self.mock_schema = mock_schema
def test_drop_col(self):
with self._fixture() as batch:
batch.drop_column("q")
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.drop_column(
"tname", self.mock_schema.Column(), schema=None
)
],
)
def test_add_col(self):
column = Column("w", String(50))
with self._fixture() as batch:
batch.add_column(column)
assert (
mock.call.add_column("tname", column, schema=None)
in batch.impl.operations.impl.mock_calls
)
def test_create_fk(self):
with self._fixture() as batch:
batch.create_foreign_key("myfk", "user", ["x"], ["y"])
eq_(
self.mock_schema.ForeignKeyConstraint.mock_calls,
[
mock.call(
["x"],
["user.y"],
onupdate=None,
ondelete=None,
name="myfk",
initially=None,
deferrable=None,
match=None,
)
],
)
eq_(
self.mock_schema.Table.mock_calls,
[
mock.call(
"user",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call(
"tname",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call().append_constraint(
self.mock_schema.ForeignKeyConstraint()
),
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.ForeignKeyConstraint()
)
],
)
def test_create_fk_schema(self):
with self._fixture(schema="foo") as batch:
batch.create_foreign_key("myfk", "user", ["x"], ["y"])
eq_(
self.mock_schema.ForeignKeyConstraint.mock_calls,
[
mock.call(
["x"],
["user.y"],
onupdate=None,
ondelete=None,
name="myfk",
initially=None,
deferrable=None,
match=None,
)
],
)
eq_(
self.mock_schema.Table.mock_calls,
[
mock.call(
"user",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call(
"tname",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema="foo",
),
mock.call().append_constraint(
self.mock_schema.ForeignKeyConstraint()
),
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.ForeignKeyConstraint()
)
],
)
def test_create_uq(self):
with self._fixture() as batch:
batch.create_unique_constraint("uq1", ["a", "b"])
eq_(
self.mock_schema.Table().c.__getitem__.mock_calls,
[mock.call("a"), mock.call("b")],
)
eq_(
self.mock_schema.UniqueConstraint.mock_calls,
[
mock.call(
self.mock_schema.Table().c.__getitem__(),
self.mock_schema.Table().c.__getitem__(),
name="uq1",
)
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.add_constraint(self.mock_schema.UniqueConstraint())],
)
def test_create_pk(self):
with self._fixture() as batch:
batch.create_primary_key("pk1", ["a", "b"])
eq_(
self.mock_schema.Table().c.__getitem__.mock_calls,
[mock.call("a"), mock.call("b")],
)
eq_(
self.mock_schema.PrimaryKeyConstraint.mock_calls,
[
mock.call(
self.mock_schema.Table().c.__getitem__(),
self.mock_schema.Table().c.__getitem__(),
name="pk1",
)
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.PrimaryKeyConstraint()
)
],
)
def test_create_check(self):
expr = text("a > b")
with self._fixture() as batch:
batch.create_check_constraint("ck1", expr)
eq_(
self.mock_schema.CheckConstraint.mock_calls,
[mock.call(expr, name="ck1")],
)
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.add_constraint(self.mock_schema.CheckConstraint())],
)
def test_drop_constraint(self):
with self._fixture() as batch:
batch.drop_constraint("uq1")
eq_(self.mock_schema.Constraint.mock_calls, [mock.call(name="uq1")])
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.drop_constraint(self.mock_schema.Constraint())],
)
class CopyFromTest(TestBase):
def _fixture(self):
self.metadata = MetaData()
self.table = Table(
"foo",
self.metadata,
Column("id", Integer, primary_key=True),
Column("data", String(50)),
Column("x", Integer),
)
context = op_fixture(dialect="sqlite", as_sql=True)
self.op = Operations(context)
return context
def test_change_type(self):
context = self._fixture()
self.table.append_column(Column("toj", Text))
self.table.append_column(Column("fromj", JSON))
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column("data", type_=Integer)
batch_op.alter_column("toj", type_=JSON)
batch_op.alter_column("fromj", type_=Text)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data INTEGER, x INTEGER, toj JSON, fromj TEXT, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, toj, fromj) "
"SELECT foo.id, "
"CAST(foo.data AS INTEGER) AS %s, foo.x, foo.toj, "
"CAST(foo.fromj AS TEXT) AS %s FROM foo"
% (
("data" if sqla_14 else "anon_1"),
("fromj" if sqla_14 else "anon_2"),
),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_type_from_schematype(self):
context = self._fixture()
self.table.append_column(
Column("y", Boolean(create_constraint=True, name="ck1"))
)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
"y",
type_=Integer,
existing_type=Boolean(create_constraint=True, name="ck1"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, y INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, y) SELECT foo.id, "
"foo.data, foo.x, CAST(foo.y AS INTEGER) AS %s FROM foo"
% (("y" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_name_from_existing_variant_type(self):
"""test #982"""
context = self._fixture()
self.table.append_column(
Column("y", Text().with_variant(Text(10000), "mysql"))
)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
column_name="y",
new_column_name="q",
existing_type=Text().with_variant(Text(10000), "mysql"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, q TEXT, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, q) "
"SELECT foo.id, foo.data, foo.x, foo.y FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_type_to_schematype(self):
context = self._fixture()
self.table.append_column(Column("y", Integer))
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
"y",
existing_type=Integer,
type_=Boolean(create_constraint=True, name="ck1"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, y BOOLEAN, PRIMARY KEY (id), "
"CONSTRAINT ck1 CHECK (y IN (0, 1)))",
"INSERT INTO _alembic_tmp_foo (id, data, x, y) SELECT foo.id, "
"foo.data, foo.x, CAST(foo.y AS BOOLEAN) AS %s FROM foo"
% (("y" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_create_drop_index_w_always(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table, recreate="always"
) as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), "
"x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) "
"SELECT foo.id, foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
"CREATE UNIQUE INDEX ix_data ON foo (data)",
)
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table, recreate="always"
) as batch_op:
batch_op.drop_index("ix_data")
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) "
"SELECT foo.id, foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_create_drop_index_wo_always(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_("CREATE UNIQUE INDEX ix_data ON foo (data)")
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.drop_index("ix_data")
context.assert_("DROP INDEX ix_data")
def test_create_drop_index_w_other_ops(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column("data", type_=Integer)
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data INTEGER, x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) SELECT foo.id, "
"CAST(foo.data AS INTEGER) AS %s, foo.x FROM foo"
% (("data" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
"CREATE UNIQUE INDEX ix_data ON foo (data)",
)
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.drop_index("ix_data")
batch_op.alter_column("data", type_=String)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR, x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) SELECT foo.id, "
"foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
class BatchRoundTripTest(TestBase):
__only_on__ = "sqlite"
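# Round-trip tests: run real batch operations against a live SQLite
# database and assert that the data survives the table recreate.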
def setUp(self):
self.conn = config.db.connect()
self.metadata = MetaData()
t1 = Table(
"foo",
self.metadata,
Column("id", Integer, primary_key=True),
Column("data", String(50)),
Column("x", Integer),
mysql_engine="InnoDB",
)
with self.conn.begin():
t1.create(self.conn)
self.conn.execute(
t1.insert(),
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
],
)
context = MigrationContext.configure(self.conn)
self.op = Operations(context)
def tearDown(self):
# why commit? because SQLite has inconsistent treatment
# of transactional DDL. A test that runs CREATE TABLE and then
# ALTER TABLE to change the name of that table, will end up
# committing the CREATE TABLE but not the ALTER. As batch mode
# does this with a temp table name that's not even in the
# metadata collection, we don't have an explicit drop for it
# (though we could do that too). calling commit means the
# ALTER will go through and the drop_all() will then catch it.
_safe_commit_connection_transaction(self.conn)
with self.conn.begin():
self.metadata.drop_all(self.conn)
self.conn.close()
@contextmanager
def _sqlite_referential_integrity(self):
self.conn.exec_driver_sql("PRAGMA foreign_keys=ON")
try:
yield
finally:
self.conn.exec_driver_sql("PRAGMA foreign_keys=OFF")
# as these tests are typically intentional failures, clean out
# tables left over
m = MetaData()
m.reflect(self.conn)
with self.conn.begin():
m.drop_all(self.conn)
def _no_pk_fixture(self):
with self.conn.begin():
nopk = Table(
"nopk",
self.metadata,
Column("a", Integer),
Column("b", Integer),
Column("c", Integer),
mysql_engine="InnoDB",
)
nopk.create(self.conn)
self.conn.execute(
nopk.insert(),
[{"a": 1, "b": 2, "c": 3}, {"a": 2, "b": 4, "c": 5}],
)
return nopk
def _table_w_index_fixture(self):
with self.conn.begin():
t = Table(
"t_w_ix",
self.metadata,
Column("id", Integer, primary_key=True),
Column("thing", Integer),
Column("data", String(20)),
)
Index("ix_thing", t.c.thing)
t.create(self.conn)
return t
def _boolean_fixture(self):
with self.conn.begin():
t = Table(
"hasbool",
self.metadata,
Column("x", Boolean(create_constraint=True, name="ck1")),
Column("y", Integer),
)
t.create(self.conn)
def _timestamp_fixture(self):
with self.conn.begin():
t = Table("hasts", self.metadata, Column("x", DateTime()))
t.create(self.conn)
return t
def _ck_constraint_fixture(self):
with self.conn.begin():
t = Table(
"ck_table",
self.metadata,
Column("id", Integer, nullable=False),
CheckConstraint("id is not NULL", name="ck"),
)
t.create(self.conn)
return t
def _datetime_server_default_fixture(self):
return func.datetime("now", "localtime")
def _timestamp_w_expr_default_fixture(self):
with self.conn.begin():
t = Table(
"hasts",
self.metadata,
Column(
"x",
DateTime(),
server_default=self._datetime_server_default_fixture(),
nullable=False,
),
)
t.create(self.conn)
return t
def _int_to_boolean_fixture(self):
with self.conn.begin():
t = Table("hasbool", self.metadata, Column("x", Integer))
t.create(self.conn)
def test_add_constraint_type(self):
"""test for #1195."""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(Column("q", Boolean(create_constraint=True)))
insp = inspect(self.conn)
assert {
c["type"]._type_affinity
for c in insp.get_columns("foo")
if c["name"] == "q"
}.intersection([Boolean, Integer])
def test_change_type_boolean_to_int(self):
self._boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.alter_column(
"x",
type_=Integer,
existing_type=Boolean(create_constraint=True, name="ck1"),
)
insp = inspect(self.conn)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Integer],
)
def test_no_net_change_timestamp(self):
t = self._timestamp_fixture()
import datetime
with self.conn.begin():
self.conn.execute(
t.insert(), {"x": datetime.datetime(2012, 5, 18, 15, 32, 5)}
)
with self.op.batch_alter_table("hasts") as batch_op:
batch_op.alter_column("x", type_=DateTime())
eq_(
self.conn.execute(_select(t.c.x)).fetchall(),
[(datetime.datetime(2012, 5, 18, 15, 32, 5),)],
)
def test_no_net_change_timestamp_w_default(self):
t = self._timestamp_w_expr_default_fixture()
with self.op.batch_alter_table("hasts") as batch_op:
batch_op.alter_column(
"x",
type_=DateTime(),
nullable=False,
server_default=self._datetime_server_default_fixture(),
)
with self.conn.begin():
self.conn.execute(t.insert())
res = self.conn.execute(_select(t.c.x))
if sqla_14:
assert res.scalar_one_or_none() is not None
else:
row = res.fetchone()
assert row["x"] is not None
def test_drop_col_schematype(self):
self._boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.drop_column(
"x", existing_type=Boolean(create_constraint=True, name="ck1")
)
insp = inspect(self.conn)
assert "x" not in (c["name"] for c in insp.get_columns("hasbool"))
def test_change_type_int_to_boolean(self):
self._int_to_boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.alter_column(
"x", type_=Boolean(create_constraint=True, name="ck1")
)
insp = inspect(self.conn)
if exclusions.against(config, "sqlite"):
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Boolean],
)
elif exclusions.against(config, "mysql"):
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Integer],
)
def _assert_data(self, data, tablename="foo"):
res = self.conn.execute(text("select * from %s" % tablename))
if sqla_14:
res = res.mappings()
eq_([dict(row) for row in res], data)
def test_ix_existing(self):
self._table_w_index_fixture()
with self.op.batch_alter_table("t_w_ix") as batch_op:
batch_op.alter_column("data", type_=String(30))
batch_op.create_index("ix_data", ["data"])
insp = inspect(self.conn)
eq_(
{
(ix["name"], tuple(ix["column_names"]))
for ix in insp.get_indexes("t_w_ix")
},
{("ix_data", ("data",)), ("ix_thing", ("thing",))},
)
def test_fk_points_to_me_auto(self):
self._test_fk_points_to_me("auto")
# in particular, this tests that the failures
# on PG and MySQL result in recovery of the batch system,
# e.g. that the _alembic_tmp_temp table is dropped
@config.requirements.no_referential_integrity
def test_fk_points_to_me_recreate(self):
self._test_fk_points_to_me("always")
@exclusions.only_on("sqlite")
@exclusions.fails(
"intentionally asserting that this "
"doesn't work w/ pragma foreign keys"
)
def test_fk_points_to_me_sqlite_refinteg(self):
with self._sqlite_referential_integrity():
self._test_fk_points_to_me("auto")
def _test_fk_points_to_me(self, recreate):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("foo_id", Integer, ForeignKey("foo.id")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "foo_id": 3})
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.alter_column(
"data", new_column_name="newdata", existing_type=String(50)
)
insp = inspect(self.conn)
eq_(
[
(
key["referred_table"],
key["referred_columns"],
key["constrained_columns"],
)
for key in insp.get_foreign_keys("bar")
],
[("foo", ["id"], ["foo_id"])],
)
def test_selfref_fk_auto(self):
self._test_selfref_fk("auto")
@config.requirements.no_referential_integrity
def test_selfref_fk_recreate(self):
self._test_selfref_fk("always")
@exclusions.only_on("sqlite")
@exclusions.fails(
"intentionally asserting that this "
"doesn't work w/ pragma foreign keys"
)
def test_selfref_fk_sqlite_refinteg(self):
with self._sqlite_referential_integrity():
self._test_selfref_fk("auto")
def _test_selfref_fk(self, recreate):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("bar_id", Integer, ForeignKey("bar.id")),
Column("data", String(50)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(
bar.insert(), {"id": 1, "data": "x", "bar_id": None}
)
self.conn.execute(
bar.insert(), {"id": 2, "data": "y", "bar_id": 1}
)
with self.op.batch_alter_table("bar", recreate=recreate) as batch_op:
batch_op.alter_column(
"data", new_column_name="newdata", existing_type=String(50)
)
insp = inspect(self.conn)
eq_(
[
(
key["referred_table"],
key["referred_columns"],
key["constrained_columns"],
)
for key in insp.get_foreign_keys("bar")
],
[("bar", ["id"], ["bar_id"])],
)
def test_change_type(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("data", type_=Integer)
self._assert_data(
[
{"id": 1, "data": 0, "x": 5},
{"id": 2, "data": 22, "x": 6},
{"id": 3, "data": 8, "x": 7},
{"id": 4, "data": 9, "x": 8},
{"id": 5, "data": 0, "x": 9},
]
)
def test_drop_column(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("data")
self._assert_data(
[
{"id": 1, "x": 5},
{"id": 2, "x": 6},
{"id": 3, "x": 7},
{"id": 4, "x": 8},
{"id": 5, "x": 9},
]
)
def test_drop_pk_col_readd_col(self):
# drop a column, add it back without primary_key=True, should no
# longer be in the constraint
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer))
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], [])
def test_drop_pk_col_readd_pk_col(self):
# drop a column, add it back with primary_key=True, should remain
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer, primary_key=True))
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], ["id"])
def test_drop_pk_col_readd_col_also_pk_const(self):
# drop a column, add it back without primary_key=True, but then
# also make a new PK constraint that includes it, should remain
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer))
batch_op.create_primary_key("newpk", ["id"])
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], ["id"])
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_pk_constraint(self, recreate):
self._no_pk_fixture()
with self.op.batch_alter_table("nopk", recreate=recreate) as batch_op:
batch_op.create_primary_key("newpk", ["a", "b"])
pk_const = inspect(self.conn).get_pk_constraint("nopk")
with config.requirements.reflects_pk_names.fail_if():
eq_(pk_const["name"], "newpk")
eq_(pk_const["constrained_columns"], ["a", "b"])
@testing.combinations(("always",), ("auto",), argnames="recreate")
@config.requirements.check_constraint_reflection
def test_add_ck_constraint(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.create_check_constraint("newck", text("x > 0"))
ck_consts = inspect(self.conn).get_check_constraints("foo")
ck_consts[0]["sqltext"] = re.sub(
r"[\'\"`\(\)]", "", ck_consts[0]["sqltext"]
)
for ck in ck_consts:
ck.pop("comment", None)
eq_(ck_consts, [{"sqltext": "x > 0", "name": "newck"}])
@testing.combinations(("always",), ("auto",), argnames="recreate")
@config.requirements.check_constraint_reflection
def test_drop_ck_constraint(self, recreate):
self._ck_constraint_fixture()
with self.op.batch_alter_table(
"ck_table", recreate=recreate
) as batch_op:
batch_op.drop_constraint("ck", type_="check")
ck_consts = inspect(self.conn).get_check_constraints("ck_table")
eq_(ck_consts, [])
@config.requirements.check_constraint_reflection
def test_drop_ck_constraint_legacy_type(self):
self._ck_constraint_fixture()
with self.op.batch_alter_table(
"ck_table", recreate="always"
) as batch_op:
# matches the docs that were written for this originally
batch_op.drop_constraint("ck", "check")
ck_consts = inspect(self.conn).get_check_constraints("ck_table")
eq_(ck_consts, [])
@config.requirements.unnamed_constraints
def test_drop_foreign_key(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("foo_id", Integer, ForeignKey("foo.id")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "foo_id": 3})
naming_convention = {
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s"
}
with self.op.batch_alter_table(
"bar", naming_convention=naming_convention
) as batch_op:
batch_op.drop_constraint("fk_bar_foo_id_foo", type_="foreignkey")
eq_(inspect(self.conn).get_foreign_keys("bar"), [])
def test_drop_column_fk_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.drop_column("data")
self._assert_data(
[
{"id": 1, "x": 5},
{"id": 2, "x": 6},
{"id": 3, "x": 7},
{"id": 4, "x": 8},
{"id": 5, "x": 9},
]
)
def _assert_table_comment(self, tname, comment):
insp = inspect(self.conn)
tcomment = insp.get_table_comment(tname)
eq_(tcomment, {"text": comment})
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_uq(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.create_unique_constraint("newuk", ["x"])
uq_consts = inspect(self.conn).get_unique_constraints("foo")
eq_(
[
{"name": uc["name"], "column_names": uc["column_names"]}
for uc in uq_consts
],
[{"name": "newuk", "column_names": ["x"]}],
)
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_uq_plus_col(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.add_column(Column("y", Integer))
batch_op.create_unique_constraint("newuk", ["x", "y"])
uq_consts = inspect(self.conn).get_unique_constraints("foo")
eq_(
[
{"name": uc["name"], "column_names": uc["column_names"]}
for uc in uq_consts
],
[{"name": "newuk", "column_names": ["x", "y"]}],
)
@config.requirements.comments
def test_add_table_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment("some comment")
self._assert_table_comment("foo", "some comment")
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment(
"some new comment", existing_comment="some comment"
)
self._assert_table_comment("foo", "some new comment")
@config.requirements.comments
def test_drop_table_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment("some comment")
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_table_comment(existing_comment="some comment")
self._assert_table_comment("foo", None)
def _assert_column_comment(self, tname, cname, comment):
insp = inspect(self.conn)
cols = {col["name"]: col for col in insp.get_columns(tname)}
eq_(cols[cname]["comment"], comment)
@config.requirements.comments
def test_add_column_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(Column("y", Integer, comment="some comment"))
self._assert_column_comment("foo", "y", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "y": None},
{"id": 2, "data": "22", "x": 6, "y": None},
{"id": 3, "data": "8.5", "x": 7, "y": None},
{"id": 4, "data": "9.46", "x": 8, "y": None},
{"id": 5, "data": "d5", "x": 9, "y": None},
]
)
@config.requirements.comments
def test_add_column_comment_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(Column("y", Integer, comment="some comment"))
self._assert_column_comment("foo", "y", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "y": None},
{"id": 2, "data": "22", "x": 6, "y": None},
{"id": 3, "data": "8.5", "x": 7, "y": None},
{"id": 4, "data": "9.46", "x": 8, "y": None},
{"id": 5, "data": "d5", "x": 9, "y": None},
]
)
@config.requirements.comments
def test_alter_column_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column(
"x", existing_type=Integer(), comment="some comment"
)
self._assert_column_comment("foo", "x", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
@config.requirements.comments
def test_alter_column_comment_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.alter_column("x", comment="some comment")
self._assert_column_comment("foo", "x", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
def test_rename_column(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("x", new_column_name="y")
self._assert_data(
[
{"id": 1, "data": "d1", "y": 5},
{"id": 2, "data": "22", "y": 6},
{"id": 3, "data": "8.5", "y": 7},
{"id": 4, "data": "9.46", "y": 8},
{"id": 5, "data": "d5", "y": 9},
]
)
def test_rename_column_boolean(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
with self.op.batch_alter_table("bar") as batch_op:
batch_op.alter_column(
"flag", new_column_name="bflag", existing_type=Boolean
)
self._assert_data(
[{"id": 1, "bflag": True}, {"id": 2, "bflag": False}], "bar"
)
# @config.requirements.check_constraint_reflection
def test_rename_column_boolean_named_ck(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True, name="ck1")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
with self.op.batch_alter_table("bar", recreate="always") as batch_op:
batch_op.alter_column(
"flag",
new_column_name="bflag",
existing_type=Boolean(create_constraint=True, name="ck1"),
)
self._assert_data(
[{"id": 1, "bflag": True}, {"id": 2, "bflag": False}], "bar"
)
@config.requirements.non_native_boolean
def test_rename_column_non_native_boolean_no_ck(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=False)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
self.conn.execute(
            # override the Boolean type, which as of SQLAlchemy 1.1
            # coerces numerics to 1/0
text("insert into bar (id, flag) values (:id, :flag)"),
{"id": 3, "flag": 5},
)
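        # pass an explicit Column via reflect_args so the reflected table
        # uses Boolean(create_constraint=False) for "flag" rather than
        # whatever type reflection would otherwise pick up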
with self.op.batch_alter_table(
"bar",
reflect_args=[Column("flag", Boolean(create_constraint=False))],
) as batch_op:
batch_op.alter_column(
"flag", new_column_name="bflag", existing_type=Boolean
)
self._assert_data(
[
{"id": 1, "bflag": True},
{"id": 2, "bflag": False},
{"id": 3, "bflag": 5},
],
"bar",
)
def test_drop_column_pk(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
self._assert_data(
[
{"data": "d1", "x": 5},
{"data": "22", "x": 6},
{"data": "8.5", "x": 7},
{"data": "9.46", "x": 8},
{"data": "d5", "x": 9},
]
)
def test_rename_column_pk(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("id", new_column_name="ident")
self._assert_data(
[
{"ident": 1, "data": "d1", "x": 5},
{"ident": 2, "data": "22", "x": 6},
{"ident": 3, "data": "8.5", "x": 7},
{"ident": 4, "data": "9.46", "x": 8},
{"ident": 5, "data": "d5", "x": 9},
]
)
def test_add_column_auto(self):
# note this uses ALTER
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi")
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(config.db).get_columns("foo")],
["id", "data", "x", "data2"],
)
def test_add_column_auto_server_default_calculated(self):
"""test #883"""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column(
"data2",
DateTime(),
server_default=self._datetime_server_default_fixture(),
)
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": mock.ANY},
{"id": 2, "data": "22", "x": 6, "data2": mock.ANY},
{"id": 3, "data": "8.5", "x": 7, "data2": mock.ANY},
{"id": 4, "data": "9.46", "x": 8, "data2": mock.ANY},
{"id": 5, "data": "d5", "x": 9, "data2": mock.ANY},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
@testing.combinations((True,), (False,))
@testing.exclusions.only_on("sqlite")
@config.requirements.computed_columns
def test_add_column_auto_generated(self, persisted):
"""test #883"""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column(
"data2", Integer, Computed("1 + 1", persisted=persisted)
)
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": 2},
{"id": 2, "data": "22", "x": 6, "data2": 2},
{"id": 3, "data": "8.5", "x": 7, "data2": 2},
{"id": 4, "data": "9.46", "x": 8, "data2": 2},
{"id": 5, "data": "d5", "x": 9, "data2": 2},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
@config.requirements.identity_columns
def test_add_column_auto_identity(self):
"""test #883"""
self._no_pk_fixture()
with self.op.batch_alter_table("nopk") as batch_op:
batch_op.add_column(Column("id", Integer, Identity()))
self._assert_data(
[
{"a": 1, "b": 2, "c": 3, "id": 1},
{"a": 2, "b": 4, "c": 5, "id": 2},
],
tablename="nopk",
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x"],
)
def test_add_column_insert_before_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_before="data",
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data2", "data", "x"],
)
def test_add_column_insert_after_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_after="data",
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "data2", "x"],
)
def test_add_column_insert_before_raise_on_alter(self):
def go():
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_before="data",
)
assert_raises_message(
alembic_exc.CommandError,
"Can't specify insert_before or insert_after when using ALTER",
go,
)
def test_add_column_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi")
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
def test_create_drop_index(self):
insp = inspect(self.conn)
eq_(insp.get_indexes("foo"), [])
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
insp = inspect(self.conn)
eq_(
[
dict(
unique=ix["unique"],
name=ix["name"],
column_names=ix["column_names"],
)
for ix in insp.get_indexes("foo")
],
[{"unique": True, "name": "ix_data", "column_names": ["data"]}],
)
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.drop_index("ix_data")
insp = inspect(self.conn)
eq_(insp.get_indexes("foo"), [])
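# Dialect-specific round-trip subclasses: the @exclusions.fails() overrides
# below mark batch operations these backends are known not to support (or
# that behave differently), while the rest of the suite is inherited as-is.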
class BatchRoundTripMySQLTest(BatchRoundTripTest):
__only_on__ = "mysql", "mariadb"
__backend__ = True
def _datetime_server_default_fixture(self):
return func.current_timestamp()
@exclusions.fails()
def test_drop_pk_col_readd_pk_col(self):
super().test_drop_pk_col_readd_pk_col()
@exclusions.fails()
def test_drop_pk_col_readd_col_also_pk_const(self):
super().test_drop_pk_col_readd_col_also_pk_const()
@exclusions.fails()
def test_rename_column_pk(self):
super().test_rename_column_pk()
@exclusions.fails()
def test_rename_column(self):
super().test_rename_column()
@exclusions.fails()
def test_change_type(self):
super().test_change_type()
def test_create_drop_index(self):
super().test_create_drop_index()
# fails on mariadb 10.2, succeeds on 10.3
@exclusions.fails_if(config.requirements.mysql_check_col_name_change)
def test_rename_column_boolean(self):
super().test_rename_column_boolean()
def test_change_type_boolean_to_int(self):
super().test_change_type_boolean_to_int()
def test_change_type_int_to_boolean(self):
super().test_change_type_int_to_boolean()
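# PostgreSQL round trip: same pattern as the MySQL subclass above, plus
# coverage for a backend with a native BOOLEAN type (see
# test_add_col_table_has_native_boolean).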
class BatchRoundTripPostgresqlTest(BatchRoundTripTest):
__only_on__ = "postgresql"
__backend__ = True
def _native_boolean_fixture(self):
t = Table(
"has_native_bool",
self.metadata,
Column(
"x",
Boolean(create_constraint=True),
server_default="false",
nullable=False,
),
Column("y", Integer),
)
with self.conn.begin():
t.create(self.conn)
def _datetime_server_default_fixture(self):
return func.current_timestamp()
@exclusions.fails()
def test_drop_pk_col_readd_pk_col(self):
super().test_drop_pk_col_readd_pk_col()
@exclusions.fails()
def test_drop_pk_col_readd_col_also_pk_const(self):
super().test_drop_pk_col_readd_col_also_pk_const()
@exclusions.fails()
def test_change_type(self):
super().test_change_type()
def test_create_drop_index(self):
super().test_create_drop_index()
@exclusions.fails()
def test_change_type_int_to_boolean(self):
super().test_change_type_int_to_boolean()
@exclusions.fails()
def test_change_type_boolean_to_int(self):
super().test_change_type_boolean_to_int()
def test_add_col_table_has_native_boolean(self):
self._native_boolean_fixture()
# to ensure test coverage on SQLAlchemy 1.4 and above,
# force the create_constraint flag to True even though it
# defaults to false in 1.4. this test wants to ensure that the
# "should create" rule is consulted
def listen_for_reflect(inspector, table, column_info):
if isinstance(column_info["type"], Boolean):
column_info["type"].create_constraint = True
with self.op.batch_alter_table(
"has_native_bool",
recreate="always",
reflect_kwargs={
"listeners": [("column_reflect", listen_for_reflect)]
},
) as batch_op:
batch_op.add_column(Column("data", Integer))
insp = inspect(self.conn)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("has_native_bool")
if c["name"] == "data"
],
[Integer],
)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("has_native_bool")
if c["name"] == "x"
],
[Boolean],
)
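# OfflineTest covers batch mode under offline ("--sql") migrations:
# reflection-based batch needs a live connection and must fail gracefully,
# while passing copy_from= lets the migration render the full
# CREATE/INSERT/DROP/RENAME sequence without connecting to a database.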
class OfflineTest(TestBase):
@testing.fixture
def no_reflect_batch_fixture(self):
staging_env()
def go():
self.cfg = cfg = _no_sql_testing_config(dialect="sqlite")
self.a = a = util.rev_id()
script = ScriptDirectory.from_config(cfg)
script.generate_revision(
a, "revision a", refresh=True, head="base"
)
write_script(
script,
a,
"""\
"Rev A"
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import String, Table, MetaData
some_table_up = Table(
"some_table", MetaData(),
Column('id', Integer),
Column('bar', String)
)
some_table_down = Table(
"some_table", MetaData(),
Column('id', Integer),
Column('foo', Integer)
)
def upgrade():
with op.batch_alter_table("some_table", copy_from=some_table_up) as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
def downgrade():
with op.batch_alter_table("some_table", copy_from=some_table_down) as batch_op:
batch_op.drop_column('foo')
batch_op.add_column(Column('bar', String))
""" # noqa: E501
% a,
)
yield go
clear_staging_env()
@testing.fixture
def batch_fixture(self):
staging_env()
def go(dialect):
self.cfg = cfg = _no_sql_testing_config(dialect=dialect)
self.a = a = util.rev_id()
script = ScriptDirectory.from_config(cfg)
script.generate_revision(
a, "revision a", refresh=True, head="base"
)
write_script(
script,
a,
"""\
"Rev A"
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import String
def upgrade():
with op.batch_alter_table("some_table") as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
def downgrade():
with op.batch_alter_table("some_table") as batch_op:
batch_op.drop_column('foo')
batch_op.add_column(Column('bar', String))
"""
% a,
)
yield go
clear_staging_env()
def test_upgrade_non_batch(self, batch_fixture):
batch_fixture("postgresql")
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.a, sql=True)
assert re.search(
r"ALTER TABLE some_table ADD COLUMN foo INTEGER", buf.getvalue()
)
def test_downgrade_non_batch(self, batch_fixture):
batch_fixture("postgresql")
with capture_context_buffer() as buf:
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
assert re.search(
r"ALTER TABLE some_table DROP COLUMN foo", buf.getvalue()
)
def test_upgrade_batch_fails_gracefully(self, batch_fixture):
batch_fixture("sqlite")
with expect_raises_message(
CommandError,
"This operation cannot proceed in --sql mode; batch mode with "
"dialect sqlite requires a live database connection with which "
'to reflect the table "some_table"',
):
command.upgrade(self.cfg, self.a, sql=True)
def test_downgrade_batch_fails_gracefully(self, batch_fixture):
batch_fixture("sqlite")
with expect_raises_message(
CommandError,
"This operation cannot proceed in --sql mode; batch mode with "
"dialect sqlite requires a live database connection with which "
'to reflect the table "some_table"',
):
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
def test_upgrade_batch_no_reflection(self, no_reflect_batch_fixture):
no_reflect_batch_fixture()
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.a, sql=True)
assert re.search(
r"CREATE TABLE _alembic_tmp_some_table", buf.getvalue()
)
def test_downgrade_batch_no_reflection(self, no_reflect_batch_fixture):
no_reflect_batch_fixture()
with capture_context_buffer() as buf:
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
assert re.search(
r"CREATE TABLE _alembic_tmp_some_table", buf.getvalue()
)
from contextlib import contextmanager
import re
from sqlalchemy import Boolean
from sqlalchemy import CheckConstraint
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import Enum
from sqlalchemy import ForeignKey
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy import func
from sqlalchemy import Index
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy import JSON
from sqlalchemy import MetaData
from sqlalchemy import PrimaryKeyConstraint
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import Text
from sqlalchemy import UniqueConstraint
from sqlalchemy.dialects import sqlite as sqlite_dialect
from sqlalchemy.schema import CreateIndex
from sqlalchemy.schema import CreateTable
from sqlalchemy.sql import column
from sqlalchemy.sql import text
from alembic import command
from alembic import testing
from alembic import util
from alembic.ddl import sqlite
from alembic.operations import Operations
from alembic.operations.batch import ApplyBatchImpl
from alembic.runtime.migration import MigrationContext
from alembic.script import ScriptDirectory
from alembic.testing import assert_raises_message
from alembic.testing import config
from alembic.testing import eq_
from alembic.testing import exclusions
from alembic.testing import expect_raises_message
from alembic.testing import is_
from alembic.testing import mock
from alembic.testing import TestBase
from alembic.testing.env import _no_sql_testing_config
from alembic.testing.env import clear_staging_env
from alembic.testing.env import staging_env
from alembic.testing.env import write_script
from alembic.testing.fixtures import capture_context_buffer
from alembic.testing.fixtures import op_fixture
from alembic.util import CommandError
from alembic.util import exc as alembic_exc
from alembic.util.sqla_compat import _NONE_NAME
from alembic.util.sqla_compat import _safe_commit_connection_transaction
from alembic.util.sqla_compat import _select
from alembic.util.sqla_compat import has_computed
from alembic.util.sqla_compat import has_identity
from alembic.util.sqla_compat import sqla_14
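# Computed and Identity are only importable on SQLAlchemy versions that
# support computed/identity columns, hence the guarded imports below.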
if has_computed:
from alembic.util.sqla_compat import Computed
if has_identity:
from alembic.util.sqla_compat import Identity
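# BatchApplyTest drives ApplyBatchImpl directly against a SQLite impl with a
# mocked migration context. Batch mode rebuilds the table with a "move and
# copy" sequence, roughly (a sketch; each test below asserts the exact
# statements via _assert_impl):
#
#     CREATE TABLE _alembic_tmp_tname (... new definition ...)
#     INSERT INTO _alembic_tmp_tname (...) SELECT ... FROM tname
#     DROP TABLE tname
#     ALTER TABLE _alembic_tmp_tname RENAME TO tname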
class BatchApplyTest(TestBase):
def setUp(self):
self.op = Operations(mock.Mock(opts={}))
self.impl = sqlite.SQLiteImpl(
sqlite_dialect.dialect(), None, False, False, None, {}
)
def _simple_fixture(self, table_args=(), table_kwargs={}, **kw):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String(10)),
Column("y", Integer),
)
return ApplyBatchImpl(
self.impl, t, table_args, table_kwargs, False, **kw
)
def _uq_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
UniqueConstraint("y", name="uq1"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_ck_table_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
CheckConstraint("y > 5", name="ck1"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_ck_col_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer, CheckConstraint("y > 5", name="ck1")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _ix_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
Index("ix1", "y"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _pk_fixture(self):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer),
Column("x", String()),
Column("y", Integer),
PrimaryKeyConstraint("id", name="mypk"),
)
return ApplyBatchImpl(self.impl, t, (), {}, False)
def _literal_ck_fixture(
self, copy_from=None, table_args=(), table_kwargs={}
):
m = MetaData()
if copy_from is not None:
t = copy_from
else:
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
CheckConstraint("email LIKE '%@%'"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _sql_ck_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
)
t.append_constraint(CheckConstraint(t.c.email.like("%@%")))
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id", Integer, ForeignKey("user.id")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _multi_fk_fixture(self, table_args=(), table_kwargs={}, schema=None):
m = MetaData()
if schema:
schemaarg = "%s." % schema
else:
schemaarg = ""
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id_1", Integer, ForeignKey("%suser.id" % schemaarg)),
Column("user_id_2", Integer, ForeignKey("%suser.id" % schemaarg)),
Column("user_id_3", Integer),
Column("user_id_version", Integer),
ForeignKeyConstraint(
["user_id_3", "user_id_version"],
["%suser.id" % schemaarg, "%suser.id_version" % schemaarg],
),
schema=schema,
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id", Integer, ForeignKey("user.id", name="ufk")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _selfref_fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("parent_id", Integer, ForeignKey("tname.id")),
Column("data", String),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _boolean_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _boolean_no_ck_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=False)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _enum_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("thing", Enum("a", "b", "c", create_constraint=True)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _server_default_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("thing", String(), server_default=""),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
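    # _assert_impl renders the DDL the batch impl would emit and compares it
    # to the op_fixture's captured statements; columns whose type affinity
    # changed are expected to appear as CAST(tname.col AS TYPE) in the
    # INSERT..SELECT, and indexes are rendered against the final table name.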
def _assert_impl(
self,
impl,
colnames=None,
ddl_contains=None,
ddl_not_contains=None,
dialect="default",
schema=None,
):
context = op_fixture(dialect=dialect)
impl._create(context.impl)
if colnames is None:
colnames = ["id", "x", "y"]
eq_(impl.new_table.c.keys(), colnames)
pk_cols = [col for col in impl.new_table.c if col.primary_key]
eq_(list(impl.new_table.primary_key), pk_cols)
create_stmt = str(
CreateTable(impl.new_table).compile(dialect=context.dialect)
)
create_stmt = re.sub(r"[\n\t]", "", create_stmt)
idx_stmt = ""
# create indexes; these should be created in terms of the
# final table name
impl.new_table.name = impl.table.name
for idx in impl._gather_indexes_from_both_tables():
idx_stmt += str(CreateIndex(idx).compile(dialect=context.dialect))
idx_stmt = re.sub(r"[\n\t]", "", idx_stmt)
# revert new table name to the temp name, assertions below
# are looking for the temp name
impl.new_table.name = ApplyBatchImpl._calc_temp_name(impl.table.name)
if ddl_contains:
assert ddl_contains in create_stmt + idx_stmt
if ddl_not_contains:
assert ddl_not_contains not in create_stmt + idx_stmt
expected = [create_stmt]
if schema:
args = {"schema": "%s." % schema}
else:
args = {"schema": ""}
args["temp_name"] = impl.new_table.name
args["colnames"] = ", ".join(
[
impl.new_table.c[name].name
for name in colnames
if name in impl.table.c
]
)
args["tname_colnames"] = ", ".join(
"CAST(%(schema)stname.%(name)s AS %(type)s) AS %(cast_label)s"
% {
"schema": args["schema"],
"name": name,
"type": impl.new_table.c[name].type,
"cast_label": name if sqla_14 else "anon_1",
}
if (
impl.new_table.c[name].type._type_affinity
is not impl.table.c[name].type._type_affinity
)
else "%(schema)stname.%(name)s"
% {"schema": args["schema"], "name": name}
for name in colnames
if name in impl.table.c
)
expected.extend(
[
"INSERT INTO %(schema)s%(temp_name)s (%(colnames)s) "
"SELECT %(tname_colnames)s FROM %(schema)stname" % args,
"DROP TABLE %(schema)stname" % args,
"ALTER TABLE %(schema)s%(temp_name)s "
"RENAME TO %(schema)stname" % args,
]
)
if idx_stmt:
expected.append(idx_stmt)
context.assert_(*expected)
return impl.new_table
def test_change_type(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", type_=String)
new_table = self._assert_impl(impl)
assert new_table.c.x.type._type_affinity is String
def test_rename_col(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", name="q")
new_table = self._assert_impl(impl)
eq_(new_table.c.x.name, "q")
def test_rename_col_w_index(self):
impl = self._ix_fixture()
impl.alter_column("tname", "y", name="y2")
new_table = self._assert_impl(
impl, ddl_contains="CREATE INDEX ix1 ON tname (y2)"
)
eq_(new_table.c.y.name, "y2")
def test_rename_col_w_uq(self):
impl = self._uq_fixture()
impl.alter_column("tname", "y", name="y2")
new_table = self._assert_impl(impl, ddl_contains="UNIQUE (y2)")
eq_(new_table.c.y.name, "y2")
def test_alter_column_comment(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", comment="some comment")
new_table = self._assert_impl(impl)
eq_(new_table.c.x.comment, "some comment")
def test_add_column_comment(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("q", Integer, comment="some comment"))
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "q"])
eq_(new_table.c.q.comment, "some comment")
def test_rename_col_boolean(self):
impl = self._boolean_fixture()
impl.alter_column("tname", "flag", name="bflag")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (bflag IN (0, 1)",
colnames=["id", "flag"],
)
eq_(new_table.c.flag.name, "bflag")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
1,
)
def test_change_type_schematype_to_non(self):
impl = self._boolean_fixture()
impl.alter_column("tname", "flag", type_=Integer)
new_table = self._assert_impl(
impl, colnames=["id", "flag"], ddl_not_contains="CHECK"
)
assert new_table.c.flag.type._type_affinity is Integer
# NOTE: we can't do test_change_type_non_to_schematype
# at this level because the "add_constraint" part of this
# comes from toimpl.py, which we aren't testing here
def test_rename_col_boolean_no_ck(self):
impl = self._boolean_no_ck_fixture()
impl.alter_column("tname", "flag", name="bflag")
new_table = self._assert_impl(
impl, ddl_not_contains="CHECK", colnames=["id", "flag"]
)
eq_(new_table.c.flag.name, "bflag")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
0,
)
def test_rename_col_enum(self):
impl = self._enum_fixture()
impl.alter_column("tname", "thing", name="thang")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (thang IN ('a', 'b', 'c')",
colnames=["id", "thing"],
)
eq_(new_table.c.thing.name, "thang")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
1,
)
def test_rename_col_literal_ck(self):
impl = self._literal_ck_fixture()
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
# note this is wrong, we don't dig into the SQL
impl,
ddl_contains="CHECK (email LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_rename_col_literal_ck_workaround(self):
impl = self._literal_ck_fixture(
copy_from=Table(
"tname",
MetaData(),
Column("id", Integer, primary_key=True),
Column("email", String),
),
table_args=[CheckConstraint("emol LIKE '%@%'")],
)
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (emol LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_rename_col_sql_ck(self):
impl = self._sql_ck_fixture()
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (emol LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_add_col(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col)
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "g"])
eq_(new_table.c.g.name, "g")
def test_partial_reordering(self):
impl = self._simple_fixture(partial_reordering=[("x", "id", "y")])
new_table = self._assert_impl(impl, colnames=["x", "id", "y"])
eq_(new_table.c.x.name, "x")
def test_add_col_partial_reordering(self):
impl = self._simple_fixture(partial_reordering=[("id", "x", "g", "y")])
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col)
new_table = self._assert_impl(impl, colnames=["id", "x", "g", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col, insert_before="x")
new_table = self._assert_impl(impl, colnames=["id", "g", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_beginning(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_before="id")
new_table = self._assert_impl(impl, colnames=["g", "id", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_middle(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_before="y")
new_table = self._assert_impl(impl, colnames=["id", "x", "g", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_middle(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
new_table = self._assert_impl(impl, colnames=["id", "g", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_penultimate(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="x")
self._assert_impl(impl, colnames=["id", "x", "g", "y"])
def test_add_col_insert_after_end(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="y")
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "g"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_plus_no_order(self):
impl = self._simple_fixture()
# operations.add_column produces a table
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer))
new_table = self._assert_impl(
impl, colnames=["id", "g", "x", "y", "q"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_no_order_plus_insert_after(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", Column("q", Integer))
impl.add_column("tname", Column("g", Integer), insert_after="id")
new_table = self._assert_impl(
impl, colnames=["id", "g", "x", "y", "q"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_another_insert(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer), insert_after="g")
new_table = self._assert_impl(
impl, colnames=["id", "g", "q", "x", "y"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_another_insert(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer), insert_before="g")
new_table = self._assert_impl(
impl, colnames=["id", "q", "g", "x", "y"]
)
eq_(new_table.c.g.name, "g")
def test_add_server_default(self):
impl = self._simple_fixture()
impl.alter_column("tname", "y", server_default="10")
new_table = self._assert_impl(impl, ddl_contains="DEFAULT '10'")
eq_(new_table.c.y.server_default.arg, "10")
def test_drop_server_default(self):
impl = self._server_default_fixture()
impl.alter_column("tname", "thing", server_default=None)
new_table = self._assert_impl(
impl, colnames=["id", "thing"], ddl_not_contains="DEFAULT"
)
eq_(new_table.c.thing.server_default, None)
def test_rename_col_pk(self):
impl = self._simple_fixture()
impl.alter_column("tname", "id", name="foobar")
new_table = self._assert_impl(
impl, ddl_contains="PRIMARY KEY (foobar)"
)
eq_(new_table.c.id.name, "foobar")
eq_(list(new_table.primary_key), [new_table.c.id])
def test_rename_col_fk(self):
impl = self._fk_fixture()
impl.alter_column("tname", "user_id", name="foobar")
new_table = self._assert_impl(
impl,
colnames=["id", "email", "user_id"],
ddl_contains='FOREIGN KEY(foobar) REFERENCES "user" (id)',
)
eq_(new_table.c.user_id.name, "foobar")
eq_(
list(new_table.c.user_id.foreign_keys)[0]._get_colspec(), "user.id"
)
def test_regen_multi_fk(self):
impl = self._multi_fk_fixture()
self._assert_impl(
impl,
colnames=[
"id",
"email",
"user_id_1",
"user_id_2",
"user_id_3",
"user_id_version",
],
ddl_contains="FOREIGN KEY(user_id_3, user_id_version) "
'REFERENCES "user" (id, id_version)',
)
def test_regen_multi_fk_schema(self):
impl = self._multi_fk_fixture(schema="foo_schema")
self._assert_impl(
impl,
colnames=[
"id",
"email",
"user_id_1",
"user_id_2",
"user_id_3",
"user_id_version",
],
ddl_contains="FOREIGN KEY(user_id_3, user_id_version) "
'REFERENCES foo_schema."user" (id, id_version)',
schema="foo_schema",
)
def test_do_not_add_existing_columns_columns(self):
impl = self._multi_fk_fixture()
meta = impl.table.metadata
cid = Column("id", Integer())
user = Table("user", meta, cid)
fk = [
c
for c in impl.unnamed_constraints
if isinstance(c, ForeignKeyConstraint)
]
impl._setup_referent(meta, fk[0])
is_(user.c.id, cid)
def test_drop_col(self):
impl = self._simple_fixture()
impl.drop_column("tname", column("x"))
new_table = self._assert_impl(impl, colnames=["id", "y"])
assert "y" in new_table.c
assert "x" not in new_table.c
def test_drop_col_remove_pk(self):
impl = self._simple_fixture()
impl.drop_column("tname", column("id"))
new_table = self._assert_impl(
impl, colnames=["x", "y"], ddl_not_contains="PRIMARY KEY"
)
assert "y" in new_table.c
assert "id" not in new_table.c
assert not new_table.primary_key
def test_drop_col_remove_fk(self):
impl = self._fk_fixture()
impl.drop_column("tname", column("user_id"))
new_table = self._assert_impl(
impl, colnames=["id", "email"], ddl_not_contains="FOREIGN KEY"
)
assert "user_id" not in new_table.c
assert not new_table.foreign_keys
def test_drop_col_retain_fk(self):
impl = self._fk_fixture()
impl.drop_column("tname", column("email"))
new_table = self._assert_impl(
impl,
colnames=["id", "user_id"],
ddl_contains='FOREIGN KEY(user_id) REFERENCES "user" (id)',
)
assert "email" not in new_table.c
assert new_table.c.user_id.foreign_keys
def test_drop_col_retain_fk_selfref(self):
impl = self._selfref_fk_fixture()
impl.drop_column("tname", column("data"))
new_table = self._assert_impl(impl, colnames=["id", "parent_id"])
assert "data" not in new_table.c
assert new_table.c.parent_id.foreign_keys
def test_add_fk(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("user_id", Integer))
fk = self.op.schema_obj.foreign_key_constraint(
"fk1", "tname", "user", ["user_id"], ["id"]
)
impl.add_constraint(fk)
new_table = self._assert_impl(
impl,
colnames=["id", "x", "y", "user_id"],
ddl_contains="CONSTRAINT fk1 FOREIGN KEY(user_id) "
'REFERENCES "user" (id)',
)
eq_(
list(new_table.c.user_id.foreign_keys)[0]._get_colspec(), "user.id"
)
def test_drop_fk(self):
impl = self._named_fk_fixture()
fk = ForeignKeyConstraint([], [], name="ufk")
impl.drop_constraint(fk)
new_table = self._assert_impl(
impl,
colnames=["id", "email", "user_id"],
ddl_not_contains="CONSTRAINT ufk",
)
eq_(list(new_table.foreign_keys), [])
def test_add_uq(self):
impl = self._simple_fixture()
uq = self.op.schema_obj.unique_constraint("uq1", "tname", ["y"])
impl.add_constraint(uq)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CONSTRAINT uq1 UNIQUE",
)
def test_drop_uq(self):
impl = self._uq_fixture()
uq = self.op.schema_obj.unique_constraint("uq1", "tname", ["y"])
impl.drop_constraint(uq)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT uq1 UNIQUE",
)
def test_add_ck_unnamed(self):
"""test for #1195"""
impl = self._simple_fixture()
ck = self.op.schema_obj.check_constraint(_NONE_NAME, "tname", "y > 5")
impl.add_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CHECK (y > 5)",
)
def test_add_ck(self):
impl = self._simple_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.add_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_drop_ck_table(self):
impl = self._named_ck_table_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.drop_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_drop_ck_col(self):
impl = self._named_ck_col_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.drop_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_create_index(self):
impl = self._simple_fixture()
ix = self.op.schema_obj.index("ix1", "tname", ["y"])
impl.create_index(ix)
self._assert_impl(
impl, colnames=["id", "x", "y"], ddl_contains="CREATE INDEX ix1"
)
def test_drop_index(self):
impl = self._ix_fixture()
ix = self.op.schema_obj.index("ix1", "tname", ["y"])
impl.drop_index(ix)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT uq1 UNIQUE",
)
def test_add_table_opts(self):
impl = self._simple_fixture(table_kwargs={"mysql_engine": "InnoDB"})
self._assert_impl(impl, ddl_contains="ENGINE=InnoDB", dialect="mysql")
def test_drop_pk(self):
impl = self._pk_fixture()
pk = self.op.schema_obj.primary_key_constraint("mypk", "tname", ["id"])
impl.drop_constraint(pk)
new_table = self._assert_impl(impl)
assert not new_table.c.id.primary_key
assert not len(new_table.primary_key)
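# BatchAPITest replaces the schema-object factory with a MagicMock to verify
# that batch operations in recreate="never" (plain ALTER) mode construct the
# expected constraint objects and delegate them to the underlying impl, with
# no database involved.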
class BatchAPITest(TestBase):
@contextmanager
def _fixture(self, schema=None):
migration_context = mock.Mock(
opts={},
impl=mock.MagicMock(__dialect__="sqlite", connection=object()),
)
op = Operations(migration_context)
batch = op.batch_alter_table(
"tname", recreate="never", schema=schema
).__enter__()
mock_schema = mock.MagicMock()
with mock.patch("alembic.operations.schemaobj.sa_schema", mock_schema):
yield batch
batch.impl.flush()
self.mock_schema = mock_schema
def test_drop_col(self):
with self._fixture() as batch:
batch.drop_column("q")
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.drop_column(
"tname", self.mock_schema.Column(), schema=None
)
],
)
def test_add_col(self):
column = Column("w", String(50))
with self._fixture() as batch:
batch.add_column(column)
assert (
mock.call.add_column("tname", column, schema=None)
in batch.impl.operations.impl.mock_calls
)
def test_create_fk(self):
with self._fixture() as batch:
batch.create_foreign_key("myfk", "user", ["x"], ["y"])
eq_(
self.mock_schema.ForeignKeyConstraint.mock_calls,
[
mock.call(
["x"],
["user.y"],
onupdate=None,
ondelete=None,
name="myfk",
initially=None,
deferrable=None,
match=None,
)
],
)
eq_(
self.mock_schema.Table.mock_calls,
[
mock.call(
"user",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call(
"tname",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call().append_constraint(
self.mock_schema.ForeignKeyConstraint()
),
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.ForeignKeyConstraint()
)
],
)
def test_create_fk_schema(self):
with self._fixture(schema="foo") as batch:
batch.create_foreign_key("myfk", "user", ["x"], ["y"])
eq_(
self.mock_schema.ForeignKeyConstraint.mock_calls,
[
mock.call(
["x"],
["user.y"],
onupdate=None,
ondelete=None,
name="myfk",
initially=None,
deferrable=None,
match=None,
)
],
)
eq_(
self.mock_schema.Table.mock_calls,
[
mock.call(
"user",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call(
"tname",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema="foo",
),
mock.call().append_constraint(
self.mock_schema.ForeignKeyConstraint()
),
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.ForeignKeyConstraint()
)
],
)
def test_create_uq(self):
with self._fixture() as batch:
batch.create_unique_constraint("uq1", ["a", "b"])
eq_(
self.mock_schema.Table().c.__getitem__.mock_calls,
[mock.call("a"), mock.call("b")],
)
eq_(
self.mock_schema.UniqueConstraint.mock_calls,
[
mock.call(
self.mock_schema.Table().c.__getitem__(),
self.mock_schema.Table().c.__getitem__(),
name="uq1",
)
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.add_constraint(self.mock_schema.UniqueConstraint())],
)
def test_create_pk(self):
with self._fixture() as batch:
batch.create_primary_key("pk1", ["a", "b"])
eq_(
self.mock_schema.Table().c.__getitem__.mock_calls,
[mock.call("a"), mock.call("b")],
)
eq_(
self.mock_schema.PrimaryKeyConstraint.mock_calls,
[
mock.call(
self.mock_schema.Table().c.__getitem__(),
self.mock_schema.Table().c.__getitem__(),
name="pk1",
)
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.PrimaryKeyConstraint()
)
],
)
def test_create_check(self):
expr = text("a > b")
with self._fixture() as batch:
batch.create_check_constraint("ck1", expr)
eq_(
self.mock_schema.CheckConstraint.mock_calls,
[mock.call(expr, name="ck1")],
)
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.add_constraint(self.mock_schema.CheckConstraint())],
)
def test_drop_constraint(self):
with self._fixture() as batch:
batch.drop_constraint("uq1")
eq_(self.mock_schema.Constraint.mock_calls, [mock.call(name="uq1")])
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.drop_constraint(self.mock_schema.Constraint())],
)
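# CopyFromTest runs batch mode with as_sql=True and copy_from=<Table>, so no
# reflection takes place; assertions compare the generated SQL strings for
# the move-and-copy sequence directly.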
class CopyFromTest(TestBase):
def _fixture(self):
self.metadata = MetaData()
self.table = Table(
"foo",
self.metadata,
Column("id", Integer, primary_key=True),
Column("data", String(50)),
Column("x", Integer),
)
context = op_fixture(dialect="sqlite", as_sql=True)
self.op = Operations(context)
return context
def test_change_type(self):
context = self._fixture()
self.table.append_column(Column("toj", Text))
self.table.append_column(Column("fromj", JSON))
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column("data", type_=Integer)
batch_op.alter_column("toj", type_=JSON)
batch_op.alter_column("fromj", type_=Text)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data INTEGER, x INTEGER, toj JSON, fromj TEXT, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, toj, fromj) "
"SELECT foo.id, "
"CAST(foo.data AS INTEGER) AS %s, foo.x, foo.toj, "
"CAST(foo.fromj AS TEXT) AS %s FROM foo"
% (
("data" if sqla_14 else "anon_1"),
("fromj" if sqla_14 else "anon_2"),
),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_type_from_schematype(self):
context = self._fixture()
self.table.append_column(
Column("y", Boolean(create_constraint=True, name="ck1"))
)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
"y",
type_=Integer,
existing_type=Boolean(create_constraint=True, name="ck1"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, y INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, y) SELECT foo.id, "
"foo.data, foo.x, CAST(foo.y AS INTEGER) AS %s FROM foo"
% (("y" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_name_from_existing_variant_type(self):
"""test #982"""
context = self._fixture()
self.table.append_column(
Column("y", Text().with_variant(Text(10000), "mysql"))
)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
column_name="y",
new_column_name="q",
existing_type=Text().with_variant(Text(10000), "mysql"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, q TEXT, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, q) "
"SELECT foo.id, foo.data, foo.x, foo.y FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_type_to_schematype(self):
context = self._fixture()
self.table.append_column(Column("y", Integer))
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
"y",
existing_type=Integer,
type_=Boolean(create_constraint=True, name="ck1"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, y BOOLEAN, PRIMARY KEY (id), "
"CONSTRAINT ck1 CHECK (y IN (0, 1)))",
"INSERT INTO _alembic_tmp_foo (id, data, x, y) SELECT foo.id, "
"foo.data, foo.x, CAST(foo.y AS BOOLEAN) AS %s FROM foo"
% (("y" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_create_drop_index_w_always(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table, recreate="always"
) as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), "
"x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) "
"SELECT foo.id, foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
"CREATE UNIQUE INDEX ix_data ON foo (data)",
)
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table, recreate="always"
) as batch_op:
batch_op.drop_index("ix_data")
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) "
"SELECT foo.id, foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_create_drop_index_wo_always(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_("CREATE UNIQUE INDEX ix_data ON foo (data)")
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.drop_index("ix_data")
context.assert_("DROP INDEX ix_data")
def test_create_drop_index_w_other_ops(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column("data", type_=Integer)
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data INTEGER, x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) SELECT foo.id, "
"CAST(foo.data AS INTEGER) AS %s, foo.x FROM foo"
% (("data" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
"CREATE UNIQUE INDEX ix_data ON foo (data)",
)
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.drop_index("ix_data")
batch_op.alter_column("data", type_=String)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR, x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) SELECT foo.id, "
"foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
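# BatchRoundTripTest executes batch migrations against a live SQLite
# database (subclasses retarget MySQL/MariaDB and PostgreSQL): setUp seeds
# table "foo" with five rows, each test runs a batch_alter_table block, and
# _assert_data() re-selects the table to verify that both the schema change
# and the copied data survived.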
class BatchRoundTripTest(TestBase):
__only_on__ = "sqlite"
def setUp(self):
self.conn = config.db.connect()
self.metadata = MetaData()
t1 = Table(
"foo",
self.metadata,
Column("id", Integer, primary_key=True),
Column("data", String(50)),
Column("x", Integer),
mysql_engine="InnoDB",
)
with self.conn.begin():
t1.create(self.conn)
self.conn.execute(
t1.insert(),
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
],
)
context = MigrationContext.configure(self.conn)
self.op = Operations(context)
def tearDown(self):
# why commit? because SQLite has inconsistent treatment
# of transactional DDL. A test that runs CREATE TABLE and then
        # ALTER TABLE to change the name of that table will end up
        # committing the CREATE TABLE but not the ALTER. As batch mode
        # does this with a temp table name that's not even in the
        # metadata collection, we don't have an explicit drop for it
        # (though we could do that too). Calling commit means the
# ALTER will go through and the drop_all() will then catch it.
_safe_commit_connection_transaction(self.conn)
with self.conn.begin():
self.metadata.drop_all(self.conn)
self.conn.close()
@contextmanager
def _sqlite_referential_integrity(self):
self.conn.exec_driver_sql("PRAGMA foreign_keys=ON")
try:
yield
finally:
self.conn.exec_driver_sql("PRAGMA foreign_keys=OFF")
            # as these tests typically fail intentionally, clean out
            # any tables left over
m = MetaData()
m.reflect(self.conn)
with self.conn.begin():
m.drop_all(self.conn)
def _no_pk_fixture(self):
with self.conn.begin():
nopk = Table(
"nopk",
self.metadata,
Column("a", Integer),
Column("b", Integer),
Column("c", Integer),
mysql_engine="InnoDB",
)
nopk.create(self.conn)
self.conn.execute(
nopk.insert(),
[{"a": 1, "b": 2, "c": 3}, {"a": 2, "b": 4, "c": 5}],
)
return nopk
def _table_w_index_fixture(self):
with self.conn.begin():
t = Table(
"t_w_ix",
self.metadata,
Column("id", Integer, primary_key=True),
Column("thing", Integer),
Column("data", String(20)),
)
Index("ix_thing", t.c.thing)
t.create(self.conn)
return t
def _boolean_fixture(self):
with self.conn.begin():
t = Table(
"hasbool",
self.metadata,
Column("x", Boolean(create_constraint=True, name="ck1")),
Column("y", Integer),
)
t.create(self.conn)
def _timestamp_fixture(self):
with self.conn.begin():
t = Table("hasts", self.metadata, Column("x", DateTime()))
t.create(self.conn)
return t
def _ck_constraint_fixture(self):
with self.conn.begin():
t = Table(
"ck_table",
self.metadata,
Column("id", Integer, nullable=False),
CheckConstraint("id is not NULL", name="ck"),
)
t.create(self.conn)
return t
def _datetime_server_default_fixture(self):
return func.datetime("now", "localtime")
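    # dialect subclasses override this hook; the MySQL and PostgreSQL
    # variants return func.current_timestamp(), since
    # datetime('now', 'localtime') is SQLite-specific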
def _timestamp_w_expr_default_fixture(self):
with self.conn.begin():
t = Table(
"hasts",
self.metadata,
Column(
"x",
DateTime(),
server_default=self._datetime_server_default_fixture(),
nullable=False,
),
)
t.create(self.conn)
return t
def _int_to_boolean_fixture(self):
with self.conn.begin():
t = Table("hasbool", self.metadata, Column("x", Integer))
t.create(self.conn)
def test_add_constraint_type(self):
"""test for #1195."""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(Column("q", Boolean(create_constraint=True)))
insp = inspect(self.conn)
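        # the reflected affinity differs by backend (SQLite reports the
        # column as BOOLEAN, MySQL as INTEGER), so accept either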
assert {
c["type"]._type_affinity
for c in insp.get_columns("foo")
if c["name"] == "q"
}.intersection([Boolean, Integer])
def test_change_type_boolean_to_int(self):
self._boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.alter_column(
"x",
type_=Integer,
existing_type=Boolean(create_constraint=True, name="ck1"),
)
insp = inspect(self.conn)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Integer],
)
def test_no_net_change_timestamp(self):
t = self._timestamp_fixture()
import datetime
with self.conn.begin():
self.conn.execute(
t.insert(), {"x": datetime.datetime(2012, 5, 18, 15, 32, 5)}
)
with self.op.batch_alter_table("hasts") as batch_op:
batch_op.alter_column("x", type_=DateTime())
eq_(
self.conn.execute(_select(t.c.x)).fetchall(),
[(datetime.datetime(2012, 5, 18, 15, 32, 5),)],
)
def test_no_net_change_timestamp_w_default(self):
t = self._timestamp_w_expr_default_fixture()
with self.op.batch_alter_table("hasts") as batch_op:
batch_op.alter_column(
"x",
type_=DateTime(),
nullable=False,
server_default=self._datetime_server_default_fixture(),
)
with self.conn.begin():
self.conn.execute(t.insert())
res = self.conn.execute(_select(t.c.x))
if sqla_14:
assert res.scalar_one_or_none() is not None
else:
row = res.fetchone()
assert row["x"] is not None
def test_drop_col_schematype(self):
self._boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.drop_column(
"x", existing_type=Boolean(create_constraint=True, name="ck1")
)
insp = inspect(self.conn)
assert "x" not in (c["name"] for c in insp.get_columns("hasbool"))
def test_change_type_int_to_boolean(self):
self._int_to_boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.alter_column(
"x", type_=Boolean(create_constraint=True, name="ck1")
)
insp = inspect(self.conn)
if exclusions.against(config, "sqlite"):
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Boolean],
)
elif exclusions.against(config, "mysql"):
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Integer],
)
def _assert_data(self, data, tablename="foo"):
res = self.conn.execute(text("select * from %s" % tablename))
if sqla_14:
res = res.mappings()
eq_([dict(row) for row in res], data)
def test_ix_existing(self):
self._table_w_index_fixture()
with self.op.batch_alter_table("t_w_ix") as batch_op:
batch_op.alter_column("data", type_=String(30))
batch_op.create_index("ix_data", ["data"])
insp = inspect(self.conn)
eq_(
{
(ix["name"], tuple(ix["column_names"]))
for ix in insp.get_indexes("t_w_ix")
},
{("ix_data", ("data",)), ("ix_thing", ("thing",))},
)
def test_fk_points_to_me_auto(self):
self._test_fk_points_to_me("auto")
    # in particular, this tests that the failures
    # on PG and MySQL result in recovery of the batch system,
    # e.g. that the _alembic_tmp_foo temp table is dropped
@config.requirements.no_referential_integrity
def test_fk_points_to_me_recreate(self):
self._test_fk_points_to_me("always")
@exclusions.only_on("sqlite")
@exclusions.fails(
"intentionally asserting that this "
"doesn't work w/ pragma foreign keys"
)
def test_fk_points_to_me_sqlite_refinteg(self):
with self._sqlite_referential_integrity():
self._test_fk_points_to_me("auto")
def _test_fk_points_to_me(self, recreate):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("foo_id", Integer, ForeignKey("foo.id")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "foo_id": 3})
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.alter_column(
"data", new_column_name="newdata", existing_type=String(50)
)
insp = inspect(self.conn)
eq_(
[
(
key["referred_table"],
key["referred_columns"],
key["constrained_columns"],
)
for key in insp.get_foreign_keys("bar")
],
[("foo", ["id"], ["foo_id"])],
)
def test_selfref_fk_auto(self):
self._test_selfref_fk("auto")
@config.requirements.no_referential_integrity
def test_selfref_fk_recreate(self):
self._test_selfref_fk("always")
@exclusions.only_on("sqlite")
@exclusions.fails(
"intentionally asserting that this "
"doesn't work w/ pragma foreign keys"
)
def test_selfref_fk_sqlite_refinteg(self):
with self._sqlite_referential_integrity():
self._test_selfref_fk("auto")
def _test_selfref_fk(self, recreate):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("bar_id", Integer, ForeignKey("bar.id")),
Column("data", String(50)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(
bar.insert(), {"id": 1, "data": "x", "bar_id": None}
)
self.conn.execute(
bar.insert(), {"id": 2, "data": "y", "bar_id": 1}
)
with self.op.batch_alter_table("bar", recreate=recreate) as batch_op:
batch_op.alter_column(
"data", new_column_name="newdata", existing_type=String(50)
)
insp = inspect(self.conn)
eq_(
[
(
key["referred_table"],
key["referred_columns"],
key["constrained_columns"],
)
for key in insp.get_foreign_keys("bar")
],
[("bar", ["id"], ["bar_id"])],
)
def test_change_type(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("data", type_=Integer)
self._assert_data(
[
{"id": 1, "data": 0, "x": 5},
{"id": 2, "data": 22, "x": 6},
{"id": 3, "data": 8, "x": 7},
{"id": 4, "data": 9, "x": 8},
{"id": 5, "data": 0, "x": 9},
]
)
def test_drop_column(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("data")
self._assert_data(
[
{"id": 1, "x": 5},
{"id": 2, "x": 6},
{"id": 3, "x": 7},
{"id": 4, "x": 8},
{"id": 5, "x": 9},
]
)
def test_drop_pk_col_readd_col(self):
# drop a column, add it back without primary_key=True, should no
# longer be in the constraint
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer))
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], [])
def test_drop_pk_col_readd_pk_col(self):
# drop a column, add it back with primary_key=True, should remain
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer, primary_key=True))
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], ["id"])
def test_drop_pk_col_readd_col_also_pk_const(self):
# drop a column, add it back without primary_key=True, but then
        # also make a new PK constraint that includes it, should remain
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer))
batch_op.create_primary_key("newpk", ["id"])
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], ["id"])
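    # Taken together, the three tests above pin down the rule: dropping and
    # re-adding a column never restores primary-key membership implicitly; the
    # column rejoins the PK only via primary_key=True on the new Column or an
    # explicit create_primary_key() that names it.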
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_pk_constraint(self, recreate):
self._no_pk_fixture()
with self.op.batch_alter_table("nopk", recreate=recreate) as batch_op:
batch_op.create_primary_key("newpk", ["a", "b"])
pk_const = inspect(self.conn).get_pk_constraint("nopk")
with config.requirements.reflects_pk_names.fail_if():
eq_(pk_const["name"], "newpk")
eq_(pk_const["constrained_columns"], ["a", "b"])
@testing.combinations(("always",), ("auto",), argnames="recreate")
@config.requirements.check_constraint_reflection
def test_add_ck_constraint(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.create_check_constraint("newck", text("x > 0"))
ck_consts = inspect(self.conn).get_check_constraints("foo")
ck_consts[0]["sqltext"] = re.sub(
r"[\'\"`\(\)]", "", ck_consts[0]["sqltext"]
)
for ck in ck_consts:
ck.pop("comment", None)
eq_(ck_consts, [{"sqltext": "x > 0", "name": "newck"}])
@testing.combinations(("always",), ("auto",), argnames="recreate")
@config.requirements.check_constraint_reflection
def test_drop_ck_constraint(self, recreate):
self._ck_constraint_fixture()
with self.op.batch_alter_table(
"ck_table", recreate=recreate
) as batch_op:
batch_op.drop_constraint("ck", type_="check")
ck_consts = inspect(self.conn).get_check_constraints("ck_table")
eq_(ck_consts, [])
@config.requirements.check_constraint_reflection
def test_drop_ck_constraint_legacy_type(self):
self._ck_constraint_fixture()
with self.op.batch_alter_table(
"ck_table", recreate="always"
) as batch_op:
# matches the docs that were written for this originally
batch_op.drop_constraint("ck", "check")
ck_consts = inspect(self.conn).get_check_constraints("ck_table")
eq_(ck_consts, [])
@config.requirements.unnamed_constraints
def test_drop_foreign_key(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("foo_id", Integer, ForeignKey("foo.id")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "foo_id": 3})
naming_convention = {
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s"
}
with self.op.batch_alter_table(
"bar", naming_convention=naming_convention
) as batch_op:
batch_op.drop_constraint("fk_bar_foo_id_foo", type_="foreignkey")
eq_(inspect(self.conn).get_foreign_keys("bar"), [])
def test_drop_column_fk_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.drop_column("data")
self._assert_data(
[
{"id": 1, "x": 5},
{"id": 2, "x": 6},
{"id": 3, "x": 7},
{"id": 4, "x": 8},
{"id": 5, "x": 9},
]
)
def _assert_table_comment(self, tname, comment):
insp = inspect(self.conn)
tcomment = insp.get_table_comment(tname)
eq_(tcomment, {"text": comment})
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_uq(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.create_unique_constraint("newuk", ["x"])
uq_consts = inspect(self.conn).get_unique_constraints("foo")
eq_(
[
{"name": uc["name"], "column_names": uc["column_names"]}
for uc in uq_consts
],
[{"name": "newuk", "column_names": ["x"]}],
)
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_uq_plus_col(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.add_column(Column("y", Integer))
batch_op.create_unique_constraint("newuk", ["x", "y"])
uq_consts = inspect(self.conn).get_unique_constraints("foo")
eq_(
[
{"name": uc["name"], "column_names": uc["column_names"]}
for uc in uq_consts
],
[{"name": "newuk", "column_names": ["x", "y"]}],
)
@config.requirements.comments
def test_add_table_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment("some comment")
self._assert_table_comment("foo", "some comment")
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment(
"some new comment", existing_comment="some comment"
)
self._assert_table_comment("foo", "some new comment")
@config.requirements.comments
def test_drop_table_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment("some comment")
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_table_comment(existing_comment="some comment")
self._assert_table_comment("foo", None)
def _assert_column_comment(self, tname, cname, comment):
insp = inspect(self.conn)
cols = {col["name"]: col for col in insp.get_columns(tname)}
eq_(cols[cname]["comment"], comment)
@config.requirements.comments
def test_add_column_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(Column("y", Integer, comment="some comment"))
self._assert_column_comment("foo", "y", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "y": None},
{"id": 2, "data": "22", "x": 6, "y": None},
{"id": 3, "data": "8.5", "x": 7, "y": None},
{"id": 4, "data": "9.46", "x": 8, "y": None},
{"id": 5, "data": "d5", "x": 9, "y": None},
]
)
@config.requirements.comments
def test_add_column_comment_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(Column("y", Integer, comment="some comment"))
self._assert_column_comment("foo", "y", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "y": None},
{"id": 2, "data": "22", "x": 6, "y": None},
{"id": 3, "data": "8.5", "x": 7, "y": None},
{"id": 4, "data": "9.46", "x": 8, "y": None},
{"id": 5, "data": "d5", "x": 9, "y": None},
]
)
@config.requirements.comments
def test_alter_column_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column(
"x", existing_type=Integer(), comment="some comment"
)
self._assert_column_comment("foo", "x", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
@config.requirements.comments
def test_alter_column_comment_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.alter_column("x", comment="some comment")
self._assert_column_comment("foo", "x", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
def test_rename_column(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("x", new_column_name="y")
self._assert_data(
[
{"id": 1, "data": "d1", "y": 5},
{"id": 2, "data": "22", "y": 6},
{"id": 3, "data": "8.5", "y": 7},
{"id": 4, "data": "9.46", "y": 8},
{"id": 5, "data": "d5", "y": 9},
]
)
def test_rename_column_boolean(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
with self.op.batch_alter_table("bar") as batch_op:
batch_op.alter_column(
"flag", new_column_name="bflag", existing_type=Boolean
)
self._assert_data(
[{"id": 1, "bflag": True}, {"id": 2, "bflag": False}], "bar"
)
# @config.requirements.check_constraint_reflection
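    # NOTE: with the requirement decorator above left commented out, this test
    # also runs on backends that cannot reflect CHECK constraints. There the
    # named ck1 constraint is not carried through the rebuild, so the test can
    # pass without exercising the rename-with-named-CK path it is named for
    # (see the review comment attached to this change).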
def test_rename_column_boolean_named_ck(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True, name="ck1")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
with self.op.batch_alter_table("bar", recreate="always") as batch_op:
batch_op.alter_column(
"flag",
new_column_name="bflag",
existing_type=Boolean(create_constraint=True, name="ck1"),
)
self._assert_data(
[{"id": 1, "bflag": True}, {"id": 2, "bflag": False}], "bar"
)
@config.requirements.non_native_boolean
def test_rename_column_non_native_boolean_no_ck(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=False)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
self.conn.execute(
# override Boolean type which as of 1.1 coerces numerics
# to 1/0
text("insert into bar (id, flag) values (:id, :flag)"),
{"id": 3, "flag": 5},
)
with self.op.batch_alter_table(
"bar",
reflect_args=[Column("flag", Boolean(create_constraint=False))],
) as batch_op:
batch_op.alter_column(
"flag", new_column_name="bflag", existing_type=Boolean
)
self._assert_data(
[
{"id": 1, "bflag": True},
{"id": 2, "bflag": False},
{"id": 3, "bflag": 5},
],
"bar",
)
def test_drop_column_pk(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
self._assert_data(
[
{"data": "d1", "x": 5},
{"data": "22", "x": 6},
{"data": "8.5", "x": 7},
{"data": "9.46", "x": 8},
{"data": "d5", "x": 9},
]
)
def test_rename_column_pk(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("id", new_column_name="ident")
self._assert_data(
[
{"ident": 1, "data": "d1", "x": 5},
{"ident": 2, "data": "22", "x": 6},
{"ident": 3, "data": "8.5", "x": 7},
{"ident": 4, "data": "9.46", "x": 8},
{"ident": 5, "data": "d5", "x": 9},
]
)
def test_add_column_auto(self):
# note this uses ALTER
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi")
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(config.db).get_columns("foo")],
["id", "data", "x", "data2"],
)
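    # recreate defaults to "auto": a plain add_column can be done with a real
    # ALTER TABLE even on SQLite, so no temp-table rebuild happens here; the
    # *_recreate variants below force the rebuild path with recreate="always".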
def test_add_column_auto_server_default_calculated(self):
"""test #883"""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column(
"data2",
DateTime(),
server_default=self._datetime_server_default_fixture(),
)
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": mock.ANY},
{"id": 2, "data": "22", "x": 6, "data2": mock.ANY},
{"id": 3, "data": "8.5", "x": 7, "data2": mock.ANY},
{"id": 4, "data": "9.46", "x": 8, "data2": mock.ANY},
{"id": 5, "data": "d5", "x": 9, "data2": mock.ANY},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
@testing.combinations((True,), (False,))
@testing.exclusions.only_on("sqlite")
@config.requirements.computed_columns
def test_add_column_auto_generated(self, persisted):
"""test #883"""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column(
"data2", Integer, Computed("1 + 1", persisted=persisted)
)
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": 2},
{"id": 2, "data": "22", "x": 6, "data2": 2},
{"id": 3, "data": "8.5", "x": 7, "data2": 2},
{"id": 4, "data": "9.46", "x": 8, "data2": 2},
{"id": 5, "data": "d5", "x": 9, "data2": 2},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
@config.requirements.identity_columns
def test_add_column_auto_identity(self):
"""test #883"""
self._no_pk_fixture()
with self.op.batch_alter_table("nopk") as batch_op:
batch_op.add_column(Column("id", Integer, Identity()))
self._assert_data(
[
{"a": 1, "b": 2, "c": 3, "id": 1},
{"a": 2, "b": 4, "c": 5, "id": 2},
],
tablename="nopk",
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x"],
)
def test_add_column_insert_before_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_before="data",
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data2", "data", "x"],
)
def test_add_column_insert_after_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_after="data",
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "data2", "x"],
)
def test_add_column_insert_before_raise_on_alter(self):
def go():
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_before="data",
)
assert_raises_message(
alembic_exc.CommandError,
"Can't specify insert_before or insert_after when using ALTER",
go,
)
def test_add_column_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi")
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
def test_create_drop_index(self):
insp = inspect(self.conn)
eq_(insp.get_indexes("foo"), [])
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
insp = inspect(self.conn)
eq_(
[
dict(
unique=ix["unique"],
name=ix["name"],
column_names=ix["column_names"],
)
for ix in insp.get_indexes("foo")
],
[{"unique": True, "name": "ix_data", "column_names": ["data"]}],
)
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.drop_index("ix_data")
insp = inspect(self.conn)
eq_(insp.get_indexes("foo"), [])
class BatchRoundTripMySQLTest(BatchRoundTripTest):
__only_on__ = "mysql", "mariadb"
__backend__ = True
def _datetime_server_default_fixture(self):
return func.current_timestamp()
@exclusions.fails()
def test_drop_pk_col_readd_pk_col(self):
super().test_drop_pk_col_readd_pk_col()
@exclusions.fails()
def test_drop_pk_col_readd_col_also_pk_const(self):
super().test_drop_pk_col_readd_col_also_pk_const()
@exclusions.fails()
def test_rename_column_pk(self):
super().test_rename_column_pk()
@exclusions.fails()
def test_rename_column(self):
super().test_rename_column()
@exclusions.fails()
def test_change_type(self):
super().test_change_type()
def test_create_drop_index(self):
super().test_create_drop_index()
# fails on mariadb 10.2, succeeds on 10.3
@exclusions.fails_if(config.requirements.mysql_check_col_name_change)
def test_rename_column_boolean(self):
super().test_rename_column_boolean()
def test_change_type_boolean_to_int(self):
super().test_change_type_boolean_to_int()
def test_change_type_int_to_boolean(self):
super().test_change_type_int_to_boolean()
class BatchRoundTripPostgresqlTest(BatchRoundTripTest):
__only_on__ = "postgresql"
__backend__ = True
def _native_boolean_fixture(self):
t = Table(
"has_native_bool",
self.metadata,
Column(
"x",
Boolean(create_constraint=True),
server_default="false",
nullable=False,
),
Column("y", Integer),
)
with self.conn.begin():
t.create(self.conn)
def _datetime_server_default_fixture(self):
return func.current_timestamp()
@exclusions.fails()
def test_drop_pk_col_readd_pk_col(self):
super().test_drop_pk_col_readd_pk_col()
@exclusions.fails()
def test_drop_pk_col_readd_col_also_pk_const(self):
super().test_drop_pk_col_readd_col_also_pk_const()
@exclusions.fails()
def test_change_type(self):
super().test_change_type()
def test_create_drop_index(self):
super().test_create_drop_index()
@exclusions.fails()
def test_change_type_int_to_boolean(self):
super().test_change_type_int_to_boolean()
@exclusions.fails()
def test_change_type_boolean_to_int(self):
super().test_change_type_boolean_to_int()
def test_add_col_table_has_native_boolean(self):
self._native_boolean_fixture()
# to ensure test coverage on SQLAlchemy 1.4 and above,
# force the create_constraint flag to True even though it
# defaults to false in 1.4. this test wants to ensure that the
# "should create" rule is consulted
def listen_for_reflect(inspector, table, column_info):
if isinstance(column_info["type"], Boolean):
column_info["type"].create_constraint = True
with self.op.batch_alter_table(
"has_native_bool",
recreate="always",
reflect_kwargs={
"listeners": [("column_reflect", listen_for_reflect)]
},
) as batch_op:
batch_op.add_column(Column("data", Integer))
insp = inspect(self.conn)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("has_native_bool")
if c["name"] == "data"
],
[Integer],
)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("has_native_bool")
if c["name"] == "x"
],
[Boolean],
)
class OfflineTest(TestBase):
@testing.fixture
def no_reflect_batch_fixture(self):
staging_env()
def go():
self.cfg = cfg = _no_sql_testing_config(dialect="sqlite")
self.a = a = util.rev_id()
script = ScriptDirectory.from_config(cfg)
script.generate_revision(
a, "revision a", refresh=True, head="base"
)
write_script(
script,
a,
"""\
"Rev A"
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import String, Table, MetaData
some_table_up = Table(
"some_table", MetaData(),
Column('id', Integer),
Column('bar', String)
)
some_table_down = Table(
"some_table", MetaData(),
Column('id', Integer),
Column('foo', Integer)
)
def upgrade():
with op.batch_alter_table("some_table", copy_from=some_table_up) as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
def downgrade():
with op.batch_alter_table("some_table", copy_from=some_table_down) as batch_op:
batch_op.drop_column('foo')
batch_op.add_column(Column('bar', String))
""" # noqa: E501
% a,
)
yield go
clear_staging_env()
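    # Passing copy_from=Table(...) in the migration above lets batch mode
    # build its recreate DDL from the supplied Table instead of reflecting the
    # database, which is why the "no_reflection" tests below succeed in --sql
    # mode while the reflection-based fixture raises CommandError.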
@testing.fixture
def batch_fixture(self):
staging_env()
def go(dialect):
self.cfg = cfg = _no_sql_testing_config(dialect=dialect)
self.a = a = util.rev_id()
script = ScriptDirectory.from_config(cfg)
script.generate_revision(
a, "revision a", refresh=True, head="base"
)
write_script(
script,
a,
"""\
"Rev A"
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import String
def upgrade():
with op.batch_alter_table("some_table") as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
def downgrade():
with op.batch_alter_table("some_table") as batch_op:
batch_op.drop_column('foo')
batch_op.add_column(Column('bar', String))
"""
% a,
)
yield go
clear_staging_env()
def test_upgrade_non_batch(self, batch_fixture):
batch_fixture("postgresql")
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.a, sql=True)
assert re.search(
r"ALTER TABLE some_table ADD COLUMN foo INTEGER", buf.getvalue()
)
def test_downgrade_non_batch(self, batch_fixture):
batch_fixture("postgresql")
with capture_context_buffer() as buf:
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
assert re.search(
r"ALTER TABLE some_table DROP COLUMN foo", buf.getvalue()
)
def test_upgrade_batch_fails_gracefully(self, batch_fixture):
batch_fixture("sqlite")
with expect_raises_message(
CommandError,
"This operation cannot proceed in --sql mode; batch mode with "
"dialect sqlite requires a live database connection with which "
'to reflect the table "some_table"',
):
command.upgrade(self.cfg, self.a, sql=True)
def test_downgrade_batch_fails_gracefully(self, batch_fixture):
batch_fixture("sqlite")
with expect_raises_message(
CommandError,
"This operation cannot proceed in --sql mode; batch mode with "
"dialect sqlite requires a live database connection with which "
'to reflect the table "some_table"',
):
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
def test_upgrade_batch_no_reflection(self, no_reflect_batch_fixture):
no_reflect_batch_fixture()
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.a, sql=True)
assert re.search(
r"CREATE TABLE _alembic_tmp_some_table", buf.getvalue()
)
def test_downgrade_batch_no_reflection(self, no_reflect_batch_fixture):
no_reflect_batch_fixture()
with capture_context_buffer() as buf:
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
assert re.search(
r"CREATE TABLE _alembic_tmp_some_table", buf.getvalue()
)
| jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | afaict this means that the test didn't test what it thought it was testing | jsoref | 6 |
sqlalchemy/alembic | 1310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
<!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
The misspellings have been reported at https://github.com/jsoref/alembic/actions/runs/6141700632
The action reports that the changes in this PR would make it happy: https://github.com/jsoref/alembic/actions/runs/6141700754
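
For flavor, here is a minimal Python sketch of the kind of scan such a tool performs. It is illustrative only, not the check-spelling action itself (the real action is configured through YAML and external dictionary files, none of which are shown); `KNOWN_MISSPELLINGS` is a made-up stand-in seeded with one typo this PR actually fixes (`CONSTRANT` for `CONSTRAINT`):

```python
import pathlib
import re

# Hypothetical stand-in for the action's dictionary files.
KNOWN_MISSPELLINGS = {"constrant": "constraint"}

def scan(root: str) -> list[tuple[str, int, str, str]]:
    """Return (path, line number, typo, suggestion) tuples for each hit."""
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            for word in re.findall(r"[A-Za-z]+", line.lower()):
                if word in KNOWN_MISSPELLINGS:
                    hits.append((str(path), lineno, word, KNOWN_MISSPELLINGS[word]))
    return hits
```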
### Checklist
<!-- go over the following points. Check them with an `x` if they apply (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
    must include a complete example of the issue. one-line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
  - please include tests. one-line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | tests/test_batch.py | from contextlib import contextmanager
import re
from sqlalchemy import Boolean
from sqlalchemy import CheckConstraint
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import Enum
from sqlalchemy import ForeignKey
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy import func
from sqlalchemy import Index
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy import JSON
from sqlalchemy import MetaData
from sqlalchemy import PrimaryKeyConstraint
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import Text
from sqlalchemy import UniqueConstraint
from sqlalchemy.dialects import sqlite as sqlite_dialect
from sqlalchemy.schema import CreateIndex
from sqlalchemy.schema import CreateTable
from sqlalchemy.sql import column
from sqlalchemy.sql import text
from alembic import command
from alembic import testing
from alembic import util
from alembic.ddl import sqlite
from alembic.operations import Operations
from alembic.operations.batch import ApplyBatchImpl
from alembic.runtime.migration import MigrationContext
from alembic.script import ScriptDirectory
from alembic.testing import assert_raises_message
from alembic.testing import config
from alembic.testing import eq_
from alembic.testing import exclusions
from alembic.testing import expect_raises_message
from alembic.testing import is_
from alembic.testing import mock
from alembic.testing import TestBase
from alembic.testing.env import _no_sql_testing_config
from alembic.testing.env import clear_staging_env
from alembic.testing.env import staging_env
from alembic.testing.env import write_script
from alembic.testing.fixtures import capture_context_buffer
from alembic.testing.fixtures import op_fixture
from alembic.util import CommandError
from alembic.util import exc as alembic_exc
from alembic.util.sqla_compat import _NONE_NAME
from alembic.util.sqla_compat import _safe_commit_connection_transaction
from alembic.util.sqla_compat import _select
from alembic.util.sqla_compat import has_computed
from alembic.util.sqla_compat import has_identity
from alembic.util.sqla_compat import sqla_14
if has_computed:
from alembic.util.sqla_compat import Computed
if has_identity:
from alembic.util.sqla_compat import Identity
class BatchApplyTest(TestBase):
def setUp(self):
self.op = Operations(mock.Mock(opts={}))
self.impl = sqlite.SQLiteImpl(
sqlite_dialect.dialect(), None, False, False, None, {}
)
def _simple_fixture(self, table_args=(), table_kwargs={}, **kw):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String(10)),
Column("y", Integer),
)
return ApplyBatchImpl(
self.impl, t, table_args, table_kwargs, False, **kw
)
def _uq_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
UniqueConstraint("y", name="uq1"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_ck_table_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
CheckConstraint("y > 5", name="ck1"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_ck_col_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer, CheckConstraint("y > 5", name="ck1")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _ix_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
Index("ix1", "y"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _pk_fixture(self):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer),
Column("x", String()),
Column("y", Integer),
PrimaryKeyConstraint("id", name="mypk"),
)
return ApplyBatchImpl(self.impl, t, (), {}, False)
def _literal_ck_fixture(
self, copy_from=None, table_args=(), table_kwargs={}
):
m = MetaData()
if copy_from is not None:
t = copy_from
else:
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
CheckConstraint("email LIKE '%@%'"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _sql_ck_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
)
t.append_constraint(CheckConstraint(t.c.email.like("%@%")))
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id", Integer, ForeignKey("user.id")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _multi_fk_fixture(self, table_args=(), table_kwargs={}, schema=None):
m = MetaData()
if schema:
schemaarg = "%s." % schema
else:
schemaarg = ""
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id_1", Integer, ForeignKey("%suser.id" % schemaarg)),
Column("user_id_2", Integer, ForeignKey("%suser.id" % schemaarg)),
Column("user_id_3", Integer),
Column("user_id_version", Integer),
ForeignKeyConstraint(
["user_id_3", "user_id_version"],
["%suser.id" % schemaarg, "%suser.id_version" % schemaarg],
),
schema=schema,
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id", Integer, ForeignKey("user.id", name="ufk")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _selfref_fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("parent_id", Integer, ForeignKey("tname.id")),
Column("data", String),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _boolean_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _boolean_no_ck_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=False)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _enum_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("thing", Enum("a", "b", "c", create_constraint=True)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _server_default_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("thing", String(), server_default=""),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _assert_impl(
self,
impl,
colnames=None,
ddl_contains=None,
ddl_not_contains=None,
dialect="default",
schema=None,
):
context = op_fixture(dialect=dialect)
impl._create(context.impl)
if colnames is None:
colnames = ["id", "x", "y"]
eq_(impl.new_table.c.keys(), colnames)
pk_cols = [col for col in impl.new_table.c if col.primary_key]
eq_(list(impl.new_table.primary_key), pk_cols)
create_stmt = str(
CreateTable(impl.new_table).compile(dialect=context.dialect)
)
create_stmt = re.sub(r"[\n\t]", "", create_stmt)
idx_stmt = ""
# create indexes; these should be created in terms of the
# final table name
impl.new_table.name = impl.table.name
for idx in impl._gather_indexes_from_both_tables():
idx_stmt += str(CreateIndex(idx).compile(dialect=context.dialect))
idx_stmt = re.sub(r"[\n\t]", "", idx_stmt)
# revert new table name to the temp name, assertions below
# are looking for the temp name
impl.new_table.name = ApplyBatchImpl._calc_temp_name(impl.table.name)
if ddl_contains:
assert ddl_contains in create_stmt + idx_stmt
if ddl_not_contains:
assert ddl_not_contains not in create_stmt + idx_stmt
expected = [create_stmt]
if schema:
args = {"schema": "%s." % schema}
else:
args = {"schema": ""}
args["temp_name"] = impl.new_table.name
args["colnames"] = ", ".join(
[
impl.new_table.c[name].name
for name in colnames
if name in impl.table.c
]
)
args["tname_colnames"] = ", ".join(
"CAST(%(schema)stname.%(name)s AS %(type)s) AS %(cast_label)s"
% {
"schema": args["schema"],
"name": name,
"type": impl.new_table.c[name].type,
"cast_label": name if sqla_14 else "anon_1",
}
if (
impl.new_table.c[name].type._type_affinity
is not impl.table.c[name].type._type_affinity
)
else "%(schema)stname.%(name)s"
% {"schema": args["schema"], "name": name}
for name in colnames
if name in impl.table.c
)
expected.extend(
[
"INSERT INTO %(schema)s%(temp_name)s (%(colnames)s) "
"SELECT %(tname_colnames)s FROM %(schema)stname" % args,
"DROP TABLE %(schema)stname" % args,
"ALTER TABLE %(schema)s%(temp_name)s "
"RENAME TO %(schema)stname" % args,
]
)
if idx_stmt:
expected.append(idx_stmt)
context.assert_(*expected)
return impl.new_table
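    # Every recreate assertion built above follows the same four-step batch
    # workflow: CREATE the _alembic_tmp_* table with the new definition,
    # INSERT ... SELECT the rows across (CASTing any column whose type
    # affinity changed), DROP the original table, then ALTER ... RENAME the
    # temp table into place.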
def test_change_type(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", type_=String)
new_table = self._assert_impl(impl)
assert new_table.c.x.type._type_affinity is String
def test_rename_col(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", name="q")
new_table = self._assert_impl(impl)
eq_(new_table.c.x.name, "q")
def test_rename_col_w_index(self):
impl = self._ix_fixture()
impl.alter_column("tname", "y", name="y2")
new_table = self._assert_impl(
impl, ddl_contains="CREATE INDEX ix1 ON tname (y2)"
)
eq_(new_table.c.y.name, "y2")
def test_rename_col_w_uq(self):
impl = self._uq_fixture()
impl.alter_column("tname", "y", name="y2")
new_table = self._assert_impl(impl, ddl_contains="UNIQUE (y2)")
eq_(new_table.c.y.name, "y2")
def test_alter_column_comment(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", comment="some comment")
new_table = self._assert_impl(impl)
eq_(new_table.c.x.comment, "some comment")
def test_add_column_comment(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("q", Integer, comment="some comment"))
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "q"])
eq_(new_table.c.q.comment, "some comment")
def test_rename_col_boolean(self):
impl = self._boolean_fixture()
impl.alter_column("tname", "flag", name="bflag")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (bflag IN (0, 1)",
colnames=["id", "flag"],
)
eq_(new_table.c.flag.name, "bflag")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
1,
)
def test_change_type_schematype_to_non(self):
impl = self._boolean_fixture()
impl.alter_column("tname", "flag", type_=Integer)
new_table = self._assert_impl(
impl, colnames=["id", "flag"], ddl_not_contains="CHECK"
)
assert new_table.c.flag.type._type_affinity is Integer
# NOTE: we can't do test_change_type_non_to_schematype
# at this level because the "add_constraint" part of this
# comes from toimpl.py, which we aren't testing here
def test_rename_col_boolean_no_ck(self):
impl = self._boolean_no_ck_fixture()
impl.alter_column("tname", "flag", name="bflag")
new_table = self._assert_impl(
impl, ddl_not_contains="CHECK", colnames=["id", "flag"]
)
eq_(new_table.c.flag.name, "bflag")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
0,
)
def test_rename_col_enum(self):
impl = self._enum_fixture()
impl.alter_column("tname", "thing", name="thang")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (thang IN ('a', 'b', 'c')",
colnames=["id", "thing"],
)
eq_(new_table.c.thing.name, "thang")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
1,
)
def test_rename_col_literal_ck(self):
impl = self._literal_ck_fixture()
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
# note this is wrong, we don't dig into the SQL
impl,
ddl_contains="CHECK (email LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_rename_col_literal_ck_workaround(self):
impl = self._literal_ck_fixture(
copy_from=Table(
"tname",
MetaData(),
Column("id", Integer, primary_key=True),
Column("email", String),
),
table_args=[CheckConstraint("emol LIKE '%@%'")],
)
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (emol LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_rename_col_sql_ck(self):
impl = self._sql_ck_fixture()
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (emol LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_add_col(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col)
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "g"])
eq_(new_table.c.g.name, "g")
def test_partial_reordering(self):
impl = self._simple_fixture(partial_reordering=[("x", "id", "y")])
new_table = self._assert_impl(impl, colnames=["x", "id", "y"])
eq_(new_table.c.x.name, "x")
def test_add_col_partial_reordering(self):
impl = self._simple_fixture(partial_reordering=[("id", "x", "g", "y")])
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col)
new_table = self._assert_impl(impl, colnames=["id", "x", "g", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col, insert_before="x")
new_table = self._assert_impl(impl, colnames=["id", "g", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_beginning(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_before="id")
new_table = self._assert_impl(impl, colnames=["g", "id", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_middle(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_before="y")
new_table = self._assert_impl(impl, colnames=["id", "x", "g", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_middle(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
new_table = self._assert_impl(impl, colnames=["id", "g", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_penultimate(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="x")
self._assert_impl(impl, colnames=["id", "x", "g", "y"])
def test_add_col_insert_after_end(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="y")
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "g"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_plus_no_order(self):
impl = self._simple_fixture()
# operations.add_column produces a table
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer))
new_table = self._assert_impl(
impl, colnames=["id", "g", "x", "y", "q"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_no_order_plus_insert_after(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", Column("q", Integer))
impl.add_column("tname", Column("g", Integer), insert_after="id")
new_table = self._assert_impl(
impl, colnames=["id", "g", "x", "y", "q"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_another_insert(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer), insert_after="g")
new_table = self._assert_impl(
impl, colnames=["id", "g", "q", "x", "y"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_another_insert(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer), insert_before="g")
new_table = self._assert_impl(
impl, colnames=["id", "q", "g", "x", "y"]
)
eq_(new_table.c.g.name, "g")
def test_add_server_default(self):
impl = self._simple_fixture()
impl.alter_column("tname", "y", server_default="10")
new_table = self._assert_impl(impl, ddl_contains="DEFAULT '10'")
eq_(new_table.c.y.server_default.arg, "10")
def test_drop_server_default(self):
impl = self._server_default_fixture()
impl.alter_column("tname", "thing", server_default=None)
new_table = self._assert_impl(
impl, colnames=["id", "thing"], ddl_not_contains="DEFAULT"
)
eq_(new_table.c.thing.server_default, None)
def test_rename_col_pk(self):
impl = self._simple_fixture()
impl.alter_column("tname", "id", name="foobar")
new_table = self._assert_impl(
impl, ddl_contains="PRIMARY KEY (foobar)"
)
eq_(new_table.c.id.name, "foobar")
eq_(list(new_table.primary_key), [new_table.c.id])
def test_rename_col_fk(self):
impl = self._fk_fixture()
impl.alter_column("tname", "user_id", name="foobar")
new_table = self._assert_impl(
impl,
colnames=["id", "email", "user_id"],
ddl_contains='FOREIGN KEY(foobar) REFERENCES "user" (id)',
)
eq_(new_table.c.user_id.name, "foobar")
eq_(
list(new_table.c.user_id.foreign_keys)[0]._get_colspec(), "user.id"
)
def test_regen_multi_fk(self):
impl = self._multi_fk_fixture()
self._assert_impl(
impl,
colnames=[
"id",
"email",
"user_id_1",
"user_id_2",
"user_id_3",
"user_id_version",
],
ddl_contains="FOREIGN KEY(user_id_3, user_id_version) "
'REFERENCES "user" (id, id_version)',
)
def test_regen_multi_fk_schema(self):
impl = self._multi_fk_fixture(schema="foo_schema")
self._assert_impl(
impl,
colnames=[
"id",
"email",
"user_id_1",
"user_id_2",
"user_id_3",
"user_id_version",
],
ddl_contains="FOREIGN KEY(user_id_3, user_id_version) "
'REFERENCES foo_schema."user" (id, id_version)',
schema="foo_schema",
)
def test_do_not_add_existing_columns_columns(self):
impl = self._multi_fk_fixture()
meta = impl.table.metadata
cid = Column("id", Integer())
user = Table("user", meta, cid)
fk = [
c
for c in impl.unnamed_constraints
if isinstance(c, ForeignKeyConstraint)
]
impl._setup_referent(meta, fk[0])
is_(user.c.id, cid)
def test_drop_col(self):
impl = self._simple_fixture()
impl.drop_column("tname", column("x"))
new_table = self._assert_impl(impl, colnames=["id", "y"])
assert "y" in new_table.c
assert "x" not in new_table.c
def test_drop_col_remove_pk(self):
impl = self._simple_fixture()
impl.drop_column("tname", column("id"))
new_table = self._assert_impl(
impl, colnames=["x", "y"], ddl_not_contains="PRIMARY KEY"
)
assert "y" in new_table.c
assert "id" not in new_table.c
assert not new_table.primary_key
def test_drop_col_remove_fk(self):
impl = self._fk_fixture()
impl.drop_column("tname", column("user_id"))
new_table = self._assert_impl(
impl, colnames=["id", "email"], ddl_not_contains="FOREIGN KEY"
)
assert "user_id" not in new_table.c
assert not new_table.foreign_keys
def test_drop_col_retain_fk(self):
impl = self._fk_fixture()
impl.drop_column("tname", column("email"))
new_table = self._assert_impl(
impl,
colnames=["id", "user_id"],
ddl_contains='FOREIGN KEY(user_id) REFERENCES "user" (id)',
)
assert "email" not in new_table.c
assert new_table.c.user_id.foreign_keys
def test_drop_col_retain_fk_selfref(self):
impl = self._selfref_fk_fixture()
impl.drop_column("tname", column("data"))
new_table = self._assert_impl(impl, colnames=["id", "parent_id"])
assert "data" not in new_table.c
assert new_table.c.parent_id.foreign_keys
def test_add_fk(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("user_id", Integer))
fk = self.op.schema_obj.foreign_key_constraint(
"fk1", "tname", "user", ["user_id"], ["id"]
)
impl.add_constraint(fk)
new_table = self._assert_impl(
impl,
colnames=["id", "x", "y", "user_id"],
ddl_contains="CONSTRAINT fk1 FOREIGN KEY(user_id) "
'REFERENCES "user" (id)',
)
eq_(
list(new_table.c.user_id.foreign_keys)[0]._get_colspec(), "user.id"
)
def test_drop_fk(self):
impl = self._named_fk_fixture()
fk = ForeignKeyConstraint([], [], name="ufk")
impl.drop_constraint(fk)
new_table = self._assert_impl(
impl,
colnames=["id", "email", "user_id"],
ddl_not_contains="CONSTRANT fk1",
)
eq_(list(new_table.foreign_keys), [])
def test_add_uq(self):
impl = self._simple_fixture()
uq = self.op.schema_obj.unique_constraint("uq1", "tname", ["y"])
impl.add_constraint(uq)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CONSTRAINT uq1 UNIQUE",
)
def test_drop_uq(self):
impl = self._uq_fixture()
uq = self.op.schema_obj.unique_constraint("uq1", "tname", ["y"])
impl.drop_constraint(uq)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT uq1 UNIQUE",
)
def test_add_ck_unnamed(self):
"""test for #1195"""
impl = self._simple_fixture()
ck = self.op.schema_obj.check_constraint(_NONE_NAME, "tname", "y > 5")
impl.add_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CHECK (y > 5)",
)
def test_add_ck(self):
impl = self._simple_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.add_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_drop_ck_table(self):
impl = self._named_ck_table_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.drop_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_drop_ck_col(self):
impl = self._named_ck_col_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.drop_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_create_index(self):
impl = self._simple_fixture()
ix = self.op.schema_obj.index("ix1", "tname", ["y"])
impl.create_index(ix)
self._assert_impl(
impl, colnames=["id", "x", "y"], ddl_contains="CREATE INDEX ix1"
)
def test_drop_index(self):
impl = self._ix_fixture()
ix = self.op.schema_obj.index("ix1", "tname", ["y"])
impl.drop_index(ix)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT uq1 UNIQUE",
)
def test_add_table_opts(self):
impl = self._simple_fixture(table_kwargs={"mysql_engine": "InnoDB"})
self._assert_impl(impl, ddl_contains="ENGINE=InnoDB", dialect="mysql")
def test_drop_pk(self):
impl = self._pk_fixture()
pk = self.op.schema_obj.primary_key_constraint("mypk", "tname", ["id"])
impl.drop_constraint(pk)
new_table = self._assert_impl(impl)
assert not new_table.c.id.primary_key
assert not len(new_table.primary_key)
class BatchAPITest(TestBase):
@contextmanager
def _fixture(self, schema=None):
migration_context = mock.Mock(
opts={},
impl=mock.MagicMock(__dialect__="sqlite", connection=object()),
)
op = Operations(migration_context)
batch = op.batch_alter_table(
"tname", recreate="never", schema=schema
).__enter__()
mock_schema = mock.MagicMock()
with mock.patch("alembic.operations.schemaobj.sa_schema", mock_schema):
yield batch
batch.impl.flush()
self.mock_schema = mock_schema
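    # With recreate="never" and the schema module mocked out, the assertions
    # below exercise pure API plumbing: they verify which SQLAlchemy
    # constructs get built and handed to the impl, without emitting any SQL.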
def test_drop_col(self):
with self._fixture() as batch:
batch.drop_column("q")
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.drop_column(
"tname", self.mock_schema.Column(), schema=None
)
],
)
def test_add_col(self):
column = Column("w", String(50))
with self._fixture() as batch:
batch.add_column(column)
assert (
mock.call.add_column("tname", column, schema=None)
in batch.impl.operations.impl.mock_calls
)
def test_create_fk(self):
with self._fixture() as batch:
batch.create_foreign_key("myfk", "user", ["x"], ["y"])
eq_(
self.mock_schema.ForeignKeyConstraint.mock_calls,
[
mock.call(
["x"],
["user.y"],
onupdate=None,
ondelete=None,
name="myfk",
initially=None,
deferrable=None,
match=None,
)
],
)
eq_(
self.mock_schema.Table.mock_calls,
[
mock.call(
"user",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call(
"tname",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call().append_constraint(
self.mock_schema.ForeignKeyConstraint()
),
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.ForeignKeyConstraint()
)
],
)
def test_create_fk_schema(self):
with self._fixture(schema="foo") as batch:
batch.create_foreign_key("myfk", "user", ["x"], ["y"])
eq_(
self.mock_schema.ForeignKeyConstraint.mock_calls,
[
mock.call(
["x"],
["user.y"],
onupdate=None,
ondelete=None,
name="myfk",
initially=None,
deferrable=None,
match=None,
)
],
)
eq_(
self.mock_schema.Table.mock_calls,
[
mock.call(
"user",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call(
"tname",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema="foo",
),
mock.call().append_constraint(
self.mock_schema.ForeignKeyConstraint()
),
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.ForeignKeyConstraint()
)
],
)
def test_create_uq(self):
with self._fixture() as batch:
batch.create_unique_constraint("uq1", ["a", "b"])
eq_(
self.mock_schema.Table().c.__getitem__.mock_calls,
[mock.call("a"), mock.call("b")],
)
eq_(
self.mock_schema.UniqueConstraint.mock_calls,
[
mock.call(
self.mock_schema.Table().c.__getitem__(),
self.mock_schema.Table().c.__getitem__(),
name="uq1",
)
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.add_constraint(self.mock_schema.UniqueConstraint())],
)
def test_create_pk(self):
with self._fixture() as batch:
batch.create_primary_key("pk1", ["a", "b"])
eq_(
self.mock_schema.Table().c.__getitem__.mock_calls,
[mock.call("a"), mock.call("b")],
)
eq_(
self.mock_schema.PrimaryKeyConstraint.mock_calls,
[
mock.call(
self.mock_schema.Table().c.__getitem__(),
self.mock_schema.Table().c.__getitem__(),
name="pk1",
)
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.PrimaryKeyConstraint()
)
],
)
def test_create_check(self):
expr = text("a > b")
with self._fixture() as batch:
batch.create_check_constraint("ck1", expr)
eq_(
self.mock_schema.CheckConstraint.mock_calls,
[mock.call(expr, name="ck1")],
)
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.add_constraint(self.mock_schema.CheckConstraint())],
)
def test_drop_constraint(self):
with self._fixture() as batch:
batch.drop_constraint("uq1")
eq_(self.mock_schema.Constraint.mock_calls, [mock.call(name="uq1")])
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.drop_constraint(self.mock_schema.Constraint())],
)
class CopyFromTest(TestBase):
def _fixture(self):
self.metadata = MetaData()
self.table = Table(
"foo",
self.metadata,
Column("id", Integer, primary_key=True),
Column("data", String(50)),
Column("x", Integer),
)
context = op_fixture(dialect="sqlite", as_sql=True)
self.op = Operations(context)
return context
def test_change_type(self):
context = self._fixture()
self.table.append_column(Column("toj", Text))
self.table.append_column(Column("fromj", JSON))
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column("data", type_=Integer)
batch_op.alter_column("toj", type_=JSON)
batch_op.alter_column("fromj", type_=Text)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data INTEGER, x INTEGER, toj JSON, fromj TEXT, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, toj, fromj) "
"SELECT foo.id, "
"CAST(foo.data AS INTEGER) AS %s, foo.x, foo.toj, "
"CAST(foo.fromj AS TEXT) AS %s FROM foo"
% (
("data" if sqla_14 else "anon_1"),
("fromj" if sqla_14 else "anon_2"),
),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
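        # Note the asymmetry asserted above: the JSON -> TEXT copy is wrapped
        # in CAST, while TEXT -> JSON is copied as-is; SQLite has no JSON
        # storage class, so presumably there is nothing meaningful to CAST to.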
def test_change_type_from_schematype(self):
context = self._fixture()
self.table.append_column(
Column("y", Boolean(create_constraint=True, name="ck1"))
)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
"y",
type_=Integer,
existing_type=Boolean(create_constraint=True, name="ck1"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, y INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, y) SELECT foo.id, "
"foo.data, foo.x, CAST(foo.y AS INTEGER) AS %s FROM foo"
% (("y" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_name_from_existing_variant_type(self):
"""test #982"""
context = self._fixture()
self.table.append_column(
Column("y", Text().with_variant(Text(10000), "mysql"))
)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
column_name="y",
new_column_name="q",
existing_type=Text().with_variant(Text(10000), "mysql"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, q TEXT, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, q) "
"SELECT foo.id, foo.data, foo.x, foo.y FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_type_to_schematype(self):
context = self._fixture()
self.table.append_column(Column("y", Integer))
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
"y",
existing_type=Integer,
type_=Boolean(create_constraint=True, name="ck1"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, y BOOLEAN, PRIMARY KEY (id), "
"CONSTRAINT ck1 CHECK (y IN (0, 1)))",
"INSERT INTO _alembic_tmp_foo (id, data, x, y) SELECT foo.id, "
"foo.data, foo.x, CAST(foo.y AS BOOLEAN) AS %s FROM foo"
% (("y" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_create_drop_index_w_always(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table, recreate="always"
) as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), "
"x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) "
"SELECT foo.id, foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
"CREATE UNIQUE INDEX ix_data ON foo (data)",
)
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table, recreate="always"
) as batch_op:
batch_op.drop_index("ix_data")
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) "
"SELECT foo.id, foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_create_drop_index_wo_always(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_("CREATE UNIQUE INDEX ix_data ON foo (data)")
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.drop_index("ix_data")
context.assert_("DROP INDEX ix_data")
def test_create_drop_index_w_other_ops(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column("data", type_=Integer)
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data INTEGER, x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) SELECT foo.id, "
"CAST(foo.data AS INTEGER) AS %s, foo.x FROM foo"
% (("data" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
"CREATE UNIQUE INDEX ix_data ON foo (data)",
)
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.drop_index("ix_data")
batch_op.alter_column("data", type_=String)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR, x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) SELECT foo.id, "
"foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
class BatchRoundTripTest(TestBase):
__only_on__ = "sqlite"
def setUp(self):
self.conn = config.db.connect()
self.metadata = MetaData()
t1 = Table(
"foo",
self.metadata,
Column("id", Integer, primary_key=True),
Column("data", String(50)),
Column("x", Integer),
mysql_engine="InnoDB",
)
with self.conn.begin():
t1.create(self.conn)
self.conn.execute(
t1.insert(),
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
],
)
context = MigrationContext.configure(self.conn)
self.op = Operations(context)
def tearDown(self):
        # why commit? because SQLite has inconsistent treatment of
        # transactional DDL. A test that runs CREATE TABLE and then
        # ALTER TABLE to change the name of that table will end up
        # committing the CREATE TABLE but not the ALTER. As batch mode
        # does this with a temp table name that's not even in the
        # metadata collection, we don't have an explicit drop for it
        # (though we could add one). Calling commit means the ALTER
        # will go through and the drop_all() will then catch it.
_safe_commit_connection_transaction(self.conn)
with self.conn.begin():
self.metadata.drop_all(self.conn)
self.conn.close()
@contextmanager
def _sqlite_referential_integrity(self):
self.conn.exec_driver_sql("PRAGMA foreign_keys=ON")
try:
yield
finally:
self.conn.exec_driver_sql("PRAGMA foreign_keys=OFF")
            # as these tests typically fail on purpose, clean out any
            # leftover tables
m = MetaData()
m.reflect(self.conn)
with self.conn.begin():
m.drop_all(self.conn)
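    # Hedged aside, illustration only: outside of these tests, the usual
    # way to keep PRAGMA foreign_keys=ON in effect for every SQLite
    # connection is a pool-level "connect" event hook, per the SQLAlchemy
    # docs.  The database URL is hypothetical and this helper is never
    # invoked by the suite.
    def _example_sqlite_fk_engine(self):
        from sqlalchemy import create_engine, event
        engine = create_engine("sqlite:///example.db")
        @event.listens_for(engine, "connect")
        def set_sqlite_pragma(dbapi_connection, connection_record):
            cursor = dbapi_connection.cursor()
            cursor.execute("PRAGMA foreign_keys=ON")
            cursor.close()
        return engine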
def _no_pk_fixture(self):
with self.conn.begin():
nopk = Table(
"nopk",
self.metadata,
Column("a", Integer),
Column("b", Integer),
Column("c", Integer),
mysql_engine="InnoDB",
)
nopk.create(self.conn)
self.conn.execute(
nopk.insert(),
[{"a": 1, "b": 2, "c": 3}, {"a": 2, "b": 4, "c": 5}],
)
return nopk
def _table_w_index_fixture(self):
with self.conn.begin():
t = Table(
"t_w_ix",
self.metadata,
Column("id", Integer, primary_key=True),
Column("thing", Integer),
Column("data", String(20)),
)
Index("ix_thing", t.c.thing)
t.create(self.conn)
return t
def _boolean_fixture(self):
with self.conn.begin():
t = Table(
"hasbool",
self.metadata,
Column("x", Boolean(create_constraint=True, name="ck1")),
Column("y", Integer),
)
t.create(self.conn)
def _timestamp_fixture(self):
with self.conn.begin():
t = Table("hasts", self.metadata, Column("x", DateTime()))
t.create(self.conn)
return t
def _ck_constraint_fixture(self):
with self.conn.begin():
t = Table(
"ck_table",
self.metadata,
Column("id", Integer, nullable=False),
CheckConstraint("id is not NULL", name="ck"),
)
t.create(self.conn)
return t
def _datetime_server_default_fixture(self):
return func.datetime("now", "localtime")
def _timestamp_w_expr_default_fixture(self):
with self.conn.begin():
t = Table(
"hasts",
self.metadata,
Column(
"x",
DateTime(),
server_default=self._datetime_server_default_fixture(),
nullable=False,
),
)
t.create(self.conn)
return t
def _int_to_boolean_fixture(self):
with self.conn.begin():
t = Table("hasbool", self.metadata, Column("x", Integer))
t.create(self.conn)
def test_add_constraint_type(self):
"""test for #1195."""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(Column("q", Boolean(create_constraint=True)))
insp = inspect(self.conn)
assert {
c["type"]._type_affinity
for c in insp.get_columns("foo")
if c["name"] == "q"
}.intersection([Boolean, Integer])
def test_change_type_boolean_to_int(self):
self._boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.alter_column(
"x",
type_=Integer,
existing_type=Boolean(create_constraint=True, name="ck1"),
)
insp = inspect(self.conn)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Integer],
)
def test_no_net_change_timestamp(self):
t = self._timestamp_fixture()
import datetime
with self.conn.begin():
self.conn.execute(
t.insert(), {"x": datetime.datetime(2012, 5, 18, 15, 32, 5)}
)
with self.op.batch_alter_table("hasts") as batch_op:
batch_op.alter_column("x", type_=DateTime())
eq_(
self.conn.execute(_select(t.c.x)).fetchall(),
[(datetime.datetime(2012, 5, 18, 15, 32, 5),)],
)
def test_no_net_change_timestamp_w_default(self):
t = self._timestamp_w_expr_default_fixture()
with self.op.batch_alter_table("hasts") as batch_op:
batch_op.alter_column(
"x",
type_=DateTime(),
nullable=False,
server_default=self._datetime_server_default_fixture(),
)
with self.conn.begin():
self.conn.execute(t.insert())
res = self.conn.execute(_select(t.c.x))
if sqla_14:
assert res.scalar_one_or_none() is not None
else:
row = res.fetchone()
assert row["x"] is not None
def test_drop_col_schematype(self):
self._boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.drop_column(
"x", existing_type=Boolean(create_constraint=True, name="ck1")
)
insp = inspect(self.conn)
assert "x" not in (c["name"] for c in insp.get_columns("hasbool"))
def test_change_type_int_to_boolean(self):
self._int_to_boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.alter_column(
"x", type_=Boolean(create_constraint=True, name="ck1")
)
insp = inspect(self.conn)
if exclusions.against(config, "sqlite"):
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Boolean],
)
elif exclusions.against(config, "mysql"):
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Integer],
)
def _assert_data(self, data, tablename="foo"):
res = self.conn.execute(text("select * from %s" % tablename))
if sqla_14:
res = res.mappings()
eq_([dict(row) for row in res], data)
def test_ix_existing(self):
self._table_w_index_fixture()
with self.op.batch_alter_table("t_w_ix") as batch_op:
batch_op.alter_column("data", type_=String(30))
batch_op.create_index("ix_data", ["data"])
insp = inspect(self.conn)
eq_(
{
(ix["name"], tuple(ix["column_names"]))
for ix in insp.get_indexes("t_w_ix")
},
{("ix_data", ("data",)), ("ix_thing", ("thing",))},
)
def test_fk_points_to_me_auto(self):
self._test_fk_points_to_me("auto")
    # in particular, this tests that the failures
    # on PG and MySQL result in recovery of the batch system,
    # e.g. that the _alembic_tmp_foo table is dropped
@config.requirements.no_referential_integrity
def test_fk_points_to_me_recreate(self):
self._test_fk_points_to_me("always")
@exclusions.only_on("sqlite")
@exclusions.fails(
"intentionally asserting that this "
"doesn't work w/ pragma foreign keys"
)
def test_fk_points_to_me_sqlite_refinteg(self):
with self._sqlite_referential_integrity():
self._test_fk_points_to_me("auto")
def _test_fk_points_to_me(self, recreate):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("foo_id", Integer, ForeignKey("foo.id")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "foo_id": 3})
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.alter_column(
"data", new_column_name="newdata", existing_type=String(50)
)
insp = inspect(self.conn)
eq_(
[
(
key["referred_table"],
key["referred_columns"],
key["constrained_columns"],
)
for key in insp.get_foreign_keys("bar")
],
[("foo", ["id"], ["foo_id"])],
)
def test_selfref_fk_auto(self):
self._test_selfref_fk("auto")
@config.requirements.no_referential_integrity
def test_selfref_fk_recreate(self):
self._test_selfref_fk("always")
@exclusions.only_on("sqlite")
@exclusions.fails(
"intentionally asserting that this "
"doesn't work w/ pragma foreign keys"
)
def test_selfref_fk_sqlite_refinteg(self):
with self._sqlite_referential_integrity():
self._test_selfref_fk("auto")
def _test_selfref_fk(self, recreate):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("bar_id", Integer, ForeignKey("bar.id")),
Column("data", String(50)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(
bar.insert(), {"id": 1, "data": "x", "bar_id": None}
)
self.conn.execute(
bar.insert(), {"id": 2, "data": "y", "bar_id": 1}
)
with self.op.batch_alter_table("bar", recreate=recreate) as batch_op:
batch_op.alter_column(
"data", new_column_name="newdata", existing_type=String(50)
)
insp = inspect(self.conn)
eq_(
[
(
key["referred_table"],
key["referred_columns"],
key["constrained_columns"],
)
for key in insp.get_foreign_keys("bar")
],
[("bar", ["id"], ["bar_id"])],
)
def test_change_type(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("data", type_=Integer)
self._assert_data(
[
{"id": 1, "data": 0, "x": 5},
{"id": 2, "data": 22, "x": 6},
{"id": 3, "data": 8, "x": 7},
{"id": 4, "data": 9, "x": 8},
{"id": 5, "data": 0, "x": 9},
]
)
def test_drop_column(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("data")
self._assert_data(
[
{"id": 1, "x": 5},
{"id": 2, "x": 6},
{"id": 3, "x": 7},
{"id": 4, "x": 8},
{"id": 5, "x": 9},
]
)
def test_drop_pk_col_readd_col(self):
# drop a column, add it back without primary_key=True, should no
# longer be in the constraint
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer))
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], [])
def test_drop_pk_col_readd_pk_col(self):
# drop a column, add it back with primary_key=True, should remain
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer, primary_key=True))
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], ["id"])
def test_drop_pk_col_readd_col_also_pk_const(self):
        # drop a column, add it back without primary_key=True, but then
        # also make a new PK constraint that includes it; it should remain
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer))
batch_op.create_primary_key("newpk", ["id"])
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], ["id"])
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_pk_constraint(self, recreate):
self._no_pk_fixture()
with self.op.batch_alter_table("nopk", recreate=recreate) as batch_op:
batch_op.create_primary_key("newpk", ["a", "b"])
pk_const = inspect(self.conn).get_pk_constraint("nopk")
with config.requirements.reflects_pk_names.fail_if():
eq_(pk_const["name"], "newpk")
eq_(pk_const["constrained_columns"], ["a", "b"])
@testing.combinations(("always",), ("auto",), argnames="recreate")
@config.requirements.check_constraint_reflection
def test_add_ck_constraint(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.create_check_constraint("newck", text("x > 0"))
ck_consts = inspect(self.conn).get_check_constraints("foo")
ck_consts[0]["sqltext"] = re.sub(
r"[\'\"`\(\)]", "", ck_consts[0]["sqltext"]
)
for ck in ck_consts:
ck.pop("comment", None)
eq_(ck_consts, [{"sqltext": "x > 0", "name": "newck"}])
@testing.combinations(("always",), ("auto",), argnames="recreate")
@config.requirements.check_constraint_reflection
def test_drop_ck_constraint(self, recreate):
self._ck_constraint_fixture()
with self.op.batch_alter_table(
"ck_table", recreate=recreate
) as batch_op:
batch_op.drop_constraint("ck", type_="check")
ck_consts = inspect(self.conn).get_check_constraints("ck_table")
eq_(ck_consts, [])
@config.requirements.check_constraint_reflection
def test_drop_ck_constraint_legacy_type(self):
self._ck_constraint_fixture()
with self.op.batch_alter_table(
"ck_table", recreate="always"
) as batch_op:
# matches the docs that were written for this originally
batch_op.drop_constraint("ck", "check")
ck_consts = inspect(self.conn).get_check_constraints("ck_table")
eq_(ck_consts, [])
@config.requirements.unnamed_constraints
def test_drop_foreign_key(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("foo_id", Integer, ForeignKey("foo.id")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "foo_id": 3})
naming_convention = {
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s"
}
with self.op.batch_alter_table(
"bar", naming_convention=naming_convention
) as batch_op:
batch_op.drop_constraint("fk_bar_foo_id_foo", type_="foreignkey")
eq_(inspect(self.conn).get_foreign_keys("bar"), [])
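    # The naming_convention above is the documented way to drop a
    # constraint that was created without an explicit name: batch mode
    # applies the convention to the reflected table so that
    # drop_constraint() can locate "fk_bar_foo_id_foo".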
def test_drop_column_fk_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.drop_column("data")
self._assert_data(
[
{"id": 1, "x": 5},
{"id": 2, "x": 6},
{"id": 3, "x": 7},
{"id": 4, "x": 8},
{"id": 5, "x": 9},
]
)
def _assert_table_comment(self, tname, comment):
insp = inspect(self.conn)
tcomment = insp.get_table_comment(tname)
eq_(tcomment, {"text": comment})
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_uq(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.create_unique_constraint("newuk", ["x"])
uq_consts = inspect(self.conn).get_unique_constraints("foo")
eq_(
[
{"name": uc["name"], "column_names": uc["column_names"]}
for uc in uq_consts
],
[{"name": "newuk", "column_names": ["x"]}],
)
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_uq_plus_col(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.add_column(Column("y", Integer))
batch_op.create_unique_constraint("newuk", ["x", "y"])
uq_consts = inspect(self.conn).get_unique_constraints("foo")
eq_(
[
{"name": uc["name"], "column_names": uc["column_names"]}
for uc in uq_consts
],
[{"name": "newuk", "column_names": ["x", "y"]}],
)
@config.requirements.comments
def test_add_table_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment("some comment")
self._assert_table_comment("foo", "some comment")
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment(
"some new comment", existing_comment="some comment"
)
self._assert_table_comment("foo", "some new comment")
@config.requirements.comments
def test_drop_table_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment("some comment")
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_table_comment(existing_comment="some comment")
self._assert_table_comment("foo", None)
def _assert_column_comment(self, tname, cname, comment):
insp = inspect(self.conn)
cols = {col["name"]: col for col in insp.get_columns(tname)}
eq_(cols[cname]["comment"], comment)
@config.requirements.comments
def test_add_column_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(Column("y", Integer, comment="some comment"))
self._assert_column_comment("foo", "y", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "y": None},
{"id": 2, "data": "22", "x": 6, "y": None},
{"id": 3, "data": "8.5", "x": 7, "y": None},
{"id": 4, "data": "9.46", "x": 8, "y": None},
{"id": 5, "data": "d5", "x": 9, "y": None},
]
)
@config.requirements.comments
def test_add_column_comment_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(Column("y", Integer, comment="some comment"))
self._assert_column_comment("foo", "y", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "y": None},
{"id": 2, "data": "22", "x": 6, "y": None},
{"id": 3, "data": "8.5", "x": 7, "y": None},
{"id": 4, "data": "9.46", "x": 8, "y": None},
{"id": 5, "data": "d5", "x": 9, "y": None},
]
)
@config.requirements.comments
def test_alter_column_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column(
"x", existing_type=Integer(), comment="some comment"
)
self._assert_column_comment("foo", "x", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
@config.requirements.comments
def test_alter_column_comment_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.alter_column("x", comment="some comment")
self._assert_column_comment("foo", "x", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
def test_rename_column(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("x", new_column_name="y")
self._assert_data(
[
{"id": 1, "data": "d1", "y": 5},
{"id": 2, "data": "22", "y": 6},
{"id": 3, "data": "8.5", "y": 7},
{"id": 4, "data": "9.46", "y": 8},
{"id": 5, "data": "d5", "y": 9},
]
)
def test_rename_column_boolean(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
with self.op.batch_alter_table("bar") as batch_op:
batch_op.alter_column(
"flag", new_column_name="bflag", existing_type=Boolean
)
self._assert_data(
[{"id": 1, "bflag": True}, {"id": 2, "bflag": False}], "bar"
)
# @config.requirements.check_constraint_reflection
def test_rename_column_boolean_named_ck(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True, name="ck1")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
with self.op.batch_alter_table("bar", recreate="always") as batch_op:
batch_op.alter_column(
"flag",
new_column_name="bflag",
existing_type=Boolean(create_constraint=True, name="ck1"),
)
self._assert_data(
[{"id": 1, "bflag": True}, {"id": 2, "bflag": False}], "bar"
)
@config.requirements.non_native_boolean
def test_rename_column_non_native_boolean_no_ck(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=False)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
self.conn.execute(
            # override the Boolean type, which as of SQLAlchemy 1.1
            # coerces numerics to 1/0
text("insert into bar (id, flag) values (:id, :flag)"),
{"id": 3, "flag": 5},
)
with self.op.batch_alter_table(
"bar",
reflect_args=[Column("flag", Boolean(create_constraint=False))],
) as batch_op:
batch_op.alter_column(
"flag", new_column_name="bflag", existing_type=Boolean
)
self._assert_data(
[
{"id": 1, "bflag": True},
{"id": 2, "bflag": False},
{"id": 3, "bflag": 5},
],
"bar",
)
def test_drop_column_pk(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
self._assert_data(
[
{"data": "d1", "x": 5},
{"data": "22", "x": 6},
{"data": "8.5", "x": 7},
{"data": "9.46", "x": 8},
{"data": "d5", "x": 9},
]
)
def test_rename_column_pk(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("id", new_column_name="ident")
self._assert_data(
[
{"ident": 1, "data": "d1", "x": 5},
{"ident": 2, "data": "22", "x": 6},
{"ident": 3, "data": "8.5", "x": 7},
{"ident": 4, "data": "9.46", "x": 8},
{"ident": 5, "data": "d5", "x": 9},
]
)
def test_add_column_auto(self):
# note this uses ALTER
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi")
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(config.db).get_columns("foo")],
["id", "data", "x", "data2"],
)
def test_add_column_auto_server_default_calculated(self):
"""test #883"""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column(
"data2",
DateTime(),
server_default=self._datetime_server_default_fixture(),
)
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": mock.ANY},
{"id": 2, "data": "22", "x": 6, "data2": mock.ANY},
{"id": 3, "data": "8.5", "x": 7, "data2": mock.ANY},
{"id": 4, "data": "9.46", "x": 8, "data2": mock.ANY},
{"id": 5, "data": "d5", "x": 9, "data2": mock.ANY},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
@testing.combinations((True,), (False,))
@testing.exclusions.only_on("sqlite")
@config.requirements.computed_columns
def test_add_column_auto_generated(self, persisted):
"""test #883"""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column(
"data2", Integer, Computed("1 + 1", persisted=persisted)
)
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": 2},
{"id": 2, "data": "22", "x": 6, "data2": 2},
{"id": 3, "data": "8.5", "x": 7, "data2": 2},
{"id": 4, "data": "9.46", "x": 8, "data2": 2},
{"id": 5, "data": "d5", "x": 9, "data2": 2},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
@config.requirements.identity_columns
def test_add_column_auto_identity(self):
"""test #883"""
self._no_pk_fixture()
with self.op.batch_alter_table("nopk") as batch_op:
batch_op.add_column(Column("id", Integer, Identity()))
self._assert_data(
[
{"a": 1, "b": 2, "c": 3, "id": 1},
{"a": 2, "b": 4, "c": 5, "id": 2},
],
tablename="nopk",
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x"],
)
def test_add_column_insert_before_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_before="data",
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data2", "data", "x"],
)
def test_add_column_insert_after_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_after="data",
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "data2", "x"],
)
def test_add_column_insert_before_raise_on_alter(self):
def go():
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_before="data",
)
assert_raises_message(
alembic_exc.CommandError,
"Can't specify insert_before or insert_after when using ALTER",
go,
)
def test_add_column_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi")
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
def test_create_drop_index(self):
insp = inspect(self.conn)
eq_(insp.get_indexes("foo"), [])
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
insp = inspect(self.conn)
eq_(
[
dict(
unique=ix["unique"],
name=ix["name"],
column_names=ix["column_names"],
)
for ix in insp.get_indexes("foo")
],
[{"unique": True, "name": "ix_data", "column_names": ["data"]}],
)
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.drop_index("ix_data")
insp = inspect(self.conn)
eq_(insp.get_indexes("foo"), [])
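# Hedged recap of the round-trip patterns above, in migration-script form.
# recreate="always" forces the copy-and-move strategy even where a direct
# ALTER would work, and insert_after is only honored in that mode (see
# test_add_column_insert_before_raise_on_alter).  Names are hypothetical
# and the helper is never invoked here.
def _example_batch_migration_upgrade():
    from sqlalchemy import Column, String
    from alembic import op
    with op.batch_alter_table("account", recreate="always") as batch_op:
        batch_op.add_column(
            Column("nickname", String(50), server_default="n/a"),
            insert_after="id",
        )
        batch_op.create_index("ix_account_nickname", ["nickname"])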
class BatchRoundTripMySQLTest(BatchRoundTripTest):
__only_on__ = "mysql", "mariadb"
__backend__ = True
def _datetime_server_default_fixture(self):
return func.current_timestamp()
@exclusions.fails()
def test_drop_pk_col_readd_pk_col(self):
super().test_drop_pk_col_readd_pk_col()
@exclusions.fails()
def test_drop_pk_col_readd_col_also_pk_const(self):
super().test_drop_pk_col_readd_col_also_pk_const()
@exclusions.fails()
def test_rename_column_pk(self):
super().test_rename_column_pk()
@exclusions.fails()
def test_rename_column(self):
super().test_rename_column()
@exclusions.fails()
def test_change_type(self):
super().test_change_type()
def test_create_drop_index(self):
super().test_create_drop_index()
# fails on mariadb 10.2, succeeds on 10.3
@exclusions.fails_if(config.requirements.mysql_check_col_name_change)
def test_rename_column_boolean(self):
super().test_rename_column_boolean()
def test_change_type_boolean_to_int(self):
super().test_change_type_boolean_to_int()
def test_change_type_int_to_boolean(self):
super().test_change_type_int_to_boolean()
class BatchRoundTripPostgresqlTest(BatchRoundTripTest):
__only_on__ = "postgresql"
__backend__ = True
def _native_boolean_fixture(self):
t = Table(
"has_native_bool",
self.metadata,
Column(
"x",
Boolean(create_constraint=True),
server_default="false",
nullable=False,
),
Column("y", Integer),
)
with self.conn.begin():
t.create(self.conn)
def _datetime_server_default_fixture(self):
return func.current_timestamp()
@exclusions.fails()
def test_drop_pk_col_readd_pk_col(self):
super().test_drop_pk_col_readd_pk_col()
@exclusions.fails()
def test_drop_pk_col_readd_col_also_pk_const(self):
super().test_drop_pk_col_readd_col_also_pk_const()
@exclusions.fails()
def test_change_type(self):
super().test_change_type()
def test_create_drop_index(self):
super().test_create_drop_index()
@exclusions.fails()
def test_change_type_int_to_boolean(self):
super().test_change_type_int_to_boolean()
@exclusions.fails()
def test_change_type_boolean_to_int(self):
super().test_change_type_boolean_to_int()
def test_add_col_table_has_native_boolean(self):
self._native_boolean_fixture()
        # to ensure test coverage on SQLAlchemy 1.4 and above, force the
        # create_constraint flag to True even though it defaults to False
        # in 1.4. This test wants to ensure that the "should create" rule
        # is consulted.
def listen_for_reflect(inspector, table, column_info):
if isinstance(column_info["type"], Boolean):
column_info["type"].create_constraint = True
with self.op.batch_alter_table(
"has_native_bool",
recreate="always",
reflect_kwargs={
"listeners": [("column_reflect", listen_for_reflect)]
},
) as batch_op:
batch_op.add_column(Column("data", Integer))
insp = inspect(self.conn)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("has_native_bool")
if c["name"] == "data"
],
[Integer],
)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("has_native_bool")
if c["name"] == "x"
],
[Boolean],
)
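# Hedged sketch of the reflect_kwargs hook used in the test just above: a
# "column_reflect" listener lets a migration adjust reflected column
# definitions before batch mode rebuilds the table.  The type override
# mirrors the test; the table and column names are hypothetical and the
# helper is never invoked here.
def _example_reflect_listener():
    from sqlalchemy import Boolean
    from alembic import op
    def listen_for_reflect(inspector, table, column_info):
        # force the CHECK-constraint rule to be consulted on rebuild
        if isinstance(column_info["type"], Boolean):
            column_info["type"].create_constraint = True
    with op.batch_alter_table(
        "flags",
        recreate="always",
        reflect_kwargs={"listeners": [("column_reflect", listen_for_reflect)]},
    ) as batch_op:
        batch_op.alter_column("enabled", type_=Boolean(create_constraint=True))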
class OfflineTest(TestBase):
@testing.fixture
def no_reflect_batch_fixture(self):
staging_env()
def go():
self.cfg = cfg = _no_sql_testing_config(dialect="sqlite")
self.a = a = util.rev_id()
script = ScriptDirectory.from_config(cfg)
script.generate_revision(
a, "revision a", refresh=True, head="base"
)
write_script(
script,
a,
"""\
"Rev A"
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import String, Table, MetaData
some_table_up = Table(
"some_table", MetaData(),
Column('id', Integer),
Column('bar', String)
)
some_table_down = Table(
"some_table", MetaData(),
Column('id', Integer),
Column('foo', Integer)
)
def upgrade():
with op.batch_alter_table("some_table", copy_from=some_table_up) as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
def downgrade():
with op.batch_alter_table("some_table", copy_from=some_table_down) as batch_op:
batch_op.drop_column('foo')
batch_op.add_column(Column('bar', String))
""" # noqa: E501
% a,
)
yield go
clear_staging_env()
@testing.fixture
def batch_fixture(self):
staging_env()
def go(dialect):
self.cfg = cfg = _no_sql_testing_config(dialect=dialect)
self.a = a = util.rev_id()
script = ScriptDirectory.from_config(cfg)
script.generate_revision(
a, "revision a", refresh=True, head="base"
)
write_script(
script,
a,
"""\
"Rev A"
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import String
def upgrade():
with op.batch_alter_table("some_table") as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
def downgrade():
with op.batch_alter_table("some_table") as batch_op:
batch_op.drop_column('foo')
batch_op.add_column(Column('bar', String))
"""
% a,
)
yield go
clear_staging_env()
def test_upgrade_non_batch(self, batch_fixture):
batch_fixture("postgresql")
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.a, sql=True)
assert re.search(
r"ALTER TABLE some_table ADD COLUMN foo INTEGER", buf.getvalue()
)
def test_downgrade_non_batch(self, batch_fixture):
batch_fixture("postgresql")
with capture_context_buffer() as buf:
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
assert re.search(
r"ALTER TABLE some_table DROP COLUMN foo", buf.getvalue()
)
def test_upgrade_batch_fails_gracefully(self, batch_fixture):
batch_fixture("sqlite")
with expect_raises_message(
CommandError,
"This operation cannot proceed in --sql mode; batch mode with "
"dialect sqlite requires a live database connection with which "
'to reflect the table "some_table"',
):
command.upgrade(self.cfg, self.a, sql=True)
def test_downgrade_batch_fails_gracefully(self, batch_fixture):
batch_fixture("sqlite")
with expect_raises_message(
CommandError,
"This operation cannot proceed in --sql mode; batch mode with "
"dialect sqlite requires a live database connection with which "
'to reflect the table "some_table"',
):
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
def test_upgrade_batch_no_reflection(self, no_reflect_batch_fixture):
no_reflect_batch_fixture()
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.a, sql=True)
assert re.search(
r"CREATE TABLE _alembic_tmp_some_table", buf.getvalue()
)
def test_downgrade_batch_no_reflection(self, no_reflect_batch_fixture):
no_reflect_batch_fixture()
with capture_context_buffer() as buf:
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
assert re.search(
r"CREATE TABLE _alembic_tmp_some_table", buf.getvalue()
)
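# Hedged recap of the OfflineTest cases above: in --sql (offline) mode,
# SQLite batch operations cannot reflect the table, so the migration must
# supply copy_from; otherwise a CommandError is raised.  A sketch of the
# programmatic equivalent of "alembic upgrade head --sql" follows; the
# config path is hypothetical and the helper is never invoked here.
def _example_offline_upgrade():
    from alembic import command
    from alembic.config import Config
    cfg = Config("alembic.ini")
    command.upgrade(cfg, "head", sql=True)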
| from contextlib import contextmanager
import re
from sqlalchemy import Boolean
from sqlalchemy import CheckConstraint
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import Enum
from sqlalchemy import ForeignKey
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy import func
from sqlalchemy import Index
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy import JSON
from sqlalchemy import MetaData
from sqlalchemy import PrimaryKeyConstraint
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import Text
from sqlalchemy import UniqueConstraint
from sqlalchemy.dialects import sqlite as sqlite_dialect
from sqlalchemy.schema import CreateIndex
from sqlalchemy.schema import CreateTable
from sqlalchemy.sql import column
from sqlalchemy.sql import text
from alembic import command
from alembic import testing
from alembic import util
from alembic.ddl import sqlite
from alembic.operations import Operations
from alembic.operations.batch import ApplyBatchImpl
from alembic.runtime.migration import MigrationContext
from alembic.script import ScriptDirectory
from alembic.testing import assert_raises_message
from alembic.testing import config
from alembic.testing import eq_
from alembic.testing import exclusions
from alembic.testing import expect_raises_message
from alembic.testing import is_
from alembic.testing import mock
from alembic.testing import TestBase
from alembic.testing.env import _no_sql_testing_config
from alembic.testing.env import clear_staging_env
from alembic.testing.env import staging_env
from alembic.testing.env import write_script
from alembic.testing.fixtures import capture_context_buffer
from alembic.testing.fixtures import op_fixture
from alembic.util import CommandError
from alembic.util import exc as alembic_exc
from alembic.util.sqla_compat import _NONE_NAME
from alembic.util.sqla_compat import _safe_commit_connection_transaction
from alembic.util.sqla_compat import _select
from alembic.util.sqla_compat import has_computed
from alembic.util.sqla_compat import has_identity
from alembic.util.sqla_compat import sqla_14
if has_computed:
from alembic.util.sqla_compat import Computed
if has_identity:
from alembic.util.sqla_compat import Identity
class BatchApplyTest(TestBase):
def setUp(self):
self.op = Operations(mock.Mock(opts={}))
self.impl = sqlite.SQLiteImpl(
sqlite_dialect.dialect(), None, False, False, None, {}
)
def _simple_fixture(self, table_args=(), table_kwargs={}, **kw):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String(10)),
Column("y", Integer),
)
return ApplyBatchImpl(
self.impl, t, table_args, table_kwargs, False, **kw
)
def _uq_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
UniqueConstraint("y", name="uq1"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_ck_table_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
CheckConstraint("y > 5", name="ck1"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_ck_col_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer, CheckConstraint("y > 5", name="ck1")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _ix_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
Index("ix1", "y"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _pk_fixture(self):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer),
Column("x", String()),
Column("y", Integer),
PrimaryKeyConstraint("id", name="mypk"),
)
return ApplyBatchImpl(self.impl, t, (), {}, False)
def _literal_ck_fixture(
self, copy_from=None, table_args=(), table_kwargs={}
):
m = MetaData()
if copy_from is not None:
t = copy_from
else:
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
CheckConstraint("email LIKE '%@%'"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _sql_ck_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
)
t.append_constraint(CheckConstraint(t.c.email.like("%@%")))
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id", Integer, ForeignKey("user.id")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _multi_fk_fixture(self, table_args=(), table_kwargs={}, schema=None):
m = MetaData()
if schema:
schemaarg = "%s." % schema
else:
schemaarg = ""
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id_1", Integer, ForeignKey("%suser.id" % schemaarg)),
Column("user_id_2", Integer, ForeignKey("%suser.id" % schemaarg)),
Column("user_id_3", Integer),
Column("user_id_version", Integer),
ForeignKeyConstraint(
["user_id_3", "user_id_version"],
["%suser.id" % schemaarg, "%suser.id_version" % schemaarg],
),
schema=schema,
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id", Integer, ForeignKey("user.id", name="ufk")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _selfref_fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("parent_id", Integer, ForeignKey("tname.id")),
Column("data", String),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _boolean_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _boolean_no_ck_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=False)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _enum_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("thing", Enum("a", "b", "c", create_constraint=True)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _server_default_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("thing", String(), server_default=""),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _assert_impl(
self,
impl,
colnames=None,
ddl_contains=None,
ddl_not_contains=None,
dialect="default",
schema=None,
):
context = op_fixture(dialect=dialect)
impl._create(context.impl)
if colnames is None:
colnames = ["id", "x", "y"]
eq_(impl.new_table.c.keys(), colnames)
pk_cols = [col for col in impl.new_table.c if col.primary_key]
eq_(list(impl.new_table.primary_key), pk_cols)
create_stmt = str(
CreateTable(impl.new_table).compile(dialect=context.dialect)
)
create_stmt = re.sub(r"[\n\t]", "", create_stmt)
idx_stmt = ""
# create indexes; these should be created in terms of the
# final table name
impl.new_table.name = impl.table.name
for idx in impl._gather_indexes_from_both_tables():
idx_stmt += str(CreateIndex(idx).compile(dialect=context.dialect))
idx_stmt = re.sub(r"[\n\t]", "", idx_stmt)
# revert new table name to the temp name, assertions below
# are looking for the temp name
impl.new_table.name = ApplyBatchImpl._calc_temp_name(impl.table.name)
if ddl_contains:
assert ddl_contains in create_stmt + idx_stmt
if ddl_not_contains:
assert ddl_not_contains not in create_stmt + idx_stmt
expected = [create_stmt]
if schema:
args = {"schema": "%s." % schema}
else:
args = {"schema": ""}
args["temp_name"] = impl.new_table.name
args["colnames"] = ", ".join(
[
impl.new_table.c[name].name
for name in colnames
if name in impl.table.c
]
)
args["tname_colnames"] = ", ".join(
"CAST(%(schema)stname.%(name)s AS %(type)s) AS %(cast_label)s"
% {
"schema": args["schema"],
"name": name,
"type": impl.new_table.c[name].type,
"cast_label": name if sqla_14 else "anon_1",
}
if (
impl.new_table.c[name].type._type_affinity
is not impl.table.c[name].type._type_affinity
)
else "%(schema)stname.%(name)s"
% {"schema": args["schema"], "name": name}
for name in colnames
if name in impl.table.c
)
expected.extend(
[
"INSERT INTO %(schema)s%(temp_name)s (%(colnames)s) "
"SELECT %(tname_colnames)s FROM %(schema)stname" % args,
"DROP TABLE %(schema)stname" % args,
"ALTER TABLE %(schema)s%(temp_name)s "
"RENAME TO %(schema)stname" % args,
]
)
if idx_stmt:
expected.append(idx_stmt)
context.assert_(*expected)
return impl.new_table
def test_change_type(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", type_=String)
new_table = self._assert_impl(impl)
assert new_table.c.x.type._type_affinity is String
def test_rename_col(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", name="q")
new_table = self._assert_impl(impl)
eq_(new_table.c.x.name, "q")
def test_rename_col_w_index(self):
impl = self._ix_fixture()
impl.alter_column("tname", "y", name="y2")
new_table = self._assert_impl(
impl, ddl_contains="CREATE INDEX ix1 ON tname (y2)"
)
eq_(new_table.c.y.name, "y2")
def test_rename_col_w_uq(self):
impl = self._uq_fixture()
impl.alter_column("tname", "y", name="y2")
new_table = self._assert_impl(impl, ddl_contains="UNIQUE (y2)")
eq_(new_table.c.y.name, "y2")
def test_alter_column_comment(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", comment="some comment")
new_table = self._assert_impl(impl)
eq_(new_table.c.x.comment, "some comment")
def test_add_column_comment(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("q", Integer, comment="some comment"))
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "q"])
eq_(new_table.c.q.comment, "some comment")
def test_rename_col_boolean(self):
impl = self._boolean_fixture()
impl.alter_column("tname", "flag", name="bflag")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (bflag IN (0, 1)",
colnames=["id", "flag"],
)
eq_(new_table.c.flag.name, "bflag")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
1,
)
def test_change_type_schematype_to_non(self):
impl = self._boolean_fixture()
impl.alter_column("tname", "flag", type_=Integer)
new_table = self._assert_impl(
impl, colnames=["id", "flag"], ddl_not_contains="CHECK"
)
assert new_table.c.flag.type._type_affinity is Integer
# NOTE: we can't do test_change_type_non_to_schematype
# at this level because the "add_constraint" part of this
# comes from toimpl.py, which we aren't testing here
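    # For reference, the user-facing form of the non-schematype ->
    # schematype change (covered by
    # BatchRoundTripTest.test_change_type_int_to_boolean) is:
    #
    #     with op.batch_alter_table("hasbool") as batch_op:
    #         batch_op.alter_column(
    #             "x", type_=Boolean(create_constraint=True, name="ck1")
    #         )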
def test_rename_col_boolean_no_ck(self):
impl = self._boolean_no_ck_fixture()
impl.alter_column("tname", "flag", name="bflag")
new_table = self._assert_impl(
impl, ddl_not_contains="CHECK", colnames=["id", "flag"]
)
eq_(new_table.c.flag.name, "bflag")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
0,
)
def test_rename_col_enum(self):
impl = self._enum_fixture()
impl.alter_column("tname", "thing", name="thang")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (thang IN ('a', 'b', 'c')",
colnames=["id", "thing"],
)
eq_(new_table.c.thing.name, "thang")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
1,
)
def test_rename_col_literal_ck(self):
impl = self._literal_ck_fixture()
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
            # note this is wrong: we don't dig into the SQL
impl,
ddl_contains="CHECK (email LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_rename_col_literal_ck_workaround(self):
impl = self._literal_ck_fixture(
copy_from=Table(
"tname",
MetaData(),
Column("id", Integer, primary_key=True),
Column("email", String),
),
table_args=[CheckConstraint("emol LIKE '%@%'")],
)
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (emol LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_rename_col_sql_ck(self):
impl = self._sql_ck_fixture()
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (emol LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_add_col(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col)
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "g"])
eq_(new_table.c.g.name, "g")
def test_partial_reordering(self):
impl = self._simple_fixture(partial_reordering=[("x", "id", "y")])
new_table = self._assert_impl(impl, colnames=["x", "id", "y"])
eq_(new_table.c.x.name, "x")
def test_add_col_partial_reordering(self):
impl = self._simple_fixture(partial_reordering=[("id", "x", "g", "y")])
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col)
new_table = self._assert_impl(impl, colnames=["id", "x", "g", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col, insert_before="x")
new_table = self._assert_impl(impl, colnames=["id", "g", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_beginning(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_before="id")
new_table = self._assert_impl(impl, colnames=["g", "id", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_middle(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_before="y")
new_table = self._assert_impl(impl, colnames=["id", "x", "g", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_middle(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
new_table = self._assert_impl(impl, colnames=["id", "g", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_penultimate(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="x")
self._assert_impl(impl, colnames=["id", "x", "g", "y"])
def test_add_col_insert_after_end(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="y")
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "g"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_plus_no_order(self):
impl = self._simple_fixture()
# operations.add_column produces a table
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer))
new_table = self._assert_impl(
impl, colnames=["id", "g", "x", "y", "q"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_no_order_plus_insert_after(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", Column("q", Integer))
impl.add_column("tname", Column("g", Integer), insert_after="id")
new_table = self._assert_impl(
impl, colnames=["id", "g", "x", "y", "q"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_another_insert(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer), insert_after="g")
new_table = self._assert_impl(
impl, colnames=["id", "g", "q", "x", "y"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_another_insert(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer), insert_before="g")
new_table = self._assert_impl(
impl, colnames=["id", "q", "g", "x", "y"]
)
eq_(new_table.c.g.name, "g")
def test_add_server_default(self):
impl = self._simple_fixture()
impl.alter_column("tname", "y", server_default="10")
new_table = self._assert_impl(impl, ddl_contains="DEFAULT '10'")
eq_(new_table.c.y.server_default.arg, "10")
def test_drop_server_default(self):
impl = self._server_default_fixture()
impl.alter_column("tname", "thing", server_default=None)
new_table = self._assert_impl(
impl, colnames=["id", "thing"], ddl_not_contains="DEFAULT"
)
eq_(new_table.c.thing.server_default, None)
def test_rename_col_pk(self):
impl = self._simple_fixture()
impl.alter_column("tname", "id", name="foobar")
new_table = self._assert_impl(
impl, ddl_contains="PRIMARY KEY (foobar)"
)
eq_(new_table.c.id.name, "foobar")
eq_(list(new_table.primary_key), [new_table.c.id])
def test_rename_col_fk(self):
impl = self._fk_fixture()
impl.alter_column("tname", "user_id", name="foobar")
new_table = self._assert_impl(
impl,
colnames=["id", "email", "user_id"],
ddl_contains='FOREIGN KEY(foobar) REFERENCES "user" (id)',
)
eq_(new_table.c.user_id.name, "foobar")
eq_(
list(new_table.c.user_id.foreign_keys)[0]._get_colspec(), "user.id"
)
def test_regen_multi_fk(self):
impl = self._multi_fk_fixture()
self._assert_impl(
impl,
colnames=[
"id",
"email",
"user_id_1",
"user_id_2",
"user_id_3",
"user_id_version",
],
ddl_contains="FOREIGN KEY(user_id_3, user_id_version) "
'REFERENCES "user" (id, id_version)',
)
def test_regen_multi_fk_schema(self):
impl = self._multi_fk_fixture(schema="foo_schema")
self._assert_impl(
impl,
colnames=[
"id",
"email",
"user_id_1",
"user_id_2",
"user_id_3",
"user_id_version",
],
ddl_contains="FOREIGN KEY(user_id_3, user_id_version) "
'REFERENCES foo_schema."user" (id, id_version)',
schema="foo_schema",
)
def test_do_not_add_existing_columns_columns(self):
impl = self._multi_fk_fixture()
meta = impl.table.metadata
cid = Column("id", Integer())
user = Table("user", meta, cid)
fk = [
c
for c in impl.unnamed_constraints
if isinstance(c, ForeignKeyConstraint)
]
impl._setup_referent(meta, fk[0])
is_(user.c.id, cid)
def test_drop_col(self):
impl = self._simple_fixture()
impl.drop_column("tname", column("x"))
new_table = self._assert_impl(impl, colnames=["id", "y"])
assert "y" in new_table.c
assert "x" not in new_table.c
def test_drop_col_remove_pk(self):
impl = self._simple_fixture()
impl.drop_column("tname", column("id"))
new_table = self._assert_impl(
impl, colnames=["x", "y"], ddl_not_contains="PRIMARY KEY"
)
assert "y" in new_table.c
assert "id" not in new_table.c
assert not new_table.primary_key
def test_drop_col_remove_fk(self):
impl = self._fk_fixture()
impl.drop_column("tname", column("user_id"))
new_table = self._assert_impl(
impl, colnames=["id", "email"], ddl_not_contains="FOREIGN KEY"
)
assert "user_id" not in new_table.c
assert not new_table.foreign_keys
def test_drop_col_retain_fk(self):
impl = self._fk_fixture()
impl.drop_column("tname", column("email"))
new_table = self._assert_impl(
impl,
colnames=["id", "user_id"],
ddl_contains='FOREIGN KEY(user_id) REFERENCES "user" (id)',
)
assert "email" not in new_table.c
assert new_table.c.user_id.foreign_keys
def test_drop_col_retain_fk_selfref(self):
impl = self._selfref_fk_fixture()
impl.drop_column("tname", column("data"))
new_table = self._assert_impl(impl, colnames=["id", "parent_id"])
assert "data" not in new_table.c
assert new_table.c.parent_id.foreign_keys
def test_add_fk(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("user_id", Integer))
fk = self.op.schema_obj.foreign_key_constraint(
"fk1", "tname", "user", ["user_id"], ["id"]
)
impl.add_constraint(fk)
new_table = self._assert_impl(
impl,
colnames=["id", "x", "y", "user_id"],
ddl_contains="CONSTRAINT fk1 FOREIGN KEY(user_id) "
'REFERENCES "user" (id)',
)
eq_(
list(new_table.c.user_id.foreign_keys)[0]._get_colspec(), "user.id"
)
def test_drop_fk(self):
impl = self._named_fk_fixture()
fk = ForeignKeyConstraint([], [], name="ufk")
impl.drop_constraint(fk)
new_table = self._assert_impl(
impl,
colnames=["id", "email", "user_id"],
ddl_not_contains="CONSTRAINT ufk",
)
eq_(list(new_table.foreign_keys), [])
def test_add_uq(self):
impl = self._simple_fixture()
uq = self.op.schema_obj.unique_constraint("uq1", "tname", ["y"])
impl.add_constraint(uq)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CONSTRAINT uq1 UNIQUE",
)
def test_drop_uq(self):
impl = self._uq_fixture()
uq = self.op.schema_obj.unique_constraint("uq1", "tname", ["y"])
impl.drop_constraint(uq)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT uq1 UNIQUE",
)
def test_add_ck_unnamed(self):
"""test for #1195"""
impl = self._simple_fixture()
ck = self.op.schema_obj.check_constraint(_NONE_NAME, "tname", "y > 5")
impl.add_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CHECK (y > 5)",
)
def test_add_ck(self):
impl = self._simple_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.add_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_drop_ck_table(self):
impl = self._named_ck_table_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.drop_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_drop_ck_col(self):
impl = self._named_ck_col_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.drop_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_create_index(self):
impl = self._simple_fixture()
ix = self.op.schema_obj.index("ix1", "tname", ["y"])
impl.create_index(ix)
self._assert_impl(
impl, colnames=["id", "x", "y"], ddl_contains="CREATE INDEX ix1"
)
def test_drop_index(self):
impl = self._ix_fixture()
ix = self.op.schema_obj.index("ix1", "tname", ["y"])
impl.drop_index(ix)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT uq1 UNIQUE",
)
def test_add_table_opts(self):
impl = self._simple_fixture(table_kwargs={"mysql_engine": "InnoDB"})
self._assert_impl(impl, ddl_contains="ENGINE=InnoDB", dialect="mysql")
def test_drop_pk(self):
impl = self._pk_fixture()
pk = self.op.schema_obj.primary_key_constraint("mypk", "tname", ["id"])
impl.drop_constraint(pk)
new_table = self._assert_impl(impl)
assert not new_table.c.id.primary_key
assert not len(new_table.primary_key)
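# Hedged sketch for the recreate="never" mode that BatchAPITest exercises
# below: batch emits plain ALTER statements and never copies the table, so
# any change the backend cannot ALTER directly will fail outright.  Names
# are hypothetical and the helper is never invoked here.
def _example_recreate_never():
    from sqlalchemy import Column, String
    from alembic import op
    with op.batch_alter_table("account", recreate="never") as batch_op:
        batch_op.add_column(Column("status", String(20)))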
class BatchAPITest(TestBase):
@contextmanager
def _fixture(self, schema=None):
migration_context = mock.Mock(
opts={},
impl=mock.MagicMock(__dialect__="sqlite", connection=object()),
)
op = Operations(migration_context)
batch = op.batch_alter_table(
"tname", recreate="never", schema=schema
).__enter__()
mock_schema = mock.MagicMock()
with mock.patch("alembic.operations.schemaobj.sa_schema", mock_schema):
yield batch
batch.impl.flush()
self.mock_schema = mock_schema
def test_drop_col(self):
with self._fixture() as batch:
batch.drop_column("q")
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.drop_column(
"tname", self.mock_schema.Column(), schema=None
)
],
)
def test_add_col(self):
column = Column("w", String(50))
with self._fixture() as batch:
batch.add_column(column)
assert (
mock.call.add_column("tname", column, schema=None)
in batch.impl.operations.impl.mock_calls
)
def test_create_fk(self):
with self._fixture() as batch:
batch.create_foreign_key("myfk", "user", ["x"], ["y"])
eq_(
self.mock_schema.ForeignKeyConstraint.mock_calls,
[
mock.call(
["x"],
["user.y"],
onupdate=None,
ondelete=None,
name="myfk",
initially=None,
deferrable=None,
match=None,
)
],
)
eq_(
self.mock_schema.Table.mock_calls,
[
mock.call(
"user",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call(
"tname",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call().append_constraint(
self.mock_schema.ForeignKeyConstraint()
),
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.ForeignKeyConstraint()
)
],
)
def test_create_fk_schema(self):
with self._fixture(schema="foo") as batch:
batch.create_foreign_key("myfk", "user", ["x"], ["y"])
eq_(
self.mock_schema.ForeignKeyConstraint.mock_calls,
[
mock.call(
["x"],
["user.y"],
onupdate=None,
ondelete=None,
name="myfk",
initially=None,
deferrable=None,
match=None,
)
],
)
eq_(
self.mock_schema.Table.mock_calls,
[
mock.call(
"user",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call(
"tname",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema="foo",
),
mock.call().append_constraint(
self.mock_schema.ForeignKeyConstraint()
),
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.ForeignKeyConstraint()
)
],
)
def test_create_uq(self):
with self._fixture() as batch:
batch.create_unique_constraint("uq1", ["a", "b"])
eq_(
self.mock_schema.Table().c.__getitem__.mock_calls,
[mock.call("a"), mock.call("b")],
)
eq_(
self.mock_schema.UniqueConstraint.mock_calls,
[
mock.call(
self.mock_schema.Table().c.__getitem__(),
self.mock_schema.Table().c.__getitem__(),
name="uq1",
)
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.add_constraint(self.mock_schema.UniqueConstraint())],
)
def test_create_pk(self):
with self._fixture() as batch:
batch.create_primary_key("pk1", ["a", "b"])
eq_(
self.mock_schema.Table().c.__getitem__.mock_calls,
[mock.call("a"), mock.call("b")],
)
eq_(
self.mock_schema.PrimaryKeyConstraint.mock_calls,
[
mock.call(
self.mock_schema.Table().c.__getitem__(),
self.mock_schema.Table().c.__getitem__(),
name="pk1",
)
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.PrimaryKeyConstraint()
)
],
)
def test_create_check(self):
expr = text("a > b")
with self._fixture() as batch:
batch.create_check_constraint("ck1", expr)
eq_(
self.mock_schema.CheckConstraint.mock_calls,
[mock.call(expr, name="ck1")],
)
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.add_constraint(self.mock_schema.CheckConstraint())],
)
def test_drop_constraint(self):
with self._fixture() as batch:
batch.drop_constraint("uq1")
eq_(self.mock_schema.Constraint.mock_calls, [mock.call(name="uq1")])
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.drop_constraint(self.mock_schema.Constraint())],
)
class CopyFromTest(TestBase):
def _fixture(self):
self.metadata = MetaData()
self.table = Table(
"foo",
self.metadata,
Column("id", Integer, primary_key=True),
Column("data", String(50)),
Column("x", Integer),
)
context = op_fixture(dialect="sqlite", as_sql=True)
self.op = Operations(context)
return context
def test_change_type(self):
context = self._fixture()
self.table.append_column(Column("toj", Text))
self.table.append_column(Column("fromj", JSON))
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column("data", type_=Integer)
batch_op.alter_column("toj", type_=JSON)
batch_op.alter_column("fromj", type_=Text)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data INTEGER, x INTEGER, toj JSON, fromj TEXT, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, toj, fromj) "
"SELECT foo.id, "
"CAST(foo.data AS INTEGER) AS %s, foo.x, foo.toj, "
"CAST(foo.fromj AS TEXT) AS %s FROM foo"
% (
("data" if sqla_14 else "anon_1"),
("fromj" if sqla_14 else "anon_2"),
),
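            # (SQLAlchemy 1.4 labels the CAST with the column name, where
            # 1.3 generated anonymous labels such as "anon_1")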
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_type_from_schematype(self):
context = self._fixture()
self.table.append_column(
Column("y", Boolean(create_constraint=True, name="ck1"))
)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
"y",
type_=Integer,
existing_type=Boolean(create_constraint=True, name="ck1"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, y INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, y) SELECT foo.id, "
"foo.data, foo.x, CAST(foo.y AS INTEGER) AS %s FROM foo"
% (("y" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_name_from_existing_variant_type(self):
"""test #982"""
context = self._fixture()
self.table.append_column(
Column("y", Text().with_variant(Text(10000), "mysql"))
)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
column_name="y",
new_column_name="q",
existing_type=Text().with_variant(Text(10000), "mysql"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, q TEXT, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, q) "
"SELECT foo.id, foo.data, foo.x, foo.y FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_type_to_schematype(self):
context = self._fixture()
self.table.append_column(Column("y", Integer))
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
"y",
existing_type=Integer,
type_=Boolean(create_constraint=True, name="ck1"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, y BOOLEAN, PRIMARY KEY (id), "
"CONSTRAINT ck1 CHECK (y IN (0, 1)))",
"INSERT INTO _alembic_tmp_foo (id, data, x, y) SELECT foo.id, "
"foo.data, foo.x, CAST(foo.y AS BOOLEAN) AS %s FROM foo"
% (("y" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_create_drop_index_w_always(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table, recreate="always"
) as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), "
"x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) "
"SELECT foo.id, foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
"CREATE UNIQUE INDEX ix_data ON foo (data)",
)
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table, recreate="always"
) as batch_op:
batch_op.drop_index("ix_data")
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) "
"SELECT foo.id, foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_create_drop_index_wo_always(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_("CREATE UNIQUE INDEX ix_data ON foo (data)")
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.drop_index("ix_data")
context.assert_("DROP INDEX ix_data")
def test_create_drop_index_w_other_ops(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column("data", type_=Integer)
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data INTEGER, x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) SELECT foo.id, "
"CAST(foo.data AS INTEGER) AS %s, foo.x FROM foo"
% (("data" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
"CREATE UNIQUE INDEX ix_data ON foo (data)",
)
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.drop_index("ix_data")
batch_op.alter_column("data", type_=String)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR, x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) SELECT foo.id, "
"foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
class BatchRoundTripTest(TestBase):
__only_on__ = "sqlite"
def setUp(self):
self.conn = config.db.connect()
self.metadata = MetaData()
t1 = Table(
"foo",
self.metadata,
Column("id", Integer, primary_key=True),
Column("data", String(50)),
Column("x", Integer),
mysql_engine="InnoDB",
)
with self.conn.begin():
t1.create(self.conn)
self.conn.execute(
t1.insert(),
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
],
)
context = MigrationContext.configure(self.conn)
self.op = Operations(context)
def tearDown(self):
# why commit? because SQLite has inconsistent treatment
# of transactional DDL. A test that runs CREATE TABLE and then
# ALTER TABLE to change the name of that table, will end up
# committing the CREATE TABLE but not the ALTER. As batch mode
# does this with a temp table name that's not even in the
# metadata collection, we don't have an explicit drop for it
# (though we could do that too). calling commit means the
# ALTER will go through and the drop_all() will then catch it.
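        # an illustrative sketch of the quirk described above (not executed
        # here; behavior assumed from the comment above):
        #   conn.exec_driver_sql("CREATE TABLE t (x INTEGER)")
        #   conn.exec_driver_sql("ALTER TABLE t RENAME TO t2")
        #   # a rollback now can leave the CREATE committed, the ALTER lost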
_safe_commit_connection_transaction(self.conn)
with self.conn.begin():
self.metadata.drop_all(self.conn)
self.conn.close()
@contextmanager
def _sqlite_referential_integrity(self):
self.conn.exec_driver_sql("PRAGMA foreign_keys=ON")
try:
yield
finally:
self.conn.exec_driver_sql("PRAGMA foreign_keys=OFF")
            # as these tests typically fail intentionally, clean out
            # any tables left over
m = MetaData()
m.reflect(self.conn)
with self.conn.begin():
m.drop_all(self.conn)
def _no_pk_fixture(self):
with self.conn.begin():
nopk = Table(
"nopk",
self.metadata,
Column("a", Integer),
Column("b", Integer),
Column("c", Integer),
mysql_engine="InnoDB",
)
nopk.create(self.conn)
self.conn.execute(
nopk.insert(),
[{"a": 1, "b": 2, "c": 3}, {"a": 2, "b": 4, "c": 5}],
)
return nopk
def _table_w_index_fixture(self):
with self.conn.begin():
t = Table(
"t_w_ix",
self.metadata,
Column("id", Integer, primary_key=True),
Column("thing", Integer),
Column("data", String(20)),
)
Index("ix_thing", t.c.thing)
t.create(self.conn)
return t
def _boolean_fixture(self):
with self.conn.begin():
t = Table(
"hasbool",
self.metadata,
Column("x", Boolean(create_constraint=True, name="ck1")),
Column("y", Integer),
)
t.create(self.conn)
def _timestamp_fixture(self):
with self.conn.begin():
t = Table("hasts", self.metadata, Column("x", DateTime()))
t.create(self.conn)
return t
def _ck_constraint_fixture(self):
with self.conn.begin():
t = Table(
"ck_table",
self.metadata,
Column("id", Integer, nullable=False),
CheckConstraint("id is not NULL", name="ck"),
)
t.create(self.conn)
return t
def _datetime_server_default_fixture(self):
return func.datetime("now", "localtime")
def _timestamp_w_expr_default_fixture(self):
with self.conn.begin():
t = Table(
"hasts",
self.metadata,
Column(
"x",
DateTime(),
server_default=self._datetime_server_default_fixture(),
nullable=False,
),
)
t.create(self.conn)
return t
def _int_to_boolean_fixture(self):
with self.conn.begin():
t = Table("hasbool", self.metadata, Column("x", Integer))
t.create(self.conn)
def test_add_constraint_type(self):
"""test for #1195."""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(Column("q", Boolean(create_constraint=True)))
insp = inspect(self.conn)
assert {
c["type"]._type_affinity
for c in insp.get_columns("foo")
if c["name"] == "q"
}.intersection([Boolean, Integer])
def test_change_type_boolean_to_int(self):
self._boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.alter_column(
"x",
type_=Integer,
existing_type=Boolean(create_constraint=True, name="ck1"),
)
insp = inspect(self.conn)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Integer],
)
def test_no_net_change_timestamp(self):
t = self._timestamp_fixture()
import datetime
with self.conn.begin():
self.conn.execute(
t.insert(), {"x": datetime.datetime(2012, 5, 18, 15, 32, 5)}
)
with self.op.batch_alter_table("hasts") as batch_op:
batch_op.alter_column("x", type_=DateTime())
eq_(
self.conn.execute(_select(t.c.x)).fetchall(),
[(datetime.datetime(2012, 5, 18, 15, 32, 5),)],
)
def test_no_net_change_timestamp_w_default(self):
t = self._timestamp_w_expr_default_fixture()
with self.op.batch_alter_table("hasts") as batch_op:
batch_op.alter_column(
"x",
type_=DateTime(),
nullable=False,
server_default=self._datetime_server_default_fixture(),
)
with self.conn.begin():
self.conn.execute(t.insert())
res = self.conn.execute(_select(t.c.x))
if sqla_14:
assert res.scalar_one_or_none() is not None
else:
row = res.fetchone()
assert row["x"] is not None
def test_drop_col_schematype(self):
self._boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.drop_column(
"x", existing_type=Boolean(create_constraint=True, name="ck1")
)
insp = inspect(self.conn)
assert "x" not in (c["name"] for c in insp.get_columns("hasbool"))
def test_change_type_int_to_boolean(self):
self._int_to_boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.alter_column(
"x", type_=Boolean(create_constraint=True, name="ck1")
)
insp = inspect(self.conn)
if exclusions.against(config, "sqlite"):
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Boolean],
)
elif exclusions.against(config, "mysql"):
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Integer],
)
def _assert_data(self, data, tablename="foo"):
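        # compare the full table contents as a list of dicts; .mappings()
        # is required on SQLAlchemy 1.4 to get dict-like rows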
res = self.conn.execute(text("select * from %s" % tablename))
if sqla_14:
res = res.mappings()
eq_([dict(row) for row in res], data)
def test_ix_existing(self):
self._table_w_index_fixture()
with self.op.batch_alter_table("t_w_ix") as batch_op:
batch_op.alter_column("data", type_=String(30))
batch_op.create_index("ix_data", ["data"])
insp = inspect(self.conn)
eq_(
{
(ix["name"], tuple(ix["column_names"]))
for ix in insp.get_indexes("t_w_ix")
},
{("ix_data", ("data",)), ("ix_thing", ("thing",))},
)
def test_fk_points_to_me_auto(self):
self._test_fk_points_to_me("auto")
# in particular, this tests that the failures
# on PG and MySQL result in recovery of the batch system,
# e.g. that the _alembic_tmp_temp table is dropped
@config.requirements.no_referential_integrity
def test_fk_points_to_me_recreate(self):
self._test_fk_points_to_me("always")
@exclusions.only_on("sqlite")
@exclusions.fails(
"intentionally asserting that this "
"doesn't work w/ pragma foreign keys"
)
def test_fk_points_to_me_sqlite_refinteg(self):
with self._sqlite_referential_integrity():
self._test_fk_points_to_me("auto")
def _test_fk_points_to_me(self, recreate):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("foo_id", Integer, ForeignKey("foo.id")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "foo_id": 3})
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.alter_column(
"data", new_column_name="newdata", existing_type=String(50)
)
insp = inspect(self.conn)
eq_(
[
(
key["referred_table"],
key["referred_columns"],
key["constrained_columns"],
)
for key in insp.get_foreign_keys("bar")
],
[("foo", ["id"], ["foo_id"])],
)
def test_selfref_fk_auto(self):
self._test_selfref_fk("auto")
@config.requirements.no_referential_integrity
def test_selfref_fk_recreate(self):
self._test_selfref_fk("always")
@exclusions.only_on("sqlite")
@exclusions.fails(
"intentionally asserting that this "
"doesn't work w/ pragma foreign keys"
)
def test_selfref_fk_sqlite_refinteg(self):
with self._sqlite_referential_integrity():
self._test_selfref_fk("auto")
def _test_selfref_fk(self, recreate):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("bar_id", Integer, ForeignKey("bar.id")),
Column("data", String(50)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(
bar.insert(), {"id": 1, "data": "x", "bar_id": None}
)
self.conn.execute(
bar.insert(), {"id": 2, "data": "y", "bar_id": 1}
)
with self.op.batch_alter_table("bar", recreate=recreate) as batch_op:
batch_op.alter_column(
"data", new_column_name="newdata", existing_type=String(50)
)
insp = inspect(self.conn)
eq_(
[
(
key["referred_table"],
key["referred_columns"],
key["constrained_columns"],
)
for key in insp.get_foreign_keys("bar")
],
[("bar", ["id"], ["bar_id"])],
)
def test_change_type(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("data", type_=Integer)
self._assert_data(
[
{"id": 1, "data": 0, "x": 5},
{"id": 2, "data": 22, "x": 6},
{"id": 3, "data": 8, "x": 7},
{"id": 4, "data": 9, "x": 8},
{"id": 5, "data": 0, "x": 9},
]
)
def test_drop_column(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("data")
self._assert_data(
[
{"id": 1, "x": 5},
{"id": 2, "x": 6},
{"id": 3, "x": 7},
{"id": 4, "x": 8},
{"id": 5, "x": 9},
]
)
def test_drop_pk_col_readd_col(self):
# drop a column, add it back without primary_key=True, should no
# longer be in the constraint
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer))
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], [])
def test_drop_pk_col_readd_pk_col(self):
# drop a column, add it back with primary_key=True, should remain
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer, primary_key=True))
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], ["id"])
def test_drop_pk_col_readd_col_also_pk_const(self):
# drop a column, add it back without primary_key=True, but then
        # also make a new PK constraint that includes it, should remain
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer))
batch_op.create_primary_key("newpk", ["id"])
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], ["id"])
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_pk_constraint(self, recreate):
self._no_pk_fixture()
with self.op.batch_alter_table("nopk", recreate=recreate) as batch_op:
batch_op.create_primary_key("newpk", ["a", "b"])
pk_const = inspect(self.conn).get_pk_constraint("nopk")
with config.requirements.reflects_pk_names.fail_if():
eq_(pk_const["name"], "newpk")
eq_(pk_const["constrained_columns"], ["a", "b"])
@testing.combinations(("always",), ("auto",), argnames="recreate")
@config.requirements.check_constraint_reflection
def test_add_ck_constraint(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.create_check_constraint("newck", text("x > 0"))
ck_consts = inspect(self.conn).get_check_constraints("foo")
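        # backends quote and parenthesize the reflected CHECK SQL
        # differently, so strip those characters before comparing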
ck_consts[0]["sqltext"] = re.sub(
r"[\'\"`\(\)]", "", ck_consts[0]["sqltext"]
)
for ck in ck_consts:
ck.pop("comment", None)
eq_(ck_consts, [{"sqltext": "x > 0", "name": "newck"}])
@testing.combinations(("always",), ("auto",), argnames="recreate")
@config.requirements.check_constraint_reflection
def test_drop_ck_constraint(self, recreate):
self._ck_constraint_fixture()
with self.op.batch_alter_table(
"ck_table", recreate=recreate
) as batch_op:
batch_op.drop_constraint("ck", type_="check")
ck_consts = inspect(self.conn).get_check_constraints("ck_table")
eq_(ck_consts, [])
@config.requirements.check_constraint_reflection
def test_drop_ck_constraint_legacy_type(self):
self._ck_constraint_fixture()
with self.op.batch_alter_table(
"ck_table", recreate="always"
) as batch_op:
# matches the docs that were written for this originally
batch_op.drop_constraint("ck", "check")
ck_consts = inspect(self.conn).get_check_constraints("ck_table")
eq_(ck_consts, [])
@config.requirements.unnamed_constraints
def test_drop_foreign_key(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("foo_id", Integer, ForeignKey("foo.id")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "foo_id": 3})
naming_convention = {
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s"
}
with self.op.batch_alter_table(
"bar", naming_convention=naming_convention
) as batch_op:
batch_op.drop_constraint("fk_bar_foo_id_foo", type_="foreignkey")
eq_(inspect(self.conn).get_foreign_keys("bar"), [])
def test_drop_column_fk_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.drop_column("data")
self._assert_data(
[
{"id": 1, "x": 5},
{"id": 2, "x": 6},
{"id": 3, "x": 7},
{"id": 4, "x": 8},
{"id": 5, "x": 9},
]
)
def _assert_table_comment(self, tname, comment):
insp = inspect(self.conn)
tcomment = insp.get_table_comment(tname)
eq_(tcomment, {"text": comment})
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_uq(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.create_unique_constraint("newuk", ["x"])
uq_consts = inspect(self.conn).get_unique_constraints("foo")
eq_(
[
{"name": uc["name"], "column_names": uc["column_names"]}
for uc in uq_consts
],
[{"name": "newuk", "column_names": ["x"]}],
)
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_uq_plus_col(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.add_column(Column("y", Integer))
batch_op.create_unique_constraint("newuk", ["x", "y"])
uq_consts = inspect(self.conn).get_unique_constraints("foo")
eq_(
[
{"name": uc["name"], "column_names": uc["column_names"]}
for uc in uq_consts
],
[{"name": "newuk", "column_names": ["x", "y"]}],
)
@config.requirements.comments
def test_add_table_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment("some comment")
self._assert_table_comment("foo", "some comment")
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment(
"some new comment", existing_comment="some comment"
)
self._assert_table_comment("foo", "some new comment")
@config.requirements.comments
def test_drop_table_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment("some comment")
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_table_comment(existing_comment="some comment")
self._assert_table_comment("foo", None)
def _assert_column_comment(self, tname, cname, comment):
insp = inspect(self.conn)
cols = {col["name"]: col for col in insp.get_columns(tname)}
eq_(cols[cname]["comment"], comment)
@config.requirements.comments
def test_add_column_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(Column("y", Integer, comment="some comment"))
self._assert_column_comment("foo", "y", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "y": None},
{"id": 2, "data": "22", "x": 6, "y": None},
{"id": 3, "data": "8.5", "x": 7, "y": None},
{"id": 4, "data": "9.46", "x": 8, "y": None},
{"id": 5, "data": "d5", "x": 9, "y": None},
]
)
@config.requirements.comments
def test_add_column_comment_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(Column("y", Integer, comment="some comment"))
self._assert_column_comment("foo", "y", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "y": None},
{"id": 2, "data": "22", "x": 6, "y": None},
{"id": 3, "data": "8.5", "x": 7, "y": None},
{"id": 4, "data": "9.46", "x": 8, "y": None},
{"id": 5, "data": "d5", "x": 9, "y": None},
]
)
@config.requirements.comments
def test_alter_column_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column(
"x", existing_type=Integer(), comment="some comment"
)
self._assert_column_comment("foo", "x", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
@config.requirements.comments
def test_alter_column_comment_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.alter_column("x", comment="some comment")
self._assert_column_comment("foo", "x", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
def test_rename_column(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("x", new_column_name="y")
self._assert_data(
[
{"id": 1, "data": "d1", "y": 5},
{"id": 2, "data": "22", "y": 6},
{"id": 3, "data": "8.5", "y": 7},
{"id": 4, "data": "9.46", "y": 8},
{"id": 5, "data": "d5", "y": 9},
]
)
def test_rename_column_boolean(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
with self.op.batch_alter_table("bar") as batch_op:
batch_op.alter_column(
"flag", new_column_name="bflag", existing_type=Boolean
)
self._assert_data(
[{"id": 1, "bflag": True}, {"id": 2, "bflag": False}], "bar"
)
# @config.requirements.check_constraint_reflection
def test_rename_column_boolean_named_ck(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True, name="ck1")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
with self.op.batch_alter_table("bar", recreate="always") as batch_op:
batch_op.alter_column(
"flag",
new_column_name="bflag",
existing_type=Boolean(create_constraint=True, name="ck1"),
)
self._assert_data(
[{"id": 1, "bflag": True}, {"id": 2, "bflag": False}], "bar"
)
@config.requirements.non_native_boolean
def test_rename_column_non_native_boolean_no_ck(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=False)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
self.conn.execute(
# override Boolean type which as of 1.1 coerces numerics
# to 1/0
text("insert into bar (id, flag) values (:id, :flag)"),
{"id": 3, "flag": 5},
)
with self.op.batch_alter_table(
"bar",
reflect_args=[Column("flag", Boolean(create_constraint=False))],
) as batch_op:
batch_op.alter_column(
"flag", new_column_name="bflag", existing_type=Boolean
)
self._assert_data(
[
{"id": 1, "bflag": True},
{"id": 2, "bflag": False},
{"id": 3, "bflag": 5},
],
"bar",
)
def test_drop_column_pk(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
self._assert_data(
[
{"data": "d1", "x": 5},
{"data": "22", "x": 6},
{"data": "8.5", "x": 7},
{"data": "9.46", "x": 8},
{"data": "d5", "x": 9},
]
)
def test_rename_column_pk(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("id", new_column_name="ident")
self._assert_data(
[
{"ident": 1, "data": "d1", "x": 5},
{"ident": 2, "data": "22", "x": 6},
{"ident": 3, "data": "8.5", "x": 7},
{"ident": 4, "data": "9.46", "x": 8},
{"ident": 5, "data": "d5", "x": 9},
]
)
def test_add_column_auto(self):
# note this uses ALTER
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi")
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(config.db).get_columns("foo")],
["id", "data", "x", "data2"],
)
def test_add_column_auto_server_default_calculated(self):
"""test #883"""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column(
"data2",
DateTime(),
server_default=self._datetime_server_default_fixture(),
)
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": mock.ANY},
{"id": 2, "data": "22", "x": 6, "data2": mock.ANY},
{"id": 3, "data": "8.5", "x": 7, "data2": mock.ANY},
{"id": 4, "data": "9.46", "x": 8, "data2": mock.ANY},
{"id": 5, "data": "d5", "x": 9, "data2": mock.ANY},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
@testing.combinations((True,), (False,))
@testing.exclusions.only_on("sqlite")
@config.requirements.computed_columns
def test_add_column_auto_generated(self, persisted):
"""test #883"""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column(
"data2", Integer, Computed("1 + 1", persisted=persisted)
)
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": 2},
{"id": 2, "data": "22", "x": 6, "data2": 2},
{"id": 3, "data": "8.5", "x": 7, "data2": 2},
{"id": 4, "data": "9.46", "x": 8, "data2": 2},
{"id": 5, "data": "d5", "x": 9, "data2": 2},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
@config.requirements.identity_columns
def test_add_column_auto_identity(self):
"""test #883"""
self._no_pk_fixture()
with self.op.batch_alter_table("nopk") as batch_op:
batch_op.add_column(Column("id", Integer, Identity()))
self._assert_data(
[
{"a": 1, "b": 2, "c": 3, "id": 1},
{"a": 2, "b": 4, "c": 5, "id": 2},
],
tablename="nopk",
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x"],
)
def test_add_column_insert_before_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_before="data",
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data2", "data", "x"],
)
def test_add_column_insert_after_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_after="data",
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "data2", "x"],
)
def test_add_column_insert_before_raise_on_alter(self):
def go():
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_before="data",
)
assert_raises_message(
alembic_exc.CommandError,
"Can't specify insert_before or insert_after when using ALTER",
go,
)
def test_add_column_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi")
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
def test_create_drop_index(self):
insp = inspect(self.conn)
eq_(insp.get_indexes("foo"), [])
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
insp = inspect(self.conn)
eq_(
[
dict(
unique=ix["unique"],
name=ix["name"],
column_names=ix["column_names"],
)
for ix in insp.get_indexes("foo")
],
[{"unique": True, "name": "ix_data", "column_names": ["data"]}],
)
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.drop_index("ix_data")
insp = inspect(self.conn)
eq_(insp.get_indexes("foo"), [])
class BatchRoundTripMySQLTest(BatchRoundTripTest):
__only_on__ = "mysql", "mariadb"
__backend__ = True
def _datetime_server_default_fixture(self):
return func.current_timestamp()
@exclusions.fails()
def test_drop_pk_col_readd_pk_col(self):
super().test_drop_pk_col_readd_pk_col()
@exclusions.fails()
def test_drop_pk_col_readd_col_also_pk_const(self):
super().test_drop_pk_col_readd_col_also_pk_const()
@exclusions.fails()
def test_rename_column_pk(self):
super().test_rename_column_pk()
@exclusions.fails()
def test_rename_column(self):
super().test_rename_column()
@exclusions.fails()
def test_change_type(self):
super().test_change_type()
def test_create_drop_index(self):
super().test_create_drop_index()
# fails on mariadb 10.2, succeeds on 10.3
@exclusions.fails_if(config.requirements.mysql_check_col_name_change)
def test_rename_column_boolean(self):
super().test_rename_column_boolean()
def test_change_type_boolean_to_int(self):
super().test_change_type_boolean_to_int()
def test_change_type_int_to_boolean(self):
super().test_change_type_int_to_boolean()
class BatchRoundTripPostgresqlTest(BatchRoundTripTest):
__only_on__ = "postgresql"
__backend__ = True
def _native_boolean_fixture(self):
t = Table(
"has_native_bool",
self.metadata,
Column(
"x",
Boolean(create_constraint=True),
server_default="false",
nullable=False,
),
Column("y", Integer),
)
with self.conn.begin():
t.create(self.conn)
def _datetime_server_default_fixture(self):
return func.current_timestamp()
@exclusions.fails()
def test_drop_pk_col_readd_pk_col(self):
super().test_drop_pk_col_readd_pk_col()
@exclusions.fails()
def test_drop_pk_col_readd_col_also_pk_const(self):
super().test_drop_pk_col_readd_col_also_pk_const()
@exclusions.fails()
def test_change_type(self):
super().test_change_type()
def test_create_drop_index(self):
super().test_create_drop_index()
@exclusions.fails()
def test_change_type_int_to_boolean(self):
super().test_change_type_int_to_boolean()
@exclusions.fails()
def test_change_type_boolean_to_int(self):
super().test_change_type_boolean_to_int()
def test_add_col_table_has_native_boolean(self):
self._native_boolean_fixture()
# to ensure test coverage on SQLAlchemy 1.4 and above,
# force the create_constraint flag to True even though it
# defaults to false in 1.4. this test wants to ensure that the
# "should create" rule is consulted
def listen_for_reflect(inspector, table, column_info):
if isinstance(column_info["type"], Boolean):
column_info["type"].create_constraint = True
with self.op.batch_alter_table(
"has_native_bool",
recreate="always",
reflect_kwargs={
"listeners": [("column_reflect", listen_for_reflect)]
},
) as batch_op:
batch_op.add_column(Column("data", Integer))
insp = inspect(self.conn)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("has_native_bool")
if c["name"] == "data"
],
[Integer],
)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("has_native_bool")
if c["name"] == "x"
],
[Boolean],
)
class OfflineTest(TestBase):
@testing.fixture
def no_reflect_batch_fixture(self):
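        # yields a callable that writes a revision using copy_from, so
        # offline (--sql) batch mode can run without database reflection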
staging_env()
def go():
self.cfg = cfg = _no_sql_testing_config(dialect="sqlite")
self.a = a = util.rev_id()
script = ScriptDirectory.from_config(cfg)
script.generate_revision(
a, "revision a", refresh=True, head="base"
)
write_script(
script,
a,
"""\
"Rev A"
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import String, Table, MetaData
some_table_up = Table(
"some_table", MetaData(),
Column('id', Integer),
Column('bar', String)
)
some_table_down = Table(
"some_table", MetaData(),
Column('id', Integer),
Column('foo', Integer)
)
def upgrade():
with op.batch_alter_table("some_table", copy_from=some_table_up) as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
def downgrade():
with op.batch_alter_table("some_table", copy_from=some_table_down) as batch_op:
batch_op.drop_column('foo')
batch_op.add_column(Column('bar', String))
""" # noqa: E501
% a,
)
yield go
clear_staging_env()
@testing.fixture
def batch_fixture(self):
staging_env()
def go(dialect):
self.cfg = cfg = _no_sql_testing_config(dialect=dialect)
self.a = a = util.rev_id()
script = ScriptDirectory.from_config(cfg)
script.generate_revision(
a, "revision a", refresh=True, head="base"
)
write_script(
script,
a,
"""\
"Rev A"
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import String
def upgrade():
with op.batch_alter_table("some_table") as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
def downgrade():
with op.batch_alter_table("some_table") as batch_op:
batch_op.drop_column('foo')
batch_op.add_column(Column('bar', String))
"""
% a,
)
yield go
clear_staging_env()
def test_upgrade_non_batch(self, batch_fixture):
batch_fixture("postgresql")
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.a, sql=True)
assert re.search(
r"ALTER TABLE some_table ADD COLUMN foo INTEGER", buf.getvalue()
)
def test_downgrade_non_batch(self, batch_fixture):
batch_fixture("postgresql")
with capture_context_buffer() as buf:
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
assert re.search(
r"ALTER TABLE some_table DROP COLUMN foo", buf.getvalue()
)
def test_upgrade_batch_fails_gracefully(self, batch_fixture):
batch_fixture("sqlite")
with expect_raises_message(
CommandError,
"This operation cannot proceed in --sql mode; batch mode with "
"dialect sqlite requires a live database connection with which "
'to reflect the table "some_table"',
):
command.upgrade(self.cfg, self.a, sql=True)
def test_downgrade_batch_fails_gracefully(self, batch_fixture):
batch_fixture("sqlite")
with expect_raises_message(
CommandError,
"This operation cannot proceed in --sql mode; batch mode with "
"dialect sqlite requires a live database connection with which "
'to reflect the table "some_table"',
):
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
def test_upgrade_batch_no_reflection(self, no_reflect_batch_fixture):
no_reflect_batch_fixture()
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.a, sql=True)
assert re.search(
r"CREATE TABLE _alembic_tmp_some_table", buf.getvalue()
)
def test_downgrade_batch_no_reflection(self, no_reflect_batch_fixture):
no_reflect_batch_fixture()
with capture_context_buffer() as buf:
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
assert re.search(
r"CREATE TABLE _alembic_tmp_some_table", buf.getvalue()
)
| jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | I'll look into it | CaselIT | 7 |
sqlalchemy/alembic | 1,310 | Spelling fixes | Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
### Description
The misspellings have been reported at https://github.com/jsoref/alembic/actions/runs/6141700632
The action reports that the changes in this PR would make it happy: https://github.com/jsoref/alembic/actions/runs/6141700754
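For intuition, here is a minimal sketch of the kind of scan such a tool performs. It is illustrative only; the real check-spelling action is a packaged GitHub Action with its own dictionaries and expect lists, and the `KNOWN_TYPOS` map below is hypothetical.
```python
# Hypothetical sketch of a misspelling scan; not the check-spelling action.
import pathlib
import re

KNOWN_TYPOS = {"seperate": "separate", "recieve": "receive"}  # hypothetical

def report_typos(root: str) -> None:
    """Print file:line and a suggestion for each known misspelling."""
    word = re.compile(r"[A-Za-z]+")
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            for match in word.finditer(line):
                if match.group().lower() in KNOWN_TYPOS:
                    fix = KNOWN_TYPOS[match.group().lower()]
                    print(f"{path}:{lineno}: {match.group()!r} -> {fix!r}")

if __name__ == "__main__":
    report_typos(".")
```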
### Checklist
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | tests/test_batch.py | from contextlib import contextmanager
import re
from sqlalchemy import Boolean
from sqlalchemy import CheckConstraint
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import Enum
from sqlalchemy import ForeignKey
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy import func
from sqlalchemy import Index
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy import JSON
from sqlalchemy import MetaData
from sqlalchemy import PrimaryKeyConstraint
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import Text
from sqlalchemy import UniqueConstraint
from sqlalchemy.dialects import sqlite as sqlite_dialect
from sqlalchemy.schema import CreateIndex
from sqlalchemy.schema import CreateTable
from sqlalchemy.sql import column
from sqlalchemy.sql import text
from alembic import command
from alembic import testing
from alembic import util
from alembic.ddl import sqlite
from alembic.operations import Operations
from alembic.operations.batch import ApplyBatchImpl
from alembic.runtime.migration import MigrationContext
from alembic.script import ScriptDirectory
from alembic.testing import assert_raises_message
from alembic.testing import config
from alembic.testing import eq_
from alembic.testing import exclusions
from alembic.testing import expect_raises_message
from alembic.testing import is_
from alembic.testing import mock
from alembic.testing import TestBase
from alembic.testing.env import _no_sql_testing_config
from alembic.testing.env import clear_staging_env
from alembic.testing.env import staging_env
from alembic.testing.env import write_script
from alembic.testing.fixtures import capture_context_buffer
from alembic.testing.fixtures import op_fixture
from alembic.util import CommandError
from alembic.util import exc as alembic_exc
from alembic.util.sqla_compat import _NONE_NAME
from alembic.util.sqla_compat import _safe_commit_connection_transaction
from alembic.util.sqla_compat import _select
from alembic.util.sqla_compat import has_computed
from alembic.util.sqla_compat import has_identity
from alembic.util.sqla_compat import sqla_14
if has_computed:
from alembic.util.sqla_compat import Computed
if has_identity:
from alembic.util.sqla_compat import Identity
class BatchApplyTest(TestBase):
def setUp(self):
self.op = Operations(mock.Mock(opts={}))
self.impl = sqlite.SQLiteImpl(
sqlite_dialect.dialect(), None, False, False, None, {}
)
def _simple_fixture(self, table_args=(), table_kwargs={}, **kw):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String(10)),
Column("y", Integer),
)
return ApplyBatchImpl(
self.impl, t, table_args, table_kwargs, False, **kw
)
def _uq_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
UniqueConstraint("y", name="uq1"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_ck_table_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
CheckConstraint("y > 5", name="ck1"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_ck_col_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer, CheckConstraint("y > 5", name="ck1")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _ix_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
Index("ix1", "y"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _pk_fixture(self):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer),
Column("x", String()),
Column("y", Integer),
PrimaryKeyConstraint("id", name="mypk"),
)
return ApplyBatchImpl(self.impl, t, (), {}, False)
def _literal_ck_fixture(
self, copy_from=None, table_args=(), table_kwargs={}
):
m = MetaData()
if copy_from is not None:
t = copy_from
else:
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
CheckConstraint("email LIKE '%@%'"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _sql_ck_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
)
t.append_constraint(CheckConstraint(t.c.email.like("%@%")))
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id", Integer, ForeignKey("user.id")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _multi_fk_fixture(self, table_args=(), table_kwargs={}, schema=None):
m = MetaData()
if schema:
schemaarg = "%s." % schema
else:
schemaarg = ""
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id_1", Integer, ForeignKey("%suser.id" % schemaarg)),
Column("user_id_2", Integer, ForeignKey("%suser.id" % schemaarg)),
Column("user_id_3", Integer),
Column("user_id_version", Integer),
ForeignKeyConstraint(
["user_id_3", "user_id_version"],
["%suser.id" % schemaarg, "%suser.id_version" % schemaarg],
),
schema=schema,
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id", Integer, ForeignKey("user.id", name="ufk")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _selfref_fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("parent_id", Integer, ForeignKey("tname.id")),
Column("data", String),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _boolean_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _boolean_no_ck_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=False)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _enum_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("thing", Enum("a", "b", "c", create_constraint=True)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _server_default_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("thing", String(), server_default=""),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _assert_impl(
self,
impl,
colnames=None,
ddl_contains=None,
ddl_not_contains=None,
dialect="default",
schema=None,
):
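        # compile the batch "move and copy" against a mock op context, then
        # assert the new table's columns, primary key, and the generated DDL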
context = op_fixture(dialect=dialect)
impl._create(context.impl)
if colnames is None:
colnames = ["id", "x", "y"]
eq_(impl.new_table.c.keys(), colnames)
pk_cols = [col for col in impl.new_table.c if col.primary_key]
eq_(list(impl.new_table.primary_key), pk_cols)
create_stmt = str(
CreateTable(impl.new_table).compile(dialect=context.dialect)
)
create_stmt = re.sub(r"[\n\t]", "", create_stmt)
idx_stmt = ""
# create indexes; these should be created in terms of the
# final table name
impl.new_table.name = impl.table.name
for idx in impl._gather_indexes_from_both_tables():
idx_stmt += str(CreateIndex(idx).compile(dialect=context.dialect))
idx_stmt = re.sub(r"[\n\t]", "", idx_stmt)
# revert new table name to the temp name, assertions below
# are looking for the temp name
impl.new_table.name = ApplyBatchImpl._calc_temp_name(impl.table.name)
if ddl_contains:
assert ddl_contains in create_stmt + idx_stmt
if ddl_not_contains:
assert ddl_not_contains not in create_stmt + idx_stmt
expected = [create_stmt]
if schema:
args = {"schema": "%s." % schema}
else:
args = {"schema": ""}
args["temp_name"] = impl.new_table.name
args["colnames"] = ", ".join(
[
impl.new_table.c[name].name
for name in colnames
if name in impl.table.c
]
)
args["tname_colnames"] = ", ".join(
"CAST(%(schema)stname.%(name)s AS %(type)s) AS %(cast_label)s"
% {
"schema": args["schema"],
"name": name,
"type": impl.new_table.c[name].type,
"cast_label": name if sqla_14 else "anon_1",
}
if (
impl.new_table.c[name].type._type_affinity
is not impl.table.c[name].type._type_affinity
)
else "%(schema)stname.%(name)s"
% {"schema": args["schema"], "name": name}
for name in colnames
if name in impl.table.c
)
expected.extend(
[
"INSERT INTO %(schema)s%(temp_name)s (%(colnames)s) "
"SELECT %(tname_colnames)s FROM %(schema)stname" % args,
"DROP TABLE %(schema)stname" % args,
"ALTER TABLE %(schema)s%(temp_name)s "
"RENAME TO %(schema)stname" % args,
]
)
if idx_stmt:
expected.append(idx_stmt)
context.assert_(*expected)
return impl.new_table
def test_change_type(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", type_=String)
new_table = self._assert_impl(impl)
assert new_table.c.x.type._type_affinity is String
def test_rename_col(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", name="q")
new_table = self._assert_impl(impl)
eq_(new_table.c.x.name, "q")
def test_rename_col_w_index(self):
impl = self._ix_fixture()
impl.alter_column("tname", "y", name="y2")
new_table = self._assert_impl(
impl, ddl_contains="CREATE INDEX ix1 ON tname (y2)"
)
eq_(new_table.c.y.name, "y2")
def test_rename_col_w_uq(self):
impl = self._uq_fixture()
impl.alter_column("tname", "y", name="y2")
new_table = self._assert_impl(impl, ddl_contains="UNIQUE (y2)")
eq_(new_table.c.y.name, "y2")
def test_alter_column_comment(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", comment="some comment")
new_table = self._assert_impl(impl)
eq_(new_table.c.x.comment, "some comment")
def test_add_column_comment(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("q", Integer, comment="some comment"))
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "q"])
eq_(new_table.c.q.comment, "some comment")
def test_rename_col_boolean(self):
impl = self._boolean_fixture()
impl.alter_column("tname", "flag", name="bflag")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (bflag IN (0, 1)",
colnames=["id", "flag"],
)
eq_(new_table.c.flag.name, "bflag")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
1,
)
def test_change_type_schematype_to_non(self):
impl = self._boolean_fixture()
impl.alter_column("tname", "flag", type_=Integer)
new_table = self._assert_impl(
impl, colnames=["id", "flag"], ddl_not_contains="CHECK"
)
assert new_table.c.flag.type._type_affinity is Integer
# NOTE: we can't do test_change_type_non_to_schematype
# at this level because the "add_constraint" part of this
# comes from toimpl.py, which we aren't testing here
def test_rename_col_boolean_no_ck(self):
impl = self._boolean_no_ck_fixture()
impl.alter_column("tname", "flag", name="bflag")
new_table = self._assert_impl(
impl, ddl_not_contains="CHECK", colnames=["id", "flag"]
)
eq_(new_table.c.flag.name, "bflag")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
0,
)
def test_rename_col_enum(self):
impl = self._enum_fixture()
impl.alter_column("tname", "thing", name="thang")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (thang IN ('a', 'b', 'c')",
colnames=["id", "thing"],
)
eq_(new_table.c.thing.name, "thang")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
1,
)
def test_rename_col_literal_ck(self):
impl = self._literal_ck_fixture()
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
# note this is wrong, we don't dig into the SQL
impl,
ddl_contains="CHECK (email LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_rename_col_literal_ck_workaround(self):
impl = self._literal_ck_fixture(
copy_from=Table(
"tname",
MetaData(),
Column("id", Integer, primary_key=True),
Column("email", String),
),
table_args=[CheckConstraint("emol LIKE '%@%'")],
)
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (emol LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_rename_col_sql_ck(self):
impl = self._sql_ck_fixture()
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (emol LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_add_col(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col)
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "g"])
eq_(new_table.c.g.name, "g")
def test_partial_reordering(self):
impl = self._simple_fixture(partial_reordering=[("x", "id", "y")])
new_table = self._assert_impl(impl, colnames=["x", "id", "y"])
eq_(new_table.c.x.name, "x")
def test_add_col_partial_reordering(self):
impl = self._simple_fixture(partial_reordering=[("id", "x", "g", "y")])
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col)
new_table = self._assert_impl(impl, colnames=["id", "x", "g", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col, insert_before="x")
new_table = self._assert_impl(impl, colnames=["id", "g", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_beginning(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_before="id")
new_table = self._assert_impl(impl, colnames=["g", "id", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_middle(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_before="y")
new_table = self._assert_impl(impl, colnames=["id", "x", "g", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_middle(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
new_table = self._assert_impl(impl, colnames=["id", "g", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_penultimate(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="x")
self._assert_impl(impl, colnames=["id", "x", "g", "y"])
def test_add_col_insert_after_end(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="y")
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "g"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_plus_no_order(self):
impl = self._simple_fixture()
# operations.add_column produces a table
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer))
new_table = self._assert_impl(
impl, colnames=["id", "g", "x", "y", "q"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_no_order_plus_insert_after(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", Column("q", Integer))
impl.add_column("tname", Column("g", Integer), insert_after="id")
new_table = self._assert_impl(
impl, colnames=["id", "g", "x", "y", "q"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_another_insert(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer), insert_after="g")
new_table = self._assert_impl(
impl, colnames=["id", "g", "q", "x", "y"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_another_insert(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer), insert_before="g")
new_table = self._assert_impl(
impl, colnames=["id", "q", "g", "x", "y"]
)
eq_(new_table.c.g.name, "g")
def test_add_server_default(self):
impl = self._simple_fixture()
impl.alter_column("tname", "y", server_default="10")
new_table = self._assert_impl(impl, ddl_contains="DEFAULT '10'")
eq_(new_table.c.y.server_default.arg, "10")
def test_drop_server_default(self):
impl = self._server_default_fixture()
impl.alter_column("tname", "thing", server_default=None)
new_table = self._assert_impl(
impl, colnames=["id", "thing"], ddl_not_contains="DEFAULT"
)
eq_(new_table.c.thing.server_default, None)
def test_rename_col_pk(self):
impl = self._simple_fixture()
impl.alter_column("tname", "id", name="foobar")
new_table = self._assert_impl(
impl, ddl_contains="PRIMARY KEY (foobar)"
)
eq_(new_table.c.id.name, "foobar")
eq_(list(new_table.primary_key), [new_table.c.id])
def test_rename_col_fk(self):
impl = self._fk_fixture()
impl.alter_column("tname", "user_id", name="foobar")
new_table = self._assert_impl(
impl,
colnames=["id", "email", "user_id"],
ddl_contains='FOREIGN KEY(foobar) REFERENCES "user" (id)',
)
eq_(new_table.c.user_id.name, "foobar")
eq_(
list(new_table.c.user_id.foreign_keys)[0]._get_colspec(), "user.id"
)
def test_regen_multi_fk(self):
impl = self._multi_fk_fixture()
self._assert_impl(
impl,
colnames=[
"id",
"email",
"user_id_1",
"user_id_2",
"user_id_3",
"user_id_version",
],
ddl_contains="FOREIGN KEY(user_id_3, user_id_version) "
'REFERENCES "user" (id, id_version)',
)
def test_regen_multi_fk_schema(self):
impl = self._multi_fk_fixture(schema="foo_schema")
self._assert_impl(
impl,
colnames=[
"id",
"email",
"user_id_1",
"user_id_2",
"user_id_3",
"user_id_version",
],
ddl_contains="FOREIGN KEY(user_id_3, user_id_version) "
'REFERENCES foo_schema."user" (id, id_version)',
schema="foo_schema",
)
def test_do_not_add_existing_columns_columns(self):
impl = self._multi_fk_fixture()
meta = impl.table.metadata
cid = Column("id", Integer())
user = Table("user", meta, cid)
fk = [
c
for c in impl.unnamed_constraints
if isinstance(c, ForeignKeyConstraint)
]
impl._setup_referent(meta, fk[0])
is_(user.c.id, cid)
def test_drop_col(self):
impl = self._simple_fixture()
impl.drop_column("tname", column("x"))
new_table = self._assert_impl(impl, colnames=["id", "y"])
assert "y" in new_table.c
assert "x" not in new_table.c
def test_drop_col_remove_pk(self):
impl = self._simple_fixture()
impl.drop_column("tname", column("id"))
new_table = self._assert_impl(
impl, colnames=["x", "y"], ddl_not_contains="PRIMARY KEY"
)
assert "y" in new_table.c
assert "id" not in new_table.c
assert not new_table.primary_key
def test_drop_col_remove_fk(self):
impl = self._fk_fixture()
impl.drop_column("tname", column("user_id"))
new_table = self._assert_impl(
impl, colnames=["id", "email"], ddl_not_contains="FOREIGN KEY"
)
assert "user_id" not in new_table.c
assert not new_table.foreign_keys
def test_drop_col_retain_fk(self):
impl = self._fk_fixture()
impl.drop_column("tname", column("email"))
new_table = self._assert_impl(
impl,
colnames=["id", "user_id"],
ddl_contains='FOREIGN KEY(user_id) REFERENCES "user" (id)',
)
assert "email" not in new_table.c
assert new_table.c.user_id.foreign_keys
def test_drop_col_retain_fk_selfref(self):
impl = self._selfref_fk_fixture()
impl.drop_column("tname", column("data"))
new_table = self._assert_impl(impl, colnames=["id", "parent_id"])
assert "data" not in new_table.c
assert new_table.c.parent_id.foreign_keys
def test_add_fk(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("user_id", Integer))
fk = self.op.schema_obj.foreign_key_constraint(
"fk1", "tname", "user", ["user_id"], ["id"]
)
impl.add_constraint(fk)
new_table = self._assert_impl(
impl,
colnames=["id", "x", "y", "user_id"],
ddl_contains="CONSTRAINT fk1 FOREIGN KEY(user_id) "
'REFERENCES "user" (id)',
)
eq_(
list(new_table.c.user_id.foreign_keys)[0]._get_colspec(), "user.id"
)
def test_drop_fk(self):
impl = self._named_fk_fixture()
fk = ForeignKeyConstraint([], [], name="ufk")
impl.drop_constraint(fk)
new_table = self._assert_impl(
impl,
colnames=["id", "email", "user_id"],
ddl_not_contains="CONSTRANT fk1",
)
eq_(list(new_table.foreign_keys), [])
def test_add_uq(self):
impl = self._simple_fixture()
uq = self.op.schema_obj.unique_constraint("uq1", "tname", ["y"])
impl.add_constraint(uq)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CONSTRAINT uq1 UNIQUE",
)
def test_drop_uq(self):
impl = self._uq_fixture()
uq = self.op.schema_obj.unique_constraint("uq1", "tname", ["y"])
impl.drop_constraint(uq)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT uq1 UNIQUE",
)
def test_add_ck_unnamed(self):
"""test for #1195"""
impl = self._simple_fixture()
ck = self.op.schema_obj.check_constraint(_NONE_NAME, "tname", "y > 5")
impl.add_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CHECK (y > 5)",
)
def test_add_ck(self):
impl = self._simple_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.add_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_drop_ck_table(self):
impl = self._named_ck_table_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.drop_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_drop_ck_col(self):
impl = self._named_ck_col_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.drop_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_create_index(self):
impl = self._simple_fixture()
ix = self.op.schema_obj.index("ix1", "tname", ["y"])
impl.create_index(ix)
self._assert_impl(
impl, colnames=["id", "x", "y"], ddl_contains="CREATE INDEX ix1"
)
def test_drop_index(self):
impl = self._ix_fixture()
ix = self.op.schema_obj.index("ix1", "tname", ["y"])
impl.drop_index(ix)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT uq1 UNIQUE",
)
def test_add_table_opts(self):
impl = self._simple_fixture(table_kwargs={"mysql_engine": "InnoDB"})
self._assert_impl(impl, ddl_contains="ENGINE=InnoDB", dialect="mysql")
def test_drop_pk(self):
impl = self._pk_fixture()
pk = self.op.schema_obj.primary_key_constraint("mypk", "tname", ["id"])
impl.drop_constraint(pk)
new_table = self._assert_impl(impl)
assert not new_table.c.id.primary_key
assert not len(new_table.primary_key)
class BatchAPITest(TestBase):
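    """Assert the schema-object calls that batch operations make.

    ``alembic.operations.schemaobj.sa_schema`` is patched with a
    MagicMock, so no real Table/Constraint objects (and no database)
    are involved; tests inspect the recorded mock calls instead.
    """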
@contextmanager
def _fixture(self, schema=None):
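        """Yield a batch context wired to mocks; impl calls are flushed on exit."""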
migration_context = mock.Mock(
opts={},
impl=mock.MagicMock(__dialect__="sqlite", connection=object()),
)
op = Operations(migration_context)
batch = op.batch_alter_table(
"tname", recreate="never", schema=schema
).__enter__()
mock_schema = mock.MagicMock()
with mock.patch("alembic.operations.schemaobj.sa_schema", mock_schema):
yield batch
batch.impl.flush()
self.mock_schema = mock_schema
def test_drop_col(self):
with self._fixture() as batch:
batch.drop_column("q")
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.drop_column(
"tname", self.mock_schema.Column(), schema=None
)
],
)
def test_add_col(self):
column = Column("w", String(50))
with self._fixture() as batch:
batch.add_column(column)
assert (
mock.call.add_column("tname", column, schema=None)
in batch.impl.operations.impl.mock_calls
)
def test_create_fk(self):
with self._fixture() as batch:
batch.create_foreign_key("myfk", "user", ["x"], ["y"])
eq_(
self.mock_schema.ForeignKeyConstraint.mock_calls,
[
mock.call(
["x"],
["user.y"],
onupdate=None,
ondelete=None,
name="myfk",
initially=None,
deferrable=None,
match=None,
)
],
)
eq_(
self.mock_schema.Table.mock_calls,
[
mock.call(
"user",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call(
"tname",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call().append_constraint(
self.mock_schema.ForeignKeyConstraint()
),
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.ForeignKeyConstraint()
)
],
)
def test_create_fk_schema(self):
with self._fixture(schema="foo") as batch:
batch.create_foreign_key("myfk", "user", ["x"], ["y"])
eq_(
self.mock_schema.ForeignKeyConstraint.mock_calls,
[
mock.call(
["x"],
["user.y"],
onupdate=None,
ondelete=None,
name="myfk",
initially=None,
deferrable=None,
match=None,
)
],
)
eq_(
self.mock_schema.Table.mock_calls,
[
mock.call(
"user",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call(
"tname",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema="foo",
),
mock.call().append_constraint(
self.mock_schema.ForeignKeyConstraint()
),
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.ForeignKeyConstraint()
)
],
)
def test_create_uq(self):
with self._fixture() as batch:
batch.create_unique_constraint("uq1", ["a", "b"])
eq_(
self.mock_schema.Table().c.__getitem__.mock_calls,
[mock.call("a"), mock.call("b")],
)
eq_(
self.mock_schema.UniqueConstraint.mock_calls,
[
mock.call(
self.mock_schema.Table().c.__getitem__(),
self.mock_schema.Table().c.__getitem__(),
name="uq1",
)
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.add_constraint(self.mock_schema.UniqueConstraint())],
)
def test_create_pk(self):
with self._fixture() as batch:
batch.create_primary_key("pk1", ["a", "b"])
eq_(
self.mock_schema.Table().c.__getitem__.mock_calls,
[mock.call("a"), mock.call("b")],
)
eq_(
self.mock_schema.PrimaryKeyConstraint.mock_calls,
[
mock.call(
self.mock_schema.Table().c.__getitem__(),
self.mock_schema.Table().c.__getitem__(),
name="pk1",
)
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.PrimaryKeyConstraint()
)
],
)
def test_create_check(self):
expr = text("a > b")
with self._fixture() as batch:
batch.create_check_constraint("ck1", expr)
eq_(
self.mock_schema.CheckConstraint.mock_calls,
[mock.call(expr, name="ck1")],
)
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.add_constraint(self.mock_schema.CheckConstraint())],
)
def test_drop_constraint(self):
with self._fixture() as batch:
batch.drop_constraint("uq1")
eq_(self.mock_schema.Constraint.mock_calls, [mock.call(name="uq1")])
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.drop_constraint(self.mock_schema.Constraint())],
)
class CopyFromTest(TestBase):
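    """Exercise batch mode with ``copy_from``, asserting the emitted SQL.

    A pre-built Table is handed to ``batch_alter_table(copy_from=...)``
    and the op fixture runs in as-sql mode, so each test can assert the
    full CREATE / INSERT..SELECT / DROP / RENAME statement sequence.
    """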
def _fixture(self):
self.metadata = MetaData()
self.table = Table(
"foo",
self.metadata,
Column("id", Integer, primary_key=True),
Column("data", String(50)),
Column("x", Integer),
)
context = op_fixture(dialect="sqlite", as_sql=True)
self.op = Operations(context)
return context
def test_change_type(self):
context = self._fixture()
self.table.append_column(Column("toj", Text))
self.table.append_column(Column("fromj", JSON))
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column("data", type_=Integer)
batch_op.alter_column("toj", type_=JSON)
batch_op.alter_column("fromj", type_=Text)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data INTEGER, x INTEGER, toj JSON, fromj TEXT, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, toj, fromj) "
"SELECT foo.id, "
"CAST(foo.data AS INTEGER) AS %s, foo.x, foo.toj, "
"CAST(foo.fromj AS TEXT) AS %s FROM foo"
% (
("data" if sqla_14 else "anon_1"),
("fromj" if sqla_14 else "anon_2"),
),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_type_from_schematype(self):
context = self._fixture()
self.table.append_column(
Column("y", Boolean(create_constraint=True, name="ck1"))
)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
"y",
type_=Integer,
existing_type=Boolean(create_constraint=True, name="ck1"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, y INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, y) SELECT foo.id, "
"foo.data, foo.x, CAST(foo.y AS INTEGER) AS %s FROM foo"
% (("y" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_name_from_existing_variant_type(self):
"""test #982"""
context = self._fixture()
self.table.append_column(
Column("y", Text().with_variant(Text(10000), "mysql"))
)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
column_name="y",
new_column_name="q",
existing_type=Text().with_variant(Text(10000), "mysql"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, q TEXT, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, q) "
"SELECT foo.id, foo.data, foo.x, foo.y FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_type_to_schematype(self):
context = self._fixture()
self.table.append_column(Column("y", Integer))
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
"y",
existing_type=Integer,
type_=Boolean(create_constraint=True, name="ck1"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, y BOOLEAN, PRIMARY KEY (id), "
"CONSTRAINT ck1 CHECK (y IN (0, 1)))",
"INSERT INTO _alembic_tmp_foo (id, data, x, y) SELECT foo.id, "
"foo.data, foo.x, CAST(foo.y AS BOOLEAN) AS %s FROM foo"
% (("y" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_create_drop_index_w_always(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table, recreate="always"
) as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), "
"x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) "
"SELECT foo.id, foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
"CREATE UNIQUE INDEX ix_data ON foo (data)",
)
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table, recreate="always"
) as batch_op:
batch_op.drop_index("ix_data")
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) "
"SELECT foo.id, foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_create_drop_index_wo_always(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_("CREATE UNIQUE INDEX ix_data ON foo (data)")
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.drop_index("ix_data")
context.assert_("DROP INDEX ix_data")
def test_create_drop_index_w_other_ops(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column("data", type_=Integer)
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data INTEGER, x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) SELECT foo.id, "
"CAST(foo.data AS INTEGER) AS %s, foo.x FROM foo"
% (("data" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
"CREATE UNIQUE INDEX ix_data ON foo (data)",
)
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.drop_index("ix_data")
batch_op.alter_column("data", type_=String)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR, x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) SELECT foo.id, "
"foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
class BatchRoundTripTest(TestBase):
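    """Round-trip batch operations against a live SQLite database,
    verifying that table data survives the recreate workflow."""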
__only_on__ = "sqlite"
def setUp(self):
self.conn = config.db.connect()
self.metadata = MetaData()
t1 = Table(
"foo",
self.metadata,
Column("id", Integer, primary_key=True),
Column("data", String(50)),
Column("x", Integer),
mysql_engine="InnoDB",
)
with self.conn.begin():
t1.create(self.conn)
self.conn.execute(
t1.insert(),
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
],
)
context = MigrationContext.configure(self.conn)
self.op = Operations(context)
def tearDown(self):
        # why commit? because SQLite has inconsistent treatment
        # of transactional DDL. A test that runs CREATE TABLE and then
        # ALTER TABLE to change the name of that table will end up
        # committing the CREATE TABLE but not the ALTER. As batch mode
        # does this with a temp table name that's not even in the
        # metadata collection, we don't have an explicit drop for it
        # (though we could do that too). Calling commit means the
        # ALTER will go through and the drop_all() will then catch it.
_safe_commit_connection_transaction(self.conn)
with self.conn.begin():
self.metadata.drop_all(self.conn)
self.conn.close()
@contextmanager
def _sqlite_referential_integrity(self):
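        """Temporarily enable SQLite foreign key enforcement."""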
self.conn.exec_driver_sql("PRAGMA foreign_keys=ON")
try:
yield
finally:
self.conn.exec_driver_sql("PRAGMA foreign_keys=OFF")
            # as these tests are typically intentional failures, clean out
            # any tables left over
m = MetaData()
m.reflect(self.conn)
with self.conn.begin():
m.drop_all(self.conn)
def _no_pk_fixture(self):
with self.conn.begin():
nopk = Table(
"nopk",
self.metadata,
Column("a", Integer),
Column("b", Integer),
Column("c", Integer),
mysql_engine="InnoDB",
)
nopk.create(self.conn)
self.conn.execute(
nopk.insert(),
[{"a": 1, "b": 2, "c": 3}, {"a": 2, "b": 4, "c": 5}],
)
return nopk
def _table_w_index_fixture(self):
with self.conn.begin():
t = Table(
"t_w_ix",
self.metadata,
Column("id", Integer, primary_key=True),
Column("thing", Integer),
Column("data", String(20)),
)
Index("ix_thing", t.c.thing)
t.create(self.conn)
return t
def _boolean_fixture(self):
with self.conn.begin():
t = Table(
"hasbool",
self.metadata,
Column("x", Boolean(create_constraint=True, name="ck1")),
Column("y", Integer),
)
t.create(self.conn)
def _timestamp_fixture(self):
with self.conn.begin():
t = Table("hasts", self.metadata, Column("x", DateTime()))
t.create(self.conn)
return t
def _ck_constraint_fixture(self):
with self.conn.begin():
t = Table(
"ck_table",
self.metadata,
Column("id", Integer, nullable=False),
CheckConstraint("id is not NULL", name="ck"),
)
t.create(self.conn)
return t
def _datetime_server_default_fixture(self):
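        # dialect-specific subclasses override this with their own
        # current-timestamp expression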
return func.datetime("now", "localtime")
def _timestamp_w_expr_default_fixture(self):
with self.conn.begin():
t = Table(
"hasts",
self.metadata,
Column(
"x",
DateTime(),
server_default=self._datetime_server_default_fixture(),
nullable=False,
),
)
t.create(self.conn)
return t
def _int_to_boolean_fixture(self):
with self.conn.begin():
t = Table("hasbool", self.metadata, Column("x", Integer))
t.create(self.conn)
def test_add_constraint_type(self):
"""test for #1195."""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(Column("q", Boolean(create_constraint=True)))
insp = inspect(self.conn)
assert {
c["type"]._type_affinity
for c in insp.get_columns("foo")
if c["name"] == "q"
}.intersection([Boolean, Integer])
def test_change_type_boolean_to_int(self):
self._boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.alter_column(
"x",
type_=Integer,
existing_type=Boolean(create_constraint=True, name="ck1"),
)
insp = inspect(self.conn)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Integer],
)
def test_no_net_change_timestamp(self):
t = self._timestamp_fixture()
import datetime
with self.conn.begin():
self.conn.execute(
t.insert(), {"x": datetime.datetime(2012, 5, 18, 15, 32, 5)}
)
with self.op.batch_alter_table("hasts") as batch_op:
batch_op.alter_column("x", type_=DateTime())
eq_(
self.conn.execute(_select(t.c.x)).fetchall(),
[(datetime.datetime(2012, 5, 18, 15, 32, 5),)],
)
def test_no_net_change_timestamp_w_default(self):
t = self._timestamp_w_expr_default_fixture()
with self.op.batch_alter_table("hasts") as batch_op:
batch_op.alter_column(
"x",
type_=DateTime(),
nullable=False,
server_default=self._datetime_server_default_fixture(),
)
with self.conn.begin():
self.conn.execute(t.insert())
res = self.conn.execute(_select(t.c.x))
if sqla_14:
assert res.scalar_one_or_none() is not None
else:
row = res.fetchone()
assert row["x"] is not None
def test_drop_col_schematype(self):
self._boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.drop_column(
"x", existing_type=Boolean(create_constraint=True, name="ck1")
)
insp = inspect(self.conn)
assert "x" not in (c["name"] for c in insp.get_columns("hasbool"))
def test_change_type_int_to_boolean(self):
self._int_to_boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.alter_column(
"x", type_=Boolean(create_constraint=True, name="ck1")
)
insp = inspect(self.conn)
if exclusions.against(config, "sqlite"):
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Boolean],
)
elif exclusions.against(config, "mysql"):
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Integer],
)
def _assert_data(self, data, tablename="foo"):
res = self.conn.execute(text("select * from %s" % tablename))
if sqla_14:
res = res.mappings()
eq_([dict(row) for row in res], data)
def test_ix_existing(self):
self._table_w_index_fixture()
with self.op.batch_alter_table("t_w_ix") as batch_op:
batch_op.alter_column("data", type_=String(30))
batch_op.create_index("ix_data", ["data"])
insp = inspect(self.conn)
eq_(
{
(ix["name"], tuple(ix["column_names"]))
for ix in insp.get_indexes("t_w_ix")
},
{("ix_data", ("data",)), ("ix_thing", ("thing",))},
)
def test_fk_points_to_me_auto(self):
self._test_fk_points_to_me("auto")
# in particular, this tests that the failures
# on PG and MySQL result in recovery of the batch system,
    # e.g. that the _alembic_tmp_foo table is dropped
@config.requirements.no_referential_integrity
def test_fk_points_to_me_recreate(self):
self._test_fk_points_to_me("always")
@exclusions.only_on("sqlite")
@exclusions.fails(
"intentionally asserting that this "
"doesn't work w/ pragma foreign keys"
)
def test_fk_points_to_me_sqlite_refinteg(self):
with self._sqlite_referential_integrity():
self._test_fk_points_to_me("auto")
def _test_fk_points_to_me(self, recreate):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("foo_id", Integer, ForeignKey("foo.id")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "foo_id": 3})
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.alter_column(
"data", new_column_name="newdata", existing_type=String(50)
)
insp = inspect(self.conn)
eq_(
[
(
key["referred_table"],
key["referred_columns"],
key["constrained_columns"],
)
for key in insp.get_foreign_keys("bar")
],
[("foo", ["id"], ["foo_id"])],
)
def test_selfref_fk_auto(self):
self._test_selfref_fk("auto")
@config.requirements.no_referential_integrity
def test_selfref_fk_recreate(self):
self._test_selfref_fk("always")
@exclusions.only_on("sqlite")
@exclusions.fails(
"intentionally asserting that this "
"doesn't work w/ pragma foreign keys"
)
def test_selfref_fk_sqlite_refinteg(self):
with self._sqlite_referential_integrity():
self._test_selfref_fk("auto")
def _test_selfref_fk(self, recreate):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("bar_id", Integer, ForeignKey("bar.id")),
Column("data", String(50)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(
bar.insert(), {"id": 1, "data": "x", "bar_id": None}
)
self.conn.execute(
bar.insert(), {"id": 2, "data": "y", "bar_id": 1}
)
with self.op.batch_alter_table("bar", recreate=recreate) as batch_op:
batch_op.alter_column(
"data", new_column_name="newdata", existing_type=String(50)
)
insp = inspect(self.conn)
eq_(
[
(
key["referred_table"],
key["referred_columns"],
key["constrained_columns"],
)
for key in insp.get_foreign_keys("bar")
],
[("bar", ["id"], ["bar_id"])],
)
def test_change_type(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("data", type_=Integer)
self._assert_data(
[
{"id": 1, "data": 0, "x": 5},
{"id": 2, "data": 22, "x": 6},
{"id": 3, "data": 8, "x": 7},
{"id": 4, "data": 9, "x": 8},
{"id": 5, "data": 0, "x": 9},
]
)
def test_drop_column(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("data")
self._assert_data(
[
{"id": 1, "x": 5},
{"id": 2, "x": 6},
{"id": 3, "x": 7},
{"id": 4, "x": 8},
{"id": 5, "x": 9},
]
)
def test_drop_pk_col_readd_col(self):
# drop a column, add it back without primary_key=True, should no
# longer be in the constraint
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer))
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], [])
def test_drop_pk_col_readd_pk_col(self):
# drop a column, add it back with primary_key=True, should remain
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer, primary_key=True))
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], ["id"])
def test_drop_pk_col_readd_col_also_pk_const(self):
# drop a column, add it back without primary_key=True, but then
        # also make a new PK constraint that includes it, should remain
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer))
batch_op.create_primary_key("newpk", ["id"])
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], ["id"])
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_pk_constraint(self, recreate):
self._no_pk_fixture()
with self.op.batch_alter_table("nopk", recreate=recreate) as batch_op:
batch_op.create_primary_key("newpk", ["a", "b"])
pk_const = inspect(self.conn).get_pk_constraint("nopk")
with config.requirements.reflects_pk_names.fail_if():
eq_(pk_const["name"], "newpk")
eq_(pk_const["constrained_columns"], ["a", "b"])
@testing.combinations(("always",), ("auto",), argnames="recreate")
@config.requirements.check_constraint_reflection
def test_add_ck_constraint(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.create_check_constraint("newck", text("x > 0"))
ck_consts = inspect(self.conn).get_check_constraints("foo")
ck_consts[0]["sqltext"] = re.sub(
r"[\'\"`\(\)]", "", ck_consts[0]["sqltext"]
)
for ck in ck_consts:
ck.pop("comment", None)
eq_(ck_consts, [{"sqltext": "x > 0", "name": "newck"}])
@testing.combinations(("always",), ("auto",), argnames="recreate")
@config.requirements.check_constraint_reflection
def test_drop_ck_constraint(self, recreate):
self._ck_constraint_fixture()
with self.op.batch_alter_table(
"ck_table", recreate=recreate
) as batch_op:
batch_op.drop_constraint("ck", type_="check")
ck_consts = inspect(self.conn).get_check_constraints("ck_table")
eq_(ck_consts, [])
@config.requirements.check_constraint_reflection
def test_drop_ck_constraint_legacy_type(self):
self._ck_constraint_fixture()
with self.op.batch_alter_table(
"ck_table", recreate="always"
) as batch_op:
# matches the docs that were written for this originally
batch_op.drop_constraint("ck", "check")
ck_consts = inspect(self.conn).get_check_constraints("ck_table")
eq_(ck_consts, [])
@config.requirements.unnamed_constraints
def test_drop_foreign_key(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("foo_id", Integer, ForeignKey("foo.id")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "foo_id": 3})
naming_convention = {
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s"
}
with self.op.batch_alter_table(
"bar", naming_convention=naming_convention
) as batch_op:
batch_op.drop_constraint("fk_bar_foo_id_foo", type_="foreignkey")
eq_(inspect(self.conn).get_foreign_keys("bar"), [])
def test_drop_column_fk_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.drop_column("data")
self._assert_data(
[
{"id": 1, "x": 5},
{"id": 2, "x": 6},
{"id": 3, "x": 7},
{"id": 4, "x": 8},
{"id": 5, "x": 9},
]
)
def _assert_table_comment(self, tname, comment):
insp = inspect(self.conn)
tcomment = insp.get_table_comment(tname)
eq_(tcomment, {"text": comment})
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_uq(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.create_unique_constraint("newuk", ["x"])
uq_consts = inspect(self.conn).get_unique_constraints("foo")
eq_(
[
{"name": uc["name"], "column_names": uc["column_names"]}
for uc in uq_consts
],
[{"name": "newuk", "column_names": ["x"]}],
)
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_uq_plus_col(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.add_column(Column("y", Integer))
batch_op.create_unique_constraint("newuk", ["x", "y"])
uq_consts = inspect(self.conn).get_unique_constraints("foo")
eq_(
[
{"name": uc["name"], "column_names": uc["column_names"]}
for uc in uq_consts
],
[{"name": "newuk", "column_names": ["x", "y"]}],
)
@config.requirements.comments
def test_add_table_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment("some comment")
self._assert_table_comment("foo", "some comment")
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment(
"some new comment", existing_comment="some comment"
)
self._assert_table_comment("foo", "some new comment")
@config.requirements.comments
def test_drop_table_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment("some comment")
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_table_comment(existing_comment="some comment")
self._assert_table_comment("foo", None)
def _assert_column_comment(self, tname, cname, comment):
insp = inspect(self.conn)
cols = {col["name"]: col for col in insp.get_columns(tname)}
eq_(cols[cname]["comment"], comment)
@config.requirements.comments
def test_add_column_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(Column("y", Integer, comment="some comment"))
self._assert_column_comment("foo", "y", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "y": None},
{"id": 2, "data": "22", "x": 6, "y": None},
{"id": 3, "data": "8.5", "x": 7, "y": None},
{"id": 4, "data": "9.46", "x": 8, "y": None},
{"id": 5, "data": "d5", "x": 9, "y": None},
]
)
@config.requirements.comments
def test_add_column_comment_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(Column("y", Integer, comment="some comment"))
self._assert_column_comment("foo", "y", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "y": None},
{"id": 2, "data": "22", "x": 6, "y": None},
{"id": 3, "data": "8.5", "x": 7, "y": None},
{"id": 4, "data": "9.46", "x": 8, "y": None},
{"id": 5, "data": "d5", "x": 9, "y": None},
]
)
@config.requirements.comments
def test_alter_column_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column(
"x", existing_type=Integer(), comment="some comment"
)
self._assert_column_comment("foo", "x", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
@config.requirements.comments
def test_alter_column_comment_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.alter_column("x", comment="some comment")
self._assert_column_comment("foo", "x", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
def test_rename_column(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("x", new_column_name="y")
self._assert_data(
[
{"id": 1, "data": "d1", "y": 5},
{"id": 2, "data": "22", "y": 6},
{"id": 3, "data": "8.5", "y": 7},
{"id": 4, "data": "9.46", "y": 8},
{"id": 5, "data": "d5", "y": 9},
]
)
def test_rename_column_boolean(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
with self.op.batch_alter_table("bar") as batch_op:
batch_op.alter_column(
"flag", new_column_name="bflag", existing_type=Boolean
)
self._assert_data(
[{"id": 1, "bflag": True}, {"id": 2, "bflag": False}], "bar"
)
# @config.requirements.check_constraint_reflection
def test_rename_column_boolean_named_ck(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True, name="ck1")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
with self.op.batch_alter_table("bar", recreate="always") as batch_op:
batch_op.alter_column(
"flag",
new_column_name="bflag",
existing_type=Boolean(create_constraint=True, name="ck1"),
)
self._assert_data(
[{"id": 1, "bflag": True}, {"id": 2, "bflag": False}], "bar"
)
@config.requirements.non_native_boolean
def test_rename_column_non_native_boolean_no_ck(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=False)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
self.conn.execute(
# override Boolean type which as of 1.1 coerces numerics
# to 1/0
text("insert into bar (id, flag) values (:id, :flag)"),
{"id": 3, "flag": 5},
)
with self.op.batch_alter_table(
"bar",
reflect_args=[Column("flag", Boolean(create_constraint=False))],
) as batch_op:
batch_op.alter_column(
"flag", new_column_name="bflag", existing_type=Boolean
)
self._assert_data(
[
{"id": 1, "bflag": True},
{"id": 2, "bflag": False},
{"id": 3, "bflag": 5},
],
"bar",
)
def test_drop_column_pk(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
self._assert_data(
[
{"data": "d1", "x": 5},
{"data": "22", "x": 6},
{"data": "8.5", "x": 7},
{"data": "9.46", "x": 8},
{"data": "d5", "x": 9},
]
)
def test_rename_column_pk(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("id", new_column_name="ident")
self._assert_data(
[
{"ident": 1, "data": "d1", "x": 5},
{"ident": 2, "data": "22", "x": 6},
{"ident": 3, "data": "8.5", "x": 7},
{"ident": 4, "data": "9.46", "x": 8},
{"ident": 5, "data": "d5", "x": 9},
]
)
def test_add_column_auto(self):
# note this uses ALTER
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi")
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(config.db).get_columns("foo")],
["id", "data", "x", "data2"],
)
def test_add_column_auto_server_default_calculated(self):
"""test #883"""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column(
"data2",
DateTime(),
server_default=self._datetime_server_default_fixture(),
)
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": mock.ANY},
{"id": 2, "data": "22", "x": 6, "data2": mock.ANY},
{"id": 3, "data": "8.5", "x": 7, "data2": mock.ANY},
{"id": 4, "data": "9.46", "x": 8, "data2": mock.ANY},
{"id": 5, "data": "d5", "x": 9, "data2": mock.ANY},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
@testing.combinations((True,), (False,))
@testing.exclusions.only_on("sqlite")
@config.requirements.computed_columns
def test_add_column_auto_generated(self, persisted):
"""test #883"""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column(
"data2", Integer, Computed("1 + 1", persisted=persisted)
)
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": 2},
{"id": 2, "data": "22", "x": 6, "data2": 2},
{"id": 3, "data": "8.5", "x": 7, "data2": 2},
{"id": 4, "data": "9.46", "x": 8, "data2": 2},
{"id": 5, "data": "d5", "x": 9, "data2": 2},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
@config.requirements.identity_columns
def test_add_column_auto_identity(self):
"""test #883"""
self._no_pk_fixture()
with self.op.batch_alter_table("nopk") as batch_op:
batch_op.add_column(Column("id", Integer, Identity()))
self._assert_data(
[
{"a": 1, "b": 2, "c": 3, "id": 1},
{"a": 2, "b": 4, "c": 5, "id": 2},
],
tablename="nopk",
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x"],
)
def test_add_column_insert_before_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_before="data",
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data2", "data", "x"],
)
def test_add_column_insert_after_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_after="data",
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "data2", "x"],
)
def test_add_column_insert_before_raise_on_alter(self):
def go():
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_before="data",
)
assert_raises_message(
alembic_exc.CommandError,
"Can't specify insert_before or insert_after when using ALTER",
go,
)
def test_add_column_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi")
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
def test_create_drop_index(self):
insp = inspect(self.conn)
eq_(insp.get_indexes("foo"), [])
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
insp = inspect(self.conn)
eq_(
[
dict(
unique=ix["unique"],
name=ix["name"],
column_names=ix["column_names"],
)
for ix in insp.get_indexes("foo")
],
[{"unique": True, "name": "ix_data", "column_names": ["data"]}],
)
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.drop_index("ix_data")
insp = inspect(self.conn)
eq_(insp.get_indexes("foo"), [])
class BatchRoundTripMySQLTest(BatchRoundTripTest):
__only_on__ = "mysql", "mariadb"
__backend__ = True
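    # several inherited tests are redecorated as expected failures here;
    # on MySQL/MariaDB batch mode falls through to plain ALTER
    # statements, which can't express some of the changes the SQLite
    # recreate workflow handles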
def _datetime_server_default_fixture(self):
return func.current_timestamp()
@exclusions.fails()
def test_drop_pk_col_readd_pk_col(self):
super().test_drop_pk_col_readd_pk_col()
@exclusions.fails()
def test_drop_pk_col_readd_col_also_pk_const(self):
super().test_drop_pk_col_readd_col_also_pk_const()
@exclusions.fails()
def test_rename_column_pk(self):
super().test_rename_column_pk()
@exclusions.fails()
def test_rename_column(self):
super().test_rename_column()
@exclusions.fails()
def test_change_type(self):
super().test_change_type()
def test_create_drop_index(self):
super().test_create_drop_index()
# fails on mariadb 10.2, succeeds on 10.3
@exclusions.fails_if(config.requirements.mysql_check_col_name_change)
def test_rename_column_boolean(self):
super().test_rename_column_boolean()
def test_change_type_boolean_to_int(self):
super().test_change_type_boolean_to_int()
def test_change_type_int_to_boolean(self):
super().test_change_type_int_to_boolean()
class BatchRoundTripPostgresqlTest(BatchRoundTripTest):
__only_on__ = "postgresql"
__backend__ = True
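    # as with MySQL above, a number of inherited tests are expected
    # failures here because batch mode uses plain ALTER statements
    # on this backend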
def _native_boolean_fixture(self):
t = Table(
"has_native_bool",
self.metadata,
Column(
"x",
Boolean(create_constraint=True),
server_default="false",
nullable=False,
),
Column("y", Integer),
)
with self.conn.begin():
t.create(self.conn)
def _datetime_server_default_fixture(self):
return func.current_timestamp()
@exclusions.fails()
def test_drop_pk_col_readd_pk_col(self):
super().test_drop_pk_col_readd_pk_col()
@exclusions.fails()
def test_drop_pk_col_readd_col_also_pk_const(self):
super().test_drop_pk_col_readd_col_also_pk_const()
@exclusions.fails()
def test_change_type(self):
super().test_change_type()
def test_create_drop_index(self):
super().test_create_drop_index()
@exclusions.fails()
def test_change_type_int_to_boolean(self):
super().test_change_type_int_to_boolean()
@exclusions.fails()
def test_change_type_boolean_to_int(self):
super().test_change_type_boolean_to_int()
def test_add_col_table_has_native_boolean(self):
self._native_boolean_fixture()
# to ensure test coverage on SQLAlchemy 1.4 and above,
# force the create_constraint flag to True even though it
        # defaults to False in 1.4. This test wants to ensure that the
# "should create" rule is consulted
def listen_for_reflect(inspector, table, column_info):
if isinstance(column_info["type"], Boolean):
column_info["type"].create_constraint = True
with self.op.batch_alter_table(
"has_native_bool",
recreate="always",
reflect_kwargs={
"listeners": [("column_reflect", listen_for_reflect)]
},
) as batch_op:
batch_op.add_column(Column("data", Integer))
insp = inspect(self.conn)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("has_native_bool")
if c["name"] == "data"
],
[Integer],
)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("has_native_bool")
if c["name"] == "x"
],
[Boolean],
)
class OfflineTest(TestBase):
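    """Behavior of batch operations under offline (--sql) mode.

    Batch mode on SQLite normally reflects the target table, which is
    impossible without a connection; these tests assert the graceful
    failure as well as the ``copy_from`` path that avoids reflection.
    """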
@testing.fixture
def no_reflect_batch_fixture(self):
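        """Write a revision whose batch blocks pass ``copy_from``,
        so offline mode needs no table reflection."""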
staging_env()
def go():
self.cfg = cfg = _no_sql_testing_config(dialect="sqlite")
self.a = a = util.rev_id()
script = ScriptDirectory.from_config(cfg)
script.generate_revision(
a, "revision a", refresh=True, head="base"
)
write_script(
script,
a,
"""\
"Rev A"
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import String, Table, MetaData
some_table_up = Table(
"some_table", MetaData(),
Column('id', Integer),
Column('bar', String)
)
some_table_down = Table(
"some_table", MetaData(),
Column('id', Integer),
Column('foo', Integer)
)
def upgrade():
with op.batch_alter_table("some_table", copy_from=some_table_up) as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
def downgrade():
with op.batch_alter_table("some_table", copy_from=some_table_down) as batch_op:
batch_op.drop_column('foo')
batch_op.add_column(Column('bar', String))
""" # noqa: E501
% a,
)
yield go
clear_staging_env()
@testing.fixture
def batch_fixture(self):
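        """Write a revision whose batch blocks rely on reflection,
        parameterized by dialect."""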
staging_env()
def go(dialect):
self.cfg = cfg = _no_sql_testing_config(dialect=dialect)
self.a = a = util.rev_id()
script = ScriptDirectory.from_config(cfg)
script.generate_revision(
a, "revision a", refresh=True, head="base"
)
write_script(
script,
a,
"""\
"Rev A"
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import String
def upgrade():
with op.batch_alter_table("some_table") as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
def downgrade():
with op.batch_alter_table("some_table") as batch_op:
batch_op.drop_column('foo')
batch_op.add_column(Column('bar', String))
"""
% a,
)
yield go
clear_staging_env()
def test_upgrade_non_batch(self, batch_fixture):
batch_fixture("postgresql")
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.a, sql=True)
assert re.search(
r"ALTER TABLE some_table ADD COLUMN foo INTEGER", buf.getvalue()
)
def test_downgrade_non_batch(self, batch_fixture):
batch_fixture("postgresql")
with capture_context_buffer() as buf:
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
assert re.search(
r"ALTER TABLE some_table DROP COLUMN foo", buf.getvalue()
)
def test_upgrade_batch_fails_gracefully(self, batch_fixture):
batch_fixture("sqlite")
with expect_raises_message(
CommandError,
"This operation cannot proceed in --sql mode; batch mode with "
"dialect sqlite requires a live database connection with which "
'to reflect the table "some_table"',
):
command.upgrade(self.cfg, self.a, sql=True)
def test_downgrade_batch_fails_gracefully(self, batch_fixture):
batch_fixture("sqlite")
with expect_raises_message(
CommandError,
"This operation cannot proceed in --sql mode; batch mode with "
"dialect sqlite requires a live database connection with which "
'to reflect the table "some_table"',
):
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
def test_upgrade_batch_no_reflection(self, no_reflect_batch_fixture):
no_reflect_batch_fixture()
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.a, sql=True)
assert re.search(
r"CREATE TABLE _alembic_tmp_some_table", buf.getvalue()
)
def test_downgrade_batch_no_reflection(self, no_reflect_batch_fixture):
no_reflect_batch_fixture()
with capture_context_buffer() as buf:
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
assert re.search(
r"CREATE TABLE _alembic_tmp_some_table", buf.getvalue()
)
| from contextlib import contextmanager
import re
from sqlalchemy import Boolean
from sqlalchemy import CheckConstraint
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import Enum
from sqlalchemy import ForeignKey
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy import func
from sqlalchemy import Index
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy import JSON
from sqlalchemy import MetaData
from sqlalchemy import PrimaryKeyConstraint
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import Text
from sqlalchemy import UniqueConstraint
from sqlalchemy.dialects import sqlite as sqlite_dialect
from sqlalchemy.schema import CreateIndex
from sqlalchemy.schema import CreateTable
from sqlalchemy.sql import column
from sqlalchemy.sql import text
from alembic import command
from alembic import testing
from alembic import util
from alembic.ddl import sqlite
from alembic.operations import Operations
from alembic.operations.batch import ApplyBatchImpl
from alembic.runtime.migration import MigrationContext
from alembic.script import ScriptDirectory
from alembic.testing import assert_raises_message
from alembic.testing import config
from alembic.testing import eq_
from alembic.testing import exclusions
from alembic.testing import expect_raises_message
from alembic.testing import is_
from alembic.testing import mock
from alembic.testing import TestBase
from alembic.testing.env import _no_sql_testing_config
from alembic.testing.env import clear_staging_env
from alembic.testing.env import staging_env
from alembic.testing.env import write_script
from alembic.testing.fixtures import capture_context_buffer
from alembic.testing.fixtures import op_fixture
from alembic.util import CommandError
from alembic.util import exc as alembic_exc
from alembic.util.sqla_compat import _NONE_NAME
from alembic.util.sqla_compat import _safe_commit_connection_transaction
from alembic.util.sqla_compat import _select
from alembic.util.sqla_compat import has_computed
from alembic.util.sqla_compat import has_identity
from alembic.util.sqla_compat import sqla_14
if has_computed:
from alembic.util.sqla_compat import Computed
if has_identity:
from alembic.util.sqla_compat import Identity
class BatchApplyTest(TestBase):
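    """Unit tests for ApplyBatchImpl against an in-memory SQLite impl.

    Each fixture builds a Table, tests apply batch operations to it,
    and ``_assert_impl`` checks the generated DDL without ever
    touching a database.
    """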
def setUp(self):
self.op = Operations(mock.Mock(opts={}))
self.impl = sqlite.SQLiteImpl(
sqlite_dialect.dialect(), None, False, False, None, {}
)
def _simple_fixture(self, table_args=(), table_kwargs={}, **kw):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String(10)),
Column("y", Integer),
)
return ApplyBatchImpl(
self.impl, t, table_args, table_kwargs, False, **kw
)
def _uq_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
UniqueConstraint("y", name="uq1"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_ck_table_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
CheckConstraint("y > 5", name="ck1"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_ck_col_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer, CheckConstraint("y > 5", name="ck1")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _ix_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("x", String()),
Column("y", Integer),
Index("ix1", "y"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _pk_fixture(self):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer),
Column("x", String()),
Column("y", Integer),
PrimaryKeyConstraint("id", name="mypk"),
)
return ApplyBatchImpl(self.impl, t, (), {}, False)
def _literal_ck_fixture(
self, copy_from=None, table_args=(), table_kwargs={}
):
m = MetaData()
if copy_from is not None:
t = copy_from
else:
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
CheckConstraint("email LIKE '%@%'"),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _sql_ck_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
)
t.append_constraint(CheckConstraint(t.c.email.like("%@%")))
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id", Integer, ForeignKey("user.id")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _multi_fk_fixture(self, table_args=(), table_kwargs={}, schema=None):
m = MetaData()
if schema:
schemaarg = "%s." % schema
else:
schemaarg = ""
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id_1", Integer, ForeignKey("%suser.id" % schemaarg)),
Column("user_id_2", Integer, ForeignKey("%suser.id" % schemaarg)),
Column("user_id_3", Integer),
Column("user_id_version", Integer),
ForeignKeyConstraint(
["user_id_3", "user_id_version"],
["%suser.id" % schemaarg, "%suser.id_version" % schemaarg],
),
schema=schema,
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _named_fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("email", String()),
Column("user_id", Integer, ForeignKey("user.id", name="ufk")),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _selfref_fk_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("parent_id", Integer, ForeignKey("tname.id")),
Column("data", String),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _boolean_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _boolean_no_ck_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=False)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _enum_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("thing", Enum("a", "b", "c", create_constraint=True)),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _server_default_fixture(self, table_args=(), table_kwargs={}):
m = MetaData()
t = Table(
"tname",
m,
Column("id", Integer, primary_key=True),
Column("thing", String(), server_default=""),
)
return ApplyBatchImpl(self.impl, t, table_args, table_kwargs, False)
def _assert_impl(
self,
impl,
colnames=None,
ddl_contains=None,
ddl_not_contains=None,
dialect="default",
schema=None,
):
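        """Run the batch impl and assert the full recreate sequence.

        Compiles CREATE TABLE for the temp table, gathers indexes in
        terms of the final table name, then asserts the expected
        statements: CREATE the temp table, INSERT..SELECT from the
        original (with CASTs where a column's type affinity changed),
        DROP the original, and RENAME the temp table into place.
        """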
context = op_fixture(dialect=dialect)
impl._create(context.impl)
if colnames is None:
colnames = ["id", "x", "y"]
eq_(impl.new_table.c.keys(), colnames)
pk_cols = [col for col in impl.new_table.c if col.primary_key]
eq_(list(impl.new_table.primary_key), pk_cols)
create_stmt = str(
CreateTable(impl.new_table).compile(dialect=context.dialect)
)
create_stmt = re.sub(r"[\n\t]", "", create_stmt)
idx_stmt = ""
# create indexes; these should be created in terms of the
# final table name
impl.new_table.name = impl.table.name
for idx in impl._gather_indexes_from_both_tables():
idx_stmt += str(CreateIndex(idx).compile(dialect=context.dialect))
idx_stmt = re.sub(r"[\n\t]", "", idx_stmt)
# revert new table name to the temp name, assertions below
# are looking for the temp name
impl.new_table.name = ApplyBatchImpl._calc_temp_name(impl.table.name)
if ddl_contains:
assert ddl_contains in create_stmt + idx_stmt
if ddl_not_contains:
assert ddl_not_contains not in create_stmt + idx_stmt
expected = [create_stmt]
if schema:
args = {"schema": "%s." % schema}
else:
args = {"schema": ""}
args["temp_name"] = impl.new_table.name
args["colnames"] = ", ".join(
[
impl.new_table.c[name].name
for name in colnames
if name in impl.table.c
]
)
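        # a column whose type affinity changed is expected to appear as
        # CAST(...) in the INSERT..SELECT; SQLAlchemy 1.4+ labels the
        # cast with the column name, older versions anonymously (anon_1)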
args["tname_colnames"] = ", ".join(
"CAST(%(schema)stname.%(name)s AS %(type)s) AS %(cast_label)s"
% {
"schema": args["schema"],
"name": name,
"type": impl.new_table.c[name].type,
"cast_label": name if sqla_14 else "anon_1",
}
if (
impl.new_table.c[name].type._type_affinity
is not impl.table.c[name].type._type_affinity
)
else "%(schema)stname.%(name)s"
% {"schema": args["schema"], "name": name}
for name in colnames
if name in impl.table.c
)
expected.extend(
[
"INSERT INTO %(schema)s%(temp_name)s (%(colnames)s) "
"SELECT %(tname_colnames)s FROM %(schema)stname" % args,
"DROP TABLE %(schema)stname" % args,
"ALTER TABLE %(schema)s%(temp_name)s "
"RENAME TO %(schema)stname" % args,
]
)
if idx_stmt:
expected.append(idx_stmt)
context.assert_(*expected)
return impl.new_table
def test_change_type(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", type_=String)
new_table = self._assert_impl(impl)
assert new_table.c.x.type._type_affinity is String
def test_rename_col(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", name="q")
new_table = self._assert_impl(impl)
eq_(new_table.c.x.name, "q")
def test_rename_col_w_index(self):
impl = self._ix_fixture()
impl.alter_column("tname", "y", name="y2")
new_table = self._assert_impl(
impl, ddl_contains="CREATE INDEX ix1 ON tname (y2)"
)
eq_(new_table.c.y.name, "y2")
def test_rename_col_w_uq(self):
impl = self._uq_fixture()
impl.alter_column("tname", "y", name="y2")
new_table = self._assert_impl(impl, ddl_contains="UNIQUE (y2)")
eq_(new_table.c.y.name, "y2")
def test_alter_column_comment(self):
impl = self._simple_fixture()
impl.alter_column("tname", "x", comment="some comment")
new_table = self._assert_impl(impl)
eq_(new_table.c.x.comment, "some comment")
def test_add_column_comment(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("q", Integer, comment="some comment"))
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "q"])
eq_(new_table.c.q.comment, "some comment")
def test_rename_col_boolean(self):
impl = self._boolean_fixture()
impl.alter_column("tname", "flag", name="bflag")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (bflag IN (0, 1)",
colnames=["id", "flag"],
)
eq_(new_table.c.flag.name, "bflag")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
1,
)
def test_change_type_schematype_to_non(self):
impl = self._boolean_fixture()
impl.alter_column("tname", "flag", type_=Integer)
new_table = self._assert_impl(
impl, colnames=["id", "flag"], ddl_not_contains="CHECK"
)
assert new_table.c.flag.type._type_affinity is Integer
# NOTE: we can't do test_change_type_non_to_schematype
# at this level because the "add_constraint" part of this
# comes from toimpl.py, which we aren't testing here
def test_rename_col_boolean_no_ck(self):
impl = self._boolean_no_ck_fixture()
impl.alter_column("tname", "flag", name="bflag")
new_table = self._assert_impl(
impl, ddl_not_contains="CHECK", colnames=["id", "flag"]
)
eq_(new_table.c.flag.name, "bflag")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
0,
)
def test_rename_col_enum(self):
impl = self._enum_fixture()
impl.alter_column("tname", "thing", name="thang")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (thang IN ('a', 'b', 'c')",
colnames=["id", "thing"],
)
eq_(new_table.c.thing.name, "thang")
eq_(
len(
[
const
for const in new_table.constraints
if isinstance(const, CheckConstraint)
]
),
1,
)
def test_rename_col_literal_ck(self):
impl = self._literal_ck_fixture()
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
# note this is wrong, we don't dig into the SQL
impl,
ddl_contains="CHECK (email LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_rename_col_literal_ck_workaround(self):
impl = self._literal_ck_fixture(
copy_from=Table(
"tname",
MetaData(),
Column("id", Integer, primary_key=True),
Column("email", String),
),
table_args=[CheckConstraint("emol LIKE '%@%'")],
)
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (emol LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_rename_col_sql_ck(self):
impl = self._sql_ck_fixture()
impl.alter_column("tname", "email", name="emol")
new_table = self._assert_impl(
impl,
ddl_contains="CHECK (emol LIKE '%@%')",
colnames=["id", "email"],
)
eq_(
len(
[
c
for c in new_table.constraints
if isinstance(c, CheckConstraint)
]
),
1,
)
eq_(new_table.c.email.name, "emol")
def test_add_col(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col)
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "g"])
eq_(new_table.c.g.name, "g")
def test_partial_reordering(self):
impl = self._simple_fixture(partial_reordering=[("x", "id", "y")])
new_table = self._assert_impl(impl, colnames=["x", "id", "y"])
eq_(new_table.c.x.name, "x")
def test_add_col_partial_reordering(self):
impl = self._simple_fixture(partial_reordering=[("id", "x", "g", "y")])
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col)
new_table = self._assert_impl(impl, colnames=["id", "x", "g", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", col, insert_before="x")
new_table = self._assert_impl(impl, colnames=["id", "g", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_beginning(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_before="id")
new_table = self._assert_impl(impl, colnames=["g", "id", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_middle(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_before="y")
new_table = self._assert_impl(impl, colnames=["id", "x", "g", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_middle(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
new_table = self._assert_impl(impl, colnames=["id", "g", "x", "y"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_penultimate(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="x")
self._assert_impl(impl, colnames=["id", "x", "g", "y"])
def test_add_col_insert_after_end(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="y")
new_table = self._assert_impl(impl, colnames=["id", "x", "y", "g"])
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_plus_no_order(self):
impl = self._simple_fixture()
# operations.add_column produces a table
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer))
new_table = self._assert_impl(
impl, colnames=["id", "g", "x", "y", "q"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_no_order_plus_insert_after(self):
impl = self._simple_fixture()
col = Column("g", Integer)
# operations.add_column produces a table
t = self.op.schema_obj.table("tname", col) # noqa
impl.add_column("tname", Column("q", Integer))
impl.add_column("tname", Column("g", Integer), insert_after="id")
new_table = self._assert_impl(
impl, colnames=["id", "g", "x", "y", "q"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_insert_after_another_insert(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer), insert_after="g")
new_table = self._assert_impl(
impl, colnames=["id", "g", "q", "x", "y"]
)
eq_(new_table.c.g.name, "g")
def test_add_col_insert_before_another_insert(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("g", Integer), insert_after="id")
impl.add_column("tname", Column("q", Integer), insert_before="g")
new_table = self._assert_impl(
impl, colnames=["id", "q", "g", "x", "y"]
)
eq_(new_table.c.g.name, "g")
def test_add_server_default(self):
impl = self._simple_fixture()
impl.alter_column("tname", "y", server_default="10")
new_table = self._assert_impl(impl, ddl_contains="DEFAULT '10'")
eq_(new_table.c.y.server_default.arg, "10")
def test_drop_server_default(self):
impl = self._server_default_fixture()
impl.alter_column("tname", "thing", server_default=None)
new_table = self._assert_impl(
impl, colnames=["id", "thing"], ddl_not_contains="DEFAULT"
)
eq_(new_table.c.thing.server_default, None)
def test_rename_col_pk(self):
impl = self._simple_fixture()
impl.alter_column("tname", "id", name="foobar")
new_table = self._assert_impl(
impl, ddl_contains="PRIMARY KEY (foobar)"
)
eq_(new_table.c.id.name, "foobar")
eq_(list(new_table.primary_key), [new_table.c.id])
def test_rename_col_fk(self):
impl = self._fk_fixture()
impl.alter_column("tname", "user_id", name="foobar")
new_table = self._assert_impl(
impl,
colnames=["id", "email", "user_id"],
ddl_contains='FOREIGN KEY(foobar) REFERENCES "user" (id)',
)
eq_(new_table.c.user_id.name, "foobar")
eq_(
list(new_table.c.user_id.foreign_keys)[0]._get_colspec(), "user.id"
)
def test_regen_multi_fk(self):
impl = self._multi_fk_fixture()
self._assert_impl(
impl,
colnames=[
"id",
"email",
"user_id_1",
"user_id_2",
"user_id_3",
"user_id_version",
],
ddl_contains="FOREIGN KEY(user_id_3, user_id_version) "
'REFERENCES "user" (id, id_version)',
)
def test_regen_multi_fk_schema(self):
impl = self._multi_fk_fixture(schema="foo_schema")
self._assert_impl(
impl,
colnames=[
"id",
"email",
"user_id_1",
"user_id_2",
"user_id_3",
"user_id_version",
],
ddl_contains="FOREIGN KEY(user_id_3, user_id_version) "
'REFERENCES foo_schema."user" (id, id_version)',
schema="foo_schema",
)
    def test_do_not_add_existing_columns(self):
impl = self._multi_fk_fixture()
meta = impl.table.metadata
cid = Column("id", Integer())
user = Table("user", meta, cid)
fk = [
c
for c in impl.unnamed_constraints
if isinstance(c, ForeignKeyConstraint)
]
impl._setup_referent(meta, fk[0])
is_(user.c.id, cid)
def test_drop_col(self):
impl = self._simple_fixture()
impl.drop_column("tname", column("x"))
new_table = self._assert_impl(impl, colnames=["id", "y"])
assert "y" in new_table.c
assert "x" not in new_table.c
def test_drop_col_remove_pk(self):
impl = self._simple_fixture()
impl.drop_column("tname", column("id"))
new_table = self._assert_impl(
impl, colnames=["x", "y"], ddl_not_contains="PRIMARY KEY"
)
assert "y" in new_table.c
assert "id" not in new_table.c
assert not new_table.primary_key
def test_drop_col_remove_fk(self):
impl = self._fk_fixture()
impl.drop_column("tname", column("user_id"))
new_table = self._assert_impl(
impl, colnames=["id", "email"], ddl_not_contains="FOREIGN KEY"
)
assert "user_id" not in new_table.c
assert not new_table.foreign_keys
def test_drop_col_retain_fk(self):
impl = self._fk_fixture()
impl.drop_column("tname", column("email"))
new_table = self._assert_impl(
impl,
colnames=["id", "user_id"],
ddl_contains='FOREIGN KEY(user_id) REFERENCES "user" (id)',
)
assert "email" not in new_table.c
assert new_table.c.user_id.foreign_keys
def test_drop_col_retain_fk_selfref(self):
impl = self._selfref_fk_fixture()
impl.drop_column("tname", column("data"))
new_table = self._assert_impl(impl, colnames=["id", "parent_id"])
assert "data" not in new_table.c
assert new_table.c.parent_id.foreign_keys
def test_add_fk(self):
impl = self._simple_fixture()
impl.add_column("tname", Column("user_id", Integer))
fk = self.op.schema_obj.foreign_key_constraint(
"fk1", "tname", "user", ["user_id"], ["id"]
)
impl.add_constraint(fk)
new_table = self._assert_impl(
impl,
colnames=["id", "x", "y", "user_id"],
ddl_contains="CONSTRAINT fk1 FOREIGN KEY(user_id) "
'REFERENCES "user" (id)',
)
eq_(
list(new_table.c.user_id.foreign_keys)[0]._get_colspec(), "user.id"
)
def test_drop_fk(self):
impl = self._named_fk_fixture()
fk = ForeignKeyConstraint([], [], name="ufk")
impl.drop_constraint(fk)
new_table = self._assert_impl(
impl,
colnames=["id", "email", "user_id"],
ddl_not_contains="CONSTRAINT ufk",
)
eq_(list(new_table.foreign_keys), [])
def test_add_uq(self):
impl = self._simple_fixture()
uq = self.op.schema_obj.unique_constraint("uq1", "tname", ["y"])
impl.add_constraint(uq)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CONSTRAINT uq1 UNIQUE",
)
def test_drop_uq(self):
impl = self._uq_fixture()
uq = self.op.schema_obj.unique_constraint("uq1", "tname", ["y"])
impl.drop_constraint(uq)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT uq1 UNIQUE",
)
def test_add_ck_unnamed(self):
"""test for #1195"""
impl = self._simple_fixture()
ck = self.op.schema_obj.check_constraint(_NONE_NAME, "tname", "y > 5")
impl.add_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CHECK (y > 5)",
)
def test_add_ck(self):
impl = self._simple_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.add_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_drop_ck_table(self):
impl = self._named_ck_table_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.drop_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_drop_ck_col(self):
impl = self._named_ck_col_fixture()
ck = self.op.schema_obj.check_constraint("ck1", "tname", "y > 5")
impl.drop_constraint(ck)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT ck1 CHECK (y > 5)",
)
def test_create_index(self):
impl = self._simple_fixture()
ix = self.op.schema_obj.index("ix1", "tname", ["y"])
impl.create_index(ix)
self._assert_impl(
impl, colnames=["id", "x", "y"], ddl_contains="CREATE INDEX ix1"
)
def test_drop_index(self):
impl = self._ix_fixture()
ix = self.op.schema_obj.index("ix1", "tname", ["y"])
impl.drop_index(ix)
self._assert_impl(
impl,
colnames=["id", "x", "y"],
ddl_not_contains="CONSTRAINT uq1 UNIQUE",
)
def test_add_table_opts(self):
impl = self._simple_fixture(table_kwargs={"mysql_engine": "InnoDB"})
self._assert_impl(impl, ddl_contains="ENGINE=InnoDB", dialect="mysql")
def test_drop_pk(self):
impl = self._pk_fixture()
pk = self.op.schema_obj.primary_key_constraint("mypk", "tname", ["id"])
impl.drop_constraint(pk)
new_table = self._assert_impl(impl)
assert not new_table.c.id.primary_key
assert not len(new_table.primary_key)
class BatchAPITest(TestBase):
@contextmanager
def _fixture(self, schema=None):
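        # Drive a batch context (recreate="never") against fully mocked
        # schema objects, so tests can assert on how constraints and Tables
        # get constructed rather than on emitted DDL.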
migration_context = mock.Mock(
opts={},
impl=mock.MagicMock(__dialect__="sqlite", connection=object()),
)
op = Operations(migration_context)
batch = op.batch_alter_table(
"tname", recreate="never", schema=schema
).__enter__()
mock_schema = mock.MagicMock()
with mock.patch("alembic.operations.schemaobj.sa_schema", mock_schema):
yield batch
batch.impl.flush()
self.mock_schema = mock_schema
def test_drop_col(self):
with self._fixture() as batch:
batch.drop_column("q")
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.drop_column(
"tname", self.mock_schema.Column(), schema=None
)
],
)
def test_add_col(self):
column = Column("w", String(50))
with self._fixture() as batch:
batch.add_column(column)
assert (
mock.call.add_column("tname", column, schema=None)
in batch.impl.operations.impl.mock_calls
)
def test_create_fk(self):
with self._fixture() as batch:
batch.create_foreign_key("myfk", "user", ["x"], ["y"])
eq_(
self.mock_schema.ForeignKeyConstraint.mock_calls,
[
mock.call(
["x"],
["user.y"],
onupdate=None,
ondelete=None,
name="myfk",
initially=None,
deferrable=None,
match=None,
)
],
)
eq_(
self.mock_schema.Table.mock_calls,
[
mock.call(
"user",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call(
"tname",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call().append_constraint(
self.mock_schema.ForeignKeyConstraint()
),
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.ForeignKeyConstraint()
)
],
)
def test_create_fk_schema(self):
with self._fixture(schema="foo") as batch:
batch.create_foreign_key("myfk", "user", ["x"], ["y"])
eq_(
self.mock_schema.ForeignKeyConstraint.mock_calls,
[
mock.call(
["x"],
["user.y"],
onupdate=None,
ondelete=None,
name="myfk",
initially=None,
deferrable=None,
match=None,
)
],
)
eq_(
self.mock_schema.Table.mock_calls,
[
mock.call(
"user",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema=None,
),
mock.call(
"tname",
self.mock_schema.MetaData(),
self.mock_schema.Column(),
schema="foo",
),
mock.call().append_constraint(
self.mock_schema.ForeignKeyConstraint()
),
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.ForeignKeyConstraint()
)
],
)
def test_create_uq(self):
with self._fixture() as batch:
batch.create_unique_constraint("uq1", ["a", "b"])
eq_(
self.mock_schema.Table().c.__getitem__.mock_calls,
[mock.call("a"), mock.call("b")],
)
eq_(
self.mock_schema.UniqueConstraint.mock_calls,
[
mock.call(
self.mock_schema.Table().c.__getitem__(),
self.mock_schema.Table().c.__getitem__(),
name="uq1",
)
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.add_constraint(self.mock_schema.UniqueConstraint())],
)
def test_create_pk(self):
with self._fixture() as batch:
batch.create_primary_key("pk1", ["a", "b"])
eq_(
self.mock_schema.Table().c.__getitem__.mock_calls,
[mock.call("a"), mock.call("b")],
)
eq_(
self.mock_schema.PrimaryKeyConstraint.mock_calls,
[
mock.call(
self.mock_schema.Table().c.__getitem__(),
self.mock_schema.Table().c.__getitem__(),
name="pk1",
)
],
)
eq_(
batch.impl.operations.impl.mock_calls,
[
mock.call.add_constraint(
self.mock_schema.PrimaryKeyConstraint()
)
],
)
def test_create_check(self):
expr = text("a > b")
with self._fixture() as batch:
batch.create_check_constraint("ck1", expr)
eq_(
self.mock_schema.CheckConstraint.mock_calls,
[mock.call(expr, name="ck1")],
)
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.add_constraint(self.mock_schema.CheckConstraint())],
)
def test_drop_constraint(self):
with self._fixture() as batch:
batch.drop_constraint("uq1")
eq_(self.mock_schema.Constraint.mock_calls, [mock.call(name="uq1")])
eq_(
batch.impl.operations.impl.mock_calls,
[mock.call.drop_constraint(self.mock_schema.Constraint())],
)
class CopyFromTest(TestBase):
def _fixture(self):
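        # Set up a concrete ``foo`` table plus an as_sql (offline) op
        # context; batch operations then render plain DDL strings that the
        # tests assert on.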
self.metadata = MetaData()
self.table = Table(
"foo",
self.metadata,
Column("id", Integer, primary_key=True),
Column("data", String(50)),
Column("x", Integer),
)
context = op_fixture(dialect="sqlite", as_sql=True)
self.op = Operations(context)
return context
def test_change_type(self):
context = self._fixture()
self.table.append_column(Column("toj", Text))
self.table.append_column(Column("fromj", JSON))
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column("data", type_=Integer)
batch_op.alter_column("toj", type_=JSON)
batch_op.alter_column("fromj", type_=Text)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data INTEGER, x INTEGER, toj JSON, fromj TEXT, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, toj, fromj) "
"SELECT foo.id, "
"CAST(foo.data AS INTEGER) AS %s, foo.x, foo.toj, "
"CAST(foo.fromj AS TEXT) AS %s FROM foo"
% (
("data" if sqla_14 else "anon_1"),
("fromj" if sqla_14 else "anon_2"),
),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_type_from_schematype(self):
context = self._fixture()
self.table.append_column(
Column("y", Boolean(create_constraint=True, name="ck1"))
)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
"y",
type_=Integer,
existing_type=Boolean(create_constraint=True, name="ck1"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, y INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, y) SELECT foo.id, "
"foo.data, foo.x, CAST(foo.y AS INTEGER) AS %s FROM foo"
% (("y" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_name_from_existing_variant_type(self):
"""test #982"""
context = self._fixture()
self.table.append_column(
Column("y", Text().with_variant(Text(10000), "mysql"))
)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
column_name="y",
new_column_name="q",
existing_type=Text().with_variant(Text(10000), "mysql"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, q TEXT, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x, q) "
"SELECT foo.id, foo.data, foo.x, foo.y FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_change_type_to_schematype(self):
context = self._fixture()
self.table.append_column(Column("y", Integer))
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column(
"y",
existing_type=Integer,
type_=Boolean(create_constraint=True, name="ck1"),
)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, y BOOLEAN, PRIMARY KEY (id), "
"CONSTRAINT ck1 CHECK (y IN (0, 1)))",
"INSERT INTO _alembic_tmp_foo (id, data, x, y) SELECT foo.id, "
"foo.data, foo.x, CAST(foo.y AS BOOLEAN) AS %s FROM foo"
% (("y" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_create_drop_index_w_always(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table, recreate="always"
) as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), "
"x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) "
"SELECT foo.id, foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
"CREATE UNIQUE INDEX ix_data ON foo (data)",
)
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table, recreate="always"
) as batch_op:
batch_op.drop_index("ix_data")
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR(50), x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) "
"SELECT foo.id, foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
def test_create_drop_index_wo_always(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_("CREATE UNIQUE INDEX ix_data ON foo (data)")
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.drop_index("ix_data")
context.assert_("DROP INDEX ix_data")
def test_create_drop_index_w_other_ops(self):
context = self._fixture()
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.alter_column("data", type_=Integer)
batch_op.create_index("ix_data", ["data"], unique=True)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data INTEGER, x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) SELECT foo.id, "
"CAST(foo.data AS INTEGER) AS %s, foo.x FROM foo"
% (("data" if sqla_14 else "anon_1"),),
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
"CREATE UNIQUE INDEX ix_data ON foo (data)",
)
context.clear_assertions()
Index("ix_data", self.table.c.data, unique=True)
with self.op.batch_alter_table(
"foo", copy_from=self.table
) as batch_op:
batch_op.drop_index("ix_data")
batch_op.alter_column("data", type_=String)
context.assert_(
"CREATE TABLE _alembic_tmp_foo (id INTEGER NOT NULL, "
"data VARCHAR, x INTEGER, PRIMARY KEY (id))",
"INSERT INTO _alembic_tmp_foo (id, data, x) SELECT foo.id, "
"foo.data, foo.x FROM foo",
"DROP TABLE foo",
"ALTER TABLE _alembic_tmp_foo RENAME TO foo",
)
class BatchRoundTripTest(TestBase):
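    # Round-trip tests: run real batch operations against a live database
    # (SQLite here; MySQL and PostgreSQL subclasses below) seeded in
    # setUp(), then verify the schema and data survive the table recreate.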
__only_on__ = "sqlite"
def setUp(self):
self.conn = config.db.connect()
self.metadata = MetaData()
t1 = Table(
"foo",
self.metadata,
Column("id", Integer, primary_key=True),
Column("data", String(50)),
Column("x", Integer),
mysql_engine="InnoDB",
)
with self.conn.begin():
t1.create(self.conn)
self.conn.execute(
t1.insert(),
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
],
)
context = MigrationContext.configure(self.conn)
self.op = Operations(context)
def tearDown(self):
        # Why commit? Because SQLite has inconsistent treatment of
        # transactional DDL. A test that runs CREATE TABLE and then
        # ALTER TABLE to change the name of that table will end up
        # committing the CREATE TABLE but not the ALTER. As batch mode
        # does this with a temp table name that's not even in the
        # metadata collection, we don't have an explicit drop for it
        # (though we could do that too). Calling commit means the
        # ALTER will go through and drop_all() will then catch it.
_safe_commit_connection_transaction(self.conn)
with self.conn.begin():
self.metadata.drop_all(self.conn)
self.conn.close()
@contextmanager
def _sqlite_referential_integrity(self):
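        # Turn on SQLite foreign key enforcement for the duration of a test
        # that intentionally exercises batch mode under referential
        # integrity, then switch it back off and clean up leftovers.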
self.conn.exec_driver_sql("PRAGMA foreign_keys=ON")
try:
yield
finally:
self.conn.exec_driver_sql("PRAGMA foreign_keys=OFF")
# as these tests are typically intentional fails, clean out
# tables left over
m = MetaData()
m.reflect(self.conn)
with self.conn.begin():
m.drop_all(self.conn)
def _no_pk_fixture(self):
with self.conn.begin():
nopk = Table(
"nopk",
self.metadata,
Column("a", Integer),
Column("b", Integer),
Column("c", Integer),
mysql_engine="InnoDB",
)
nopk.create(self.conn)
self.conn.execute(
nopk.insert(),
[{"a": 1, "b": 2, "c": 3}, {"a": 2, "b": 4, "c": 5}],
)
return nopk
def _table_w_index_fixture(self):
with self.conn.begin():
t = Table(
"t_w_ix",
self.metadata,
Column("id", Integer, primary_key=True),
Column("thing", Integer),
Column("data", String(20)),
)
Index("ix_thing", t.c.thing)
t.create(self.conn)
return t
def _boolean_fixture(self):
with self.conn.begin():
t = Table(
"hasbool",
self.metadata,
Column("x", Boolean(create_constraint=True, name="ck1")),
Column("y", Integer),
)
t.create(self.conn)
def _timestamp_fixture(self):
with self.conn.begin():
t = Table("hasts", self.metadata, Column("x", DateTime()))
t.create(self.conn)
return t
def _ck_constraint_fixture(self):
with self.conn.begin():
t = Table(
"ck_table",
self.metadata,
Column("id", Integer, nullable=False),
CheckConstraint("id is not NULL", name="ck"),
)
t.create(self.conn)
return t
def _datetime_server_default_fixture(self):
return func.datetime("now", "localtime")
def _timestamp_w_expr_default_fixture(self):
with self.conn.begin():
t = Table(
"hasts",
self.metadata,
Column(
"x",
DateTime(),
server_default=self._datetime_server_default_fixture(),
nullable=False,
),
)
t.create(self.conn)
return t
def _int_to_boolean_fixture(self):
with self.conn.begin():
t = Table("hasbool", self.metadata, Column("x", Integer))
t.create(self.conn)
def test_add_constraint_type(self):
"""test for #1195."""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(Column("q", Boolean(create_constraint=True)))
insp = inspect(self.conn)
assert {
c["type"]._type_affinity
for c in insp.get_columns("foo")
if c["name"] == "q"
}.intersection([Boolean, Integer])
def test_change_type_boolean_to_int(self):
self._boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.alter_column(
"x",
type_=Integer,
existing_type=Boolean(create_constraint=True, name="ck1"),
)
insp = inspect(self.conn)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Integer],
)
def test_no_net_change_timestamp(self):
t = self._timestamp_fixture()
import datetime
with self.conn.begin():
self.conn.execute(
t.insert(), {"x": datetime.datetime(2012, 5, 18, 15, 32, 5)}
)
with self.op.batch_alter_table("hasts") as batch_op:
batch_op.alter_column("x", type_=DateTime())
eq_(
self.conn.execute(_select(t.c.x)).fetchall(),
[(datetime.datetime(2012, 5, 18, 15, 32, 5),)],
)
def test_no_net_change_timestamp_w_default(self):
t = self._timestamp_w_expr_default_fixture()
with self.op.batch_alter_table("hasts") as batch_op:
batch_op.alter_column(
"x",
type_=DateTime(),
nullable=False,
server_default=self._datetime_server_default_fixture(),
)
with self.conn.begin():
self.conn.execute(t.insert())
res = self.conn.execute(_select(t.c.x))
if sqla_14:
assert res.scalar_one_or_none() is not None
else:
row = res.fetchone()
assert row["x"] is not None
def test_drop_col_schematype(self):
self._boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.drop_column(
"x", existing_type=Boolean(create_constraint=True, name="ck1")
)
insp = inspect(self.conn)
assert "x" not in (c["name"] for c in insp.get_columns("hasbool"))
def test_change_type_int_to_boolean(self):
self._int_to_boolean_fixture()
with self.op.batch_alter_table("hasbool") as batch_op:
batch_op.alter_column(
"x", type_=Boolean(create_constraint=True, name="ck1")
)
insp = inspect(self.conn)
if exclusions.against(config, "sqlite"):
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Boolean],
)
elif exclusions.against(config, "mysql"):
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("hasbool")
if c["name"] == "x"
],
[Integer],
)
def _assert_data(self, data, tablename="foo"):
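        # Compare the full contents of ``tablename``, row by row as dicts,
        # against the expected ``data``.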
res = self.conn.execute(text("select * from %s" % tablename))
if sqla_14:
res = res.mappings()
eq_([dict(row) for row in res], data)
def test_ix_existing(self):
self._table_w_index_fixture()
with self.op.batch_alter_table("t_w_ix") as batch_op:
batch_op.alter_column("data", type_=String(30))
batch_op.create_index("ix_data", ["data"])
insp = inspect(self.conn)
eq_(
{
(ix["name"], tuple(ix["column_names"]))
for ix in insp.get_indexes("t_w_ix")
},
{("ix_data", ("data",)), ("ix_thing", ("thing",))},
)
def test_fk_points_to_me_auto(self):
self._test_fk_points_to_me("auto")
    # in particular, this tests that the failures
    # on PG and MySQL result in recovery of the batch system,
    # e.g. that the _alembic_tmp_* temp table is dropped
@config.requirements.no_referential_integrity
def test_fk_points_to_me_recreate(self):
self._test_fk_points_to_me("always")
@exclusions.only_on("sqlite")
@exclusions.fails(
"intentionally asserting that this "
"doesn't work w/ pragma foreign keys"
)
def test_fk_points_to_me_sqlite_refinteg(self):
with self._sqlite_referential_integrity():
self._test_fk_points_to_me("auto")
def _test_fk_points_to_me(self, recreate):
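        # A ``bar`` table holds an FK into ``foo``; after batch-renaming a
        # foo column with the given ``recreate`` strategy, the inbound FK
        # must still point at foo.id.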
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("foo_id", Integer, ForeignKey("foo.id")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "foo_id": 3})
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.alter_column(
"data", new_column_name="newdata", existing_type=String(50)
)
insp = inspect(self.conn)
eq_(
[
(
key["referred_table"],
key["referred_columns"],
key["constrained_columns"],
)
for key in insp.get_foreign_keys("bar")
],
[("foo", ["id"], ["foo_id"])],
)
def test_selfref_fk_auto(self):
self._test_selfref_fk("auto")
@config.requirements.no_referential_integrity
def test_selfref_fk_recreate(self):
self._test_selfref_fk("always")
@exclusions.only_on("sqlite")
@exclusions.fails(
"intentionally asserting that this "
"doesn't work w/ pragma foreign keys"
)
def test_selfref_fk_sqlite_refinteg(self):
with self._sqlite_referential_integrity():
self._test_selfref_fk("auto")
def _test_selfref_fk(self, recreate):
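        # Same idea for a self-referential FK: bar.bar_id -> bar.id must
        # survive a batch rename of an unrelated column.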
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("bar_id", Integer, ForeignKey("bar.id")),
Column("data", String(50)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(
bar.insert(), {"id": 1, "data": "x", "bar_id": None}
)
self.conn.execute(
bar.insert(), {"id": 2, "data": "y", "bar_id": 1}
)
with self.op.batch_alter_table("bar", recreate=recreate) as batch_op:
batch_op.alter_column(
"data", new_column_name="newdata", existing_type=String(50)
)
insp = inspect(self.conn)
eq_(
[
(
key["referred_table"],
key["referred_columns"],
key["constrained_columns"],
)
for key in insp.get_foreign_keys("bar")
],
[("bar", ["id"], ["bar_id"])],
)
def test_change_type(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("data", type_=Integer)
self._assert_data(
[
{"id": 1, "data": 0, "x": 5},
{"id": 2, "data": 22, "x": 6},
{"id": 3, "data": 8, "x": 7},
{"id": 4, "data": 9, "x": 8},
{"id": 5, "data": 0, "x": 9},
]
)
def test_drop_column(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("data")
self._assert_data(
[
{"id": 1, "x": 5},
{"id": 2, "x": 6},
{"id": 3, "x": 7},
{"id": 4, "x": 8},
{"id": 5, "x": 9},
]
)
def test_drop_pk_col_readd_col(self):
# drop a column, add it back without primary_key=True, should no
# longer be in the constraint
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer))
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], [])
def test_drop_pk_col_readd_pk_col(self):
# drop a column, add it back with primary_key=True, should remain
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer, primary_key=True))
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], ["id"])
def test_drop_pk_col_readd_col_also_pk_const(self):
# drop a column, add it back without primary_key=True, but then
        # also make a new PK constraint that includes it, should remain
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
batch_op.add_column(Column("id", Integer))
batch_op.create_primary_key("newpk", ["id"])
pk_const = inspect(self.conn).get_pk_constraint("foo")
eq_(pk_const["constrained_columns"], ["id"])
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_pk_constraint(self, recreate):
self._no_pk_fixture()
with self.op.batch_alter_table("nopk", recreate=recreate) as batch_op:
batch_op.create_primary_key("newpk", ["a", "b"])
pk_const = inspect(self.conn).get_pk_constraint("nopk")
with config.requirements.reflects_pk_names.fail_if():
eq_(pk_const["name"], "newpk")
eq_(pk_const["constrained_columns"], ["a", "b"])
@testing.combinations(("always",), ("auto",), argnames="recreate")
@config.requirements.check_constraint_reflection
def test_add_ck_constraint(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.create_check_constraint("newck", text("x > 0"))
ck_consts = inspect(self.conn).get_check_constraints("foo")
ck_consts[0]["sqltext"] = re.sub(
r"[\'\"`\(\)]", "", ck_consts[0]["sqltext"]
)
for ck in ck_consts:
ck.pop("comment", None)
eq_(ck_consts, [{"sqltext": "x > 0", "name": "newck"}])
@testing.combinations(("always",), ("auto",), argnames="recreate")
@config.requirements.check_constraint_reflection
def test_drop_ck_constraint(self, recreate):
self._ck_constraint_fixture()
with self.op.batch_alter_table(
"ck_table", recreate=recreate
) as batch_op:
batch_op.drop_constraint("ck", type_="check")
ck_consts = inspect(self.conn).get_check_constraints("ck_table")
eq_(ck_consts, [])
@config.requirements.check_constraint_reflection
def test_drop_ck_constraint_legacy_type(self):
self._ck_constraint_fixture()
with self.op.batch_alter_table(
"ck_table", recreate="always"
) as batch_op:
# matches the docs that were written for this originally
batch_op.drop_constraint("ck", "check")
ck_consts = inspect(self.conn).get_check_constraints("ck_table")
eq_(ck_consts, [])
@config.requirements.unnamed_constraints
def test_drop_foreign_key(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("foo_id", Integer, ForeignKey("foo.id")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "foo_id": 3})
naming_convention = {
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s"
}
with self.op.batch_alter_table(
"bar", naming_convention=naming_convention
) as batch_op:
batch_op.drop_constraint("fk_bar_foo_id_foo", type_="foreignkey")
eq_(inspect(self.conn).get_foreign_keys("bar"), [])
def test_drop_column_fk_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.drop_column("data")
self._assert_data(
[
{"id": 1, "x": 5},
{"id": 2, "x": 6},
{"id": 3, "x": 7},
{"id": 4, "x": 8},
{"id": 5, "x": 9},
]
)
def _assert_table_comment(self, tname, comment):
insp = inspect(self.conn)
tcomment = insp.get_table_comment(tname)
eq_(tcomment, {"text": comment})
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_uq(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.create_unique_constraint("newuk", ["x"])
uq_consts = inspect(self.conn).get_unique_constraints("foo")
eq_(
[
{"name": uc["name"], "column_names": uc["column_names"]}
for uc in uq_consts
],
[{"name": "newuk", "column_names": ["x"]}],
)
@testing.combinations(("always",), ("auto",), argnames="recreate")
def test_add_uq_plus_col(self, recreate):
with self.op.batch_alter_table("foo", recreate=recreate) as batch_op:
batch_op.add_column(Column("y", Integer))
batch_op.create_unique_constraint("newuk", ["x", "y"])
uq_consts = inspect(self.conn).get_unique_constraints("foo")
eq_(
[
{"name": uc["name"], "column_names": uc["column_names"]}
for uc in uq_consts
],
[{"name": "newuk", "column_names": ["x", "y"]}],
)
@config.requirements.comments
def test_add_table_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment("some comment")
self._assert_table_comment("foo", "some comment")
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment(
"some new comment", existing_comment="some comment"
)
self._assert_table_comment("foo", "some new comment")
@config.requirements.comments
def test_drop_table_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.create_table_comment("some comment")
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_table_comment(existing_comment="some comment")
self._assert_table_comment("foo", None)
def _assert_column_comment(self, tname, cname, comment):
insp = inspect(self.conn)
cols = {col["name"]: col for col in insp.get_columns(tname)}
eq_(cols[cname]["comment"], comment)
@config.requirements.comments
def test_add_column_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(Column("y", Integer, comment="some comment"))
self._assert_column_comment("foo", "y", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "y": None},
{"id": 2, "data": "22", "x": 6, "y": None},
{"id": 3, "data": "8.5", "x": 7, "y": None},
{"id": 4, "data": "9.46", "x": 8, "y": None},
{"id": 5, "data": "d5", "x": 9, "y": None},
]
)
@config.requirements.comments
def test_add_column_comment_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(Column("y", Integer, comment="some comment"))
self._assert_column_comment("foo", "y", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "y": None},
{"id": 2, "data": "22", "x": 6, "y": None},
{"id": 3, "data": "8.5", "x": 7, "y": None},
{"id": 4, "data": "9.46", "x": 8, "y": None},
{"id": 5, "data": "d5", "x": 9, "y": None},
]
)
@config.requirements.comments
def test_alter_column_comment(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column(
"x", existing_type=Integer(), comment="some comment"
)
self._assert_column_comment("foo", "x", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
@config.requirements.comments
def test_alter_column_comment_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.alter_column("x", comment="some comment")
self._assert_column_comment("foo", "x", "some comment")
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
def test_rename_column(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("x", new_column_name="y")
self._assert_data(
[
{"id": 1, "data": "d1", "y": 5},
{"id": 2, "data": "22", "y": 6},
{"id": 3, "data": "8.5", "y": 7},
{"id": 4, "data": "9.46", "y": 8},
{"id": 5, "data": "d5", "y": 9},
]
)
def test_rename_column_boolean(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
with self.op.batch_alter_table("bar") as batch_op:
batch_op.alter_column(
"flag", new_column_name="bflag", existing_type=Boolean
)
self._assert_data(
[{"id": 1, "bflag": True}, {"id": 2, "bflag": False}], "bar"
)
# @config.requirements.check_constraint_reflection
def test_rename_column_boolean_named_ck(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=True, name="ck1")),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
with self.op.batch_alter_table("bar", recreate="always") as batch_op:
batch_op.alter_column(
"flag",
new_column_name="bflag",
existing_type=Boolean(create_constraint=True, name="ck1"),
)
self._assert_data(
[{"id": 1, "bflag": True}, {"id": 2, "bflag": False}], "bar"
)
@config.requirements.non_native_boolean
def test_rename_column_non_native_boolean_no_ck(self):
bar = Table(
"bar",
self.metadata,
Column("id", Integer, primary_key=True),
Column("flag", Boolean(create_constraint=False)),
mysql_engine="InnoDB",
)
with self.conn.begin():
bar.create(self.conn)
self.conn.execute(bar.insert(), {"id": 1, "flag": True})
self.conn.execute(bar.insert(), {"id": 2, "flag": False})
self.conn.execute(
# override Boolean type which as of 1.1 coerces numerics
# to 1/0
text("insert into bar (id, flag) values (:id, :flag)"),
{"id": 3, "flag": 5},
)
with self.op.batch_alter_table(
"bar",
reflect_args=[Column("flag", Boolean(create_constraint=False))],
) as batch_op:
batch_op.alter_column(
"flag", new_column_name="bflag", existing_type=Boolean
)
self._assert_data(
[
{"id": 1, "bflag": True},
{"id": 2, "bflag": False},
{"id": 3, "bflag": 5},
],
"bar",
)
def test_drop_column_pk(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.drop_column("id")
self._assert_data(
[
{"data": "d1", "x": 5},
{"data": "22", "x": 6},
{"data": "8.5", "x": 7},
{"data": "9.46", "x": 8},
{"data": "d5", "x": 9},
]
)
def test_rename_column_pk(self):
with self.op.batch_alter_table("foo") as batch_op:
batch_op.alter_column("id", new_column_name="ident")
self._assert_data(
[
{"ident": 1, "data": "d1", "x": 5},
{"ident": 2, "data": "22", "x": 6},
{"ident": 3, "data": "8.5", "x": 7},
{"ident": 4, "data": "9.46", "x": 8},
{"ident": 5, "data": "d5", "x": 9},
]
)
def test_add_column_auto(self):
# note this uses ALTER
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi")
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(config.db).get_columns("foo")],
["id", "data", "x", "data2"],
)
def test_add_column_auto_server_default_calculated(self):
"""test #883"""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column(
"data2",
DateTime(),
server_default=self._datetime_server_default_fixture(),
)
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": mock.ANY},
{"id": 2, "data": "22", "x": 6, "data2": mock.ANY},
{"id": 3, "data": "8.5", "x": 7, "data2": mock.ANY},
{"id": 4, "data": "9.46", "x": 8, "data2": mock.ANY},
{"id": 5, "data": "d5", "x": 9, "data2": mock.ANY},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
@testing.combinations((True,), (False,))
@testing.exclusions.only_on("sqlite")
@config.requirements.computed_columns
def test_add_column_auto_generated(self, persisted):
"""test #883"""
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column(
"data2", Integer, Computed("1 + 1", persisted=persisted)
)
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": 2},
{"id": 2, "data": "22", "x": 6, "data2": 2},
{"id": 3, "data": "8.5", "x": 7, "data2": 2},
{"id": 4, "data": "9.46", "x": 8, "data2": 2},
{"id": 5, "data": "d5", "x": 9, "data2": 2},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
@config.requirements.identity_columns
def test_add_column_auto_identity(self):
"""test #883"""
self._no_pk_fixture()
with self.op.batch_alter_table("nopk") as batch_op:
batch_op.add_column(Column("id", Integer, Identity()))
self._assert_data(
[
{"a": 1, "b": 2, "c": 3, "id": 1},
{"a": 2, "b": 4, "c": 5, "id": 2},
],
tablename="nopk",
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x"],
)
def test_add_column_insert_before_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_before="data",
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data2", "data", "x"],
)
def test_add_column_insert_after_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_after="data",
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "data2", "x"],
)
def test_add_column_insert_before_raise_on_alter(self):
def go():
with self.op.batch_alter_table("foo") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi"),
insert_before="data",
)
assert_raises_message(
alembic_exc.CommandError,
"Can't specify insert_before or insert_after when using ALTER",
go,
)
def test_add_column_recreate(self):
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.add_column(
Column("data2", String(50), server_default="hi")
)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5, "data2": "hi"},
{"id": 2, "data": "22", "x": 6, "data2": "hi"},
{"id": 3, "data": "8.5", "x": 7, "data2": "hi"},
{"id": 4, "data": "9.46", "x": 8, "data2": "hi"},
{"id": 5, "data": "d5", "x": 9, "data2": "hi"},
]
)
eq_(
[col["name"] for col in inspect(self.conn).get_columns("foo")],
["id", "data", "x", "data2"],
)
def test_create_drop_index(self):
insp = inspect(self.conn)
eq_(insp.get_indexes("foo"), [])
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.create_index("ix_data", ["data"], unique=True)
self._assert_data(
[
{"id": 1, "data": "d1", "x": 5},
{"id": 2, "data": "22", "x": 6},
{"id": 3, "data": "8.5", "x": 7},
{"id": 4, "data": "9.46", "x": 8},
{"id": 5, "data": "d5", "x": 9},
]
)
insp = inspect(self.conn)
eq_(
[
dict(
unique=ix["unique"],
name=ix["name"],
column_names=ix["column_names"],
)
for ix in insp.get_indexes("foo")
],
[{"unique": True, "name": "ix_data", "column_names": ["data"]}],
)
with self.op.batch_alter_table("foo", recreate="always") as batch_op:
batch_op.drop_index("ix_data")
insp = inspect(self.conn)
eq_(insp.get_indexes("foo"), [])
class BatchRoundTripMySQLTest(BatchRoundTripTest):
__only_on__ = "mysql", "mariadb"
__backend__ = True
def _datetime_server_default_fixture(self):
return func.current_timestamp()
@exclusions.fails()
def test_drop_pk_col_readd_pk_col(self):
super().test_drop_pk_col_readd_pk_col()
@exclusions.fails()
def test_drop_pk_col_readd_col_also_pk_const(self):
super().test_drop_pk_col_readd_col_also_pk_const()
@exclusions.fails()
def test_rename_column_pk(self):
super().test_rename_column_pk()
@exclusions.fails()
def test_rename_column(self):
super().test_rename_column()
@exclusions.fails()
def test_change_type(self):
super().test_change_type()
def test_create_drop_index(self):
super().test_create_drop_index()
# fails on mariadb 10.2, succeeds on 10.3
@exclusions.fails_if(config.requirements.mysql_check_col_name_change)
def test_rename_column_boolean(self):
super().test_rename_column_boolean()
def test_change_type_boolean_to_int(self):
super().test_change_type_boolean_to_int()
def test_change_type_int_to_boolean(self):
super().test_change_type_int_to_boolean()
class BatchRoundTripPostgresqlTest(BatchRoundTripTest):
__only_on__ = "postgresql"
__backend__ = True
def _native_boolean_fixture(self):
t = Table(
"has_native_bool",
self.metadata,
Column(
"x",
Boolean(create_constraint=True),
server_default="false",
nullable=False,
),
Column("y", Integer),
)
with self.conn.begin():
t.create(self.conn)
def _datetime_server_default_fixture(self):
return func.current_timestamp()
@exclusions.fails()
def test_drop_pk_col_readd_pk_col(self):
super().test_drop_pk_col_readd_pk_col()
@exclusions.fails()
def test_drop_pk_col_readd_col_also_pk_const(self):
super().test_drop_pk_col_readd_col_also_pk_const()
@exclusions.fails()
def test_change_type(self):
super().test_change_type()
def test_create_drop_index(self):
super().test_create_drop_index()
@exclusions.fails()
def test_change_type_int_to_boolean(self):
super().test_change_type_int_to_boolean()
@exclusions.fails()
def test_change_type_boolean_to_int(self):
super().test_change_type_boolean_to_int()
def test_add_col_table_has_native_boolean(self):
self._native_boolean_fixture()
# to ensure test coverage on SQLAlchemy 1.4 and above,
# force the create_constraint flag to True even though it
# defaults to false in 1.4. this test wants to ensure that the
# "should create" rule is consulted
def listen_for_reflect(inspector, table, column_info):
if isinstance(column_info["type"], Boolean):
column_info["type"].create_constraint = True
with self.op.batch_alter_table(
"has_native_bool",
recreate="always",
reflect_kwargs={
"listeners": [("column_reflect", listen_for_reflect)]
},
) as batch_op:
batch_op.add_column(Column("data", Integer))
insp = inspect(self.conn)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("has_native_bool")
if c["name"] == "data"
],
[Integer],
)
eq_(
[
c["type"]._type_affinity
for c in insp.get_columns("has_native_bool")
if c["name"] == "x"
],
[Boolean],
)
class OfflineTest(TestBase):
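    # Batch mode under --sql (offline) mode: with copy_from supplied the
    # table renders without reflection; without it, batch mode on SQLite
    # must fail cleanly, since reflection needs a live connection.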
@testing.fixture
def no_reflect_batch_fixture(self):
staging_env()
def go():
self.cfg = cfg = _no_sql_testing_config(dialect="sqlite")
self.a = a = util.rev_id()
script = ScriptDirectory.from_config(cfg)
script.generate_revision(
a, "revision a", refresh=True, head="base"
)
write_script(
script,
a,
"""\
"Rev A"
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import String, Table, MetaData
some_table_up = Table(
"some_table", MetaData(),
Column('id', Integer),
Column('bar', String)
)
some_table_down = Table(
"some_table", MetaData(),
Column('id', Integer),
Column('foo', Integer)
)
def upgrade():
with op.batch_alter_table("some_table", copy_from=some_table_up) as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
def downgrade():
with op.batch_alter_table("some_table", copy_from=some_table_down) as batch_op:
batch_op.drop_column('foo')
batch_op.add_column(Column('bar', String))
""" # noqa: E501
% a,
)
yield go
clear_staging_env()
@testing.fixture
def batch_fixture(self):
staging_env()
def go(dialect):
self.cfg = cfg = _no_sql_testing_config(dialect=dialect)
self.a = a = util.rev_id()
script = ScriptDirectory.from_config(cfg)
script.generate_revision(
a, "revision a", refresh=True, head="base"
)
write_script(
script,
a,
"""\
"Rev A"
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy import Column
from sqlalchemy import Integer
from sqlalchemy import String
def upgrade():
with op.batch_alter_table("some_table") as batch_op:
batch_op.add_column(Column('foo', Integer))
batch_op.drop_column('bar')
def downgrade():
with op.batch_alter_table("some_table") as batch_op:
batch_op.drop_column('foo')
batch_op.add_column(Column('bar', String))
"""
% a,
)
yield go
clear_staging_env()
def test_upgrade_non_batch(self, batch_fixture):
batch_fixture("postgresql")
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.a, sql=True)
assert re.search(
r"ALTER TABLE some_table ADD COLUMN foo INTEGER", buf.getvalue()
)
def test_downgrade_non_batch(self, batch_fixture):
batch_fixture("postgresql")
with capture_context_buffer() as buf:
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
assert re.search(
r"ALTER TABLE some_table DROP COLUMN foo", buf.getvalue()
)
def test_upgrade_batch_fails_gracefully(self, batch_fixture):
batch_fixture("sqlite")
with expect_raises_message(
CommandError,
"This operation cannot proceed in --sql mode; batch mode with "
"dialect sqlite requires a live database connection with which "
'to reflect the table "some_table"',
):
command.upgrade(self.cfg, self.a, sql=True)
def test_downgrade_batch_fails_gracefully(self, batch_fixture):
batch_fixture("sqlite")
with expect_raises_message(
CommandError,
"This operation cannot proceed in --sql mode; batch mode with "
"dialect sqlite requires a live database connection with which "
'to reflect the table "some_table"',
):
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
def test_upgrade_batch_no_reflection(self, no_reflect_batch_fixture):
no_reflect_batch_fixture()
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.a, sql=True)
assert re.search(
r"CREATE TABLE _alembic_tmp_some_table", buf.getvalue()
)
def test_downgrade_batch_no_reflection(self, no_reflect_batch_fixture):
no_reflect_batch_fixture()
with capture_context_buffer() as buf:
command.downgrade(self.cfg, f"{self.a}:base", sql=True)
assert re.search(
r"CREATE TABLE _alembic_tmp_some_table", buf.getvalue()
)
Review suggestion from CaselIT on this file:

```suggestion
            ddl_not_contains="CONSTRAINT ufk",
```
sqlalchemy/alembic PR #1310: Spelling fixes

Fixes misspellings identified by the [check-spelling action](https://github.com/marketplace/actions/check-spelling).
### Description
The misspellings have been reported at https://github.com/jsoref/alembic/actions/runs/6141700632
The action reports that the changes in this PR would make it happy: https://github.com/jsoref/alembic/actions/runs/6141700754
### Checklist
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2023-09-11 03:56:19+00:00 | 2023-09-11 17:43:22+00:00 | tests/test_postgresql.py | from sqlalchemy import BigInteger
from sqlalchemy import Boolean
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import exc
from sqlalchemy import Float
from sqlalchemy import func
from sqlalchemy import Index
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy import Interval
from sqlalchemy import MetaData
from sqlalchemy import Numeric
from sqlalchemy import Sequence
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import text
from sqlalchemy import types
from sqlalchemy import UniqueConstraint
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.dialects.postgresql import BYTEA
from sqlalchemy.dialects.postgresql import ExcludeConstraint
from sqlalchemy.dialects.postgresql import HSTORE
from sqlalchemy.dialects.postgresql import JSON
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.dialects.postgresql import TSRANGE
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.sql import column
from sqlalchemy.sql import false
from sqlalchemy.sql import table
from sqlalchemy.sql.expression import literal_column
from alembic import autogenerate
from alembic import command
from alembic import op
from alembic import util
from alembic.autogenerate import api
from alembic.autogenerate.compare import _compare_server_default
from alembic.autogenerate.compare import _compare_tables
from alembic.autogenerate.compare import _render_server_default_for_compare
from alembic.migration import MigrationContext
from alembic.operations import ops
from alembic.script import ScriptDirectory
from alembic.testing import assert_raises_message
from alembic.testing import combinations
from alembic.testing import config
from alembic.testing import eq_
from alembic.testing import eq_ignore_whitespace
from alembic.testing import provide_metadata
from alembic.testing.env import _no_sql_testing_config
from alembic.testing.env import clear_staging_env
from alembic.testing.env import staging_env
from alembic.testing.env import write_script
from alembic.testing.fixtures import capture_context_buffer
from alembic.testing.fixtures import FutureEngineMixin
from alembic.testing.fixtures import op_fixture
from alembic.testing.fixtures import TablesTest
from alembic.testing.fixtures import TestBase
from alembic.testing.suite._autogen_fixtures import AutogenFixtureTest
from alembic.util import sqla_compat
class PostgresqlOpTest(TestBase):
def test_rename_table_postgresql(self):
context = op_fixture("postgresql")
op.rename_table("t1", "t2")
context.assert_("ALTER TABLE t1 RENAME TO t2")
def test_rename_table_schema_postgresql(self):
context = op_fixture("postgresql")
op.rename_table("t1", "t2", schema="foo")
context.assert_("ALTER TABLE foo.t1 RENAME TO t2")
def test_create_index_postgresql_expressions(self):
context = op_fixture("postgresql")
op.create_index(
"geocoded",
"locations",
[text("lower(coordinates)")],
postgresql_where=text("locations.coordinates != Null"),
)
context.assert_(
"CREATE INDEX geocoded ON locations (lower(coordinates)) "
"WHERE locations.coordinates != Null"
)
def test_create_index_postgresql_where(self):
context = op_fixture("postgresql")
op.create_index(
"geocoded",
"locations",
["coordinates"],
postgresql_where=text("locations.coordinates != Null"),
)
context.assert_(
"CREATE INDEX geocoded ON locations (coordinates) "
"WHERE locations.coordinates != Null"
)
def test_create_index_postgresql_concurrently(self):
context = op_fixture("postgresql")
op.create_index(
"geocoded",
"locations",
["coordinates"],
postgresql_concurrently=True,
)
context.assert_(
"CREATE INDEX CONCURRENTLY geocoded ON locations (coordinates)"
)
@config.requirements.sqlalchemy_14
def test_create_index_postgresql_include(self):
context = op_fixture("postgresql")
op.create_index(
"i", "t", ["c1", "c2"], unique=False, postgresql_include=["inc"]
)
context.assert_("CREATE INDEX i ON t (c1, c2) INCLUDE (inc)")
def test_create_index_postgresql_include_is_none(self):
context = op_fixture("postgresql")
op.create_index("i", "t", ["c1", "c2"], unique=False)
context.assert_("CREATE INDEX i ON t (c1, c2)")
@config.requirements.sqlalchemy_2
def test_create_index_postgresql_if_not_exists(self):
context = op_fixture("postgresql")
op.create_index("i", "t", ["c1", "c2"], if_not_exists=True)
context.assert_("CREATE INDEX IF NOT EXISTS i ON t (c1, c2)")
@config.combinations("include_table", "no_table", argnames="include_table")
def test_drop_index_postgresql_concurrently(self, include_table):
context = op_fixture("postgresql")
if include_table == "include_table":
op.drop_index(
"geocoded",
table_name="locations",
postgresql_concurrently=True,
)
else:
op.drop_index("geocoded", postgresql_concurrently=True)
context.assert_("DROP INDEX CONCURRENTLY geocoded")
@config.requirements.sqlalchemy_2
def test_drop_index_postgresql_if_exists(self):
context = op_fixture("postgresql")
op.drop_index("geocoded", if_exists=True)
context.assert_("DROP INDEX IF EXISTS geocoded")
def test_alter_column_type_using(self):
context = op_fixture("postgresql")
op.alter_column("t", "c", type_=Integer, postgresql_using="c::integer")
context.assert_(
"ALTER TABLE t ALTER COLUMN c TYPE INTEGER USING c::integer"
)
def test_col_w_pk_is_serial(self):
context = op_fixture("postgresql")
op.add_column("some_table", Column("q", Integer, primary_key=True))
context.assert_("ALTER TABLE some_table ADD COLUMN q SERIAL NOT NULL")
def test_create_exclude_constraint(self):
context = op_fixture("postgresql")
op.create_exclude_constraint(
"ex1", "t1", ("x", ">"), where="x > 5", using="gist"
)
context.assert_(
"ALTER TABLE t1 ADD CONSTRAINT ex1 EXCLUDE USING gist (x WITH >) "
"WHERE (x > 5)"
)
def test_drop_exclude_or_other_constraint(self):
context = op_fixture("postgresql")
op.drop_constraint("t_excl_x", "TTable", type_=None)
context.assert_('ALTER TABLE "TTable" DROP CONSTRAINT t_excl_x')
def test_create_exclude_constraint_quoted_literal(self):
context = op_fixture("postgresql")
op.create_exclude_constraint(
"ex1",
"SomeTable",
(column("SomeColumn"), ">"),
where='"SomeColumn" > 5',
using="gist",
)
context.assert_(
'ALTER TABLE "SomeTable" ADD CONSTRAINT ex1 EXCLUDE USING gist '
'("SomeColumn" WITH >) WHERE ("SomeColumn" > 5)'
)
def test_create_exclude_constraint_quoted_column(self):
context = op_fixture("postgresql")
op.create_exclude_constraint(
"ex1",
"SomeTable",
(column("SomeColumn"), ">"),
where=column("SomeColumn") > 5,
using="gist",
)
context.assert_(
'ALTER TABLE "SomeTable" ADD CONSTRAINT ex1 EXCLUDE '
'USING gist ("SomeColumn" WITH >) WHERE ("SomeColumn" > 5)'
)
def test_add_column_with_comment(self):
context = op_fixture("postgresql")
op.add_column("t", Column("q", Integer, comment="This is a comment"))
context.assert_(
"ALTER TABLE t ADD COLUMN q INTEGER",
"COMMENT ON COLUMN t.q IS 'This is a comment'",
)
def test_alter_column_with_comment(self):
context = op_fixture("postgresql")
op.alter_column(
"t",
"c",
nullable=False,
existing_type=Boolean(),
schema="foo",
comment="This is a column comment",
)
context.assert_(
"ALTER TABLE foo.t ALTER COLUMN c SET NOT NULL",
"COMMENT ON COLUMN foo.t.c IS 'This is a column comment'",
)
def test_alter_column_add_comment(self):
context = op_fixture("postgresql")
op.alter_column(
"t",
"c",
existing_type=Boolean(),
schema="foo",
comment="This is a column comment",
)
context.assert_(
"COMMENT ON COLUMN foo.t.c IS 'This is a column comment'"
)
def test_alter_column_add_comment_table_and_column_quoting(self):
context = op_fixture("postgresql")
op.alter_column(
"T",
"C",
existing_type=Boolean(),
schema="foo",
comment="This is a column comment",
)
context.assert_(
'COMMENT ON COLUMN foo."T"."C" IS \'This is a column comment\''
)
def test_alter_column_add_comment_quoting(self):
context = op_fixture("postgresql")
op.alter_column(
"t",
"c",
existing_type=Boolean(),
schema="foo",
comment="This is a column 'comment'",
)
context.assert_(
"COMMENT ON COLUMN foo.t.c IS 'This is a column ''comment'''"
)
def test_alter_column_drop_comment(self):
context = op_fixture("postgresql")
op.alter_column(
"t",
"c",
existing_type=Boolean(),
schema="foo",
comment=None,
existing_comment="This is a column comment",
)
context.assert_("COMMENT ON COLUMN foo.t.c IS NULL")
def test_create_table_with_comment(self):
context = op_fixture("postgresql")
op.create_table(
"t2",
Column("c1", Integer, primary_key=True),
Column("c2", Integer),
comment="t2 comment",
)
context.assert_(
"CREATE TABLE t2 (c1 SERIAL NOT NULL, "
"c2 INTEGER, PRIMARY KEY (c1))",
"COMMENT ON TABLE t2 IS 't2 comment'",
)
def test_create_table_with_column_comments(self):
context = op_fixture("postgresql")
op.create_table(
"t2",
Column("c1", Integer, primary_key=True, comment="c1 comment"),
Column("c2", Integer, comment="c2 comment"),
comment="t2 comment",
)
context.assert_(
"CREATE TABLE t2 (c1 SERIAL NOT NULL, "
"c2 INTEGER, PRIMARY KEY (c1))",
"COMMENT ON TABLE t2 IS 't2 comment'",
"COMMENT ON COLUMN t2.c1 IS 'c1 comment'",
"COMMENT ON COLUMN t2.c2 IS 'c2 comment'",
)
def test_create_table_comment(self):
# this is handled by SQLAlchemy's compilers
context = op_fixture("postgresql")
op.create_table_comment("t2", comment="t2 table", schema="foo")
context.assert_("COMMENT ON TABLE foo.t2 IS 't2 table'")
def test_drop_table_comment(self):
# this is handled by SQLAlchemy's compilers
context = op_fixture("postgresql")
op.drop_table_comment("t2", existing_comment="t2 table", schema="foo")
context.assert_("COMMENT ON TABLE foo.t2 IS NULL")
@config.requirements.computed_columns
def test_add_column_computed(self):
context = op_fixture("postgresql")
op.add_column(
"t1",
Column("some_column", Integer, sqla_compat.Computed("foo * 5")),
)
context.assert_(
"ALTER TABLE t1 ADD COLUMN some_column "
"INTEGER GENERATED ALWAYS AS (foo * 5) STORED"
)
@combinations(
(lambda: sqla_compat.Computed("foo * 5"), lambda: None),
(lambda: None, lambda: sqla_compat.Computed("foo * 5")),
(
lambda: sqla_compat.Computed("foo * 42"),
lambda: sqla_compat.Computed("foo * 5"),
),
)
@config.requirements.computed_columns
def test_alter_column_computed_not_supported(self, sd, esd):
op_fixture("postgresql")
assert_raises_message(
exc.CompileError,
'Adding or removing a "computed" construct, e.g. '
"GENERATED ALWAYS AS, to or from an existing column is not "
"supported.",
op.alter_column,
"t1",
"c1",
server_default=sd(),
existing_server_default=esd(),
)
@config.requirements.identity_columns
@combinations(
({}, None),
(dict(always=True), None),
(
dict(start=3, increment=33, maxvalue=99, cycle=True),
"INCREMENT BY 33 START WITH 3 MAXVALUE 99 CYCLE",
),
)
def test_add_column_identity(self, kw, text):
context = op_fixture("postgresql")
op.add_column(
"t1",
Column("some_column", Integer, sqla_compat.Identity(**kw)),
)
qualification = "ALWAYS" if kw.get("always", False) else "BY DEFAULT"
options = " (%s)" % text if text else ""
context.assert_(
"ALTER TABLE t1 ADD COLUMN some_column "
"INTEGER GENERATED %s AS IDENTITY%s" % (qualification, options)
)
@config.requirements.identity_columns
@combinations(
({}, None),
(dict(always=True), None),
(
dict(start=3, increment=33, maxvalue=99, cycle=True),
"INCREMENT BY 33 START WITH 3 MAXVALUE 99 CYCLE",
),
)
def test_add_identity_to_column(self, kw, text):
context = op_fixture("postgresql")
op.alter_column(
"t1",
"some_column",
server_default=sqla_compat.Identity(**kw),
existing_server_default=None,
)
qualification = "ALWAYS" if kw.get("always", False) else "BY DEFAULT"
options = " (%s)" % text if text else ""
context.assert_(
"ALTER TABLE t1 ALTER COLUMN some_column ADD "
"GENERATED %s AS IDENTITY%s" % (qualification, options)
)
@config.requirements.identity_columns
def test_remove_identity_from_column(self):
context = op_fixture("postgresql")
op.alter_column(
"t1",
"some_column",
server_default=None,
existing_server_default=sqla_compat.Identity(),
)
context.assert_(
"ALTER TABLE t1 ALTER COLUMN some_column DROP IDENTITY"
)
@config.requirements.identity_columns
@combinations(
({}, dict(always=True), "SET GENERATED ALWAYS"),
(
dict(always=True),
dict(always=False, start=3),
"SET GENERATED BY DEFAULT SET START WITH 3",
),
(
dict(always=True, start=3, increment=2, minvalue=-3, maxvalue=99),
dict(
always=True,
start=3,
increment=1,
minvalue=-3,
maxvalue=99,
cycle=True,
),
"SET CYCLE SET INCREMENT BY 1",
),
(
dict(
always=False,
start=3,
maxvalue=9999,
minvalue=0,
),
dict(always=False, start=3, cache=2),
"SET CACHE 2",
),
(
dict(always=False),
dict(always=None, minvalue=0),
"SET MINVALUE 0",
),
)
def test_change_identity_in_column(self, existing, updated, text):
context = op_fixture("postgresql")
op.alter_column(
"t1",
"some_column",
server_default=sqla_compat.Identity(**updated),
existing_server_default=sqla_compat.Identity(**existing),
)
context.assert_("ALTER TABLE t1 ALTER COLUMN some_column %s" % text)
class PGAutocommitBlockTest(TestBase):
__only_on__ = "postgresql"
__backend__ = True
def setUp(self):
self.conn = conn = config.db.connect()
with conn.begin():
conn.execute(
text("CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy')")
)
def tearDown(self):
with self.conn.begin():
self.conn.execute(text("DROP TYPE mood"))
def test_alter_enum(self, migration_context):
with migration_context.begin_transaction(_per_migration=True):
with migration_context.autocommit_block():
migration_context.execute(
text("ALTER TYPE mood ADD VALUE 'soso'")
)
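# Background on the autocommit block above: PostgreSQL (prior to version 12)
# rejects "ALTER TYPE ... ADD VALUE" inside a transaction block, so the
# statement has to run under autocommit isolation. autocommit_block()
# temporarily commits the in-progress migration transaction, runs its body
# in autocommit mode, and then begins a new transaction.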
class PGAutocommitBlockTestFuture(FutureEngineMixin, PGAutocommitBlockTest):
pass
class PGOfflineEnumTest(TestBase):
def setUp(self):
staging_env()
self.cfg = cfg = _no_sql_testing_config()
self.rid = rid = util.rev_id()
self.script = script = ScriptDirectory.from_config(cfg)
script.generate_revision(rid, None, refresh=True)
def tearDown(self):
clear_staging_env()
def _inline_enum_script(self):
write_script(
self.script,
self.rid,
"""
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy.dialects.postgresql import ENUM
from sqlalchemy import Column
def upgrade():
op.create_table("sometable",
Column("data", ENUM("one", "two", "three", name="pgenum"))
)
def downgrade():
op.drop_table("sometable")
"""
% self.rid,
)
def _distinct_enum_script(self):
write_script(
self.script,
self.rid,
"""
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy.dialects.postgresql import ENUM
from sqlalchemy import Column
def upgrade():
enum = ENUM("one", "two", "three", name="pgenum", create_type=False)
enum.create(op.get_bind(), checkfirst=False)
op.create_table("sometable",
Column("data", enum)
)
def downgrade():
op.drop_table("sometable")
ENUM(name="pgenum").drop(op.get_bind(), checkfirst=False)
"""
% self.rid,
)
def test_offline_inline_enum_create(self):
self._inline_enum_script()
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.rid, sql=True)
assert (
"CREATE TYPE pgenum AS "
"ENUM ('one', 'two', 'three')" in buf.getvalue()
)
assert "CREATE TABLE sometable (\n data pgenum\n)" in buf.getvalue()
def test_offline_inline_enum_drop(self):
self._inline_enum_script()
with capture_context_buffer() as buf:
command.downgrade(self.cfg, "%s:base" % self.rid, sql=True)
assert "DROP TABLE sometable" in buf.getvalue()
# no drop since we didn't emit events
assert "DROP TYPE pgenum" not in buf.getvalue()
def test_offline_distinct_enum_create(self):
self._distinct_enum_script()
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.rid, sql=True)
assert (
"CREATE TYPE pgenum AS ENUM "
"('one', 'two', 'three')" in buf.getvalue()
)
assert "CREATE TABLE sometable (\n data pgenum\n)" in buf.getvalue()
def test_offline_distinct_enum_drop(self):
self._distinct_enum_script()
with capture_context_buffer() as buf:
command.downgrade(self.cfg, "%s:base" % self.rid, sql=True)
assert "DROP TABLE sometable" in buf.getvalue()
assert "DROP TYPE pgenum" in buf.getvalue()
class PostgresqlInlineLiteralTest(TablesTest):
__only_on__ = "postgresql"
__backend__ = True
@classmethod
def define_tables(cls, metadata):
Table("tab", metadata, Column("col", String(50)))
@classmethod
def insert_data(cls, connection):
connection.execute(
text(
"""
insert into tab (col) values
('old data 1'),
('old data 2.1'),
('old data 3')
"""
)
)
def test_inline_percent(self, connection, ops_context):
# TODO: here's the issue, you need to escape this.
tab = table("tab", column("col"))
ops_context.execute(
tab.update()
.where(tab.c.col.like(ops_context.inline_literal("%.%")))
.values(col=ops_context.inline_literal("new data")),
execution_options={"no_parameters": True},
)
eq_(
connection.execute(
text("select count(*) from tab where col='new data'")
).scalar(),
1,
)
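# Why test_inline_percent needs special handling: with the default psycopg2
# driver, the "pyformat" paramstyle means a literal "%" in the statement
# text would otherwise be taken as the start of a bind marker. Passing
# execution_options={"no_parameters": True} makes SQLAlchemy invoke
# cursor.execute(statement) with no parameter collection at all, which
# causes the DBAPI to skip %-interpolation and send the string as-is.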
class PostgresqlDefaultCompareTest(TestBase):
__only_on__ = "postgresql"
__backend__ = True
@classmethod
def setup_class(cls):
cls.bind = config.db
staging_env()
cls.migration_context = MigrationContext.configure(
connection=cls.bind.connect(),
opts={"compare_type": True, "compare_server_default": True},
)
def setUp(self):
self.metadata = MetaData()
self.autogen_context = api.AutogenContext(self.migration_context)
@classmethod
def teardown_class(cls):
clear_staging_env()
def tearDown(self):
with config.db.begin() as conn:
self.metadata.drop_all(conn)
def _compare_default_roundtrip(
self, type_, orig_default, alternate=None, diff_expected=None
):
diff_expected = (
diff_expected
if diff_expected is not None
else alternate is not None
)
if alternate is None:
alternate = orig_default
t1 = Table(
"test",
self.metadata,
Column("somecol", type_, server_default=orig_default),
)
t2 = Table(
"test",
MetaData(),
Column("somecol", type_, server_default=alternate),
)
t1.create(self.bind)
insp = inspect(self.bind)
cols = insp.get_columns(t1.name)
insp_col = Column(
"somecol", cols[0]["type"], server_default=text(cols[0]["default"])
)
op = ops.AlterColumnOp("test", "somecol")
_compare_server_default(
self.autogen_context,
op,
None,
"test",
"somecol",
insp_col,
t2.c.somecol,
)
diffs = op.to_diff_tuple()
eq_(bool(diffs), diff_expected)
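    # How _compare_default_roundtrip works: it creates "test" with
    # orig_default, reflects the server default back through the inspector,
    # then runs autogen's server-default comparison against a metadata
    # column carrying `alternate`. A diff is expected exactly when an
    # alternate was supplied, unless diff_expected overrides that, e.g.
    #   self._compare_default_roundtrip(Float(), text("5"), "5.0",
    #                                   diff_expected=False)
    # asserts that the reflected "5" and the metadata "5.0" compare as
    # equivalent float defaults.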
def _compare_default(self, t1, t2, col, rendered):
t1.create(self.bind, checkfirst=True)
insp = inspect(self.bind)
cols = insp.get_columns(t1.name)
ctx = self.autogen_context.migration_context
return ctx.impl.compare_server_default(
None, col, rendered, cols[0]["default"]
)
def test_compare_string_blank_default(self):
self._compare_default_roundtrip(String(8), "")
def test_compare_string_nonblank_default(self):
self._compare_default_roundtrip(String(8), "hi")
def test_compare_interval_str(self):
# this form shouldn't be used but testing here
# for compatibility
self._compare_default_roundtrip(Interval, "14 days")
@config.requirements.postgresql_uuid_ossp
def test_compare_uuid_text(self):
self._compare_default_roundtrip(UUID, text("uuid_generate_v4()"))
def test_compare_interval_text(self):
self._compare_default_roundtrip(Interval, text("'14 days'"))
def test_compare_array_of_integer_text(self):
self._compare_default_roundtrip(
ARRAY(Integer), text("(ARRAY[]::integer[])")
)
def test_compare_current_timestamp_text(self):
self._compare_default_roundtrip(
DateTime(), text("TIMEZONE('utc', CURRENT_TIMESTAMP)")
)
def test_compare_current_timestamp_fn_w_binds(self):
self._compare_default_roundtrip(
DateTime(), func.timezone("utc", func.current_timestamp())
)
def test_compare_integer_str(self):
self._compare_default_roundtrip(Integer(), "5")
def test_compare_integer_text(self):
self._compare_default_roundtrip(Integer(), text("5"))
def test_compare_integer_text_diff(self):
self._compare_default_roundtrip(Integer(), text("5"), "7")
def test_compare_float_str(self):
self._compare_default_roundtrip(Float(), "5.2")
def test_compare_float_text(self):
self._compare_default_roundtrip(Float(), text("5.2"))
def test_compare_float_no_diff1(self):
self._compare_default_roundtrip(
Float(), text("5.2"), "5.2", diff_expected=False
)
def test_compare_float_no_diff2(self):
self._compare_default_roundtrip(
Float(), "5.2", text("5.2"), diff_expected=False
)
def test_compare_float_no_diff3(self):
self._compare_default_roundtrip(
Float(), text("5"), text("5.0"), diff_expected=False
)
def test_compare_float_no_diff4(self):
self._compare_default_roundtrip(
Float(), "5", "5.0", diff_expected=False
)
def test_compare_float_no_diff5(self):
self._compare_default_roundtrip(
Float(), text("5"), "5.0", diff_expected=False
)
def test_compare_float_no_diff6(self):
self._compare_default_roundtrip(
Float(), "5", text("5.0"), diff_expected=False
)
def test_compare_numeric_no_diff(self):
self._compare_default_roundtrip(
Numeric(), text("5"), "5.0", diff_expected=False
)
def test_compare_unicode_literal(self):
self._compare_default_roundtrip(String(), "im a default")
# TOOD: will need to actually eval() the repr() and
# spend more effort figuring out exactly the kind of expression
# to use
def _TODO_test_compare_character_str_w_singlequote(self):
self._compare_default_roundtrip(String(), "hel''lo")
def test_compare_character_str(self):
self._compare_default_roundtrip(String(), "hello")
def test_compare_character_text(self):
self._compare_default_roundtrip(String(), text("'hello'"))
def test_compare_character_str_diff(self):
self._compare_default_roundtrip(String(), "hello", "there")
def test_compare_character_text_diff(self):
self._compare_default_roundtrip(
String(), text("'hello'"), text("'there'")
)
def test_primary_key_skip(self):
"""Test that SERIAL cols are just skipped"""
t1 = Table(
"sometable", self.metadata, Column("id", Integer, primary_key=True)
)
t2 = Table(
"sometable", MetaData(), Column("id", Integer, primary_key=True)
)
assert not self._compare_default(t1, t2, t2.c.id, "")
class PostgresqlDetectSerialTest(TestBase):
__only_on__ = "postgresql"
__backend__ = True
@classmethod
def setup_class(cls):
cls.bind = config.db
staging_env()
def setUp(self):
self.conn = self.bind.connect()
self.migration_context = MigrationContext.configure(
connection=self.conn,
opts={"compare_type": True, "compare_server_default": True},
)
self.autogen_context = api.AutogenContext(self.migration_context)
def tearDown(self):
self.conn.close()
@classmethod
def teardown_class(cls):
clear_staging_env()
@provide_metadata
def _expect_default(self, c_expected, col, seq=None):
Table("t", self.metadata, col)
self.autogen_context.metadata = self.metadata
if seq:
seq._set_metadata(self.metadata)
self.metadata.create_all(config.db)
insp = inspect(config.db)
uo = ops.UpgradeOps(ops=[])
_compare_tables({(None, "t")}, set(), insp, uo, self.autogen_context)
diffs = uo.as_diffs()
tab = diffs[0][1]
eq_(
_render_server_default_for_compare(
tab.c.x.server_default, self.autogen_context
),
c_expected,
)
insp = inspect(config.db)
uo = ops.UpgradeOps(ops=[])
m2 = MetaData()
Table("t", m2, Column("x", BigInteger()))
self.autogen_context.metadata = m2
_compare_tables(
{(None, "t")},
{(None, "t")},
insp,
uo,
self.autogen_context,
)
diffs = uo.as_diffs()
server_default = diffs[0][0][4]["existing_server_default"]
eq_(
_render_server_default_for_compare(
server_default, self.autogen_context
),
c_expected,
)
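    # _expect_default checks the rendered server default through two autogen
    # paths: first as a brand-new table (the add_table diff), then against a
    # second MetaData whose column type is changed to BigInteger, so the
    # resulting modify_type diff carries an existing_server_default. Both
    # renderings must equal c_expected; for a plain SERIAL primary key that
    # expectation is None, since autogen skips the implicit nextval() default.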
def test_serial(self):
self._expect_default(None, Column("x", Integer, primary_key=True))
def test_separate_seq(self):
seq = Sequence("x_id_seq")
self._expect_default(
"nextval('x_id_seq'::regclass)",
Column(
"x", Integer, server_default=seq.next_value(), primary_key=True
),
seq,
)
def test_numeric(self):
seq = Sequence("x_id_seq")
self._expect_default(
"nextval('x_id_seq'::regclass)",
Column(
"x",
Numeric(8, 2),
server_default=seq.next_value(),
primary_key=True,
),
seq,
)
def test_no_default(self):
self._expect_default(
None, Column("x", Integer, autoincrement=False, primary_key=True)
)
class PostgresqlAutogenRenderTest(TestBase):
def setUp(self):
ctx_opts = {
"sqlalchemy_module_prefix": "sa.",
"alembic_module_prefix": "op.",
"target_metadata": MetaData(),
}
context = MigrationContext.configure(
dialect_name="postgresql", opts=ctx_opts
)
self.autogen_context = api.AutogenContext(context)
def test_render_add_index_pg_where(self):
autogen_context = self.autogen_context
m = MetaData()
t = Table("t", m, Column("x", String), Column("y", String))
idx = Index(
"foo_idx", t.c.x, t.c.y, postgresql_where=(t.c.y == "something")
)
op_obj = ops.CreateIndexOp.from_index(idx)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"""op.create_index('foo_idx', 't', \
['x', 'y'], unique=False, """
"""postgresql_where=sa.text("y = 'something'"))""",
)
def test_render_server_default_native_boolean(self):
c = Column(
"updated_at", Boolean(), server_default=false(), nullable=False
)
result = autogenerate.render._render_column(c, self.autogen_context)
eq_ignore_whitespace(
result,
"sa.Column('updated_at', sa.Boolean(), "
"server_default=sa.text('false'), "
"nullable=False)",
)
def test_postgresql_array_type(self):
eq_ignore_whitespace(
autogenerate.render._repr_type(
ARRAY(Integer), self.autogen_context
),
"postgresql.ARRAY(sa.Integer())",
)
eq_ignore_whitespace(
autogenerate.render._repr_type(
ARRAY(DateTime(timezone=True)), self.autogen_context
),
"postgresql.ARRAY(sa.DateTime(timezone=True))",
)
eq_ignore_whitespace(
autogenerate.render._repr_type(
ARRAY(BYTEA, as_tuple=True, dimensions=2), self.autogen_context
),
"postgresql.ARRAY(postgresql.BYTEA(), "
"as_tuple=True, dimensions=2)",
)
assert (
"from sqlalchemy.dialects import postgresql"
in self.autogen_context.imports
)
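    # Rendering rule illustrated above: dialect-specific types are emitted
    # with the "postgresql." module prefix and the renderer records the
    # "from sqlalchemy.dialects import postgresql" import for the migration
    # template, while generic types render with the plain "sa." prefix (see
    # test_generic_array_type below, where the import only appears once a
    # dialect-specific inner type is involved).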
def test_postgresql_hstore_subtypes(self):
eq_ignore_whitespace(
autogenerate.render._repr_type(HSTORE(), self.autogen_context),
"postgresql.HSTORE(text_type=sa.Text())",
)
eq_ignore_whitespace(
autogenerate.render._repr_type(
HSTORE(text_type=String()), self.autogen_context
),
"postgresql.HSTORE(text_type=sa.String())",
)
eq_ignore_whitespace(
autogenerate.render._repr_type(
HSTORE(text_type=BYTEA()), self.autogen_context
),
"postgresql.HSTORE(text_type=postgresql.BYTEA())",
)
assert (
"from sqlalchemy.dialects import postgresql"
in self.autogen_context.imports
)
def test_generic_array_type(self):
eq_ignore_whitespace(
autogenerate.render._repr_type(
types.ARRAY(Integer), self.autogen_context
),
"sa.ARRAY(sa.Integer())",
)
eq_ignore_whitespace(
autogenerate.render._repr_type(
types.ARRAY(DateTime(timezone=True)), self.autogen_context
),
"sa.ARRAY(sa.DateTime(timezone=True))",
)
assert (
"from sqlalchemy.dialects import postgresql"
not in self.autogen_context.imports
)
eq_ignore_whitespace(
autogenerate.render._repr_type(
types.ARRAY(BYTEA, as_tuple=True, dimensions=2),
self.autogen_context,
),
"sa.ARRAY(postgresql.BYTEA(), as_tuple=True, dimensions=2)",
)
assert (
"from sqlalchemy.dialects import postgresql"
in self.autogen_context.imports
)
def test_array_type_user_defined_inner(self):
def repr_type(typestring, object_, autogen_context):
if typestring == "type" and isinstance(object_, String):
return "foobar.MYVARCHAR"
else:
return False
self.autogen_context.opts.update(render_item=repr_type)
eq_ignore_whitespace(
autogenerate.render._repr_type(
ARRAY(String), self.autogen_context
),
"postgresql.ARRAY(foobar.MYVARCHAR)",
)
def test_add_exclude_constraint(self):
autogen_context = self.autogen_context
m = MetaData()
t = Table("t", m, Column("x", String), Column("y", String))
op_obj = ops.AddConstraintOp.from_constraint(
ExcludeConstraint(
(t.c.x, ">"), where=t.c.x != 2, using="gist", name="t_excl_x"
)
)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_exclude_constraint('t_excl_x', "
"'t', (sa.column('x'), '>'), "
"where=sa.text('x != 2'), using='gist')",
)
def test_add_exclude_constraint_case_sensitive(self):
autogen_context = self.autogen_context
m = MetaData()
t = Table(
"TTAble", m, Column("XColumn", String), Column("YColumn", String)
)
op_obj = ops.AddConstraintOp.from_constraint(
ExcludeConstraint(
(t.c.XColumn, ">"),
where=t.c.XColumn != 2,
using="gist",
name="t_excl_x",
)
)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_exclude_constraint('t_excl_x', 'TTAble', "
"(sa.column('XColumn'), '>'), "
"where=sa.text('\"XColumn\" != 2'), using='gist')",
)
def test_inline_exclude_constraint(self):
autogen_context = self.autogen_context
m = MetaData()
t = Table(
"t",
m,
Column("x", String),
Column("y", String),
ExcludeConstraint(
(column("x"), ">"),
using="gist",
where="x != 2",
name="t_excl_x",
),
)
op_obj = ops.CreateTableOp.from_table(t)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_table('t',sa.Column('x', sa.String(), nullable=True),"
"sa.Column('y', sa.String(), nullable=True),"
"postgresql.ExcludeConstraint((sa.column('x'), '>'), "
"where=sa.text('x != 2'), using='gist', name='t_excl_x')"
")",
)
def test_inline_exclude_constraint_case_sensitive(self):
autogen_context = self.autogen_context
m = MetaData()
t = Table(
"TTable", m, Column("XColumn", String), Column("YColumn", String)
)
ExcludeConstraint(
(t.c.XColumn, ">"),
using="gist",
where='"XColumn" != 2',
name="TExclX",
)
op_obj = ops.CreateTableOp.from_table(t)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_table('TTable',sa.Column('XColumn', sa.String(), "
"nullable=True),"
"sa.Column('YColumn', sa.String(), nullable=True),"
"postgresql.ExcludeConstraint((sa.column('XColumn'), '>'), "
"where=sa.text('\"XColumn\" != 2'), using='gist', "
"name='TExclX'))",
)
def test_inline_exclude_constraint_literal_column(self):
"""test for #1184"""
autogen_context = self.autogen_context
m = MetaData()
t = Table(
"TTable",
m,
Column("id", String()),
ExcludeConstraint(
(literal_column("id + 2"), "="), name="TExclID", using="gist"
),
)
op_obj = ops.CreateTableOp.from_table(t)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_table('TTable',sa.Column('id', sa.String(), "
"nullable=True),"
"postgresql.ExcludeConstraint((sa.literal_column('id + 2'), '='), "
"using='gist', "
"name='TExclID'))",
)
@config.requirements.sqlalchemy_2
def test_inline_exclude_constraint_fn(self):
"""test for #1230"""
autogen_context = self.autogen_context
effective_time = Column("effective_time", DateTime(timezone=True))
expiry_time = Column("expiry_time", DateTime(timezone=True))
m = MetaData()
t = Table(
"TTable",
m,
effective_time,
expiry_time,
ExcludeConstraint(
(func.tstzrange(effective_time, expiry_time), "&&"),
using="gist",
),
)
op_obj = ops.CreateTableOp.from_table(t)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_table('TTable',sa.Column('effective_time', "
"sa.DateTime(timezone=True), nullable=True),"
"sa.Column('expiry_time', sa.DateTime(timezone=True), "
"nullable=True),postgresql.ExcludeConstraint("
"(sa.text('tstzrange(effective_time, expiry_time)'), "
"'&&'), using='gist'))",
)
@config.requirements.sqlalchemy_2
def test_inline_exclude_constraint_text(self):
"""test for #1184.
Requires SQLAlchemy 2.0.5 due to issue
https://github.com/sqlalchemy/sqlalchemy/issues/9401
"""
autogen_context = self.autogen_context
m = MetaData()
t = Table(
"TTable",
m,
Column("id", String()),
ExcludeConstraint(
(text("id + 2"), "="), name="TExclID", using="gist"
),
)
op_obj = ops.CreateTableOp.from_table(t)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_table('TTable',sa.Column('id', sa.String(), "
"nullable=True),"
"postgresql.ExcludeConstraint((sa.text('id + 2'), '='), "
"using='gist', "
"name='TExclID'))",
)
def test_drop_exclude_constraint(self):
"""test for #1300"""
autogen_context = self.autogen_context
m = MetaData()
t = Table(
"TTable", m, Column("XColumn", String), Column("YColumn", String)
)
op_obj = ops.DropConstraintOp.from_constraint(
ExcludeConstraint(
(t.c.XColumn, ">"),
where=t.c.XColumn != 2,
using="gist",
name="t_excl_x",
)
)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.drop_constraint('t_excl_x', 'TTable')",
)
def test_json_type(self):
eq_ignore_whitespace(
autogenerate.render._repr_type(JSON(), self.autogen_context),
"postgresql.JSON(astext_type=sa.Text())",
)
def test_jsonb_type(self):
eq_ignore_whitespace(
autogenerate.render._repr_type(JSONB(), self.autogen_context),
"postgresql.JSONB(astext_type=sa.Text())",
)
@config.requirements.nulls_not_distinct_sa
def test_render_unique_nulls_not_distinct_constraint(self):
m = MetaData()
t = Table("tbl", m, Column("c", Integer))
uc = UniqueConstraint(
t.c.c,
name="uq_1",
deferrable="XYZ",
postgresql_nulls_not_distinct=True,
)
eq_ignore_whitespace(
autogenerate.render.render_op_text(
self.autogen_context,
ops.AddConstraintOp.from_constraint(uc),
),
"op.create_unique_constraint('uq_1', 'tbl', ['c'], "
"deferrable='XYZ', postgresql_nulls_not_distinct=True)",
)
eq_ignore_whitespace(
autogenerate.render._render_unique_constraint(
uc, self.autogen_context, None
),
"sa.UniqueConstraint('c', deferrable='XYZ', name='uq_1', "
"postgresql_nulls_not_distinct=True)",
)
@config.requirements.nulls_not_distinct_sa
def test_render_index_nulls_not_distinct_constraint(self):
m = MetaData()
t = Table("tbl", m, Column("c", Integer))
idx = Index("ix_42", t.c.c, postgresql_nulls_not_distinct=False)
eq_ignore_whitespace(
autogenerate.render.render_op_text(
self.autogen_context, ops.CreateIndexOp.from_index(idx)
),
"op.create_index('ix_42', 'tbl', ['c'], unique=False, "
"postgresql_nulls_not_distinct=False)",
)
class PGUniqueIndexAutogenerateTest(AutogenFixtureTest, TestBase):
__only_on__ = "postgresql"
__backend__ = True
def test_idx_added_schema(self):
m1 = MetaData()
m2 = MetaData()
Table("add_ix", m1, Column("x", String(50)), schema="test_schema")
Table(
"add_ix",
m2,
Column("x", String(50)),
Index("ix_1", "x"),
schema="test_schema",
)
diffs = self._fixture(m1, m2, include_schemas=True)
eq_(diffs[0][0], "add_index")
eq_(diffs[0][1].name, "ix_1")
def test_idx_unchanged_schema(self):
m1 = MetaData()
m2 = MetaData()
Table(
"add_ix",
m1,
Column("x", String(50)),
Index("ix_1", "x"),
schema="test_schema",
)
Table(
"add_ix",
m2,
Column("x", String(50)),
Index("ix_1", "x"),
schema="test_schema",
)
diffs = self._fixture(m1, m2, include_schemas=True)
eq_(diffs, [])
def test_uq_added_schema(self):
m1 = MetaData()
m2 = MetaData()
Table("add_uq", m1, Column("x", String(50)), schema="test_schema")
Table(
"add_uq",
m2,
Column("x", String(50)),
UniqueConstraint("x", name="ix_1"),
schema="test_schema",
)
diffs = self._fixture(m1, m2, include_schemas=True)
eq_(diffs[0][0], "add_constraint")
eq_(diffs[0][1].name, "ix_1")
def test_uq_unchanged_schema(self):
m1 = MetaData()
m2 = MetaData()
Table(
"add_uq",
m1,
Column("x", String(50)),
UniqueConstraint("x", name="ix_1"),
schema="test_schema",
)
Table(
"add_uq",
m2,
Column("x", String(50)),
UniqueConstraint("x", name="ix_1"),
schema="test_schema",
)
diffs = self._fixture(m1, m2, include_schemas=True)
eq_(diffs, [])
@config.requirements.btree_gist
def test_exclude_const_unchanged(self):
m1 = MetaData()
m2 = MetaData()
Table(
"add_excl",
m1,
Column("id", Integer, primary_key=True),
Column("period", TSRANGE),
ExcludeConstraint(("period", "&&"), name="quarters_period_excl"),
)
Table(
"add_excl",
m2,
Column("id", Integer, primary_key=True),
Column("period", TSRANGE),
ExcludeConstraint(("period", "&&"), name="quarters_period_excl"),
)
diffs = self._fixture(m1, m2)
eq_(diffs, [])
def test_same_tname_two_schemas(self):
m1 = MetaData()
m2 = MetaData()
Table("add_ix", m1, Column("x", String(50)), Index("ix_1", "x"))
Table("add_ix", m2, Column("x", String(50)), Index("ix_1", "x"))
Table("add_ix", m2, Column("x", String(50)), schema="test_schema")
diffs = self._fixture(m1, m2, include_schemas=True)
eq_(diffs[0][0], "add_table")
eq_(len(diffs), 1)
def test_uq_dropped(self):
m1 = MetaData()
m2 = MetaData()
Table(
"add_uq",
m1,
Column("id", Integer, primary_key=True),
Column("name", String),
UniqueConstraint("name", name="uq_name"),
)
Table(
"add_uq",
m2,
Column("id", Integer, primary_key=True),
Column("name", String),
)
diffs = self._fixture(m1, m2, include_schemas=True)
eq_(diffs[0][0], "remove_constraint")
eq_(diffs[0][1].name, "uq_name")
eq_(len(diffs), 1)
case = combinations(
("nulls_not_distinct=False", False),
("nulls_not_distinct=True", True),
("nulls_not_distinct=None", None),
argnames="case",
id_="ia",
)
name_type = combinations(
(
"index",
lambda value: Index(
"nnd_obj", "name", unique=True, postgresql_nulls_not_distinct=value
),
),
(
"constraint",
lambda value: UniqueConstraint(
"id", "name", name="nnd_obj", postgresql_nulls_not_distinct=value
),
),
argnames="name,type_",
id_="sa",
)
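# `case` and `name_type` are module-level combinations() decorators that get
# stacked on each test in the class below: `case` parametrizes the
# postgresql_nulls_not_distinct value (False / True / None) and `name_type`
# supplies the construct under test as either a unique Index or a
# UniqueConstraint, so every test body runs for all six crossings.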
class PGNullsNotDistinctAutogenerateTest(AutogenFixtureTest, TestBase):
__requires__ = ("nulls_not_distinct_db",)
__only_on__ = "postgresql"
__backend__ = True
@case
@name_type
def test_add(self, case, name, type_):
m1 = MetaData()
m2 = MetaData()
Table(
"tbl",
m1,
Column("id", Integer, primary_key=True),
Column("name", String),
)
Table(
"tbl",
m2,
Column("id", Integer, primary_key=True),
Column("name", String),
type_(case),
)
diffs = self._fixture(m1, m2)
eq_(len(diffs), 1)
eq_(diffs[0][0], f"add_{name}")
added = diffs[0][1]
eq_(added.name, "nnd_obj")
eq_(added.dialect_kwargs["postgresql_nulls_not_distinct"], case)
@case
@name_type
def test_remove(self, case, name, type_):
m1 = MetaData()
m2 = MetaData()
Table(
"tbl",
m1,
Column("id", Integer, primary_key=True),
Column("name", String),
type_(case),
)
Table(
"tbl",
m2,
Column("id", Integer, primary_key=True),
Column("name", String),
)
diffs = self._fixture(m1, m2)
eq_(len(diffs), 1)
eq_(diffs[0][0], f"remove_{name}")
eq_(diffs[0][1].name, "nnd_obj")
@case
@name_type
def test_toggle_not_distinct(self, case, name, type_):
m1 = MetaData()
m2 = MetaData()
to = not case
Table(
"tbl",
m1,
Column("id", Integer, primary_key=True),
Column("name", String),
type_(case),
)
Table(
"tbl",
m2,
Column("id", Integer, primary_key=True),
Column("name", String),
type_(to),
)
diffs = self._fixture(m1, m2)
eq_(len(diffs), 2)
eq_(diffs[0][0], f"remove_{name}")
eq_(diffs[1][0], f"add_{name}")
eq_(diffs[1][1].name, "nnd_obj")
eq_(diffs[1][1].dialect_kwargs["postgresql_nulls_not_distinct"], to)
@case
@name_type
def test_no_change(self, case, name, type_):
m1 = MetaData()
m2 = MetaData()
Table(
"tbl",
m1,
Column("id", Integer, primary_key=True),
Column("name", String),
type_(case),
)
Table(
"tbl",
m2,
Column("id", Integer, primary_key=True),
Column("name", String),
type_(case),
)
diffs = self._fixture(m1, m2)
eq_(len(diffs), 0, str(diffs))
| from sqlalchemy import BigInteger
from sqlalchemy import Boolean
from sqlalchemy import Column
from sqlalchemy import DateTime
from sqlalchemy import exc
from sqlalchemy import Float
from sqlalchemy import func
from sqlalchemy import Index
from sqlalchemy import inspect
from sqlalchemy import Integer
from sqlalchemy import Interval
from sqlalchemy import MetaData
from sqlalchemy import Numeric
from sqlalchemy import Sequence
from sqlalchemy import String
from sqlalchemy import Table
from sqlalchemy import text
from sqlalchemy import types
from sqlalchemy import UniqueConstraint
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.dialects.postgresql import BYTEA
from sqlalchemy.dialects.postgresql import ExcludeConstraint
from sqlalchemy.dialects.postgresql import HSTORE
from sqlalchemy.dialects.postgresql import JSON
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.dialects.postgresql import TSRANGE
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.sql import column
from sqlalchemy.sql import false
from sqlalchemy.sql import table
from sqlalchemy.sql.expression import literal_column
from alembic import autogenerate
from alembic import command
from alembic import op
from alembic import util
from alembic.autogenerate import api
from alembic.autogenerate.compare import _compare_server_default
from alembic.autogenerate.compare import _compare_tables
from alembic.autogenerate.compare import _render_server_default_for_compare
from alembic.migration import MigrationContext
from alembic.operations import ops
from alembic.script import ScriptDirectory
from alembic.testing import assert_raises_message
from alembic.testing import combinations
from alembic.testing import config
from alembic.testing import eq_
from alembic.testing import eq_ignore_whitespace
from alembic.testing import provide_metadata
from alembic.testing.env import _no_sql_testing_config
from alembic.testing.env import clear_staging_env
from alembic.testing.env import staging_env
from alembic.testing.env import write_script
from alembic.testing.fixtures import capture_context_buffer
from alembic.testing.fixtures import FutureEngineMixin
from alembic.testing.fixtures import op_fixture
from alembic.testing.fixtures import TablesTest
from alembic.testing.fixtures import TestBase
from alembic.testing.suite._autogen_fixtures import AutogenFixtureTest
from alembic.util import sqla_compat
class PostgresqlOpTest(TestBase):
def test_rename_table_postgresql(self):
context = op_fixture("postgresql")
op.rename_table("t1", "t2")
context.assert_("ALTER TABLE t1 RENAME TO t2")
def test_rename_table_schema_postgresql(self):
context = op_fixture("postgresql")
op.rename_table("t1", "t2", schema="foo")
context.assert_("ALTER TABLE foo.t1 RENAME TO t2")
def test_create_index_postgresql_expressions(self):
context = op_fixture("postgresql")
op.create_index(
"geocoded",
"locations",
[text("lower(coordinates)")],
postgresql_where=text("locations.coordinates != Null"),
)
context.assert_(
"CREATE INDEX geocoded ON locations (lower(coordinates)) "
"WHERE locations.coordinates != Null"
)
def test_create_index_postgresql_where(self):
context = op_fixture("postgresql")
op.create_index(
"geocoded",
"locations",
["coordinates"],
postgresql_where=text("locations.coordinates != Null"),
)
context.assert_(
"CREATE INDEX geocoded ON locations (coordinates) "
"WHERE locations.coordinates != Null"
)
def test_create_index_postgresql_concurrently(self):
context = op_fixture("postgresql")
op.create_index(
"geocoded",
"locations",
["coordinates"],
postgresql_concurrently=True,
)
context.assert_(
"CREATE INDEX CONCURRENTLY geocoded ON locations (coordinates)"
)
@config.requirements.sqlalchemy_14
def test_create_index_postgresql_include(self):
context = op_fixture("postgresql")
op.create_index(
"i", "t", ["c1", "c2"], unique=False, postgresql_include=["inc"]
)
context.assert_("CREATE INDEX i ON t (c1, c2) INCLUDE (inc)")
def test_create_index_postgresql_include_is_none(self):
context = op_fixture("postgresql")
op.create_index("i", "t", ["c1", "c2"], unique=False)
context.assert_("CREATE INDEX i ON t (c1, c2)")
@config.requirements.sqlalchemy_2
def test_create_index_postgresql_if_not_exists(self):
context = op_fixture("postgresql")
op.create_index("i", "t", ["c1", "c2"], if_not_exists=True)
context.assert_("CREATE INDEX IF NOT EXISTS i ON t (c1, c2)")
@config.combinations("include_table", "no_table", argnames="include_table")
def test_drop_index_postgresql_concurrently(self, include_table):
context = op_fixture("postgresql")
if include_table == "include_table":
op.drop_index(
"geocoded",
table_name="locations",
postgresql_concurrently=True,
)
else:
op.drop_index("geocoded", postgresql_concurrently=True)
context.assert_("DROP INDEX CONCURRENTLY geocoded")
@config.requirements.sqlalchemy_2
def test_drop_index_postgresql_if_exists(self):
context = op_fixture("postgresql")
op.drop_index("geocoded", if_exists=True)
context.assert_("DROP INDEX IF EXISTS geocoded")
def test_alter_column_type_using(self):
context = op_fixture("postgresql")
op.alter_column("t", "c", type_=Integer, postgresql_using="c::integer")
context.assert_(
"ALTER TABLE t ALTER COLUMN c TYPE INTEGER USING c::integer"
)
def test_col_w_pk_is_serial(self):
context = op_fixture("postgresql")
op.add_column("some_table", Column("q", Integer, primary_key=True))
context.assert_("ALTER TABLE some_table ADD COLUMN q SERIAL NOT NULL")
def test_create_exclude_constraint(self):
context = op_fixture("postgresql")
op.create_exclude_constraint(
"ex1", "t1", ("x", ">"), where="x > 5", using="gist"
)
context.assert_(
"ALTER TABLE t1 ADD CONSTRAINT ex1 EXCLUDE USING gist (x WITH >) "
"WHERE (x > 5)"
)
def test_drop_exclude_or_other_constraint(self):
context = op_fixture("postgresql")
op.drop_constraint("t_excl_x", "TTable", type_=None)
context.assert_('ALTER TABLE "TTable" DROP CONSTRAINT t_excl_x')
def test_create_exclude_constraint_quoted_literal(self):
context = op_fixture("postgresql")
op.create_exclude_constraint(
"ex1",
"SomeTable",
(column("SomeColumn"), ">"),
where='"SomeColumn" > 5',
using="gist",
)
context.assert_(
'ALTER TABLE "SomeTable" ADD CONSTRAINT ex1 EXCLUDE USING gist '
'("SomeColumn" WITH >) WHERE ("SomeColumn" > 5)'
)
def test_create_exclude_constraint_quoted_column(self):
context = op_fixture("postgresql")
op.create_exclude_constraint(
"ex1",
"SomeTable",
(column("SomeColumn"), ">"),
where=column("SomeColumn") > 5,
using="gist",
)
context.assert_(
'ALTER TABLE "SomeTable" ADD CONSTRAINT ex1 EXCLUDE '
'USING gist ("SomeColumn" WITH >) WHERE ("SomeColumn" > 5)'
)
def test_add_column_with_comment(self):
context = op_fixture("postgresql")
op.add_column("t", Column("q", Integer, comment="This is a comment"))
context.assert_(
"ALTER TABLE t ADD COLUMN q INTEGER",
"COMMENT ON COLUMN t.q IS 'This is a comment'",
)
def test_alter_column_with_comment(self):
context = op_fixture("postgresql")
op.alter_column(
"t",
"c",
nullable=False,
existing_type=Boolean(),
schema="foo",
comment="This is a column comment",
)
context.assert_(
"ALTER TABLE foo.t ALTER COLUMN c SET NOT NULL",
"COMMENT ON COLUMN foo.t.c IS 'This is a column comment'",
)
def test_alter_column_add_comment(self):
context = op_fixture("postgresql")
op.alter_column(
"t",
"c",
existing_type=Boolean(),
schema="foo",
comment="This is a column comment",
)
context.assert_(
"COMMENT ON COLUMN foo.t.c IS 'This is a column comment'"
)
def test_alter_column_add_comment_table_and_column_quoting(self):
context = op_fixture("postgresql")
op.alter_column(
"T",
"C",
existing_type=Boolean(),
schema="foo",
comment="This is a column comment",
)
context.assert_(
'COMMENT ON COLUMN foo."T"."C" IS \'This is a column comment\''
)
def test_alter_column_add_comment_quoting(self):
context = op_fixture("postgresql")
op.alter_column(
"t",
"c",
existing_type=Boolean(),
schema="foo",
comment="This is a column 'comment'",
)
context.assert_(
"COMMENT ON COLUMN foo.t.c IS 'This is a column ''comment'''"
)
def test_alter_column_drop_comment(self):
context = op_fixture("postgresql")
op.alter_column(
"t",
"c",
existing_type=Boolean(),
schema="foo",
comment=None,
existing_comment="This is a column comment",
)
context.assert_("COMMENT ON COLUMN foo.t.c IS NULL")
def test_create_table_with_comment(self):
context = op_fixture("postgresql")
op.create_table(
"t2",
Column("c1", Integer, primary_key=True),
Column("c2", Integer),
comment="t2 comment",
)
context.assert_(
"CREATE TABLE t2 (c1 SERIAL NOT NULL, "
"c2 INTEGER, PRIMARY KEY (c1))",
"COMMENT ON TABLE t2 IS 't2 comment'",
)
def test_create_table_with_column_comments(self):
context = op_fixture("postgresql")
op.create_table(
"t2",
Column("c1", Integer, primary_key=True, comment="c1 comment"),
Column("c2", Integer, comment="c2 comment"),
comment="t2 comment",
)
context.assert_(
"CREATE TABLE t2 (c1 SERIAL NOT NULL, "
"c2 INTEGER, PRIMARY KEY (c1))",
"COMMENT ON TABLE t2 IS 't2 comment'",
"COMMENT ON COLUMN t2.c1 IS 'c1 comment'",
"COMMENT ON COLUMN t2.c2 IS 'c2 comment'",
)
def test_create_table_comment(self):
# this is handled by SQLAlchemy's compilers
context = op_fixture("postgresql")
op.create_table_comment("t2", comment="t2 table", schema="foo")
context.assert_("COMMENT ON TABLE foo.t2 IS 't2 table'")
def test_drop_table_comment(self):
# this is handled by SQLAlchemy's compilers
context = op_fixture("postgresql")
op.drop_table_comment("t2", existing_comment="t2 table", schema="foo")
context.assert_("COMMENT ON TABLE foo.t2 IS NULL")
@config.requirements.computed_columns
def test_add_column_computed(self):
context = op_fixture("postgresql")
op.add_column(
"t1",
Column("some_column", Integer, sqla_compat.Computed("foo * 5")),
)
context.assert_(
"ALTER TABLE t1 ADD COLUMN some_column "
"INTEGER GENERATED ALWAYS AS (foo * 5) STORED"
)
@combinations(
(lambda: sqla_compat.Computed("foo * 5"), lambda: None),
(lambda: None, lambda: sqla_compat.Computed("foo * 5")),
(
lambda: sqla_compat.Computed("foo * 42"),
lambda: sqla_compat.Computed("foo * 5"),
),
)
@config.requirements.computed_columns
def test_alter_column_computed_not_supported(self, sd, esd):
op_fixture("postgresql")
assert_raises_message(
exc.CompileError,
'Adding or removing a "computed" construct, e.g. '
"GENERATED ALWAYS AS, to or from an existing column is not "
"supported.",
op.alter_column,
"t1",
"c1",
server_default=sd(),
existing_server_default=esd(),
)
@config.requirements.identity_columns
@combinations(
({}, None),
(dict(always=True), None),
(
dict(start=3, increment=33, maxvalue=99, cycle=True),
"INCREMENT BY 33 START WITH 3 MAXVALUE 99 CYCLE",
),
)
def test_add_column_identity(self, kw, text):
context = op_fixture("postgresql")
op.add_column(
"t1",
Column("some_column", Integer, sqla_compat.Identity(**kw)),
)
qualification = "ALWAYS" if kw.get("always", False) else "BY DEFAULT"
options = " (%s)" % text if text else ""
context.assert_(
"ALTER TABLE t1 ADD COLUMN some_column "
"INTEGER GENERATED %s AS IDENTITY%s" % (qualification, options)
)
@config.requirements.identity_columns
@combinations(
({}, None),
(dict(always=True), None),
(
dict(start=3, increment=33, maxvalue=99, cycle=True),
"INCREMENT BY 33 START WITH 3 MAXVALUE 99 CYCLE",
),
)
def test_add_identity_to_column(self, kw, text):
context = op_fixture("postgresql")
op.alter_column(
"t1",
"some_column",
server_default=sqla_compat.Identity(**kw),
existing_server_default=None,
)
qualification = "ALWAYS" if kw.get("always", False) else "BY DEFAULT"
options = " (%s)" % text if text else ""
context.assert_(
"ALTER TABLE t1 ALTER COLUMN some_column ADD "
"GENERATED %s AS IDENTITY%s" % (qualification, options)
)
@config.requirements.identity_columns
def test_remove_identity_from_column(self):
context = op_fixture("postgresql")
op.alter_column(
"t1",
"some_column",
server_default=None,
existing_server_default=sqla_compat.Identity(),
)
context.assert_(
"ALTER TABLE t1 ALTER COLUMN some_column DROP IDENTITY"
)
@config.requirements.identity_columns
@combinations(
({}, dict(always=True), "SET GENERATED ALWAYS"),
(
dict(always=True),
dict(always=False, start=3),
"SET GENERATED BY DEFAULT SET START WITH 3",
),
(
dict(always=True, start=3, increment=2, minvalue=-3, maxvalue=99),
dict(
always=True,
start=3,
increment=1,
minvalue=-3,
maxvalue=99,
cycle=True,
),
"SET CYCLE SET INCREMENT BY 1",
),
(
dict(
always=False,
start=3,
maxvalue=9999,
minvalue=0,
),
dict(always=False, start=3, cache=2),
"SET CACHE 2",
),
(
dict(always=False),
dict(always=None, minvalue=0),
"SET MINVALUE 0",
),
)
def test_change_identity_in_column(self, existing, updated, text):
context = op_fixture("postgresql")
op.alter_column(
"t1",
"some_column",
server_default=sqla_compat.Identity(**updated),
existing_server_default=sqla_compat.Identity(**existing),
)
context.assert_("ALTER TABLE t1 ALTER COLUMN some_column %s" % text)
class PGAutocommitBlockTest(TestBase):
__only_on__ = "postgresql"
__backend__ = True
def setUp(self):
self.conn = conn = config.db.connect()
with conn.begin():
conn.execute(
text("CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy')")
)
def tearDown(self):
with self.conn.begin():
self.conn.execute(text("DROP TYPE mood"))
def test_alter_enum(self, migration_context):
with migration_context.begin_transaction(_per_migration=True):
with migration_context.autocommit_block():
migration_context.execute(
text("ALTER TYPE mood ADD VALUE 'soso'")
)
class PGAutocommitBlockTestFuture(FutureEngineMixin, PGAutocommitBlockTest):
pass
class PGOfflineEnumTest(TestBase):
def setUp(self):
staging_env()
self.cfg = cfg = _no_sql_testing_config()
self.rid = rid = util.rev_id()
self.script = script = ScriptDirectory.from_config(cfg)
script.generate_revision(rid, None, refresh=True)
def tearDown(self):
clear_staging_env()
def _inline_enum_script(self):
write_script(
self.script,
self.rid,
"""
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy.dialects.postgresql import ENUM
from sqlalchemy import Column
def upgrade():
op.create_table("sometable",
Column("data", ENUM("one", "two", "three", name="pgenum"))
)
def downgrade():
op.drop_table("sometable")
"""
% self.rid,
)
def _distinct_enum_script(self):
write_script(
self.script,
self.rid,
"""
revision = '%s'
down_revision = None
from alembic import op
from sqlalchemy.dialects.postgresql import ENUM
from sqlalchemy import Column
def upgrade():
enum = ENUM("one", "two", "three", name="pgenum", create_type=False)
enum.create(op.get_bind(), checkfirst=False)
op.create_table("sometable",
Column("data", enum)
)
def downgrade():
op.drop_table("sometable")
ENUM(name="pgenum").drop(op.get_bind(), checkfirst=False)
"""
% self.rid,
)
def test_offline_inline_enum_create(self):
self._inline_enum_script()
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.rid, sql=True)
assert (
"CREATE TYPE pgenum AS "
"ENUM ('one', 'two', 'three')" in buf.getvalue()
)
assert "CREATE TABLE sometable (\n data pgenum\n)" in buf.getvalue()
def test_offline_inline_enum_drop(self):
self._inline_enum_script()
with capture_context_buffer() as buf:
command.downgrade(self.cfg, "%s:base" % self.rid, sql=True)
assert "DROP TABLE sometable" in buf.getvalue()
# no drop since we didn't emit events
assert "DROP TYPE pgenum" not in buf.getvalue()
def test_offline_distinct_enum_create(self):
self._distinct_enum_script()
with capture_context_buffer() as buf:
command.upgrade(self.cfg, self.rid, sql=True)
assert (
"CREATE TYPE pgenum AS ENUM "
"('one', 'two', 'three')" in buf.getvalue()
)
assert "CREATE TABLE sometable (\n data pgenum\n)" in buf.getvalue()
def test_offline_distinct_enum_drop(self):
self._distinct_enum_script()
with capture_context_buffer() as buf:
command.downgrade(self.cfg, "%s:base" % self.rid, sql=True)
assert "DROP TABLE sometable" in buf.getvalue()
assert "DROP TYPE pgenum" in buf.getvalue()
class PostgresqlInlineLiteralTest(TablesTest):
__only_on__ = "postgresql"
__backend__ = True
@classmethod
def define_tables(cls, metadata):
Table("tab", metadata, Column("col", String(50)))
@classmethod
def insert_data(cls, connection):
connection.execute(
text(
"""
insert into tab (col) values
('old data 1'),
('old data 2.1'),
('old data 3')
"""
)
)
def test_inline_percent(self, connection, ops_context):
# TODO: here's the issue, you need to escape this.
tab = table("tab", column("col"))
ops_context.execute(
tab.update()
.where(tab.c.col.like(ops_context.inline_literal("%.%")))
.values(col=ops_context.inline_literal("new data")),
execution_options={"no_parameters": True},
)
eq_(
connection.execute(
text("select count(*) from tab where col='new data'")
).scalar(),
1,
)
class PostgresqlDefaultCompareTest(TestBase):
__only_on__ = "postgresql"
__backend__ = True
@classmethod
def setup_class(cls):
cls.bind = config.db
staging_env()
cls.migration_context = MigrationContext.configure(
connection=cls.bind.connect(),
opts={"compare_type": True, "compare_server_default": True},
)
def setUp(self):
self.metadata = MetaData()
self.autogen_context = api.AutogenContext(self.migration_context)
@classmethod
def teardown_class(cls):
clear_staging_env()
def tearDown(self):
with config.db.begin() as conn:
self.metadata.drop_all(conn)
def _compare_default_roundtrip(
self, type_, orig_default, alternate=None, diff_expected=None
):
diff_expected = (
diff_expected
if diff_expected is not None
else alternate is not None
)
if alternate is None:
alternate = orig_default
t1 = Table(
"test",
self.metadata,
Column("somecol", type_, server_default=orig_default),
)
t2 = Table(
"test",
MetaData(),
Column("somecol", type_, server_default=alternate),
)
t1.create(self.bind)
insp = inspect(self.bind)
cols = insp.get_columns(t1.name)
insp_col = Column(
"somecol", cols[0]["type"], server_default=text(cols[0]["default"])
)
op = ops.AlterColumnOp("test", "somecol")
_compare_server_default(
self.autogen_context,
op,
None,
"test",
"somecol",
insp_col,
t2.c.somecol,
)
diffs = op.to_diff_tuple()
eq_(bool(diffs), diff_expected)
def _compare_default(self, t1, t2, col, rendered):
t1.create(self.bind, checkfirst=True)
insp = inspect(self.bind)
cols = insp.get_columns(t1.name)
ctx = self.autogen_context.migration_context
return ctx.impl.compare_server_default(
None, col, rendered, cols[0]["default"]
)
def test_compare_string_blank_default(self):
self._compare_default_roundtrip(String(8), "")
def test_compare_string_nonblank_default(self):
self._compare_default_roundtrip(String(8), "hi")
def test_compare_interval_str(self):
# this form shouldn't be used but testing here
# for compatibility
self._compare_default_roundtrip(Interval, "14 days")
@config.requirements.postgresql_uuid_ossp
def test_compare_uuid_text(self):
self._compare_default_roundtrip(UUID, text("uuid_generate_v4()"))
def test_compare_interval_text(self):
self._compare_default_roundtrip(Interval, text("'14 days'"))
def test_compare_array_of_integer_text(self):
self._compare_default_roundtrip(
ARRAY(Integer), text("(ARRAY[]::integer[])")
)
def test_compare_current_timestamp_text(self):
self._compare_default_roundtrip(
DateTime(), text("TIMEZONE('utc', CURRENT_TIMESTAMP)")
)
def test_compare_current_timestamp_fn_w_binds(self):
self._compare_default_roundtrip(
DateTime(), func.timezone("utc", func.current_timestamp())
)
def test_compare_integer_str(self):
self._compare_default_roundtrip(Integer(), "5")
def test_compare_integer_text(self):
self._compare_default_roundtrip(Integer(), text("5"))
def test_compare_integer_text_diff(self):
self._compare_default_roundtrip(Integer(), text("5"), "7")
def test_compare_float_str(self):
self._compare_default_roundtrip(Float(), "5.2")
def test_compare_float_text(self):
self._compare_default_roundtrip(Float(), text("5.2"))
def test_compare_float_no_diff1(self):
self._compare_default_roundtrip(
Float(), text("5.2"), "5.2", diff_expected=False
)
def test_compare_float_no_diff2(self):
self._compare_default_roundtrip(
Float(), "5.2", text("5.2"), diff_expected=False
)
def test_compare_float_no_diff3(self):
self._compare_default_roundtrip(
Float(), text("5"), text("5.0"), diff_expected=False
)
def test_compare_float_no_diff4(self):
self._compare_default_roundtrip(
Float(), "5", "5.0", diff_expected=False
)
def test_compare_float_no_diff5(self):
self._compare_default_roundtrip(
Float(), text("5"), "5.0", diff_expected=False
)
def test_compare_float_no_diff6(self):
self._compare_default_roundtrip(
Float(), "5", text("5.0"), diff_expected=False
)
def test_compare_numeric_no_diff(self):
self._compare_default_roundtrip(
Numeric(), text("5"), "5.0", diff_expected=False
)
def test_compare_unicode_literal(self):
self._compare_default_roundtrip(String(), "im a default")
# TODO: will need to actually eval() the repr() and
# spend more effort figuring out exactly the kind of expression
# to use
def _TODO_test_compare_character_str_w_singlequote(self):
self._compare_default_roundtrip(String(), "hel''lo")
def test_compare_character_str(self):
self._compare_default_roundtrip(String(), "hello")
def test_compare_character_text(self):
self._compare_default_roundtrip(String(), text("'hello'"))
def test_compare_character_str_diff(self):
self._compare_default_roundtrip(String(), "hello", "there")
def test_compare_character_text_diff(self):
self._compare_default_roundtrip(
String(), text("'hello'"), text("'there'")
)
def test_primary_key_skip(self):
"""Test that SERIAL cols are just skipped"""
t1 = Table(
"sometable", self.metadata, Column("id", Integer, primary_key=True)
)
t2 = Table(
"sometable", MetaData(), Column("id", Integer, primary_key=True)
)
assert not self._compare_default(t1, t2, t2.c.id, "")
class PostgresqlDetectSerialTest(TestBase):
__only_on__ = "postgresql"
__backend__ = True
@classmethod
def setup_class(cls):
cls.bind = config.db
staging_env()
def setUp(self):
self.conn = self.bind.connect()
self.migration_context = MigrationContext.configure(
connection=self.conn,
opts={"compare_type": True, "compare_server_default": True},
)
self.autogen_context = api.AutogenContext(self.migration_context)
def tearDown(self):
self.conn.close()
@classmethod
def teardown_class(cls):
clear_staging_env()
@provide_metadata
def _expect_default(self, c_expected, col, seq=None):
Table("t", self.metadata, col)
self.autogen_context.metadata = self.metadata
if seq:
seq._set_metadata(self.metadata)
self.metadata.create_all(config.db)
insp = inspect(config.db)
uo = ops.UpgradeOps(ops=[])
_compare_tables({(None, "t")}, set(), insp, uo, self.autogen_context)
diffs = uo.as_diffs()
tab = diffs[0][1]
eq_(
_render_server_default_for_compare(
tab.c.x.server_default, self.autogen_context
),
c_expected,
)
insp = inspect(config.db)
uo = ops.UpgradeOps(ops=[])
m2 = MetaData()
Table("t", m2, Column("x", BigInteger()))
self.autogen_context.metadata = m2
_compare_tables(
{(None, "t")},
{(None, "t")},
insp,
uo,
self.autogen_context,
)
diffs = uo.as_diffs()
server_default = diffs[0][0][4]["existing_server_default"]
eq_(
_render_server_default_for_compare(
server_default, self.autogen_context
),
c_expected,
)
def test_serial(self):
self._expect_default(None, Column("x", Integer, primary_key=True))
def test_separate_seq(self):
seq = Sequence("x_id_seq")
self._expect_default(
"nextval('x_id_seq'::regclass)",
Column(
"x", Integer, server_default=seq.next_value(), primary_key=True
),
seq,
)
def test_numeric(self):
seq = Sequence("x_id_seq")
self._expect_default(
"nextval('x_id_seq'::regclass)",
Column(
"x",
Numeric(8, 2),
server_default=seq.next_value(),
primary_key=True,
),
seq,
)
def test_no_default(self):
self._expect_default(
None, Column("x", Integer, autoincrement=False, primary_key=True)
)
class PostgresqlAutogenRenderTest(TestBase):
def setUp(self):
ctx_opts = {
"sqlalchemy_module_prefix": "sa.",
"alembic_module_prefix": "op.",
"target_metadata": MetaData(),
}
context = MigrationContext.configure(
dialect_name="postgresql", opts=ctx_opts
)
self.autogen_context = api.AutogenContext(context)
def test_render_add_index_pg_where(self):
autogen_context = self.autogen_context
m = MetaData()
t = Table("t", m, Column("x", String), Column("y", String))
idx = Index(
"foo_idx", t.c.x, t.c.y, postgresql_where=(t.c.y == "something")
)
op_obj = ops.CreateIndexOp.from_index(idx)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"""op.create_index('foo_idx', 't', \
['x', 'y'], unique=False, """
"""postgresql_where=sa.text("y = 'something'"))""",
)
def test_render_server_default_native_boolean(self):
c = Column(
"updated_at", Boolean(), server_default=false(), nullable=False
)
result = autogenerate.render._render_column(c, self.autogen_context)
eq_ignore_whitespace(
result,
"sa.Column('updated_at', sa.Boolean(), "
"server_default=sa.text('false'), "
"nullable=False)",
)
def test_postgresql_array_type(self):
eq_ignore_whitespace(
autogenerate.render._repr_type(
ARRAY(Integer), self.autogen_context
),
"postgresql.ARRAY(sa.Integer())",
)
eq_ignore_whitespace(
autogenerate.render._repr_type(
ARRAY(DateTime(timezone=True)), self.autogen_context
),
"postgresql.ARRAY(sa.DateTime(timezone=True))",
)
eq_ignore_whitespace(
autogenerate.render._repr_type(
ARRAY(BYTEA, as_tuple=True, dimensions=2), self.autogen_context
),
"postgresql.ARRAY(postgresql.BYTEA(), "
"as_tuple=True, dimensions=2)",
)
assert (
"from sqlalchemy.dialects import postgresql"
in self.autogen_context.imports
)
def test_postgresql_hstore_subtypes(self):
eq_ignore_whitespace(
autogenerate.render._repr_type(HSTORE(), self.autogen_context),
"postgresql.HSTORE(text_type=sa.Text())",
)
eq_ignore_whitespace(
autogenerate.render._repr_type(
HSTORE(text_type=String()), self.autogen_context
),
"postgresql.HSTORE(text_type=sa.String())",
)
eq_ignore_whitespace(
autogenerate.render._repr_type(
HSTORE(text_type=BYTEA()), self.autogen_context
),
"postgresql.HSTORE(text_type=postgresql.BYTEA())",
)
assert (
"from sqlalchemy.dialects import postgresql"
in self.autogen_context.imports
)
def test_generic_array_type(self):
eq_ignore_whitespace(
autogenerate.render._repr_type(
types.ARRAY(Integer), self.autogen_context
),
"sa.ARRAY(sa.Integer())",
)
eq_ignore_whitespace(
autogenerate.render._repr_type(
types.ARRAY(DateTime(timezone=True)), self.autogen_context
),
"sa.ARRAY(sa.DateTime(timezone=True))",
)
assert (
"from sqlalchemy.dialects import postgresql"
not in self.autogen_context.imports
)
eq_ignore_whitespace(
autogenerate.render._repr_type(
types.ARRAY(BYTEA, as_tuple=True, dimensions=2),
self.autogen_context,
),
"sa.ARRAY(postgresql.BYTEA(), as_tuple=True, dimensions=2)",
)
assert (
"from sqlalchemy.dialects import postgresql"
in self.autogen_context.imports
)
def test_array_type_user_defined_inner(self):
def repr_type(typestring, object_, autogen_context):
if typestring == "type" and isinstance(object_, String):
return "foobar.MYVARCHAR"
else:
return False
self.autogen_context.opts.update(render_item=repr_type)
eq_ignore_whitespace(
autogenerate.render._repr_type(
ARRAY(String), self.autogen_context
),
"postgresql.ARRAY(foobar.MYVARCHAR)",
)
def test_add_exclude_constraint(self):
autogen_context = self.autogen_context
m = MetaData()
t = Table("t", m, Column("x", String), Column("y", String))
op_obj = ops.AddConstraintOp.from_constraint(
ExcludeConstraint(
(t.c.x, ">"), where=t.c.x != 2, using="gist", name="t_excl_x"
)
)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_exclude_constraint('t_excl_x', "
"'t', (sa.column('x'), '>'), "
"where=sa.text('x != 2'), using='gist')",
)
def test_add_exclude_constraint_case_sensitive(self):
autogen_context = self.autogen_context
m = MetaData()
t = Table(
"TTAble", m, Column("XColumn", String), Column("YColumn", String)
)
op_obj = ops.AddConstraintOp.from_constraint(
ExcludeConstraint(
(t.c.XColumn, ">"),
where=t.c.XColumn != 2,
using="gist",
name="t_excl_x",
)
)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_exclude_constraint('t_excl_x', 'TTAble', "
"(sa.column('XColumn'), '>'), "
"where=sa.text('\"XColumn\" != 2'), using='gist')",
)
def test_inline_exclude_constraint(self):
autogen_context = self.autogen_context
m = MetaData()
t = Table(
"t",
m,
Column("x", String),
Column("y", String),
ExcludeConstraint(
(column("x"), ">"),
using="gist",
where="x != 2",
name="t_excl_x",
),
)
op_obj = ops.CreateTableOp.from_table(t)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_table('t',sa.Column('x', sa.String(), nullable=True),"
"sa.Column('y', sa.String(), nullable=True),"
"postgresql.ExcludeConstraint((sa.column('x'), '>'), "
"where=sa.text('x != 2'), using='gist', name='t_excl_x')"
")",
)
def test_inline_exclude_constraint_case_sensitive(self):
autogen_context = self.autogen_context
m = MetaData()
t = Table(
"TTable", m, Column("XColumn", String), Column("YColumn", String)
)
ExcludeConstraint(
(t.c.XColumn, ">"),
using="gist",
where='"XColumn" != 2',
name="TExclX",
)
op_obj = ops.CreateTableOp.from_table(t)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_table('TTable',sa.Column('XColumn', sa.String(), "
"nullable=True),"
"sa.Column('YColumn', sa.String(), nullable=True),"
"postgresql.ExcludeConstraint((sa.column('XColumn'), '>'), "
"where=sa.text('\"XColumn\" != 2'), using='gist', "
"name='TExclX'))",
)
def test_inline_exclude_constraint_literal_column(self):
"""test for #1184"""
autogen_context = self.autogen_context
m = MetaData()
t = Table(
"TTable",
m,
Column("id", String()),
ExcludeConstraint(
(literal_column("id + 2"), "="), name="TExclID", using="gist"
),
)
op_obj = ops.CreateTableOp.from_table(t)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_table('TTable',sa.Column('id', sa.String(), "
"nullable=True),"
"postgresql.ExcludeConstraint((sa.literal_column('id + 2'), '='), "
"using='gist', "
"name='TExclID'))",
)
@config.requirements.sqlalchemy_2
def test_inline_exclude_constraint_fn(self):
"""test for #1230"""
autogen_context = self.autogen_context
effective_time = Column("effective_time", DateTime(timezone=True))
expiry_time = Column("expiry_time", DateTime(timezone=True))
m = MetaData()
t = Table(
"TTable",
m,
effective_time,
expiry_time,
ExcludeConstraint(
(func.tstzrange(effective_time, expiry_time), "&&"),
using="gist",
),
)
op_obj = ops.CreateTableOp.from_table(t)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_table('TTable',sa.Column('effective_time', "
"sa.DateTime(timezone=True), nullable=True),"
"sa.Column('expiry_time', sa.DateTime(timezone=True), "
"nullable=True),postgresql.ExcludeConstraint("
"(sa.text('tstzrange(effective_time, expiry_time)'), "
"'&&'), using='gist'))",
)
@config.requirements.sqlalchemy_2
def test_inline_exclude_constraint_text(self):
"""test for #1184.
Requires SQLAlchemy 2.0.5 due to issue
https://github.com/sqlalchemy/sqlalchemy/issues/9401
"""
autogen_context = self.autogen_context
m = MetaData()
t = Table(
"TTable",
m,
Column("id", String()),
ExcludeConstraint(
(text("id + 2"), "="), name="TExclID", using="gist"
),
)
op_obj = ops.CreateTableOp.from_table(t)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.create_table('TTable',sa.Column('id', sa.String(), "
"nullable=True),"
"postgresql.ExcludeConstraint((sa.text('id + 2'), '='), "
"using='gist', "
"name='TExclID'))",
)
def test_drop_exclude_constraint(self):
"""test for #1300"""
autogen_context = self.autogen_context
m = MetaData()
t = Table(
"TTable", m, Column("XColumn", String), Column("YColumn", String)
)
op_obj = ops.DropConstraintOp.from_constraint(
ExcludeConstraint(
(t.c.XColumn, ">"),
where=t.c.XColumn != 2,
using="gist",
name="t_excl_x",
)
)
eq_ignore_whitespace(
autogenerate.render_op_text(autogen_context, op_obj),
"op.drop_constraint('t_excl_x', 'TTable')",
)
def test_json_type(self):
eq_ignore_whitespace(
autogenerate.render._repr_type(JSON(), self.autogen_context),
"postgresql.JSON(astext_type=sa.Text())",
)
def test_jsonb_type(self):
eq_ignore_whitespace(
autogenerate.render._repr_type(JSONB(), self.autogen_context),
"postgresql.JSONB(astext_type=sa.Text())",
)
@config.requirements.nulls_not_distinct_sa
def test_render_unique_nulls_not_distinct_constraint(self):
m = MetaData()
t = Table("tbl", m, Column("c", Integer))
uc = UniqueConstraint(
t.c.c,
name="uq_1",
deferrable="XYZ",
postgresql_nulls_not_distinct=True,
)
eq_ignore_whitespace(
autogenerate.render.render_op_text(
self.autogen_context,
ops.AddConstraintOp.from_constraint(uc),
),
"op.create_unique_constraint('uq_1', 'tbl', ['c'], "
"deferrable='XYZ', postgresql_nulls_not_distinct=True)",
)
eq_ignore_whitespace(
autogenerate.render._render_unique_constraint(
uc, self.autogen_context, None
),
"sa.UniqueConstraint('c', deferrable='XYZ', name='uq_1', "
"postgresql_nulls_not_distinct=True)",
)
@config.requirements.nulls_not_distinct_sa
def test_render_index_nulls_not_distinct_constraint(self):
m = MetaData()
t = Table("tbl", m, Column("c", Integer))
idx = Index("ix_42", t.c.c, postgresql_nulls_not_distinct=False)
eq_ignore_whitespace(
autogenerate.render.render_op_text(
self.autogen_context, ops.CreateIndexOp.from_index(idx)
),
"op.create_index('ix_42', 'tbl', ['c'], unique=False, "
"postgresql_nulls_not_distinct=False)",
)
class PGUniqueIndexAutogenerateTest(AutogenFixtureTest, TestBase):
__only_on__ = "postgresql"
__backend__ = True
def test_idx_added_schema(self):
m1 = MetaData()
m2 = MetaData()
Table("add_ix", m1, Column("x", String(50)), schema="test_schema")
Table(
"add_ix",
m2,
Column("x", String(50)),
Index("ix_1", "x"),
schema="test_schema",
)
diffs = self._fixture(m1, m2, include_schemas=True)
eq_(diffs[0][0], "add_index")
eq_(diffs[0][1].name, "ix_1")
def test_idx_unchanged_schema(self):
m1 = MetaData()
m2 = MetaData()
Table(
"add_ix",
m1,
Column("x", String(50)),
Index("ix_1", "x"),
schema="test_schema",
)
Table(
"add_ix",
m2,
Column("x", String(50)),
Index("ix_1", "x"),
schema="test_schema",
)
diffs = self._fixture(m1, m2, include_schemas=True)
eq_(diffs, [])
def test_uq_added_schema(self):
m1 = MetaData()
m2 = MetaData()
Table("add_uq", m1, Column("x", String(50)), schema="test_schema")
Table(
"add_uq",
m2,
Column("x", String(50)),
UniqueConstraint("x", name="ix_1"),
schema="test_schema",
)
diffs = self._fixture(m1, m2, include_schemas=True)
eq_(diffs[0][0], "add_constraint")
eq_(diffs[0][1].name, "ix_1")
def test_uq_unchanged_schema(self):
m1 = MetaData()
m2 = MetaData()
Table(
"add_uq",
m1,
Column("x", String(50)),
UniqueConstraint("x", name="ix_1"),
schema="test_schema",
)
Table(
"add_uq",
m2,
Column("x", String(50)),
UniqueConstraint("x", name="ix_1"),
schema="test_schema",
)
diffs = self._fixture(m1, m2, include_schemas=True)
eq_(diffs, [])
@config.requirements.btree_gist
def test_exclude_const_unchanged(self):
m1 = MetaData()
m2 = MetaData()
Table(
"add_excl",
m1,
Column("id", Integer, primary_key=True),
Column("period", TSRANGE),
ExcludeConstraint(("period", "&&"), name="quarters_period_excl"),
)
Table(
"add_excl",
m2,
Column("id", Integer, primary_key=True),
Column("period", TSRANGE),
ExcludeConstraint(("period", "&&"), name="quarters_period_excl"),
)
diffs = self._fixture(m1, m2)
eq_(diffs, [])
def test_same_tname_two_schemas(self):
m1 = MetaData()
m2 = MetaData()
Table("add_ix", m1, Column("x", String(50)), Index("ix_1", "x"))
Table("add_ix", m2, Column("x", String(50)), Index("ix_1", "x"))
Table("add_ix", m2, Column("x", String(50)), schema="test_schema")
diffs = self._fixture(m1, m2, include_schemas=True)
eq_(diffs[0][0], "add_table")
eq_(len(diffs), 1)
def test_uq_dropped(self):
m1 = MetaData()
m2 = MetaData()
Table(
"add_uq",
m1,
Column("id", Integer, primary_key=True),
Column("name", String),
UniqueConstraint("name", name="uq_name"),
)
Table(
"add_uq",
m2,
Column("id", Integer, primary_key=True),
Column("name", String),
)
diffs = self._fixture(m1, m2, include_schemas=True)
eq_(diffs[0][0], "remove_constraint")
eq_(diffs[0][1].name, "uq_name")
eq_(len(diffs), 1)
case = combinations(
("nulls_not_distinct=False", False),
("nulls_not_distinct=True", True),
("nulls_not_distinct=None", None),
argnames="case",
id_="ia",
)
name_type = combinations(
(
"index",
lambda value: Index(
"nnd_obj", "name", unique=True, postgresql_nulls_not_distinct=value
),
),
(
"constraint",
lambda value: UniqueConstraint(
"id", "name", name="nnd_obj", postgresql_nulls_not_distinct=value
),
),
argnames="name,type_",
id_="sa",
)
class PGNullsNotDistinctAutogenerateTest(AutogenFixtureTest, TestBase):
__requires__ = ("nulls_not_distinct_db",)
__only_on__ = "postgresql"
__backend__ = True
@case
@name_type
def test_add(self, case, name, type_):
m1 = MetaData()
m2 = MetaData()
Table(
"tbl",
m1,
Column("id", Integer, primary_key=True),
Column("name", String),
)
Table(
"tbl",
m2,
Column("id", Integer, primary_key=True),
Column("name", String),
type_(case),
)
diffs = self._fixture(m1, m2)
eq_(len(diffs), 1)
eq_(diffs[0][0], f"add_{name}")
added = diffs[0][1]
eq_(added.name, "nnd_obj")
eq_(added.dialect_kwargs["postgresql_nulls_not_distinct"], case)
@case
@name_type
def test_remove(self, case, name, type_):
m1 = MetaData()
m2 = MetaData()
Table(
"tbl",
m1,
Column("id", Integer, primary_key=True),
Column("name", String),
type_(case),
)
Table(
"tbl",
m2,
Column("id", Integer, primary_key=True),
Column("name", String),
)
diffs = self._fixture(m1, m2)
eq_(len(diffs), 1)
eq_(diffs[0][0], f"remove_{name}")
eq_(diffs[0][1].name, "nnd_obj")
@case
@name_type
def test_toggle_not_distinct(self, case, name, type_):
m1 = MetaData()
m2 = MetaData()
to = not case
Table(
"tbl",
m1,
Column("id", Integer, primary_key=True),
Column("name", String),
type_(case),
)
Table(
"tbl",
m2,
Column("id", Integer, primary_key=True),
Column("name", String),
type_(to),
)
diffs = self._fixture(m1, m2)
eq_(len(diffs), 2)
eq_(diffs[0][0], f"remove_{name}")
eq_(diffs[1][0], f"add_{name}")
eq_(diffs[1][1].name, "nnd_obj")
eq_(diffs[1][1].dialect_kwargs["postgresql_nulls_not_distinct"], to)
@case
@name_type
def test_no_change(self, case, name, type_):
m1 = MetaData()
m2 = MetaData()
Table(
"tbl",
m1,
Column("id", Integer, primary_key=True),
Column("name", String),
type_(case),
)
Table(
"tbl",
m2,
Column("id", Integer, primary_key=True),
Column("name", String),
type_(case),
)
diffs = self._fixture(m1, m2)
eq_(len(diffs), 0, str(diffs))
| jsoref | 74e5669297153bea01fd3685427e35306738c278 | 8542a09459daa9a75a73ab8e4c109686255e4f34 | notably, this change means that tools that look for `TODO` will find it... | jsoref | 9
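A minimal sketch of the kind of scan such a tool performs (purely illustrative; no specific tool is implied):

```python
import pathlib

# Illustrative only: report every line containing "TODO" in a source tree.
for path in pathlib.Path(".").rglob("*.py"):
    for lineno, line in enumerate(path.read_text().splitlines(), 1):
        if "TODO" in line:
            print(f"{path}:{lineno}: {line.strip()}")
```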
sqlalchemy/alembic | 994 | add GitHub URL for PyPi | <!-- Provide a general summary of your proposed changes in the Title field above -->
### Description
<!-- Describe your changes in detail -->
Warehouse now uses the `project_urls` provided to display links in the sidebar on [this screen](https://pypi.org/project/requests/), as well as including them in API responses so that automation tools can find a project's source code.
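For illustration, a minimal sketch of the same metadata expressed as a `setup()` call; the `Source` label and URL come from this PR's diff, and anything beyond that would be an assumption:

```python
from setuptools import setup

# Sketch only: the PR itself edits setup.cfg, but the equivalent setup()
# call passes project_urls as a dict of label -> URL.
setup(
    name="alembic",
    project_urls={
        # The label/URL pair this PR adds; PyPI renders it in the sidebar.
        "Source": "https://github.com/sqlalchemy/alembic",
    },
)
```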
### Checklist
<!-- go over the following points. check them with an `x` if they do apply (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2022-02-28 14:03:53+00:00 | 2022-02-28 20:08:56+00:00 | setup.cfg |
[metadata]
name = alembic
# version comes from setup.py; setuptools
# can't read the "attr:" here without importing
# until version 47.0.0 which is too recent
description = A database migration tool for SQLAlchemy.
long_description = file: README.rst
long_description_content_type = text/x-rst
url=https://alembic.sqlalchemy.org
author = Mike Bayer
author_email = mike_mp@zzzcomputing.com
license = MIT
license_file = LICENSE
classifiers =
Development Status :: 5 - Production/Stable
Intended Audience :: Developers
Environment :: Console
License :: OSI Approved :: MIT License
Operating System :: OS Independent
Programming Language :: Python
Programming Language :: Python :: 3
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
Programming Language :: Python :: Implementation :: CPython
Programming Language :: Python :: Implementation :: PyPy
Topic :: Database :: Front-Ends
[options]
packages = find:
include_package_data = true
zip_safe = false
python_requires = >=3.6
install_requires =
SQLAlchemy>=1.3.0
Mako
importlib-metadata;python_version<"3.9"
importlib-resources;python_version<"3.9"
[options.extras_require]
tz =
python-dateutil
[options.package_data]
alembic = *.pyi, py.typed
[options.packages.find]
exclude =
test*
examples*
[options.exclude_package_data]
'' = test*
[options.entry_points]
console_scripts =
alembic = alembic.config:main
[egg_info]
tag_build=dev
[upload_docs]
upload-dir = docs/build/output/html
[upload]
sign = 1
identity = C4DAFEE1
[nosetests]
with-sqla_testing = true
where = tests
[flake8]
enable-extensions = G
# E203 is due to https://github.com/PyCQA/pycodestyle/issues/373
ignore =
A003,
D,
E203,E305,E711,E712,E721,E722,E741,
N801,N802,N806,
RST304,RST303,RST299,RST399,
W503,W504
exclude = .venv,.git,.tox,dist,doc,*egg,build
import-order-style = google
application-import-names = alembic,tests
per-file-ignores =
**/__init__.py:F401
max-line-length = 79
[sqla_testing]
requirement_cls=tests.requirements:DefaultRequirements
profile_file=tests/profiles.txt
[db]
default=sqlite:///:memory:
sqlite=sqlite:///:memory:
sqlite_file=sqlite:///querytest.db
postgresql=postgresql://scott:tiger@127.0.0.1:5432/test
mysql=mysql://scott:tiger@127.0.0.1:3306/test?charset=utf8mb4
mariadb = mariadb://scott:tiger@127.0.0.1:3306/test?charset=utf8mb4
mssql = mssql+pyodbc://scott:tiger^5HHH@mssql2017:1433/test?driver=ODBC+Driver+13+for+SQL+Server
oracle=oracle://scott:tiger@127.0.0.1:1521
oracle8=oracle://scott:tiger@127.0.0.1:1521/?use_ansi=0
[alembic]
[tool:pytest]
addopts= --tb native -v -r sfxX -p no:warnings -p no:logging --maxfail=25
python_files=tests/test_*.py
[mypy]
show_error_codes = True
allow_redefinition = True
[mypy-mako.*]
ignore_missing_imports = True
[mypy-sqlalchemy.testing.*]
ignore_missing_imports = True
[mypy-importlib_resources.*]
ignore_missing_imports = True
[mypy-importlib_metadata.*]
ignore_missing_imports = True
|
[metadata]
name = alembic
# version comes from setup.py; setuptools
# can't read the "attr:" here without importing
# until version 47.0.0 which is too recent
description = A database migration tool for SQLAlchemy.
long_description = file: README.rst
long_description_content_type = text/x-rst
url=https://alembic.sqlalchemy.org
author = Mike Bayer
author_email = mike_mp@zzzcomputing.com
license = MIT
license_file = LICENSE
classifiers =
Development Status :: 5 - Production/Stable
Intended Audience :: Developers
Environment :: Console
License :: OSI Approved :: MIT License
Operating System :: OS Independent
Programming Language :: Python
Programming Language :: Python :: 3
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
Programming Language :: Python :: Implementation :: CPython
Programming Language :: Python :: Implementation :: PyPy
Topic :: Database :: Front-Ends
project_urls =
Source = https://github.com/sqlalchemy/alembic
[options]
packages = find:
include_package_data = true
zip_safe = false
python_requires = >=3.6
install_requires =
SQLAlchemy>=1.3.0
Mako
importlib-metadata;python_version<"3.9"
importlib-resources;python_version<"3.9"
[options.extras_require]
tz =
python-dateutil
[options.package_data]
alembic = *.pyi, py.typed
[options.packages.find]
exclude =
test*
examples*
[options.exclude_package_data]
'' = test*
[options.entry_points]
console_scripts =
alembic = alembic.config:main
[egg_info]
tag_build=dev
[upload_docs]
upload-dir = docs/build/output/html
[upload]
sign = 1
identity = C4DAFEE1
[nosetests]
with-sqla_testing = true
where = tests
[flake8]
enable-extensions = G
# E203 is due to https://github.com/PyCQA/pycodestyle/issues/373
ignore =
A003,
D,
E203,E305,E711,E712,E721,E722,E741,
N801,N802,N806,
RST304,RST303,RST299,RST399,
W503,W504
exclude = .venv,.git,.tox,dist,doc,*egg,build
import-order-style = google
application-import-names = alembic,tests
per-file-ignores =
**/__init__.py:F401
max-line-length = 79
[sqla_testing]
requirement_cls=tests.requirements:DefaultRequirements
profile_file=tests/profiles.txt
[db]
default=sqlite:///:memory:
sqlite=sqlite:///:memory:
sqlite_file=sqlite:///querytest.db
postgresql=postgresql://scott:tiger@127.0.0.1:5432/test
mysql=mysql://scott:tiger@127.0.0.1:3306/test?charset=utf8mb4
mariadb = mariadb://scott:tiger@127.0.0.1:3306/test?charset=utf8mb4
mssql = mssql+pyodbc://scott:tiger^5HHH@mssql2017:1433/test?driver=ODBC+Driver+13+for+SQL+Server
oracle=oracle://scott:tiger@127.0.0.1:1521
oracle8=oracle://scott:tiger@127.0.0.1:1521/?use_ansi=0
[alembic]
[tool:pytest]
addopts= --tb native -v -r sfxX -p no:warnings -p no:logging --maxfail=25
python_files=tests/test_*.py
[mypy]
show_error_codes = True
allow_redefinition = True
[mypy-mako.*]
ignore_missing_imports = True
[mypy-sqlalchemy.testing.*]
ignore_missing_imports = True
[mypy-importlib_resources.*]
ignore_missing_imports = True
[mypy-importlib_metadata.*]
ignore_missing_imports = True
| andriyor | 1846dddd993bc81c9a6a3f143819ee5ac0c84faf | 05c56c38f2ff08dca00097e7e160128727d49ba3 | thanks | CaselIT | 10 |
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve the docs regarding `version_path_separator`, based on my experience with it.
### Checklist
<!-- go over the following points. check them with an `x` if they do apply (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako |
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator"
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. Valid values are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # default: use os.pathsep
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
|
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | since we are looking to make this clearer, I don't understand this sentence, "it should be set, if multiple paths are used". this value is always "set" to something even if it's not in the config file. do you mean to say "it should be changed"? if so, what should it be changed to? maybe instead a line like, "this should be changed to reflect the path separator used for the version_locations configuration variable". otherwise this doesn't seem to make it any clearer. | zzzeek | 11
| ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | I admit, I did get somewhat confused by the note that `os.pathsep` is the default in the docs, when it is actually a space in the code. From the code I got the impression that leaving `version_path_separator` unset is a discouraged, backward-compatible case. Did I understand it incorrectly?
| ziima | 12
| ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | yes, that is correct! OK, so the confusion you had was: "if omitted, defaults to "space", which is legacy".
so how about we make this clearer, like this:
```
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses
# os.pathsep. If this key is omitted entirely, it falls back to the legacy behavior
# of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # uses os.pathsep.
``` | zzzeek | 13 |
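To make the suggested wording concrete, here is a small sketch of the splitting modes it describes; this is not alembic's actual implementation, just an illustration of the configured behavior:

```python
import os
import re

def split_version_locations(value, separator="os"):
    # Sketch of the semantics discussed above; alembic's real code differs.
    if separator == "os":
        # "os" uses os.pathsep: ":" on POSIX, ";" on Windows.
        return value.split(os.pathsep)
    if separator in (":", ";", "space"):
        return value.split(" " if separator == "space" else separator)
    # Key omitted entirely: legacy behavior, split on spaces and/or commas.
    return re.split(r"[ ,]+", value.strip())

print(split_version_locations("versions:extra_versions"))        # "os" mode on POSIX
print(split_version_locations("versions, extra_versions", None)) # legacy fallback
```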
| ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | OK, I fixed that.
Also, shouldn't the legacy version trigger a deprecation warning? I could add it in a separate MR. | ziima | 14 |
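For anyone following along, the splitting behavior those config comments describe can be sketched in a few lines of Python (illustrative only, not Alembic's actual code; the function name is made up):

```python
import os

def split_version_locations(value, separator):
    # illustrative only -- not Alembic's actual implementation
    if separator is None:
        # legacy behavior: split on spaces and/or commas
        return [p for p in value.replace(",", " ").split() if p]
    if separator == "os":
        separator = os.pathsep
    elif separator == "space":
        separator = " "
    return [p for p in value.split(separator) if p]

print(split_version_locations("%(here)s/bar:%(here)s/bat", ":"))
# ['%(here)s/bar', '%(here)s/bat']
```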
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator"
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. Valid values are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # default: use os.pathsep
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | I think it may make sense; what's your opinion, @zzzeek?
As a side note, do we have deprecation warnings in Alembic? | CaselIT | 15
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator"
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. Valid values are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # default: use os.pathsep
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | I found this warning https://github.com/sqlalchemy/alembic/blob/main/alembic/util/langhelpers.py#L125-L128, which looks like a deprecation, although it's not marked as such.
_Edit: It doesn't seem to be used anywhere, but I might have missed something._ | ziima | 16 |
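For reference, emitting a real deprecation warning with the standard library would be a one-liner; a minimal sketch (the helper below is hypothetical, not the one linked above):

```python
import warnings

def warn_deprecated(message, stacklevel=3):
    # hypothetical helper -- Alembic's actual utilities may differ
    warnings.warn(message, DeprecationWarning, stacklevel=stacklevel)

warn_deprecated(
    "splitting version_locations on spaces/commas is deprecated; "
    "set version_path_separator explicitly"
)
```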
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator"
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. Valid values are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # default: use os.pathsep
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | We don't have deprecation warning mechanics set up in Alembic, and I don't see that it's necessary to remove the "legacy" path separator style. Maybe it's cleaner just to call it "legacy" and have it be an entry in the config? If we were to totally remove the element, what would the default be? | zzzeek | 17
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator"
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. Valid values are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # default: use os.pathsep
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | From the docs, it seemed to me that `os` is to be the new default. | ziima | 18 |
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator"
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. Valid values are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # default: use os.pathsep
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | It's the default *in the config file*. People who start new Alembic projects will see this parameter is already in the config file. So there are really two levels of "default" for this kind of thing in Alembic: there's "what we have in the pre-fab templates" and then there's "what we do if the config value is missing entirely". This should also be worked into whatever language we do here.
Really, if we're going to deprecate the old version I would kind of want to raise an error if this parameter isn't in the config, because we want it to be explicit. | zzzeek | 19
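A rough sketch of that "fail loudly" idea using plain configparser (the lookup and message are illustrative, not Alembic's actual behavior):

```python
from configparser import ConfigParser

config = ConfigParser()
config.read("alembic.ini")

# hypothetical strict lookup: error out instead of silently
# falling back to the legacy space/comma splitting
separator = config.get("alembic", "version_path_separator", fallback=None)
if separator is None:
    raise SystemExit(
        "version_path_separator is required; "
        "set it to 'os', ':', ';' or 'space' in alembic.ini"
    )
```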
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | alembic/templates/generic/alembic.ini.mako | # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator"
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. Valid values are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # default: use os.pathsep
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| # A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = ${script_location}
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
| ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | OK, I'll leave it be. Otherwise, this MR should be complete. | ziima | 20
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | docs/build/branches.rst | .. _branches:
Working with Branches
=====================
A **branch** describes a point in a migration stream when two or more
versions refer to the same parent migration as their ancestor. Branches
occur naturally when two divergent source trees, both containing Alembic
revision files created independently within those source trees, are merged
together into one. When this occurs, the challenge of a branch is to **merge** the
branches into a single series of changes, so that databases established
from either source tree individually can be upgraded to reference the merged
result equally. Another scenario where branches are present are when we create them
directly; either at some point in the migration stream we'd like different
series of migrations to be managed independently (e.g. we create a tree),
or we'd like separate migration streams for different features starting
at the root (e.g. a *forest*). We'll illustrate all of these cases, starting
with the most common which is a source-merge-originated branch that we'll
merge.
Starting with the "account table" example we began in :ref:`create_migration`,
assume we have our basemost version ``1975ea83b712``, which leads into
the second revision ``ae1027a6acf``, and the migration files for these
two revisions are checked into our source repository.
Consider if we merged into our source repository another code branch which contained
a revision for another table called ``shopping_cart``. This revision was made
against our first Alembic revision, the one that generated ``account``. After
loading the second source tree in, a new file
``27c6a30d7c24_add_shopping_cart_table.py`` exists within our ``versions`` directory.
Both it, as well as ``ae1027a6acf_add_a_column.py``, reference
``1975ea83b712_add_account_table.py`` as the "downgrade" revision. To illustrate::
# main source tree:
1975ea83b712 (create account table) -> ae1027a6acf (add a column)
# branched source tree
1975ea83b712 (create account table) -> 27c6a30d7c24 (add shopping cart table)
Above, we can see ``1975ea83b712`` is our **branch point**; two distinct versions
both refer to it as their parent. The Alembic command ``branches`` illustrates
this fact::
$ alembic branches --verbose
Rev: 1975ea83b712 (branchpoint)
Parent: <base>
Branches into: 27c6a30d7c24, ae1027a6acf
Path: foo/versions/1975ea83b712_add_account_table.py
create account table
Revision ID: 1975ea83b712
Revises:
Create Date: 2014-11-20 13:02:46.257104
-> 27c6a30d7c24 (head), add shopping cart table
-> ae1027a6acf (head), add a column
History shows it too, illustrating two ``head`` entries as well
as a ``branchpoint``::
$ alembic history
1975ea83b712 -> 27c6a30d7c24 (head), add shopping cart table
1975ea83b712 -> ae1027a6acf (head), add a column
<base> -> 1975ea83b712 (branchpoint), create account table
We can get a view of just the current heads using ``alembic heads``::
$ alembic heads --verbose
Rev: 27c6a30d7c24 (head)
Parent: 1975ea83b712
Path: foo/versions/27c6a30d7c24_add_shopping_cart_table.py
add shopping cart table
Revision ID: 27c6a30d7c24
Revises: 1975ea83b712
Create Date: 2014-11-20 13:03:11.436407
Rev: ae1027a6acf (head)
Parent: 1975ea83b712
Path: foo/versions/ae1027a6acf_add_a_column.py
add a column
Revision ID: ae1027a6acf
Revises: 1975ea83b712
Create Date: 2014-11-20 13:02:54.849677
If we try to run an ``upgrade`` to the usual end target of ``head``, Alembic no
longer considers this to be an unambiguous command. As we have more than
one ``head``, the ``upgrade`` command wants us to provide more information::
$ alembic upgrade head
FAILED: Multiple head revisions are present for given argument 'head'; please specify a specific
target revision, '<branchname>@head' to narrow to a specific head, or 'heads' for all heads
The ``upgrade`` command gives us quite a few options in which we can proceed
with our upgrade, either giving it information on *which* head we'd like to upgrade
towards, or alternatively stating that we'd like *all* heads to be upgraded
towards at once. However, in the typical case of two source trees being
merged, we will want to pursue a third option, which is that we can **merge** these
branches.
Merging Branches
----------------
An Alembic merge is a migration file that joins two or
more "head" files together. If the two branches we have right now can
be said to be a "tree" structure, introducing this merge file will
turn it into a "diamond" structure::
-- ae1027a6acf -->
/ \
<base> --> 1975ea83b712 --> --> mergepoint
\ /
-- 27c6a30d7c24 -->
We create the merge file using ``alembic merge``; with this command, we can
pass to it an argument such as ``heads``, meaning we'd like to merge all
heads. Or, we can pass it individual revision numbers sequentially::
$ alembic merge -m "merge ae1 and 27c" ae1027 27c6a
Generating /path/to/foo/versions/53fffde5ad5_merge_ae1_and_27c.py ... done
Looking inside the new file, we see it as a regular migration file, with
the only new twist is that ``down_revision`` points to both revisions::
"""merge ae1 and 27c
Revision ID: 53fffde5ad5
Revises: ae1027a6acf, 27c6a30d7c24
Create Date: 2014-11-20 13:31:50.811663
"""
# revision identifiers, used by Alembic.
revision = '53fffde5ad5'
down_revision = ('ae1027a6acf', '27c6a30d7c24')
branch_labels = None
from alembic import op
import sqlalchemy as sa
def upgrade():
pass
def downgrade():
pass
This file is a regular migration file, and if we wish to, we may place
:class:`.Operations` directives into the ``upgrade()`` and ``downgrade()``
functions like any other migration file, though it is probably best to limit
the instructions placed here to those that deal with whatever
reconciliation is needed between the two merged branches, if any.
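For instance, the merge file could carry directives that tie the two branches
together; a hypothetical reconciliation (the column and index names below are
illustrative, not part of the example schema above)::

    def upgrade():
        # hypothetical: index the shopping cart against the account key
        op.create_index(
            "ix_shopping_cart_account_id", "shopping_cart", ["account_id"])

    def downgrade():
        op.drop_index(
            "ix_shopping_cart_account_id", table_name="shopping_cart")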
The ``heads`` command now illustrates that the multiple heads in our
``versions/`` directory have been resolved into our new head::
$ alembic heads --verbose
Rev: 53fffde5ad5 (head) (mergepoint)
Merges: ae1027a6acf, 27c6a30d7c24
Path: foo/versions/53fffde5ad5_merge_ae1_and_27c.py
merge ae1 and 27c
Revision ID: 53fffde5ad5
Revises: ae1027a6acf, 27c6a30d7c24
Create Date: 2014-11-20 13:31:50.811663
History shows a similar result, as the mergepoint becomes our head::
$ alembic history
ae1027a6acf, 27c6a30d7c24 -> 53fffde5ad5 (head) (mergepoint), merge ae1 and 27c
1975ea83b712 -> ae1027a6acf, add a column
1975ea83b712 -> 27c6a30d7c24, add shopping cart table
<base> -> 1975ea83b712 (branchpoint), create account table
With a single ``head`` target, a generic ``upgrade`` can proceed::
$ alembic upgrade head
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
INFO [alembic.migration] Running upgrade ae1027a6acf, 27c6a30d7c24 -> 53fffde5ad5, merge ae1 and 27c
.. topic:: merge mechanics
The upgrade process traverses through all of our migration files using
a **topological sorting** algorithm, treating the list of migration
files not as a linked list, but as a **directed acyclic graph**. The starting
points of this traversal are the **current heads** within our database,
and the end point is the "head" revision or revisions specified.
When a migration proceeds across a point at which there are multiple heads,
the ``alembic_version`` table will at that point store *multiple* rows,
one for each head. Our migration process above will emit SQL against
``alembic_version`` along these lines:
.. sourcecode:: sql
-- Running upgrade -> 1975ea83b712, create account table
INSERT INTO alembic_version (version_num) VALUES ('1975ea83b712')
-- Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
UPDATE alembic_version SET version_num='27c6a30d7c24' WHERE alembic_version.version_num = '1975ea83b712'
-- Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
INSERT INTO alembic_version (version_num) VALUES ('ae1027a6acf')
-- Running upgrade ae1027a6acf, 27c6a30d7c24 -> 53fffde5ad5, merge ae1 and 27c
DELETE FROM alembic_version WHERE alembic_version.version_num = 'ae1027a6acf'
UPDATE alembic_version SET version_num='53fffde5ad5' WHERE alembic_version.version_num = '27c6a30d7c24'
At the point at which both ``27c6a30d7c24`` and ``ae1027a6acf`` exist within our
database, both values are present in ``alembic_version``, which now has
two rows. If we upgrade to these two versions alone, then stop and
run ``alembic current``, we will see this::
$ alembic current --verbose
Current revision(s) for postgresql://scott:XXXXX@localhost/test:
Rev: ae1027a6acf
Parent: 1975ea83b712
Path: foo/versions/ae1027a6acf_add_a_column.py
add a column
Revision ID: ae1027a6acf
Revises: 1975ea83b712
Create Date: 2014-11-20 13:02:54.849677
Rev: 27c6a30d7c24
Parent: 1975ea83b712
Path: foo/versions/27c6a30d7c24_add_shopping_cart_table.py
add shopping cart table
Revision ID: 27c6a30d7c24
Revises: 1975ea83b712
Create Date: 2014-11-20 13:03:11.436407
A key advantage to the ``merge`` process is that it will
run equally well on databases that were present on version ``ae1027a6acf``
alone, versus databases that were present on version ``27c6a30d7c24`` alone;
whichever version was not yet applied, will be applied before the merge point
can be crossed. This brings forth a way of thinking about a merge file,
as well as about any Alembic revision file. As they are considered to
be "nodes" within a set that is subject to topological sorting, each
"node" is a point that cannot be crossed until all of its dependencies
are satisfied.
Prior to Alembic's support of merge points, the use case of databases
sitting on different heads was basically impossible to reconcile; having
to manually splice the head files together invariably meant that one migration
would occur before the other, thus being incompatible with databases that
were present on the other migration.
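For illustration only (a sketch, not Alembic's actual implementation), the
traversal described above can be modeled as a topological walk over a mapping
of revision identifiers to their down-revisions::

    # hypothetical revision graph: id -> tuple of parent (down) revisions
    graph = {
        "1975ea83b712": (),
        "ae1027a6acf": ("1975ea83b712",),
        "27c6a30d7c24": ("1975ea83b712",),
        "53fffde5ad5": ("ae1027a6acf", "27c6a30d7c24"),
    }

    def upgrade_order(graph):
        # emit a revision only once all of its parents have been emitted
        emitted, order = set(), []
        while len(order) < len(graph):
            for rev, parents in graph.items():
                if rev not in emitted and all(p in emitted for p in parents):
                    emitted.add(rev)
                    order.append(rev)
        return order

    print(upgrade_order(graph))
    # ['1975ea83b712', 'ae1027a6acf', '27c6a30d7c24', '53fffde5ad5']

The merge revision ``53fffde5ad5`` cannot be emitted until both of its parents
have been, which is exactly the "node that cannot be crossed until its
dependencies are satisfied" property described above.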
Working with Explicit Branches
------------------------------
The ``alembic upgrade`` command hinted at other options besides merging when
dealing with multiple heads. Let's back up and assume we're back where
we have as our heads just ``ae1027a6acf`` and ``27c6a30d7c24``::
$ alembic heads
27c6a30d7c24
ae1027a6acf
Earlier, when we did ``alembic upgrade head``, it gave us an error which
suggested ``please specify a specific target revision, '<branchname>@head' to
narrow to a specific head, or 'heads' for all heads`` in order to proceed
without merging. Let's cover those cases.
Referring to all heads at once
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``heads`` identifier is a lot like ``head``, except it explicitly refers
to *all* heads at once. That is, it's like telling Alembic to do the operation
for both ``ae1027a6acf`` and ``27c6a30d7c24`` simultaneously. If we started
from a fresh database and ran ``upgrade heads`` we'd see::
$ alembic upgrade heads
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
Since we've upgraded to ``heads``, and we do in fact have more than one head,
that means these two distinct heads are now in our ``alembic_version`` table.
We can see this if we run ``alembic current``::
$ alembic current
ae1027a6acf (head)
27c6a30d7c24 (head)
That means there are two rows in ``alembic_version`` right now. If we downgrade
one step at a time, Alembic will **delete** from the ``alembic_version`` table
each branch that's closed out, until only one branch remains; then it will
continue updating the single value down to the previous versions::
$ alembic downgrade -1
INFO [alembic.migration] Running downgrade ae1027a6acf -> 1975ea83b712, add a column
$ alembic current
27c6a30d7c24 (head)
$ alembic downgrade -1
INFO [alembic.migration] Running downgrade 27c6a30d7c24 -> 1975ea83b712, add shopping cart table
$ alembic current
1975ea83b712 (branchpoint)
$ alembic downgrade -1
INFO [alembic.migration] Running downgrade 1975ea83b712 -> , create account table
$ alembic current
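In terms of the ``alembic_version`` table, the downgrades above would emit
statements roughly like the following (illustrative; the exact SQL may differ):

.. sourcecode:: sql

    -- Running downgrade ae1027a6acf -> 1975ea83b712, add a column
    DELETE FROM alembic_version WHERE alembic_version.version_num = 'ae1027a6acf'

    -- Running downgrade 27c6a30d7c24 -> 1975ea83b712, add shopping cart table
    UPDATE alembic_version SET version_num='1975ea83b712' WHERE alembic_version.version_num = '27c6a30d7c24'

    -- Running downgrade 1975ea83b712 -> , create account table
    DELETE FROM alembic_version WHERE alembic_version.version_num = '1975ea83b712'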
Referring to a Specific Version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We can pass a specific version number to ``upgrade``. Alembic will ensure that
all revisions upon which this version depends are invoked, and nothing more.
So if we ``upgrade`` either to ``27c6a30d7c24`` or ``ae1027a6acf`` specifically,
it guarantees that ``1975ea83b712`` will have been applied, but not that
any "sibling" versions are applied::
$ alembic upgrade 27c6a
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
With ``1975ea83b712`` and ``27c6a30d7c24`` applied, ``ae1027a6acf`` is just
a single additional step::
$ alembic upgrade ae102
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
Working with Branch Labels
^^^^^^^^^^^^^^^^^^^^^^^^^^
To satisfy the use case where an environment has long-lived branches, especially
independent branches as will be discussed in the next section, Alembic supports
the concept of **branch labels**. These are string values that are present
within the migration file, using the new identifier ``branch_labels``.
For example, if we want to refer to the "shopping cart" branch using the name
"shoppingcart", we can add that name to our file
``27c6a30d7c24_add_shopping_cart_table.py``::
"""add shopping cart table
"""
# revision identifiers, used by Alembic.
revision = '27c6a30d7c24'
down_revision = '1975ea83b712'
branch_labels = ('shoppingcart',)
# ...
The ``branch_labels`` attribute refers to a string name, or a tuple
of names, which will now apply to this revision, all descendants of this
revision, as well as all ancestors of this revision up until the preceding
branch point, in this case ``1975ea83b712``. We can see the ``shoppingcart``
label applied to this revision::
$ alembic history
1975ea83b712 -> 27c6a30d7c24 (shoppingcart) (head), add shopping cart table
1975ea83b712 -> ae1027a6acf (head), add a column
<base> -> 1975ea83b712 (branchpoint), create account table
With the label applied, the name ``shoppingcart`` now serves as an alias
for the ``27c6a30d7c24`` revision specifically. We can illustrate this
by showing it with ``alembic show``::
$ alembic show shoppingcart
Rev: 27c6a30d7c24 (head)
Parent: 1975ea83b712
Branch names: shoppingcart
Path: foo/versions/27c6a30d7c24_add_shopping_cart_table.py
add shopping cart table
Revision ID: 27c6a30d7c24
Revises: 1975ea83b712
Create Date: 2014-11-20 13:03:11.436407
However, when using branch labels, we usually want to refer to them using a
syntax known as "branch at" syntax; this syntax allows us to state that we
want to use a specific revision, let's say a "head" revision, in terms of a
*specific* branch.   While we normally can't refer to ``alembic upgrade head``
when there are multiple heads, we *can* refer to this head specifically using
``shoppingcart@head`` syntax::
$ alembic upgrade shoppingcart@head
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
The ``shoppingcart@head`` syntax becomes important to us if we wish to
add new migration files to our versions directory while maintaining multiple
branches. Just like the ``upgrade`` command, if we attempted to add a new
revision file to our multiple-heads layout without a specific parent revision,
we'd get a familiar error::
$ alembic revision -m "add a shopping cart column"
FAILED: Multiple heads are present; please specify the head revision on
which the new revision should be based, or perform a merge.
The ``alembic revision`` command is pretty clear about what we need to do;
to add our new revision specifically to the ``shoppingcart`` branch,
we use the ``--head`` argument, either with the specific revision identifier
``27c6a30d7c24``, or more generically using our branch name ``shoppingcart@head``::
$ alembic revision -m "add a shopping cart column" --head shoppingcart@head
Generating /path/to/foo/versions/d747a8a8879_add_a_shopping_cart_column.py ... done
``alembic history`` shows both files now part of the ``shoppingcart`` branch::
$ alembic history
1975ea83b712 -> ae1027a6acf (head), add a column
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
<base> -> 1975ea83b712 (branchpoint), create account table
We can limit our history operation just to this branch as well::
$ alembic history -r shoppingcart:
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
If we want to illustrate the path of ``shoppingcart`` all the way from the
base, we can do that as follows::
$ alembic history -r :shoppingcart@head
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
<base> -> 1975ea83b712 (branchpoint), create account table
We can run this operation from the "base" side as well, but we get a different
result::
$ alembic history -r shoppingcart@base:
1975ea83b712 -> ae1027a6acf (head), add a column
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
<base> -> 1975ea83b712 (branchpoint), create account table
When we list from ``shoppingcart@base`` without an endpoint, it's really shorthand
for ``-r shoppingcart@base:heads``, i.e. all heads, and since ``shoppingcart@base``
is the same "base" shared by the ``ae1027a6acf`` revision, we get that
revision in our listing as well. The ``<branchname>@base`` syntax can be
useful when we are dealing with individual bases, as we'll see in the next
section.
The ``<branchname>@head`` format can also be used with revision numbers
instead of branch names, though this is less convenient. If we wanted to
add a new revision to the branch that includes the un-labeled ``ae1027a6acf``,
and this weren't a head already, we could ask for the "head of the branch
that includes ``ae1027a6acf``" as follows::
$ alembic revision -m "add another account column" --head ae10@head
Generating /path/to/foo/versions/55af2cb1c267_add_another_account_column.py ... done
More Label Syntaxes
^^^^^^^^^^^^^^^^^^^
The ``heads`` symbol can be combined with a branch label, in the case that
your labeled branch itself breaks off into multiple branches::
$ alembic upgrade shoppingcart@heads
Relative identifiers, as introduced in :ref:`relative_migrations`,
work with labels too. For example, upgrading to ``shoppingcart@+2``
means to upgrade from current heads on "shoppingcart" upwards two revisions::
$ alembic upgrade shoppingcart@+2
This kind of thing works from history as well::
$ alembic history -r current:shoppingcart@+2
The newer ``relnum+delta`` format can be combined as well, for example
if we wanted to list along ``shoppingcart`` up until two revisions
before the head::
$ alembic history -r :shoppingcart@head-2
.. _multiple_bases:
Working with Multiple Bases
---------------------------
.. note:: The multiple base feature is intended to allow for multiple Alembic
versioning lineages which **share the same alembic_version table**. This is
so that individual revisions within the lineages can have cross-dependencies
on each other. For the simpler case where one project has multiple,
**completely independent** revision lineages that refer to **separate**
alembic_version tables, see the example in :ref:`multiple_environments`.
We've seen in the previous section that ``alembic upgrade`` is fine
if we have multiple heads, that ``alembic revision`` allows us to tell it which
"head" we'd like to associate our new revision file with, and that branch labels
allow us to assign names to branches that we can use in subsequent commands.
Let's put all these together and refer to a new "base", that is, a whole
new tree of revision files that will be semi-independent of the account/shopping
cart revisions we've been working with. This new tree will deal with
database tables involving "networking".
.. _multiple_version_directories:
Setting up Multiple Version Directories
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
While optional, it is often the case that when working with multiple bases,
we'd like different sets of version files to exist within their own directories;
typically, if an application is organized into several sub-modules, each
one would have a version directory containing migrations pertinent to
that module. So to start out, we can edit ``alembic.ini`` to refer
to multiple directories; we'll also state the current ``versions``
directory as one of them::
# version location specification; this defaults
# to foo/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
version_path_separator = space
version_locations = %(here)s/model/networking %(here)s/alembic/versions
The new directory ``%(here)s/model/networking`` is given relative to where
the ``alembic.ini`` file is, as we are using the symbol ``%(here)s`` which
resolves to this location. When we create our first new revision
targeted at this directory,
``model/networking`` will be created automatically if it does not
exist yet. Once we've created a revision here, the path is used automatically
when generating subsequent revision files that refer to this revision tree.
Creating a Labeled Base Revision
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We also want our new branch to have its own name, and for that we want to
apply a branch label to the base. In order to achieve this using the
``alembic revision`` command without editing, we need to ensure our
``script.py.mako`` file, used
for generating new revision files, has the appropriate substitutions present.
If Alembic version 0.7.0 or greater was used to generate the original
migration environment, this is already done. However, when working with an older
environment, ``script.py.mako`` needs to have this directive added, typically
underneath the ``down_revision`` directive::
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
# add this here in order to use revision with branch_label
branch_labels = ${repr(branch_labels)}
With this in place, we can create a new revision file, starting up a branch
that will deal with database tables involving networking; we specify the
``--head`` version of ``base``, a ``--branch-label`` of ``networking``,
and the directory we want this first revision file to be
placed in with ``--version-path``::
$ alembic revision -m "create networking branch" --head=base --branch-label=networking --version-path=model/networking
Creating directory /path/to/foo/model/networking ... done
Generating /path/to/foo/model/networking/3cac04ae8714_create_networking_branch.py ... done
If we ran the above command and we didn't have the newer ``script.py.mako``
directive, we'd get this error::
FAILED: Version 3cac04ae8714 specified branch_labels networking, however
the migration file foo/model/networking/3cac04ae8714_create_networking_branch.py
does not have them; have you upgraded your script.py.mako to include the 'branch_labels'
section?
When we receive the above error and would like to try again, we need to
either **delete** the incorrectly generated file and run ``revision``
again, *or* edit the ``3cac04ae8714_create_networking_branch.py`` file
directly to add the ``branch_labels`` of our choosing.
Running with Multiple Bases
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Once we have a new, permanent (for as long as we desire it to be)
base in our system, we'll always have multiple heads present::
$ alembic heads
3cac04ae8714 (networking) (head)
27c6a30d7c24 (shoppingcart) (head)
ae1027a6acf (head)
When we want to add a new revision file to ``networking``, we specify
``networking@head`` as the ``--head``. The appropriate version directory
is now selected automatically based on the head we choose::
$ alembic revision -m "add ip number table" --head=networking@head
Generating /path/to/foo/model/networking/109ec7d132bf_add_ip_number_table.py ... done
It's important that we refer to the head using ``networking@head``; if we
only refer to ``networking``, that refers to only ``3cac04ae8714`` specifically;
if we specify this and it's not a head, ``alembic revision`` will make sure
we didn't mean to specify the head::
$ alembic revision -m "add DNS table" --head=networking
FAILED: Revision 3cac04ae8714 is not a head revision; please
specify --splice to create a new branch from this revision
As mentioned earlier, since this base is independent, we can view its history
from the base using ``history -r networking@base:``::
$ alembic history -r networking@base:
109ec7d132bf -> 29f859a13ea (networking) (head), add DNS table
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
<base> -> 3cac04ae8714 (networking), create networking branch
At the moment, this is the same output we'd get at this point if we used
``-r :networking@head``. However, that will change later on as we use
additional directives.
We may now run upgrades or downgrades freely among individual branches
(let's assume a clean database again)::
$ alembic upgrade networking@head
INFO [alembic.migration] Running upgrade -> 3cac04ae8714, create networking branch
INFO [alembic.migration] Running upgrade 3cac04ae8714 -> 109ec7d132bf, add ip number table
INFO [alembic.migration] Running upgrade 109ec7d132bf -> 29f859a13ea, add DNS table
or against the whole thing using ``heads``::
$ alembic upgrade heads
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
INFO [alembic.migration] Running upgrade 27c6a30d7c24 -> d747a8a8879, add a shopping cart column
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
INFO [alembic.migration] Running upgrade ae1027a6acf -> 55af2cb1c267, add another account column
Branch Dependencies
-------------------
When working with multiple roots, it is expected that these different
revision streams will need to refer to one another. For example, a new
revision in ``networking`` which needs to refer to the ``account``
table will want to establish ``55af2cb1c267, add another account column``,
the last revision that
works with the account table, as a dependency. From a graph perspective,
this means nothing more than that the new file will feature both
``55af2cb1c267, add another account column`` and ``29f859a13ea, add DNS table`` as "down" revisions,
and looks just as though we had merged these two branches together. However,
we don't want to consider these as "merged"; we want the two revision
streams to *remain independent*, even though a version in ``networking``
is going to reach over into the other stream. To support this use case,
Alembic provides a directive known as ``depends_on``, which allows
a revision file to refer to another as a "dependency", very similar to
an entry in ``down_revision`` from a graph perspective, but different
from a semantic perspective.
To use ``depends_on``, we can specify it as part of our ``alembic revision``
command::
$ alembic revision -m "add ip account table" --head=networking@head --depends-on=55af2cb1c267
Generating /path/to/foo/model/networking/2a95102259be_add_ip_account_table.py ... done
Within our migration file, we'll see this new directive present::
# revision identifiers, used by Alembic.
revision = '2a95102259be'
down_revision = '29f859a13ea'
branch_labels = None
depends_on = '55af2cb1c267'
``depends_on`` may be either a real revision number or a branch
name. When specified at the command line, a resolution from a
partial revision number will work as well.  It can also refer
to any number of dependent revisions; for example, if we were
to run the command::
$ alembic revision -m "add ip account table" \\
--head=networking@head \\
--depends-on=55af2cb1c267 --depends-on=d747a --depends-on=fa445
Generating /path/to/foo/model/networking/2a95102259be_add_ip_account_table.py ... done
We'd see inside the file::
# revision identifiers, used by Alembic.
revision = '2a95102259be'
down_revision = '29f859a13ea'
branch_labels = None
depends_on = ('55af2cb1c267', 'd747a8a8879', 'fa4456a9201')
We also can of course add or alter this value within the file manually after
it is generated, rather than using the ``--depends-on`` argument.
We can see the effect this directive has when we view the history
of the ``networking`` branch in terms of "heads", i.e., all the revisions that
are descendants::
$ alembic history -r :networking@head
29f859a13ea (55af2cb1c267) -> 2a95102259be (networking) (head), add ip account table
109ec7d132bf -> 29f859a13ea (networking), add DNS table
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
<base> -> 3cac04ae8714 (networking), create networking branch
ae1027a6acf -> 55af2cb1c267 (effective head), add another account column
1975ea83b712 -> ae1027a6acf, Add a column
<base> -> 1975ea83b712 (branchpoint), create account table
What we see is that the full history of the ``networking`` branch, in terms
of an "upgrade" to the "head", includes the tree building up to
``55af2cb1c267, add another account column``,
which will be pulled in first.  Interestingly, we don't see this displayed
when we display history in the other direction, i.e. from ``networking@base``::
$ alembic history -r networking@base:
29f859a13ea (55af2cb1c267) -> 2a95102259be (networking) (head), add ip account table
109ec7d132bf -> 29f859a13ea (networking), add DNS table
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
<base> -> 3cac04ae8714 (networking), create networking branch
The reason for the discrepancy is that displaying history from the base
shows us what would occur if we ran a downgrade operation, instead of an
upgrade. If we downgraded all the files in ``networking`` using
``networking@base``, the dependencies aren't affected; they're left in place.
We also see something odd if we view ``heads`` at the moment::
$ alembic heads
2a95102259be (networking) (head)
27c6a30d7c24 (shoppingcart) (head)
55af2cb1c267 (effective head)
The head file that we used as a "dependency", ``55af2cb1c267``, is displayed
as an "effective" head, which we can see also in the history display earlier.
What this means is that at the moment, if we were to upgrade all versions
to the top, the ``55af2cb1c267`` revision number would not actually be
present in the ``alembic_version`` table; this is because it does not have
a branch of its own subsequent to the ``2a95102259be`` revision which depends
on it::
$ alembic upgrade heads
INFO [alembic.migration] Running upgrade 29f859a13ea, 55af2cb1c267 -> 2a95102259be, add ip account table
$ alembic current
2a95102259be (head)
27c6a30d7c24 (head)
The entry is still displayed in ``alembic heads`` because Alembic knows that
even though this revision isn't a "real" head, it's still something that
we developers consider semantically to be a head, so it's displayed, noting
its special status so that we don't get quite as confused when we don't
see it within ``alembic current``.
If we add a new revision onto ``55af2cb1c267``, the branch again becomes
a "real" branch which can have its own entry in the database::
$ alembic revision -m "more account changes" --head=55af2cb@head
Generating /path/to/foo/versions/34e094ad6ef1_more_account_changes.py ... done
$ alembic upgrade heads
INFO [alembic.migration] Running upgrade 55af2cb1c267 -> 34e094ad6ef1, more account changes
$ alembic current
2a95102259be (head)
27c6a30d7c24 (head)
34e094ad6ef1 (head)
For posterity, the revision tree now looks like::
$ alembic history
29f859a13ea (55af2cb1c267) -> 2a95102259be (networking) (head), add ip account table
109ec7d132bf -> 29f859a13ea (networking), add DNS table
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
<base> -> 3cac04ae8714 (networking), create networking branch
1975ea83b712 -> 27c6a30d7c24 (shoppingcart) (head), add shopping cart table
55af2cb1c267 -> 34e094ad6ef1 (head), more account changes
ae1027a6acf -> 55af2cb1c267, add another account column
1975ea83b712 -> ae1027a6acf, Add a column
<base> -> 1975ea83b712 (branchpoint), create account table
--- 27c6 --> d747 --> <head>
/ (shoppingcart)
<base> --> 1975 -->
\
--- ae10 --> 55af --> <head>
^
+--------+ (dependency)
|
|
<base> --> 3cac -----> 109e ----> 29f8 ---> 2a95 --> <head>
(networking)
If there's any point to be made here, it's that if you branch, merge,
and label too freely, things can get pretty crazy!  Hence the branching system
should be used carefully and thoughtfully for best results.
| .. _branches:
Working with Branches
=====================
A **branch** describes a point in a migration stream when two or more
versions refer to the same parent migration as their ancestor.  Branches
occur naturally when two divergent source trees, both containing Alembic
revision files created independently within those source trees, are merged
together into one. When this occurs, the challenge of a branch is to **merge** the
branches into a single series of changes, so that databases established
from either source tree individually can be upgraded to reference the merged
result equally.   Another scenario where branches are present is when we create them
directly; either at some point in the migration stream we'd like different
series of migrations to be managed independently (e.g. we create a tree),
or we'd like separate migration streams for different features starting
at the root (e.g. a *forest*). We'll illustrate all of these cases, starting
with the most common, which is a source-merge-originated branch that we'll
merge.
Starting with the "account table" example we began in :ref:`create_migration`,
assume we have our basemost version ``1975ea83b712``, which leads into
the second revision ``ae1027a6acf``, and the migration files for these
two revisions are checked into our source repository.
Consider if we merged into our source repository another code branch which contained
a revision for another table called ``shopping_cart``. This revision was made
against our first Alembic revision, the one that generated ``account``. After
loading the second source tree in, a new file
``27c6a30d7c24_add_shopping_cart_table.py`` exists within our ``versions`` directory.
Both it, as well as ``ae1027a6acf_add_a_column.py``, reference
``1975ea83b712_add_account_table.py`` as the "downgrade" revision. To illustrate::
# main source tree:
1975ea83b712 (create account table) -> ae1027a6acf (add a column)
# branched source tree
1975ea83b712 (create account table) -> 27c6a30d7c24 (add shopping cart table)
Above, we can see ``1975ea83b712`` is our **branch point**; two distinct versions
both refer to it as their parent.  The Alembic command ``branches`` illustrates
this fact::
$ alembic branches --verbose
Rev: 1975ea83b712 (branchpoint)
Parent: <base>
Branches into: 27c6a30d7c24, ae1027a6acf
Path: foo/versions/1975ea83b712_add_account_table.py
create account table
Revision ID: 1975ea83b712
Revises:
Create Date: 2014-11-20 13:02:46.257104
-> 27c6a30d7c24 (head), add shopping cart table
-> ae1027a6acf (head), add a column
History shows it too, illustrating two ``head`` entries as well
as a ``branchpoint``::
$ alembic history
1975ea83b712 -> 27c6a30d7c24 (head), add shopping cart table
1975ea83b712 -> ae1027a6acf (head), add a column
<base> -> 1975ea83b712 (branchpoint), create account table
We can get a view of just the current heads using ``alembic heads``::
$ alembic heads --verbose
Rev: 27c6a30d7c24 (head)
Parent: 1975ea83b712
Path: foo/versions/27c6a30d7c24_add_shopping_cart_table.py
add shopping cart table
Revision ID: 27c6a30d7c24
Revises: 1975ea83b712
Create Date: 2014-11-20 13:03:11.436407
Rev: ae1027a6acf (head)
Parent: 1975ea83b712
Path: foo/versions/ae1027a6acf_add_a_column.py
add a column
Revision ID: ae1027a6acf
Revises: 1975ea83b712
Create Date: 2014-11-20 13:02:54.849677
If we try to run an ``upgrade`` to the usual end target of ``head``, Alembic no
longer considers this to be an unambiguous command. As we have more than
one ``head``, the ``upgrade`` command wants us to provide more information::
$ alembic upgrade head
FAILED: Multiple head revisions are present for given argument 'head'; please specify a specific
target revision, '<branchname>@head' to narrow to a specific head, or 'heads' for all heads
The ``upgrade`` command gives us quite a few options for how we can proceed
with our upgrade, either giving it information on *which* head we'd like to upgrade
towards, or alternatively stating that we'd like *all* heads to be upgraded
towards at once. However, in the typical case of two source trees being
merged, we will want to pursue a third option, which is that we can **merge** these
branches.
Merging Branches
----------------
An Alembic merge is a migration file that joins two or
more "head" files together. If the two branches we have right now can
be said to be a "tree" structure, introducing this merge file will
turn it into a "diamond" structure::
-- ae1027a6acf -->
/ \
<base> --> 1975ea83b712 --> --> mergepoint
\ /
-- 27c6a30d7c24 -->
We create the merge file using ``alembic merge``; with this command, we can
pass to it an argument such as ``heads``, meaning we'd like to merge all
heads.  Or, we can pass it individual revision numbers sequentially::
$ alembic merge -m "merge ae1 and 27c" ae1027 27c6a
Generating /path/to/foo/versions/53fffde5ad5_merge_ae1_and_27c.py ... done
Looking inside the new file, we see it as a regular migration file, the
only new twist being that ``down_revision`` points to both revisions::
"""merge ae1 and 27c
Revision ID: 53fffde5ad5
Revises: ae1027a6acf, 27c6a30d7c24
Create Date: 2014-11-20 13:31:50.811663
"""
# revision identifiers, used by Alembic.
revision = '53fffde5ad5'
down_revision = ('ae1027a6acf', '27c6a30d7c24')
branch_labels = None
from alembic import op
import sqlalchemy as sa
def upgrade():
pass
def downgrade():
pass
This file is a regular migration file, and if we wish to, we may place
:class:`.Operations` directives into the ``upgrade()`` and ``downgrade()``
functions like any other migration file, though it is probably best to limit
the instructions placed here to those that deal with whatever reconciliation
is needed between the two merged branches, if any.
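For illustration only, here is a minimal sketch of what such a reconciliation
might look like; the ``account.cart_id`` column is invented for this example
and is not part of the tutorial's schema, while ``op`` and ``sa`` are the
imports already present in the generated file::

    def upgrade():
        # hypothetical reconciliation step: relate the tables created by
        # the two merged branches to one another
        op.add_column(
            "account",
            sa.Column("cart_id", sa.Integer, sa.ForeignKey("shopping_cart.id")),
        )

    def downgrade():
        op.drop_column("account", "cart_id")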
The ``heads`` command now illustrates that the multiple heads in our
``versions/`` directory have been resolved into our new head::
$ alembic heads --verbose
Rev: 53fffde5ad5 (head) (mergepoint)
Merges: ae1027a6acf, 27c6a30d7c24
Path: foo/versions/53fffde5ad5_merge_ae1_and_27c.py
merge ae1 and 27c
Revision ID: 53fffde5ad5
Revises: ae1027a6acf, 27c6a30d7c24
Create Date: 2014-11-20 13:31:50.811663
History shows a similar result, as the mergepoint becomes our head::
$ alembic history
ae1027a6acf, 27c6a30d7c24 -> 53fffde5ad5 (head) (mergepoint), merge ae1 and 27c
1975ea83b712 -> ae1027a6acf, add a column
1975ea83b712 -> 27c6a30d7c24, add shopping cart table
<base> -> 1975ea83b712 (branchpoint), create account table
With a single ``head`` target, a generic ``upgrade`` can proceed::
$ alembic upgrade head
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
INFO [alembic.migration] Running upgrade ae1027a6acf, 27c6a30d7c24 -> 53fffde5ad5, merge ae1 and 27c
.. topic:: merge mechanics
The upgrade process traverses through all of our migration files using
a **topological sorting** algorithm, treating the list of migration
files not as a linked list, but as a **directed acyclic graph**. The starting
points of this traversal are the **current heads** within our database,
and the end point is the "head" revision or revisions specified.
When a migration proceeds across a point at which there are multiple heads,
the ``alembic_version`` table will at that point store *multiple* rows,
one for each head. Our migration process above will emit SQL against
``alembic_version`` along these lines:
.. sourcecode:: sql
-- Running upgrade -> 1975ea83b712, create account table
INSERT INTO alembic_version (version_num) VALUES ('1975ea83b712')
-- Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
UPDATE alembic_version SET version_num='27c6a30d7c24' WHERE alembic_version.version_num = '1975ea83b712'
-- Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
INSERT INTO alembic_version (version_num) VALUES ('ae1027a6acf')
-- Running upgrade ae1027a6acf, 27c6a30d7c24 -> 53fffde5ad5, merge ae1 and 27c
DELETE FROM alembic_version WHERE alembic_version.version_num = 'ae1027a6acf'
UPDATE alembic_version SET version_num='53fffde5ad5' WHERE alembic_version.version_num = '27c6a30d7c24'
At the point at which both ``27c6a30d7c24`` and ``ae1027a6acf`` exist within our
database, both values are present in ``alembic_version``, which now has
two rows. If we upgrade to these two versions alone, then stop and
run ``alembic current``, we will see this::
$ alembic current --verbose
Current revision(s) for postgresql://scott:XXXXX@localhost/test:
Rev: ae1027a6acf
Parent: 1975ea83b712
Path: foo/versions/ae1027a6acf_add_a_column.py
add a column
Revision ID: ae1027a6acf
Revises: 1975ea83b712
Create Date: 2014-11-20 13:02:54.849677
Rev: 27c6a30d7c24
Parent: 1975ea83b712
Path: foo/versions/27c6a30d7c24_add_shopping_cart_table.py
add shopping cart table
Revision ID: 27c6a30d7c24
Revises: 1975ea83b712
Create Date: 2014-11-20 13:03:11.436407
A key advantage of the ``merge`` process is that it will
run equally well on databases that were present on version ``ae1027a6acf``
alone, versus databases that were present on version ``27c6a30d7c24`` alone;
whichever version was not yet applied will be applied before the merge point
can be crossed. This brings forth a way of thinking about a merge file,
as well as about any Alembic revision file. As they are considered to
be "nodes" within a set that is subject to topological sorting, each
"node" is a point that cannot be crossed until all of its dependencies
are satisfied.
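To make the topological-sorting idea concrete, here is a toy sketch of
walking the revision "nodes" from this tutorial in dependency order; this is
emphatically not Alembic's actual implementation, just a model of the rule
stated above::

    # toy depth-first topological walk; illustration only
    revisions = {
        "1975ea83b712": [],                      # create account table
        "ae1027a6acf": ["1975ea83b712"],         # add a column
        "27c6a30d7c24": ["1975ea83b712"],        # add shopping cart table
        "53fffde5ad5": ["ae1027a6acf", "27c6a30d7c24"],  # merge point
    }

    def topological_order(revs):
        seen, order = set(), []

        def visit(rev):
            if rev in seen:
                return
            for parent in revs[rev]:  # a node cannot be crossed until all
                visit(parent)         # of its dependencies are satisfied
            seen.add(rev)
            order.append(rev)

        for rev in revs:
            visit(rev)
        return order

    print(topological_order(revisions))
    # ['1975ea83b712', 'ae1027a6acf', '27c6a30d7c24', '53fffde5ad5']

Any ordering that satisfies the same constraint is acceptable, which is
exactly why the merge file works for databases sitting on either head.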
Prior to Alembic's support of merge points, the use case of databases
sitting on different heads was basically impossible to reconcile; having
to manually splice the head files together invariably meant that one migration
would occur before the other, thus being incompatible with databases that
were present on the other migration.
Working with Explicit Branches
------------------------------
The ``alembic upgrade`` command hinted at other options besides merging when
dealing with multiple heads.  Let's back up and assume we're back to having
just ``ae1027a6acf`` and ``27c6a30d7c24`` as our heads::
$ alembic heads
27c6a30d7c24
ae1027a6acf
Earlier, when we did ``alembic upgrade head``, it gave us an error which
suggested ``please specify a specific target revision, '<branchname>@head' to
narrow to a specific head, or 'heads' for all heads`` in order to proceed
without merging. Let's cover those cases.
Referring to all heads at once
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``heads`` identifier is a lot like ``head``, except it explicitly refers
to *all* heads at once. That is, it's like telling Alembic to do the operation
for both ``ae1027a6acf`` and ``27c6a30d7c24`` simultaneously. If we started
from a fresh database and ran ``upgrade heads`` we'd see::
$ alembic upgrade heads
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
Since we've upgraded to ``heads``, and we do in fact have more than one head,
that means these two distinct heads are now in our ``alembic_version`` table.
We can see this if we run ``alembic current``::
$ alembic current
ae1027a6acf (head)
27c6a30d7c24 (head)
That means there are two rows in ``alembic_version`` right now. If we downgrade
one step at a time, Alembic will **delete** from the ``alembic_version`` table
each branch that's closed out, until only one branch remains; then it will
continue updating the single value down to the previous versions::
$ alembic downgrade -1
INFO [alembic.migration] Running downgrade ae1027a6acf -> 1975ea83b712, add a column
$ alembic current
27c6a30d7c24 (head)
$ alembic downgrade -1
INFO [alembic.migration] Running downgrade 27c6a30d7c24 -> 1975ea83b712, add shopping cart table
$ alembic current
1975ea83b712 (branchpoint)
$ alembic downgrade -1
INFO [alembic.migration] Running downgrade 1975ea83b712 -> , create account table
$ alembic current
Referring to a Specific Version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We can pass a specific version number to ``upgrade``. Alembic will ensure that
all revisions upon which this version depends are invoked, and nothing more.
So if we ``upgrade`` either to ``27c6a30d7c24`` or ``ae1027a6acf`` specifically,
it guarantees that ``1975ea83b712`` will have been applied, but not that
any "sibling" versions are applied::
$ alembic upgrade 27c6a
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
With ``1975ea83b712`` and ``27c6a30d7c24`` applied, ``ae1027a6acf`` is just
a single additional step::
$ alembic upgrade ae102
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
Working with Branch Labels
^^^^^^^^^^^^^^^^^^^^^^^^^^
To satisfy the use case where an environment has long-lived branches, especially
independent branches as will be discussed in the next section, Alembic supports
the concept of **branch labels**. These are string values that are present
within the migration file, using the new identifier ``branch_labels``.
For example, if we want to refer to the "shopping cart" branch using the name
"shoppingcart", we can add that name to our file
``27c6a30d7c24_add_shopping_cart_table.py``::
"""add shopping cart table
"""
# revision identifiers, used by Alembic.
revision = '27c6a30d7c24'
down_revision = '1975ea83b712'
branch_labels = ('shoppingcart',)
# ...
The ``branch_labels`` attribute refers to a string name, or a tuple
of names, which will now apply to this revision, all descendants of this
revision, as well as all ancestors of this revision up until the preceding
branch point, in this case ``1975ea83b712``. We can see the ``shoppingcart``
label applied to this revision::
$ alembic history
1975ea83b712 -> 27c6a30d7c24 (shoppingcart) (head), add shopping cart table
1975ea83b712 -> ae1027a6acf (head), add a column
<base> -> 1975ea83b712 (branchpoint), create account table
With the label applied, the name ``shoppingcart`` now serves as an alias
for the ``27c6a30d7c24`` revision specifically. We can illustrate this
by showing it with ``alembic show``::
$ alembic show shoppingcart
Rev: 27c6a30d7c24 (head)
Parent: 1975ea83b712
Branch names: shoppingcart
Path: foo/versions/27c6a30d7c24_add_shopping_cart_table.py
add shopping cart table
Revision ID: 27c6a30d7c24
Revises: 1975ea83b712
Create Date: 2014-11-20 13:03:11.436407
However, when using branch labels, we usually want to refer to them using a
syntax known as "branch at" syntax; this syntax allows us to state that we
want to use a specific revision, let's say a "head" revision, in terms of a
*specific* branch.   While we normally can't refer to ``alembic upgrade head``
when there are multiple heads, we *can* refer to this head specifically using
``shoppingcart@head`` syntax::
$ alembic upgrade shoppingcart@head
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
The ``shoppingcart@head`` syntax becomes important to us if we wish to
add new migration files to our versions directory while maintaining multiple
branches. Just like the ``upgrade`` command, if we attempted to add a new
revision file to our multiple-heads layout without a specific parent revision,
we'd get a familiar error::
$ alembic revision -m "add a shopping cart column"
FAILED: Multiple heads are present; please specify the head revision on
which the new revision should be based, or perform a merge.
The ``alembic revision`` command is pretty clear about what we need to do;
to add our new revision specifically to the ``shoppingcart`` branch,
we use the ``--head`` argument, either with the specific revision identifier
``27c6a30d7c24``, or more generically using our branch name ``shoppingcart@head``::
$ alembic revision -m "add a shopping cart column" --head shoppingcart@head
Generating /path/to/foo/versions/d747a8a8879_add_a_shopping_cart_column.py ... done
``alembic history`` shows both files now part of the ``shoppingcart`` branch::
$ alembic history
1975ea83b712 -> ae1027a6acf (head), add a column
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
<base> -> 1975ea83b712 (branchpoint), create account table
We can limit our history operation just to this branch as well::
$ alembic history -r shoppingcart:
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
If we want to illustrate the path of ``shoppingcart`` all the way from the
base, we can do that as follows::
$ alembic history -r :shoppingcart@head
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
<base> -> 1975ea83b712 (branchpoint), create account table
We can run this operation from the "base" side as well, but we get a different
result::
$ alembic history -r shoppingcart@base:
1975ea83b712 -> ae1027a6acf (head), add a column
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
<base> -> 1975ea83b712 (branchpoint), create account table
When we list from ``shoppingcart@base`` without an endpoint, it's really shorthand
for ``-r shoppingcart@base:heads``, i.e. all heads, and since ``shoppingcart@base``
is the same "base" shared by the ``ae1027a6acf`` revision, we get that
revision in our listing as well. The ``<branchname>@base`` syntax can be
useful when we are dealing with individual bases, as we'll see in the next
section.
The ``<branchname>@head`` format can also be used with revision numbers
instead of branch names, though this is less convenient. If we wanted to
add a new revision to the branch that includes the un-labeled ``ae1027a6acf``,
and this weren't a head already, we could ask for the "head of the branch
that includes ``ae1027a6acf``" as follows::
$ alembic revision -m "add another account column" --head ae10@head
Generating /path/to/foo/versions/55af2cb1c267_add_another_account_column.py ... done
More Label Syntaxes
^^^^^^^^^^^^^^^^^^^
The ``heads`` symbol can be combined with a branch label, in the case that
your labeled branch itself breaks off into multiple branches::
$ alembic upgrade shoppingcart@heads
Relative identifiers, as introduced in :ref:`relative_migrations`,
work with labels too. For example, upgrading to ``shoppingcart@+2``
means to upgrade from current heads on "shoppingcart" upwards two revisions::
$ alembic upgrade shoppingcart@+2
This kind of thing works from history as well::
$ alembic history -r current:shoppingcart@+2
The newer ``relnum+delta`` format can be combined as well, for example
if we wanted to list along ``shoppingcart`` up until two revisions
before the head::
$ alembic history -r :shoppingcart@head-2
.. _multiple_bases:
Working with Multiple Bases
---------------------------
.. note:: The multiple base feature is intended to allow for multiple Alembic
versioning lineages which **share the same alembic_version table**. This is
so that individual revisions within the lineages can have cross-dependencies
on each other. For the simpler case where one project has multiple,
**completely independent** revision lineages that refer to **separate**
alembic_version tables, see the example in :ref:`multiple_environments`.
We've seen in the previous section that ``alembic upgrade`` is fine
if we have multiple heads, that ``alembic revision`` allows us to tell it which
"head" we'd like to associate our new revision file with, and that branch labels
allow us to assign names to branches that we can use in subsequent commands.
Let's put all these together and refer to a new "base", that is, a whole
new tree of revision files that will be semi-independent of the account/shopping
cart revisions we've been working with. This new tree will deal with
database tables involving "networking".
.. _multiple_version_directories:
Setting up Multiple Version Directories
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
While optional, it is often the case that when working with multiple bases,
we'd like different sets of version files to exist within their own directories;
typically, if an application is organized into several sub-modules, each
one would have a version directory containing migrations pertinent to
that module. So to start out, we can edit ``alembic.ini`` to refer
to multiple directories; we'll also state the current ``versions``
directory as one of them::
# A separator for the location paths must be defined first.
version_path_separator = os # Use os.pathsep.
# version location specification; this defaults
# to foo/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
version_locations = %(here)s/model/networking:%(here)s/alembic/versions
The new directory ``%(here)s/model/networking`` is given relative to where
the ``alembic.ini`` file is, as we are using the symbol ``%(here)s`` which
resolves to this location. When we create our first new revision
targeted at this directory,
``model/networking`` will be created automatically if it does not
exist yet. Once we've created a revision here, the path is used automatically
when generating subsequent revision files that refer to this revision tree.
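As a hedged aside, since the authoritative list of accepted separator values
lives in the generated ``alembic.ini`` template rather than here: the ``os``
value resolves to Python's ``os.pathsep``, i.e. ``:`` on POSIX and ``;`` on
Windows, and the separator may also be spelled out explicitly::

    # an equivalent explicit spelling, assuming a POSIX platform
    version_path_separator = :
    version_locations = %(here)s/model/networking:%(here)s/alembic/versions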
Creating a Labeled Base Revision
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We also want our new branch to have its own name, and for that we want to
apply a branch label to the base. In order to achieve this using the
``alembic revision`` command without editing, we need to ensure our
``script.py.mako`` file, used
for generating new revision files, has the appropriate substitutions present.
If Alembic version 0.7.0 or greater was used to generate the original
migration environment, this is already done. However, when working with an older
environment, ``script.py.mako`` needs to have this directive added, typically
underneath the ``down_revision`` directive::
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
# add this here in order to use revision with branch_label
branch_labels = ${repr(branch_labels)}
With this in place, we can create a new revision file, starting up a branch
that will deal with database tables involving networking; we specify the
``--head`` version of ``base``, a ``--branch-label`` of ``networking``,
and the directory we want this first revision file to be
placed in with ``--version-path``::
$ alembic revision -m "create networking branch" --head=base --branch-label=networking --version-path=model/networking
Creating directory /path/to/foo/model/networking ... done
Generating /path/to/foo/model/networking/3cac04ae8714_create_networking_branch.py ... done
If we ran the above command and we didn't have the newer ``script.py.mako``
directive, we'd get this error::
FAILED: Version 3cac04ae8714 specified branch_labels networking, however
the migration file foo/model/networking/3cac04ae8714_create_networking_branch.py
does not have them; have you upgraded your script.py.mako to include the 'branch_labels'
section?
When we receive the above error and would like to try again, we need to
either **delete** the incorrectly generated file and run ``revision``
again, *or* edit the ``3cac04ae8714_create_networking_branch.py`` file
directly to add the ``branch_labels`` of our choosing.
Running with Multiple Bases
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Once we have a new, permanent (for as long as we desire it to be)
base in our system, we'll always have multiple heads present::
$ alembic heads
3cac04ae8714 (networking) (head)
27c6a30d7c24 (shoppingcart) (head)
ae1027a6acf (head)
When we want to add a new revision file to ``networking``, we specify
``networking@head`` as the ``--head``. The appropriate version directory
is now selected automatically based on the head we choose::
$ alembic revision -m "add ip number table" --head=networking@head
Generating /path/to/foo/model/networking/109ec7d132bf_add_ip_number_table.py ... done
It's important that we refer to the head using ``networking@head``; if we
only refer to ``networking``, that refers to only ``3cac04ae8714`` specifically;
if we specify this and it's not a head, ``alembic revision`` will make sure
we didn't mean to specify the head::
$ alembic revision -m "add DNS table" --head=networking
FAILED: Revision 3cac04ae8714 is not a head revision; please
specify --splice to create a new branch from this revision
As mentioned earlier, since this base is independent, we can view its history
from the base using ``history -r networking@base:``::
$ alembic history -r networking@base:
109ec7d132bf -> 29f859a13ea (networking) (head), add DNS table
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
<base> -> 3cac04ae8714 (networking), create networking branch
At the moment, this is the same output we'd get at this point if we used
``-r :networking@head``. However, that will change later on as we use
additional directives.
We may now run upgrades or downgrades freely among individual branches
(let's assume a clean database again)::
$ alembic upgrade networking@head
INFO [alembic.migration] Running upgrade -> 3cac04ae8714, create networking branch
INFO [alembic.migration] Running upgrade 3cac04ae8714 -> 109ec7d132bf, add ip number table
INFO [alembic.migration] Running upgrade 109ec7d132bf -> 29f859a13ea, add DNS table
or against the whole thing using ``heads``::
$ alembic upgrade heads
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
INFO [alembic.migration] Running upgrade 27c6a30d7c24 -> d747a8a8879, add a shopping cart column
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
INFO [alembic.migration] Running upgrade ae1027a6acf -> 55af2cb1c267, add another account column
Branch Dependencies
-------------------
When working with multiple roots, it is expected that these different
revision streams will need to refer to one another. For example, a new
revision in ``networking`` which needs to refer to the ``account``
table will want to establish ``55af2cb1c267, add another account column``,
the last revision that
works with the account table, as a dependency. From a graph perspective,
this means nothing more than that the new file will feature both
``55af2cb1c267, add another account column`` and ``29f859a13ea, add DNS table`` as "down" revisions,
and looks just as though we had merged these two branches together. However,
we don't want to consider these as "merged"; we want the two revision
streams to *remain independent*, even though a version in ``networking``
is going to reach over into the other stream. To support this use case,
Alembic provides a directive known as ``depends_on``, which allows
a revision file to refer to another as a "dependency", very similar to
an entry in ``down_revision`` from a graph perspective, but different
from a semantic perspective.
To use ``depends_on``, we can specify it as part of our ``alembic revision``
command::
$ alembic revision -m "add ip account table" --head=networking@head --depends-on=55af2cb1c267
Generating /path/to/foo/model/networking/2a95102259be_add_ip_account_table.py ... done
Within our migration file, we'll see this new directive present::
# revision identifiers, used by Alembic.
revision = '2a95102259be'
down_revision = '29f859a13ea'
branch_labels = None
depends_on = '55af2cb1c267'
``depends_on`` may be either a real revision number or a branch
name. When specified at the command line, a resolution from a
partial revision number will work as well.  It can also refer
to any number of dependent revisions; for example, if we were
to run the command::
$ alembic revision -m "add ip account table" \\
--head=networking@head \\
--depends-on=55af2cb1c267 --depends-on=d747a --depends-on=fa445
Generating /path/to/foo/model/networking/2a95102259be_add_ip_account_table.py ... done
We'd see inside the file::
# revision identifiers, used by Alembic.
revision = '2a95102259be'
down_revision = '29f859a13ea'
branch_labels = None
depends_on = ('55af2cb1c267', 'd747a8a8879', 'fa4456a9201')
We also can of course add or alter this value within the file manually after
it is generated, rather than using the ``--depends-on`` argument.
We can see the effect this directive has when we view the history
of the ``networking`` branch in terms of "heads", i.e., all the revisions that
are descendants::
$ alembic history -r :networking@head
29f859a13ea (55af2cb1c267) -> 2a95102259be (networking) (head), add ip account table
109ec7d132bf -> 29f859a13ea (networking), add DNS table
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
<base> -> 3cac04ae8714 (networking), create networking branch
ae1027a6acf -> 55af2cb1c267 (effective head), add another account column
1975ea83b712 -> ae1027a6acf, Add a column
<base> -> 1975ea83b712 (branchpoint), create account table
What we see is that the full history of the ``networking`` branch, in terms
of an "upgrade" to the "head", includes the tree building up to
``55af2cb1c267, add another account column``,
which will be pulled in first.  Interestingly, we don't see this displayed
when we display history in the other direction, i.e. from ``networking@base``::
$ alembic history -r networking@base:
29f859a13ea (55af2cb1c267) -> 2a95102259be (networking) (head), add ip account table
109ec7d132bf -> 29f859a13ea (networking), add DNS table
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
<base> -> 3cac04ae8714 (networking), create networking branch
The reason for the discrepancy is that displaying history from the base
shows us what would occur if we ran a downgrade operation, instead of an
upgrade. If we downgraded all the files in ``networking`` using
``networking@base``, the dependencies aren't affected; they're left in place.
We also see something odd if we view ``heads`` at the moment::
$ alembic heads
2a95102259be (networking) (head)
27c6a30d7c24 (shoppingcart) (head)
55af2cb1c267 (effective head)
The head file that we used as a "dependency", ``55af2cb1c267``, is displayed
as an "effective" head, which we can see also in the history display earlier.
What this means is that at the moment, if we were to upgrade all versions
to the top, the ``55af2cb1c267`` revision number would not actually be
present in the ``alembic_version`` table; this is because it does not have
a branch of its own subsequent to the ``2a95102259be`` revision which depends
on it::
$ alembic upgrade heads
INFO [alembic.migration] Running upgrade 29f859a13ea, 55af2cb1c267 -> 2a95102259be, add ip account table
$ alembic current
2a95102259be (head)
27c6a30d7c24 (head)
The entry is still displayed in ``alembic heads`` because Alembic knows that
even though this revision isn't a "real" head, it's still something that
we developers consider semantically to be a head, so it's displayed, noting
its special status so that we don't get quite as confused when we don't
see it within ``alembic current``.
If we add a new revision onto ``55af2cb1c267``, the branch again becomes
a "real" branch which can have its own entry in the database::
$ alembic revision -m "more account changes" --head=55af2cb@head
Generating /path/to/foo/versions/34e094ad6ef1_more_account_changes.py ... done
$ alembic upgrade heads
INFO [alembic.migration] Running upgrade 55af2cb1c267 -> 34e094ad6ef1, more account changes
$ alembic current
2a95102259be (head)
27c6a30d7c24 (head)
34e094ad6ef1 (head)
For posterity, the revision tree now looks like::
$ alembic history
29f859a13ea (55af2cb1c267) -> 2a95102259be (networking) (head), add ip account table
109ec7d132bf -> 29f859a13ea (networking), add DNS table
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
<base> -> 3cac04ae8714 (networking), create networking branch
1975ea83b712 -> 27c6a30d7c24 (shoppingcart) (head), add shopping cart table
55af2cb1c267 -> 34e094ad6ef1 (head), more account changes
ae1027a6acf -> 55af2cb1c267, add another account column
1975ea83b712 -> ae1027a6acf, Add a column
<base> -> 1975ea83b712 (branchpoint), create account table
--- 27c6 --> d747 --> <head>
/ (shoppingcart)
<base> --> 1975 -->
\
--- ae10 --> 55af --> <head>
^
+--------+ (dependency)
|
|
<base> --> 3cac -----> 109e ----> 29f8 ---> 2a95 --> <head>
(networking)
If there's any point to be made here, it's that if you branch, merge,
and label too freely, things can get pretty crazy!  Hence the branching system
should be used carefully and thoughtfully for best results.
| ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | nice | zzzeek | 21 |
sqlalchemy/alembic | 969 | Tweak docs about version_path_separator | I tried to improve docs regarding `version_path_separator` based on my experience with it.
### Checklist
<!-- go over following points. check them with an `x` if they do apply, (they turn into clickable checkboxes once the PR is submitted, so no need to do everything at once)
-->
This pull request is:
- [x] A documentation / typographical error fix
- Good to go, no issue or tests are needed
- [ ] A short code fix
- please include the issue number, and create an issue if none exists, which
must include a complete example of the issue. one line code fixes without an
issue and demonstration will not be accepted.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests. one line code fixes without tests will not be accepted.
- [ ] A new feature implementation
- please include the issue number, and create an issue if none exists, which must
include a complete example of how the feature would look.
- Please include: `Fixes: #<issue number>` in the commit message
- please include tests.
**Have a nice day!**
| null | 2021-11-18 09:30:51+00:00 | 2021-11-23 16:23:10+00:00 | docs/build/tutorial.rst | ========
Tutorial
========
Alembic provides for the creation, management, and invocation of *change management*
scripts for a relational database, using SQLAlchemy as the underlying engine.
This tutorial will provide a full introduction to the theory and usage of this tool.
To begin, make sure Alembic is installed as described at :ref:`installation`.
As stated in the linked document, it is usually preferable that Alembic is
installed in the **same module / Python path as that of the target project**,
usually using a `Python virtual environment
<https://docs.python.org/3/tutorial/venv.html>`_, so that when the ``alembic``
command is run, the Python script which is invoked by ``alembic``, namely your
project's ``env.py`` script, will have access to your application's models.
This is not strictly necessary in all cases, however in the vast majority of
cases is usually preferred.
The tutorial below assumes the ``alembic`` command line utility is present in
the local path and when invoked, will have access to the same Python module
environment as that of the target project.
The Migration Environment
==========================
Usage of Alembic starts with creation of the *Migration Environment*. This is a directory of scripts
that is specific to a particular application. The migration environment is created just once,
and is then maintained along with the application's source code itself. The environment is
created using the ``init`` command of Alembic, and is then customizable to suit the specific
needs of the application.
The structure of this environment, including some generated migration scripts, looks like::
yourproject/
alembic/
env.py
README
script.py.mako
versions/
3512b954651e_add_account.py
2b1ae634e5cd_add_order_id.py
3adcc9a56557_rename_username_field.py
The directory includes these directories/files:
* ``yourproject`` - this is the root of your application's source code, or some directory within it.
* ``alembic`` - this directory lives within your application's source tree and is the home of the
migration environment. It can be named anything, and a project that uses multiple databases
may even have more than one.
* ``env.py`` - This is a Python script that is run whenever the alembic migration tool is invoked.
At the very least, it contains instructions to configure and generate a SQLAlchemy engine,
procure a connection from that engine along with a transaction, and then invoke the migration
engine, using the connection as a source of database connectivity.
The ``env.py`` script is part of the generated environment so that the way migrations run
is entirely customizable. The exact specifics of how to connect are here, as well as
the specifics of how the migration environment are invoked. The script can be modified
so that multiple engines can be operated upon, custom arguments can be passed into the
migration environment, application-specific libraries and models can be loaded in and
made available.
Alembic includes a set of initialization templates which feature different varieties
of ``env.py`` for different use cases.
* ``README`` - included with the various environment templates, should have something
informative.
* ``script.py.mako`` - This is a `Mako <http://www.makotemplates.org>`_ template file which
is used to generate new migration scripts. Whatever is here is used to generate new
files within ``versions/``. This is scriptable so that the structure of each migration
file can be controlled, including standard imports to be within each, as well as
changes to the structure of the ``upgrade()`` and ``downgrade()`` functions. For example,
the ``multidb`` environment allows for multiple functions to be generated using a
naming scheme ``upgrade_engine1()``, ``upgrade_engine2()``.
* ``versions/`` - This directory holds the individual version scripts. Users of other migration
tools may notice that the files here don't use ascending integers, and instead use a
partial GUID approach. In Alembic, the ordering of version scripts is relative
to directives within the scripts themselves, and it is theoretically possible to "splice" version files
in between others, allowing migration sequences from different branches to be merged,
albeit carefully by hand.
Creating an Environment
=======================
With a basic understanding of what the environment is, we can create one using ``alembic init``.
This will create an environment using the "generic" template::
$ cd /path/to/yourproject
$ source /path/to/yourproject/.venv/bin/activate # assuming a local virtualenv
$ alembic init alembic
Where above, the ``init`` command was called to generate a migrations directory called ``alembic``::
Creating directory /path/to/yourproject/alembic...done
Creating directory /path/to/yourproject/alembic/versions...done
Generating /path/to/yourproject/alembic.ini...done
Generating /path/to/yourproject/alembic/env.py...done
Generating /path/to/yourproject/alembic/README...done
Generating /path/to/yourproject/alembic/script.py.mako...done
Please edit configuration/connection/logging settings in
'/path/to/yourproject/alembic.ini' before proceeding.
Alembic also includes other environment templates. These can be listed out using the ``list_templates``
command::
$ alembic list_templates
Available templates:
generic - Generic single-database configuration.
async - Generic single-database configuration with an async dbapi.
multidb - Rudimentary multi-database configuration.
pylons - Configuration that reads from a Pylons project environment.
Templates are used via the 'init' command, e.g.:
alembic init --template pylons ./scripts
Editing the .ini File
=====================
Alembic placed a file ``alembic.ini`` into the current directory. This is a file that the ``alembic``
script looks for when invoked. This file can exist in a different directory, with the location to it
specified by either the ``--config`` option for the ``alembic`` runner or the ``ALEMBIC_CONFIG``
environment variable (the former takes precedence).
The file generated with the "generic" configuration looks like::
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = alembic
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
# (new in 1.5.5)
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator"
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. Valid values are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # default: use os.pathsep
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
# [post_write_hooks]
# This section defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner,
# against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
The file is read using Python's :class:`ConfigParser.SafeConfigParser` object. The
``%(here)s`` variable is provided as a substitution variable, which
can be used to produce absolute pathnames to directories and files, as we do above
with the path to the Alembic script location.
This file contains the following features:
* ``[alembic]`` - this is the section read by Alembic to determine configuration. Alembic
itself does not directly read any other areas of the file. The name "alembic" can
be customized using the ``--name`` commandline flag; see :ref:`multiple_environments`
for a basic example of this.
* ``script_location`` - this is the location of the Alembic environment. It is normally
specified as a filesystem location, either relative or absolute. If the location is
a relative path, it's interpreted as relative to the current directory.
This is the only key required by Alembic in all cases. The generation
of the .ini file by the command ``alembic init alembic`` automatically placed the
directory name ``alembic`` here. The special variable ``%(here)s`` can also be used,
as in ``%(here)s/alembic``.
For support of applications that package themselves into .egg files, the value can
also be specified as a `package resource
<https://setuptools.readthedocs.io/en/latest/pkg_resources.html>`_, in which
case ``resource_filename()`` is used to find the file (new in 0.2.2). Any non-absolute
URI which contains colons is interpreted here as a resource name, rather than
a straight filename.
* ``file_template`` - this is the naming scheme used to generate new migration files.
  The value present is the default, so is commented out; a dated variant is sketched just after this list. Tokens available include:
* ``%%(rev)s`` - revision id
* ``%%(slug)s`` - a truncated string derived from the revision message
* ``%%(year)d``, ``%%(month).2d``, ``%%(day).2d``, ``%%(hour).2d``,
``%%(minute).2d``, ``%%(second).2d`` - components of the create date,
by default ``datetime.datetime.now()`` unless the ``timezone``
configuration option is also used.
* ``timezone`` - an optional timezone name (e.g. ``UTC``, ``EST5EDT``, etc.)
that will be applied to the timestamp which renders inside the migration
file's comment as well as within the filename. This option requires installing
the ``python-dateutil`` library. If ``timezone`` is specified,
the create date object is no longer derived from ``datetime.datetime.now()``
and is instead generated as::
datetime.datetime.utcnow().replace(
tzinfo=dateutil.tz.tzutc()
).astimezone(
dateutil.tz.gettz(<timezone>)
)
* ``truncate_slug_length`` - defaults to 40, the max number of characters
to include in the "slug" field.
* ``sqlalchemy.url`` - A URL to connect to the database via SQLAlchemy. This
  configuration value is only used if the ``env.py`` file calls upon it;
in the "generic" template, the call to
``config.get_main_option("sqlalchemy.url")`` in the
``run_migrations_offline()`` function and the call to
``engine_from_config(prefix="sqlalchemy.")`` in the
``run_migrations_online()`` function are where this key is referenced. If
the SQLAlchemy URL should come from some other source, such as from
environment variables or a global registry, or if the migration environment
makes use of multiple database URLs, the developer is encouraged to alter the
``env.py`` file to use whatever methods are appropriate in order to acquire
the database URL or URLs.
* ``revision_environment`` - this is a flag which when set to the value 'true', will indicate
that the migration environment script ``env.py`` should be run unconditionally when
generating new revision files, as well as when running the ``alembic history``
command.
* ``sourceless`` - when set to 'true', revision files that only exist as .pyc
or .pyo files in the versions directory will be used as versions, allowing
"sourceless" versioning folders. When left at the default of 'false',
only .py files are consumed as version files.
* ``version_locations`` - an optional list of revision file locations, to
allow revisions to exist in multiple directories simultaneously.
See :ref:`multiple_bases` for examples.
* ``output_encoding`` - the encoding to use when Alembic writes the
``script.py.mako`` file into a new migration file. Defaults to ``'utf-8'``.
* ``[loggers]``, ``[handlers]``, ``[formatters]``, ``[logger_*]``, ``[handler_*]``,
``[formatter_*]`` - these sections are all part of Python's standard logging configuration,
the mechanics of which are documented at `Configuration File Format <http://docs.python.org/library/logging.config.html#configuration-file-format>`_.
As is the case with the database connection, these directives are used directly as the
result of the ``logging.config.fileConfig()`` call present in the
``env.py`` script, which you're free to modify.
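As referenced in the ``file_template`` entry above, a date-stamped naming
scheme is one common variant; the line below is only a sketch (any
combination of the documented tokens works)::

    file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(rev)s_%%(slug)s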
For starting up with just a single database and the generic configuration, setting up
the SQLAlchemy URL is all that's needed::
sqlalchemy.url = postgresql://scott:tiger@localhost/test
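If the URL should instead come from the process environment, as the
``sqlalchemy.url`` notes above suggest, one minimal sketch (assuming a
hypothetical ``DATABASE_URL`` environment variable) is to override the
option near the top of ``env.py``::

    import os

    from alembic import context

    config = context.config

    # Replace the (possibly blank) alembic.ini value with the URL taken
    # from the environment.
    config.set_main_option("sqlalchemy.url", os.environ["DATABASE_URL"])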
.. _create_migration:
Create a Migration Script
=========================
With the environment in place we can create a new revision, using ``alembic revision``::
$ alembic revision -m "create account table"
Generating /path/to/yourproject/alembic/versions/1975ea83b712_create_accoun
t_table.py...done
A new file ``1975ea83b712_create_account_table.py`` is generated. Looking inside the file::
"""create account table
Revision ID: 1975ea83b712
Revises:
Create Date: 2011-11-08 11:40:27.089406
"""
# revision identifiers, used by Alembic.
revision = '1975ea83b712'
down_revision = None
branch_labels = None
from alembic import op
import sqlalchemy as sa
def upgrade():
pass
def downgrade():
pass
The file contains some header information, identifiers for the current revision
and a "downgrade" revision, an import of basic Alembic directives,
and empty ``upgrade()`` and ``downgrade()`` functions. Our
job here is to populate the ``upgrade()`` and ``downgrade()`` functions with directives that
will apply a set of changes to our database. Typically, ``upgrade()`` is required
while ``downgrade()`` is only needed if down-revision capability is desired, though it's
probably a good idea.
Another thing to notice is the ``down_revision`` variable. This is how Alembic
knows the correct order in which to apply migrations. When we create the next revision,
the new file's ``down_revision`` identifier would point to this one::
# revision identifiers, used by Alembic.
revision = 'ae1027a6acf'
down_revision = '1975ea83b712'
Every time Alembic runs an operation against the ``versions/`` directory, it reads all
the files in, and composes a list based on how the ``down_revision`` identifiers link together,
with the ``down_revision`` of ``None`` representing the first file. In theory, if a
migration environment had thousands of migrations, this could begin to add some latency to
startup, but in practice a project should probably prune old migrations anyway
(see the section :ref:`building_uptodate` for a description on how to do this, while maintaining
the ability to build the current database fully).
We can then add some directives to our script, suppose adding a new table ``account``::
def upgrade():
op.create_table(
'account',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('name', sa.String(50), nullable=False),
sa.Column('description', sa.Unicode(200)),
)
def downgrade():
op.drop_table('account')
:meth:`~.Operations.create_table` and :meth:`~.Operations.drop_table` are Alembic directives. Alembic provides
all the basic database migration operations via these directives, which are designed to be as simple and
minimalistic as possible;
there's no reliance upon existing table metadata for most of these directives. They draw upon
a global "context" that indicates how to get at a database connection (if any; migrations can
dump SQL/DDL directives to files as well) in order to invoke the command. This global
context is set up, like everything else, in the ``env.py`` script.
An overview of all Alembic directives is at :ref:`ops`.
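Other directives follow the same shape; for instance, a couple of sketches
that are not part of this tutorial's migration::

    op.add_column('account', sa.Column('timestamp', sa.DateTime))
    op.create_index('ix_account_name', 'account', ['name'])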
Running our First Migration
===========================
We now want to run our migration. Assuming our database is totally clean, it's as
yet unversioned. The ``alembic upgrade`` command will run upgrade operations, proceeding
from the current database revision, in this example ``None``, to the given target revision.
We can specify ``1975ea83b712`` as the revision we'd like to upgrade to, but it's easier
in most cases just to tell it "the most recent", in this case ``head``::
$ alembic upgrade head
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
INFO [alembic.context] Running upgrade None -> 1975ea83b712
Wow that rocked! Note that the information we see on the screen is the result of the
logging configuration set up in ``alembic.ini`` - logging the ``alembic`` stream to the
console (standard error, specifically).
The process which occurred here included that Alembic first checked if the database had
a table called ``alembic_version``, and if not, created it. It looks in this table
for the current version, if any, and then calculates the path from this version to
the version requested, in this case ``head``, which is known to be ``1975ea83b712``.
It then invokes the ``upgrade()`` method in each file to get to the target revision.
Running our Second Migration
=============================
Let's do another one so we have some things to play with. We again create a revision
file::
$ alembic revision -m "Add a column"
Generating /path/to/yourapp/alembic/versions/ae1027a6acf_add_a_column.py...
done
Let's edit this file and add a new column to the ``account`` table::
"""Add a column
Revision ID: ae1027a6acf
Revises: 1975ea83b712
Create Date: 2011-11-08 12:37:36.714947
"""
# revision identifiers, used by Alembic.
revision = 'ae1027a6acf'
down_revision = '1975ea83b712'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.add_column('account', sa.Column('last_transaction_date', sa.DateTime))
def downgrade():
op.drop_column('account', 'last_transaction_date')
Running again to ``head``::
$ alembic upgrade head
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
INFO [alembic.context] Running upgrade 1975ea83b712 -> ae1027a6acf
We've now added the ``last_transaction_date`` column to the database.
Partial Revision Identifiers
=============================
Any time we need to refer to a revision number explicitly, we have the option
to use a partial number. As long as this number uniquely identifies the
version, it may be used in any command in any place that version numbers
are accepted::
$ alembic upgrade ae1
Above, we use ``ae1`` to refer to revision ``ae1027a6acf``.
Alembic will stop and let you know if more than one version starts with
that prefix.
.. _relative_migrations:
Relative Migration Identifiers
==============================
Relative upgrades/downgrades are also supported. To move two versions from
the current, a decimal value "+N" can be supplied::
$ alembic upgrade +2
Negative values are accepted for downgrades::
$ alembic downgrade -1
Relative identifiers may also be in terms of a specific revision. For example,
to upgrade to revision ``ae1027a6acf`` plus two additional steps::
$ alembic upgrade ae10+2
Getting Information
===================
With a few revisions present we can get some information about the state of things.
First we can view the current revision::
$ alembic current
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
Current revision for postgresql://scott:XXXXX@localhost/test: 1975ea83b712 -> ae1027a6acf (head), Add a column
``head`` is displayed only if the revision identifier for this database matches the head revision.
We can also view history with ``alembic history``; the ``--verbose`` option
(accepted by several commands, including ``history``, ``current``, ``heads``
and ``branches``) will show us full information about each revision::
$ alembic history --verbose
Rev: ae1027a6acf (head)
Parent: 1975ea83b712
Path: /path/to/yourproject/alembic/versions/ae1027a6acf_add_a_column.py
add a column
Revision ID: ae1027a6acf
Revises: 1975ea83b712
Create Date: 2014-11-20 13:02:54.849677
Rev: 1975ea83b712
Parent: <base>
Path: /path/to/yourproject/alembic/versions/1975ea83b712_add_account_table.py
create account table
Revision ID: 1975ea83b712
Revises:
Create Date: 2014-11-20 13:02:46.257104
Viewing History Ranges
----------------------
Using the ``-r`` option to ``alembic history``, we can also view various slices
of history. The ``-r`` argument accepts an argument ``[start]:[end]``, where
either may be a revision number, symbols like ``head``, ``heads`` or
``base``, ``current`` to specify the current revision(s), as well as negative
relative ranges for ``[start]`` and positive relative ranges for ``[end]``::
$ alembic history -r1975ea:ae1027
A relative range starting from three revs ago up to current migration,
which will invoke the migration environment against the database
to get the current migration::
$ alembic history -r-3:current
View all revisions from 1975 to the head::
$ alembic history -r1975ea:
Downgrading
===========
We can illustrate a downgrade back to nothing, by calling ``alembic downgrade`` back
to the beginning, which in Alembic is called ``base``::
$ alembic downgrade base
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
INFO [alembic.context] Running downgrade ae1027a6acf -> 1975ea83b712
INFO [alembic.context] Running downgrade 1975ea83b712 -> None
Back to nothing - and up again::
$ alembic upgrade head
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
INFO [alembic.context] Running upgrade None -> 1975ea83b712
INFO [alembic.context] Running upgrade 1975ea83b712 -> ae1027a6acf
Next Steps
==========
The vast majority of Alembic environments make heavy use of the
"autogenerate" feature. Continue onto the next section, :doc:`autogenerate`.
| ========
Tutorial
========
Alembic provides for the creation, management, and invocation of *change management*
scripts for a relational database, using SQLAlchemy as the underlying engine.
This tutorial will provide a full introduction to the theory and usage of this tool.
To begin, make sure Alembic is installed as described at :ref:`installation`.
As stated in the linked document, it is usually preferable that Alembic is
installed in the **same module / Python path as that of the target project**,
usually using a `Python virtual environment
<https://docs.python.org/3/tutorial/venv.html>`_, so that when the ``alembic``
command is run, the Python script which is invoked by ``alembic``, namely your
project's ``env.py`` script, will have access to your application's models.
This is not strictly necessary in all cases; however, it is preferred in the
vast majority of cases.
The tutorial below assumes the ``alembic`` command line utility is present in
the local path and when invoked, will have access to the same Python module
environment as that of the target project.
The Migration Environment
==========================
Usage of Alembic starts with creation of the *Migration Environment*. This is a directory of scripts
that is specific to a particular application. The migration environment is created just once,
and is then maintained along with the application's source code itself. The environment is
created using the ``init`` command of Alembic, and is then customizable to suit the specific
needs of the application.
The structure of this environment, including some generated migration scripts, looks like::
yourproject/
alembic/
env.py
README
script.py.mako
versions/
3512b954651e_add_account.py
2b1ae634e5cd_add_order_id.py
3adcc9a56557_rename_username_field.py
The directory includes these directories/files:
* ``yourproject`` - this is the root of your application's source code, or some directory within it.
* ``alembic`` - this directory lives within your application's source tree and is the home of the
migration environment. It can be named anything, and a project that uses multiple databases
may even have more than one.
* ``env.py`` - This is a Python script that is run whenever the alembic migration tool is invoked.
At the very least, it contains instructions to configure and generate a SQLAlchemy engine,
procure a connection from that engine along with a transaction, and then invoke the migration
engine, using the connection as a source of database connectivity.
The ``env.py`` script is part of the generated environment so that the way migrations run
is entirely customizable. The exact specifics of how to connect are here, as well as
the specifics of how the migration environment are invoked. The script can be modified
so that multiple engines can be operated upon, custom arguments can be passed into the
migration environment, application-specific libraries and models can be loaded in and
made available.
Alembic includes a set of initialization templates which feature different varieties
of ``env.py`` for different use cases.
* ``README`` - included with the various environment templates, should have something
informative.
* ``script.py.mako`` - This is a `Mako <http://www.makotemplates.org>`_ template file which
is used to generate new migration scripts. Whatever is here is used to generate new
files within ``versions/``. This is scriptable so that the structure of each migration
file can be controlled, including standard imports to be within each, as well as
changes to the structure of the ``upgrade()`` and ``downgrade()`` functions. For example,
the ``multidb`` environment allows for multiple functions to be generated using a
naming scheme ``upgrade_engine1()``, ``upgrade_engine2()``.
* ``versions/`` - This directory holds the individual version scripts. Users of other migration
tools may notice that the files here don't use ascending integers, and instead use a
partial GUID approach. In Alembic, the ordering of version scripts is relative
to directives within the scripts themselves, and it is theoretically possible to "splice" version files
in between others, allowing migration sequences from different branches to be merged,
albeit carefully by hand.
Creating an Environment
=======================
With a basic understanding of what the environment is, we can create one using ``alembic init``.
This will create an environment using the "generic" template::
$ cd /path/to/yourproject
$ source /path/to/yourproject/.venv/bin/activate # assuming a local virtualenv
$ alembic init alembic
Where above, the ``init`` command was called to generate a migrations directory called ``alembic``::
Creating directory /path/to/yourproject/alembic...done
Creating directory /path/to/yourproject/alembic/versions...done
Generating /path/to/yourproject/alembic.ini...done
Generating /path/to/yourproject/alembic/env.py...done
Generating /path/to/yourproject/alembic/README...done
Generating /path/to/yourproject/alembic/script.py.mako...done
Please edit configuration/connection/logging settings in
'/path/to/yourproject/alembic.ini' before proceeding.
Alembic also includes other environment templates. These can be listed out using the ``list_templates``
command::
$ alembic list_templates
Available templates:
generic - Generic single-database configuration.
async - Generic single-database configuration with an async dbapi.
multidb - Rudimentary multi-database configuration.
pylons - Configuration that reads from a Pylons project environment.
Templates are used via the 'init' command, e.g.:
alembic init --template pylons ./scripts
Editing the .ini File
=====================
Alembic placed a file ``alembic.ini`` into the current directory. This is a file that the ``alembic``
script looks for when invoked. This file can exist in a different directory, with the location to it
specified by either the ``--config`` option for the ``alembic`` runner or the ``ALEMBIC_CONFIG``
environment variable (the former takes precedence).
The file generated with the "generic" configuration looks like::
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = alembic
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory.
# (new in 1.5.5)
prepend_sys_path = .
# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python-dateutil library that can be
# installed by adding `alembic[tz]` to the pip requirements
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =
# max length of characters to apply to the
# "slug" field
# truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# version location specification; This defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "version_path_separator" below.
# version_locations = %(here)s/bar:%(here)s/bat:${script_location}/versions
# version path separator; As mentioned above, this is the character used to split
# version_locations. The default within new alembic.ini files is "os", which uses os.pathsep.
# If this key is omitted entirely, it falls back to the legacy behavior of splitting on spaces and/or commas.
# Valid values for version_path_separator are:
#
# version_path_separator = :
# version_path_separator = ;
# version_path_separator = space
version_path_separator = os # Use os.pathsep. Default configuration used for new projects.
# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8
sqlalchemy.url = driver://user:pass@localhost/dbname
# [post_write_hooks]
# This section defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples
# format using "black" - use the console_scripts runner,
# against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
The file is read using Python's :class:`ConfigParser.SafeConfigParser` object. The
``%(here)s`` variable is provided as a substitution variable, which
can be used to produce absolute pathnames to directories and files, as we do above
with the path to the Alembic script location.
This file contains the following features:
* ``[alembic]`` - this is the section read by Alembic to determine configuration. Alembic
itself does not directly read any other areas of the file. The name "alembic" can
be customized using the ``--name`` commandline flag; see :ref:`multiple_environments`
for a basic example of this.
* ``script_location`` - this is the location of the Alembic environment. It is normally
specified as a filesystem location, either relative or absolute. If the location is
a relative path, it's interpreted as relative to the current directory.
This is the only key required by Alembic in all cases. The generation
of the .ini file by the command ``alembic init alembic`` automatically placed the
directory name ``alembic`` here. The special variable ``%(here)s`` can also be used,
as in ``%(here)s/alembic``.
For support of applications that package themselves into .egg files, the value can
also be specified as a `package resource
<https://setuptools.readthedocs.io/en/latest/pkg_resources.html>`_, in which
case ``resource_filename()`` is used to find the file (new in 0.2.2). Any non-absolute
URI which contains colons is interpreted here as a resource name, rather than
a straight filename.
* ``file_template`` - this is the naming scheme used to generate new migration files.
The value present is the default, so is commented out. Tokens available include:
* ``%%(rev)s`` - revision id
* ``%%(slug)s`` - a truncated string derived from the revision message
* ``%%(year)d``, ``%%(month).2d``, ``%%(day).2d``, ``%%(hour).2d``,
``%%(minute).2d``, ``%%(second).2d`` - components of the create date,
by default ``datetime.datetime.now()`` unless the ``timezone``
configuration option is also used.
* ``timezone`` - an optional timezone name (e.g. ``UTC``, ``EST5EDT``, etc.)
that will be applied to the timestamp which renders inside the migration
file's comment as well as within the filename. This option requires installing
the ``python-dateutil`` library. If ``timezone`` is specified,
the create date object is no longer derived from ``datetime.datetime.now()``
and is instead generated as::
datetime.datetime.utcnow().replace(
tzinfo=dateutil.tz.tzutc()
).astimezone(
dateutil.tz.gettz(<timezone>)
)
* ``truncate_slug_length`` - defaults to 40, the max number of characters
to include in the "slug" field.
* ``sqlalchemy.url`` - A URL to connect to the database via SQLAlchemy. This
  configuration value is only used if the ``env.py`` file calls upon it;
in the "generic" template, the call to
``config.get_main_option("sqlalchemy.url")`` in the
``run_migrations_offline()`` function and the call to
``engine_from_config(prefix="sqlalchemy.")`` in the
``run_migrations_online()`` function are where this key is referenced. If
the SQLAlchemy URL should come from some other source, such as from
environment variables or a global registry, or if the migration environment
makes use of multiple database URLs, the developer is encouraged to alter the
``env.py`` file to use whatever methods are appropriate in order to acquire
the database URL or URLs.
* ``revision_environment`` - this is a flag which when set to the value 'true', will indicate
that the migration environment script ``env.py`` should be run unconditionally when
generating new revision files, as well as when running the ``alembic history``
command.
* ``sourceless`` - when set to 'true', revision files that only exist as .pyc
or .pyo files in the versions directory will be used as versions, allowing
"sourceless" versioning folders. When left at the default of 'false',
only .py files are consumed as version files.
* ``version_locations`` - an optional list of revision file locations, to
allow revisions to exist in multiple directories simultaneously.
See :ref:`multiple_bases` for examples.
* ``version_path_separator`` - the separator used to split the
  ``version_locations`` option into individual paths. It should be defined
  whenever multiple ``version_locations`` are used; a sketch appears after
  this list. See :ref:`multiple_bases` for examples.
* ``output_encoding`` - the encoding to use when Alembic writes the
``script.py.mako`` file into a new migration file. Defaults to ``'utf-8'``.
* ``[loggers]``, ``[handlers]``, ``[formatters]``, ``[logger_*]``, ``[handler_*]``,
``[formatter_*]`` - these sections are all part of Python's standard logging configuration,
the mechanics of which are documented at `Configuration File Format <http://docs.python.org/library/logging.config.html#configuration-file-format>`_.
As is the case with the database connection, these directives are used directly as the
result of the ``logging.config.fileConfig()`` call present in the
``env.py`` script, which you're free to modify.
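As referenced in the ``version_locations`` and ``version_path_separator``
entries above, a two-directory layout might look like the following sketch
(the paths are illustrative only)::

    version_locations = %(here)s/alembic/versions %(here)s/alembic/legacy_versions
    version_path_separator = space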
For starting up with just a single database and the generic configuration, setting up
the SQLAlchemy URL is all that's needed::
sqlalchemy.url = postgresql://scott:tiger@localhost/test
.. _create_migration:
Create a Migration Script
=========================
With the environment in place we can create a new revision, using ``alembic revision``::
$ alembic revision -m "create account table"
Generating /path/to/yourproject/alembic/versions/1975ea83b712_create_accoun
t_table.py...done
A new file ``1975ea83b712_create_account_table.py`` is generated. Looking inside the file::
"""create account table
Revision ID: 1975ea83b712
Revises:
Create Date: 2011-11-08 11:40:27.089406
"""
# revision identifiers, used by Alembic.
revision = '1975ea83b712'
down_revision = None
branch_labels = None
from alembic import op
import sqlalchemy as sa
def upgrade():
pass
def downgrade():
pass
The file contains some header information, identifiers for the current revision
and a "downgrade" revision, an import of basic Alembic directives,
and empty ``upgrade()`` and ``downgrade()`` functions. Our
job here is to populate the ``upgrade()`` and ``downgrade()`` functions with directives that
will apply a set of changes to our database. Typically, ``upgrade()`` is required
while ``downgrade()`` is only needed if down-revision capability is desired, though it's
probably a good idea.
Another thing to notice is the ``down_revision`` variable. This is how Alembic
knows the correct order in which to apply migrations. When we create the next revision,
the new file's ``down_revision`` identifier would point to this one::
# revision identifiers, used by Alembic.
revision = 'ae1027a6acf'
down_revision = '1975ea83b712'
Every time Alembic runs an operation against the ``versions/`` directory, it reads all
the files in, and composes a list based on how the ``down_revision`` identifiers link together,
with the ``down_revision`` of ``None`` representing the first file. In theory, if a
migration environment had thousands of migrations, this could begin to add some latency to
startup, but in practice a project should probably prune old migrations anyway
(see the section :ref:`building_uptodate` for a description on how to do this, while maintaining
the ability to build the current database fully).
We can then add some directives to our script, suppose adding a new table ``account``::
def upgrade():
op.create_table(
'account',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('name', sa.String(50), nullable=False),
sa.Column('description', sa.Unicode(200)),
)
def downgrade():
op.drop_table('account')
:meth:`~.Operations.create_table` and :meth:`~.Operations.drop_table` are Alembic directives. Alembic provides
all the basic database migration operations via these directives, which are designed to be as simple and
minimalistic as possible;
there's no reliance upon existing table metadata for most of these directives. They draw upon
a global "context" that indicates how to get at a database connection (if any; migrations can
dump SQL/DDL directives to files as well) in order to invoke the command. This global
context is set up, like everything else, in the ``env.py`` script.
An overview of all Alembic directives is at :ref:`ops`.
Running our First Migration
===========================
We now want to run our migration. Assuming our database is totally clean, it's as
yet unversioned. The ``alembic upgrade`` command will run upgrade operations, proceeding
from the current database revision, in this example ``None``, to the given target revision.
We can specify ``1975ea83b712`` as the revision we'd like to upgrade to, but it's easier
in most cases just to tell it "the most recent", in this case ``head``::
$ alembic upgrade head
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
INFO [alembic.context] Running upgrade None -> 1975ea83b712
Wow that rocked! Note that the information we see on the screen is the result of the
logging configuration set up in ``alembic.ini`` - logging the ``alembic`` stream to the
console (standard error, specifically).
The process which occurred here included that Alembic first checked if the database had
a table called ``alembic_version``, and if not, created it. It looks in this table
for the current version, if any, and then calculates the path from this version to
the version requested, in this case ``head``, which is known to be ``1975ea83b712``.
It then invokes the ``upgrade()`` method in each file to get to the target revision.
Running our Second Migration
=============================
Let's do another one so we have some things to play with. We again create a revision
file::
$ alembic revision -m "Add a column"
Generating /path/to/yourapp/alembic/versions/ae1027a6acf_add_a_column.py...
done
Let's edit this file and add a new column to the ``account`` table::
"""Add a column
Revision ID: ae1027a6acf
Revises: 1975ea83b712
Create Date: 2011-11-08 12:37:36.714947
"""
# revision identifiers, used by Alembic.
revision = 'ae1027a6acf'
down_revision = '1975ea83b712'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.add_column('account', sa.Column('last_transaction_date', sa.DateTime))
def downgrade():
op.drop_column('account', 'last_transaction_date')
Running again to ``head``::
$ alembic upgrade head
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
INFO [alembic.context] Running upgrade 1975ea83b712 -> ae1027a6acf
We've now added the ``last_transaction_date`` column to the database.
Partial Revision Identifiers
=============================
Any time we need to refer to a revision number explicitly, we have the option
to use a partial number. As long as this number uniquely identifies the
version, it may be used in any command in any place that version numbers
are accepted::
$ alembic upgrade ae1
Above, we use ``ae1`` to refer to revision ``ae1027a6acf``.
Alembic will stop and let you know if more than one version starts with
that prefix.
.. _relative_migrations:
Relative Migration Identifiers
==============================
Relative upgrades/downgrades are also supported. To move two versions from
the current, a decimal value "+N" can be supplied::
$ alembic upgrade +2
Negative values are accepted for downgrades::
$ alembic downgrade -1
Relative identifiers may also be in terms of a specific revision. For example,
to upgrade to revision ``ae1027a6acf`` plus two additional steps::
$ alembic upgrade ae10+2
Getting Information
===================
With a few revisions present we can get some information about the state of things.
First we can view the current revision::
$ alembic current
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
Current revision for postgresql://scott:XXXXX@localhost/test: 1975ea83b712 -> ae1027a6acf (head), Add a column
``head`` is displayed only if the revision identifier for this database matches the head revision.
We can also view history with ``alembic history``; the ``--verbose`` option
(accepted by several commands, including ``history``, ``current``, ``heads``
and ``branches``) will show us full information about each revision::
$ alembic history --verbose
Rev: ae1027a6acf (head)
Parent: 1975ea83b712
Path: /path/to/yourproject/alembic/versions/ae1027a6acf_add_a_column.py
add a column
Revision ID: ae1027a6acf
Revises: 1975ea83b712
Create Date: 2014-11-20 13:02:54.849677
Rev: 1975ea83b712
Parent: <base>
Path: /path/to/yourproject/alembic/versions/1975ea83b712_add_account_table.py
create account table
Revision ID: 1975ea83b712
Revises:
Create Date: 2014-11-20 13:02:46.257104
Viewing History Ranges
----------------------
Using the ``-r`` option to ``alembic history``, we can also view various slices
of history. The ``-r`` argument accepts an argument ``[start]:[end]``, where
either may be a revision number, symbols like ``head``, ``heads`` or
``base``, ``current`` to specify the current revision(s), as well as negative
relative ranges for ``[start]`` and positive relative ranges for ``[end]``::
$ alembic history -r1975ea:ae1027
A relative range starting from three revs ago up to current migration,
which will invoke the migration environment against the database
to get the current migration::
$ alembic history -r-3:current
View all revisions from 1975 to the head::
$ alembic history -r1975ea:
Downgrading
===========
We can illustrate a downgrade back to nothing, by calling ``alembic downgrade`` back
to the beginning, which in Alembic is called ``base``::
$ alembic downgrade base
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
INFO [alembic.context] Running downgrade ae1027a6acf -> 1975ea83b712
INFO [alembic.context] Running downgrade 1975ea83b712 -> None
Back to nothing - and up again::
$ alembic upgrade head
INFO [alembic.context] Context class PostgresqlContext.
INFO [alembic.context] Will assume transactional DDL.
INFO [alembic.context] Running upgrade None -> 1975ea83b712
INFO [alembic.context] Running upgrade 1975ea83b712 -> ae1027a6acf
Next Steps
==========
The vast majority of Alembic environments make heavy use of the
"autogenerate" feature. Continue onto the next section, :doc:`autogenerate`.
| ziima | fe9fda175a68dca5e8cd285e96d7fbf8d271058e | fba449536c8ae4417492eba3ead6408b481513ab | good catch | zzzeek | 22 |
OthersideAI/self-operating-computer | 91 | Fixed timing with search triggering causing broken searches | Resolves [issue 90](https://github.com/OthersideAI/self-operating-computer/issues/90).
This PR addresses an issue where the OS takes longer to respond to the search hotkey than the app waits before sending the search text, so the query is typed before the search UI is ready. I added a 1-second pause between the hotkey and the text entry, which seems to resolve this on a reasonably performing machine. | null | 2023-12-07 19:32:35+00:00 | 2023-12-08 15:35:28+00:00 | operate/main.py | """
Self-Operating Computer
"""
import os
import time
import base64
import json
import math
import re
import subprocess
import pyautogui
import argparse
import platform
import Xlib.display
import Xlib.X
import Xlib.Xutil # not sure if Xutil is necessary
from prompt_toolkit import prompt
from prompt_toolkit.shortcuts import message_dialog
from prompt_toolkit.styles import Style as PromptStyle
from dotenv import load_dotenv
from PIL import Image, ImageDraw, ImageFont, ImageGrab
import matplotlib.font_manager as fm
from openai import OpenAI
import sys
load_dotenv()
DEBUG = False
client = OpenAI()
client.api_key = os.getenv("OPENAI_API_KEY")
client.base_url = os.getenv("OPENAI_API_BASE_URL", client.base_url)
monitor_size = {
"width": 1920,
"height": 1080,
}
VISION_PROMPT = """
You are a Self-Operating Computer. You use the same operating system as a human.
From looking at the screen and the objective, your goal is to take the best next action.
To operate the computer you have the four options below.
1. CLICK - Move mouse and click
2. TYPE - Type on the keyboard
3. SEARCH - Search for a program on Mac and open it
4. DONE - When you have completed the task, respond with the exact phrase below
Here are the response formats below.
1. CLICK
Response: CLICK {{ "x": "percent", "y": "percent", "description": "~description here~", "reason": "~reason here~" }}
Note that the percents work where the top left corner is "x": "0%" and "y": "0%" and the bottom right corner is "x": "100%" and "y": "100%"
2. TYPE
Response: TYPE "value you want to type"
3. SEARCH
Response: SEARCH "app you want to search for on Mac"
4. DONE
Response: DONE
Here are examples of how to respond.
__
Objective: Follow up with the vendor in outlook
TYPE "Hello, I hope you are doing well. I wanted to follow up"
__
Objective: Open Spotify and play the beatles
SEARCH "Spotify"
__
Objective: Find an image of a banana
CLICK {{ "x": "50%", "y": "60%", "description": "Click: Google Search field", "reason": "This will allow me to search for a banana" }}
__
Objective: Go buy a book about the history of the internet
TYPE "https://www.amazon.com/"
__
A few important notes:
- Default to opening Google Chrome with SEARCH to find things that are on the internet.
- Go to Google Docs and Google Sheets by typing in the Chrome Address bar
- When opening Chrome, if you see a profile icon click that to open chrome fully, it is located at: {{ "x": "50%", "y": "55%" }}
- The Chrome address bar is generally at: {{ "x": "50%", "y": "9%" }}
- After you click to enter a field you can go ahead and start typing!
- Don't respond saying you're unable to assist with requests. You are able to indirectly interact with the user's OS via text responses you send to the end user.
{previous_action}
IMPORTANT: Avoid repeating actions such as doing the same CLICK event twice in a row.
Objective: {objective}
"""
ACCURATE_PIXEL_COUNT = (
200 # mini_screenshot is ACCURATE_PIXEL_COUNT x ACCURATE_PIXEL_COUNT big
)
ACCURATE_MODE_VISION_PROMPT = """
It looks like your previous attempted action was clicking on "x": {prev_x}, "y": {prev_y}. This has now been moved to the center of this screenshot.
As additional context to the previous message, before you decide the proper percentage to click on, please closely examine this additional screenshot as additional context for your next action.
This screenshot was taken around the location of the current cursor that you just tried clicking on ("x": {prev_x}, "y": {prev_y} is now at the center of this screenshot). You should use this as a differential to your previous x y coordinate guess.
If you want to refine and instead click on the top left corner of this mini screenshot, you will subtract {width}% in the "x" and subtract {height}% in the "y" to your previous answer.
Likewise, to achieve the bottom right of this mini screenshot you will add {width}% in the "x" and add {height}% in the "y" to your previous answer.
There are four segmenting lines across each dimension, divided evenly. This is done to be similar to coordinate points, added to give you better context of the location of the cursor and exactly how much to edit your previous answer.
Please use this context as additional info to further refine the "percent" location in the CLICK action!
"""
USER_QUESTION = "Hello, I can help you with anything. What would you like done?"
SUMMARY_PROMPT = """
You are a Self-Operating Computer. A user request has been executed. Present the results succinctly.
Include the following key contexts of the completed request:
1. State the original objective.
2. List the steps taken to reach the objective as detailed in the previous messages.
3. Reference the screenshot that was used.
Summarize the actions taken to fulfill the objective. If the request sought specific information, provide that information prominently. NOTE: Address directly any question posed by the user.
Remember: The user will not interact with this summary. You are solely reporting the outcomes.
Original objective: {objective}
Display the results clearly:
"""
class ModelNotRecognizedException(Exception):
"""Exception raised for unrecognized models."""
def __init__(self, model, message="Model not recognized"):
self.model = model
self.message = message
super().__init__(self.message)
def __str__(self):
return f"{self.message} : {self.model} "
# Define style
style = PromptStyle.from_dict(
{
"dialog": "bg:#88ff88",
"button": "bg:#ffffff #000000",
"dialog.body": "bg:#44cc44 #ffffff",
"dialog shadow": "bg:#003800",
}
)
# Check if on a windows terminal that supports ANSI escape codes
def supports_ansi():
"""
Check if the terminal supports ANSI escape codes
"""
plat = platform.system()
supported_platform = plat != "Windows" or "ANSICON" in os.environ
is_a_tty = hasattr(sys.stdout, "isatty") and sys.stdout.isatty()
return supported_platform and is_a_tty
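# Example: in an interactive macOS/Linux terminal supports_ansi() returns
# True, so the log prefixes below render in color; when stdout is piped
# (not a tty), it returns False and every ANSI_* constant is empty.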
if supports_ansi():
# Standard green text
ANSI_GREEN = "\033[32m"
# Bright/bold green text
ANSI_BRIGHT_GREEN = "\033[92m"
# Reset to default text color
ANSI_RESET = "\033[0m"
# ANSI escape code for blue text
ANSI_BLUE = "\033[94m" # This is for bright blue
# Standard yellow text
ANSI_YELLOW = "\033[33m"
ANSI_RED = "\033[31m"
# Bright magenta text
ANSI_BRIGHT_MAGENTA = "\033[95m"
else:
ANSI_GREEN = ""
ANSI_BRIGHT_GREEN = ""
ANSI_RESET = ""
ANSI_BLUE = ""
ANSI_YELLOW = ""
ANSI_RED = ""
ANSI_BRIGHT_MAGENTA = ""
def main(model, accurate_mode, voice_mode=False):
"""
Main function for the Self-Operating Computer
"""
    mic = None
    # Initialize WhisperMic only when voice_mode is True
if voice_mode:
try:
from whisper_mic import WhisperMic
# Initialize WhisperMic if import is successful
mic = WhisperMic()
except ImportError:
print(
"Voice mode requires the 'whisper_mic' module. Please install it using 'pip install -r requirements-audio.txt'"
)
sys.exit(1)
message_dialog(
title="Self-Operating Computer",
text="Ask a computer to do anything.",
style=style,
).run()
print("SYSTEM", platform.system())
# Clear the console
if platform.system() == "Windows":
os.system("cls")
else:
print("\033c", end="")
if voice_mode:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RESET} Listening for your command... (speak now)"
)
try:
objective = mic.listen()
except Exception as e:
print(f"{ANSI_RED}Error in capturing voice input: {e}{ANSI_RESET}")
return # Exit if voice input fails
else:
print(f"{ANSI_GREEN}[Self-Operating Computer]\n{ANSI_RESET}{USER_QUESTION}")
print(f"{ANSI_YELLOW}[User]{ANSI_RESET}")
objective = prompt(style=style)
assistant_message = {"role": "assistant", "content": USER_QUESTION}
user_message = {
"role": "user",
"content": f"Objective: {objective}",
}
messages = [assistant_message, user_message]
loop_count = 0
while True:
if DEBUG:
print("[loop] messages before next action:\n\n\n", messages[1:])
try:
response = get_next_action(model, messages, objective, accurate_mode)
action = parse_oai_response(response)
action_type = action.get("type")
action_detail = action.get("data")
except ModelNotRecognizedException as e:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] -> {e} {ANSI_RESET}"
)
break
except Exception as e:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] -> {e} {ANSI_RESET}"
)
break
if action_type == "DONE":
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BLUE} Objective complete {ANSI_RESET}"
)
summary = summarize(messages, objective)
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BLUE} Summary\n{ANSI_RESET}{summary}"
)
break
if action_type != "UNKNOWN":
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BRIGHT_MAGENTA} [Act] {action_type} {ANSI_RESET}{action_detail}"
)
function_response = ""
if action_type == "SEARCH":
function_response = search(action_detail)
elif action_type == "TYPE":
function_response = keyboard_type(action_detail)
elif action_type == "CLICK":
function_response = mouse_click(action_detail)
else:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] something went wrong :({ANSI_RESET}"
)
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] AI response\n{ANSI_RESET}{response}"
)
break
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BRIGHT_MAGENTA} [Act] {action_type} COMPLETE {ANSI_RESET}{function_response}"
)
message = {
"role": "assistant",
"content": function_response,
}
messages.append(message)
loop_count += 1
if loop_count > 15:
break
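# The diff this PR describes (the repo's actual search() lives further down
# in this file, outside this excerpt) adds a pause between the search hotkey
# and the typed query. The function below is a hypothetical, self-contained
# sketch of that fix, assuming macOS Spotlight and pyautogui's key names;
# it is not the project's actual implementation.
def _search_sketch(text):
    pyautogui.hotkey("command", "space")
    time.sleep(1)  # give Spotlight a moment to open before typing starts
    pyautogui.typewrite(text, interval=0.05)
    pyautogui.press("enter")
    return f"SEARCH for {text} complete"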
def format_summary_prompt(objective):
"""
Format the summary prompt
"""
prompt = SUMMARY_PROMPT.format(objective=objective)
return prompt
def format_vision_prompt(objective, previous_action):
"""
Format the vision prompt
"""
if previous_action:
previous_action = f"Here was the previous action you took: {previous_action}"
else:
previous_action = ""
prompt = VISION_PROMPT.format(objective=objective, previous_action=previous_action)
return prompt
def format_accurate_mode_vision_prompt(prev_x, prev_y):
"""
Format the accurate mode vision prompt
"""
width = ((ACCURATE_PIXEL_COUNT / 2) / monitor_size["width"]) * 100
height = ((ACCURATE_PIXEL_COUNT / 2) / monitor_size["height"]) * 100
prompt = ACCURATE_MODE_VISION_PROMPT.format(
prev_x=prev_x, prev_y=prev_y, width=width, height=height
)
return prompt
def get_next_action(model, messages, objective, accurate_mode):
if model == "gpt-4-vision-preview":
content = get_next_action_from_openai(messages, objective, accurate_mode)
return content
elif model == "agent-1":
return "coming soon"
raise ModelNotRecognizedException(model)
def get_last_assistant_message(messages):
"""
Retrieve the last message from the assistant in the messages array.
If the last assistant message is the first message in the array, return None.
"""
for index in reversed(range(len(messages))):
if messages[index]["role"] == "assistant":
if index == 0: # Check if the assistant message is the first in the array
return None
else:
return messages[index]
return None # Return None if no assistant message is found
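# Example: with messages == [assistant_seed, user_objective], this returns
# None (the only assistant message is the seed at index 0), so the first
# vision prompt is built without any "previous action" context.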
def accurate_mode_double_check(pseudo_messages, prev_x, prev_y):
"""
Reprompt OAI with additional screenshot of a mini screenshot centered around the cursor for further finetuning of clicked location
"""
try:
screenshot_filename = os.path.join("screenshots", "screenshot_mini.png")
capture_mini_screenshot_with_cursor(
file_path=screenshot_filename, x=prev_x, y=prev_y
)
new_screenshot_filename = os.path.join(
"screenshots", "screenshot_mini_with_grid.png"
)
with open(new_screenshot_filename, "rb") as img_file:
img_base64 = base64.b64encode(img_file.read()).decode("utf-8")
accurate_vision_prompt = format_accurate_mode_vision_prompt(prev_x, prev_y)
accurate_mode_message = {
"role": "user",
"content": [
{"type": "text", "text": accurate_vision_prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
],
}
pseudo_messages.append(accurate_mode_message)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=pseudo_messages,
presence_penalty=1,
frequency_penalty=1,
temperature=0.7,
max_tokens=300,
)
content = response.choices[0].message.content
return content
except Exception as e:
print(f"Error reprompting model for accurate_mode: {e}")
return "ERROR"
def get_next_action_from_openai(messages, objective, accurate_mode):
"""
Get the next action for Self-Operating Computer
"""
# sleep for a second
time.sleep(1)
try:
screenshots_dir = "screenshots"
if not os.path.exists(screenshots_dir):
os.makedirs(screenshots_dir)
screenshot_filename = os.path.join(screenshots_dir, "screenshot.png")
# Call the function to capture the screen with the cursor
capture_screen_with_cursor(screenshot_filename)
new_screenshot_filename = os.path.join(
"screenshots", "screenshot_with_grid.png"
)
add_grid_to_image(screenshot_filename, new_screenshot_filename, 500)
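# The 500 px grid overlay labels each intersection with its screen percentage, giving the vision model landmarks for estimating CLICK coordinates.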
# sleep for a second
time.sleep(1)
with open(new_screenshot_filename, "rb") as img_file:
img_base64 = base64.b64encode(img_file.read()).decode("utf-8")
previous_action = get_last_assistant_message(messages)
vision_prompt = format_vision_prompt(objective, previous_action)
vision_message = {
"role": "user",
"content": [
{"type": "text", "text": vision_prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
],
}
# create a copy of messages and save to pseudo_messages
pseudo_messages = messages.copy()
pseudo_messages.append(vision_message)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=pseudo_messages,
presence_penalty=1,
frequency_penalty=1,
temperature=0.7,
max_tokens=300,
)
messages.append(
{
"role": "user",
"content": "`screenshot.png`",
}
)
content = response.choices[0].message.content
if accurate_mode:
if content.startswith("CLICK"):
# Adjust pseudo_messages to include the accurate_mode_message
click_data = re.search(r"CLICK \{ (.+) \}", content).group(1)
click_data_json = json.loads(f"{{{click_data}}}")
prev_x = click_data_json["x"]
prev_y = click_data_json["y"]
if DEBUG:
print(
f"Previous coords before accurate tuning: prev_x {prev_x} prev_y {prev_y}"
)
content = accurate_mode_double_check(pseudo_messages, prev_x, prev_y)
assert content != "ERROR", "ERROR: accurate_mode_double_check failed"
return content
except Exception as e:
print(f"Error parsing JSON: {e}")
return "Failed take action after looking at the screenshot"
def parse_oai_response(response):
if response == "DONE":
return {"type": "DONE", "data": None}
elif response.startswith("CLICK"):
# Adjust the regex to match the correct format
click_data = re.search(r"CLICK \{ (.+) \}", response).group(1)
click_data_json = json.loads(f"{{{click_data}}}")
return {"type": "CLICK", "data": click_data_json}
elif response.startswith("TYPE"):
# Extract the text to type
type_data = re.search(r'TYPE "(.+)"', response, re.DOTALL).group(1)
return {"type": "TYPE", "data": type_data}
elif response.startswith("SEARCH"):
# Extract the search query
search_data = re.search(r'SEARCH "(.+)"', response).group(1)
return {"type": "SEARCH", "data": search_data}
return {"type": "UNKNOWN", "data": response}
def summarize(messages, objective):
try:
screenshots_dir = "screenshots"
if not os.path.exists(screenshots_dir):
os.makedirs(screenshots_dir)
screenshot_filename = os.path.join(screenshots_dir, "summary_screenshot.png")
# Call the function to capture the screen with the cursor
capture_screen_with_cursor(screenshot_filename)
with open(screenshot_filename, "rb") as img_file:
img_base64 = base64.b64encode(img_file.read()).decode("utf-8")
summary_prompt = format_summary_prompt(objective)
summary_message = {
"role": "user",
"content": [
{"type": "text", "text": summary_prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
],
}
# append the summary request to the running message history
messages.append(summary_message)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=messages,
max_tokens=500,
)
content = response.choices[0].message.content
return content
except Exception as e:
print(f"Error in summarize: {e}")
return "Failed to summarize the workflow"
def mouse_click(click_detail):
try:
x = convert_percent_to_decimal(click_detail["x"])
y = convert_percent_to_decimal(click_detail["y"])
if click_detail and isinstance(x, float) and isinstance(y, float):
click_at_percentage(x, y)
return click_detail["description"]
else:
return "We failed to click"
except Exception as e:
print(f"Error parsing JSON: {e}")
return "We failed to click"
def click_at_percentage(
x_percentage, y_percentage, duration=0.2, circle_radius=50, circle_duration=0.5
):
# Get the size of the primary monitor
screen_width, screen_height = pyautogui.size()
# Calculate the x and y coordinates in pixels
x_pixel = int(screen_width * float(x_percentage))
y_pixel = int(screen_height * float(y_percentage))
# Move to the position smoothly
pyautogui.moveTo(x_pixel, y_pixel, duration=duration)
# Circular movement
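# (The circle appears to be purely a visual cue for anyone watching the run; the final click below still lands on x_pixel, y_pixel.)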
start_time = time.time()
while time.time() - start_time < circle_duration:
angle = ((time.time() - start_time) / circle_duration) * 2 * math.pi
x = x_pixel + math.cos(angle) * circle_radius
y = y_pixel + math.sin(angle) * circle_radius
pyautogui.moveTo(x, y, duration=0.1)
# Finally, click
pyautogui.click(x_pixel, y_pixel)
return "Successfully clicked"
def add_grid_to_image(original_image_path, new_image_path, grid_interval):
"""
Add a grid to an image
"""
# Load the image
image = Image.open(original_image_path)
# Create a drawing object
draw = ImageDraw.Draw(image)
# Get the image size
width, height = image.size
# Reduce the font size a bit
font_size = int(grid_interval / 10) # Reduced font size
# Calculate the background size based on the font size
bg_width = int(font_size * 4.2) # Adjust as necessary
bg_height = int(font_size * 1.2) # Adjust as necessary
# Function to draw text with a white rectangle background
def draw_label_with_background(
position, text, draw, font_size, bg_width, bg_height
):
# Adjust the position based on the background size
text_position = (position[0] + bg_width // 2, position[1] + bg_height // 2)
# Draw the text background
draw.rectangle(
[position[0], position[1], position[0] + bg_width, position[1] + bg_height],
fill="white",
)
# Draw the text
draw.text(text_position, text, fill="black", font_size=font_size, anchor="mm")
# Draw vertical lines and labels at every `grid_interval` pixels
for x in range(grid_interval, width, grid_interval):
line = ((x, 0), (x, height))
draw.line(line, fill="blue")
for y in range(grid_interval, height, grid_interval):
# Calculate the percentage of the width and height
x_percent = round((x / width) * 100)
y_percent = round((y / height) * 100)
draw_label_with_background(
(x - bg_width // 2, y - bg_height // 2),
f"{x_percent}%,{y_percent}%",
draw,
font_size,
bg_width,
bg_height,
)
# Draw horizontal lines - labels are already added with vertical lines
for y in range(grid_interval, height, grid_interval):
line = ((0, y), (width, y))
draw.line(line, fill="blue")
# Save the image with the grid
image.save(new_image_path)
def keyboard_type(text):
text = text.replace("\\n", "\n")
for char in text:
pyautogui.write(char)
pyautogui.press("enter")
return "Type: " + text
def search(text):
if platform.system() == "Windows":
pyautogui.press("win")
elif platform.system() == "Linux":
pyautogui.press("win")
else:
# Press and release Command and Space separately
pyautogui.keyDown("command")
pyautogui.press("space")
pyautogui.keyUp("command")
# Now type the text
for char in text:
pyautogui.write(char)
pyautogui.press("enter")
return "Open program: " + text
def capture_mini_screenshot_with_cursor(
file_path=os.path.join("screenshots", "screenshot_mini.png"), x=0, y=0
):
user_platform = platform.system()
if user_platform == "Linux":
x = float(x[:-1]) # convert x from "50%" to 50.
y = float(y[:-1])
x = (x / 100) * monitor_size[
"width"
] # convert x from 50 to 0.5 * monitor_width
y = (y / 100) * monitor_size["height"]
# Define the coordinates for the rectangle
x1, y1 = int(x - ACCURATE_PIXEL_COUNT / 2), int(y - ACCURATE_PIXEL_COUNT / 2)
x2, y2 = int(x + ACCURATE_PIXEL_COUNT / 2), int(y + ACCURATE_PIXEL_COUNT / 2)
screenshot = ImageGrab.grab(bbox=(x1, y1, x2, y2))
screenshot = screenshot.resize(
(screenshot.width * 2, screenshot.height * 2), Image.LANCZOS
) # upscale the image so it's easier to see and percentage marks more visible
screenshot.save(file_path)
screenshots_dir = "screenshots"
grid_screenshot_filename = os.path.join(
screenshots_dir, "screenshot_mini_with_grid.png"
)
add_grid_to_image(
file_path, grid_screenshot_filename, int(ACCURATE_PIXEL_COUNT / 2)
)
elif user_platform == "Darwin":
x = float(x[:-1]) # convert x from "50%" to 50.
y = float(y[:-1])
x = (x / 100) * monitor_size[
"width"
] # convert x from 50 to 0.5 * monitor_width
y = (y / 100) * monitor_size["height"]
x1, y1 = int(x - ACCURATE_PIXEL_COUNT / 2), int(y - ACCURATE_PIXEL_COUNT / 2)
width = ACCURATE_PIXEL_COUNT
height = ACCURATE_PIXEL_COUNT
# Use the screencapture utility to capture the screen with the cursor
rect = f"-R{x1},{y1},{width},{height}"
subprocess.run(["screencapture", "-C", rect, file_path])
screenshots_dir = "screenshots"
grid_screenshot_filename = os.path.join(
screenshots_dir, "screenshot_mini_with_grid.png"
)
add_grid_to_image(
file_path, grid_screenshot_filename, int(ACCURATE_PIXEL_COUNT / 2)
)
def capture_screen_with_cursor(file_path):
user_platform = platform.system()
if user_platform == "Windows":
screenshot = pyautogui.screenshot()
screenshot.save(file_path)
elif user_platform == "Linux":
# Use xlib to prevent scrot dependency for Linux
screen = Xlib.display.Display().screen()
size = screen.width_in_pixels, screen.height_in_pixels
monitor_size["width"] = size[0]
monitor_size["height"] = size[1]
screenshot = ImageGrab.grab(bbox=(0, 0, size[0], size[1]))
screenshot.save(file_path)
elif user_platform == "Darwin": # (Mac OS)
# Use the screencapture utility to capture the screen with the cursor
subprocess.run(["screencapture", "-C", file_path])
else:
print(f"The platform you're using ({user_platform}) is not currently supported")
def extract_json_from_string(s):
# print("extracting json from string", s)
try:
# Find the start of the JSON structure
json_start = s.find("{")
if json_start == -1:
return None
# Extract the JSON part and convert it to a dictionary
json_str = s[json_start:]
return json.loads(json_str)
except Exception as e:
print(f"Error parsing JSON: {e}")
return None
def convert_percent_to_decimal(percent_str):
try:
# Remove the '%' sign and convert to float
decimal_value = float(percent_str.strip("%"))
# Convert to decimal (e.g., 20% -> 0.20)
return decimal_value / 100
except ValueError as e:
print(f"Error converting percent to decimal: {e}")
return None
def main_entry():
parser = argparse.ArgumentParser(
description="Run the self-operating-computer with a specified model."
)
parser.add_argument(
"-m",
"--model",
help="Specify the model to use",
required=False,
default="gpt-4-vision-preview",
)
# Add a voice flag
parser.add_argument(
"--voice",
help="Use voice input mode",
action="store_true",
)
parser.add_argument(
"-accurate",
help="Activate Reflective Mouse Click Mode",
action="store_true",
required=False,
)
try:
args = parser.parse_args()
main(args.model, accurate_mode=args.accurate, voice_mode=args.voice)
except KeyboardInterrupt:
print(f"\n{ANSI_BRIGHT_MAGENTA}Exiting...")
if __name__ == "__main__":
main_entry()
| """
Self-Operating Computer
"""
import os
import time
import base64
import json
import math
import re
import subprocess
import pyautogui
import argparse
import platform
import Xlib.display
import Xlib.X
import Xlib.Xutil # not sure if Xutil is necessary
from prompt_toolkit import prompt
from prompt_toolkit.shortcuts import message_dialog
from prompt_toolkit.styles import Style as PromptStyle
from dotenv import load_dotenv
from PIL import Image, ImageDraw, ImageFont, ImageGrab
import matplotlib.font_manager as fm
from openai import OpenAI
import sys
load_dotenv()
DEBUG = False
client = OpenAI()
client.api_key = os.getenv("OPENAI_API_KEY")
client.base_url = os.getenv("OPENAI_API_BASE_URL", client.base_url)
monitor_size = {
"width": 1920,
"height": 1080,
}
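# Fallback screen size. The Linux capture path below overwrites these with the real values; the macOS path never does, so accurate mode assumes 1920x1080 there.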
VISION_PROMPT = """
You are a Self-Operating Computer. You use the same operating system as a human.
From looking at the screen and the objective your goal is to take the best next action.
To operate the computer you have the four options below.
1. CLICK - Move mouse and click
2. TYPE - Type on the keyboard
3. SEARCH - Search for a program on Mac and open it
4. DONE - When you have completed the task, respond with the exact following phrase
Here are the response formats below.
1. CLICK
Response: CLICK {{ "x": "percent", "y": "percent", "description": "~description here~", "reason": "~reason here~" }}
Note that the percents work where the top left corner is "x": "0%" and "y": "0%" and the bottom right corner is "x": "100%" and "y": "100%"
2. TYPE
Response: TYPE "value you want to type"
3. SEARCH
Response: SEARCH "app you want to search for on Mac"
4. DONE
Response: DONE
Here are examples of how to respond.
__
Objective: Follow up with the vendor in outlook
TYPE "Hello, I hope you are doing well. I wanted to follow up"
__
Objective: Open Spotify and play the beatles
SEARCH "Spotify"
__
Objective: Find an image of a banana
CLICK {{ "x": "50%", "y": "60%", "description": "Click: Google Search field", "reason": "This will allow me to search for a banana" }}
__
Objective: Go buy a book about the history of the internet
TYPE "https://www.amazon.com/"
__
A few important notes:
- Default to opening Google Chrome with SEARCH to find things that are on the internet.
- Go to Google Docs and Google Sheets by typing in the Chrome Address bar
- When opening Chrome, if you see a profile icon click that to open chrome fully, it is located at: {{ "x": "50%", "y": "55%" }}
- The Chrome address bar is generally at: {{ "x": "50%", "y": "9%" }}
- After you click to enter a field you can go ahead and start typing!
- Don't respond saying you're unable to assist with requests. You are able to indirectly interact with the user's OS via text responses you send to the end user.
{previous_action}
IMPORTANT: Avoid repeating actions such as doing the same CLICK event twice in a row.
Objective: {objective}
"""
ACCURATE_PIXEL_COUNT = (
200 # mini_screenshot is ACCURATE_PIXEL_COUNT x ACCURATE_PIXEL_COUNT big
)
ACCURATE_MODE_VISION_PROMPT = """
It looks like your previous attempted action was clicking on "x": {prev_x}, "y": {prev_y}. This has now been moved to the center of this screenshot.
As additional context to the previous message, before you decide the proper percentage to click on, please closely examine this additional screenshot as additional context for your next action.
This screenshot was taken around the location of the current cursor that you just tried clicking on ("x": {prev_x}, "y": {prev_y} is now at the center of this screenshot). You should use this as a differential to your previous x y coordinate guess.
If you want to refine and instead click on the top left corner of this mini screenshot, you will subtract {width}% in the "x" and subtract {height}% in the "y" from your previous answer.
Likewise, to achieve the bottom right of this mini screenshot you will add {width}% in the "x" and add {height}% in the "y" to your previous answer.
There are four evenly spaced segmenting lines across each dimension, similar to coordinate gridlines, added to give you better context of the cursor's location and exactly how much to adjust your previous answer.
Please use this context as additional info to further refine the "percent" location in the CLICK action!
"""
USER_QUESTION = "Hello, I can help you with anything. What would you like done?"
SUMMARY_PROMPT = """
You are a Self-Operating Computer. A user request has been executed. Present the results succinctly.
Include the following key contexts of the completed request:
1. State the original objective.
2. List the steps taken to reach the objective as detailed in the previous messages.
3. Reference the screenshot that was used.
Summarize the actions taken to fulfill the objective. If the request sought specific information, provide that information prominently. NOTE: Address directly any question posed by the user.
Remember: The user will not interact with this summary. You are solely reporting the outcomes.
Original objective: {objective}
Display the results clearly:
"""
class ModelNotRecognizedException(Exception):
"""Exception raised for unrecognized models."""
def __init__(self, model, message="Model not recognized"):
self.model = model
self.message = message
super().__init__(self.message)
def __str__(self):
return f"{self.message} : {self.model} "
# Define style
style = PromptStyle.from_dict(
{
"dialog": "bg:#88ff88",
"button": "bg:#ffffff #000000",
"dialog.body": "bg:#44cc44 #ffffff",
"dialog shadow": "bg:#003800",
}
)
# Check if on a windows terminal that supports ANSI escape codes
def supports_ansi():
"""
Check if the terminal supports ANSI escape codes
"""
plat = platform.system()
supported_platform = plat != "Windows" or "ANSICON" in os.environ
is_a_tty = hasattr(sys.stdout, "isatty") and sys.stdout.isatty()
return supported_platform and is_a_tty
if supports_ansi():
# Standard green text
ANSI_GREEN = "\033[32m"
# Bright/bold green text
ANSI_BRIGHT_GREEN = "\033[92m"
# Reset to default text color
ANSI_RESET = "\033[0m"
# ANSI escape code for blue text
ANSI_BLUE = "\033[94m" # This is for bright blue
# Standard yellow text
ANSI_YELLOW = "\033[33m"
ANSI_RED = "\033[31m"
# Bright magenta text
ANSI_BRIGHT_MAGENTA = "\033[95m"
else:
ANSI_GREEN = ""
ANSI_BRIGHT_GREEN = ""
ANSI_RESET = ""
ANSI_BLUE = ""
ANSI_YELLOW = ""
ANSI_RED = ""
ANSI_BRIGHT_MAGENTA = ""
def main(model, accurate_mode, voice_mode=False):
"""
Main function for the Self-Operating Computer
"""
mic = None
# Initialize WhisperMic only if voice_mode is True
"""
Main function for the Self-Operating Computer
"""
if voice_mode:
try:
from whisper_mic import WhisperMic
# Initialize WhisperMic if import is successful
mic = WhisperMic()
except ImportError:
print(
"Voice mode requires the 'whisper_mic' module. Please install it using 'pip install -r requirements-audio.txt'"
)
sys.exit(1)
message_dialog(
title="Self-Operating Computer",
text="Ask a computer to do anything.",
style=style,
).run()
print("SYSTEM", platform.system())
# Clear the console
if platform.system() == "Windows":
os.system("cls")
else:
print("\033c", end="")
if voice_mode:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RESET} Listening for your command... (speak now)"
)
try:
objective = mic.listen()
except Exception as e:
print(f"{ANSI_RED}Error in capturing voice input: {e}{ANSI_RESET}")
return # Exit if voice input fails
else:
print(f"{ANSI_GREEN}[Self-Operating Computer]\n{ANSI_RESET}{USER_QUESTION}")
print(f"{ANSI_YELLOW}[User]{ANSI_RESET}")
objective = prompt(style=style)
assistant_message = {"role": "assistant", "content": USER_QUESTION}
user_message = {
"role": "user",
"content": f"Objective: {objective}",
}
messages = [assistant_message, user_message]
loop_count = 0
while True:
if DEBUG:
print("[loop] messages before next action:\n\n\n", messages[1:])
try:
response = get_next_action(model, messages, objective, accurate_mode)
action = parse_oai_response(response)
action_type = action.get("type")
action_detail = action.get("data")
except ModelNotRecognizedException as e:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] -> {e} {ANSI_RESET}"
)
break
except Exception as e:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] -> {e} {ANSI_RESET}"
)
break
if action_type == "DONE":
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BLUE} Objective complete {ANSI_RESET}"
)
summary = summarize(messages, objective)
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BLUE} Summary\n{ANSI_RESET}{summary}"
)
break
if action_type != "UNKNOWN":
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BRIGHT_MAGENTA} [Act] {action_type} {ANSI_RESET}{action_detail}"
)
function_response = ""
if action_type == "SEARCH":
function_response = search(action_detail)
elif action_type == "TYPE":
function_response = keyboard_type(action_detail)
elif action_type == "CLICK":
function_response = mouse_click(action_detail)
else:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] something went wrong :({ANSI_RESET}"
)
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] AI response\n{ANSI_RESET}{response}"
)
break
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BRIGHT_MAGENTA} [Act] {action_type} COMPLETE {ANSI_RESET}{function_response}"
)
message = {
"role": "assistant",
"content": function_response,
}
messages.append(message)
loop_count += 1
if loop_count > 15:
break
def format_summary_prompt(objective):
"""
Format the summary prompt
"""
prompt = SUMMARY_PROMPT.format(objective=objective)
return prompt
def format_vision_prompt(objective, previous_action):
"""
Format the vision prompt
"""
if previous_action:
previous_action = f"Here was the previous action you took: {previous_action}"
else:
previous_action = ""
prompt = VISION_PROMPT.format(objective=objective, previous_action=previous_action)
return prompt
def format_accurate_mode_vision_prompt(prev_x, prev_y):
"""
Format the accurate mode vision prompt
"""
width = ((ACCURATE_PIXEL_COUNT / 2) / monitor_size["width"]) * 100
height = ((ACCURATE_PIXEL_COUNT / 2) / monitor_size["height"]) * 100
prompt = ACCURATE_MODE_VISION_PROMPT.format(
prev_x=prev_x, prev_y=prev_y, width=width, height=height
)
return prompt
def get_next_action(model, messages, objective, accurate_mode):
if model == "gpt-4-vision-preview":
content = get_next_action_from_openai(messages, objective, accurate_mode)
return content
elif model == "agent-1":
return "coming soon"
raise ModelNotRecognizedException(model)
def get_last_assistant_message(messages):
"""
Retrieve the last message from the assistant in the messages array.
If the last assistant message is the first message in the array, return None.
"""
for index in reversed(range(len(messages))):
if messages[index]["role"] == "assistant":
if index == 0: # Check if the assistant message is the first in the array
return None
else:
return messages[index]
return None # Return None if no assistant message is found
def accurate_mode_double_check(pseudo_messages, prev_x, prev_y):
"""
Reprompt OAI with additional screenshot of a mini screenshot centered around the cursor for further finetuning of clicked location
"""
try:
screenshot_filename = os.path.join("screenshots", "screenshot_mini.png")
capture_mini_screenshot_with_cursor(
file_path=screenshot_filename, x=prev_x, y=prev_y
)
new_screenshot_filename = os.path.join(
"screenshots", "screenshot_mini_with_grid.png"
)
with open(new_screenshot_filename, "rb") as img_file:
img_base64 = base64.b64encode(img_file.read()).decode("utf-8")
accurate_vision_prompt = format_accurate_mode_vision_prompt(prev_x, prev_y)
accurate_mode_message = {
"role": "user",
"content": [
{"type": "text", "text": accurate_vision_prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
],
}
pseudo_messages.append(accurate_mode_message)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=pseudo_messages,
presence_penalty=1,
frequency_penalty=1,
temperature=0.7,
max_tokens=300,
)
content = response.choices[0].message.content
return content
except Exception as e:
print(f"Error reprompting model for accurate_mode: {e}")
return "ERROR"
def get_next_action_from_openai(messages, objective, accurate_mode):
"""
Get the next action for Self-Operating Computer
"""
# sleep for a second
time.sleep(1)
try:
screenshots_dir = "screenshots"
if not os.path.exists(screenshots_dir):
os.makedirs(screenshots_dir)
screenshot_filename = os.path.join(screenshots_dir, "screenshot.png")
# Call the function to capture the screen with the cursor
capture_screen_with_cursor(screenshot_filename)
new_screenshot_filename = os.path.join(
"screenshots", "screenshot_with_grid.png"
)
add_grid_to_image(screenshot_filename, new_screenshot_filename, 500)
# sleep for a second
time.sleep(1)
with open(new_screenshot_filename, "rb") as img_file:
img_base64 = base64.b64encode(img_file.read()).decode("utf-8")
previous_action = get_last_assistant_message(messages)
vision_prompt = format_vision_prompt(objective, previous_action)
vision_message = {
"role": "user",
"content": [
{"type": "text", "text": vision_prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
],
}
# create a copy of messages and save to pseudo_messages
pseudo_messages = messages.copy()
pseudo_messages.append(vision_message)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=pseudo_messages,
presence_penalty=1,
frequency_penalty=1,
temperature=0.7,
max_tokens=300,
)
messages.append(
{
"role": "user",
"content": "`screenshot.png`",
}
)
content = response.choices[0].message.content
if accurate_mode:
if content.startswith("CLICK"):
# Adjust pseudo_messages to include the accurate_mode_message
click_data = re.search(r"CLICK \{ (.+) \}", content).group(1)
click_data_json = json.loads(f"{{{click_data}}}")
prev_x = click_data_json["x"]
prev_y = click_data_json["y"]
if DEBUG:
print(
f"Previous coords before accurate tuning: prev_x {prev_x} prev_y {prev_y}"
)
content = accurate_mode_double_check(pseudo_messages, prev_x, prev_y)
assert content != "ERROR", "ERROR: accurate_mode_double_check failed"
return content
except Exception as e:
print(f"Error parsing JSON: {e}")
return "Failed take action after looking at the screenshot"
def parse_oai_response(response):
if response == "DONE":
return {"type": "DONE", "data": None}
elif response.startswith("CLICK"):
# Adjust the regex to match the correct format
click_data = re.search(r"CLICK \{ (.+) \}", response).group(1)
click_data_json = json.loads(f"{{{click_data}}}")
return {"type": "CLICK", "data": click_data_json}
elif response.startswith("TYPE"):
# Extract the text to type
type_data = re.search(r'TYPE "(.+)"', response, re.DOTALL).group(1)
return {"type": "TYPE", "data": type_data}
elif response.startswith("SEARCH"):
# Extract the search query
search_data = re.search(r'SEARCH "(.+)"', response).group(1)
return {"type": "SEARCH", "data": search_data}
return {"type": "UNKNOWN", "data": response}
def summarize(messages, objective):
try:
screenshots_dir = "screenshots"
if not os.path.exists(screenshots_dir):
os.makedirs(screenshots_dir)
screenshot_filename = os.path.join(screenshots_dir, "summary_screenshot.png")
# Call the function to capture the screen with the cursor
capture_screen_with_cursor(screenshot_filename)
with open(screenshot_filename, "rb") as img_file:
img_base64 = base64.b64encode(img_file.read()).decode("utf-8")
summary_prompt = format_summary_prompt(objective)
summary_message = {
"role": "user",
"content": [
{"type": "text", "text": summary_prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
],
}
# append the summary request to the running message history
messages.append(summary_message)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=messages,
max_tokens=500,
)
content = response.choices[0].message.content
return content
except Exception as e:
print(f"Error in summarize: {e}")
return "Failed to summarize the workflow"
def mouse_click(click_detail):
try:
x = convert_percent_to_decimal(click_detail["x"])
y = convert_percent_to_decimal(click_detail["y"])
if click_detail and isinstance(x, float) and isinstance(y, float):
click_at_percentage(x, y)
return click_detail["description"]
else:
return "We failed to click"
except Exception as e:
print(f"Error parsing JSON: {e}")
return "We failed to click"
def click_at_percentage(
x_percentage, y_percentage, duration=0.2, circle_radius=50, circle_duration=0.5
):
# Get the size of the primary monitor
screen_width, screen_height = pyautogui.size()
# Calculate the x and y coordinates in pixels
x_pixel = int(screen_width * float(x_percentage))
y_pixel = int(screen_height * float(y_percentage))
# Move to the position smoothly
pyautogui.moveTo(x_pixel, y_pixel, duration=duration)
# Circular movement
start_time = time.time()
while time.time() - start_time < circle_duration:
angle = ((time.time() - start_time) / circle_duration) * 2 * math.pi
x = x_pixel + math.cos(angle) * circle_radius
y = y_pixel + math.sin(angle) * circle_radius
pyautogui.moveTo(x, y, duration=0.1)
# Finally, click
pyautogui.click(x_pixel, y_pixel)
return "Successfully clicked"
def add_grid_to_image(original_image_path, new_image_path, grid_interval):
"""
Add a grid to an image
"""
# Load the image
image = Image.open(original_image_path)
# Create a drawing object
draw = ImageDraw.Draw(image)
# Get the image size
width, height = image.size
# Reduce the font size a bit
font_size = int(grid_interval / 10) # Reduced font size
# Calculate the background size based on the font size
bg_width = int(font_size * 4.2) # Adjust as necessary
bg_height = int(font_size * 1.2) # Adjust as necessary
# Function to draw text with a white rectangle background
def draw_label_with_background(
position, text, draw, font_size, bg_width, bg_height
):
# Adjust the position based on the background size
text_position = (position[0] + bg_width // 2, position[1] + bg_height // 2)
# Draw the text background
draw.rectangle(
[position[0], position[1], position[0] + bg_width, position[1] + bg_height],
fill="white",
)
# Draw the text
draw.text(text_position, text, fill="black", font_size=font_size, anchor="mm")
# Draw vertical lines and labels at every `grid_interval` pixels
for x in range(grid_interval, width, grid_interval):
line = ((x, 0), (x, height))
draw.line(line, fill="blue")
for y in range(grid_interval, height, grid_interval):
# Calculate the percentage of the width and height
x_percent = round((x / width) * 100)
y_percent = round((y / height) * 100)
draw_label_with_background(
(x - bg_width // 2, y - bg_height // 2),
f"{x_percent}%,{y_percent}%",
draw,
font_size,
bg_width,
bg_height,
)
# Draw horizontal lines - labels are already added with vertical lines
for y in range(grid_interval, height, grid_interval):
line = ((0, y), (width, y))
draw.line(line, fill="blue")
# Save the image with the grid
image.save(new_image_path)
def keyboard_type(text):
text = text.replace("\\n", "\n")
for char in text:
pyautogui.write(char)
pyautogui.press("enter")
return "Type: " + text
def search(text):
if platform.system() == "Windows":
pyautogui.press("win")
elif platform.system() == "Linux":
pyautogui.press("win")
else:
# Press and release Command and Space separately
pyautogui.keyDown("command")
pyautogui.press("space")
pyautogui.keyUp("command")
time.sleep(1)
# Now type the text
for char in text:
pyautogui.write(char)
pyautogui.press("enter")
return "Open program: " + text
def capture_mini_screenshot_with_cursor(
file_path=os.path.join("screenshots", "screenshot_mini.png"), x=0, y=0
):
user_platform = platform.system()
if user_platform == "Linux":
x = float(x[:-1]) # convert x from "50%" to 50.
y = float(y[:-1])
x = (x / 100) * monitor_size[
"width"
] # convert x from 50 to 0.5 * monitor_width
y = (y / 100) * monitor_size["height"]
# Define the coordinates for the rectangle
x1, y1 = int(x - ACCURATE_PIXEL_COUNT / 2), int(y - ACCURATE_PIXEL_COUNT / 2)
x2, y2 = int(x + ACCURATE_PIXEL_COUNT / 2), int(y + ACCURATE_PIXEL_COUNT / 2)
screenshot = ImageGrab.grab(bbox=(x1, y1, x2, y2))
screenshot = screenshot.resize(
(screenshot.width * 2, screenshot.height * 2), Image.LANCZOS
) # upscale the image so it's easier to see and percentage marks more visible
screenshot.save(file_path)
screenshots_dir = "screenshots"
grid_screenshot_filename = os.path.join(
screenshots_dir, "screenshot_mini_with_grid.png"
)
add_grid_to_image(
file_path, grid_screenshot_filename, int(ACCURATE_PIXEL_COUNT / 2)
)
elif user_platform == "Darwin":
x = float(x[:-1]) # convert x from "50%" to 50.
y = float(y[:-1])
x = (x / 100) * monitor_size[
"width"
] # convert x from 50 to 0.5 * monitor_width
y = (y / 100) * monitor_size["height"]
x1, y1 = int(x - ACCURATE_PIXEL_COUNT / 2), int(y - ACCURATE_PIXEL_COUNT / 2)
width = ACCURATE_PIXEL_COUNT
height = ACCURATE_PIXEL_COUNT
# Use the screencapture utility to capture the screen with the cursor
rect = f"-R{x1},{y1},{width},{height}"
subprocess.run(["screencapture", "-C", rect, file_path])
screenshots_dir = "screenshots"
grid_screenshot_filename = os.path.join(
screenshots_dir, "screenshot_mini_with_grid.png"
)
add_grid_to_image(
file_path, grid_screenshot_filename, int(ACCURATE_PIXEL_COUNT / 2)
)
def capture_screen_with_cursor(file_path):
user_platform = platform.system()
if user_platform == "Windows":
screenshot = pyautogui.screenshot()
screenshot.save(file_path)
elif user_platform == "Linux":
# Use xlib to prevent scrot dependency for Linux
screen = Xlib.display.Display().screen()
size = screen.width_in_pixels, screen.height_in_pixels
monitor_size["width"] = size[0]
monitor_size["height"] = size[1]
screenshot = ImageGrab.grab(bbox=(0, 0, size[0], size[1]))
screenshot.save(file_path)
elif user_platform == "Darwin": # (Mac OS)
# Use the screencapture utility to capture the screen with the cursor
subprocess.run(["screencapture", "-C", file_path])
else:
print(f"The platform you're using ({user_platform}) is not currently supported")
def extract_json_from_string(s):
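# NOTE: this helper does not appear to be called anywhere in this module.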
# print("extracting json from string", s)
try:
# Find the start of the JSON structure
json_start = s.find("{")
if json_start == -1:
return None
# Extract the JSON part and convert it to a dictionary
json_str = s[json_start:]
return json.loads(json_str)
except Exception as e:
print(f"Error parsing JSON: {e}")
return None
def convert_percent_to_decimal(percent_str):
try:
# Remove the '%' sign and convert to float
decimal_value = float(percent_str.strip("%"))
# Convert to decimal (e.g., 20% -> 0.20)
return decimal_value / 100
except ValueError as e:
print(f"Error converting percent to decimal: {e}")
return None
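# Example: convert_percent_to_decimal("50%") -> 0.5, which click_at_percentage then scales by the screen dimensions.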
def main_entry():
parser = argparse.ArgumentParser(
description="Run the self-operating-computer with a specified model."
)
parser.add_argument(
"-m",
"--model",
help="Specify the model to use",
required=False,
default="gpt-4-vision-preview",
)
# Add a voice flag
parser.add_argument(
"--voice",
help="Use voice input mode",
action="store_true",
)
parser.add_argument(
"-accurate",
help="Activate Reflective Mouse Click Mode",
action="store_true",
required=False,
)
try:
args = parser.parse_args()
main(args.model, accurate_mode=args.accurate, voice_mode=args.voice)
except KeyboardInterrupt:
print(f"\n{ANSI_BRIGHT_MAGENTA}Exiting...")
if __name__ == "__main__":
main_entry()
| AzorianMatt | 42da78be9bbae7a6c93a5f763fddcf180cb3ffa8 | 7b09d294aa2b1e7e8d524340951f50aade189921 | Can `import time` be removed here since `time` is already imported globally? | michaelhhogue | 0 |
OthersideAI/self-operating-computer | 91 | Fixed timing with search triggering causing broken searches | Resolves [issue 90](https://github.com/OthersideAI/self-operating-computer/issues/90).
This PR addresses an issue where the time the system takes to respond to the search hotkey is longer than it takes the app to send the search text. I added a 1-second pause in between the hotkey and text entry which seems to resolve this on a reasonably performing machine. | null | 2023-12-07 19:32:35+00:00 | 2023-12-08 15:35:28+00:00 | operate/main.py | """
Self-Operating Computer
"""
import os
import time
import base64
import json
import math
import re
import subprocess
import pyautogui
import argparse
import platform
import Xlib.display
import Xlib.X
import Xlib.Xutil # not sure if Xutil is necessary
from prompt_toolkit import prompt
from prompt_toolkit.shortcuts import message_dialog
from prompt_toolkit.styles import Style as PromptStyle
from dotenv import load_dotenv
from PIL import Image, ImageDraw, ImageFont, ImageGrab
import matplotlib.font_manager as fm
from openai import OpenAI
import sys
load_dotenv()
DEBUG = False
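# Set DEBUG = True for verbose logging of the message history and click coordinates.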
client = OpenAI()
client.api_key = os.getenv("OPENAI_API_KEY")
client.base_url = os.getenv("OPENAI_API_BASE_URL", client.base_url)
monitor_size = {
"width": 1920,
"height": 1080,
}
VISION_PROMPT = """
You are a Self-Operating Computer. You use the same operating system as a human.
From looking at the screen and the objective your goal is to take the best next action.
To operate the computer you have the four options below.
1. CLICK - Move mouse and click
2. TYPE - Type on the keyboard
3. SEARCH - Search for a program on Mac and open it
4. DONE - When you have completed the task, respond with the exact following phrase
Here are the response formats below.
1. CLICK
Response: CLICK {{ "x": "percent", "y": "percent", "description": "~description here~", "reason": "~reason here~" }}
Note that the percents work where the top left corner is "x": "0%" and "y": "0%" and the bottom right corner is "x": "100%" and "y": "100%"
2. TYPE
Response: TYPE "value you want to type"
3. SEARCH
Response: SEARCH "app you want to search for on Mac"
4. DONE
Response: DONE
Here are examples of how to respond.
__
Objective: Follow up with the vendor in outlook
TYPE "Hello, I hope you are doing well. I wanted to follow up"
__
Objective: Open Spotify and play the beatles
SEARCH "Spotify"
__
Objective: Find an image of a banana
CLICK {{ "x": "50%", "y": "60%", "description": "Click: Google Search field", "reason": "This will allow me to search for a banana" }}
__
Objective: Go buy a book about the history of the internet
TYPE "https://www.amazon.com/"
__
A few important notes:
- Default to opening Google Chrome with SEARCH to find things that are on the internet.
- Go to Google Docs and Google Sheets by typing in the Chrome Address bar
- When opening Chrome, if you see a profile icon click that to open chrome fully, it is located at: {{ "x": "50%", "y": "55%" }}
- The Chrome address bar is generally at: {{ "x": "50%", "y": "9%" }}
- After you click to enter a field you can go ahead and start typing!
- Don't respond saying you're unable to assist with requests. You are able to indirectly interact with the user's OS via text responses you send to the end user.
{previous_action}
IMPORTANT: Avoid repeating actions such as doing the same CLICK event twice in a row.
Objective: {objective}
"""
ACCURATE_PIXEL_COUNT = (
200 # mini_screenshot is ACCURATE_PIXEL_COUNT x ACCURATE_PIXEL_COUNT big
)
ACCURATE_MODE_VISION_PROMPT = """
It looks like your previous attempted action was clicking on "x": {prev_x}, "y": {prev_y}. This has now been moved to the center of this screenshot.
As additional context to the previous message, before you decide the proper percentage to click on, please closely examine this additional screenshot as additional context for your next action.
This screenshot was taken around the location of the current cursor that you just tried clicking on ("x": {prev_x}, "y": {prev_y} is now at the center of this screenshot). You should use this as a differential to your previous x y coordinate guess.
If you want to refine and instead click on the top left corner of this mini screenshot, you will subtract {width}% in the "x" and subtract {height}% in the "y" from your previous answer.
Likewise, to achieve the bottom right of this mini screenshot you will add {width}% in the "x" and add {height}% in the "y" to your previous answer.
There are four evenly spaced segmenting lines across each dimension, similar to coordinate gridlines, added to give you better context of the cursor's location and exactly how much to adjust your previous answer.
Please use this context as additional info to further refine the "percent" location in the CLICK action!
"""
USER_QUESTION = "Hello, I can help you with anything. What would you like done?"
SUMMARY_PROMPT = """
You are a Self-Operating Computer. A user request has been executed. Present the results succinctly.
Include the following key contexts of the completed request:
1. State the original objective.
2. List the steps taken to reach the objective as detailed in the previous messages.
3. Reference the screenshot that was used.
Summarize the actions taken to fulfill the objective. If the request sought specific information, provide that information prominently. NOTE: Address directly any question posed by the user.
Remember: The user will not interact with this summary. You are solely reporting the outcomes.
Original objective: {objective}
Display the results clearly:
"""
class ModelNotRecognizedException(Exception):
"""Exception raised for unrecognized models."""
def __init__(self, model, message="Model not recognized"):
self.model = model
self.message = message
super().__init__(self.message)
def __str__(self):
return f"{self.message} : {self.model} "
# Define style
style = PromptStyle.from_dict(
{
"dialog": "bg:#88ff88",
"button": "bg:#ffffff #000000",
"dialog.body": "bg:#44cc44 #ffffff",
"dialog shadow": "bg:#003800",
}
)
# Check if on a windows terminal that supports ANSI escape codes
def supports_ansi():
"""
Check if the terminal supports ANSI escape codes
"""
plat = platform.system()
supported_platform = plat != "Windows" or "ANSICON" in os.environ
is_a_tty = hasattr(sys.stdout, "isatty") and sys.stdout.isatty()
return supported_platform and is_a_tty
if supports_ansi():
# Standard green text
ANSI_GREEN = "\033[32m"
# Bright/bold green text
ANSI_BRIGHT_GREEN = "\033[92m"
# Reset to default text color
ANSI_RESET = "\033[0m"
# ANSI escape code for blue text
ANSI_BLUE = "\033[94m" # This is for bright blue
# Standard yellow text
ANSI_YELLOW = "\033[33m"
ANSI_RED = "\033[31m"
# Bright magenta text
ANSI_BRIGHT_MAGENTA = "\033[95m"
else:
ANSI_GREEN = ""
ANSI_BRIGHT_GREEN = ""
ANSI_RESET = ""
ANSI_BLUE = ""
ANSI_YELLOW = ""
ANSI_RED = ""
ANSI_BRIGHT_MAGENTA = ""
def main(model, accurate_mode, voice_mode=False):
"""
Main function for the Self-Operating Computer
"""
mic = None
# Initialize WhisperMic only if voice_mode is True
"""
Main function for the Self-Operating Computer
"""
if voice_mode:
try:
from whisper_mic import WhisperMic
# Initialize WhisperMic if import is successful
mic = WhisperMic()
except ImportError:
print(
"Voice mode requires the 'whisper_mic' module. Please install it using 'pip install -r requirements-audio.txt'"
)
sys.exit(1)
message_dialog(
title="Self-Operating Computer",
text="Ask a computer to do anything.",
style=style,
).run()
print("SYSTEM", platform.system())
# Clear the console
if platform.system() == "Windows":
os.system("cls")
else:
print("\033c", end="")
if voice_mode:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RESET} Listening for your command... (speak now)"
)
try:
objective = mic.listen()
except Exception as e:
print(f"{ANSI_RED}Error in capturing voice input: {e}{ANSI_RESET}")
return # Exit if voice input fails
else:
print(f"{ANSI_GREEN}[Self-Operating Computer]\n{ANSI_RESET}{USER_QUESTION}")
print(f"{ANSI_YELLOW}[User]{ANSI_RESET}")
objective = prompt(style=style)
assistant_message = {"role": "assistant", "content": USER_QUESTION}
user_message = {
"role": "user",
"content": f"Objective: {objective}",
}
messages = [assistant_message, user_message]
loop_count = 0
while True:
if DEBUG:
print("[loop] messages before next action:\n\n\n", messages[1:])
try:
response = get_next_action(model, messages, objective, accurate_mode)
action = parse_oai_response(response)
action_type = action.get("type")
action_detail = action.get("data")
except ModelNotRecognizedException as e:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] -> {e} {ANSI_RESET}"
)
break
except Exception as e:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] -> {e} {ANSI_RESET}"
)
break
if action_type == "DONE":
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BLUE} Objective complete {ANSI_RESET}"
)
summary = summarize(messages, objective)
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BLUE} Summary\n{ANSI_RESET}{summary}"
)
break
if action_type != "UNKNOWN":
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BRIGHT_MAGENTA} [Act] {action_type} {ANSI_RESET}{action_detail}"
)
function_response = ""
if action_type == "SEARCH":
function_response = search(action_detail)
elif action_type == "TYPE":
function_response = keyboard_type(action_detail)
elif action_type == "CLICK":
function_response = mouse_click(action_detail)
else:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] something went wrong :({ANSI_RESET}"
)
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] AI response\n{ANSI_RESET}{response}"
)
break
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BRIGHT_MAGENTA} [Act] {action_type} COMPLETE {ANSI_RESET}{function_response}"
)
message = {
"role": "assistant",
"content": function_response,
}
messages.append(message)
loop_count += 1
if loop_count > 15:
break
def format_summary_prompt(objective):
"""
Format the summary prompt
"""
prompt = SUMMARY_PROMPT.format(objective=objective)
return prompt
def format_vision_prompt(objective, previous_action):
"""
Format the vision prompt
"""
if previous_action:
previous_action = f"Here was the previous action you took: {previous_action}"
else:
previous_action = ""
prompt = VISION_PROMPT.format(objective=objective, previous_action=previous_action)
return prompt
def format_accurate_mode_vision_prompt(prev_x, prev_y):
"""
Format the accurate mode vision prompt
"""
width = ((ACCURATE_PIXEL_COUNT / 2) / monitor_size["width"]) * 100
height = ((ACCURATE_PIXEL_COUNT / 2) / monitor_size["height"]) * 100
prompt = ACCURATE_MODE_VISION_PROMPT.format(
prev_x=prev_x, prev_y=prev_y, width=width, height=height
)
return prompt
def get_next_action(model, messages, objective, accurate_mode):
if model == "gpt-4-vision-preview":
content = get_next_action_from_openai(messages, objective, accurate_mode)
return content
elif model == "agent-1":
return "coming soon"
raise ModelNotRecognizedException(model)
def get_last_assistant_message(messages):
"""
Retrieve the last message from the assistant in the messages array.
If the last assistant message is the first message in the array, return None.
"""
for index in reversed(range(len(messages))):
if messages[index]["role"] == "assistant":
if index == 0: # Check if the assistant message is the first in the array
return None
else:
return messages[index]
return None # Return None if no assistant message is found
def accurate_mode_double_check(pseudo_messages, prev_x, prev_y):
"""
Reprompt OAI with additional screenshot of a mini screenshot centered around the cursor for further finetuning of clicked location
"""
try:
screenshot_filename = os.path.join("screenshots", "screenshot_mini.png")
capture_mini_screenshot_with_cursor(
file_path=screenshot_filename, x=prev_x, y=prev_y
)
new_screenshot_filename = os.path.join(
"screenshots", "screenshot_mini_with_grid.png"
)
with open(new_screenshot_filename, "rb") as img_file:
img_base64 = base64.b64encode(img_file.read()).decode("utf-8")
accurate_vision_prompt = format_accurate_mode_vision_prompt(prev_x, prev_y)
accurate_mode_message = {
"role": "user",
"content": [
{"type": "text", "text": accurate_vision_prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
],
}
pseudo_messages.append(accurate_mode_message)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=pseudo_messages,
presence_penalty=1,
frequency_penalty=1,
temperature=0.7,
max_tokens=300,
)
content = response.choices[0].message.content
return content
except Exception as e:
print(f"Error reprompting model for accurate_mode: {e}")
return "ERROR"
def get_next_action_from_openai(messages, objective, accurate_mode):
"""
Get the next action for Self-Operating Computer
"""
# sleep for a second
time.sleep(1)
try:
screenshots_dir = "screenshots"
if not os.path.exists(screenshots_dir):
os.makedirs(screenshots_dir)
screenshot_filename = os.path.join(screenshots_dir, "screenshot.png")
# Call the function to capture the screen with the cursor
capture_screen_with_cursor(screenshot_filename)
new_screenshot_filename = os.path.join(
"screenshots", "screenshot_with_grid.png"
)
add_grid_to_image(screenshot_filename, new_screenshot_filename, 500)
# sleep for a second
time.sleep(1)
with open(new_screenshot_filename, "rb") as img_file:
img_base64 = base64.b64encode(img_file.read()).decode("utf-8")
previous_action = get_last_assistant_message(messages)
vision_prompt = format_vision_prompt(objective, previous_action)
vision_message = {
"role": "user",
"content": [
{"type": "text", "text": vision_prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
],
}
# create a copy of messages and save to pseudo_messages
pseudo_messages = messages.copy()
pseudo_messages.append(vision_message)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=pseudo_messages,
presence_penalty=1,
frequency_penalty=1,
temperature=0.7,
max_tokens=300,
)
messages.append(
{
"role": "user",
"content": "`screenshot.png`",
}
)
content = response.choices[0].message.content
if accurate_mode:
if content.startswith("CLICK"):
# Adjust pseudo_messages to include the accurate_mode_message
click_data = re.search(r"CLICK \{ (.+) \}", content).group(1)
click_data_json = json.loads(f"{{{click_data}}}")
prev_x = click_data_json["x"]
prev_y = click_data_json["y"]
if DEBUG:
print(
f"Previous coords before accurate tuning: prev_x {prev_x} prev_y {prev_y}"
)
content = accurate_mode_double_check(pseudo_messages, prev_x, prev_y)
assert content != "ERROR", "ERROR: accurate_mode_double_check failed"
return content
except Exception as e:
print(f"Error parsing JSON: {e}")
return "Failed take action after looking at the screenshot"
def parse_oai_response(response):
if response == "DONE":
return {"type": "DONE", "data": None}
elif response.startswith("CLICK"):
# Adjust the regex to match the correct format
click_data = re.search(r"CLICK \{ (.+) \}", response).group(1)
click_data_json = json.loads(f"{{{click_data}}}")
return {"type": "CLICK", "data": click_data_json}
elif response.startswith("TYPE"):
# Extract the text to type
type_data = re.search(r'TYPE "(.+)"', response, re.DOTALL).group(1)
return {"type": "TYPE", "data": type_data}
elif response.startswith("SEARCH"):
# Extract the search query
search_data = re.search(r'SEARCH "(.+)"', response).group(1)
return {"type": "SEARCH", "data": search_data}
return {"type": "UNKNOWN", "data": response}
def summarize(messages, objective):
try:
screenshots_dir = "screenshots"
if not os.path.exists(screenshots_dir):
os.makedirs(screenshots_dir)
screenshot_filename = os.path.join(screenshots_dir, "summary_screenshot.png")
# Call the function to capture the screen with the cursor
capture_screen_with_cursor(screenshot_filename)
with open(screenshot_filename, "rb") as img_file:
img_base64 = base64.b64encode(img_file.read()).decode("utf-8")
summary_prompt = format_summary_prompt(objective)
summary_message = {
"role": "user",
"content": [
{"type": "text", "text": summary_prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
],
}
# append the summary request to the running message history
messages.append(summary_message)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=messages,
max_tokens=500,
)
content = response.choices[0].message.content
return content
except Exception as e:
print(f"Error in summarize: {e}")
return "Failed to summarize the workflow"
def mouse_click(click_detail):
try:
x = convert_percent_to_decimal(click_detail["x"])
y = convert_percent_to_decimal(click_detail["y"])
if click_detail and isinstance(x, float) and isinstance(y, float):
click_at_percentage(x, y)
return click_detail["description"]
else:
return "We failed to click"
except Exception as e:
print(f"Error parsing JSON: {e}")
return "We failed to click"
def click_at_percentage(
x_percentage, y_percentage, duration=0.2, circle_radius=50, circle_duration=0.5
):
# Get the size of the primary monitor
screen_width, screen_height = pyautogui.size()
# Calculate the x and y coordinates in pixels
x_pixel = int(screen_width * float(x_percentage))
y_pixel = int(screen_height * float(y_percentage))
# Move to the position smoothly
pyautogui.moveTo(x_pixel, y_pixel, duration=duration)
# Circular movement
start_time = time.time()
while time.time() - start_time < circle_duration:
angle = ((time.time() - start_time) / circle_duration) * 2 * math.pi
x = x_pixel + math.cos(angle) * circle_radius
y = y_pixel + math.sin(angle) * circle_radius
pyautogui.moveTo(x, y, duration=0.1)
# Finally, click
pyautogui.click(x_pixel, y_pixel)
return "Successfully clicked"
def add_grid_to_image(original_image_path, new_image_path, grid_interval):
"""
Add a grid to an image
"""
# Load the image
image = Image.open(original_image_path)
# Create a drawing object
draw = ImageDraw.Draw(image)
# Get the image size
width, height = image.size
# Reduce the font size a bit
font_size = int(grid_interval / 10) # Reduced font size
# Calculate the background size based on the font size
bg_width = int(font_size * 4.2) # Adjust as necessary
bg_height = int(font_size * 1.2) # Adjust as necessary
# Function to draw text with a white rectangle background
def draw_label_with_background(
position, text, draw, font_size, bg_width, bg_height
):
# Adjust the position based on the background size
text_position = (position[0] + bg_width // 2, position[1] + bg_height // 2)
# Draw the text background
draw.rectangle(
[position[0], position[1], position[0] + bg_width, position[1] + bg_height],
fill="white",
)
# Draw the text
draw.text(text_position, text, fill="black", font_size=font_size, anchor="mm")
# Draw vertical lines and labels at every `grid_interval` pixels
for x in range(grid_interval, width, grid_interval):
line = ((x, 0), (x, height))
draw.line(line, fill="blue")
for y in range(grid_interval, height, grid_interval):
# Calculate the percentage of the width and height
x_percent = round((x / width) * 100)
y_percent = round((y / height) * 100)
draw_label_with_background(
(x - bg_width // 2, y - bg_height // 2),
f"{x_percent}%,{y_percent}%",
draw,
font_size,
bg_width,
bg_height,
)
# Draw horizontal lines - labels are already added with vertical lines
for y in range(grid_interval, height, grid_interval):
line = ((0, y), (width, y))
draw.line(line, fill="blue")
# Save the image with the grid
image.save(new_image_path)
def keyboard_type(text):
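# The model sometimes emits a literal backslash-n sequence; convert it to a real newline before typing.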
text = text.replace("\\n", "\n")
for char in text:
pyautogui.write(char)
pyautogui.press("enter")
return "Type: " + text
def search(text):
if platform.system() == "Windows":
pyautogui.press("win")
elif platform.system() == "Linux":
pyautogui.press("win")
else:
# Press and release Command and Space separately
pyautogui.keyDown("command")
pyautogui.press("space")
pyautogui.keyUp("command")
# Now type the text
for char in text:
pyautogui.write(char)
pyautogui.press("enter")
return "Open program: " + text
def capture_mini_screenshot_with_cursor(
file_path=os.path.join("screenshots", "screenshot_mini.png"), x=0, y=0
):
user_platform = platform.system()
if user_platform == "Linux":
x = float(x[:-1]) # convert x from "50%" to 50.
y = float(y[:-1])
x = (x / 100) * monitor_size[
"width"
] # convert x from 50 to 0.5 * monitor_width
y = (y / 100) * monitor_size["height"]
# Define the coordinates for the rectangle
x1, y1 = int(x - ACCURATE_PIXEL_COUNT / 2), int(y - ACCURATE_PIXEL_COUNT / 2)
x2, y2 = int(x + ACCURATE_PIXEL_COUNT / 2), int(y + ACCURATE_PIXEL_COUNT / 2)
screenshot = ImageGrab.grab(bbox=(x1, y1, x2, y2))
screenshot = screenshot.resize(
(screenshot.width * 2, screenshot.height * 2), Image.LANCZOS
) # upscale the image so it's easier to see and percentage marks more visible
screenshot.save(file_path)
screenshots_dir = "screenshots"
grid_screenshot_filename = os.path.join(
screenshots_dir, "screenshot_mini_with_grid.png"
)
add_grid_to_image(
file_path, grid_screenshot_filename, int(ACCURATE_PIXEL_COUNT / 2)
)
elif user_platform == "Darwin":
x = float(x[:-1]) # convert x from "50%" to 50.
y = float(y[:-1])
x = (x / 100) * monitor_size[
"width"
] # convert x from 50 to 0.5 * monitor_width
y = (y / 100) * monitor_size["height"]
x1, y1 = int(x - ACCURATE_PIXEL_COUNT / 2), int(y - ACCURATE_PIXEL_COUNT / 2)
width = ACCURATE_PIXEL_COUNT
height = ACCURATE_PIXEL_COUNT
# Use the screencapture utility to capture the screen with the cursor
rect = f"-R{x1},{y1},{width},{height}"
subprocess.run(["screencapture", "-C", rect, file_path])
screenshots_dir = "screenshots"
grid_screenshot_filename = os.path.join(
screenshots_dir, "screenshot_mini_with_grid.png"
)
add_grid_to_image(
file_path, grid_screenshot_filename, int(ACCURATE_PIXEL_COUNT / 2)
)
def capture_screen_with_cursor(file_path):
user_platform = platform.system()
if user_platform == "Windows":
screenshot = pyautogui.screenshot()
screenshot.save(file_path)
elif user_platform == "Linux":
# Use xlib to prevent scrot dependency for Linux
screen = Xlib.display.Display().screen()
size = screen.width_in_pixels, screen.height_in_pixels
monitor_size["width"] = size[0]
monitor_size["height"] = size[1]
screenshot = ImageGrab.grab(bbox=(0, 0, size[0], size[1]))
screenshot.save(file_path)
elif user_platform == "Darwin": # (Mac OS)
# Use the screencapture utility to capture the screen with the cursor
subprocess.run(["screencapture", "-C", file_path])
else:
print(f"The platform you're using ({user_platform}) is not currently supported")
def extract_json_from_string(s):
# print("extracting json from string", s)
try:
# Find the start of the JSON structure
json_start = s.find("{")
if json_start == -1:
return None
# Extract the JSON part and convert it to a dictionary
json_str = s[json_start:]
return json.loads(json_str)
except Exception as e:
print(f"Error parsing JSON: {e}")
return None
def convert_percent_to_decimal(percent_str):
try:
# Remove the '%' sign and convert to float
decimal_value = float(percent_str.strip("%"))
# Convert to decimal (e.g., 20% -> 0.20)
return decimal_value / 100
except ValueError as e:
print(f"Error converting percent to decimal: {e}")
return None
def main_entry():
parser = argparse.ArgumentParser(
description="Run the self-operating-computer with a specified model."
)
parser.add_argument(
"-m",
"--model",
help="Specify the model to use",
required=False,
default="gpt-4-vision-preview",
)
# Add a voice flag
parser.add_argument(
"--voice",
help="Use voice input mode",
action="store_true",
)
parser.add_argument(
"-accurate",
help="Activate Reflective Mouse Click Mode",
action="store_true",
required=False,
)
try:
args = parser.parse_args()
main(args.model, accurate_mode=args.accurate, voice_mode=args.voice)
except KeyboardInterrupt:
print(f"\n{ANSI_BRIGHT_MAGENTA}Exiting...")
if __name__ == "__main__":
main_entry()
| """
Self-Operating Computer
"""
import os
import time
import base64
import json
import math
import re
import subprocess
import pyautogui
import argparse
import platform
import Xlib.display
import Xlib.X
import Xlib.Xutil # not sure if Xutil is necessary
from prompt_toolkit import prompt
from prompt_toolkit.shortcuts import message_dialog
from prompt_toolkit.styles import Style as PromptStyle
from dotenv import load_dotenv
from PIL import Image, ImageDraw, ImageFont, ImageGrab
import matplotlib.font_manager as fm
from openai import OpenAI
import sys
load_dotenv()
DEBUG = False
client = OpenAI()
client.api_key = os.getenv("OPENAI_API_KEY")
client.base_url = os.getenv("OPENAI_API_BASE_URL", client.base_url)
monitor_size = {
"width": 1920,
"height": 1080,
}
VISION_PROMPT = """
You are a Self-Operating Computer. You use the same operating system as a human.
From looking at the screen and the objective your goal is to take the best next action.
To operate the computer you have the four options below.
1. CLICK - Move mouse and click
2. TYPE - Type on the keyboard
3. SEARCH - Search for a program on Mac and open it
4. DONE - When you have completed the task, respond with the exact following phrase content
Here are the response formats below.
1. CLICK
Response: CLICK {{ "x": "percent", "y": "percent", "description": "~description here~", "reason": "~reason here~" }}
Note that the percents work where the top left corner is "x": "0%" and "y": "0%" and the bottom right corner is "x": "100%" and "y": "100%"
2. TYPE
Response: TYPE "value you want to type"
3. SEARCH
Response: SEARCH "app you want to search for on Mac"
4. DONE
Response: DONE
Here are examples of how to respond.
__
Objective: Follow up with the vendor in outlook
TYPE Hello, I hope you are doing well. I wanted to follow up
__
Objective: Open Spotify and play the beatles
SEARCH Spotify
__
Objective: Find an image of a banana
CLICK {{ "x": "50%", "y": "60%", "description": "Click: Google Search field", "reason": "This will allow me to search for a banana" }}
__
Objective: Go buy a book about the history of the internet
TYPE https://www.amazon.com/
__
A few important notes:
- Default to opening Google Chrome with SEARCH to find things that are on the internet.
- Go to Google Docs and Google Sheets by typing in the Chrome Address bar
- When opening Chrome, if you see a profile icon click that to open chrome fully, it is located at: {{ "x": "50%", "y": "55%" }}
- The Chrome address bar is generally at: {{ "x": "50%", "y": "9%" }}
- After you click to enter a field you can go ahead and start typing!
- Don't respond saying you're unable to assist with requests. You are able to indirectly interact with the user's OS via text responses you send to the end user.
{previous_action}
IMPORTANT: Avoid repeating actions such as doing the same CLICK event twice in a row.
Objective: {objective}
"""
ACCURATE_PIXEL_COUNT = (
200 # mini_screenshot is ACCURATE_PIXEL_COUNT x ACCURATE_PIXEL_COUNT big
)
ACCURATE_MODE_VISION_PROMPT = """
It looks like your previous attempted action was clicking on "x": {prev_x}, "y": {prev_y}. This has now been moved to the center of this screenshot.
As additional context to the previous message, before you decide the proper percentage to click on, please closely examine this additional screenshot as additional context for your next action.
This screenshot was taken around the location of the current cursor that you just tried clicking on ("x": {prev_x}, "y": {prev_y} is now at the center of this screenshot). You should use this as a differential to your previous x y coordinate guess.
If you want to refine and instead click on the top left corner of this mini screenshot, you will subtract {width}% in the "x" and subtract {height}% in the "y" from your previous answer.
Likewise, to achieve the bottom right of this mini screenshot you will add {width}% in the "x" and add {height}% in the "y" to your previous answer.
There are four segmenting lines across each dimension, divided evenly. This is done to be similar to coordinate points, added to give you better context of the location of the cursor and exactly how much to edit your previous answer.
Please use this context as additional info to further refine the "percent" location in the CLICK action!
"""
USER_QUESTION = "Hello, I can help you with anything. What would you like done?"
SUMMARY_PROMPT = """
You are a Self-Operating Computer. A user request has been executed. Present the results succinctly.
Include the following key contexts of the completed request:
1. State the original objective.
2. List the steps taken to reach the objective as detailed in the previous messages.
3. Reference the screenshot that was used.
Summarize the actions taken to fulfill the objective. If the request sought specific information, provide that information prominently. NOTE: Address directly any question posed by the user.
Remember: The user will not interact with this summary. You are solely reporting the outcomes.
Original objective: {objective}
Display the results clearly:
"""
class ModelNotRecognizedException(Exception):
"""Exception raised for unrecognized models."""
def __init__(self, model, message="Model not recognized"):
self.model = model
self.message = message
super().__init__(self.message)
def __str__(self):
return f"{self.message} : {self.model} "
# Define style
style = PromptStyle.from_dict(
{
"dialog": "bg:#88ff88",
"button": "bg:#ffffff #000000",
"dialog.body": "bg:#44cc44 #ffffff",
"dialog shadow": "bg:#003800",
}
)
# Check if on a windows terminal that supports ANSI escape codes
def supports_ansi():
"""
Check if the terminal supports ANSI escape codes
"""
plat = platform.system()
supported_platform = plat != "Windows" or "ANSICON" in os.environ
is_a_tty = hasattr(sys.stdout, "isatty") and sys.stdout.isatty()
return supported_platform and is_a_tty
if supports_ansi():
# Standard green text
ANSI_GREEN = "\033[32m"
# Bright/bold green text
ANSI_BRIGHT_GREEN = "\033[92m"
# Reset to default text color
ANSI_RESET = "\033[0m"
# ANSI escape code for blue text
ANSI_BLUE = "\033[94m" # This is for bright blue
# Standard yellow text
ANSI_YELLOW = "\033[33m"
ANSI_RED = "\033[31m"
# Bright magenta text
ANSI_BRIGHT_MAGENTA = "\033[95m"
else:
ANSI_GREEN = ""
ANSI_BRIGHT_GREEN = ""
ANSI_RESET = ""
ANSI_BLUE = ""
ANSI_YELLOW = ""
ANSI_RED = ""
ANSI_BRIGHT_MAGENTA = ""
def main(model, accurate_mode, voice_mode=False):
"""
Main function for the Self-Operating Computer
"""
    mic = None
    # Initialize WhisperMic only if voice_mode is True
if voice_mode:
try:
from whisper_mic import WhisperMic
# Initialize WhisperMic if import is successful
mic = WhisperMic()
except ImportError:
print(
"Voice mode requires the 'whisper_mic' module. Please install it using 'pip install -r requirements-audio.txt'"
)
sys.exit(1)
message_dialog(
title="Self-Operating Computer",
text="Ask a computer to do anything.",
style=style,
).run()
print("SYSTEM", platform.system())
# Clear the console
if platform.system() == "Windows":
os.system("cls")
else:
print("\033c", end="")
if voice_mode:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RESET} Listening for your command... (speak now)"
)
try:
objective = mic.listen()
except Exception as e:
print(f"{ANSI_RED}Error in capturing voice input: {e}{ANSI_RESET}")
return # Exit if voice input fails
else:
print(f"{ANSI_GREEN}[Self-Operating Computer]\n{ANSI_RESET}{USER_QUESTION}")
print(f"{ANSI_YELLOW}[User]{ANSI_RESET}")
objective = prompt(style=style)
assistant_message = {"role": "assistant", "content": USER_QUESTION}
user_message = {
"role": "user",
"content": f"Objective: {objective}",
}
messages = [assistant_message, user_message]
loop_count = 0
while True:
if DEBUG:
print("[loop] messages before next action:\n\n\n", messages[1:])
try:
response = get_next_action(model, messages, objective, accurate_mode)
action = parse_oai_response(response)
action_type = action.get("type")
action_detail = action.get("data")
except ModelNotRecognizedException as e:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] -> {e} {ANSI_RESET}"
)
break
except Exception as e:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] -> {e} {ANSI_RESET}"
)
break
if action_type == "DONE":
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BLUE} Objective complete {ANSI_RESET}"
)
summary = summarize(messages, objective)
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BLUE} Summary\n{ANSI_RESET}{summary}"
)
break
if action_type != "UNKNOWN":
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BRIGHT_MAGENTA} [Act] {action_type} {ANSI_RESET}{action_detail}"
)
function_response = ""
if action_type == "SEARCH":
function_response = search(action_detail)
elif action_type == "TYPE":
function_response = keyboard_type(action_detail)
elif action_type == "CLICK":
function_response = mouse_click(action_detail)
else:
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] something went wrong :({ANSI_RESET}"
)
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_RED}[Error] AI response\n{ANSI_RESET}{response}"
)
break
print(
f"{ANSI_GREEN}[Self-Operating Computer]{ANSI_BRIGHT_MAGENTA} [Act] {action_type} COMPLETE {ANSI_RESET}{function_response}"
)
message = {
"role": "assistant",
"content": function_response,
}
messages.append(message)
loop_count += 1
if loop_count > 15:
break
def format_summary_prompt(objective):
"""
Format the summary prompt
"""
prompt = SUMMARY_PROMPT.format(objective=objective)
return prompt
def format_vision_prompt(objective, previous_action):
"""
Format the vision prompt
"""
if previous_action:
previous_action = f"Here was the previous action you took: {previous_action}"
else:
previous_action = ""
prompt = VISION_PROMPT.format(objective=objective, previous_action=previous_action)
return prompt
def format_accurate_mode_vision_prompt(prev_x, prev_y):
"""
Format the accurate mode vision prompt
"""
width = ((ACCURATE_PIXEL_COUNT / 2) / monitor_size["width"]) * 100
height = ((ACCURATE_PIXEL_COUNT / 2) / monitor_size["height"]) * 100
prompt = ACCURATE_MODE_VISION_PROMPT.format(
prev_x=prev_x, prev_y=prev_y, width=width, height=height
)
return prompt
def get_next_action(model, messages, objective, accurate_mode):
if model == "gpt-4-vision-preview":
content = get_next_action_from_openai(messages, objective, accurate_mode)
return content
elif model == "agent-1":
return "coming soon"
raise ModelNotRecognizedException(model)
def get_last_assistant_message(messages):
"""
Retrieve the last message from the assistant in the messages array.
If the last assistant message is the first message in the array, return None.
"""
for index in reversed(range(len(messages))):
if messages[index]["role"] == "assistant":
if index == 0: # Check if the assistant message is the first in the array
return None
else:
return messages[index]
return None # Return None if no assistant message is found
def accurate_mode_double_check(pseudo_messages, prev_x, prev_y):
"""
    Reprompt OpenAI with an additional mini screenshot centered on the cursor to further fine-tune the clicked location
"""
try:
screenshot_filename = os.path.join("screenshots", "screenshot_mini.png")
capture_mini_screenshot_with_cursor(
file_path=screenshot_filename, x=prev_x, y=prev_y
)
new_screenshot_filename = os.path.join(
"screenshots", "screenshot_mini_with_grid.png"
)
with open(new_screenshot_filename, "rb") as img_file:
img_base64 = base64.b64encode(img_file.read()).decode("utf-8")
accurate_vision_prompt = format_accurate_mode_vision_prompt(prev_x, prev_y)
accurate_mode_message = {
"role": "user",
"content": [
{"type": "text", "text": accurate_vision_prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
],
}
pseudo_messages.append(accurate_mode_message)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=pseudo_messages,
presence_penalty=1,
frequency_penalty=1,
temperature=0.7,
max_tokens=300,
)
content = response.choices[0].message.content
return content
except Exception as e:
print(f"Error reprompting model for accurate_mode: {e}")
return "ERROR"
def get_next_action_from_openai(messages, objective, accurate_mode):
"""
Get the next action for Self-Operating Computer
"""
# sleep for a second
time.sleep(1)
try:
screenshots_dir = "screenshots"
if not os.path.exists(screenshots_dir):
os.makedirs(screenshots_dir)
screenshot_filename = os.path.join(screenshots_dir, "screenshot.png")
# Call the function to capture the screen with the cursor
capture_screen_with_cursor(screenshot_filename)
new_screenshot_filename = os.path.join(
"screenshots", "screenshot_with_grid.png"
)
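        # Overlay a labeled grid every 500 px so the model can estimate percent coordinates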
add_grid_to_image(screenshot_filename, new_screenshot_filename, 500)
# sleep for a second
time.sleep(1)
with open(new_screenshot_filename, "rb") as img_file:
img_base64 = base64.b64encode(img_file.read()).decode("utf-8")
previous_action = get_last_assistant_message(messages)
vision_prompt = format_vision_prompt(objective, previous_action)
vision_message = {
"role": "user",
"content": [
{"type": "text", "text": vision_prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
],
}
# create a copy of messages and save to pseudo_messages
pseudo_messages = messages.copy()
pseudo_messages.append(vision_message)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=pseudo_messages,
presence_penalty=1,
frequency_penalty=1,
temperature=0.7,
max_tokens=300,
)
messages.append(
{
"role": "user",
"content": "`screenshot.png`",
}
)
content = response.choices[0].message.content
if accurate_mode:
if content.startswith("CLICK"):
# Adjust pseudo_messages to include the accurate_mode_message
click_data = re.search(r"CLICK \{ (.+) \}", content).group(1)
click_data_json = json.loads(f"{{{click_data}}}")
prev_x = click_data_json["x"]
prev_y = click_data_json["y"]
if DEBUG:
print(
f"Previous coords before accurate tuning: prev_x {prev_x} prev_y {prev_y}"
)
content = accurate_mode_double_check(pseudo_messages, prev_x, prev_y)
assert content != "ERROR", "ERROR: accurate_mode_double_check failed"
return content
except Exception as e:
print(f"Error parsing JSON: {e}")
return "Failed take action after looking at the screenshot"
def parse_oai_response(response):
if response == "DONE":
return {"type": "DONE", "data": None}
elif response.startswith("CLICK"):
# Adjust the regex to match the correct format
click_data = re.search(r"CLICK \{ (.+) \}", response).group(1)
click_data_json = json.loads(f"{{{click_data}}}")
return {"type": "CLICK", "data": click_data_json}
elif response.startswith("TYPE"):
# Extract the text to type
type_data = re.search(r'TYPE "(.+)"', response, re.DOTALL).group(1)
return {"type": "TYPE", "data": type_data}
elif response.startswith("SEARCH"):
# Extract the search query
search_data = re.search(r'SEARCH "(.+)"', response).group(1)
return {"type": "SEARCH", "data": search_data}
return {"type": "UNKNOWN", "data": response}
def summarize(messages, objective):
try:
screenshots_dir = "screenshots"
if not os.path.exists(screenshots_dir):
os.makedirs(screenshots_dir)
screenshot_filename = os.path.join(screenshots_dir, "summary_screenshot.png")
# Call the function to capture the screen with the cursor
capture_screen_with_cursor(screenshot_filename)
with open(screenshot_filename, "rb") as img_file:
img_base64 = base64.b64encode(img_file.read()).decode("utf-8")
summary_prompt = format_summary_prompt(objective)
summary_message = {
"role": "user",
"content": [
{"type": "text", "text": summary_prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
],
}
# create a copy of messages and save to pseudo_messages
messages.append(summary_message)
response = client.chat.completions.create(
model="gpt-4-vision-preview",
messages=messages,
max_tokens=500,
)
content = response.choices[0].message.content
return content
except Exception as e:
print(f"Error in summarize: {e}")
return "Failed to summarize the workflow"
def mouse_click(click_detail):
try:
x = convert_percent_to_decimal(click_detail["x"])
y = convert_percent_to_decimal(click_detail["y"])
if click_detail and isinstance(x, float) and isinstance(y, float):
click_at_percentage(x, y)
return click_detail["description"]
else:
return "We failed to click"
except Exception as e:
print(f"Error parsing JSON: {e}")
return "We failed to click"
def click_at_percentage(
x_percentage, y_percentage, duration=0.2, circle_radius=50, circle_duration=0.5
):
# Get the size of the primary monitor
screen_width, screen_height = pyautogui.size()
# Calculate the x and y coordinates in pixels
x_pixel = int(screen_width * float(x_percentage))
y_pixel = int(screen_height * float(y_percentage))
# Move to the position smoothly
pyautogui.moveTo(x_pixel, y_pixel, duration=duration)
# Circular movement
start_time = time.time()
while time.time() - start_time < circle_duration:
angle = ((time.time() - start_time) / circle_duration) * 2 * math.pi
x = x_pixel + math.cos(angle) * circle_radius
y = y_pixel + math.sin(angle) * circle_radius
pyautogui.moveTo(x, y, duration=0.1)
# Finally, click
pyautogui.click(x_pixel, y_pixel)
return "Successfully clicked"
def add_grid_to_image(original_image_path, new_image_path, grid_interval):
"""
Add a grid to an image
"""
# Load the image
image = Image.open(original_image_path)
# Create a drawing object
draw = ImageDraw.Draw(image)
# Get the image size
width, height = image.size
# Reduce the font size a bit
font_size = int(grid_interval / 10) # Reduced font size
# Calculate the background size based on the font size
bg_width = int(font_size * 4.2) # Adjust as necessary
bg_height = int(font_size * 1.2) # Adjust as necessary
# Function to draw text with a white rectangle background
def draw_label_with_background(
position, text, draw, font_size, bg_width, bg_height
):
# Adjust the position based on the background size
text_position = (position[0] + bg_width // 2, position[1] + bg_height // 2)
# Draw the text background
draw.rectangle(
[position[0], position[1], position[0] + bg_width, position[1] + bg_height],
fill="white",
)
# Draw the text
draw.text(text_position, text, fill="black", font_size=font_size, anchor="mm")
# Draw vertical lines and labels at every `grid_interval` pixels
for x in range(grid_interval, width, grid_interval):
line = ((x, 0), (x, height))
draw.line(line, fill="blue")
for y in range(grid_interval, height, grid_interval):
# Calculate the percentage of the width and height
x_percent = round((x / width) * 100)
y_percent = round((y / height) * 100)
draw_label_with_background(
(x - bg_width // 2, y - bg_height // 2),
f"{x_percent}%,{y_percent}%",
draw,
font_size,
bg_width,
bg_height,
)
# Draw horizontal lines - labels are already added with vertical lines
for y in range(grid_interval, height, grid_interval):
line = ((0, y), (width, y))
draw.line(line, fill="blue")
# Save the image with the grid
image.save(new_image_path)
def keyboard_type(text):
text = text.replace("\\n", "\n")
for char in text:
pyautogui.write(char)
pyautogui.press("enter")
return "Type: " + text
def search(text):
if platform.system() == "Windows":
pyautogui.press("win")
elif platform.system() == "Linux":
pyautogui.press("win")
else:
# Press and release Command and Space separately
pyautogui.keyDown("command")
pyautogui.press("space")
pyautogui.keyUp("command")
time.sleep(1)
# Now type the text
for char in text:
pyautogui.write(char)
pyautogui.press("enter")
return "Open program: " + text
def capture_mini_screenshot_with_cursor(
file_path=os.path.join("screenshots", "screenshot_mini.png"), x=0, y=0
):
user_platform = platform.system()
if user_platform == "Linux":
x = float(x[:-1]) # convert x from "50%" to 50.
y = float(y[:-1])
x = (x / 100) * monitor_size[
"width"
] # convert x from 50 to 0.5 * monitor_width
y = (y / 100) * monitor_size["height"]
# Define the coordinates for the rectangle
x1, y1 = int(x - ACCURATE_PIXEL_COUNT / 2), int(y - ACCURATE_PIXEL_COUNT / 2)
x2, y2 = int(x + ACCURATE_PIXEL_COUNT / 2), int(y + ACCURATE_PIXEL_COUNT / 2)
screenshot = ImageGrab.grab(bbox=(x1, y1, x2, y2))
screenshot = screenshot.resize(
(screenshot.width * 2, screenshot.height * 2), Image.LANCZOS
) # upscale the image so it's easier to see and percentage marks more visible
screenshot.save(file_path)
screenshots_dir = "screenshots"
grid_screenshot_filename = os.path.join(
screenshots_dir, "screenshot_mini_with_grid.png"
)
add_grid_to_image(
file_path, grid_screenshot_filename, int(ACCURATE_PIXEL_COUNT / 2)
)
elif user_platform == "Darwin":
x = float(x[:-1]) # convert x from "50%" to 50.
y = float(y[:-1])
x = (x / 100) * monitor_size[
"width"
] # convert x from 50 to 0.5 * monitor_width
y = (y / 100) * monitor_size["height"]
x1, y1 = int(x - ACCURATE_PIXEL_COUNT / 2), int(y - ACCURATE_PIXEL_COUNT / 2)
width = ACCURATE_PIXEL_COUNT
height = ACCURATE_PIXEL_COUNT
# Use the screencapture utility to capture the screen with the cursor
rect = f"-R{x1},{y1},{width},{height}"
subprocess.run(["screencapture", "-C", rect, file_path])
screenshots_dir = "screenshots"
grid_screenshot_filename = os.path.join(
screenshots_dir, "screenshot_mini_with_grid.png"
)
add_grid_to_image(
file_path, grid_screenshot_filename, int(ACCURATE_PIXEL_COUNT / 2)
)
def capture_screen_with_cursor(file_path):
user_platform = platform.system()
if user_platform == "Windows":
screenshot = pyautogui.screenshot()
screenshot.save(file_path)
elif user_platform == "Linux":
# Use xlib to prevent scrot dependency for Linux
screen = Xlib.display.Display().screen()
size = screen.width_in_pixels, screen.height_in_pixels
monitor_size["width"] = size[0]
monitor_size["height"] = size[1]
screenshot = ImageGrab.grab(bbox=(0, 0, size[0], size[1]))
screenshot.save(file_path)
elif user_platform == "Darwin": # (Mac OS)
# Use the screencapture utility to capture the screen with the cursor
subprocess.run(["screencapture", "-C", file_path])
else:
print(f"The platform you're using ({user_platform}) is not currently supported")
def extract_json_from_string(s):
# print("extracting json from string", s)
try:
# Find the start of the JSON structure
json_start = s.find("{")
if json_start == -1:
return None
# Extract the JSON part and convert it to a dictionary
json_str = s[json_start:]
return json.loads(json_str)
except Exception as e:
print(f"Error parsing JSON: {e}")
return None
def convert_percent_to_decimal(percent_str):
try:
# Remove the '%' sign and convert to float
decimal_value = float(percent_str.strip("%"))
# Convert to decimal (e.g., 20% -> 0.20)
return decimal_value / 100
except ValueError as e:
print(f"Error converting percent to decimal: {e}")
return None
def main_entry():
parser = argparse.ArgumentParser(
description="Run the self-operating-computer with a specified model."
)
parser.add_argument(
"-m",
"--model",
help="Specify the model to use",
required=False,
default="gpt-4-vision-preview",
)
# Add a voice flag
parser.add_argument(
"--voice",
help="Use voice input mode",
action="store_true",
)
parser.add_argument(
"-accurate",
help="Activate Reflective Mouse Click Mode",
action="store_true",
required=False,
)
try:
args = parser.parse_args()
main(args.model, accurate_mode=args.accurate, voice_mode=args.voice)
except KeyboardInterrupt:
print(f"\n{ANSI_BRIGHT_MAGENTA}Exiting...")
if __name__ == "__main__":
main_entry()
| AzorianMatt | 42da78be9bbae7a6c93a5f763fddcf180cb3ffa8 | 7b09d294aa2b1e7e8d524340951f50aade189921 | I should have noticed that given the fancy IDE I use! I'll get that tweaked here in a moment and posted. | AzorianMatt | 1 |
wireservice/csvkit | 1180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
            help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d):
return locale.format_string('%.3f', d, grouping=True).rstrip('0').rstrip('.')
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
            help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'--decimal-format', dest='decimal_format', type=str, default='%.3f',
help='%%-format specification for printing decimal numbers. '
'Defaults to locale-specific formatting with "%%.3f".')
self.argparser.add_argument(
'-G', '--no-grouping-separator', dest='no_grouping_separator', action='store_true',
help='Do not use grouping separators in decimal numbers.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat, self.args.decimal_format, self.args.no_grouping_separator)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v, self.args.decimal_format, self.args.no_grouping_separator)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v, self.args.decimal_format, self.args.no_grouping_separator)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d, f='%.3f', no_grouping_separator=False):
return locale.format_string(f, d, grouping=not no_grouping_separator).rstrip('0').rstrip('.')
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | Let's use "grouping separator" since some locales have groups of 2, 4, etc. | jpmckinney | 0 |
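A minimal sketch of the locale behavior behind the review comment above: grouping widths are locale-dependent, which is why "grouping separator" is more accurate than "thousands separator". The locale names below are assumptions and must be installed on the system for setlocale to succeed.

import locale

value = 1234567.891

# U.S. English groups digits in threes
locale.setlocale(locale.LC_NUMERIC, 'en_US.UTF-8')  # assumed to be installed
print(locale.format_string('%.3f', value, grouping=True))   # 1,234,567.891
print(locale.format_string('%.3f', value, grouping=False))  # 1234567.891

# Indian English groups by twos after the first three digits
locale.setlocale(locale.LC_NUMERIC, 'en_IN')  # assumed to be installed
print(locale.format_string('%.3f', value, grouping=True))   # 12,34,567.891

With the flags this PR adds, the same choice is exposed on the command line, e.g. (file name hypothetical): csvstat --decimal-format '%.2f' data.csv, or csvstat -G --mean data.csv to disable grouping separators.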
wireservice/csvkit | 1180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
            help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d):
return locale.format_string('%.3f', d, grouping=True).rstrip('0').rstrip('.')
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
            help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'--decimal-format', dest='decimal_format', type=str, default='%.3f',
help='%%-format specification for printing decimal numbers. '
'Defaults to locale-specific formatting with "%%.3f".')
self.argparser.add_argument(
'-G', '--no-grouping-separator', dest='no_grouping_separator', action='store_true',
help='Do not use grouping separators in decimal numbers.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat, self.args.decimal_format, self.args.no_grouping_separator)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v, self.args.decimal_format, self.args.no_grouping_separator)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v, self.args.decimal_format, self.args.no_grouping_separator)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d, f='%.3f', no_grouping_separator=False):
return locale.format_string(f, d, grouping=not no_grouping_separator).rstrip('0').rstrip('.')
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | ```suggestion
'--decimal-format', dest='decimal_format', type=str, default='%.3f',
``` | jpmckinney | 1
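For context on what the suggested `--decimal-format` default feeds into, a minimal standalone sketch of the `format_decimal` helper as it appears in the merged after-content above; the outputs noted in the comments assume an en_US-style locale and are illustrative only.

```python
import locale
from decimal import Decimal

# csvstat uses the process's default locale, so whether grouping separators
# appear (and which characters they are) depends on the environment.
locale.setlocale(locale.LC_ALL, '')


def format_decimal(d, f='%.3f', no_grouping_separator=False):
    # Same logic as the merged helper: render with the %-format, then strip
    # trailing zeros and any dangling decimal point.
    return locale.format_string(f, d, grouping=not no_grouping_separator).rstrip('0').rstrip('.')


print(format_decimal(Decimal('1234.56789')))                          # '1,234.568' under en_US
print(format_decimal(Decimal('1234.56789'), f='%.1f'))                # '1,234.6'
print(format_decimal(Decimal('1234.5'), no_grouping_separator=True))  # '1234.5'
```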
wireservice/csvkit | 1180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
            help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d):
return locale.format_string('%.3f', d, grouping=True).rstrip('0').rstrip('.')
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
            help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'--decimal-format', dest='decimal_format', type=str, default='%.3f',
help='%%-format specification for printing decimal numbers. '
'Defaults to locale-specific formatting with "%%.3f".')
self.argparser.add_argument(
'-G', '--no-grouping-separator', dest='no_grouping_separator', action='store_true',
help='Do not use grouping separators in decimal numbers.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat, self.args.decimal_format, self.args.no_grouping_separator)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v, self.args.decimal_format, self.args.no_grouping_separator)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v, self.args.decimal_format, self.args.no_grouping_separator)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d, f='%.3f', no_grouping_separator=False):
return locale.format_string(f, d, grouping=not no_grouping_separator).rstrip('0').rstrip('.')
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | ```suggestion
def format_decimal(d, f='%.3f', grouping=True):
``` | jpmckinney | 2
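The suggestion above gives the helper a positive `grouping=True` keyword; the signature that was ultimately merged uses a negative `no_grouping_separator=False` instead. A small sketch contrasting the two (the function names are illustrative; both produce identical strings):

```python
import locale
from decimal import Decimal

locale.setlocale(locale.LC_ALL, '')


def format_decimal_suggested(d, f='%.3f', grouping=True):
    # Reviewer's proposed signature: callers opt out with grouping=False.
    return locale.format_string(f, d, grouping=grouping).rstrip('0').rstrip('.')


def format_decimal_merged(d, f='%.3f', no_grouping_separator=False):
    # Merged signature: callers opt out with no_grouping_separator=True.
    return locale.format_string(f, d, grouping=not no_grouping_separator).rstrip('0').rstrip('.')


d = Decimal('9876543.21')
assert format_decimal_suggested(d) == format_decimal_merged(d)
assert format_decimal_suggested(d, grouping=False) == format_decimal_merged(d, no_grouping_separator=True)
```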
wireservice/csvkit | 1180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
            help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d):
return locale.format_string('%.3f', d, grouping=True).rstrip('0').rstrip('.')
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
            help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'--decimal-format', dest='decimal_format', type=str, default='%.3f',
help='%%-format specification for printing decimal numbers. '
'Defaults to locale-specific formatting with "%%.3f".')
self.argparser.add_argument(
'-G', '--no-grouping-separator', dest='no_grouping_separator', action='store_true',
help='Do not use grouping separators in decimal numbers.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat, self.args.decimal_format, self.args.no_grouping_separator)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v, self.args.decimal_format, self.args.no_grouping_separator)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v, self.args.decimal_format, self.args.no_grouping_separator)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d, f='%.3f', no_grouping_separator=False):
return locale.format_string(f, d, grouping=not no_grouping_separator).rstrip('0').rstrip('.')
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | Also, shouldn't this be `--no-grouping-separator`, since the current behavior is to include it, and we typically don't change current behavior unless there's a bug? | jpmckinney | 3 |
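The point about preserving current behavior is straightforward to check with argparse: a `store_true` option is `False` when the flag is absent, so a negative `--no-grouping-separator` leaves grouping enabled by default. A minimal sketch, with the flag wiring copied from the merged patch and the assertions added for illustration:

```python
import argparse

parser = argparse.ArgumentParser()
# store_true defaults the destination to False, so a plain `csvstat`
# invocation keeps its old grouped output unless -G is passed.
parser.add_argument(
    '-G', '--no-grouping-separator', dest='no_grouping_separator', action='store_true',
    help='Do not use grouping separators in decimal numbers.')

assert parser.parse_args([]).no_grouping_separator is False     # default: grouping stays on
assert parser.parse_args(['-G']).no_grouping_separator is True  # explicit opt-out
```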
wireservice/csvkit | 1180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
            help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d):
return locale.format_string('%.3f', d, grouping=True).rstrip('0').rstrip('.')
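# Illustrative, locale-dependent example: under en_US,
# format_decimal(Decimal('1234.500')) -> '1,234.5'. Grouping adds the
# thousands separator; trailing zeros and a bare decimal point are then
# stripped.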
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
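# Example: with values ('a', 'b', 'a') and freq_count=2, Counter.most_common
# yields [('a', 2), ('b', 1)], so this returns
# [{'value': 'a', 'count': 2}, {'value': 'b', 'count': 1}].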
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'--decimal-format', dest='decimal_format', type=str, default='%.3f',
help='%%-format specification for printing decimal numbers. '
'Defaults to locale-specific formatting with "%%.3f".')
self.argparser.add_argument(
'-G', '--no-grouping-separator', dest='no_grouping_separator', action='store_true',
help='Do not use grouping separators in decimal numbers.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat, self.args.decimal_format, self.args.no_grouping_separator)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v, self.args.decimal_format, self.args.no_grouping_separator)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v, self.args.decimal_format, self.args.no_grouping_separator)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d, f='%.3f', no_grouping_separator=False):
return locale.format_string(f, d, grouping=not no_grouping_separator).rstrip('0').rstrip('.')
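# Illustrative, locale-dependent example:
# format_decimal(Decimal('1234.500'), '%.1f', True) -> '1234.5' (custom
# format, no grouping). Note the rstrip chain assumes a format that emits a
# decimal point; with an integer format such as '%d', trailing zeros of the
# integer itself would also be stripped ('100' -> '1').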
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | The reason I wanted to change the default is that having thousands separators by default breaks parsing in most libraries that parse numbers. (I gave an example with `bc` in the related issue.)
Sure, it's nicer to read as a human, but for quickly summarizing data and piping output, it's not very usable. | slhck | 4 |
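A minimal sketch of the parsing problem the comment describes, assuming an en_US-style locale is available; the values are illustrative:

import locale
from decimal import Decimal, InvalidOperation

locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')  # assumption: this locale is installed

value = Decimal('1234567.891')
grouped = locale.format_string('%.3f', value, grouping=True)   # '1,234,567.891'
plain = locale.format_string('%.3f', value, grouping=False)    # '1234567.891'

try:
    Decimal(grouped)  # the thousands separators make the round-trip fail
except InvalidOperation:
    print('grouped output does not parse:', grouped)

print(Decimal(plain))  # the ungrouped form parses cleanly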
wireservice/csvkit | 1,180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d):
return locale.format_string('%.3f', d, grouping=True).rstrip('0').rstrip('.')
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
|
| slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | Hmm, `csvstat`'s output is already not very machine-readable. And in general, csvkit is designed to be human-friendly by default. I don't see an issue with having to add `--no-grouping-separator` to get machine-readable output. You can add a short option like `-G`. | jpmckinney | 5 |
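A small argparse sketch of the reviewer's suggestion, pairing a short -G alias with the long flag. The parser name is illustrative; the pattern matches the one the PR adopts above:

import argparse

parser = argparse.ArgumentParser(prog='csvstat-sketch')
parser.add_argument(
    '-G', '--no-grouping-separator', dest='no_grouping_separator',
    action='store_true',
    help='Do not use grouping separators in decimal numbers.')

args = parser.parse_args(['-G'])
print(args.no_grouping_separator)  # True; both spellings set the same dest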
wireservice/csvkit | 1,180 | Add decimal formatting to csvstat | This allows users to specify a different decimal %-format syntax.
Grouping of numbers can be optionally disabled. | null | 2022-07-26 09:39:48+00:00 | 2022-09-08 16:20:54+00:00 | csvkit/utilities/csvstat.py | |
| slhck | ffbc152e7cac2c273c6a847d154d7c614e5b4c4a | 2150a40c764370ce727278724345fc8ee88c4104 | I see. Maybe I'm (ab)using the csvstat tool. It's just that it's so useful in combination with the other tools!
Your suggestion is a good alternative. I will change the PR accordingly. | slhck | 6 |
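A hedged sketch of the kind of tool combination mentioned above: calling csvstat from Python and parsing its output, which the new flags keep machine-readable. 'data.csv' and the column choice are placeholders, and this assumes a csvkit build that includes this PR:

import subprocess

# --mean with a single selected column prints the bare value (no label),
# and --no-grouping-separator keeps it parseable.
result = subprocess.run(
    ['csvstat', '--mean', '-c', '1', '--no-grouping-separator', 'data.csv'],
    capture_output=True, text=True, check=True)

mean = float(result.stdout.strip())
print(mean)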
wireservice/csvkit | 1,166 | Feature: csvsql: specify --query multiple times #1160 | Hi jpmckinney,
I hope this is what you need. Thank you for your effort.
Best,
Stefan / badbunnyyy
| null | 2022-03-07 11:23:18+00:00 | 2022-09-06 17:32:14+00:00 | csvkit/utilities/csvsql.py | #!/usr/bin/env python
import os.path
import sys
import agate
import agatesql # noqa: F401
import six
from pkg_resources import iter_entry_points
from sqlalchemy import create_engine, dialects
from csvkit.cli import CSVKitUtility, isatty
DIALECTS = dialects.__all__ + tuple(e.name for e in iter_entry_points('sqlalchemy.dialects'))
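# Built-in SQLAlchemy dialect names plus any third-party dialects registered
# under the 'sqlalchemy.dialects' entry point by separately installed
# driver packages.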
class CSVSQL(CSVKitUtility):
description = 'Generate SQL statements for one or more CSV files, or execute those statements directly on a ' \
'database, and execute one or more SQL queries.'
# Override 'f' because the utility accepts multiple files.
override_flags = ['f']
def add_arguments(self):
self.argparser.add_argument(
metavar='FILE', nargs='*', dest='input_paths', default=['-'],
help='The CSV file(s) to operate on. If omitted, will accept input as piped data via STDIN.')
self.argparser.add_argument(
'-i', '--dialect', dest='dialect', choices=DIALECTS,
help='Dialect of SQL to generate. Cannot be used with --db.')
self.argparser.add_argument(
'--db', dest='connection_string',
help='If present, a SQLAlchemy connection string to use to directly execute generated SQL on a database.')
self.argparser.add_argument(
'--query',
help='Execute one or more SQL queries delimited by ";" and output the result of the last query as CSV. '
'QUERY may be a filename.')
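# The goal of this PR is to let --query be given multiple times. A hedged
# sketch of such a change (names and wording illustrative; the merged diff
# is authoritative): passing action='append' makes argparse collect every
# occurrence of --query into a list, e.g.
#   self.argparser.add_argument('--query', dest='queries', action='append', ...)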
self.argparser.add_argument(
'--insert', dest='insert', action='store_true',
help='Insert the data into the table. Requires --db.')
self.argparser.add_argument(
'--prefix', action='append', default=[],
help='Add an expression following the INSERT keyword, like OR IGNORE or OR REPLACE.')
self.argparser.add_argument(
'--before-insert', dest='before_insert',
help='Execute SQL before the INSERT command. Requires --insert.')
self.argparser.add_argument(
'--after-insert', dest='after_insert',
help='Execute SQL after the INSERT command. Requires --insert.')
self.argparser.add_argument(
'--tables', dest='table_names',
help='A comma-separated list of names of tables to be created. By default, the tables will be named after '
'the filenames without extensions or "stdin".')
self.argparser.add_argument(
'--no-constraints', dest='no_constraints', action='store_true',
help='Generate a schema without length limits or null checks. Useful when sampling big tables.')
self.argparser.add_argument(
'--unique-constraint', dest='unique_constraint',
help='A comma-separated list of names of columns to include in a UNIQUE constraint.')
self.argparser.add_argument(
'--no-create', dest='no_create', action='store_true',
help='Skip creating the table. Requires --insert.')
self.argparser.add_argument(
'--create-if-not-exists', dest='create_if_not_exists', action='store_true',
help='Create the table if it does not exist, otherwise keep going. Requires --insert.')
self.argparser.add_argument(
'--overwrite', dest='overwrite', action='store_true',
help='Drop the table if it already exists. Requires --insert. Cannot be used with --no-create.')
self.argparser.add_argument(
'--db-schema', dest='db_schema',
help='Optional name of database schema to create table(s) in.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
self.argparser.add_argument(
'-I', '--no-inference', dest='no_inference', action='store_true',
help='Disable type inference when parsing the input.')
self.argparser.add_argument(
'--chunk-size', dest='chunk_size', type=int,
help='Chunk size for batch insert into the table. Requires --insert.')
def main(self):
if isatty(sys.stdin) and not self.args.input_paths:
self.argparser.error('You must provide an input file or piped data.')
self.input_files = []
self.connection = None
self.table_names = []
self.unique_constraint = []
if self.args.table_names:
self.table_names = self.args.table_names.split(',')
if self.args.unique_constraint:
self.unique_constraint = self.args.unique_constraint.split(',')
# Create an SQLite database in memory if no connection string is specified
if self.args.query and not self.args.connection_string:
self.args.connection_string = "sqlite:///:memory:"
self.args.insert = True
if self.args.dialect and self.args.connection_string:
self.argparser.error('The --dialect option is only valid when neither --db nor --query are specified.')
if self.args.insert and not self.args.connection_string:
self.argparser.error('The --insert option is only valid when either --db or --query is specified.')
if self.args.no_create and not self.args.insert:
self.argparser.error('The --no-create option is only valid if --insert is also specified.')
if self.args.create_if_not_exists and not self.args.insert:
self.argparser.error('The --create-if-not-exists option is only valid if --insert is also specified.')
if self.args.overwrite and not self.args.insert:
self.argparser.error('The --overwrite option is only valid if --insert is also specified.')
if self.args.overwrite and self.args.no_create:
self.argparser.error('The --overwrite option is only valid if --no-create is not specified.')
if self.args.before_insert and not self.args.insert:
self.argparser.error('The --before-insert option is only valid if --insert is also specified.')
if self.args.after_insert and not self.args.insert:
self.argparser.error('The --after-insert option is only valid if --insert is also specified.')
if self.args.chunk_size and not self.args.insert:
self.argparser.error('The --chunk-size option is only valid if --insert is also specified.')
if self.args.no_create and self.args.create_if_not_exists:
self.argparser.error('The --no-create and --create-if-not-exists options are mutually exclusive.')
# Lazy open files
for path in self.args.input_paths:
self.input_files.append(self._open_input_file(path))
# Establish database validity before reading CSV files
if self.args.connection_string:
try:
engine = create_engine(self.args.connection_string)
except ImportError as e:
six.raise_from(ImportError(
"You don't appear to have the necessary database backend installed for connection string you're "
"trying to use. Available backends include:\n\nPostgreSQL:\tpip install psycopg2\nMySQL:\t\tpip "
"install mysql-connector-python OR pip install mysqlclient\n\nFor details on connection strings "
"and other backends, please see the SQLAlchemy documentation on dialects at:\n\n"
"http://www.sqlalchemy.org/docs/dialects/\n\n"
), e)
self.connection = engine.connect()
try:
self._failsafe_main()
finally:
for f in self.input_files:
f.close()
if self.connection:
self.connection.close()
def _failsafe_main(self):
"""
Inner main function. If anything fails in here, file handles and
database connections will be safely closed.
"""
if self.connection:
transaction = self.connection.begin()
for f in self.input_files:
try:
# Try to use name specified via --tables
table_name = self.table_names.pop(0)
except IndexError:
if f == sys.stdin:
table_name = "stdin"
else:
# Use filename as table name
table_name = os.path.splitext(os.path.basename(f.name))[0]
table = None
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
try:
table = agate.Table.from_csv(
f,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
column_types=self.get_column_types(),
**self.reader_kwargs
)
except StopIteration:
# Catch cases where no table data was provided and fall through
# to query logic
continue
if table:
if self.connection:
if self.args.before_insert:
for query in self.args.before_insert.split(';'):
self.connection.execute(query)
table.to_sql(
self.connection,
table_name,
overwrite=self.args.overwrite,
create=not self.args.no_create,
create_if_not_exists=self.args.create_if_not_exists,
insert=self.args.insert and len(table.rows) > 0,
prefixes=self.args.prefix,
db_schema=self.args.db_schema,
constraints=not self.args.no_constraints,
unique_constraint=self.unique_constraint,
chunk_size=self.args.chunk_size
)
if self.args.after_insert:
for query in self.args.after_insert.split(';'):
self.connection.execute(query)
# Output SQL statements
else:
statement = table.to_sql_create_statement(
table_name,
dialect=self.args.dialect,
db_schema=self.args.db_schema,
constraints=not self.args.no_constraints,
unique_constraint=self.unique_constraint
)
self.output_file.write('%s\n' % statement)
if self.connection:
if self.args.query:
if os.path.exists(self.args.query):
with open(self.args.query, 'r') as f:
query = f.read()
else:
query = self.args.query
# Execute the specified SQL queries.
queries = query.split(';')
rows = None
for q in queries:
if q.strip():
rows = self.connection.execute(q)
# Output the result of the last query as CSV
if rows.returns_rows:
output = agate.csv.writer(self.output_file, **self.writer_kwargs)
output.writerow(rows._metadata.keys)
for row in rows:
output.writerow(row)
transaction.commit()
def launch_new_instance():
utility = CSVSQL()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| #!/usr/bin/env python
import os.path
import sys
import agate
import agatesql # noqa: F401
import six
from pkg_resources import iter_entry_points
from sqlalchemy import create_engine, dialects
from csvkit.cli import CSVKitUtility, isatty
DIALECTS = dialects.__all__ + tuple(e.name for e in iter_entry_points('sqlalchemy.dialects'))
class CSVSQL(CSVKitUtility):
description = 'Generate SQL statements for one or more CSV files, or execute those statements directly on a ' \
'database, and execute one or more SQL queries.'
# Override 'f' because the utility accepts multiple files.
override_flags = ['f']
def add_arguments(self):
self.argparser.add_argument(
metavar='FILE', nargs='*', dest='input_paths', default=['-'],
help='The CSV file(s) to operate on. If omitted, will accept input as piped data via STDIN.')
self.argparser.add_argument(
'-i', '--dialect', dest='dialect', choices=DIALECTS,
help='Dialect of SQL to generate. Cannot be used with --db.')
self.argparser.add_argument(
'--db', dest='connection_string',
help='If present, a SQLAlchemy connection string to use to directly execute generated SQL on a database.')
self.argparser.add_argument(
'--query', dest='queries', action='append',
help='Execute one or more SQL queries delimited by ";" and output the result of the last query as CSV. '
'QUERY may be a filename. --query may be specified multiple times.')
self.argparser.add_argument(
'--insert', dest='insert', action='store_true',
help='Insert the data into the table. Requires --db.')
self.argparser.add_argument(
'--prefix', action='append', default=[],
help='Add an expression following the INSERT keyword, like OR IGNORE or OR REPLACE.')
self.argparser.add_argument(
'--before-insert', dest='before_insert',
help='Execute SQL before the INSERT command. Requires --insert.')
self.argparser.add_argument(
'--after-insert', dest='after_insert',
help='Execute SQL after the INSERT command. Requires --insert.')
self.argparser.add_argument(
'--tables', dest='table_names',
help='A comma-separated list of names of tables to be created. By default, the tables will be named after '
'the filenames without extensions or "stdin".')
self.argparser.add_argument(
'--no-constraints', dest='no_constraints', action='store_true',
help='Generate a schema without length limits or null checks. Useful when sampling big tables.')
self.argparser.add_argument(
'--unique-constraint', dest='unique_constraint',
help='A comma-separated list of names of columns to include in a UNIQUE constraint.')
self.argparser.add_argument(
'--no-create', dest='no_create', action='store_true',
help='Skip creating the table. Requires --insert.')
self.argparser.add_argument(
'--create-if-not-exists', dest='create_if_not_exists', action='store_true',
help='Create the table if it does not exist, otherwise keep going. Requires --insert.')
self.argparser.add_argument(
'--overwrite', dest='overwrite', action='store_true',
help='Drop the table if it already exists. Requires --insert. Cannot be used with --no-create.')
self.argparser.add_argument(
'--db-schema', dest='db_schema',
help='Optional name of database schema to create table(s) in.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
self.argparser.add_argument(
'-I', '--no-inference', dest='no_inference', action='store_true',
help='Disable type inference when parsing the input.')
self.argparser.add_argument(
'--chunk-size', dest='chunk_size', type=int,
help='Chunk size for batch insert into the table. Requires --insert.')
def main(self):
if isatty(sys.stdin) and not self.args.input_paths:
self.argparser.error('You must provide an input file or piped data.')
self.input_files = []
self.connection = None
self.table_names = []
self.unique_constraint = []
if self.args.table_names:
self.table_names = self.args.table_names.split(',')
if self.args.unique_constraint:
self.unique_constraint = self.args.unique_constraint.split(',')
# Create an SQLite database in memory if no connection string is specified
if self.args.queries and not self.args.connection_string:
self.args.connection_string = "sqlite:///:memory:"
self.args.insert = True
if self.args.dialect and self.args.connection_string:
self.argparser.error('The --dialect option is only valid when neither --db nor --query are specified.')
if self.args.insert and not self.args.connection_string:
self.argparser.error('The --insert option is only valid when either --db or --query is specified.')
if self.args.no_create and not self.args.insert:
self.argparser.error('The --no-create option is only valid if --insert is also specified.')
if self.args.create_if_not_exists and not self.args.insert:
self.argparser.error('The --create-if-not-exists option is only valid if --insert is also specified.')
if self.args.overwrite and not self.args.insert:
self.argparser.error('The --overwrite option is only valid if --insert is also specified.')
if self.args.overwrite and self.args.no_create:
self.argparser.error('The --overwrite option is only valid if --no-create is not specified.')
if self.args.before_insert and not self.args.insert:
self.argparser.error('The --before-insert option is only valid if --insert is also specified.')
if self.args.after_insert and not self.args.insert:
self.argparser.error('The --after-insert option is only valid if --insert is also specified.')
if self.args.chunk_size and not self.args.insert:
self.argparser.error('The --chunk-size option is only valid if --insert is also specified.')
if self.args.no_create and self.args.create_if_not_exists:
self.argparser.error('The --no-create and --create-if-not-exists options are mutually exclusive.')
# Lazy open files
for path in self.args.input_paths:
self.input_files.append(self._open_input_file(path))
# Establish database validity before reading CSV files
if self.args.connection_string:
try:
engine = create_engine(self.args.connection_string)
except ImportError as e:
six.raise_from(ImportError(
"You don't appear to have the necessary database backend installed for connection string you're "
"trying to use. Available backends include:\n\nPostgreSQL:\tpip install psycopg2\nMySQL:\t\tpip "
"install mysql-connector-python OR pip install mysqlclient\n\nFor details on connection strings "
"and other backends, please see the SQLAlchemy documentation on dialects at:\n\n"
"http://www.sqlalchemy.org/docs/dialects/\n\n"
), e)
self.connection = engine.connect()
try:
self._failsafe_main()
finally:
for f in self.input_files:
f.close()
if self.connection:
self.connection.close()
def _failsafe_main(self):
"""
Inner main function. If anything fails in here, file handles and
database connections will be safely closed.
"""
if self.connection:
transaction = self.connection.begin()
for f in self.input_files:
try:
# Try to use name specified via --tables
table_name = self.table_names.pop(0)
except IndexError:
if f == sys.stdin:
table_name = "stdin"
else:
# Use filename as table name
table_name = os.path.splitext(os.path.basename(f.name))[0]
table = None
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
try:
table = agate.Table.from_csv(
f,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
column_types=self.get_column_types(),
**self.reader_kwargs
)
except StopIteration:
# Catch cases where no table data was provided and fall through
# to query logic
continue
if table:
if self.connection:
if self.args.before_insert:
for query in self.args.before_insert.split(';'):
self.connection.execute(query)
table.to_sql(
self.connection,
table_name,
overwrite=self.args.overwrite,
create=not self.args.no_create,
create_if_not_exists=self.args.create_if_not_exists,
insert=self.args.insert and len(table.rows) > 0,
prefixes=self.args.prefix,
db_schema=self.args.db_schema,
constraints=not self.args.no_constraints,
unique_constraint=self.unique_constraint,
chunk_size=self.args.chunk_size
)
if self.args.after_insert:
for query in self.args.after_insert.split(';'):
self.connection.execute(query)
# Output SQL statements
else:
statement = table.to_sql_create_statement(
table_name,
dialect=self.args.dialect,
db_schema=self.args.db_schema,
constraints=not self.args.no_constraints,
unique_constraint=self.unique_constraint
)
self.output_file.write('%s\n' % statement)
if self.connection:
if self.args.queries:
queries = []
for query in self.args.queries:
if os.path.exists(query):
with open(query, 'r') as f:
query = f.read()
queries += query.split(';')
# Execute the specified SQL queries.
rows = None
for query in queries:
if query.strip():
rows = self.connection.execute(query)
# Output the result of the last query as CSV
if rows.returns_rows:
output = agate.csv.writer(self.output_file, **self.writer_kwargs)
output.writerow(rows._metadata.keys)
for row in rows:
output.writerow(row)
transaction.commit()
def launch_new_instance():
utility = CSVSQL()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| badbunnyyy | bb34039742b0e91ce9cc26039c4292ec258fcdd1 | a758c2a1e4e636b6c66cd3d935503d2786fc53a4 | This means that `rows` will be set to whatever the last query returns, right? Wouldn't it be better to store all the rows from all queries in a (flat) list? | slhck | 7 |
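For context, a minimal sketch of the accumulation the reviewer is suggesting (hypothetical, not part of the PR as written): instead of rebinding `rows` on each statement so that only the last result survives, results could be flattened into a single list. The `queries` and `connection` names below are assumed to match those in `_failsafe_main`.

```python
# Hypothetical variant of the query loop: keep the rows from every
# statement that returns rows, instead of only the last result.
all_rows = []
header = None
for q in queries:
    if q.strip():
        result = connection.execute(q)
        if result.returns_rows:
            if header is None:
                header = result._metadata.keys
            all_rows.extend(result)  # flatten results across queries
```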
wireservice/csvkit | 1,166 | Feature: csvsql: specify --query multiple times #1160 | Hi jpmckinney,
I hope this is what you need. Thank you for your effort.
Best,
Stefan / badbunnyyy
| null | 2022-03-07 11:23:18+00:00 | 2022-09-06 17:32:14+00:00 | csvkit/utilities/csvsql.py | #!/usr/bin/env python
import os.path
import sys
import agate
import agatesql # noqa: F401
import six
from pkg_resources import iter_entry_points
from sqlalchemy import create_engine, dialects
from csvkit.cli import CSVKitUtility, isatty
DIALECTS = dialects.__all__ + tuple(e.name for e in iter_entry_points('sqlalchemy.dialects'))
class CSVSQL(CSVKitUtility):
description = 'Generate SQL statements for one or more CSV files, or execute those statements directly on a ' \
'database, and execute one or more SQL queries.'
# Override 'f' because the utility accepts multiple files.
override_flags = ['f']
def add_arguments(self):
self.argparser.add_argument(
metavar='FILE', nargs='*', dest='input_paths', default=['-'],
help='The CSV file(s) to operate on. If omitted, will accept input as piped data via STDIN.')
self.argparser.add_argument(
'-i', '--dialect', dest='dialect', choices=DIALECTS,
help='Dialect of SQL to generate. Cannot be used with --db.')
self.argparser.add_argument(
'--db', dest='connection_string',
help='If present, a SQLAlchemy connection string to use to directly execute generated SQL on a database.')
self.argparser.add_argument(
'--query',
help='Execute one or more SQL queries delimited by ";" and output the result of the last query as CSV. '
'QUERY may be a filename.')
self.argparser.add_argument(
'--insert', dest='insert', action='store_true',
help='Insert the data into the table. Requires --db.')
self.argparser.add_argument(
'--prefix', action='append', default=[],
help='Add an expression following the INSERT keyword, like OR IGNORE or OR REPLACE.')
self.argparser.add_argument(
'--before-insert', dest='before_insert',
help='Execute SQL before the INSERT command. Requires --insert.')
self.argparser.add_argument(
'--after-insert', dest='after_insert',
help='Execute SQL after the INSERT command. Requires --insert.')
self.argparser.add_argument(
'--tables', dest='table_names',
help='A comma-separated list of names of tables to be created. By default, the tables will be named after '
'the filenames without extensions or "stdin".')
self.argparser.add_argument(
'--no-constraints', dest='no_constraints', action='store_true',
help='Generate a schema without length limits or null checks. Useful when sampling big tables.')
self.argparser.add_argument(
'--unique-constraint', dest='unique_constraint',
help='A comma-separated list of names of columns to include in a UNIQUE constraint.')
self.argparser.add_argument(
'--no-create', dest='no_create', action='store_true',
help='Skip creating the table. Requires --insert.')
self.argparser.add_argument(
'--create-if-not-exists', dest='create_if_not_exists', action='store_true',
help='Create the table if it does not exist, otherwise keep going. Requires --insert.')
self.argparser.add_argument(
'--overwrite', dest='overwrite', action='store_true',
help='Drop the table if it already exists. Requires --insert. Cannot be used with --no-create.')
self.argparser.add_argument(
'--db-schema', dest='db_schema',
help='Optional name of database schema to create table(s) in.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
self.argparser.add_argument(
'-I', '--no-inference', dest='no_inference', action='store_true',
help='Disable type inference when parsing the input.')
self.argparser.add_argument(
'--chunk-size', dest='chunk_size', type=int,
help='Chunk size for batch insert into the table. Requires --insert.')
def main(self):
if isatty(sys.stdin) and not self.args.input_paths:
self.argparser.error('You must provide an input file or piped data.')
self.input_files = []
self.connection = None
self.table_names = []
self.unique_constraint = []
if self.args.table_names:
self.table_names = self.args.table_names.split(',')
if self.args.unique_constraint:
self.unique_constraint = self.args.unique_constraint.split(',')
# Create an SQLite database in memory if no connection string is specified
if self.args.query and not self.args.connection_string:
self.args.connection_string = "sqlite:///:memory:"
self.args.insert = True
if self.args.dialect and self.args.connection_string:
self.argparser.error('The --dialect option is only valid when neither --db nor --query are specified.')
if self.args.insert and not self.args.connection_string:
self.argparser.error('The --insert option is only valid when either --db or --query is specified.')
if self.args.no_create and not self.args.insert:
self.argparser.error('The --no-create option is only valid if --insert is also specified.')
if self.args.create_if_not_exists and not self.args.insert:
self.argparser.error('The --create-if-not-exists option is only valid if --insert is also specified.')
if self.args.overwrite and not self.args.insert:
self.argparser.error('The --overwrite option is only valid if --insert is also specified.')
if self.args.overwrite and self.args.no_create:
self.argparser.error('The --overwrite option is only valid if --no-create is not specified.')
if self.args.before_insert and not self.args.insert:
self.argparser.error('The --before-insert option is only valid if --insert is also specified.')
if self.args.after_insert and not self.args.insert:
self.argparser.error('The --after-insert option is only valid if --insert is also specified.')
if self.args.chunk_size and not self.args.insert:
self.argparser.error('The --chunk-size option is only valid if --insert is also specified.')
if self.args.no_create and self.args.create_if_not_exists:
self.argparser.error('The --no-create and --create-if-not-exists options are mutually exclusive.')
# Lazy open files
for path in self.args.input_paths:
self.input_files.append(self._open_input_file(path))
# Establish database validity before reading CSV files
if self.args.connection_string:
try:
engine = create_engine(self.args.connection_string)
except ImportError as e:
six.raise_from(ImportError(
"You don't appear to have the necessary database backend installed for connection string you're "
"trying to use. Available backends include:\n\nPostgreSQL:\tpip install psycopg2\nMySQL:\t\tpip "
"install mysql-connector-python OR pip install mysqlclient\n\nFor details on connection strings "
"and other backends, please see the SQLAlchemy documentation on dialects at:\n\n"
"http://www.sqlalchemy.org/docs/dialects/\n\n"
), e)
self.connection = engine.connect()
try:
self._failsafe_main()
finally:
for f in self.input_files:
f.close()
if self.connection:
self.connection.close()
def _failsafe_main(self):
"""
Inner main function. If anything fails in here, file handles and
database connections will be safely closed.
"""
if self.connection:
transaction = self.connection.begin()
for f in self.input_files:
try:
# Try to use name specified via --tables
table_name = self.table_names.pop(0)
except IndexError:
if f == sys.stdin:
table_name = "stdin"
else:
# Use filename as table name
table_name = os.path.splitext(os.path.basename(f.name))[0]
table = None
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
try:
table = agate.Table.from_csv(
f,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
column_types=self.get_column_types(),
**self.reader_kwargs
)
except StopIteration:
# Catch cases where no table data was provided and fall through
# to query logic
continue
if table:
if self.connection:
if self.args.before_insert:
for query in self.args.before_insert.split(';'):
self.connection.execute(query)
table.to_sql(
self.connection,
table_name,
overwrite=self.args.overwrite,
create=not self.args.no_create,
create_if_not_exists=self.args.create_if_not_exists,
insert=self.args.insert and len(table.rows) > 0,
prefixes=self.args.prefix,
db_schema=self.args.db_schema,
constraints=not self.args.no_constraints,
unique_constraint=self.unique_constraint,
chunk_size=self.args.chunk_size
)
if self.args.after_insert:
for query in self.args.after_insert.split(';'):
self.connection.execute(query)
# Output SQL statements
else:
statement = table.to_sql_create_statement(
table_name,
dialect=self.args.dialect,
db_schema=self.args.db_schema,
constraints=not self.args.no_constraints,
unique_constraint=self.unique_constraint
)
self.output_file.write('%s\n' % statement)
if self.connection:
if self.args.query:
if os.path.exists(self.args.query):
with open(self.args.query, 'r') as f:
query = f.read()
else:
query = self.args.query
# Execute the specified SQL queries.
queries = query.split(';')
rows = None
for q in queries:
if q.strip():
rows = self.connection.execute(q)
# Output the result of the last query as CSV
if rows.returns_rows:
output = agate.csv.writer(self.output_file, **self.writer_kwargs)
output.writerow(rows._metadata.keys)
for row in rows:
output.writerow(row)
transaction.commit()
def launch_new_instance():
utility = CSVSQL()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| #!/usr/bin/env python
import os.path
import sys
import agate
import agatesql # noqa: F401
import six
from pkg_resources import iter_entry_points
from sqlalchemy import create_engine, dialects
from csvkit.cli import CSVKitUtility, isatty
DIALECTS = dialects.__all__ + tuple(e.name for e in iter_entry_points('sqlalchemy.dialects'))
class CSVSQL(CSVKitUtility):
description = 'Generate SQL statements for one or more CSV files, or execute those statements directly on a ' \
'database, and execute one or more SQL queries.'
# Override 'f' because the utility accepts multiple files.
override_flags = ['f']
def add_arguments(self):
self.argparser.add_argument(
metavar='FILE', nargs='*', dest='input_paths', default=['-'],
help='The CSV file(s) to operate on. If omitted, will accept input as piped data via STDIN.')
self.argparser.add_argument(
'-i', '--dialect', dest='dialect', choices=DIALECTS,
help='Dialect of SQL to generate. Cannot be used with --db.')
self.argparser.add_argument(
'--db', dest='connection_string',
help='If present, a SQLAlchemy connection string to use to directly execute generated SQL on a database.')
self.argparser.add_argument(
'--query', dest='queries', action='append',
help='Execute one or more SQL queries delimited by ";" and output the result of the last query as CSV. '
'QUERY may be a filename. --query may be specified multiple times.')
self.argparser.add_argument(
'--insert', dest='insert', action='store_true',
help='Insert the data into the table. Requires --db.')
self.argparser.add_argument(
'--prefix', action='append', default=[],
help='Add an expression following the INSERT keyword, like OR IGNORE or OR REPLACE.')
self.argparser.add_argument(
'--before-insert', dest='before_insert',
help='Execute SQL before the INSERT command. Requires --insert.')
self.argparser.add_argument(
'--after-insert', dest='after_insert',
help='Execute SQL after the INSERT command. Requires --insert.')
self.argparser.add_argument(
'--tables', dest='table_names',
help='A comma-separated list of names of tables to be created. By default, the tables will be named after '
'the filenames without extensions or "stdin".')
self.argparser.add_argument(
'--no-constraints', dest='no_constraints', action='store_true',
help='Generate a schema without length limits or null checks. Useful when sampling big tables.')
self.argparser.add_argument(
'--unique-constraint', dest='unique_constraint',
help='A comma-separated list of names of columns to include in a UNIQUE constraint.')
self.argparser.add_argument(
'--no-create', dest='no_create', action='store_true',
help='Skip creating the table. Requires --insert.')
self.argparser.add_argument(
'--create-if-not-exists', dest='create_if_not_exists', action='store_true',
help='Create the table if it does not exist, otherwise keep going. Requires --insert.')
self.argparser.add_argument(
'--overwrite', dest='overwrite', action='store_true',
help='Drop the table if it already exists. Requires --insert. Cannot be used with --no-create.')
self.argparser.add_argument(
'--db-schema', dest='db_schema',
help='Optional name of database schema to create table(s) in.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
self.argparser.add_argument(
'-I', '--no-inference', dest='no_inference', action='store_true',
help='Disable type inference when parsing the input.')
self.argparser.add_argument(
'--chunk-size', dest='chunk_size', type=int,
help='Chunk size for batch insert into the table. Requires --insert.')
def main(self):
if isatty(sys.stdin) and not self.args.input_paths:
self.argparser.error('You must provide an input file or piped data.')
self.input_files = []
self.connection = None
self.table_names = []
self.unique_constraint = []
if self.args.table_names:
self.table_names = self.args.table_names.split(',')
if self.args.unique_constraint:
self.unique_constraint = self.args.unique_constraint.split(',')
# Create an SQLite database in memory if no connection string is specified
if self.args.queries and not self.args.connection_string:
self.args.connection_string = "sqlite:///:memory:"
self.args.insert = True
if self.args.dialect and self.args.connection_string:
self.argparser.error('The --dialect option is only valid when neither --db nor --query are specified.')
if self.args.insert and not self.args.connection_string:
self.argparser.error('The --insert option is only valid when either --db or --query is specified.')
if self.args.no_create and not self.args.insert:
self.argparser.error('The --no-create option is only valid if --insert is also specified.')
if self.args.create_if_not_exists and not self.args.insert:
self.argparser.error('The --create-if-not-exists option is only valid if --insert is also specified.')
if self.args.overwrite and not self.args.insert:
self.argparser.error('The --overwrite option is only valid if --insert is also specified.')
if self.args.overwrite and self.args.no_create:
self.argparser.error('The --overwrite option is only valid if --no-create is not specified.')
if self.args.before_insert and not self.args.insert:
self.argparser.error('The --before-insert option is only valid if --insert is also specified.')
if self.args.after_insert and not self.args.insert:
self.argparser.error('The --after-insert option is only valid if --insert is also specified.')
if self.args.chunk_size and not self.args.insert:
self.argparser.error('The --chunk-size option is only valid if --insert is also specified.')
if self.args.no_create and self.args.create_if_not_exists:
self.argparser.error('The --no-create and --create-if-not-exists options are mutually exclusive.')
# Lazy open files
for path in self.args.input_paths:
self.input_files.append(self._open_input_file(path))
# Establish database validity before reading CSV files
if self.args.connection_string:
try:
engine = create_engine(self.args.connection_string)
except ImportError as e:
six.raise_from(ImportError(
"You don't appear to have the necessary database backend installed for connection string you're "
"trying to use. Available backends include:\n\nPostgreSQL:\tpip install psycopg2\nMySQL:\t\tpip "
"install mysql-connector-python OR pip install mysqlclient\n\nFor details on connection strings "
"and other backends, please see the SQLAlchemy documentation on dialects at:\n\n"
"http://www.sqlalchemy.org/docs/dialects/\n\n"
), e)
self.connection = engine.connect()
try:
self._failsafe_main()
finally:
for f in self.input_files:
f.close()
if self.connection:
self.connection.close()
def _failsafe_main(self):
"""
Inner main function. If anything fails in here, file handles and
database connections will be safely closed.
"""
if self.connection:
transaction = self.connection.begin()
for f in self.input_files:
try:
# Try to use name specified via --tables
table_name = self.table_names.pop(0)
except IndexError:
if f == sys.stdin:
table_name = "stdin"
else:
# Use filename as table name
table_name = os.path.splitext(os.path.basename(f.name))[0]
table = None
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
try:
table = agate.Table.from_csv(
f,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
column_types=self.get_column_types(),
**self.reader_kwargs
)
except StopIteration:
# Catch cases where no table data was provided and fall through
# to query logic
continue
if table:
if self.connection:
if self.args.before_insert:
for query in self.args.before_insert.split(';'):
self.connection.execute(query)
table.to_sql(
self.connection,
table_name,
overwrite=self.args.overwrite,
create=not self.args.no_create,
create_if_not_exists=self.args.create_if_not_exists,
insert=self.args.insert and len(table.rows) > 0,
prefixes=self.args.prefix,
db_schema=self.args.db_schema,
constraints=not self.args.no_constraints,
unique_constraint=self.unique_constraint,
chunk_size=self.args.chunk_size
)
if self.args.after_insert:
for query in self.args.after_insert.split(';'):
self.connection.execute(query)
# Output SQL statements
else:
statement = table.to_sql_create_statement(
table_name,
dialect=self.args.dialect,
db_schema=self.args.db_schema,
constraints=not self.args.no_constraints,
unique_constraint=self.unique_constraint
)
self.output_file.write('%s\n' % statement)
if self.connection:
if self.args.queries:
queries = []
for query in self.args.queries:
if os.path.exists(query):
with open(query, 'r') as f:
query = f.read()
queries += query.split(';')
# Execute the specified SQL queries.
rows = None
for query in queries:
if query.strip():
rows = self.connection.execute(query)
# Output the result of the last query as CSV
if rows.returns_rows:
output = agate.csv.writer(self.output_file, **self.writer_kwargs)
output.writerow(rows._metadata.keys)
for row in rows:
output.writerow(row)
transaction.commit()
def launch_new_instance():
utility = CSVSQL()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| badbunnyyy | bb34039742b0e91ce9cc26039c4292ec258fcdd1 | a758c2a1e4e636b6c66cd3d935503d2786fc53a4 | Yes, that is the existing functionality, so not a problem with the PR. | jpmckinney | 8 |
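For readers skimming the diff: the core of this change is switching `--query` to argparse's `action='append'`, so that repeated flags accumulate into the `queries` list, which `_failsafe_main` then splits on ";". A standalone sketch of that argparse behavior (illustrative values only):

```python
import argparse

# action='append' collects one list entry per occurrence of the flag.
parser = argparse.ArgumentParser()
parser.add_argument('--query', dest='queries', action='append')

args = parser.parse_args(['--query', 'SELECT 1;SELECT 2', '--query', 'SELECT 3'])
print(args.queries)  # ['SELECT 1;SELECT 2', 'SELECT 3']
```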
wireservice/csvkit | 1,140 | Speedup `csvstat` by using a sniff-limit and faster `get_freq` implementation | I use `csvkit` a lot and absolutely love it. `csvkit` (and `csvstat` in particular) can be very slow on large CSVs, and I decided to look into it. I used [snakeviz](https://jiffyclub.github.io/snakeviz/) to poke around.
```bash
# Download a large dataset
~/Code/csvkit (master) $ curl https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2020-financial-year-provisional/Download-data/annual-enterprise-survey-2020-financial-year-provisional-csv.csv > nz.csv
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5743k 100 5743k 0 0 742k 0 0:00:07 0:00:07 --:--:-- 924k
# See performance on master (~24 seconds)
~/Code/csvkit (master) $ time csvstat nz.csv > before.txt
csvstat nz.csv > before.txt 24.07s user 0.20s system 98% cpu 24.591 total
# See performance on feature branch (~2.6 seconds)
~/Code/csvkit (master) $ git checkout sniff-limit
Switched to branch 'sniff-limit'
~/Code/csvkit (sniff-limit) $ time csvstat nz.csv > after.txt
csvstat nz.csv > after.txt 2.38s user 0.15s system 96% cpu 2.629 total
# Check contents are identical
~/Code/csvkit (sniff-limit) $ diff before.txt after.txt
~/Code/csvkit (sniff-limit) $
```
```
This results in a 10x speedup for very large CSVs! I'll leave notes alongside each change I'm making. | null | 2021-09-06 23:19:04+00:00 | 2021-09-14 21:29:58+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import warnings
from collections import OrderedDict
from decimal import Decimal
import agate
import six
from babel.numbers import format_decimal
from csvkit.cli import CSVKitUtility, parse_column_identifiers
NoneType = type(None)
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int,
help='Limit CSV dialect sniffing to the specified number of bytes. Specify "0" to disable sniffing.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=self.args.sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat, locale=agate.config.get_option('default_locale'))
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row[column_name]), row['Count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v, locale=agate.config.get_option('default_locale'))
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row[column_name]
if self.is_finite_decimal(v):
v = format_decimal(v, locale=agate.config.get_option('default_locale'))
else:
v = six.text_type(row[column_name])
self.output_file.write(u'{} ({}x)\n'.format(v, row['Count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row[column_name]) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
return table.pivot(column_id).order_by('Count', reverse=True).limit(freq_count)
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d):
return locale.format_string('%.3f', d, grouping=True).rstrip('0').rstrip('.')
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| dannysepler | 2eff26b17a3016f7a137e8fac54e3af2c6521da8 | f2eb03c88cd3f57a8c76349914fb09db57e3c29e | Currently, `csvkit` functions do not use a sniff-limit by default. This feels problematic, as very large CSVs can spend a huge amount of time just figuring out the delimiter, which is inferable from a small sample.
If we look at [Python's documentation](https://docs.python.org/3/library/csv.html#csv.Sniffer), the string input is called a `sample`, and its examples read 1024 bytes. I chose 1024 to align with this, but am okay changing the default! | dannysepler | 9 |
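A small illustration of the sniffing pattern this comment describes (a standalone sketch; `data.csv` is a placeholder, and 1024 mirrors the sample size discussed above):

```python
import csv

with open('data.csv', newline='') as f:
    sample = f.read(1024)              # sniff only the first 1024 bytes
    dialect = csv.Sniffer().sniff(sample)
    f.seek(0)                          # rewind before reading the full file
    for row in csv.reader(f, dialect):
        print(row)
```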
wireservice/csvkit | 1,140 | Speedup `csvstat` by using a sniff-limit and faster `get_freq` implementation | I use `csvkit` a lot and absolutely love it. `csvkit` (and `csvstat` in particular) can be very slow on large CSVs, and I decided to look into it. I used [snakeviz](https://jiffyclub.github.io/snakeviz/) to poke around.
```bash
# Download a large dataset
~/Code/csvkit (master) $ curl https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2020-financial-year-provisional/Download-data/annual-enterprise-survey-2020-financial-year-provisional-csv.csv > nz.csv
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5743k 100 5743k 0 0 742k 0 0:00:07 0:00:07 --:--:-- 924k
# See performance on master (~24 seconds)
~/Code/csvkit (master) $ time csvstat nz.csv > before.txt
csvstat nz.csv > before.txt 24.07s user 0.20s system 98% cpu 24.591 total
# See performance on feature branch (~2.6 seconds)
~/Code/csvkit (master) $ git checkout sniff-limit
Switched to branch 'sniff-limit'
~/Code/csvkit (sniff-limit) $ time csvstat nz.csv > after.txt
csvstat nz.csv > after.txt 2.38s user 0.15s system 96% cpu 2.629 total
# Check contents are identical
~/Code/csvkit (sniff-limit) $ diff before.txt after.txt
~/Code/csvkit (sniff-limit) $
```
```
This results in a 10x speedup for very large CSVs! I'll leave notes alongside each change I'm making. | null | 2021-09-06 23:19:04+00:00 | 2021-09-14 21:29:58+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import warnings
from collections import OrderedDict
from decimal import Decimal
import agate
import six
from babel.numbers import format_decimal
from csvkit.cli import CSVKitUtility, parse_column_identifiers
NoneType = type(None)
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
help='Only output whether columns contain nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int,
help='Limit CSV dialect sniffing to the specified number of bytes. Specify "0" to disable sniffing.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=self.args.sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat, locale=agate.config.get_option('default_locale'))
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row[column_name]), row['Count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v, locale=agate.config.get_option('default_locale'))
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row[column_name]
if self.is_finite_decimal(v):
v = format_decimal(v, locale=agate.config.get_option('default_locale'))
else:
v = six.text_type(row[column_name])
self.output_file.write(u'{} ({}x)\n'.format(v, row['Count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row[column_name]) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
return table.pivot(column_id).order_by('Count', reverse=True).limit(freq_count)
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
help='Only output whether columns contains nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d):
return locale.format_string('%.3f', d, grouping=True).rstrip('0').rstrip('.')
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| dannysepler | 2eff26b17a3016f7a137e8fac54e3af2c6521da8 | f2eb03c88cd3f57a8c76349914fb09db57e3c29e | the `.pivot` agate function is slow, mainly due to `.groupby` forking a given agate table and creating a ton of tiny tables. If we just use a plain ol' python counter, we can get the same results in a much faster / simpler way. | dannysepler | 10 |
wireservice/csvkit | 1140 | Speed up `csvstat` by using a sniff limit and a faster `get_freq` implementation | I use `csvkit` a lot and absolutely love it. `csvkit` (and `csvstat` in particular) can be very slow on large CSVs, and I decided to look into it. I used [snakeviz](https://jiffyclub.github.io/snakeviz/) to poke around.
```bash
# Download a large dataset
~/Code/csvkit (master) $ curl https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2020-financial-year-provisional/Download-data/annual-enterprise-survey-2020-financial-year-provisional-csv.csv > nz.csv
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5743k 100 5743k 0 0 742k 0 0:00:07 0:00:07 --:--:-- 924k
# See performance on master (~24 seconds)
~/Code/csvkit (master) $ time csvstat nz.csv > before.txt
csvstat nz.csv > before.txt 24.07s user 0.20s system 98% cpu 24.591 total
# See performance on feature branch (~2.6 seconds)
~/Code/csvkit (master) $ git checkout sniff-limit
Switched to branch 'sniff-limit'
~/Code/csvkit (sniff-limit) $ time csvstat nz.csv > after.txt
csvstat nz.csv > after.txt 2.38s user 0.15s system 96% cpu 2.629 total
# Check contents are identical
~/Code/csvkit (sniff-limit) $ diff before.txt after.txt
~/Code/csvkit (sniff-limit) $
```
This results in a 10x speedup for very large CSVs! I'll write some notes by each change I'm making. | null | 2021-09-06 23:19:04+00:00 | 2021-09-14 21:29:58+00:00 | csvkit/utilities/csvstat.py | #!/usr/bin/env python
import codecs
import warnings
from collections import OrderedDict
from decimal import Decimal
import agate
import six
from babel.numbers import format_decimal
from csvkit.cli import CSVKitUtility, parse_column_identifiers
NoneType = type(None)
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
help='Only output whether columns contains nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int,
help='Limit CSV dialect sniffing to the specified number of bytes. Specify "0" to disable sniffing.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=self.args.sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat, locale=agate.config.get_option('default_locale'))
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row[column_name]), row['Count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v, locale=agate.config.get_option('default_locale'))
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row[column_name]
if self.is_finite_decimal(v):
v = format_decimal(v, locale=agate.config.get_option('default_locale'))
else:
v = six.text_type(row[column_name])
self.output_file.write(u'{} ({}x)\n'.format(v, row['Count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row[column_name]) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
return table.pivot(column_id).order_by('Count', reverse=True).limit(freq_count)
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| #!/usr/bin/env python
import codecs
import locale
import warnings
from collections import Counter, OrderedDict
from decimal import Decimal
import agate
import six
from csvkit.cli import CSVKitUtility, parse_column_identifiers
locale.setlocale(locale.LC_ALL, '')
OPERATIONS = OrderedDict([
('type', {
'aggregation': None,
'label': 'Type of data: '
}),
('nulls', {
'aggregation': agate.HasNulls,
'label': 'Contains null values: '
}),
('unique', {
'aggregation': None,
'label': 'Unique values: '
}),
('min', {
'aggregation': agate.Min,
'label': 'Smallest value: '
}),
('max', {
'aggregation': agate.Max,
'label': 'Largest value: '
}),
('sum', {
'aggregation': agate.Sum,
'label': 'Sum: '
}),
('mean', {
'aggregation': agate.Mean,
'label': 'Mean: '
}),
('median', {
'aggregation': agate.Median,
'label': 'Median: '
}),
('stdev', {
'aggregation': agate.StDev,
'label': 'StDev: '
}),
('len', {
'aggregation': agate.MaxLength,
'label': 'Longest value: '
}),
('freq', {
'aggregation': None,
'label': 'Most common values: '
})
])
class CSVStat(CSVKitUtility):
description = 'Print descriptive statistics for each column in a CSV file.'
override_flags = ['L', 'blanks', 'date-format', 'datetime-format']
def add_arguments(self):
self.argparser.add_argument(
'--csv', dest='csv_output', action='store_true',
help='Output results as a CSV, rather than text.')
self.argparser.add_argument(
'-n', '--names', dest='names_only', action='store_true',
help='Display column names and indices from the input CSV and exit.')
self.argparser.add_argument(
'-c', '--columns', dest='columns',
help='A comma-separated list of column indices, names or ranges to be examined, e.g. "1,id,3-5". '
'Defaults to all columns.')
self.argparser.add_argument(
'--type', dest='type_only', action='store_true',
help='Only output data type.')
self.argparser.add_argument(
'--nulls', dest='nulls_only', action='store_true',
help='Only output whether columns contains nulls.')
self.argparser.add_argument(
'--unique', dest='unique_only', action='store_true',
help='Only output counts of unique values.')
self.argparser.add_argument(
'--min', dest='min_only', action='store_true',
help='Only output smallest values.')
self.argparser.add_argument(
'--max', dest='max_only', action='store_true',
help='Only output largest values.')
self.argparser.add_argument(
'--sum', dest='sum_only', action='store_true',
help='Only output sums.')
self.argparser.add_argument(
'--mean', dest='mean_only', action='store_true',
help='Only output means.')
self.argparser.add_argument(
'--median', dest='median_only', action='store_true',
help='Only output medians.')
self.argparser.add_argument(
'--stdev', dest='stdev_only', action='store_true',
help='Only output standard deviations.')
self.argparser.add_argument(
'--len', dest='len_only', action='store_true',
help='Only output the length of the longest values.')
self.argparser.add_argument(
'--freq', dest='freq_only', action='store_true',
help='Only output lists of frequent values.')
self.argparser.add_argument(
'--freq-count', dest='freq_count', type=int,
help='The maximum number of frequent values to display.')
self.argparser.add_argument(
'--count', dest='count_only', action='store_true',
help='Only output total row count.')
self.argparser.add_argument(
'-y', '--snifflimit', dest='sniff_limit', type=int, default=1024,
help='Limit CSV dialect sniffing to the specified number of bytes. '
'Specify "0" to disable sniffing entirely, or "-1" to sniff the entire file.')
def main(self):
if self.args.names_only:
self.print_column_names()
return
if self.additional_input_expected():
self.argparser.error('You must provide an input file or piped data.')
operations = [op for op in OPERATIONS.keys() if getattr(self.args, op + '_only')]
if len(operations) > 1:
self.argparser.error('Only one operation argument may be specified (--mean, --median, etc).')
if operations and self.args.csv_output:
self.argparser.error(
'You may not specify --csv and an operation (--mean, --median, etc) at the same time.')
if operations and self.args.count_only:
self.argparser.error(
'You may not specify --count and an operation (--mean, --median, etc) at the same time.')
if six.PY2:
self.output_file = codecs.getwriter('utf-8')(self.output_file)
if self.args.count_only:
count = len(list(agate.csv.reader(self.skip_lines(), **self.reader_kwargs)))
if not self.args.no_header_row:
count -= 1
self.output_file.write('%i\n' % count)
return
sniff_limit = self.args.sniff_limit if self.args.sniff_limit != -1 else None
table = agate.Table.from_csv(
self.input_file,
skip_lines=self.args.skip_lines,
sniff_limit=sniff_limit,
**self.reader_kwargs
)
column_ids = parse_column_identifiers(
self.args.columns,
table.column_names,
self.get_column_offset()
)
kwargs = {}
if self.args.freq_count:
kwargs['freq_count'] = self.args.freq_count
# Output a single stat
if operations:
if len(column_ids) == 1:
self.print_one(table, column_ids[0], operations[0], label=False, **kwargs)
else:
for column_id in column_ids:
self.print_one(table, column_id, operations[0], **kwargs)
else:
stats = {}
for column_id in column_ids:
stats[column_id] = self.calculate_stats(table, column_id, **kwargs)
# Output as CSV
if self.args.csv_output:
self.print_csv(table, column_ids, stats)
# Output all stats
else:
self.print_stats(table, column_ids, stats)
def is_finite_decimal(self, value):
return isinstance(value, Decimal) and value.is_finite()
def print_one(self, table, column_id, operation, label=True, **kwargs):
"""
Print data for a single statistic.
"""
column_name = table.column_names[column_id]
op_name = operation
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stat = getter(table, column_id, **kwargs)
else:
op = OPERATIONS[op_name]['aggregation']
stat = table.aggregate(op(column_id))
if self.is_finite_decimal(stat):
stat = format_decimal(stat)
except Exception:
stat = None
# Formatting
if op_name == 'freq':
stat = ', '.join([(u'"%s": %s' % (six.text_type(row['value']), row['count'])) for row in stat])
stat = u'{ %s }' % stat
if label:
self.output_file.write(u'%3i. %s: %s\n' % (column_id + 1, column_name, stat))
else:
self.output_file.write(u'%s\n' % stat)
def calculate_stats(self, table, column_id, **kwargs):
"""
Calculate stats for all valid operations.
"""
stats = {}
for op_name, op_data in OPERATIONS.items():
getter = globals().get('get_%s' % op_name, None)
with warnings.catch_warnings():
warnings.simplefilter('ignore', agate.NullCalculationWarning)
try:
if getter:
stats[op_name] = getter(table, column_id, **kwargs)
else:
op = op_data['aggregation']
v = table.aggregate(op(column_id))
if self.is_finite_decimal(v):
v = format_decimal(v)
stats[op_name] = v
except Exception:
stats[op_name] = None
return stats
def print_stats(self, table, column_ids, stats):
"""
Print data for all statistics.
"""
label_column_width = max([len(op_data['label']) for op_data in OPERATIONS.values()])
for column_id in column_ids:
column_name = table.column_names[column_id]
column = table.columns[column_id]
column_stats = stats[column_id]
self.output_file.write(('%3i. "%s"\n\n' % (column_id + 1, column_name)))
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
continue
label = u'{label:{label_column_width}}'.format(**{
'label_column_width': label_column_width,
'label': op_data['label']
})
if op_name == 'freq':
for i, row in enumerate(column_stats['freq']):
if i == 0:
self.output_file.write('\t{} '.format(label))
else:
self.output_file.write(u'\t{label:{label_column_width}} '.format(**{
'label_column_width': label_column_width,
'label': ''
}))
if isinstance(column.data_type, agate.Number):
v = row['value']
if self.is_finite_decimal(v):
v = format_decimal(v)
else:
v = six.text_type(row['value'])
self.output_file.write(u'{} ({}x)\n'.format(v, row['count']))
continue
v = column_stats[op_name]
if op_name == 'nulls' and v:
v = '%s (excluded from calculations)' % v
elif op_name == 'len':
v = '%s characters' % v
self.output_file.write(u'\t{} {}\n'.format(label, v))
self.output_file.write('\n')
self.output_file.write('Row count: %s\n' % len(table.rows))
def print_csv(self, table, column_ids, stats):
"""
Print data for all statistics as a csv table.
"""
writer = agate.csv.writer(self.output_file)
header = ['column_id', 'column_name'] + [op_name for op_name in OPERATIONS.keys()]
writer.writerow(header)
for column_id in column_ids:
column_name = table.column_names[column_id]
column_stats = stats[column_id]
output_row = [column_id + 1, column_name]
for op_name, op_data in OPERATIONS.items():
if column_stats[op_name] is None:
output_row.append(None)
continue
if op_name == 'freq':
value = ', '.join([six.text_type(row['value']) for row in column_stats['freq']])
else:
value = column_stats[op_name]
output_row.append(value)
writer.writerow(output_row)
def format_decimal(d):
return locale.format_string('%.3f', d, grouping=True).rstrip('0').rstrip('.')
def get_type(table, column_id, **kwargs):
return '%s' % table.columns[column_id].data_type.__class__.__name__
def get_unique(table, column_id, **kwargs):
return len(table.columns[column_id].values_distinct())
def get_freq(table, column_id, freq_count=5, **kwargs):
values = table.columns[column_id].values()
return [
{'value': r[0], 'count': r[1]}
for r in Counter(values).most_common(freq_count)
]
def launch_new_instance():
utility = CSVStat()
utility.run()
if __name__ == '__main__':
launch_new_instance()
| dannysepler | 2eff26b17a3016f7a137e8fac54e3af2c6521da8 | f2eb03c88cd3f57a8c76349914fb09db57e3c29e | The `babel` library has a large upfront cost. I re-implemented this function using python's `locale` library. | dannysepler | 11 |
wireservice/csvkit | 1140 | Speed up `csvstat` by using a sniff limit and a faster `get_freq` implementation | I use `csvkit` a lot and absolutely love it. `csvkit` (and `csvstat` in particular) can be very slow on large CSVs, and I decided to look into it. I used [snakeviz](https://jiffyclub.github.io/snakeviz/) to poke around.
```bash
# Download a large dataset
~/Code/csvkit (master) $ curl https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2020-financial-year-provisional/Download-data/annual-enterprise-survey-2020-financial-year-provisional-csv.csv > nz.csv
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5743k 100 5743k 0 0 742k 0 0:00:07 0:00:07 --:--:-- 924k
# See performance on master (~24 seconds)
~/Code/csvkit (master) $ time csvstat nz.csv > before.txt
csvstat nz.csv > before.txt 24.07s user 0.20s system 98% cpu 24.591 total
# See performance on feature branch (~2.6 seconds)
~/Code/csvkit (master) $ git checkout sniff-limit
Switched to branch 'sniff-limit'
~/Code/csvkit (sniff-limit) $ time csvstat nz.csv > after.txt
csvstat nz.csv > after.txt 2.38s user 0.15s system 96% cpu 2.629 total
# Check contents are identical
~/Code/csvkit (sniff-limit) $ diff before.txt after.txt
~/Code/csvkit (sniff-limit) $
```
This results in a 10x speedup for very large CSVs! I'll write some notes by each change I'm making. | null | 2021-09-06 23:19:04+00:00 | 2021-09-14 21:29:58+00:00 | tests/test_utilities/test_csvclean.py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import six
try:
from mock import patch
except ImportError:
from unittest.mock import patch
from csvkit.utilities.csvclean import CSVClean, launch_new_instance
from tests.utils import CSVKitTestCase, EmptyFileTests
class TestCSVClean(CSVKitTestCase, EmptyFileTests):
Utility = CSVClean
def assertCleaned(self, basename, output_lines, error_lines, additional_args=[]):
args = ['examples/%s.csv' % basename] + additional_args
output_file = six.StringIO()
utility = CSVClean(args, output_file)
utility.run()
output_file.close()
output_file = 'examples/%s_out.csv' % basename
error_file = 'examples/%s_err.csv' % basename
self.assertEqual(os.path.exists(output_file), bool(output_lines))
self.assertEqual(os.path.exists(error_file), bool(error_lines))
try:
if output_lines:
with open(output_file) as f:
for line in output_lines:
self.assertEqual(next(f), line)
self.assertRaises(StopIteration, next, f)
if error_lines:
with open(error_file) as f:
for line in error_lines:
self.assertEqual(next(f), line)
self.assertRaises(StopIteration, next, f)
finally:
if output_lines:
os.remove(output_file)
if error_lines:
os.remove(error_file)
def test_launch_new_instance(self):
with patch.object(sys, 'argv', [self.Utility.__name__.lower(), 'examples/bad.csv']):
launch_new_instance()
def test_skip_lines(self):
self.assertCleaned('bad_skip_lines', [
'column_a,column_b,column_c\n',
'0,mixed types.... uh oh,17\n',
], [
'line_number,msg,column_a,column_b,column_c\n',
'1,"Expected 3 columns, found 4 columns",1,27,,I\'m too long!\n',
'2,"Expected 3 columns, found 2 columns",,I\'m too short!\n',
], ['--skip-lines', '3'])
def test_simple(self):
self.assertCleaned('bad', [
'column_a,column_b,column_c\n',
'0,mixed types.... uh oh,17\n',
], [
'line_number,msg,column_a,column_b,column_c\n',
'1,"Expected 3 columns, found 4 columns",1,27,,I\'m too long!\n',
'2,"Expected 3 columns, found 2 columns",,I\'m too short!\n',
])
def test_no_header_row(self):
self.assertCleaned('no_header_row', [
'1,2,3\n',
], [])
def test_removes_optional_quote_characters(self):
self.assertCleaned('optional_quote_characters', [
'a,b,c\n',
'1,2,3\n',
], [])
def test_changes_line_endings(self):
self.assertCleaned('mac_newlines', [
'a,b,c\n',
'1,2,3\n',
'"Once upon\n',
'a time",5,6\n',
], [])
def test_changes_character_encoding(self):
self.assertCleaned('test_latin1', [
'a,b,c\n',
'1,2,3\n',
'4,5,©\n',
], [], ['-e', 'latin1'])
def test_removes_bom(self):
self.assertCleaned('test_utf8_bom', [
'foo,bar,baz\n',
'1,2,3\n',
'4,5,ʤ\n',
], [], [])
def test_dry_run(self):
output = self.get_output_as_io(['-n', 'examples/bad.csv'])
self.assertFalse(os.path.exists('examples/bad_err.csv'))
self.assertFalse(os.path.exists('examples/bad_out.csv'))
self.assertEqual(next(output)[:6], 'Line 1')
self.assertEqual(next(output)[:6], 'Line 2')
| #!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import six
try:
from mock import patch
except ImportError:
from unittest.mock import patch
from csvkit.utilities.csvclean import CSVClean, launch_new_instance
from tests.utils import CSVKitTestCase, EmptyFileTests
class TestCSVClean(CSVKitTestCase, EmptyFileTests):
Utility = CSVClean
    def tearDown(self):
        # Remove the stray file that running this test file leaves behind.
        output_file = 'stdin_out.csv'
        if os.path.isfile(output_file):
            os.remove(output_file)
def assertCleaned(self, basename, output_lines, error_lines, additional_args=[]):
args = ['examples/%s.csv' % basename] + additional_args
output_file = six.StringIO()
utility = CSVClean(args, output_file)
utility.run()
output_file.close()
output_file = 'examples/%s_out.csv' % basename
error_file = 'examples/%s_err.csv' % basename
self.assertEqual(os.path.exists(output_file), bool(output_lines))
self.assertEqual(os.path.exists(error_file), bool(error_lines))
try:
if output_lines:
with open(output_file) as f:
for line in output_lines:
self.assertEqual(next(f), line)
self.assertRaises(StopIteration, next, f)
if error_lines:
with open(error_file) as f:
for line in error_lines:
self.assertEqual(next(f), line)
self.assertRaises(StopIteration, next, f)
finally:
if output_lines:
os.remove(output_file)
if error_lines:
os.remove(error_file)
def test_launch_new_instance(self):
with patch.object(sys, 'argv', [self.Utility.__name__.lower(), 'examples/bad.csv']):
launch_new_instance()
def test_skip_lines(self):
self.assertCleaned('bad_skip_lines', [
'column_a,column_b,column_c\n',
'0,mixed types.... uh oh,17\n',
], [
'line_number,msg,column_a,column_b,column_c\n',
'1,"Expected 3 columns, found 4 columns",1,27,,I\'m too long!\n',
'2,"Expected 3 columns, found 2 columns",,I\'m too short!\n',
], ['--skip-lines', '3'])
def test_simple(self):
self.assertCleaned('bad', [
'column_a,column_b,column_c\n',
'0,mixed types.... uh oh,17\n',
], [
'line_number,msg,column_a,column_b,column_c\n',
'1,"Expected 3 columns, found 4 columns",1,27,,I\'m too long!\n',
'2,"Expected 3 columns, found 2 columns",,I\'m too short!\n',
])
def test_no_header_row(self):
self.assertCleaned('no_header_row', [
'1,2,3\n',
], [])
def test_removes_optional_quote_characters(self):
self.assertCleaned('optional_quote_characters', [
'a,b,c\n',
'1,2,3\n',
], [])
def test_changes_line_endings(self):
self.assertCleaned('mac_newlines', [
'a,b,c\n',
'1,2,3\n',
'"Once upon\n',
'a time",5,6\n',
], [])
def test_changes_character_encoding(self):
self.assertCleaned('test_latin1', [
'a,b,c\n',
'1,2,3\n',
'4,5,©\n',
], [], ['-e', 'latin1'])
def test_removes_bom(self):
self.assertCleaned('test_utf8_bom', [
'foo,bar,baz\n',
'1,2,3\n',
'4,5,ʤ\n',
], [], [])
def test_dry_run(self):
output = self.get_output_as_io(['-n', 'examples/bad.csv'])
self.assertFalse(os.path.exists('examples/bad_err.csv'))
self.assertFalse(os.path.exists('examples/bad_out.csv'))
self.assertEqual(next(output)[:6], 'Line 1')
self.assertEqual(next(output)[:6], 'Line 2')
| dannysepler | 2eff26b17a3016f7a137e8fac54e3af2c6521da8 | f2eb03c88cd3f57a8c76349914fb09db57e3c29e | this change is irrelevant, but running this test file leaves behind a `stdin_out.csv` file. just cleaning it up! | dannysepler | 12 |
bigcode-project/starcoder | 45 | Add hardware requirements section | null | null | 2023-05-25 16:50:25+00:00 | 2023-05-25 16:50:48+00:00 | README.md | # 💫 StarCoder
[Paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | [Model](https://huggingface.co/bigcode/starcoder) | [Playground](https://huggingface.co/spaces/bigcode/bigcode-playground) | [VSCode](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode) | [Chat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground)
# What is this about?
💫 StarCoder is a language model (LM) trained on source code and natural language text. Its training data incorporates more than 80 different programming languages, as well as text extracted from GitHub issues, commits, and notebooks. This repository showcases how we get an overview of this LM's capabilities.
# News
* **May 9, 2023:** We've fine-tuned StarCoder to act as a helpful coding assistant 💬! Check out the `chat/` directory for the training code and play with the model [here](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground).
# Disclaimer
Before you can use the model, go to `hf.co/bigcode/starcoder` and accept the agreement. Then make sure you are logged into the Hugging Face hub with:
```bash
huggingface-cli login
```
# Table of Contents
1. [Quickstart](#quickstart)
- [Installation](#installation)
- [Code generation with StarCoder](#code-generation)
- [Text-generation-inference code](#text-generation-inference)
2. [Fine-tuning](#fine-tuning)
- [Step by step installation with conda](#step-by-step-installation-with-conda)
- [Datasets](#datasets)
- [Stack Exchange](#stack-exchange-se)
- [Merging PEFT adapter layers](#merging-peft-adapter-layers)
# Quickstart
StarCoder was trained on GitHub code, so it can be used to perform code generation. More precisely, the model can complete the implementation of a function or infer the following characters in a line of code. This can be done with the help of 🤗's [transformers](https://github.com/huggingface/transformers) library.
## Installation
First, we have to install all the libraries listed in `requirements.txt`
```bash
pip install -r requirements.txt
```
## Code generation
The code generation pipeline is as follows
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoder"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# to save memory consider using fp16 or bf16 by specifying torch.dtype=torch.float16 for example
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
or
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
checkpoint = "bigcode/starcoder"
model = AutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
print(pipe("def hello():"))
```
## Text-generation-inference
```bash
docker run -p 8080:80 -v $PWD/data:/data -e HUGGING_FACE_HUB_TOKEN=<YOUR BIGCODE ENABLED TOKEN> -d ghcr.io/huggingface/text-generation-inference:latest --model-id bigcode/starcoder --max-total-tokens 8192
```
For more details, see [here](https://github.com/huggingface/text-generation-inference).
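As a hedged illustration (not part of this README), here is one way to query the server started above from Python; verify the request schema against the linked text-generation-inference documentation.

```python
# Hypothetical client-side sketch: query the text-generation-inference
# server started by the docker command above. Assumes the `requests`
# package and the default port mapping (8080).
import requests

response = requests.post(
    "http://127.0.0.1:8080/generate",
    json={"inputs": "def print_hello_world():", "parameters": {"max_new_tokens": 32}},
)
print(response.json())
```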
# Fine-tuning
Here, we showcase how we can fine-tune this LM on a specific downstream task.
## Step by step installation with conda
Create a new conda environment and activate it
```bash
conda create -n env
conda activate env
```
Install the `pytorch` version compatible with your version of cuda [here](https://pytorch.org/get-started/previous-versions/), for example the following command works with cuda 11.6
```bash
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```
Install `transformers` and `peft`
```bash
conda install -c huggingface transformers
pip install git+https://github.com/huggingface/peft.git
```
Note that you can install the latest development version of transformers by using
```bash
pip install git+https://github.com/huggingface/transformers
```
Install `datasets`, `accelerate` and `huggingface_hub`
```bash
conda install -c huggingface -c conda-forge datasets
conda install -c conda-forge accelerate
conda install -c conda-forge huggingface_hub
```
Finally, install `bitsandbytes` and `wandb`
```bash
pip install bitsandbytes
pip install wandb
```
To get the full list of arguments with descriptions you can run the following command on any script:
```bash
python scripts/some_script.py --help
```
Before you run any of the scripts make sure you are logged in and can push to the hub:
```bash
huggingface-cli login
```
Make sure you are logged in to `wandb`:
```bash
wandb login
```
Now that everything is done, you can clone the repository and get into the corresponding directory.
## Datasets
💫 StarCoder can be fine-tuned to achieve multiple downstream tasks. Our interest here is to fine-tune StarCoder in order to make it follow instructions. [Instruction fine-tuning](https://arxiv.org/pdf/2109.01652.pdf) has gained a lot of attention recently as it proposes a simple framework that teaches language models to align their outputs with human needs. That procedure requires the availability of quality instruction datasets, which contain multiple `instruction - answer` pairs. Unfortunately, such datasets are not ubiquitous, but thanks to Hugging Face 🤗's [datasets](https://github.com/huggingface/datasets) library we have access to some good proxies. To fine-tune cheaply and efficiently, we use Hugging Face 🤗's [PEFT](https://github.com/huggingface/peft) as well as Tim Dettmers' [bitsandbytes](https://github.com/TimDettmers/bitsandbytes).
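As a minimal, hedged sketch of that setup (not taken from `finetune/finetune.py`; the LoRA hyperparameters and target module names below are assumptions to check against the actual script):

```python
# Illustrative configuration only; see finetune/finetune.py for the
# values actually used in this repo.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoder",
    load_in_8bit=True,   # bitsandbytes 8-bit quantization to cut memory use
    device_map="auto",
)
lora_config = LoraConfig(
    r=16,                 # assumed adapter rank
    lora_alpha=32,        # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["c_proj", "c_attn", "q_attn"],  # assumed attention modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter layers are trainable
```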
### Stack Exchange SE
[Stack Exchange](https://en.wikipedia.org/wiki/Stack_Exchange) is a well-known network of Q&A websites on topics in diverse fields. It is a place where a user can ask a question and obtain answers from other users. Those answers are scored and ranked based on their quality. [Stack exchange instruction](https://huggingface.co/datasets/ArmelR/stack-exchange-instruction) is a dataset that was obtained by scraping the site in order to build a collection of Q&A pairs. A language model can then be fine-tuned on that dataset to make it exhibit strong and diverse question-answering skills.
To execute the fine-tuning script run the following command:
```bash
python finetune/finetune.py \
--model_path="bigcode/starcoder"\
--dataset_name="ArmelR/stack-exchange-instruction"\
--subset="data/finetune"\
--split="train"\
--size_valid_set 10000\
--streaming\
--seq_length 2048\
--max_steps 1000\
--batch_size 1\
--input_column_name="question"\
--output_column_name="response"\
--gradient_accumulation_steps 16\
--learning_rate 1e-4\
--lr_scheduler_type="cosine"\
--num_warmup_steps 100\
--weight_decay 0.05\
--output_dir="./checkpoints" \
```
The size of the SE dataset is more manageable when using streaming. We also have to specify the split of the dataset to use. For more details, check the [dataset's page](https://huggingface.co/datasets/ArmelR/stack-exchange-instruction) on 🤗. Similarly, we can modify the command to account for the availability of GPUs:
```bash
python -m torch.distributed.launch \
--nproc_per_node number_of_gpus finetune/finetune.py \
--model_path="bigcode/starcoder"\
--dataset_name="ArmelR/stack-exchange-instruction"\
--subset="data/finetune"\
--split="train"\
--size_valid_set 10000\
--streaming \
--seq_length 2048\
--max_steps 1000\
--batch_size 1\
--input_column_name="question"\
--output_column_name="response"\
--gradient_accumulation_steps 16\
--learning_rate 1e-4\
--lr_scheduler_type="cosine"\
--num_warmup_steps 100\
--weight_decay 0.05\
--output_dir="./checkpoints" \
```
## Merging PEFT adapter layers
If you train a model with PEFT, you'll need to merge the adapter layers with the base model if you want to run inference / evaluation. To do so, run:
```bash
python finetune/merge_peft_adapters.py --base_model_name_or_path model_to_merge --peft_model_path model_checkpoint
# Push merged model to the Hub
python finetune/merge_peft_adapters.py --base_model_name_or_path model_to_merge --peft_model_path model_checkpoint --push_to_hub
```
For example
```bash
python finetune/merge_peft_adapters.py --model_name_or_path bigcode/starcoder --peft_model_path checkpoints/checkpoint-1000 --push_to_hub
```
## Evaluation
To evaluate StarCoder and its derivatives, you can use the [BigCode-Evaluation-Harness](https://github.com/bigcode-project/bigcode-evaluation-harness) for evaluating Code LLMs.
| # 💫 StarCoder
[Paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | [Model](https://huggingface.co/bigcode/starcoder) | [Playground](https://huggingface.co/spaces/bigcode/bigcode-playground) | [VSCode](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode) | [Chat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground)
# What is this about?
💫 StarCoder is a language model (LM) trained on source code and natural language text. Its training data incorporates more than 80 different programming languages, as well as text extracted from GitHub issues, commits, and notebooks. This repository showcases how we get an overview of this LM's capabilities.
# News
* **May 9, 2023:** We've fine-tuned StarCoder to act as a helpful coding assistant 💬! Check out the `chat/` directory for the training code and play with the model [here](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground).
# Disclaimer
Before you can use the model, go to `hf.co/bigcode/starcoder` and accept the agreement. Then make sure you are logged into the Hugging Face hub with:
```bash
huggingface-cli login
```
# Table of Contents
1. [Quickstart](#quickstart)
- [Installation](#installation)
- [Code generation with StarCoder](#code-generation)
- [Text-generation-inference code](#text-generation-inference)
2. [Fine-tuning](#fine-tuning)
- [Step by step installation with conda](#step-by-step-installation-with-conda)
- [Datasets](#datasets)
- [Stack Exchange](#stack-exchange-se)
- [Merging PEFT adapter layers](#merging-peft-adapter-layers)
3. [Evaluation](#evaluation)
4. [Inference hardware requirements](#inference-hardware-requirements)
# Quickstart
StarCoder was trained on GitHub code, so it can be used to perform code generation. More precisely, the model can complete the implementation of a function or infer the following characters in a line of code. This can be done with the help of 🤗's [transformers](https://github.com/huggingface/transformers) library.
## Installation
First, we have to install all the libraries listed in `requirements.txt`
```bash
pip install -r requirements.txt
```
## Code generation
The code generation pipeline is as follows
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoder"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# to save memory consider using fp16 or bf16 by specifying torch.dtype=torch.float16 for example
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
or
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
checkpoint = "bigcode/starcoder"
model = AutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
print( pipe("def hello():") )
```
For hardware requirements, check the section [Inference hardware requirements](#inference-hardware-requirements).
## Text-generation-inference
```bash
docker run -p 8080:80 -v $PWD/data:/data -e HUGGING_FACE_HUB_TOKEN=<YOUR BIGCODE ENABLED TOKEN> -d ghcr.io/huggingface/text-generation-inference:latest --model-id bigcode/starcoder --max-total-tokens 8192
```
For more details, see [here](https://github.com/huggingface/text-generation-inference).
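Once the container is up, you can send generation requests over HTTP. Below is a minimal sketch, assuming the server started by the command above is listening on port 8080:
```bash
# send a generation request to the running text-generation-inference server
curl http://localhost:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "def print_hello_world():", "parameters": {"max_new_tokens": 32}}'
```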
# Fine-tuning
Here, we showcase how we can fine-tune this LM on a specific downstream task.
## Step by step installation with conda
Create a new conda environment and activate it
```bash
conda create -n env
conda activate env
```
Install the `pytorch` version compatible with your version of cuda [here](https://pytorch.org/get-started/previous-versions/), for example the following command works with cuda 11.6
```bash
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```
Install `transformers` and `peft`
```bash
conda install -c huggingface transformers
pip install git+https://github.com/huggingface/peft.git
```
Note that you can install the latest development version of transformers by using
```bash
pip install git+https://github.com/huggingface/transformers
```
Install `datasets`, `accelerate` and `huggingface_hub`
```bash
conda install -c huggingface -c conda-forge datasets
conda install -c conda-forge accelerate
conda install -c conda-forge huggingface_hub
```
Finally, install `bitsandbytes` and `wandb`
```bash
pip install bitsandbytes
pip install wandb
```
To get the full list of arguments with descriptions you can run the following command on any script:
```
python scripts/some_script.py --help
```
Before you run any of the scripts make sure you are logged in and can push to the hub:
```bash
huggingface-cli login
```
Make sure you are logged in to `wandb`:
```bash
wandb login
```
Now that everything is done, you can clone the repository and get into the corresponding directory.
## Datasets
💫 StarCoder can be fine-tuned to achieve multiple downstream tasks. Our interest here is to fine-tune StarCoder in order to make it follow instructions. [Instruction fine-tuning](https://arxiv.org/pdf/2109.01652.pdf) has gained a lot of attention recently as it proposes a simple framework that teaches language models to align their outputs with human needs. That procedure requires the availability of quality instruction datasets, which contain multiple `instruction - answer` pairs. Unfortunately such datasets are not ubiquitous but thanks to Hugging Face 🤗's [datasets](https://github.com/huggingface/datasets) library we can have access to some good proxies. To fine-tune cheaply and efficiently, we use Hugging Face 🤗's [PEFT](https://github.com/huggingface/peft) as well as Tim Dettmers' [bitsandbytes](https://github.com/TimDettmers/bitsandbytes).
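As a minimal sketch of what a PEFT setup can look like (the hyperparameters and `target_modules` below are illustrative assumptions, not the exact configuration used by `finetune/finetune.py`):
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder")
# illustrative LoRA hyperparameters; target_modules is an assumption for this sketch
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["c_proj", "c_attn", "q_attn"],  # assumption: attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of the weights will be trained
```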
### Stack Exchange SE
[Stack Exchange](https://en.wikipedia.org/wiki/Stack_Exchange) is a well-known network of Q&A websites on topics in diverse fields. It is a place where a user can ask a question and obtain answers from other users. Those answers are scored and ranked based on their quality. [Stack exchange instruction](https://huggingface.co/datasets/ArmelR/stack-exchange-instruction) is a dataset that was obtained by scraping the site in order to build a collection of Q&A pairs. A language model can then be fine-tuned on that dataset to develop strong and diverse question-answering skills.
To execute the fine-tuning script run the following command:
```bash
python finetune/finetune.py \
--model_path="bigcode/starcoder"\
--dataset_name="ArmelR/stack-exchange-instruction"\
--subset="data/finetune"\
--split="train"\
--size_valid_set 10000\
--streaming\
--seq_length 2048\
--max_steps 1000\
--batch_size 1\
--input_column_name="question"\
--output_column_name="response"\
--gradient_accumulation_steps 16\
--learning_rate 1e-4\
--lr_scheduler_type="cosine"\
--num_warmup_steps 100\
--weight_decay 0.05\
--output_dir="./checkpoints" \
```
The size of the SE dataset is more manageable when using streaming. We also have to specify which split of the dataset to use. For more details, check the [dataset's page](https://huggingface.co/datasets/ArmelR/stack-exchange-instruction) on 🤗. Similarly, we can modify the command to account for the availability of GPUs:
```bash
python -m torch.distributed.launch \
--nproc_per_node number_of_gpus finetune/finetune.py \
--model_path="bigcode/starcoder"\
--dataset_name="ArmelR/stack-exchange-instruction"\
--subset="data/finetune"\
--split="train"\
--size_valid_set 10000\
--streaming \
--seq_length 2048\
--max_steps 1000\
--batch_size 1\
--input_column_name="question"\
--output_column_name="response"\
--gradient_accumulation_steps 16\
--learning_rate 1e-4\
--lr_scheduler_type="cosine"\
--num_warmup_steps 100\
--weight_decay 0.05\
--output_dir="./checkpoints" \
```
## Merging PEFT adapter layers
If you train a model with PEFT, you'll need to merge the adapter layers with the base model if you want to run inference / evaluation. To do so, run:
```bash
python finetune/merge_peft_adapters.py --base_model_name_or_path model_to_merge --peft_model_path model_checkpoint
# Push merged model to the Hub
python finetune/merge_peft_adapters.py --base_model_name_or_path model_to_merge --peft_model_path model_checkpoint --push_to_hub
```
For example
```bash
python finetune/merge_peft_adapters.py --base_model_name_or_path bigcode/starcoder --peft_model_path checkpoints/checkpoint-1000 --push_to_hub
```
# Evaluation
To evaluate StarCoder and its derivatives, you can use the [BigCode-Evaluation-Harness](https://github.com/bigcode-project/bigcode-evaluation-harness) for evaluating Code LLMs.
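As an illustration, evaluating on HumanEval could look like the following sketch (the exact flags are defined by the harness and may change, so check its README):
```bash
git clone https://github.com/bigcode-project/bigcode-evaluation-harness.git
cd bigcode-evaluation-harness
accelerate launch main.py \
    --model bigcode/starcoder \
    --tasks humaneval \
    --allow_code_execution
```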
# Inference hardware requirements
In FP32 the model requires more than 60GB of RAM; you can load it in FP16 or BF16 with ~30GB, or in 8-bit with under 20GB of RAM:
```python
# make sure you have accelerate and bitsandbytes installed
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
# for fp16 replace `load_in_8bit=True` with `torch_dtype=torch.float16`
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder", device_map="auto", load_in_8bit=True)
print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
```
```
Memory footprint: 15939.61 MB
```
You can also try [starcoder.cpp](https://github.com/bigcode-project/starcoder.cpp), a C++ implementation with [ggml](https://github.com/ggerganov/ggml) library.
| loubnabnl | 7a9f9dbab6dc60a001a07b05665e794ddee882de | 3b1b32b1c4b826c5003b05aef4f79be46a188e05 | small typo
`For hardware requirements, check the section [Inference hardware requirements](#inference-hardware-requirements).` | Vipitis | 0 |
bigcode-project/starcoder | 45 | Add hardware requirements section | null | null | 2023-05-25 16:50:25+00:00 | 2023-05-25 16:50:48+00:00 | README.md | # 💫 StarCoder
[Paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | [Model](https://huggingface.co/bigcode/starcoder) | [Playground](https://huggingface.co/spaces/bigcode/bigcode-playground) | [VSCode](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode) | [Chat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground)
# What is this about?
💫 StarCoder is a language model (LM) trained on source code and natural language text. Its training data incorporates more than 80 different programming languages as well as text extracted from GitHub issues, commits, and notebooks. This repository showcases how to get an overview of this LM's capabilities.
# News
* **May 9, 2023:** We've fine-tuned StarCoder to act as a helpful coding assistant 💬! Check out the `chat/` directory for the training code and play with the model [here](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground).
# Disclaimer
Before you can use the model, go to `hf.co/bigcode/starcoder` and accept the agreement, and make sure you are logged into the Hugging Face hub with:
```bash
huggingface-cli login
```
# Table of Contents
1. [Quickstart](#quickstart)
- [Installation](#installation)
- [Code generation with StarCoder](#code-generation)
- [Text-generation-inference code](#text-generation-inference)
2. [Fine-tuning](#fine-tuning)
- [Step by step installation with conda](#step-by-step-installation-with-conda)
- [Datasets](#datasets)
- [Stack Exchange](#stack-exchange-se)
- [Merging PEFT adapter layers](#merging-peft-adapter-layers)
# Quickstart
StarCoder was trained on GitHub code, thus it can be used to perform code generation. More precisely, the model can complete the implementation of a function or infer the following characters in a line of code. This can be done with the help of Hugging Face 🤗's [transformers](https://github.com/huggingface/transformers) library.
## Installation
First, we have to install all the libraries listed in `requirements.txt`
```bash
pip install -r requirements.txt
```
## Code generation
The code generation pipeline is as follows
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoder"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# to save memory consider using fp16 or bf16 by specifying torch_dtype=torch.float16 for example
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
or
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
checkpoint = "bigcode/starcoder"
model = AutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
print( pipe("def hello():") )
```
## Text-generation-inference
```bash
docker run -p 8080:80 -v $PWD/data:/data -e HUGGING_FACE_HUB_TOKEN=<YOUR BIGCODE ENABLED TOKEN> -d ghcr.io/huggingface/text-generation-inference:latest --model-id bigcode/starcoder --max-total-tokens 8192
```
For more details, see [here](https://github.com/huggingface/text-generation-inference).
# Fine-tuning
Here, we showcase how we can fine-tune this LM on a specific downstream task.
## Step by step installation with conda
Create a new conda environment and activate it
```bash
conda create -n env
conda activate env
```
Install the `pytorch` version compatible with your version of cuda [here](https://pytorch.org/get-started/previous-versions/), for example the following command works with cuda 11.6
```bash
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```
Install `transformers` and `peft`
```bash
conda install -c huggingface transformers
pip install git+https://github.com/huggingface/peft.git
```
Note that you can install the latest development version of transformers by using
```bash
pip install git+https://github.com/huggingface/transformers
```
Install `datasets`, `accelerate` and `huggingface_hub`
```bash
conda install -c huggingface -c conda-forge datasets
conda install -c conda-forge accelerate
conda install -c conda-forge huggingface_hub
```
Finally, install `bitsandbytes` and `wandb`
```bash
pip install bitsandbytes
pip install wandb
```
To get the full list of arguments with descriptions you can run the following command on any script:
```
python scripts/some_script.py --help
```
Before you run any of the scripts make sure you are logged in and can push to the hub:
```bash
huggingface-cli login
```
Make sure you are logged in to `wandb`:
```bash
wandb login
```
Now that everything is done, you can clone the repository and get into the corresponding directory.
## Datasets
💫 StarCoder can be fine-tuned to achieve multiple downstream tasks. Our interest here is to fine-tune StarCoder in order to make it follow instructions. [Instruction fine-tuning](https://arxiv.org/pdf/2109.01652.pdf) has gained a lot of attention recently as it proposes a simple framework that teaches language models to align their outputs with human needs. That procedure requires the availability of quality instruction datasets, which contain multiple `instruction - answer` pairs. Unfortunately such datasets are not ubiquitous but thanks to Hugging Face 🤗's [datasets](https://github.com/huggingface/datasets) library we can have access to some good proxies. To fine-tune cheaply and efficiently, we use Hugging Face 🤗's [PEFT](https://github.com/huggingface/peft) as well as Tim Dettmers' [bitsandbytes](https://github.com/TimDettmers/bitsandbytes).
### Stack Exchange SE
[Stack Exchange](https://en.wikipedia.org/wiki/Stack_Exchange) is a well-known network of Q&A websites on topics in diverse fields. It is a place where a user can ask a question and obtain answers from other users. Those answers are scored and ranked based on their quality. [Stack exchange instruction](https://huggingface.co/datasets/ArmelR/stack-exchange-instruction) is a dataset that was obtained by scraping the site in order to build a collection of Q&A pairs. A language model can then be fine-tuned on that dataset to develop strong and diverse question-answering skills.
To execute the fine-tuning script run the following command:
```bash
python finetune/finetune.py \
--model_path="bigcode/starcoder"\
--dataset_name="ArmelR/stack-exchange-instruction"\
--subset="data/finetune"\
--split="train"\
--size_valid_set 10000\
--streaming\
--seq_length 2048\
--max_steps 1000\
--batch_size 1\
--input_column_name="question"\
--output_column_name="response"\
--gradient_accumulation_steps 16\
--learning_rate 1e-4\
--lr_scheduler_type="cosine"\
--num_warmup_steps 100\
--weight_decay 0.05\
--output_dir="./checkpoints" \
```
The size of the SE dataset is more manageable when using streaming. We also have to specify which split of the dataset to use. For more details, check the [dataset's page](https://huggingface.co/datasets/ArmelR/stack-exchange-instruction) on 🤗. Similarly, we can modify the command to account for the availability of GPUs:
```bash
python -m torch.distributed.launch \
--nproc_per_node number_of_gpus finetune/finetune.py \
--model_path="bigcode/starcoder"\
--dataset_name="ArmelR/stack-exchange-instruction"\
--subset="data/finetune"\
--split="train"\
--size_valid_set 10000\
--streaming \
--seq_length 2048\
--max_steps 1000\
--batch_size 1\
--input_column_name="question"\
--output_column_name="response"\
--gradient_accumulation_steps 16\
--learning_rate 1e-4\
--lr_scheduler_type="cosine"\
--num_warmup_steps 100\
--weight_decay 0.05\
--output_dir="./checkpoints" \
```
## Merging PEFT adapter layers
If you train a model with PEFT, you'll need to merge the adapter layers with the base model if you want to run inference / evaluation. To do so, run:
```bash
python finetune/merge_peft_adapters.py --base_model_name_or_path model_to_merge --peft_model_path model_checkpoint
# Push merged model to the Hub
python finetune/merge_peft_adapters.py --base_model_name_or_path model_to_merge --peft_model_path model_checkpoint --push_to_hub
```
For example
```bash
python finetune/merge_peft_adapters.py --base_model_name_or_path bigcode/starcoder --peft_model_path checkpoints/checkpoint-1000 --push_to_hub
```
## Evaluation
To evaluate StarCoder and its derivatives, you can use the [BigCode-Evaluation-Harness](https://github.com/bigcode-project/bigcode-evaluation-harness) for evaluating Code LLMs.
| # 💫 StarCoder
[Paper](https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view) | [Model](https://huggingface.co/bigcode/starcoder) | [Playground](https://huggingface.co/spaces/bigcode/bigcode-playground) | [VSCode](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode) | [Chat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground)
# What is this about?
💫 StarCoder is a language model (LM) trained on source code and natural language text. Its training data incorporates more than 80 different programming languages as well as text extracted from GitHub issues, commits, and notebooks. This repository showcases how to get an overview of this LM's capabilities.
# News
* **May 9, 2023:** We've fine-tuned StarCoder to act as a helpful coding assistant 💬! Check out the `chat/` directory for the training code and play with the model [here](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground).
# Disclaimer
Before you can use the model, go to `hf.co/bigcode/starcoder` and accept the agreement, and make sure you are logged into the Hugging Face hub with:
```bash
huggingface-cli login
```
# Table of Contents
1. [Quickstart](#quickstart)
- [Installation](#installation)
- [Code generation with StarCoder](#code-generation)
- [Text-generation-inference code](#text-generation-inference)
2. [Fine-tuning](#fine-tuning)
- [Step by step installation with conda](#step-by-step-installation-with-conda)
- [Datasets](#datasets)
- [Stack Exchange](#stack-exchange-se)
- [Merging PEFT adapter layers](#merging-peft-adapter-layers)
3. [Evaluation](#evaluation)
4. [Inference hardware requirements](#inference-hardware-requirements)
# Quickstart
StarCoder was trained on GitHub code, thus it can be used to perform code generation. More precisely, the model can complete the implementation of a function or infer the following characters in a line of code. This can be done with the help of Hugging Face 🤗's [transformers](https://github.com/huggingface/transformers) library.
## Installation
First, we have to install all the libraries listed in `requirements.txt`
```bash
pip install -r requirements.txt
```
## Code generation
The code generation pipeline is as follows
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "bigcode/starcoder"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# to save memory consider using fp16 or bf16 by specifying torch_dtype=torch.float16 for example
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
or
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
checkpoint = "bigcode/starcoder"
model = AutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
print( pipe("def hello():") )
```
For hardware requirements, check the section [Inference hardware requirements](#inference-hardware-requirements).
## Text-generation-inference
```bash
docker run -p 8080:80 -v $PWD/data:/data -e HUGGING_FACE_HUB_TOKEN=<YOUR BIGCODE ENABLED TOKEN> -d ghcr.io/huggingface/text-generation-inference:latest --model-id bigcode/starcoder --max-total-tokens 8192
```
For more details, see [here](https://github.com/huggingface/text-generation-inference).
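You can also query the server from Python, e.g. with `requests` (a minimal sketch assuming the container above is listening on localhost:8080):
```python
import requests

# query the text-generation-inference server started above
response = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": "def print_hello_world():", "parameters": {"max_new_tokens": 32}},
)
print(response.json()["generated_text"])
```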
# Fine-tuning
Here, we showcase how we can fine-tune this LM on a specific downstream task.
## Step by step installation with conda
Create a new conda environment and activate it
```bash
conda create -n env
conda activate env
```
Install the `pytorch` version compatible with your version of cuda [here](https://pytorch.org/get-started/previous-versions/), for example the following command works with cuda 11.6
```bash
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```
Install `transformers` and `peft`
```bash
conda install -c huggingface transformers
pip install git+https://github.com/huggingface/peft.git
```
Note that you can install the latest development version of transformers by using
```bash
pip install git+https://github.com/huggingface/transformers
```
Install `datasets`, `accelerate` and `huggingface_hub`
```bash
conda install -c huggingface -c conda-forge datasets
conda install -c conda-forge accelerate
conda install -c conda-forge huggingface_hub
```
Finally, install `bitsandbytes` and `wandb`
```bash
pip install bitsandbytes
pip install wandb
```
To get the full list of arguments with descriptions you can run the following command on any script:
```
python scripts/some_script.py --help
```
Before you run any of the scripts make sure you are logged in and can push to the hub:
```bash
huggingface-cli login
```
Make sure you are logged in to `wandb`:
```bash
wandb login
```
Now that everything is done, you can clone the repository and get into the corresponding directory.
## Datasets
💫 StarCoder can be fine-tuned to achieve multiple downstream tasks. Our interest here is to fine-tune StarCoder in order to make it follow instructions. [Instruction fine-tuning](https://arxiv.org/pdf/2109.01652.pdf) has gained a lot of attention recently as it proposes a simple framework that teaches language models to align their outputs with human needs. That procedure requires the availability of quality instruction datasets, which contain multiple `instruction - answer` pairs. Unfortunately such datasets are not ubiquitous but thanks to Hugging Face 🤗's [datasets](https://github.com/huggingface/datasets) library we can have access to some good proxies. To fine-tune cheaply and efficiently, we use Hugging Face 🤗's [PEFT](https://github.com/huggingface/peft) as well as Tim Dettmers' [bitsandbytes](https://github.com/TimDettmers/bitsandbytes).
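For instance, inspecting a streamed instruction dataset could look like this sketch (the dataset name matches the one introduced in the next subsection; `data_dir` is an assumption mirroring the `--subset` flag of the fine-tuning script):
```python
from datasets import load_dataset

# stream the dataset instead of downloading it entirely
dataset = load_dataset(
    "ArmelR/stack-exchange-instruction",
    data_dir="data/finetune",  # assumption: mirrors the --subset flag used below
    split="train",
    streaming=True,
)
print(next(iter(dataset)))  # peek at the first question/response pair
```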
### Stack Exchange SE
[Stack Exchange](https://en.wikipedia.org/wiki/Stack_Exchange) is a well-known network of Q&A websites on topics in diverse fields. It is a place where a user can ask a question and obtain answers from other users. Those answers are scored and ranked based on their quality. [Stack exchange instruction](https://huggingface.co/datasets/ArmelR/stack-exchange-instruction) is a dataset that was obtained by scraping the site in order to build a collection of Q&A pairs. A language model can then be fine-tuned on that dataset to develop strong and diverse question-answering skills.
To execute the fine-tuning script run the following command:
```bash
python finetune/finetune.py \
--model_path="bigcode/starcoder"\
--dataset_name="ArmelR/stack-exchange-instruction"\
--subset="data/finetune"\
--split="train"\
--size_valid_set 10000\
--streaming\
--seq_length 2048\
--max_steps 1000\
--batch_size 1\
--input_column_name="question"\
--output_column_name="response"\
--gradient_accumulation_steps 16\
--learning_rate 1e-4\
--lr_scheduler_type="cosine"\
--num_warmup_steps 100\
--weight_decay 0.05\
--output_dir="./checkpoints" \
```
The size of the SE dataset is more manageable when using streaming. We also have to specify which split of the dataset to use. For more details, check the [dataset's page](https://huggingface.co/datasets/ArmelR/stack-exchange-instruction) on 🤗. Similarly, we can modify the command to account for the availability of GPUs:
```bash
python -m torch.distributed.launch \
--nproc_per_node number_of_gpus finetune/finetune.py \
--model_path="bigcode/starcoder"\
--dataset_name="ArmelR/stack-exchange-instruction"\
--subset="data/finetune"\
--split="train"\
--size_valid_set 10000\
--streaming \
--seq_length 2048\
--max_steps 1000\
--batch_size 1\
--input_column_name="question"\
--output_column_name="response"\
--gradient_accumulation_steps 16\
--learning_rate 1e-4\
--lr_scheduler_type="cosine"\
--num_warmup_steps 100\
--weight_decay 0.05\
--output_dir="./checkpoints" \
```
## Merging PEFT adapter layers
If you train a model with PEFT, you'll need to merge the adapter layers with the base model if you want to run inference / evaluation. To do so, run:
```bash
python finetune/merge_peft_adapters.py --base_model_name_or_path model_to_merge --peft_model_path model_checkpoint
# Push merged model to the Hub
python finetune/merge_peft_adapters.py --base_model_name_or_path model_to_merge --peft_model_path model_checkpoint --push_to_hub
```
For example
```bash
python finetune/merge_peft_adapters.py --base_model_name_or_path bigcode/starcoder --peft_model_path checkpoints/checkpoint-1000 --push_to_hub
```
# Evaluation
To evaluate StarCoder and its derivatives, you can use the [BigCode-Evaluation-Harness](https://github.com/bigcode-project/bigcode-evaluation-harness) for evaluating Code LLMs.
# Inference hardware requirements
In FP32 the model requires more than 60GB of RAM; you can load it in FP16 or BF16 with ~30GB, or in 8-bit with under 20GB of RAM:
```python
# make sure you have accelerate and bitsandbytes installed
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
# for fp16 replace `load_in_8bit=True` with `torch_dtype=torch.float16`
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder", device_map="auto", load_in_8bit=True)
print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
```
```
Memory footprint: 15939.61 MB
```
You can also try [starcoder.cpp](https://github.com/bigcode-project/starcoder.cpp), a C++ implementation with [ggml](https://github.com/ggerganov/ggml) library.
| loubnabnl | 7a9f9dbab6dc60a001a07b05665e794ddee882de | 3b1b32b1c4b826c5003b05aef4f79be46a188e05 | Thanks! | loubnabnl | 1 |
reactor/reactor-netty | 2844 | Fix memory leak of HTTP server on bind failure | Fix issue https://github.com/reactor/reactor-netty/issues/2843 by closing channel on bind (and other) exception | null | 2023-06-27 20:02:13+00:00 | 2023-06-29 08:10:58+00:00 | reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java | /*
* Copyright (c) 2020-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.transport;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.AddressResolverGroup;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.GenericFutureListener;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import reactor.util.context.ContextView;
import reactor.util.retry.Retry;
import java.net.SocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.ReactorNetty.setChannelContext;
/**
* {@link TransportConnector} is a helper class that creates, initializes and registers the channel.
* It performs the actual connect operation to the remote peer or binds the channel.
*
* @author Stephane Maldini
* @author Violeta Georgieva
* @since 1.0.0
*/
public final class TransportConnector {
TransportConnector() {}
/**
* Binds a {@link Channel}.
*
* @param config the transport configuration
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param bindAddress the local address
* @param isDomainSocket true if {@link io.netty.channel.unix.DomainSocketChannel} or
* {@link io.netty.channel.unix.ServerDomainSocketChannel} is needed, false otherwise
* @return a {@link Mono} of {@link Channel}
*/
@SuppressWarnings("FutureReturnValueIgnored")
public static Mono<Channel> bind(TransportConfig config, ChannelInitializer<Channel> channelInitializer,
SocketAddress bindAddress, boolean isDomainSocket) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(bindAddress, "bindAddress");
Objects.requireNonNull(channelInitializer, "channelInitializer");
return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
.flatMap(channel -> {
MonoChannelPromise promise = new MonoChannelPromise(channel);
// "FutureReturnValueIgnored" this is deliberate
channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
return promise;
});
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, ContextView contextView) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), contextView);
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, eventLoop, Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop,
ContextView contextView) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(remoteAddress, "remoteAddress");
Objects.requireNonNull(resolverGroup, "resolverGroup");
Objects.requireNonNull(channelInitializer, "channelInitializer");
Objects.requireNonNull(eventLoop, "eventLoop");
Objects.requireNonNull(contextView, "contextView");
boolean isDomainAddress = remoteAddress instanceof DomainSocketAddress;
return doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(channel -> doResolveAndConnect(channel, config, remoteAddress, resolverGroup, contextView)
.onErrorResume(RetryConnectException.class,
t -> {
AtomicInteger index = new AtomicInteger(1);
return Mono.defer(() ->
doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(ch -> {
MonoChannelPromise mono = new MonoChannelPromise(ch);
doConnect(t.addresses, config.bindAddress(), mono, index.get());
return mono;
}))
.retryWhen(Retry.max(t.addresses.size() - 1)
.filter(RETRY_PREDICATE)
.doBeforeRetry(sig -> index.incrementAndGet()));
}));
}
/**
* Set the channel attributes
*
* @param channel the channel
* @param attrs the attributes
*/
@SuppressWarnings("unchecked")
static void setAttributes(Channel channel, Map<AttributeKey<?>, ?> attrs) {
for (Map.Entry<AttributeKey<?>, ?> e : attrs.entrySet()) {
channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
}
/**
* Set the channel options
*
* @param channel the channel
* @param options the options
*/
@SuppressWarnings("unchecked")
static void setChannelOptions(Channel channel, Map<ChannelOption<?>, ?> options, boolean isDomainSocket) {
for (Map.Entry<ChannelOption<?>, ?> e : options.entrySet()) {
if (isDomainSocket &&
(ChannelOption.SO_REUSEADDR.equals(e.getKey()) || ChannelOption.TCP_NODELAY.equals(e.getKey()))) {
continue;
}
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Unknown channel option '{}' for channel '{}'"), e.getKey(), channel);
}
}
}
catch (Throwable t) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Failed to set channel option '{}' with value '{}' for channel '{}'"),
e.getKey(), e.getValue(), channel, t);
}
}
}
}
@SuppressWarnings("FutureReturnValueIgnored")
static void doConnect(
List<SocketAddress> addresses,
@Nullable Supplier<? extends SocketAddress> bindAddress,
ChannelPromise connectPromise,
int index) {
Channel channel = connectPromise.channel();
channel.eventLoop().execute(() -> {
SocketAddress remoteAddress = addresses.get(index);
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connecting to [" + remoteAddress + "]."));
}
ChannelFuture f;
if (bindAddress == null) {
f = channel.connect(remoteAddress);
}
else {
SocketAddress local = Objects.requireNonNull(bindAddress.get(), "bindAddress");
f = channel.connect(remoteAddress, local);
}
f.addListener(future -> {
if (future.isSuccess()) {
connectPromise.setSuccess();
}
else {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
Throwable cause = future.cause();
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connect attempt to [" + remoteAddress + "] failed."), cause);
}
int next = index + 1;
if (next < addresses.size()) {
connectPromise.setFailure(new RetryConnectException(addresses));
}
else {
connectPromise.setFailure(cause);
}
}
});
});
}
@SuppressWarnings("FutureReturnValueIgnored")
static Mono<Channel> doInitAndRegister(
TransportConfig config,
ChannelInitializer<Channel> channelInitializer,
boolean isDomainSocket,
EventLoop eventLoop) {
ChannelFactory<? extends Channel> channelFactory = config.connectionFactory(config.eventLoopGroup(), isDomainSocket);
Channel channel = null;
try {
channel = channelFactory.newChannel();
if (channelInitializer instanceof ServerTransport.AcceptorInitializer) {
((ServerTransport.AcceptorInitializer) channelInitializer).acceptor.enableAutoReadTask(channel);
}
channel.pipeline().addLast(channelInitializer);
setChannelOptions(channel, config.options, isDomainSocket);
setAttributes(channel, config.attrs);
}
catch (Throwable t) {
if (channel != null) {
channel.unsafe().closeForcibly();
}
return Mono.error(t);
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
channel.unsafe().register(eventLoop, monoChannelPromise);
Throwable cause = monoChannelPromise.cause();
if (cause != null) {
if (channel.isRegistered()) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
else {
channel.unsafe().closeForcibly();
}
}
return monoChannelPromise;
}
@SuppressWarnings({"unchecked", "FutureReturnValueIgnored", "try"})
static Mono<Channel> doResolveAndConnect(Channel channel, TransportConfig config,
SocketAddress remoteAddress, AddressResolverGroup<?> resolverGroup, ContextView contextView) {
try {
AddressResolver<SocketAddress> resolver;
try {
resolver = (AddressResolver<SocketAddress>) resolverGroup.getResolver(channel.eventLoop());
}
catch (Throwable t) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(t);
}
if (!contextView.isEmpty()) {
setChannelContext(channel, contextView);
}
Supplier<? extends SocketAddress> bindAddress = config.bindAddress();
if (!resolver.isSupported(remoteAddress) || resolver.isResolved(remoteAddress)) {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(Collections.singletonList(remoteAddress), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolve != null) {
clientTransportConfig.doOnResolve.accept(Connection.from(channel));
}
}
Future<List<SocketAddress>> resolveFuture;
if (resolver instanceof MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver) {
resolveFuture = ((MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver<SocketAddress>) resolver)
.resolveAll(remoteAddress, contextView);
}
else {
resolveFuture = resolver.resolveAll(remoteAddress);
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolveError != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
clientTransportConfig.doOnResolveError.accept(Connection.from(channel), future.cause());
}
});
}
if (clientTransportConfig.doAfterResolve != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.isSuccess()) {
clientTransportConfig.doAfterResolve.accept(Connection.from(channel), future.getNow().get(0));
}
});
}
}
if (resolveFuture.isDone()) {
Throwable cause = resolveFuture.cause();
if (cause != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(cause);
}
else {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(resolveFuture.getNow(), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
monoChannelPromise.tryFailure(future.cause());
}
else {
doConnect(future.getNow(), bindAddress, monoChannelPromise, 0);
}
});
return monoChannelPromise;
}
catch (Throwable t) {
return Mono.error(t);
}
}
static final class MonoChannelPromise extends Mono<Channel> implements ChannelPromise, Subscription {
final Channel channel;
CoreSubscriber<? super Channel> actual;
MonoChannelPromise(Channel channel) {
this.channel = channel;
}
@Override
public ChannelPromise addListener(GenericFutureListener<? extends Future<? super Void>> listener) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise await() {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise awaitUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void cancel() {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
return false;
}
@Override
public Throwable cause() {
Object result = this.result;
return result == SUCCESS ? null : (Throwable) result;
}
@Override
public Channel channel() {
return channel;
}
@Override
public Void get() {
throw new UnsupportedOperationException();
}
@Override
public Void get(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public Void getNow() {
throw new UnsupportedOperationException();
}
@Override
public boolean isCancellable() {
return false;
}
@Override
public boolean isCancelled() {
return false;
}
@Override
public boolean isDone() {
Object result = this.result;
return result != null;
}
@Override
public boolean isSuccess() {
Object result = this.result;
return result == SUCCESS;
}
@Override
public boolean isVoid() {
return false;
}
@Override
public ChannelPromise removeListener(GenericFutureListener<? extends Future<? super Void>> listener) {
return this;
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
return this;
}
@Override
public void request(long n) {
// noop
}
@Override
public ChannelPromise setFailure(Throwable cause) {
tryFailure(cause);
return this;
}
@Override
public ChannelPromise setSuccess() {
trySuccess(null);
return this;
}
@Override
public ChannelPromise setSuccess(Void result) {
trySuccess(null);
return this;
}
@Override
public boolean setUncancellable() {
return true;
}
@Override
public void subscribe(CoreSubscriber<? super Channel> actual) {
EventLoop eventLoop = channel.eventLoop();
if (eventLoop.inEventLoop()) {
_subscribe(actual);
}
else {
eventLoop.execute(() -> _subscribe(actual));
}
}
@Override
public ChannelPromise sync() {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise syncUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean tryFailure(Throwable cause) {
if (RESULT_UPDATER.compareAndSet(this, null, cause)) {
if (actual != null) {
actual.onError(cause);
}
return true;
}
return false;
}
@Override
public boolean trySuccess() {
return trySuccess(null);
}
@Override
public boolean trySuccess(Void result) {
if (RESULT_UPDATER.compareAndSet(this, null, SUCCESS)) {
if (actual != null) {
actual.onNext(channel);
actual.onComplete();
}
return true;
}
return false;
}
@Override
public ChannelPromise unvoid() {
return new DefaultChannelPromise(channel) {
@Override
public ChannelPromise setSuccess(Void result) {
super.trySuccess(null);
MonoChannelPromise.this.trySuccess(null);
return this;
}
@Override
public boolean trySuccess(Void result) {
super.trySuccess(null);
return MonoChannelPromise.this.trySuccess(null);
}
@Override
public ChannelPromise setFailure(Throwable cause) {
super.tryFailure(cause);
MonoChannelPromise.this.tryFailure(cause);
return this;
}
@Override
public boolean tryFailure(Throwable cause) {
super.tryFailure(cause);
return MonoChannelPromise.this.tryFailure(cause);
}
};
}
void _subscribe(CoreSubscriber<? super Channel> actual) {
this.actual = actual;
actual.onSubscribe(this);
if (isDone()) {
if (isSuccess()) {
actual.onNext(channel);
actual.onComplete();
}
else {
actual.onError(cause());
}
}
}
static final Object SUCCESS = new Object();
static final AtomicReferenceFieldUpdater<MonoChannelPromise, Object> RESULT_UPDATER =
AtomicReferenceFieldUpdater.newUpdater(MonoChannelPromise.class, Object.class, "result");
volatile Object result;
}
static final class RetryConnectException extends RuntimeException {
final List<SocketAddress> addresses;
RetryConnectException(List<SocketAddress> addresses) {
this.addresses = addresses;
}
@Override
public synchronized Throwable fillInStackTrace() {
// omit stacktrace for this exception
return this;
}
private static final long serialVersionUID = -207274323623692199L;
}
static final Logger log = Loggers.getLogger(TransportConnector.class);
static final Predicate<Throwable> RETRY_PREDICATE = t -> t instanceof RetryConnectException;
}
| /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.transport;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.AddressResolverGroup;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.GenericFutureListener;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import reactor.util.context.ContextView;
import reactor.util.retry.Retry;
import java.net.SocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.ReactorNetty.setChannelContext;
/**
* {@link TransportConnector} is a helper class that creates, initializes and registers the channel.
* It performs the actual connect operation to the remote peer or binds the channel.
*
* @author Stephane Maldini
* @author Violeta Georgieva
* @since 1.0.0
*/
public final class TransportConnector {
TransportConnector() {}
/**
* Binds a {@link Channel}.
*
* @param config the transport configuration
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param bindAddress the local address
* @param isDomainSocket true if {@link io.netty.channel.unix.DomainSocketChannel} or
* {@link io.netty.channel.unix.ServerDomainSocketChannel} is needed, false otherwise
* @return a {@link Mono} of {@link Channel}
*/
@SuppressWarnings("FutureReturnValueIgnored")
public static Mono<Channel> bind(TransportConfig config, ChannelInitializer<Channel> channelInitializer,
SocketAddress bindAddress, boolean isDomainSocket) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(bindAddress, "bindAddress");
Objects.requireNonNull(channelInitializer, "channelInitializer");
return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
.flatMap(channel -> {
MonoChannelPromise promise = new MonoChannelPromise(channel);
// "FutureReturnValueIgnored" this is deliberate
channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
return promise;
});
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, ContextView contextView) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), contextView);
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, eventLoop, Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop,
ContextView contextView) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(remoteAddress, "remoteAddress");
Objects.requireNonNull(resolverGroup, "resolverGroup");
Objects.requireNonNull(channelInitializer, "channelInitializer");
Objects.requireNonNull(eventLoop, "eventLoop");
Objects.requireNonNull(contextView, "contextView");
boolean isDomainAddress = remoteAddress instanceof DomainSocketAddress;
return doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(channel -> doResolveAndConnect(channel, config, remoteAddress, resolverGroup, contextView)
.onErrorResume(RetryConnectException.class,
t -> {
AtomicInteger index = new AtomicInteger(1);
return Mono.defer(() ->
doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(ch -> {
MonoChannelPromise mono = new MonoChannelPromise(ch);
doConnect(t.addresses, config.bindAddress(), mono, index.get());
return mono;
}))
.retryWhen(Retry.max(t.addresses.size() - 1)
.filter(RETRY_PREDICATE)
.doBeforeRetry(sig -> index.incrementAndGet()));
}));
}
/**
* Set the channel attributes
*
* @param channel the channel
* @param attrs the attributes
*/
@SuppressWarnings("unchecked")
static void setAttributes(Channel channel, Map<AttributeKey<?>, ?> attrs) {
for (Map.Entry<AttributeKey<?>, ?> e : attrs.entrySet()) {
channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
}
/**
* Set the channel options
*
* @param channel the channel
* @param options the options
*/
@SuppressWarnings("unchecked")
static void setChannelOptions(Channel channel, Map<ChannelOption<?>, ?> options, boolean isDomainSocket) {
for (Map.Entry<ChannelOption<?>, ?> e : options.entrySet()) {
if (isDomainSocket &&
(ChannelOption.SO_REUSEADDR.equals(e.getKey()) || ChannelOption.TCP_NODELAY.equals(e.getKey()))) {
continue;
}
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Unknown channel option '{}' for channel '{}'"), e.getKey(), channel);
}
}
}
catch (Throwable t) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Failed to set channel option '{}' with value '{}' for channel '{}'"),
e.getKey(), e.getValue(), channel, t);
}
}
}
}
static void doConnect(
List<SocketAddress> addresses,
@Nullable Supplier<? extends SocketAddress> bindAddress,
MonoChannelPromise connectPromise,
int index) {
Channel channel = connectPromise.channel();
channel.eventLoop().execute(() -> {
SocketAddress remoteAddress = addresses.get(index);
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connecting to [" + remoteAddress + "]."));
}
ChannelFuture f;
if (bindAddress == null) {
f = channel.connect(remoteAddress);
}
else {
SocketAddress local = Objects.requireNonNull(bindAddress.get(), "bindAddress");
f = channel.connect(remoteAddress, local);
}
f.addListener(future -> {
if (future.isSuccess()) {
connectPromise.setSuccess();
}
else {
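					// note: the explicit channel.close() that used to be here was removed in this change;
					// closing the channel on failure is now handled via the failed MonoChannelPromise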
Throwable cause = future.cause();
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connect attempt to [" + remoteAddress + "] failed."), cause);
}
int next = index + 1;
if (next < addresses.size()) {
connectPromise.setFailure(new RetryConnectException(addresses));
}
else {
connectPromise.setFailure(cause);
}
}
});
});
}
@SuppressWarnings("FutureReturnValueIgnored")
static Mono<Channel> doInitAndRegister(
TransportConfig config,
ChannelInitializer<Channel> channelInitializer,
boolean isDomainSocket,
EventLoop eventLoop) {
ChannelFactory<? extends Channel> channelFactory = config.connectionFactory(config.eventLoopGroup(), isDomainSocket);
Channel channel = null;
try {
channel = channelFactory.newChannel();
if (channelInitializer instanceof ServerTransport.AcceptorInitializer) {
((ServerTransport.AcceptorInitializer) channelInitializer).acceptor.enableAutoReadTask(channel);
}
channel.pipeline().addLast(channelInitializer);
setChannelOptions(channel, config.options, isDomainSocket);
setAttributes(channel, config.attrs);
}
catch (Throwable t) {
if (channel != null) {
channel.unsafe().closeForcibly();
}
return Mono.error(t);
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
channel.unsafe().register(eventLoop, monoChannelPromise);
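		// per this change, registration failures propagate through MonoChannelPromise, which is
		// responsible for closing the channel, so the explicit cause/close check is no longer needed here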
return monoChannelPromise;
}
@SuppressWarnings({"unchecked", "FutureReturnValueIgnored", "try"})
static Mono<Channel> doResolveAndConnect(Channel channel, TransportConfig config,
SocketAddress remoteAddress, AddressResolverGroup<?> resolverGroup, ContextView contextView) {
try {
AddressResolver<SocketAddress> resolver;
try {
resolver = (AddressResolver<SocketAddress>) resolverGroup.getResolver(channel.eventLoop());
}
catch (Throwable t) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(t);
}
if (!contextView.isEmpty()) {
setChannelContext(channel, contextView);
}
Supplier<? extends SocketAddress> bindAddress = config.bindAddress();
if (!resolver.isSupported(remoteAddress) || resolver.isResolved(remoteAddress)) {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(Collections.singletonList(remoteAddress), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolve != null) {
clientTransportConfig.doOnResolve.accept(Connection.from(channel));
}
}
Future<List<SocketAddress>> resolveFuture;
if (resolver instanceof MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver) {
resolveFuture = ((MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver<SocketAddress>) resolver)
.resolveAll(remoteAddress, contextView);
}
else {
resolveFuture = resolver.resolveAll(remoteAddress);
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolveError != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
clientTransportConfig.doOnResolveError.accept(Connection.from(channel), future.cause());
}
});
}
if (clientTransportConfig.doAfterResolve != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.isSuccess()) {
clientTransportConfig.doAfterResolve.accept(Connection.from(channel), future.getNow().get(0));
}
});
}
}
if (resolveFuture.isDone()) {
Throwable cause = resolveFuture.cause();
if (cause != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(cause);
}
else {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(resolveFuture.getNow(), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
monoChannelPromise.tryFailure(future.cause());
}
else {
doConnect(future.getNow(), bindAddress, monoChannelPromise, 0);
}
});
return monoChannelPromise;
}
catch (Throwable t) {
return Mono.error(t);
}
}
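// A Netty ChannelPromise that is also a Mono<Channel>: the outcome is recorded once
// through RESULT_UPDATER and replayed to the subscriber, even when subscription
// happens after completion.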
static final class MonoChannelPromise extends Mono<Channel> implements ChannelPromise, Subscription {
final Channel channel;
CoreSubscriber<? super Channel> actual;
MonoChannelPromise(Channel channel) {
this.channel = channel;
}
@Override
public ChannelPromise addListener(GenericFutureListener<? extends Future<? super Void>> listener) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise await() {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise awaitUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void cancel() {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
return false;
}
@Override
public Throwable cause() {
Object result = this.result;
return result == SUCCESS ? null : (Throwable) result;
}
@Override
public Channel channel() {
return channel;
}
@Override
public Void get() {
throw new UnsupportedOperationException();
}
@Override
public Void get(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public Void getNow() {
throw new UnsupportedOperationException();
}
@Override
public boolean isCancellable() {
return false;
}
@Override
public boolean isCancelled() {
return false;
}
@Override
public boolean isDone() {
Object result = this.result;
return result != null;
}
@Override
public boolean isSuccess() {
Object result = this.result;
return result == SUCCESS;
}
@Override
public boolean isVoid() {
return false;
}
@Override
public ChannelPromise removeListener(GenericFutureListener<? extends Future<? super Void>> listener) {
return this;
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
return this;
}
@Override
public void request(long n) {
// noop
}
@Override
public ChannelPromise setFailure(Throwable cause) {
tryFailure(cause);
return this;
}
@Override
public ChannelPromise setSuccess() {
trySuccess(null);
return this;
}
@Override
public ChannelPromise setSuccess(Void result) {
trySuccess(null);
return this;
}
@Override
public boolean setUncancellable() {
return true;
}
@Override
public void subscribe(CoreSubscriber<? super Channel> actual) {
EventLoop eventLoop = channel.eventLoop();
if (eventLoop.inEventLoop()) {
_subscribe(actual);
}
else {
eventLoop.execute(() -> _subscribe(actual));
}
}
@Override
public ChannelPromise sync() {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise syncUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public boolean tryFailure(Throwable cause) {
if (RESULT_UPDATER.compareAndSet(this, null, cause)) {
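// Close the channel on any failure (e.g. a failed bind) so that it is not leaked.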
if (channel.isRegistered()) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
else {
channel.unsafe().closeForcibly();
}
if (actual != null) {
actual.onError(cause);
}
return true;
}
return false;
}
@Override
public boolean trySuccess() {
return trySuccess(null);
}
@Override
public boolean trySuccess(Void result) {
if (RESULT_UPDATER.compareAndSet(this, null, SUCCESS)) {
if (actual != null) {
actual.onNext(channel);
actual.onComplete();
}
return true;
}
return false;
}
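// Bridges a plain Netty promise (used by bind above) to this Mono-backed promise.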
@Override
public ChannelPromise unvoid() {
return new DefaultChannelPromise(channel) {
@Override
public ChannelPromise setSuccess(Void result) {
super.trySuccess(null);
MonoChannelPromise.this.trySuccess(null);
return this;
}
@Override
public boolean trySuccess(Void result) {
super.trySuccess(null);
return MonoChannelPromise.this.trySuccess(null);
}
@Override
public ChannelPromise setFailure(Throwable cause) {
super.tryFailure(cause);
MonoChannelPromise.this.tryFailure(cause);
return this;
}
@Override
public boolean tryFailure(Throwable cause) {
super.tryFailure(cause);
return MonoChannelPromise.this.tryFailure(cause);
}
};
}
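// Stores the subscriber and, if the promise has already completed, replays the
// recorded result.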
void _subscribe(CoreSubscriber<? super Channel> actual) {
this.actual = actual;
actual.onSubscribe(this);
if (isDone()) {
if (isSuccess()) {
actual.onNext(channel);
actual.onComplete();
}
else {
actual.onError(cause());
}
}
}
static final Object SUCCESS = new Object();
static final AtomicReferenceFieldUpdater<MonoChannelPromise, Object> RESULT_UPDATER =
AtomicReferenceFieldUpdater.newUpdater(MonoChannelPromise.class, Object.class, "result");
volatile Object result;
}
static final class RetryConnectException extends RuntimeException {
final List<SocketAddress> addresses;
RetryConnectException(List<SocketAddress> addresses) {
this.addresses = addresses;
}
@Override
public synchronized Throwable fillInStackTrace() {
// omit stacktrace for this exception
return this;
}
private static final long serialVersionUID = -207274323623692199L;
}
static final Logger log = Loggers.getLogger(TransportConnector.class);
static final Predicate<Throwable> RETRY_PREDICATE = t -> t instanceof RetryConnectException;
}
| SgtSilvio | c48f6a1bdb3a99b5ba8580eb0d2a19bca55be9d2 | 625633ec2abe82c0a07166213d0eea9c6c6e022b | Can we make this configurable? (e.g. `closeOnFailure`) | violetagg | 0 |
reactor/reactor-netty | 2,844 | Fix memory leak of HTTP server on bind failure | Fix issue https://github.com/reactor/reactor-netty/issues/2843 by closing the channel on bind (and other) exceptions | null | 2023-06-27 20:02:13+00:00 | 2023-06-29 08:10:58+00:00 | reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java | /*
* Copyright (c) 2020-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.transport;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.AddressResolverGroup;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.GenericFutureListener;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import reactor.util.context.ContextView;
import reactor.util.retry.Retry;
import java.net.SocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.ReactorNetty.setChannelContext;
/**
* {@link TransportConnector} is a helper class that creates, initializes and registers the channel.
* It performs the actual connect operation to the remote peer or binds the channel.
*
* @author Stephane Maldini
* @author Violeta Georgieva
* @since 1.0.0
*/
public final class TransportConnector {
TransportConnector() {}
/**
* Binds a {@link Channel}.
*
* @param config the transport configuration
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param bindAddress the local address
* @param isDomainSocket true if {@link io.netty.channel.unix.DomainSocketChannel} or
* {@link io.netty.channel.unix.ServerDomainSocketChannel} is needed, false otherwise
* @return a {@link Mono} of {@link Channel}
*/
@SuppressWarnings("FutureReturnValueIgnored")
public static Mono<Channel> bind(TransportConfig config, ChannelInitializer<Channel> channelInitializer,
SocketAddress bindAddress, boolean isDomainSocket) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(bindAddress, "bindAddress");
Objects.requireNonNull(channelInitializer, "channelInitializer");
return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
.flatMap(channel -> {
MonoChannelPromise promise = new MonoChannelPromise(channel);
// "FutureReturnValueIgnored" this is deliberate
channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
return promise;
});
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, ContextView contextView) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), contextView);
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, eventLoop, Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop,
ContextView contextView) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(remoteAddress, "remoteAddress");
Objects.requireNonNull(resolverGroup, "resolverGroup");
Objects.requireNonNull(channelInitializer, "channelInitializer");
Objects.requireNonNull(eventLoop, "eventLoop");
Objects.requireNonNull(contextView, "contextView");
boolean isDomainAddress = remoteAddress instanceof DomainSocketAddress;
return doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(channel -> doResolveAndConnect(channel, config, remoteAddress, resolverGroup, contextView)
.onErrorResume(RetryConnectException.class,
t -> {
AtomicInteger index = new AtomicInteger(1);
return Mono.defer(() ->
doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(ch -> {
MonoChannelPromise mono = new MonoChannelPromise(ch);
doConnect(t.addresses, config.bindAddress(), mono, index.get());
return mono;
}))
.retryWhen(Retry.max(t.addresses.size() - 1)
.filter(RETRY_PREDICATE)
.doBeforeRetry(sig -> index.incrementAndGet()));
}));
}
/**
* Set the channel attributes
*
* @param channel the channel
* @param attrs the attributes
*/
@SuppressWarnings("unchecked")
static void setAttributes(Channel channel, Map<AttributeKey<?>, ?> attrs) {
for (Map.Entry<AttributeKey<?>, ?> e : attrs.entrySet()) {
channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
}
/**
* Set the channel options
*
* @param channel the channel
* @param options the options
*/
@SuppressWarnings("unchecked")
static void setChannelOptions(Channel channel, Map<ChannelOption<?>, ?> options, boolean isDomainSocket) {
for (Map.Entry<ChannelOption<?>, ?> e : options.entrySet()) {
if (isDomainSocket &&
(ChannelOption.SO_REUSEADDR.equals(e.getKey()) || ChannelOption.TCP_NODELAY.equals(e.getKey()))) {
continue;
}
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Unknown channel option '{}' for channel '{}'"), e.getKey(), channel);
}
}
}
catch (Throwable t) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Failed to set channel option '{}' with value '{}' for channel '{}'"),
e.getKey(), e.getValue(), channel, t);
}
}
}
}
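// Connects to the resolved address at the given index; on failure the channel is
// closed and RetryConnectException is signalled while more addresses remain to try.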
@SuppressWarnings("FutureReturnValueIgnored")
static void doConnect(
List<SocketAddress> addresses,
@Nullable Supplier<? extends SocketAddress> bindAddress,
ChannelPromise connectPromise,
int index) {
Channel channel = connectPromise.channel();
channel.eventLoop().execute(() -> {
SocketAddress remoteAddress = addresses.get(index);
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connecting to [" + remoteAddress + "]."));
}
ChannelFuture f;
if (bindAddress == null) {
f = channel.connect(remoteAddress);
}
else {
SocketAddress local = Objects.requireNonNull(bindAddress.get(), "bindAddress");
f = channel.connect(remoteAddress, local);
}
f.addListener(future -> {
if (future.isSuccess()) {
connectPromise.setSuccess();
}
else {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
Throwable cause = future.cause();
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connect attempt to [" + remoteAddress + "] failed."), cause);
}
int next = index + 1;
if (next < addresses.size()) {
connectPromise.setFailure(new RetryConnectException(addresses));
}
else {
connectPromise.setFailure(cause);
}
}
});
});
}
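// Creates the channel, initializes its pipeline, options and attributes, registers it
// with the given event loop, and closes the channel again if registration failed.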
@SuppressWarnings("FutureReturnValueIgnored")
static Mono<Channel> doInitAndRegister(
TransportConfig config,
ChannelInitializer<Channel> channelInitializer,
boolean isDomainSocket,
EventLoop eventLoop) {
ChannelFactory<? extends Channel> channelFactory = config.connectionFactory(config.eventLoopGroup(), isDomainSocket);
Channel channel = null;
try {
channel = channelFactory.newChannel();
if (channelInitializer instanceof ServerTransport.AcceptorInitializer) {
((ServerTransport.AcceptorInitializer) channelInitializer).acceptor.enableAutoReadTask(channel);
}
channel.pipeline().addLast(channelInitializer);
setChannelOptions(channel, config.options, isDomainSocket);
setAttributes(channel, config.attrs);
}
catch (Throwable t) {
if (channel != null) {
channel.unsafe().closeForcibly();
}
return Mono.error(t);
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
channel.unsafe().register(eventLoop, monoChannelPromise);
Throwable cause = monoChannelPromise.cause();
if (cause != null) {
if (channel.isRegistered()) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
else {
channel.unsafe().closeForcibly();
}
}
return monoChannelPromise;
}
@SuppressWarnings({"unchecked", "FutureReturnValueIgnored", "try"})
static Mono<Channel> doResolveAndConnect(Channel channel, TransportConfig config,
SocketAddress remoteAddress, AddressResolverGroup<?> resolverGroup, ContextView contextView) {
try {
AddressResolver<SocketAddress> resolver;
try {
resolver = (AddressResolver<SocketAddress>) resolverGroup.getResolver(channel.eventLoop());
}
catch (Throwable t) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(t);
}
if (!contextView.isEmpty()) {
setChannelContext(channel, contextView);
}
Supplier<? extends SocketAddress> bindAddress = config.bindAddress();
if (!resolver.isSupported(remoteAddress) || resolver.isResolved(remoteAddress)) {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(Collections.singletonList(remoteAddress), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolve != null) {
clientTransportConfig.doOnResolve.accept(Connection.from(channel));
}
}
Future<List<SocketAddress>> resolveFuture;
if (resolver instanceof MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver) {
resolveFuture = ((MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver<SocketAddress>) resolver)
.resolveAll(remoteAddress, contextView);
}
else {
resolveFuture = resolver.resolveAll(remoteAddress);
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolveError != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
clientTransportConfig.doOnResolveError.accept(Connection.from(channel), future.cause());
}
});
}
if (clientTransportConfig.doAfterResolve != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.isSuccess()) {
clientTransportConfig.doAfterResolve.accept(Connection.from(channel), future.getNow().get(0));
}
});
}
}
if (resolveFuture.isDone()) {
Throwable cause = resolveFuture.cause();
if (cause != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(cause);
}
else {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(resolveFuture.getNow(), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
monoChannelPromise.tryFailure(future.cause());
}
else {
doConnect(future.getNow(), bindAddress, monoChannelPromise, 0);
}
});
return monoChannelPromise;
}
catch (Throwable t) {
return Mono.error(t);
}
}
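// A Netty ChannelPromise that is also a Mono<Channel>; the outcome is recorded once
// through RESULT_UPDATER and replayed to late subscribers.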
static final class MonoChannelPromise extends Mono<Channel> implements ChannelPromise, Subscription {
final Channel channel;
CoreSubscriber<? super Channel> actual;
MonoChannelPromise(Channel channel) {
this.channel = channel;
}
@Override
public ChannelPromise addListener(GenericFutureListener<? extends Future<? super Void>> listener) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise await() {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise awaitUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void cancel() {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
return false;
}
@Override
public Throwable cause() {
Object result = this.result;
return result == SUCCESS ? null : (Throwable) result;
}
@Override
public Channel channel() {
return channel;
}
@Override
public Void get() {
throw new UnsupportedOperationException();
}
@Override
public Void get(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public Void getNow() {
throw new UnsupportedOperationException();
}
@Override
public boolean isCancellable() {
return false;
}
@Override
public boolean isCancelled() {
return false;
}
@Override
public boolean isDone() {
Object result = this.result;
return result != null;
}
@Override
public boolean isSuccess() {
Object result = this.result;
return result == SUCCESS;
}
@Override
public boolean isVoid() {
return false;
}
@Override
public ChannelPromise removeListener(GenericFutureListener<? extends Future<? super Void>> listener) {
return this;
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
return this;
}
@Override
public void request(long n) {
// noop
}
@Override
public ChannelPromise setFailure(Throwable cause) {
tryFailure(cause);
return this;
}
@Override
public ChannelPromise setSuccess() {
trySuccess(null);
return this;
}
@Override
public ChannelPromise setSuccess(Void result) {
trySuccess(null);
return this;
}
@Override
public boolean setUncancellable() {
return true;
}
@Override
public void subscribe(CoreSubscriber<? super Channel> actual) {
EventLoop eventLoop = channel.eventLoop();
if (eventLoop.inEventLoop()) {
_subscribe(actual);
}
else {
eventLoop.execute(() -> _subscribe(actual));
}
}
@Override
public ChannelPromise sync() {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise syncUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean tryFailure(Throwable cause) {
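// Records the failure and notifies the subscriber; the channel itself is left open here.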
if (RESULT_UPDATER.compareAndSet(this, null, cause)) {
if (actual != null) {
actual.onError(cause);
}
return true;
}
return false;
}
@Override
public boolean trySuccess() {
return trySuccess(null);
}
@Override
public boolean trySuccess(Void result) {
if (RESULT_UPDATER.compareAndSet(this, null, SUCCESS)) {
if (actual != null) {
actual.onNext(channel);
actual.onComplete();
}
return true;
}
return false;
}
@Override
public ChannelPromise unvoid() {
return new DefaultChannelPromise(channel) {
@Override
public ChannelPromise setSuccess(Void result) {
super.trySuccess(null);
MonoChannelPromise.this.trySuccess(null);
return this;
}
@Override
public boolean trySuccess(Void result) {
super.trySuccess(null);
return MonoChannelPromise.this.trySuccess(null);
}
@Override
public ChannelPromise setFailure(Throwable cause) {
super.tryFailure(cause);
MonoChannelPromise.this.tryFailure(cause);
return this;
}
@Override
public boolean tryFailure(Throwable cause) {
super.tryFailure(cause);
return MonoChannelPromise.this.tryFailure(cause);
}
};
}
void _subscribe(CoreSubscriber<? super Channel> actual) {
this.actual = actual;
actual.onSubscribe(this);
if (isDone()) {
if (isSuccess()) {
actual.onNext(channel);
actual.onComplete();
}
else {
actual.onError(cause());
}
}
}
static final Object SUCCESS = new Object();
static final AtomicReferenceFieldUpdater<MonoChannelPromise, Object> RESULT_UPDATER =
AtomicReferenceFieldUpdater.newUpdater(MonoChannelPromise.class, Object.class, "result");
volatile Object result;
}
static final class RetryConnectException extends RuntimeException {
final List<SocketAddress> addresses;
RetryConnectException(List<SocketAddress> addresses) {
this.addresses = addresses;
}
@Override
public synchronized Throwable fillInStackTrace() {
// omit stacktrace for this exception
return this;
}
private static final long serialVersionUID = -207274323623692199L;
}
static final Logger log = Loggers.getLogger(TransportConnector.class);
static final Predicate<Throwable> RETRY_PREDICATE = t -> t instanceof RetryConnectException;
}
| /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.transport;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.AddressResolverGroup;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.GenericFutureListener;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import reactor.util.context.ContextView;
import reactor.util.retry.Retry;
import java.net.SocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.ReactorNetty.setChannelContext;
/**
* {@link TransportConnector} is a helper class that creates, initializes and registers the channel.
* It performs the actual connect operation to the remote peer or binds the channel.
*
* @author Stephane Maldini
* @author Violeta Georgieva
* @since 1.0.0
*/
public final class TransportConnector {
TransportConnector() {}
/**
* Binds a {@link Channel}.
*
* @param config the transport configuration
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param bindAddress the local address
* @param isDomainSocket true if {@link io.netty.channel.unix.DomainSocketChannel} or
* {@link io.netty.channel.unix.ServerDomainSocketChannel} is needed, false otherwise
* @return a {@link Mono} of {@link Channel}
*/
@SuppressWarnings("FutureReturnValueIgnored")
public static Mono<Channel> bind(TransportConfig config, ChannelInitializer<Channel> channelInitializer,
SocketAddress bindAddress, boolean isDomainSocket) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(bindAddress, "bindAddress");
Objects.requireNonNull(channelInitializer, "channelInitializer");
return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
.flatMap(channel -> {
MonoChannelPromise promise = new MonoChannelPromise(channel);
// "FutureReturnValueIgnored" this is deliberate
channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
return promise;
});
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, ContextView contextView) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), contextView);
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, eventLoop, Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop,
ContextView contextView) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(remoteAddress, "remoteAddress");
Objects.requireNonNull(resolverGroup, "resolverGroup");
Objects.requireNonNull(channelInitializer, "channelInitializer");
Objects.requireNonNull(eventLoop, "eventLoop");
Objects.requireNonNull(contextView, "contextView");
boolean isDomainAddress = remoteAddress instanceof DomainSocketAddress;
return doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(channel -> doResolveAndConnect(channel, config, remoteAddress, resolverGroup, contextView)
.onErrorResume(RetryConnectException.class,
t -> {
AtomicInteger index = new AtomicInteger(1);
return Mono.defer(() ->
doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(ch -> {
MonoChannelPromise mono = new MonoChannelPromise(ch);
doConnect(t.addresses, config.bindAddress(), mono, index.get());
return mono;
}))
.retryWhen(Retry.max(t.addresses.size() - 1)
.filter(RETRY_PREDICATE)
.doBeforeRetry(sig -> index.incrementAndGet()));
}));
}
/**
* Set the channel attributes
*
* @param channel the channel
* @param attrs the attributes
*/
@SuppressWarnings("unchecked")
static void setAttributes(Channel channel, Map<AttributeKey<?>, ?> attrs) {
for (Map.Entry<AttributeKey<?>, ?> e : attrs.entrySet()) {
channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
}
/**
* Set the channel options
*
* @param channel the channel
* @param options the options
*/
@SuppressWarnings("unchecked")
static void setChannelOptions(Channel channel, Map<ChannelOption<?>, ?> options, boolean isDomainSocket) {
for (Map.Entry<ChannelOption<?>, ?> e : options.entrySet()) {
if (isDomainSocket &&
(ChannelOption.SO_REUSEADDR.equals(e.getKey()) || ChannelOption.TCP_NODELAY.equals(e.getKey()))) {
continue;
}
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Unknown channel option '{}' for channel '{}'"), e.getKey(), channel);
}
}
}
catch (Throwable t) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Failed to set channel option '{}' with value '{}' for channel '{}'"),
e.getKey(), e.getValue(), channel, t);
}
}
}
}
static void doConnect(
List<SocketAddress> addresses,
@Nullable Supplier<? extends SocketAddress> bindAddress,
MonoChannelPromise connectPromise,
int index) {
Channel channel = connectPromise.channel();
channel.eventLoop().execute(() -> {
SocketAddress remoteAddress = addresses.get(index);
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connecting to [" + remoteAddress + "]."));
}
ChannelFuture f;
if (bindAddress == null) {
f = channel.connect(remoteAddress);
}
else {
SocketAddress local = Objects.requireNonNull(bindAddress.get(), "bindAddress");
f = channel.connect(remoteAddress, local);
}
f.addListener(future -> {
if (future.isSuccess()) {
connectPromise.setSuccess();
}
else {
Throwable cause = future.cause();
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connect attempt to [" + remoteAddress + "] failed."), cause);
}
int next = index + 1;
if (next < addresses.size()) {
connectPromise.setFailure(new RetryConnectException(addresses));
}
else {
connectPromise.setFailure(cause);
}
}
});
});
}
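// Instantiates and initializes the channel, then registers it with the event loop; any
// failure is propagated through the returned MonoChannelPromise, whose tryFailure
// closes the channel.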
@SuppressWarnings("FutureReturnValueIgnored")
static Mono<Channel> doInitAndRegister(
TransportConfig config,
ChannelInitializer<Channel> channelInitializer,
boolean isDomainSocket,
EventLoop eventLoop) {
ChannelFactory<? extends Channel> channelFactory = config.connectionFactory(config.eventLoopGroup(), isDomainSocket);
Channel channel = null;
try {
channel = channelFactory.newChannel();
if (channelInitializer instanceof ServerTransport.AcceptorInitializer) {
((ServerTransport.AcceptorInitializer) channelInitializer).acceptor.enableAutoReadTask(channel);
}
channel.pipeline().addLast(channelInitializer);
setChannelOptions(channel, config.options, isDomainSocket);
setAttributes(channel, config.attrs);
}
catch (Throwable t) {
if (channel != null) {
channel.unsafe().closeForcibly();
}
return Mono.error(t);
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
channel.unsafe().register(eventLoop, monoChannelPromise);
return monoChannelPromise;
}
@SuppressWarnings({"unchecked", "FutureReturnValueIgnored", "try"})
static Mono<Channel> doResolveAndConnect(Channel channel, TransportConfig config,
SocketAddress remoteAddress, AddressResolverGroup<?> resolverGroup, ContextView contextView) {
try {
AddressResolver<SocketAddress> resolver;
try {
resolver = (AddressResolver<SocketAddress>) resolverGroup.getResolver(channel.eventLoop());
}
catch (Throwable t) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(t);
}
if (!contextView.isEmpty()) {
setChannelContext(channel, contextView);
}
Supplier<? extends SocketAddress> bindAddress = config.bindAddress();
if (!resolver.isSupported(remoteAddress) || resolver.isResolved(remoteAddress)) {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(Collections.singletonList(remoteAddress), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolve != null) {
clientTransportConfig.doOnResolve.accept(Connection.from(channel));
}
}
Future<List<SocketAddress>> resolveFuture;
if (resolver instanceof MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver) {
resolveFuture = ((MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver<SocketAddress>) resolver)
.resolveAll(remoteAddress, contextView);
}
else {
resolveFuture = resolver.resolveAll(remoteAddress);
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolveError != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
clientTransportConfig.doOnResolveError.accept(Connection.from(channel), future.cause());
}
});
}
if (clientTransportConfig.doAfterResolve != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.isSuccess()) {
clientTransportConfig.doAfterResolve.accept(Connection.from(channel), future.getNow().get(0));
}
});
}
}
if (resolveFuture.isDone()) {
Throwable cause = resolveFuture.cause();
if (cause != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(cause);
}
else {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(resolveFuture.getNow(), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
monoChannelPromise.tryFailure(future.cause());
}
else {
doConnect(future.getNow(), bindAddress, monoChannelPromise, 0);
}
});
return monoChannelPromise;
}
catch (Throwable t) {
return Mono.error(t);
}
}
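// Adapter between a Netty ChannelPromise and a Reactor Mono<Channel>: completion is
// recorded atomically in the result field and replayed to late subscribers.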
static final class MonoChannelPromise extends Mono<Channel> implements ChannelPromise, Subscription {
final Channel channel;
CoreSubscriber<? super Channel> actual;
MonoChannelPromise(Channel channel) {
this.channel = channel;
}
@Override
public ChannelPromise addListener(GenericFutureListener<? extends Future<? super Void>> listener) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise await() {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise awaitUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void cancel() {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
return false;
}
@Override
public Throwable cause() {
Object result = this.result;
return result == SUCCESS ? null : (Throwable) result;
}
@Override
public Channel channel() {
return channel;
}
@Override
public Void get() {
throw new UnsupportedOperationException();
}
@Override
public Void get(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public Void getNow() {
throw new UnsupportedOperationException();
}
@Override
public boolean isCancellable() {
return false;
}
@Override
public boolean isCancelled() {
return false;
}
@Override
public boolean isDone() {
Object result = this.result;
return result != null;
}
@Override
public boolean isSuccess() {
Object result = this.result;
return result == SUCCESS;
}
@Override
public boolean isVoid() {
return false;
}
@Override
public ChannelPromise removeListener(GenericFutureListener<? extends Future<? super Void>> listener) {
return this;
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
return this;
}
@Override
public void request(long n) {
// noop
}
@Override
public ChannelPromise setFailure(Throwable cause) {
tryFailure(cause);
return this;
}
@Override
public ChannelPromise setSuccess() {
trySuccess(null);
return this;
}
@Override
public ChannelPromise setSuccess(Void result) {
trySuccess(null);
return this;
}
@Override
public boolean setUncancellable() {
return true;
}
@Override
public void subscribe(CoreSubscriber<? super Channel> actual) {
EventLoop eventLoop = channel.eventLoop();
if (eventLoop.inEventLoop()) {
_subscribe(actual);
}
else {
eventLoop.execute(() -> _subscribe(actual));
}
}
@Override
public ChannelPromise sync() {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise syncUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public boolean tryFailure(Throwable cause) {
if (RESULT_UPDATER.compareAndSet(this, null, cause)) {
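// Closing here covers every failure path, including a failed bind, so the channel
// cannot be leaked.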
if (channel.isRegistered()) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
else {
channel.unsafe().closeForcibly();
}
if (actual != null) {
actual.onError(cause);
}
return true;
}
return false;
}
@Override
public boolean trySuccess() {
return trySuccess(null);
}
@Override
public boolean trySuccess(Void result) {
if (RESULT_UPDATER.compareAndSet(this, null, SUCCESS)) {
if (actual != null) {
actual.onNext(channel);
actual.onComplete();
}
return true;
}
return false;
}
@Override
public ChannelPromise unvoid() {
return new DefaultChannelPromise(channel) {
@Override
public ChannelPromise setSuccess(Void result) {
super.trySuccess(null);
MonoChannelPromise.this.trySuccess(null);
return this;
}
@Override
public boolean trySuccess(Void result) {
super.trySuccess(null);
return MonoChannelPromise.this.trySuccess(null);
}
@Override
public ChannelPromise setFailure(Throwable cause) {
super.tryFailure(cause);
MonoChannelPromise.this.tryFailure(cause);
return this;
}
@Override
public boolean tryFailure(Throwable cause) {
super.tryFailure(cause);
return MonoChannelPromise.this.tryFailure(cause);
}
};
}
void _subscribe(CoreSubscriber<? super Channel> actual) {
this.actual = actual;
actual.onSubscribe(this);
if (isDone()) {
if (isSuccess()) {
actual.onNext(channel);
actual.onComplete();
}
else {
actual.onError(cause());
}
}
}
static final Object SUCCESS = new Object();
static final AtomicReferenceFieldUpdater<MonoChannelPromise, Object> RESULT_UPDATER =
AtomicReferenceFieldUpdater.newUpdater(MonoChannelPromise.class, Object.class, "result");
volatile Object result;
}
static final class RetryConnectException extends RuntimeException {
final List<SocketAddress> addresses;
RetryConnectException(List<SocketAddress> addresses) {
this.addresses = addresses;
}
@Override
public synchronized Throwable fillInStackTrace() {
// omit stacktrace for this exception
return this;
}
private static final long serialVersionUID = -207274323623692199L;
}
static final Logger log = Loggers.getLogger(TransportConnector.class);
static final Predicate<Throwable> RETRY_PREDICATE = t -> t instanceof RetryConnectException;
}
| SgtSilvio | c48f6a1bdb3a99b5ba8580eb0d2a19bca55be9d2 | 625633ec2abe82c0a07166213d0eea9c6c6e022b | As this is a bugfix, I don't see a reason why one should disable this.
I also checked all code paths that call `tryFailure` and all of them close the channel already except the bind call.
Also, duplicate calls to close do not do any harm, as they are guarded in Netty and only the first one actually does something. | SgtSilvio | 1 |
reactor/reactor-netty | 2,844 | Fix memory leak of HTTP server on bind failure | Fix issue https://github.com/reactor/reactor-netty/issues/2843 by closing the channel on bind (and other) exceptions | null | 2023-06-27 20:02:13+00:00 | 2023-06-29 08:10:58+00:00 | reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java | /*
* Copyright (c) 2020-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.transport;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.AddressResolverGroup;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.GenericFutureListener;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import reactor.util.context.ContextView;
import reactor.util.retry.Retry;
import java.net.SocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.ReactorNetty.setChannelContext;
/**
* {@link TransportConnector} is a helper class that creates, initializes and registers the channel.
* It performs the actual connect operation to the remote peer or binds the channel.
*
* @author Stephane Maldini
* @author Violeta Georgieva
* @since 1.0.0
*/
public final class TransportConnector {
TransportConnector() {}
/**
* Binds a {@link Channel}.
*
* @param config the transport configuration
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param bindAddress the local address
* @param isDomainSocket true if {@link io.netty.channel.unix.DomainSocketChannel} or
* {@link io.netty.channel.unix.ServerDomainSocketChannel} is needed, false otherwise
* @return a {@link Mono} of {@link Channel}
*/
@SuppressWarnings("FutureReturnValueIgnored")
public static Mono<Channel> bind(TransportConfig config, ChannelInitializer<Channel> channelInitializer,
SocketAddress bindAddress, boolean isDomainSocket) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(bindAddress, "bindAddress");
Objects.requireNonNull(channelInitializer, "channelInitializer");
return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
.flatMap(channel -> {
MonoChannelPromise promise = new MonoChannelPromise(channel);
// "FutureReturnValueIgnored" this is deliberate
channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
return promise;
});
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, ContextView contextView) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), contextView);
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, eventLoop, Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop,
ContextView contextView) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(remoteAddress, "remoteAddress");
Objects.requireNonNull(resolverGroup, "resolverGroup");
Objects.requireNonNull(channelInitializer, "channelInitializer");
Objects.requireNonNull(eventLoop, "eventLoop");
Objects.requireNonNull(contextView, "contextView");
boolean isDomainAddress = remoteAddress instanceof DomainSocketAddress;
return doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(channel -> doResolveAndConnect(channel, config, remoteAddress, resolverGroup, contextView)
.onErrorResume(RetryConnectException.class,
t -> {
AtomicInteger index = new AtomicInteger(1);
return Mono.defer(() ->
doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(ch -> {
MonoChannelPromise mono = new MonoChannelPromise(ch);
doConnect(t.addresses, config.bindAddress(), mono, index.get());
return mono;
}))
.retryWhen(Retry.max(t.addresses.size() - 1)
.filter(RETRY_PREDICATE)
.doBeforeRetry(sig -> index.incrementAndGet()));
}));
}
/**
* Set the channel attributes
*
* @param channel the channel
* @param attrs the attributes
*/
@SuppressWarnings("unchecked")
static void setAttributes(Channel channel, Map<AttributeKey<?>, ?> attrs) {
for (Map.Entry<AttributeKey<?>, ?> e : attrs.entrySet()) {
channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
}
/**
* Set the channel options
*
* @param channel the channel
* @param options the options
* @param isDomainSocket true if the channel is a Unix domain socket channel, false otherwise
*/
@SuppressWarnings("unchecked")
static void setChannelOptions(Channel channel, Map<ChannelOption<?>, ?> options, boolean isDomainSocket) {
for (Map.Entry<ChannelOption<?>, ?> e : options.entrySet()) {
if (isDomainSocket &&
(ChannelOption.SO_REUSEADDR.equals(e.getKey()) || ChannelOption.TCP_NODELAY.equals(e.getKey()))) {
continue;
}
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Unknown channel option '{}' for channel '{}'"), e.getKey(), channel);
}
}
}
catch (Throwable t) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Failed to set channel option '{}' with value '{}' for channel '{}'"),
e.getKey(), e.getValue(), channel, t);
}
}
}
}
@SuppressWarnings("FutureReturnValueIgnored")
static void doConnect(
List<SocketAddress> addresses,
@Nullable Supplier<? extends SocketAddress> bindAddress,
ChannelPromise connectPromise,
int index) {
Channel channel = connectPromise.channel();
channel.eventLoop().execute(() -> {
SocketAddress remoteAddress = addresses.get(index);
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connecting to [" + remoteAddress + "]."));
}
ChannelFuture f;
if (bindAddress == null) {
f = channel.connect(remoteAddress);
}
else {
SocketAddress local = Objects.requireNonNull(bindAddress.get(), "bindAddress");
f = channel.connect(remoteAddress, local);
}
f.addListener(future -> {
if (future.isSuccess()) {
connectPromise.setSuccess();
}
else {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
Throwable cause = future.cause();
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connect attempt to [" + remoteAddress + "] failed."), cause);
}
int next = index + 1;
if (next < addresses.size()) {
connectPromise.setFailure(new RetryConnectException(addresses));
}
else {
connectPromise.setFailure(cause);
}
}
});
});
}
@SuppressWarnings("FutureReturnValueIgnored")
static Mono<Channel> doInitAndRegister(
TransportConfig config,
ChannelInitializer<Channel> channelInitializer,
boolean isDomainSocket,
EventLoop eventLoop) {
ChannelFactory<? extends Channel> channelFactory = config.connectionFactory(config.eventLoopGroup(), isDomainSocket);
Channel channel = null;
try {
channel = channelFactory.newChannel();
if (channelInitializer instanceof ServerTransport.AcceptorInitializer) {
((ServerTransport.AcceptorInitializer) channelInitializer).acceptor.enableAutoReadTask(channel);
}
channel.pipeline().addLast(channelInitializer);
setChannelOptions(channel, config.options, isDomainSocket);
setAttributes(channel, config.attrs);
}
catch (Throwable t) {
if (channel != null) {
channel.unsafe().closeForcibly();
}
return Mono.error(t);
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
channel.unsafe().register(eventLoop, monoChannelPromise);
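// If registration failed synchronously, release the channel here: close() when it was
// registered, closeForcibly() otherwise. The failure itself still propagates through
// the returned promise.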
Throwable cause = monoChannelPromise.cause();
if (cause != null) {
if (channel.isRegistered()) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
else {
channel.unsafe().closeForcibly();
}
}
return monoChannelPromise;
}
@SuppressWarnings({"unchecked", "FutureReturnValueIgnored", "try"})
static Mono<Channel> doResolveAndConnect(Channel channel, TransportConfig config,
SocketAddress remoteAddress, AddressResolverGroup<?> resolverGroup, ContextView contextView) {
try {
AddressResolver<SocketAddress> resolver;
try {
resolver = (AddressResolver<SocketAddress>) resolverGroup.getResolver(channel.eventLoop());
}
catch (Throwable t) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(t);
}
if (!contextView.isEmpty()) {
setChannelContext(channel, contextView);
}
Supplier<? extends SocketAddress> bindAddress = config.bindAddress();
if (!resolver.isSupported(remoteAddress) || resolver.isResolved(remoteAddress)) {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(Collections.singletonList(remoteAddress), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolve != null) {
clientTransportConfig.doOnResolve.accept(Connection.from(channel));
}
}
Future<List<SocketAddress>> resolveFuture;
if (resolver instanceof MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver) {
resolveFuture = ((MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver<SocketAddress>) resolver)
.resolveAll(remoteAddress, contextView);
}
else {
resolveFuture = resolver.resolveAll(remoteAddress);
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolveError != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
clientTransportConfig.doOnResolveError.accept(Connection.from(channel), future.cause());
}
});
}
if (clientTransportConfig.doAfterResolve != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.isSuccess()) {
clientTransportConfig.doAfterResolve.accept(Connection.from(channel), future.getNow().get(0));
}
});
}
}
if (resolveFuture.isDone()) {
Throwable cause = resolveFuture.cause();
if (cause != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(cause);
}
else {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(resolveFuture.getNow(), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
monoChannelPromise.tryFailure(future.cause());
}
else {
doConnect(future.getNow(), bindAddress, monoChannelPromise, 0);
}
});
return monoChannelPromise;
}
catch (Throwable t) {
return Mono.error(t);
}
}
static final class MonoChannelPromise extends Mono<Channel> implements ChannelPromise, Subscription {
final Channel channel;
CoreSubscriber<? super Channel> actual;
MonoChannelPromise(Channel channel) {
this.channel = channel;
}
@Override
public ChannelPromise addListener(GenericFutureListener<? extends Future<? super Void>> listener) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise await() {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise awaitUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void cancel() {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
return false;
}
@Override
public Throwable cause() {
Object result = this.result;
return result == SUCCESS ? null : (Throwable) result;
}
@Override
public Channel channel() {
return channel;
}
@Override
public Void get() {
throw new UnsupportedOperationException();
}
@Override
public Void get(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public Void getNow() {
throw new UnsupportedOperationException();
}
@Override
public boolean isCancellable() {
return false;
}
@Override
public boolean isCancelled() {
return false;
}
@Override
public boolean isDone() {
Object result = this.result;
return result != null;
}
@Override
public boolean isSuccess() {
Object result = this.result;
return result == SUCCESS;
}
@Override
public boolean isVoid() {
return false;
}
@Override
public ChannelPromise removeListener(GenericFutureListener<? extends Future<? super Void>> listener) {
return this;
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
return this;
}
@Override
public void request(long n) {
// noop
}
@Override
public ChannelPromise setFailure(Throwable cause) {
tryFailure(cause);
return this;
}
@Override
public ChannelPromise setSuccess() {
trySuccess(null);
return this;
}
@Override
public ChannelPromise setSuccess(Void result) {
trySuccess(null);
return this;
}
@Override
public boolean setUncancellable() {
return true;
}
@Override
public void subscribe(CoreSubscriber<? super Channel> actual) {
EventLoop eventLoop = channel.eventLoop();
if (eventLoop.inEventLoop()) {
_subscribe(actual);
}
else {
eventLoop.execute(() -> _subscribe(actual));
}
}
@Override
public ChannelPromise sync() {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise syncUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean tryFailure(Throwable cause) {
if (RESULT_UPDATER.compareAndSet(this, null, cause)) {
if (actual != null) {
actual.onError(cause);
}
return true;
}
return false;
}
@Override
public boolean trySuccess() {
return trySuccess(null);
}
@Override
public boolean trySuccess(Void result) {
if (RESULT_UPDATER.compareAndSet(this, null, SUCCESS)) {
if (actual != null) {
actual.onNext(channel);
actual.onComplete();
}
return true;
}
return false;
}
@Override
public ChannelPromise unvoid() {
return new DefaultChannelPromise(channel) {
@Override
public ChannelPromise setSuccess(Void result) {
super.trySuccess(null);
MonoChannelPromise.this.trySuccess(null);
return this;
}
@Override
public boolean trySuccess(Void result) {
super.trySuccess(null);
return MonoChannelPromise.this.trySuccess(null);
}
@Override
public ChannelPromise setFailure(Throwable cause) {
super.tryFailure(cause);
MonoChannelPromise.this.tryFailure(cause);
return this;
}
@Override
public boolean tryFailure(Throwable cause) {
super.tryFailure(cause);
return MonoChannelPromise.this.tryFailure(cause);
}
};
}
void _subscribe(CoreSubscriber<? super Channel> actual) {
this.actual = actual;
actual.onSubscribe(this);
if (isDone()) {
if (isSuccess()) {
actual.onNext(channel);
actual.onComplete();
}
else {
actual.onError(cause());
}
}
}
static final Object SUCCESS = new Object();
static final AtomicReferenceFieldUpdater<MonoChannelPromise, Object> RESULT_UPDATER =
AtomicReferenceFieldUpdater.newUpdater(MonoChannelPromise.class, Object.class, "result");
volatile Object result;
}
static final class RetryConnectException extends RuntimeException {
final List<SocketAddress> addresses;
RetryConnectException(List<SocketAddress> addresses) {
this.addresses = addresses;
}
@Override
public synchronized Throwable fillInStackTrace() {
// omit stacktrace for this exception
return this;
}
private static final long serialVersionUID = -207274323623692199L;
}
static final Logger log = Loggers.getLogger(TransportConnector.class);
static final Predicate<Throwable> RETRY_PREDICATE = t -> t instanceof RetryConnectException;
}
| /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.transport;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.AddressResolverGroup;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.GenericFutureListener;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import reactor.util.context.ContextView;
import reactor.util.retry.Retry;
import java.net.SocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.ReactorNetty.setChannelContext;
/**
* {@link TransportConnector} is a helper class that creates, initializes and registers the channel.
* It performs the actual connect operation to the remote peer or binds the channel.
*
* @author Stephane Maldini
* @author Violeta Georgieva
* @since 1.0.0
*/
public final class TransportConnector {
TransportConnector() {}
/**
* Binds a {@link Channel}.
*
* @param config the transport configuration
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param bindAddress the local address
* @param isDomainSocket true if {@link io.netty.channel.unix.DomainSocketChannel} or
* {@link io.netty.channel.unix.ServerDomainSocketChannel} is needed, false otherwise
* @return a {@link Mono} of {@link Channel}
*/
@SuppressWarnings("FutureReturnValueIgnored")
public static Mono<Channel> bind(TransportConfig config, ChannelInitializer<Channel> channelInitializer,
SocketAddress bindAddress, boolean isDomainSocket) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(bindAddress, "bindAddress");
Objects.requireNonNull(channelInitializer, "channelInitializer");
return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
.flatMap(channel -> {
MonoChannelPromise promise = new MonoChannelPromise(channel);
// "FutureReturnValueIgnored" this is deliberate
channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
return promise;
});
}
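// Illustrative usage sketch (not part of the original file; `config`, `initializer`
// and `log` are assumed to exist in the caller's scope):
//
//   TransportConnector.bind(config, initializer, new InetSocketAddress(0), false)
//           .doOnError(e -> log.warn("bind failed", e))
//           .subscribe();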
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, ContextView contextView) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), contextView);
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, eventLoop, Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop,
ContextView contextView) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(remoteAddress, "remoteAddress");
Objects.requireNonNull(resolverGroup, "resolverGroup");
Objects.requireNonNull(channelInitializer, "channelInitializer");
Objects.requireNonNull(eventLoop, "eventLoop");
Objects.requireNonNull(contextView, "contextView");
boolean isDomainAddress = remoteAddress instanceof DomainSocketAddress;
return doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(channel -> doResolveAndConnect(channel, config, remoteAddress, resolverGroup, contextView)
.onErrorResume(RetryConnectException.class,
t -> {
AtomicInteger index = new AtomicInteger(1);
return Mono.defer(() ->
doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(ch -> {
MonoChannelPromise mono = new MonoChannelPromise(ch);
doConnect(t.addresses, config.bindAddress(), mono, index.get());
return mono;
}))
.retryWhen(Retry.max(t.addresses.size() - 1)
.filter(RETRY_PREDICATE)
.doBeforeRetry(sig -> index.incrementAndGet()));
}));
}
/**
* Set the channel attributes
*
* @param channel the channel
* @param attrs the attributes
*/
@SuppressWarnings("unchecked")
static void setAttributes(Channel channel, Map<AttributeKey<?>, ?> attrs) {
for (Map.Entry<AttributeKey<?>, ?> e : attrs.entrySet()) {
channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
}
/**
* Set the channel options
*
* @param channel the channel
* @param options the options
* @param isDomainSocket true if the channel is a Unix domain socket channel, false otherwise
*/
@SuppressWarnings("unchecked")
static void setChannelOptions(Channel channel, Map<ChannelOption<?>, ?> options, boolean isDomainSocket) {
for (Map.Entry<ChannelOption<?>, ?> e : options.entrySet()) {
if (isDomainSocket &&
(ChannelOption.SO_REUSEADDR.equals(e.getKey()) || ChannelOption.TCP_NODELAY.equals(e.getKey()))) {
continue;
}
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Unknown channel option '{}' for channel '{}'"), e.getKey(), channel);
}
}
}
catch (Throwable t) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Failed to set channel option '{}' with value '{}' for channel '{}'"),
e.getKey(), e.getValue(), channel, t);
}
}
}
}
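// Tries addresses.get(index); on failure this signals RetryConnectException while more
// resolved addresses remain, so that connect(...) can retry with a fresh channel.
// Unlike the previous revision, the channel is no longer closed here: cleanup now
// happens in MonoChannelPromise#tryFailure.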
static void doConnect(
List<SocketAddress> addresses,
@Nullable Supplier<? extends SocketAddress> bindAddress,
MonoChannelPromise connectPromise,
int index) {
Channel channel = connectPromise.channel();
channel.eventLoop().execute(() -> {
SocketAddress remoteAddress = addresses.get(index);
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connecting to [" + remoteAddress + "]."));
}
ChannelFuture f;
if (bindAddress == null) {
f = channel.connect(remoteAddress);
}
else {
SocketAddress local = Objects.requireNonNull(bindAddress.get(), "bindAddress");
f = channel.connect(remoteAddress, local);
}
f.addListener(future -> {
if (future.isSuccess()) {
connectPromise.setSuccess();
}
else {
Throwable cause = future.cause();
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connect attempt to [" + remoteAddress + "] failed."), cause);
}
int next = index + 1;
if (next < addresses.size()) {
connectPromise.setFailure(new RetryConnectException(addresses));
}
else {
connectPromise.setFailure(cause);
}
}
});
});
}
@SuppressWarnings("FutureReturnValueIgnored")
static Mono<Channel> doInitAndRegister(
TransportConfig config,
ChannelInitializer<Channel> channelInitializer,
boolean isDomainSocket,
EventLoop eventLoop) {
ChannelFactory<? extends Channel> channelFactory = config.connectionFactory(config.eventLoopGroup(), isDomainSocket);
Channel channel = null;
try {
channel = channelFactory.newChannel();
if (channelInitializer instanceof ServerTransport.AcceptorInitializer) {
((ServerTransport.AcceptorInitializer) channelInitializer).acceptor.enableAutoReadTask(channel);
}
channel.pipeline().addLast(channelInitializer);
setChannelOptions(channel, config.options, isDomainSocket);
setAttributes(channel, config.attrs);
}
catch (Throwable t) {
if (channel != null) {
channel.unsafe().closeForcibly();
}
return Mono.error(t);
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
channel.unsafe().register(eventLoop, monoChannelPromise);
return monoChannelPromise;
}
@SuppressWarnings({"unchecked", "FutureReturnValueIgnored", "try"})
static Mono<Channel> doResolveAndConnect(Channel channel, TransportConfig config,
SocketAddress remoteAddress, AddressResolverGroup<?> resolverGroup, ContextView contextView) {
try {
AddressResolver<SocketAddress> resolver;
try {
resolver = (AddressResolver<SocketAddress>) resolverGroup.getResolver(channel.eventLoop());
}
catch (Throwable t) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(t);
}
if (!contextView.isEmpty()) {
setChannelContext(channel, contextView);
}
Supplier<? extends SocketAddress> bindAddress = config.bindAddress();
if (!resolver.isSupported(remoteAddress) || resolver.isResolved(remoteAddress)) {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(Collections.singletonList(remoteAddress), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolve != null) {
clientTransportConfig.doOnResolve.accept(Connection.from(channel));
}
}
Future<List<SocketAddress>> resolveFuture;
if (resolver instanceof MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver) {
resolveFuture = ((MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver<SocketAddress>) resolver)
.resolveAll(remoteAddress, contextView);
}
else {
resolveFuture = resolver.resolveAll(remoteAddress);
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolveError != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
clientTransportConfig.doOnResolveError.accept(Connection.from(channel), future.cause());
}
});
}
if (clientTransportConfig.doAfterResolve != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.isSuccess()) {
clientTransportConfig.doAfterResolve.accept(Connection.from(channel), future.getNow().get(0));
}
});
}
}
if (resolveFuture.isDone()) {
Throwable cause = resolveFuture.cause();
if (cause != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(cause);
}
else {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(resolveFuture.getNow(), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
monoChannelPromise.tryFailure(future.cause());
}
else {
doConnect(future.getNow(), bindAddress, monoChannelPromise, 0);
}
});
return monoChannelPromise;
}
catch (Throwable t) {
return Mono.error(t);
}
}
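// Bridges Netty and Reactor: a ChannelPromise whose completion drives a Mono<Channel>.
// trySuccess emits the channel to the subscriber; tryFailure closes the channel
// (forcibly when it is not yet registered) before signaling the error, which is what
// releases the channel when registration, bind or connect fails.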
static final class MonoChannelPromise extends Mono<Channel> implements ChannelPromise, Subscription {
final Channel channel;
CoreSubscriber<? super Channel> actual;
MonoChannelPromise(Channel channel) {
this.channel = channel;
}
@Override
public ChannelPromise addListener(GenericFutureListener<? extends Future<? super Void>> listener) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise await() {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise awaitUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void cancel() {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
return false;
}
@Override
public Throwable cause() {
Object result = this.result;
return result == SUCCESS ? null : (Throwable) result;
}
@Override
public Channel channel() {
return channel;
}
@Override
public Void get() {
throw new UnsupportedOperationException();
}
@Override
public Void get(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public Void getNow() {
throw new UnsupportedOperationException();
}
@Override
public boolean isCancellable() {
return false;
}
@Override
public boolean isCancelled() {
return false;
}
@Override
public boolean isDone() {
Object result = this.result;
return result != null;
}
@Override
public boolean isSuccess() {
Object result = this.result;
return result == SUCCESS;
}
@Override
public boolean isVoid() {
return false;
}
@Override
public ChannelPromise removeListener(GenericFutureListener<? extends Future<? super Void>> listener) {
return this;
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
return this;
}
@Override
public void request(long n) {
// noop
}
@Override
public ChannelPromise setFailure(Throwable cause) {
tryFailure(cause);
return this;
}
@Override
public ChannelPromise setSuccess() {
trySuccess(null);
return this;
}
@Override
public ChannelPromise setSuccess(Void result) {
trySuccess(null);
return this;
}
@Override
public boolean setUncancellable() {
return true;
}
@Override
public void subscribe(CoreSubscriber<? super Channel> actual) {
EventLoop eventLoop = channel.eventLoop();
if (eventLoop.inEventLoop()) {
_subscribe(actual);
}
else {
eventLoop.execute(() -> _subscribe(actual));
}
}
@Override
public ChannelPromise sync() {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise syncUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public boolean tryFailure(Throwable cause) {
if (RESULT_UPDATER.compareAndSet(this, null, cause)) {
if (channel.isRegistered()) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
else {
channel.unsafe().closeForcibly();
}
if (actual != null) {
actual.onError(cause);
}
return true;
}
return false;
}
@Override
public boolean trySuccess() {
return trySuccess(null);
}
@Override
public boolean trySuccess(Void result) {
if (RESULT_UPDATER.compareAndSet(this, null, SUCCESS)) {
if (actual != null) {
actual.onNext(channel);
actual.onComplete();
}
return true;
}
return false;
}
@Override
public ChannelPromise unvoid() {
return new DefaultChannelPromise(channel) {
@Override
public ChannelPromise setSuccess(Void result) {
super.trySuccess(null);
MonoChannelPromise.this.trySuccess(null);
return this;
}
@Override
public boolean trySuccess(Void result) {
super.trySuccess(null);
return MonoChannelPromise.this.trySuccess(null);
}
@Override
public ChannelPromise setFailure(Throwable cause) {
super.tryFailure(cause);
MonoChannelPromise.this.tryFailure(cause);
return this;
}
@Override
public boolean tryFailure(Throwable cause) {
super.tryFailure(cause);
return MonoChannelPromise.this.tryFailure(cause);
}
};
}
void _subscribe(CoreSubscriber<? super Channel> actual) {
this.actual = actual;
actual.onSubscribe(this);
if (isDone()) {
if (isSuccess()) {
actual.onNext(channel);
actual.onComplete();
}
else {
actual.onError(cause());
}
}
}
static final Object SUCCESS = new Object();
static final AtomicReferenceFieldUpdater<MonoChannelPromise, Object> RESULT_UPDATER =
AtomicReferenceFieldUpdater.newUpdater(MonoChannelPromise.class, Object.class, "result");
volatile Object result;
}
static final class RetryConnectException extends RuntimeException {
final List<SocketAddress> addresses;
RetryConnectException(List<SocketAddress> addresses) {
this.addresses = addresses;
}
@Override
public synchronized Throwable fillInStackTrace() {
// omit stacktrace for this exception
return this;
}
private static final long serialVersionUID = -207274323623692199L;
}
static final Logger log = Loggers.getLogger(TransportConnector.class);
static final Predicate<Throwable> RETRY_PREDICATE = t -> t instanceof RetryConnectException;
}
| SgtSilvio | c48f6a1bdb3a99b5ba8580eb0d2a19bca55be9d2 | 625633ec2abe82c0a07166213d0eea9c6c6e022b | If you don't like this at this place, as an alternative we could move the `channel.close()` only to the `TransportConnector.bind` method. Imho it would be safer to do it here as it avoids the same problem for other code paths that fail during channel initialization. | SgtSilvio | 2
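A minimal sketch of the reviewer's alternative, assuming the close were scoped to TransportConnector.bind alone (the doOnError placement is illustrative, not the merged change):

return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
        .flatMap(channel -> {
            MonoChannelPromise promise = new MonoChannelPromise(channel);
            channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
            // close the channel if binding fails, instead of closing inside tryFailure
            return promise.doOnError(t -> channel.close());
        });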
reactor/reactor-netty | 2,844 | Fix memory leak of HTTP server on bind failure | Fix issue https://github.com/reactor/reactor-netty/issues/2843 by closing the channel on bind (and other) exceptions | null | 2023-06-27 20:02:13+00:00 | 2023-06-29 08:10:58+00:00 | reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java | /*
* Copyright (c) 2020-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.transport;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.AddressResolverGroup;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.GenericFutureListener;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import reactor.util.context.ContextView;
import reactor.util.retry.Retry;
import java.net.SocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.ReactorNetty.setChannelContext;
/**
* {@link TransportConnector} is a helper class that creates, initializes and registers the channel.
* It performs the actual connect operation to the remote peer or binds the channel.
*
* @author Stephane Maldini
* @author Violeta Georgieva
* @since 1.0.0
*/
public final class TransportConnector {
TransportConnector() {}
/**
* Binds a {@link Channel}.
*
* @param config the transport configuration
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param bindAddress the local address
* @param isDomainSocket true if {@link io.netty.channel.unix.DomainSocketChannel} or
* {@link io.netty.channel.unix.ServerDomainSocketChannel} is needed, false otherwise
* @return a {@link Mono} of {@link Channel}
*/
@SuppressWarnings("FutureReturnValueIgnored")
public static Mono<Channel> bind(TransportConfig config, ChannelInitializer<Channel> channelInitializer,
SocketAddress bindAddress, boolean isDomainSocket) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(bindAddress, "bindAddress");
Objects.requireNonNull(channelInitializer, "channelInitializer");
return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
.flatMap(channel -> {
MonoChannelPromise promise = new MonoChannelPromise(channel);
// "FutureReturnValueIgnored" this is deliberate
channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
return promise;
});
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, ContextView contextView) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), contextView);
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, eventLoop, Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop,
ContextView contextView) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(remoteAddress, "remoteAddress");
Objects.requireNonNull(resolverGroup, "resolverGroup");
Objects.requireNonNull(channelInitializer, "channelInitializer");
Objects.requireNonNull(eventLoop, "eventLoop");
Objects.requireNonNull(contextView, "contextView");
boolean isDomainAddress = remoteAddress instanceof DomainSocketAddress;
return doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(channel -> doResolveAndConnect(channel, config, remoteAddress, resolverGroup, contextView)
.onErrorResume(RetryConnectException.class,
t -> {
AtomicInteger index = new AtomicInteger(1);
return Mono.defer(() ->
doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(ch -> {
MonoChannelPromise mono = new MonoChannelPromise(ch);
doConnect(t.addresses, config.bindAddress(), mono, index.get());
return mono;
}))
.retryWhen(Retry.max(t.addresses.size() - 1)
.filter(RETRY_PREDICATE)
.doBeforeRetry(sig -> index.incrementAndGet()));
}));
}
/**
* Set the channel attributes
*
* @param channel the channel
* @param attrs the attributes
*/
@SuppressWarnings("unchecked")
static void setAttributes(Channel channel, Map<AttributeKey<?>, ?> attrs) {
for (Map.Entry<AttributeKey<?>, ?> e : attrs.entrySet()) {
channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
}
/**
* Set the channel options
*
* @param channel the channel
* @param options the options
* @param isDomainSocket true if the channel is a Unix domain socket channel, false otherwise
*/
@SuppressWarnings("unchecked")
static void setChannelOptions(Channel channel, Map<ChannelOption<?>, ?> options, boolean isDomainSocket) {
for (Map.Entry<ChannelOption<?>, ?> e : options.entrySet()) {
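// SO_REUSEADDR and TCP_NODELAY are TCP/IP-oriented options that do not apply to Unix
// domain sockets, so they are skipped rather than triggering the warning below.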
if (isDomainSocket &&
(ChannelOption.SO_REUSEADDR.equals(e.getKey()) || ChannelOption.TCP_NODELAY.equals(e.getKey()))) {
continue;
}
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Unknown channel option '{}' for channel '{}'"), e.getKey(), channel);
}
}
}
catch (Throwable t) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Failed to set channel option '{}' with value '{}' for channel '{}'"),
e.getKey(), e.getValue(), channel, t);
}
}
}
}
@SuppressWarnings("FutureReturnValueIgnored")
static void doConnect(
List<SocketAddress> addresses,
@Nullable Supplier<? extends SocketAddress> bindAddress,
ChannelPromise connectPromise,
int index) {
Channel channel = connectPromise.channel();
channel.eventLoop().execute(() -> {
SocketAddress remoteAddress = addresses.get(index);
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connecting to [" + remoteAddress + "]."));
}
ChannelFuture f;
if (bindAddress == null) {
f = channel.connect(remoteAddress);
}
else {
SocketAddress local = Objects.requireNonNull(bindAddress.get(), "bindAddress");
f = channel.connect(remoteAddress, local);
}
f.addListener(future -> {
if (future.isSuccess()) {
connectPromise.setSuccess();
}
else {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
Throwable cause = future.cause();
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connect attempt to [" + remoteAddress + "] failed."), cause);
}
int next = index + 1;
if (next < addresses.size()) {
connectPromise.setFailure(new RetryConnectException(addresses));
}
else {
connectPromise.setFailure(cause);
}
}
});
});
}
@SuppressWarnings("FutureReturnValueIgnored")
static Mono<Channel> doInitAndRegister(
TransportConfig config,
ChannelInitializer<Channel> channelInitializer,
boolean isDomainSocket,
EventLoop eventLoop) {
ChannelFactory<? extends Channel> channelFactory = config.connectionFactory(config.eventLoopGroup(), isDomainSocket);
Channel channel = null;
try {
channel = channelFactory.newChannel();
if (channelInitializer instanceof ServerTransport.AcceptorInitializer) {
((ServerTransport.AcceptorInitializer) channelInitializer).acceptor.enableAutoReadTask(channel);
}
channel.pipeline().addLast(channelInitializer);
setChannelOptions(channel, config.options, isDomainSocket);
setAttributes(channel, config.attrs);
}
catch (Throwable t) {
if (channel != null) {
channel.unsafe().closeForcibly();
}
return Mono.error(t);
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
channel.unsafe().register(eventLoop, monoChannelPromise);
Throwable cause = monoChannelPromise.cause();
if (cause != null) {
if (channel.isRegistered()) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
else {
channel.unsafe().closeForcibly();
}
}
return monoChannelPromise;
}
@SuppressWarnings({"unchecked", "FutureReturnValueIgnored", "try"})
static Mono<Channel> doResolveAndConnect(Channel channel, TransportConfig config,
SocketAddress remoteAddress, AddressResolverGroup<?> resolverGroup, ContextView contextView) {
try {
AddressResolver<SocketAddress> resolver;
try {
resolver = (AddressResolver<SocketAddress>) resolverGroup.getResolver(channel.eventLoop());
}
catch (Throwable t) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(t);
}
if (!contextView.isEmpty()) {
setChannelContext(channel, contextView);
}
Supplier<? extends SocketAddress> bindAddress = config.bindAddress();
if (!resolver.isSupported(remoteAddress) || resolver.isResolved(remoteAddress)) {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(Collections.singletonList(remoteAddress), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolve != null) {
clientTransportConfig.doOnResolve.accept(Connection.from(channel));
}
}
Future<List<SocketAddress>> resolveFuture;
if (resolver instanceof MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver) {
resolveFuture = ((MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver<SocketAddress>) resolver)
.resolveAll(remoteAddress, contextView);
}
else {
resolveFuture = resolver.resolveAll(remoteAddress);
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolveError != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
clientTransportConfig.doOnResolveError.accept(Connection.from(channel), future.cause());
}
});
}
if (clientTransportConfig.doAfterResolve != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.isSuccess()) {
clientTransportConfig.doAfterResolve.accept(Connection.from(channel), future.getNow().get(0));
}
});
}
}
if (resolveFuture.isDone()) {
Throwable cause = resolveFuture.cause();
if (cause != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(cause);
}
else {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(resolveFuture.getNow(), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
monoChannelPromise.tryFailure(future.cause());
}
else {
doConnect(future.getNow(), bindAddress, monoChannelPromise, 0);
}
});
return monoChannelPromise;
}
catch (Throwable t) {
return Mono.error(t);
}
}
static final class MonoChannelPromise extends Mono<Channel> implements ChannelPromise, Subscription {
final Channel channel;
CoreSubscriber<? super Channel> actual;
MonoChannelPromise(Channel channel) {
this.channel = channel;
}
@Override
public ChannelPromise addListener(GenericFutureListener<? extends Future<? super Void>> listener) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise await() {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise awaitUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void cancel() {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
return false;
}
@Override
public Throwable cause() {
Object result = this.result;
return result == SUCCESS ? null : (Throwable) result;
}
@Override
public Channel channel() {
return channel;
}
@Override
public Void get() {
throw new UnsupportedOperationException();
}
@Override
public Void get(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public Void getNow() {
throw new UnsupportedOperationException();
}
@Override
public boolean isCancellable() {
return false;
}
@Override
public boolean isCancelled() {
return false;
}
@Override
public boolean isDone() {
Object result = this.result;
return result != null;
}
@Override
public boolean isSuccess() {
Object result = this.result;
return result == SUCCESS;
}
@Override
public boolean isVoid() {
return false;
}
@Override
public ChannelPromise removeListener(GenericFutureListener<? extends Future<? super Void>> listener) {
return this;
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
return this;
}
@Override
public void request(long n) {
// noop
}
@Override
public ChannelPromise setFailure(Throwable cause) {
tryFailure(cause);
return this;
}
@Override
public ChannelPromise setSuccess() {
trySuccess(null);
return this;
}
@Override
public ChannelPromise setSuccess(Void result) {
trySuccess(null);
return this;
}
@Override
public boolean setUncancellable() {
return true;
}
@Override
public void subscribe(CoreSubscriber<? super Channel> actual) {
EventLoop eventLoop = channel.eventLoop();
if (eventLoop.inEventLoop()) {
_subscribe(actual);
}
else {
eventLoop.execute(() -> _subscribe(actual));
}
}
@Override
public ChannelPromise sync() {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise syncUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean tryFailure(Throwable cause) {
if (RESULT_UPDATER.compareAndSet(this, null, cause)) {
if (actual != null) {
actual.onError(cause);
}
return true;
}
return false;
}
@Override
public boolean trySuccess() {
return trySuccess(null);
}
@Override
public boolean trySuccess(Void result) {
if (RESULT_UPDATER.compareAndSet(this, null, SUCCESS)) {
if (actual != null) {
actual.onNext(channel);
actual.onComplete();
}
return true;
}
return false;
}
@Override
public ChannelPromise unvoid() {
return new DefaultChannelPromise(channel) {
@Override
public ChannelPromise setSuccess(Void result) {
super.trySuccess(null);
MonoChannelPromise.this.trySuccess(null);
return this;
}
@Override
public boolean trySuccess(Void result) {
super.trySuccess(null);
return MonoChannelPromise.this.trySuccess(null);
}
@Override
public ChannelPromise setFailure(Throwable cause) {
super.tryFailure(cause);
MonoChannelPromise.this.tryFailure(cause);
return this;
}
@Override
public boolean tryFailure(Throwable cause) {
super.tryFailure(cause);
return MonoChannelPromise.this.tryFailure(cause);
}
};
}
void _subscribe(CoreSubscriber<? super Channel> actual) {
this.actual = actual;
actual.onSubscribe(this);
if (isDone()) {
if (isSuccess()) {
actual.onNext(channel);
actual.onComplete();
}
else {
actual.onError(cause());
}
}
}
static final Object SUCCESS = new Object();
static final AtomicReferenceFieldUpdater<MonoChannelPromise, Object> RESULT_UPDATER =
AtomicReferenceFieldUpdater.newUpdater(MonoChannelPromise.class, Object.class, "result");
volatile Object result;
}
static final class RetryConnectException extends RuntimeException {
final List<SocketAddress> addresses;
RetryConnectException(List<SocketAddress> addresses) {
this.addresses = addresses;
}
@Override
public synchronized Throwable fillInStackTrace() {
// omit stacktrace for this exception
return this;
}
private static final long serialVersionUID = -207274323623692199L;
}
static final Logger log = Loggers.getLogger(TransportConnector.class);
static final Predicate<Throwable> RETRY_PREDICATE = t -> t instanceof RetryConnectException;
}
| /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.transport;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.AddressResolverGroup;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.GenericFutureListener;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import reactor.util.context.ContextView;
import reactor.util.retry.Retry;
import java.net.SocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.ReactorNetty.setChannelContext;
/**
* {@link TransportConnector} is a helper class that creates, initializes and registers the channel.
* It performs the actual connect operation to the remote peer or binds the channel.
*
* @author Stephane Maldini
* @author Violeta Georgieva
* @since 1.0.0
*/
public final class TransportConnector {
TransportConnector() {}
/**
* Binds a {@link Channel}.
*
* @param config the transport configuration
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param bindAddress the local address
* @param isDomainSocket true if {@link io.netty.channel.unix.DomainSocketChannel} or
* {@link io.netty.channel.unix.ServerDomainSocketChannel} is needed, false otherwise
* @return a {@link Mono} of {@link Channel}
*/
@SuppressWarnings("FutureReturnValueIgnored")
public static Mono<Channel> bind(TransportConfig config, ChannelInitializer<Channel> channelInitializer,
SocketAddress bindAddress, boolean isDomainSocket) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(bindAddress, "bindAddress");
Objects.requireNonNull(channelInitializer, "channelInitializer");
return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
.flatMap(channel -> {
MonoChannelPromise promise = new MonoChannelPromise(channel);
// "FutureReturnValueIgnored" this is deliberate
channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
return promise;
});
}
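// On bind failure the returned MonoChannelPromise fails and, in this revision, closes
// the channel in tryFailure, releasing the server channel that previously leaked
// (see issue #2843 referenced above).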
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, ContextView contextView) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), contextView);
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, eventLoop, Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop,
ContextView contextView) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(remoteAddress, "remoteAddress");
Objects.requireNonNull(resolverGroup, "resolverGroup");
Objects.requireNonNull(channelInitializer, "channelInitializer");
Objects.requireNonNull(eventLoop, "eventLoop");
Objects.requireNonNull(contextView, "contextView");
boolean isDomainAddress = remoteAddress instanceof DomainSocketAddress;
return doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(channel -> doResolveAndConnect(channel, config, remoteAddress, resolverGroup, contextView)
.onErrorResume(RetryConnectException.class,
t -> {
AtomicInteger index = new AtomicInteger(1);
return Mono.defer(() ->
doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(ch -> {
MonoChannelPromise mono = new MonoChannelPromise(ch);
doConnect(t.addresses, config.bindAddress(), mono, index.get());
return mono;
}))
.retryWhen(Retry.max(t.addresses.size() - 1)
.filter(RETRY_PREDICATE)
.doBeforeRetry(sig -> index.incrementAndGet()));
}));
}
/**
* Set the channel attributes
*
* @param channel the channel
* @param attrs the attributes
*/
@SuppressWarnings("unchecked")
static void setAttributes(Channel channel, Map<AttributeKey<?>, ?> attrs) {
for (Map.Entry<AttributeKey<?>, ?> e : attrs.entrySet()) {
channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
}
/**
* Set the channel options
*
* @param channel the channel
* @param options the options
*/
@SuppressWarnings("unchecked")
static void setChannelOptions(Channel channel, Map<ChannelOption<?>, ?> options, boolean isDomainSocket) {
for (Map.Entry<ChannelOption<?>, ?> e : options.entrySet()) {
if (isDomainSocket &&
(ChannelOption.SO_REUSEADDR.equals(e.getKey()) || ChannelOption.TCP_NODELAY.equals(e.getKey()))) {
continue;
}
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Unknown channel option '{}' for channel '{}'"), e.getKey(), channel);
}
}
}
catch (Throwable t) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Failed to set channel option '{}' with value '{}' for channel '{}'"),
e.getKey(), e.getValue(), channel, t);
}
}
}
}
static void doConnect(
List<SocketAddress> addresses,
@Nullable Supplier<? extends SocketAddress> bindAddress,
MonoChannelPromise connectPromise,
int index) {
Channel channel = connectPromise.channel();
channel.eventLoop().execute(() -> {
SocketAddress remoteAddress = addresses.get(index);
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connecting to [" + remoteAddress + "]."));
}
ChannelFuture f;
if (bindAddress == null) {
f = channel.connect(remoteAddress);
}
else {
SocketAddress local = Objects.requireNonNull(bindAddress.get(), "bindAddress");
f = channel.connect(remoteAddress, local);
}
f.addListener(future -> {
if (future.isSuccess()) {
connectPromise.setSuccess();
}
else {
Throwable cause = future.cause();
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connect attempt to [" + remoteAddress + "] failed."), cause);
}
int next = index + 1;
if (next < addresses.size()) {
connectPromise.setFailure(new RetryConnectException(addresses));
}
else {
connectPromise.setFailure(cause);
}
}
});
});
}
@SuppressWarnings("FutureReturnValueIgnored")
static Mono<Channel> doInitAndRegister(
TransportConfig config,
ChannelInitializer<Channel> channelInitializer,
boolean isDomainSocket,
EventLoop eventLoop) {
ChannelFactory<? extends Channel> channelFactory = config.connectionFactory(config.eventLoopGroup(), isDomainSocket);
Channel channel = null;
try {
channel = channelFactory.newChannel();
if (channelInitializer instanceof ServerTransport.AcceptorInitializer) {
((ServerTransport.AcceptorInitializer) channelInitializer).acceptor.enableAutoReadTask(channel);
}
channel.pipeline().addLast(channelInitializer);
setChannelOptions(channel, config.options, isDomainSocket);
setAttributes(channel, config.attrs);
}
catch (Throwable t) {
if (channel != null) {
channel.unsafe().closeForcibly();
}
return Mono.error(t);
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
channel.unsafe().register(eventLoop, monoChannelPromise);
return monoChannelPromise;
}
@SuppressWarnings({"unchecked", "FutureReturnValueIgnored", "try"})
static Mono<Channel> doResolveAndConnect(Channel channel, TransportConfig config,
SocketAddress remoteAddress, AddressResolverGroup<?> resolverGroup, ContextView contextView) {
try {
AddressResolver<SocketAddress> resolver;
try {
resolver = (AddressResolver<SocketAddress>) resolverGroup.getResolver(channel.eventLoop());
}
catch (Throwable t) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(t);
}
if (!contextView.isEmpty()) {
setChannelContext(channel, contextView);
}
Supplier<? extends SocketAddress> bindAddress = config.bindAddress();
if (!resolver.isSupported(remoteAddress) || resolver.isResolved(remoteAddress)) {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(Collections.singletonList(remoteAddress), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolve != null) {
clientTransportConfig.doOnResolve.accept(Connection.from(channel));
}
}
Future<List<SocketAddress>> resolveFuture;
if (resolver instanceof MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver) {
resolveFuture = ((MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver<SocketAddress>) resolver)
.resolveAll(remoteAddress, contextView);
}
else {
resolveFuture = resolver.resolveAll(remoteAddress);
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolveError != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
clientTransportConfig.doOnResolveError.accept(Connection.from(channel), future.cause());
}
});
}
if (clientTransportConfig.doAfterResolve != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.isSuccess()) {
clientTransportConfig.doAfterResolve.accept(Connection.from(channel), future.getNow().get(0));
}
});
}
}
if (resolveFuture.isDone()) {
Throwable cause = resolveFuture.cause();
if (cause != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(cause);
}
else {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(resolveFuture.getNow(), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
monoChannelPromise.tryFailure(future.cause());
}
else {
doConnect(future.getNow(), bindAddress, monoChannelPromise, 0);
}
});
return monoChannelPromise;
}
catch (Throwable t) {
return Mono.error(t);
}
}
static final class MonoChannelPromise extends Mono<Channel> implements ChannelPromise, Subscription {
final Channel channel;
CoreSubscriber<? super Channel> actual;
MonoChannelPromise(Channel channel) {
this.channel = channel;
}
@Override
public ChannelPromise addListener(GenericFutureListener<? extends Future<? super Void>> listener) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise await() {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise awaitUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void cancel() {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
return false;
}
@Override
public Throwable cause() {
Object result = this.result;
return result == SUCCESS ? null : (Throwable) result;
}
@Override
public Channel channel() {
return channel;
}
@Override
public Void get() {
throw new UnsupportedOperationException();
}
@Override
public Void get(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public Void getNow() {
throw new UnsupportedOperationException();
}
@Override
public boolean isCancellable() {
return false;
}
@Override
public boolean isCancelled() {
return false;
}
@Override
public boolean isDone() {
Object result = this.result;
return result != null;
}
@Override
public boolean isSuccess() {
Object result = this.result;
return result == SUCCESS;
}
@Override
public boolean isVoid() {
return false;
}
@Override
public ChannelPromise removeListener(GenericFutureListener<? extends Future<? super Void>> listener) {
return this;
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
return this;
}
@Override
public void request(long n) {
// noop
}
@Override
public ChannelPromise setFailure(Throwable cause) {
tryFailure(cause);
return this;
}
@Override
public ChannelPromise setSuccess() {
trySuccess(null);
return this;
}
@Override
public ChannelPromise setSuccess(Void result) {
trySuccess(null);
return this;
}
@Override
public boolean setUncancellable() {
return true;
}
@Override
public void subscribe(CoreSubscriber<? super Channel> actual) {
EventLoop eventLoop = channel.eventLoop();
if (eventLoop.inEventLoop()) {
_subscribe(actual);
}
else {
eventLoop.execute(() -> _subscribe(actual));
}
}
@Override
public ChannelPromise sync() {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise syncUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public boolean tryFailure(Throwable cause) {
if (RESULT_UPDATER.compareAndSet(this, null, cause)) {
if (channel.isRegistered()) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
else {
channel.unsafe().closeForcibly();
}
if (actual != null) {
actual.onError(cause);
}
return true;
}
return false;
}
@Override
public boolean trySuccess() {
return trySuccess(null);
}
@Override
public boolean trySuccess(Void result) {
if (RESULT_UPDATER.compareAndSet(this, null, SUCCESS)) {
if (actual != null) {
actual.onNext(channel);
actual.onComplete();
}
return true;
}
return false;
}
@Override
public ChannelPromise unvoid() {
return new DefaultChannelPromise(channel) {
@Override
public ChannelPromise setSuccess(Void result) {
super.trySuccess(null);
MonoChannelPromise.this.trySuccess(null);
return this;
}
@Override
public boolean trySuccess(Void result) {
super.trySuccess(null);
return MonoChannelPromise.this.trySuccess(null);
}
@Override
public ChannelPromise setFailure(Throwable cause) {
super.tryFailure(cause);
MonoChannelPromise.this.tryFailure(cause);
return this;
}
@Override
public boolean tryFailure(Throwable cause) {
super.tryFailure(cause);
return MonoChannelPromise.this.tryFailure(cause);
}
};
}
void _subscribe(CoreSubscriber<? super Channel> actual) {
this.actual = actual;
actual.onSubscribe(this);
if (isDone()) {
if (isSuccess()) {
actual.onNext(channel);
actual.onComplete();
}
else {
actual.onError(cause());
}
}
}
static final Object SUCCESS = new Object();
static final AtomicReferenceFieldUpdater<MonoChannelPromise, Object> RESULT_UPDATER =
AtomicReferenceFieldUpdater.newUpdater(MonoChannelPromise.class, Object.class, "result");
volatile Object result;
}
static final class RetryConnectException extends RuntimeException {
final List<SocketAddress> addresses;
RetryConnectException(List<SocketAddress> addresses) {
this.addresses = addresses;
}
@Override
public synchronized Throwable fillInStackTrace() {
// omit stacktrace for this exception
return this;
}
private static final long serialVersionUID = -207274323623692199L;
}
static final Logger log = Loggers.getLogger(TransportConnector.class);
static final Predicate<Throwable> RETRY_PREDICATE = t -> t instanceof RetryConnectException;
}
| SgtSilvio | c48f6a1bdb3a99b5ba8580eb0d2a19bca55be9d2 | 625633ec2abe82c0a07166213d0eea9c6c6e022b | We can do it without a flag, but you need to handle it a bit differently. See https://github.com/reactor/reactor-netty/blob/main/reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java#L297-L309 | violetagg | 3
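A minimal sketch of the flag-free pattern the reviewer points to, assuming the intent of the referenced lines is to branch on the channel's registration state when failing the promise; the class and method names below are illustrative only, not reactor-netty API:

import io.netty.channel.Channel;
import io.netty.channel.ChannelPromise;

// Hedged sketch only: fail the promise and close the channel without a
// separate "needs close" flag, picking the close path by registration state.
final class CloseOnFailureSketch {
	private CloseOnFailureSketch() {}

	@SuppressWarnings("FutureReturnValueIgnored")
	static void failAndClose(Channel channel, ChannelPromise promise, Throwable cause) {
		if (channel.isRegistered()) {
			// A registered channel has an event loop, so the normal close path works.
			channel.close();
		}
		else {
			// No event loop is attached yet; force the close.
			channel.unsafe().closeForcibly();
		}
		promise.tryFailure(cause);
	}
}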
reactor/reactor-netty | 2844 | Fix memory leak of HTTP server on bind failure | Fix issue https://github.com/reactor/reactor-netty/issues/2843 by closing the channel on bind (and other) exceptions | null | 2023-06-27 20:02:13+00:00 | 2023-06-29 08:10:58+00:00 | reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java | /*
* Copyright (c) 2020-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.transport;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.AddressResolverGroup;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.GenericFutureListener;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import reactor.util.context.ContextView;
import reactor.util.retry.Retry;
import java.net.SocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.ReactorNetty.setChannelContext;
/**
* {@link TransportConnector} is a helper class that creates, initializes and registers the channel.
* It performs the actual connect operation to the remote peer or binds the channel.
*
* @author Stephane Maldini
* @author Violeta Georgieva
* @since 1.0.0
*/
public final class TransportConnector {
TransportConnector() {}
/**
* Binds a {@link Channel}.
*
* @param config the transport configuration
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param bindAddress the local address
* @param isDomainSocket true if {@link io.netty.channel.unix.DomainSocketChannel} or
* {@link io.netty.channel.unix.ServerDomainSocketChannel} is needed, false otherwise
* @return a {@link Mono} of {@link Channel}
*/
@SuppressWarnings("FutureReturnValueIgnored")
public static Mono<Channel> bind(TransportConfig config, ChannelInitializer<Channel> channelInitializer,
SocketAddress bindAddress, boolean isDomainSocket) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(bindAddress, "bindAddress");
Objects.requireNonNull(channelInitializer, "channelInitializer");
return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
.flatMap(channel -> {
MonoChannelPromise promise = new MonoChannelPromise(channel);
// "FutureReturnValueIgnored" this is deliberate
channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
return promise;
});
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, ContextView contextView) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), contextView);
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, eventLoop, Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop,
ContextView contextView) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(remoteAddress, "remoteAddress");
Objects.requireNonNull(resolverGroup, "resolverGroup");
Objects.requireNonNull(channelInitializer, "channelInitializer");
Objects.requireNonNull(eventLoop, "eventLoop");
Objects.requireNonNull(contextView, "contextView");
boolean isDomainAddress = remoteAddress instanceof DomainSocketAddress;
return doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(channel -> doResolveAndConnect(channel, config, remoteAddress, resolverGroup, contextView)
.onErrorResume(RetryConnectException.class,
t -> {
AtomicInteger index = new AtomicInteger(1);
return Mono.defer(() ->
doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(ch -> {
MonoChannelPromise mono = new MonoChannelPromise(ch);
doConnect(t.addresses, config.bindAddress(), mono, index.get());
return mono;
}))
.retryWhen(Retry.max(t.addresses.size() - 1)
.filter(RETRY_PREDICATE)
.doBeforeRetry(sig -> index.incrementAndGet()));
}));
}
/**
* Set the channel attributes
*
* @param channel the channel
* @param attrs the attributes
*/
@SuppressWarnings("unchecked")
static void setAttributes(Channel channel, Map<AttributeKey<?>, ?> attrs) {
for (Map.Entry<AttributeKey<?>, ?> e : attrs.entrySet()) {
channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
}
/**
* Set the channel options
*
* @param channel the channel
* @param options the options
*/
@SuppressWarnings("unchecked")
static void setChannelOptions(Channel channel, Map<ChannelOption<?>, ?> options, boolean isDomainSocket) {
for (Map.Entry<ChannelOption<?>, ?> e : options.entrySet()) {
if (isDomainSocket &&
(ChannelOption.SO_REUSEADDR.equals(e.getKey()) || ChannelOption.TCP_NODELAY.equals(e.getKey()))) {
continue;
}
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Unknown channel option '{}' for channel '{}'"), e.getKey(), channel);
}
}
}
catch (Throwable t) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Failed to set channel option '{}' with value '{}' for channel '{}'"),
e.getKey(), e.getValue(), channel, t);
}
}
}
}
@SuppressWarnings("FutureReturnValueIgnored")
static void doConnect(
List<SocketAddress> addresses,
@Nullable Supplier<? extends SocketAddress> bindAddress,
ChannelPromise connectPromise,
int index) {
Channel channel = connectPromise.channel();
channel.eventLoop().execute(() -> {
SocketAddress remoteAddress = addresses.get(index);
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connecting to [" + remoteAddress + "]."));
}
ChannelFuture f;
if (bindAddress == null) {
f = channel.connect(remoteAddress);
}
else {
SocketAddress local = Objects.requireNonNull(bindAddress.get(), "bindAddress");
f = channel.connect(remoteAddress, local);
}
f.addListener(future -> {
if (future.isSuccess()) {
connectPromise.setSuccess();
}
else {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
Throwable cause = future.cause();
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connect attempt to [" + remoteAddress + "] failed."), cause);
}
int next = index + 1;
if (next < addresses.size()) {
connectPromise.setFailure(new RetryConnectException(addresses));
}
else {
connectPromise.setFailure(cause);
}
}
});
});
}
@SuppressWarnings("FutureReturnValueIgnored")
static Mono<Channel> doInitAndRegister(
TransportConfig config,
ChannelInitializer<Channel> channelInitializer,
boolean isDomainSocket,
EventLoop eventLoop) {
ChannelFactory<? extends Channel> channelFactory = config.connectionFactory(config.eventLoopGroup(), isDomainSocket);
Channel channel = null;
try {
channel = channelFactory.newChannel();
if (channelInitializer instanceof ServerTransport.AcceptorInitializer) {
((ServerTransport.AcceptorInitializer) channelInitializer).acceptor.enableAutoReadTask(channel);
}
channel.pipeline().addLast(channelInitializer);
setChannelOptions(channel, config.options, isDomainSocket);
setAttributes(channel, config.attrs);
}
catch (Throwable t) {
if (channel != null) {
channel.unsafe().closeForcibly();
}
return Mono.error(t);
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
channel.unsafe().register(eventLoop, monoChannelPromise);
Throwable cause = monoChannelPromise.cause();
if (cause != null) {
if (channel.isRegistered()) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
else {
channel.unsafe().closeForcibly();
}
}
return monoChannelPromise;
}
@SuppressWarnings({"unchecked", "FutureReturnValueIgnored", "try"})
static Mono<Channel> doResolveAndConnect(Channel channel, TransportConfig config,
SocketAddress remoteAddress, AddressResolverGroup<?> resolverGroup, ContextView contextView) {
try {
AddressResolver<SocketAddress> resolver;
try {
resolver = (AddressResolver<SocketAddress>) resolverGroup.getResolver(channel.eventLoop());
}
catch (Throwable t) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(t);
}
if (!contextView.isEmpty()) {
setChannelContext(channel, contextView);
}
Supplier<? extends SocketAddress> bindAddress = config.bindAddress();
if (!resolver.isSupported(remoteAddress) || resolver.isResolved(remoteAddress)) {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(Collections.singletonList(remoteAddress), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolve != null) {
clientTransportConfig.doOnResolve.accept(Connection.from(channel));
}
}
Future<List<SocketAddress>> resolveFuture;
if (resolver instanceof MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver) {
resolveFuture = ((MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver<SocketAddress>) resolver)
.resolveAll(remoteAddress, contextView);
}
else {
resolveFuture = resolver.resolveAll(remoteAddress);
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolveError != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
clientTransportConfig.doOnResolveError.accept(Connection.from(channel), future.cause());
}
});
}
if (clientTransportConfig.doAfterResolve != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.isSuccess()) {
clientTransportConfig.doAfterResolve.accept(Connection.from(channel), future.getNow().get(0));
}
});
}
}
if (resolveFuture.isDone()) {
Throwable cause = resolveFuture.cause();
if (cause != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(cause);
}
else {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(resolveFuture.getNow(), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
monoChannelPromise.tryFailure(future.cause());
}
else {
doConnect(future.getNow(), bindAddress, monoChannelPromise, 0);
}
});
return monoChannelPromise;
}
catch (Throwable t) {
return Mono.error(t);
}
}
static final class MonoChannelPromise extends Mono<Channel> implements ChannelPromise, Subscription {
final Channel channel;
CoreSubscriber<? super Channel> actual;
MonoChannelPromise(Channel channel) {
this.channel = channel;
}
@Override
public ChannelPromise addListener(GenericFutureListener<? extends Future<? super Void>> listener) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise await() {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise awaitUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void cancel() {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
return false;
}
@Override
public Throwable cause() {
Object result = this.result;
return result == SUCCESS ? null : (Throwable) result;
}
@Override
public Channel channel() {
return channel;
}
@Override
public Void get() {
throw new UnsupportedOperationException();
}
@Override
public Void get(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public Void getNow() {
throw new UnsupportedOperationException();
}
@Override
public boolean isCancellable() {
return false;
}
@Override
public boolean isCancelled() {
return false;
}
@Override
public boolean isDone() {
Object result = this.result;
return result != null;
}
@Override
public boolean isSuccess() {
Object result = this.result;
return result == SUCCESS;
}
@Override
public boolean isVoid() {
return false;
}
@Override
public ChannelPromise removeListener(GenericFutureListener<? extends Future<? super Void>> listener) {
return this;
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
return this;
}
@Override
public void request(long n) {
// noop
}
@Override
public ChannelPromise setFailure(Throwable cause) {
tryFailure(cause);
return this;
}
@Override
public ChannelPromise setSuccess() {
trySuccess(null);
return this;
}
@Override
public ChannelPromise setSuccess(Void result) {
trySuccess(null);
return this;
}
@Override
public boolean setUncancellable() {
return true;
}
@Override
public void subscribe(CoreSubscriber<? super Channel> actual) {
EventLoop eventLoop = channel.eventLoop();
if (eventLoop.inEventLoop()) {
_subscribe(actual);
}
else {
eventLoop.execute(() -> _subscribe(actual));
}
}
@Override
public ChannelPromise sync() {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise syncUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean tryFailure(Throwable cause) {
if (RESULT_UPDATER.compareAndSet(this, null, cause)) {
if (actual != null) {
actual.onError(cause);
}
return true;
}
return false;
}
@Override
public boolean trySuccess() {
return trySuccess(null);
}
@Override
public boolean trySuccess(Void result) {
if (RESULT_UPDATER.compareAndSet(this, null, SUCCESS)) {
if (actual != null) {
actual.onNext(channel);
actual.onComplete();
}
return true;
}
return false;
}
@Override
public ChannelPromise unvoid() {
return new DefaultChannelPromise(channel) {
@Override
public ChannelPromise setSuccess(Void result) {
super.trySuccess(null);
MonoChannelPromise.this.trySuccess(null);
return this;
}
@Override
public boolean trySuccess(Void result) {
super.trySuccess(null);
return MonoChannelPromise.this.trySuccess(null);
}
@Override
public ChannelPromise setFailure(Throwable cause) {
super.tryFailure(cause);
MonoChannelPromise.this.tryFailure(cause);
return this;
}
@Override
public boolean tryFailure(Throwable cause) {
super.tryFailure(cause);
return MonoChannelPromise.this.tryFailure(cause);
}
};
}
void _subscribe(CoreSubscriber<? super Channel> actual) {
this.actual = actual;
actual.onSubscribe(this);
if (isDone()) {
if (isSuccess()) {
actual.onNext(channel);
actual.onComplete();
}
else {
actual.onError(cause());
}
}
}
static final Object SUCCESS = new Object();
static final AtomicReferenceFieldUpdater<MonoChannelPromise, Object> RESULT_UPDATER =
AtomicReferenceFieldUpdater.newUpdater(MonoChannelPromise.class, Object.class, "result");
volatile Object result;
}
static final class RetryConnectException extends RuntimeException {
final List<SocketAddress> addresses;
RetryConnectException(List<SocketAddress> addresses) {
this.addresses = addresses;
}
@Override
public synchronized Throwable fillInStackTrace() {
// omit stacktrace for this exception
return this;
}
private static final long serialVersionUID = -207274323623692199L;
}
static final Logger log = Loggers.getLogger(TransportConnector.class);
static final Predicate<Throwable> RETRY_PREDICATE = t -> t instanceof RetryConnectException;
}
| /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.transport;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.AddressResolverGroup;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.GenericFutureListener;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import reactor.util.context.ContextView;
import reactor.util.retry.Retry;
import java.net.SocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.ReactorNetty.setChannelContext;
/**
* {@link TransportConnector} is a helper class that creates, initializes and registers the channel.
* It performs the actual connect operation to the remote peer or binds the channel.
*
* @author Stephane Maldini
* @author Violeta Georgieva
* @since 1.0.0
*/
public final class TransportConnector {
TransportConnector() {}
/**
* Binds a {@link Channel}.
*
* @param config the transport configuration
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param bindAddress the local address
* @param isDomainSocket true if {@link io.netty.channel.unix.DomainSocketChannel} or
* {@link io.netty.channel.unix.ServerDomainSocketChannel} is needed, false otherwise
* @return a {@link Mono} of {@link Channel}
*/
@SuppressWarnings("FutureReturnValueIgnored")
public static Mono<Channel> bind(TransportConfig config, ChannelInitializer<Channel> channelInitializer,
SocketAddress bindAddress, boolean isDomainSocket) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(bindAddress, "bindAddress");
Objects.requireNonNull(channelInitializer, "channelInitializer");
return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
.flatMap(channel -> {
MonoChannelPromise promise = new MonoChannelPromise(channel);
// "FutureReturnValueIgnored" this is deliberate
channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
return promise;
});
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, ContextView contextView) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), contextView);
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, eventLoop, Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop,
ContextView contextView) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(remoteAddress, "remoteAddress");
Objects.requireNonNull(resolverGroup, "resolverGroup");
Objects.requireNonNull(channelInitializer, "channelInitializer");
Objects.requireNonNull(eventLoop, "eventLoop");
Objects.requireNonNull(contextView, "contextView");
boolean isDomainAddress = remoteAddress instanceof DomainSocketAddress;
return doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(channel -> doResolveAndConnect(channel, config, remoteAddress, resolverGroup, contextView)
.onErrorResume(RetryConnectException.class,
t -> {
AtomicInteger index = new AtomicInteger(1);
return Mono.defer(() ->
doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(ch -> {
MonoChannelPromise mono = new MonoChannelPromise(ch);
doConnect(t.addresses, config.bindAddress(), mono, index.get());
return mono;
}))
.retryWhen(Retry.max(t.addresses.size() - 1)
.filter(RETRY_PREDICATE)
.doBeforeRetry(sig -> index.incrementAndGet()));
}));
}
/**
* Set the channel attributes
*
* @param channel the channel
* @param attrs the attributes
*/
@SuppressWarnings("unchecked")
static void setAttributes(Channel channel, Map<AttributeKey<?>, ?> attrs) {
for (Map.Entry<AttributeKey<?>, ?> e : attrs.entrySet()) {
channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
}
/**
* Set the channel options
*
* @param channel the channel
* @param options the options
*/
@SuppressWarnings("unchecked")
static void setChannelOptions(Channel channel, Map<ChannelOption<?>, ?> options, boolean isDomainSocket) {
for (Map.Entry<ChannelOption<?>, ?> e : options.entrySet()) {
if (isDomainSocket &&
(ChannelOption.SO_REUSEADDR.equals(e.getKey()) || ChannelOption.TCP_NODELAY.equals(e.getKey()))) {
continue;
}
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Unknown channel option '{}' for channel '{}'"), e.getKey(), channel);
}
}
}
catch (Throwable t) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Failed to set channel option '{}' with value '{}' for channel '{}'"),
e.getKey(), e.getValue(), channel, t);
}
}
}
}
static void doConnect(
List<SocketAddress> addresses,
@Nullable Supplier<? extends SocketAddress> bindAddress,
MonoChannelPromise connectPromise,
int index) {
Channel channel = connectPromise.channel();
channel.eventLoop().execute(() -> {
SocketAddress remoteAddress = addresses.get(index);
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connecting to [" + remoteAddress + "]."));
}
ChannelFuture f;
if (bindAddress == null) {
f = channel.connect(remoteAddress);
}
else {
SocketAddress local = Objects.requireNonNull(bindAddress.get(), "bindAddress");
f = channel.connect(remoteAddress, local);
}
f.addListener(future -> {
if (future.isSuccess()) {
connectPromise.setSuccess();
}
else {
Throwable cause = future.cause();
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connect attempt to [" + remoteAddress + "] failed."), cause);
}
int next = index + 1;
if (next < addresses.size()) {
connectPromise.setFailure(new RetryConnectException(addresses));
}
else {
connectPromise.setFailure(cause);
}
}
});
});
}
@SuppressWarnings("FutureReturnValueIgnored")
static Mono<Channel> doInitAndRegister(
TransportConfig config,
ChannelInitializer<Channel> channelInitializer,
boolean isDomainSocket,
EventLoop eventLoop) {
ChannelFactory<? extends Channel> channelFactory = config.connectionFactory(config.eventLoopGroup(), isDomainSocket);
Channel channel = null;
try {
channel = channelFactory.newChannel();
if (channelInitializer instanceof ServerTransport.AcceptorInitializer) {
((ServerTransport.AcceptorInitializer) channelInitializer).acceptor.enableAutoReadTask(channel);
}
channel.pipeline().addLast(channelInitializer);
setChannelOptions(channel, config.options, isDomainSocket);
setAttributes(channel, config.attrs);
}
catch (Throwable t) {
if (channel != null) {
channel.unsafe().closeForcibly();
}
return Mono.error(t);
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
channel.unsafe().register(eventLoop, monoChannelPromise);
return monoChannelPromise;
}
@SuppressWarnings({"unchecked", "FutureReturnValueIgnored", "try"})
static Mono<Channel> doResolveAndConnect(Channel channel, TransportConfig config,
SocketAddress remoteAddress, AddressResolverGroup<?> resolverGroup, ContextView contextView) {
try {
AddressResolver<SocketAddress> resolver;
try {
resolver = (AddressResolver<SocketAddress>) resolverGroup.getResolver(channel.eventLoop());
}
catch (Throwable t) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(t);
}
if (!contextView.isEmpty()) {
setChannelContext(channel, contextView);
}
Supplier<? extends SocketAddress> bindAddress = config.bindAddress();
if (!resolver.isSupported(remoteAddress) || resolver.isResolved(remoteAddress)) {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(Collections.singletonList(remoteAddress), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolve != null) {
clientTransportConfig.doOnResolve.accept(Connection.from(channel));
}
}
Future<List<SocketAddress>> resolveFuture;
if (resolver instanceof MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver) {
resolveFuture = ((MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver<SocketAddress>) resolver)
.resolveAll(remoteAddress, contextView);
}
else {
resolveFuture = resolver.resolveAll(remoteAddress);
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolveError != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
clientTransportConfig.doOnResolveError.accept(Connection.from(channel), future.cause());
}
});
}
if (clientTransportConfig.doAfterResolve != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.isSuccess()) {
clientTransportConfig.doAfterResolve.accept(Connection.from(channel), future.getNow().get(0));
}
});
}
}
if (resolveFuture.isDone()) {
Throwable cause = resolveFuture.cause();
if (cause != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(cause);
}
else {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(resolveFuture.getNow(), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
monoChannelPromise.tryFailure(future.cause());
}
else {
doConnect(future.getNow(), bindAddress, monoChannelPromise, 0);
}
});
return monoChannelPromise;
}
catch (Throwable t) {
return Mono.error(t);
}
}
static final class MonoChannelPromise extends Mono<Channel> implements ChannelPromise, Subscription {
final Channel channel;
CoreSubscriber<? super Channel> actual;
MonoChannelPromise(Channel channel) {
this.channel = channel;
}
@Override
public ChannelPromise addListener(GenericFutureListener<? extends Future<? super Void>> listener) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise await() {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise awaitUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void cancel() {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
return false;
}
@Override
public Throwable cause() {
Object result = this.result;
return result == SUCCESS ? null : (Throwable) result;
}
@Override
public Channel channel() {
return channel;
}
@Override
public Void get() {
throw new UnsupportedOperationException();
}
@Override
public Void get(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public Void getNow() {
throw new UnsupportedOperationException();
}
@Override
public boolean isCancellable() {
return false;
}
@Override
public boolean isCancelled() {
return false;
}
@Override
public boolean isDone() {
Object result = this.result;
return result != null;
}
@Override
public boolean isSuccess() {
Object result = this.result;
return result == SUCCESS;
}
@Override
public boolean isVoid() {
return false;
}
@Override
public ChannelPromise removeListener(GenericFutureListener<? extends Future<? super Void>> listener) {
return this;
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
return this;
}
@Override
public void request(long n) {
// noop
}
@Override
public ChannelPromise setFailure(Throwable cause) {
tryFailure(cause);
return this;
}
@Override
public ChannelPromise setSuccess() {
trySuccess(null);
return this;
}
@Override
public ChannelPromise setSuccess(Void result) {
trySuccess(null);
return this;
}
@Override
public boolean setUncancellable() {
return true;
}
@Override
public void subscribe(CoreSubscriber<? super Channel> actual) {
EventLoop eventLoop = channel.eventLoop();
if (eventLoop.inEventLoop()) {
_subscribe(actual);
}
else {
eventLoop.execute(() -> _subscribe(actual));
}
}
@Override
public ChannelPromise sync() {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise syncUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public boolean tryFailure(Throwable cause) {
if (RESULT_UPDATER.compareAndSet(this, null, cause)) {
if (channel.isRegistered()) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
else {
channel.unsafe().closeForcibly();
}
if (actual != null) {
actual.onError(cause);
}
return true;
}
return false;
}
@Override
public boolean trySuccess() {
return trySuccess(null);
}
@Override
public boolean trySuccess(Void result) {
if (RESULT_UPDATER.compareAndSet(this, null, SUCCESS)) {
if (actual != null) {
actual.onNext(channel);
actual.onComplete();
}
return true;
}
return false;
}
@Override
public ChannelPromise unvoid() {
return new DefaultChannelPromise(channel) {
@Override
public ChannelPromise setSuccess(Void result) {
super.trySuccess(null);
MonoChannelPromise.this.trySuccess(null);
return this;
}
@Override
public boolean trySuccess(Void result) {
super.trySuccess(null);
return MonoChannelPromise.this.trySuccess(null);
}
@Override
public ChannelPromise setFailure(Throwable cause) {
super.tryFailure(cause);
MonoChannelPromise.this.tryFailure(cause);
return this;
}
@Override
public boolean tryFailure(Throwable cause) {
super.tryFailure(cause);
return MonoChannelPromise.this.tryFailure(cause);
}
};
}
void _subscribe(CoreSubscriber<? super Channel> actual) {
this.actual = actual;
actual.onSubscribe(this);
if (isDone()) {
if (isSuccess()) {
actual.onNext(channel);
actual.onComplete();
}
else {
actual.onError(cause());
}
}
}
static final Object SUCCESS = new Object();
static final AtomicReferenceFieldUpdater<MonoChannelPromise, Object> RESULT_UPDATER =
AtomicReferenceFieldUpdater.newUpdater(MonoChannelPromise.class, Object.class, "result");
volatile Object result;
}
static final class RetryConnectException extends RuntimeException {
final List<SocketAddress> addresses;
RetryConnectException(List<SocketAddress> addresses) {
this.addresses = addresses;
}
@Override
public synchronized Throwable fillInStackTrace() {
// omit stacktrace for this exception
return this;
}
private static final long serialVersionUID = -207274323623692199L;
}
static final Logger log = Loggers.getLogger(TransportConnector.class);
static final Predicate<Throwable> RETRY_PREDICATE = t -> t instanceof RetryConnectException;
}
| SgtSilvio | c48f6a1bdb3a99b5ba8580eb0d2a19bca55be9d2 | 625633ec2abe82c0a07166213d0eea9c6c6e022b | I unified the closing of the channel in one place now; I hope this is what you intended. | SgtSilvio | 4
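To make the unified close concrete, here is a hedged sketch of the scenario from issue #2843 that this change addresses (the class name and port are illustrative assumptions): the second bind fails with an address-already-in-use error, and with the close now centralized in MonoChannelPromise.tryFailure, the channel created for the failed attempt is no longer leaked.

import reactor.netty.DisposableServer;
import reactor.netty.http.server.HttpServer;

// Hedged sketch only: reproduce a bind failure and rely on the unified
// failure path to close the second, never-bound channel.
public class BindFailureSketch {
	public static void main(String[] args) {
		DisposableServer first = HttpServer.create().port(8080).bindNow();
		try {
			// Fails because the port is taken; before this PR the channel
			// created for this attempt could remain open.
			HttpServer.create().port(8080).bindNow();
		}
		catch (Exception expected) {
			// Expected: the bind failure surfaces here.
		}
		first.disposeNow();
	}
}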
reactor/reactor-netty | 2844 | Fix memory leak of HTTP server on bind failure | Fix issue https://github.com/reactor/reactor-netty/issues/2843 by closing the channel on bind (and other) exceptions | null | 2023-06-27 20:02:13+00:00 | 2023-06-29 08:10:58+00:00 | reactor-netty-core/src/main/java/reactor/netty/transport/TransportConnector.java | /*
* Copyright (c) 2020-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.transport;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.AddressResolverGroup;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.GenericFutureListener;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import reactor.util.context.ContextView;
import reactor.util.retry.Retry;
import java.net.SocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.ReactorNetty.setChannelContext;
/**
* {@link TransportConnector} is a helper class that creates, initializes and registers the channel.
* It performs the actual connect operation to the remote peer or binds the channel.
*
* @author Stephane Maldini
* @author Violeta Georgieva
* @since 1.0.0
*/
public final class TransportConnector {
TransportConnector() {}
/**
* Binds a {@link Channel}.
*
* @param config the transport configuration
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param bindAddress the local address
* @param isDomainSocket true if {@link io.netty.channel.unix.DomainSocketChannel} or
* {@link io.netty.channel.unix.ServerDomainSocketChannel} is needed, false otherwise
* @return a {@link Mono} of {@link Channel}
*/
@SuppressWarnings("FutureReturnValueIgnored")
public static Mono<Channel> bind(TransportConfig config, ChannelInitializer<Channel> channelInitializer,
SocketAddress bindAddress, boolean isDomainSocket) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(bindAddress, "bindAddress");
Objects.requireNonNull(channelInitializer, "channelInitializer");
return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
.flatMap(channel -> {
MonoChannelPromise promise = new MonoChannelPromise(channel);
// "FutureReturnValueIgnored" this is deliberate
channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
return promise;
});
}
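	/*
	 * Illustrative sketch (editorial, not part of the original source; the address is an
	 * assumption): higher-level transports delegate their bind to this helper, roughly:
	 *
	 *   TransportConnector.bind(config, initializer, new InetSocketAddress(8080), false)
	 *                     .subscribe(ch -> log.debug("Bound {}", ch));
	 */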
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, ContextView contextView) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), contextView);
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, eventLoop, Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop,
ContextView contextView) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(remoteAddress, "remoteAddress");
Objects.requireNonNull(resolverGroup, "resolverGroup");
Objects.requireNonNull(channelInitializer, "channelInitializer");
Objects.requireNonNull(eventLoop, "eventLoop");
Objects.requireNonNull(contextView, "contextView");
boolean isDomainAddress = remoteAddress instanceof DomainSocketAddress;
return doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(channel -> doResolveAndConnect(channel, config, remoteAddress, resolverGroup, contextView)
.onErrorResume(RetryConnectException.class,
t -> {
AtomicInteger index = new AtomicInteger(1);
return Mono.defer(() ->
doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(ch -> {
MonoChannelPromise mono = new MonoChannelPromise(ch);
doConnect(t.addresses, config.bindAddress(), mono, index.get());
return mono;
}))
.retryWhen(Retry.max(t.addresses.size() - 1)
.filter(RETRY_PREDICATE)
.doBeforeRetry(sig -> index.incrementAndGet()));
}));
}
/**
* Set the channel attributes
*
* @param channel the channel
* @param attrs the attributes
*/
@SuppressWarnings("unchecked")
static void setAttributes(Channel channel, Map<AttributeKey<?>, ?> attrs) {
for (Map.Entry<AttributeKey<?>, ?> e : attrs.entrySet()) {
channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
}
/**
* Set the channel options
*
* @param channel the channel
* @param options the options
*/
@SuppressWarnings("unchecked")
static void setChannelOptions(Channel channel, Map<ChannelOption<?>, ?> options, boolean isDomainSocket) {
for (Map.Entry<ChannelOption<?>, ?> e : options.entrySet()) {
if (isDomainSocket &&
(ChannelOption.SO_REUSEADDR.equals(e.getKey()) || ChannelOption.TCP_NODELAY.equals(e.getKey()))) {
continue;
}
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Unknown channel option '{}' for channel '{}'"), e.getKey(), channel);
}
}
}
catch (Throwable t) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Failed to set channel option '{}' with value '{}' for channel '{}'"),
e.getKey(), e.getValue(), channel, t);
}
}
}
}
@SuppressWarnings("FutureReturnValueIgnored")
static void doConnect(
List<SocketAddress> addresses,
@Nullable Supplier<? extends SocketAddress> bindAddress,
ChannelPromise connectPromise,
int index) {
Channel channel = connectPromise.channel();
channel.eventLoop().execute(() -> {
SocketAddress remoteAddress = addresses.get(index);
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connecting to [" + remoteAddress + "]."));
}
ChannelFuture f;
if (bindAddress == null) {
f = channel.connect(remoteAddress);
}
else {
SocketAddress local = Objects.requireNonNull(bindAddress.get(), "bindAddress");
f = channel.connect(remoteAddress, local);
}
f.addListener(future -> {
if (future.isSuccess()) {
connectPromise.setSuccess();
}
else {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
Throwable cause = future.cause();
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connect attempt to [" + remoteAddress + "] failed."), cause);
}
int next = index + 1;
if (next < addresses.size()) {
connectPromise.setFailure(new RetryConnectException(addresses));
}
else {
connectPromise.setFailure(cause);
}
}
});
});
}
@SuppressWarnings("FutureReturnValueIgnored")
static Mono<Channel> doInitAndRegister(
TransportConfig config,
ChannelInitializer<Channel> channelInitializer,
boolean isDomainSocket,
EventLoop eventLoop) {
ChannelFactory<? extends Channel> channelFactory = config.connectionFactory(config.eventLoopGroup(), isDomainSocket);
Channel channel = null;
try {
channel = channelFactory.newChannel();
if (channelInitializer instanceof ServerTransport.AcceptorInitializer) {
((ServerTransport.AcceptorInitializer) channelInitializer).acceptor.enableAutoReadTask(channel);
}
channel.pipeline().addLast(channelInitializer);
setChannelOptions(channel, config.options, isDomainSocket);
setAttributes(channel, config.attrs);
}
catch (Throwable t) {
if (channel != null) {
channel.unsafe().closeForcibly();
}
return Mono.error(t);
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
channel.unsafe().register(eventLoop, monoChannelPromise);
Throwable cause = monoChannelPromise.cause();
if (cause != null) {
if (channel.isRegistered()) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
else {
channel.unsafe().closeForcibly();
}
}
return monoChannelPromise;
}
@SuppressWarnings({"unchecked", "FutureReturnValueIgnored", "try"})
static Mono<Channel> doResolveAndConnect(Channel channel, TransportConfig config,
SocketAddress remoteAddress, AddressResolverGroup<?> resolverGroup, ContextView contextView) {
try {
AddressResolver<SocketAddress> resolver;
try {
resolver = (AddressResolver<SocketAddress>) resolverGroup.getResolver(channel.eventLoop());
}
catch (Throwable t) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(t);
}
if (!contextView.isEmpty()) {
setChannelContext(channel, contextView);
}
Supplier<? extends SocketAddress> bindAddress = config.bindAddress();
if (!resolver.isSupported(remoteAddress) || resolver.isResolved(remoteAddress)) {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(Collections.singletonList(remoteAddress), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolve != null) {
clientTransportConfig.doOnResolve.accept(Connection.from(channel));
}
}
Future<List<SocketAddress>> resolveFuture;
if (resolver instanceof MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver) {
resolveFuture = ((MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver<SocketAddress>) resolver)
.resolveAll(remoteAddress, contextView);
}
else {
resolveFuture = resolver.resolveAll(remoteAddress);
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolveError != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
clientTransportConfig.doOnResolveError.accept(Connection.from(channel), future.cause());
}
});
}
if (clientTransportConfig.doAfterResolve != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.isSuccess()) {
clientTransportConfig.doAfterResolve.accept(Connection.from(channel), future.getNow().get(0));
}
});
}
}
if (resolveFuture.isDone()) {
Throwable cause = resolveFuture.cause();
if (cause != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(cause);
}
else {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(resolveFuture.getNow(), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
monoChannelPromise.tryFailure(future.cause());
}
else {
doConnect(future.getNow(), bindAddress, monoChannelPromise, 0);
}
});
return monoChannelPromise;
}
catch (Throwable t) {
return Mono.error(t);
}
}
static final class MonoChannelPromise extends Mono<Channel> implements ChannelPromise, Subscription {
final Channel channel;
CoreSubscriber<? super Channel> actual;
MonoChannelPromise(Channel channel) {
this.channel = channel;
}
@Override
public ChannelPromise addListener(GenericFutureListener<? extends Future<? super Void>> listener) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise await() {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise awaitUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void cancel() {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
return false;
}
@Override
public Throwable cause() {
Object result = this.result;
return result == SUCCESS ? null : (Throwable) result;
}
@Override
public Channel channel() {
return channel;
}
@Override
public Void get() {
throw new UnsupportedOperationException();
}
@Override
public Void get(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public Void getNow() {
throw new UnsupportedOperationException();
}
@Override
public boolean isCancellable() {
return false;
}
@Override
public boolean isCancelled() {
return false;
}
@Override
public boolean isDone() {
Object result = this.result;
return result != null;
}
@Override
public boolean isSuccess() {
Object result = this.result;
return result == SUCCESS;
}
@Override
public boolean isVoid() {
return false;
}
@Override
public ChannelPromise removeListener(GenericFutureListener<? extends Future<? super Void>> listener) {
return this;
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
return this;
}
@Override
public void request(long n) {
// noop
}
@Override
public ChannelPromise setFailure(Throwable cause) {
tryFailure(cause);
return this;
}
@Override
public ChannelPromise setSuccess() {
trySuccess(null);
return this;
}
@Override
public ChannelPromise setSuccess(Void result) {
trySuccess(null);
return this;
}
@Override
public boolean setUncancellable() {
return true;
}
@Override
public void subscribe(CoreSubscriber<? super Channel> actual) {
EventLoop eventLoop = channel.eventLoop();
if (eventLoop.inEventLoop()) {
_subscribe(actual);
}
else {
eventLoop.execute(() -> _subscribe(actual));
}
}
@Override
public ChannelPromise sync() {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise syncUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean tryFailure(Throwable cause) {
if (RESULT_UPDATER.compareAndSet(this, null, cause)) {
if (actual != null) {
actual.onError(cause);
}
return true;
}
return false;
}
@Override
public boolean trySuccess() {
return trySuccess(null);
}
@Override
public boolean trySuccess(Void result) {
if (RESULT_UPDATER.compareAndSet(this, null, SUCCESS)) {
if (actual != null) {
actual.onNext(channel);
actual.onComplete();
}
return true;
}
return false;
}
@Override
public ChannelPromise unvoid() {
return new DefaultChannelPromise(channel) {
@Override
public ChannelPromise setSuccess(Void result) {
super.trySuccess(null);
MonoChannelPromise.this.trySuccess(null);
return this;
}
@Override
public boolean trySuccess(Void result) {
super.trySuccess(null);
return MonoChannelPromise.this.trySuccess(null);
}
@Override
public ChannelPromise setFailure(Throwable cause) {
super.tryFailure(cause);
MonoChannelPromise.this.tryFailure(cause);
return this;
}
@Override
public boolean tryFailure(Throwable cause) {
super.tryFailure(cause);
return MonoChannelPromise.this.tryFailure(cause);
}
};
}
void _subscribe(CoreSubscriber<? super Channel> actual) {
this.actual = actual;
actual.onSubscribe(this);
if (isDone()) {
if (isSuccess()) {
actual.onNext(channel);
actual.onComplete();
}
else {
actual.onError(cause());
}
}
}
static final Object SUCCESS = new Object();
static final AtomicReferenceFieldUpdater<MonoChannelPromise, Object> RESULT_UPDATER =
AtomicReferenceFieldUpdater.newUpdater(MonoChannelPromise.class, Object.class, "result");
volatile Object result;
}
static final class RetryConnectException extends RuntimeException {
final List<SocketAddress> addresses;
RetryConnectException(List<SocketAddress> addresses) {
this.addresses = addresses;
}
@Override
public synchronized Throwable fillInStackTrace() {
// omit stacktrace for this exception
return this;
}
private static final long serialVersionUID = -207274323623692199L;
}
static final Logger log = Loggers.getLogger(TransportConnector.class);
static final Predicate<Throwable> RETRY_PREDICATE = t -> t instanceof RetryConnectException;
}
| /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.transport;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.ChannelPromise;
import io.netty.channel.DefaultChannelPromise;
import io.netty.channel.EventLoop;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.resolver.AddressResolver;
import io.netty.resolver.AddressResolverGroup;
import io.netty.util.AttributeKey;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.GenericFutureListener;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import reactor.util.context.ContextView;
import reactor.util.retry.Retry;
import java.net.SocketAddress;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.function.Predicate;
import java.util.function.Supplier;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.ReactorNetty.setChannelContext;
/**
* {@link TransportConnector} is a helper class that creates, initializes and registers the channel.
* It performs the actual connect operation to the remote peer or binds the channel.
*
* @author Stephane Maldini
* @author Violeta Georgieva
* @since 1.0.0
*/
public final class TransportConnector {
TransportConnector() {}
/**
* Binds a {@link Channel}.
*
* @param config the transport configuration
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param bindAddress the local address
* @param isDomainSocket true if {@link io.netty.channel.unix.DomainSocketChannel} or
* {@link io.netty.channel.unix.ServerDomainSocketChannel} is needed, false otherwise
* @return a {@link Mono} of {@link Channel}
*/
@SuppressWarnings("FutureReturnValueIgnored")
public static Mono<Channel> bind(TransportConfig config, ChannelInitializer<Channel> channelInitializer,
SocketAddress bindAddress, boolean isDomainSocket) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(bindAddress, "bindAddress");
Objects.requireNonNull(channelInitializer, "channelInitializer");
return doInitAndRegister(config, channelInitializer, isDomainSocket, config.eventLoopGroup().next())
.flatMap(channel -> {
MonoChannelPromise promise = new MonoChannelPromise(channel);
// "FutureReturnValueIgnored" this is deliberate
channel.eventLoop().execute(() -> channel.bind(bindAddress, promise.unvoid()));
return promise;
});
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, ContextView contextView) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, config.eventLoopGroup().next(), contextView);
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @return a {@link Mono} of {@link Channel}
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop) {
return connect(config, remoteAddress, resolverGroup, channelInitializer, eventLoop, Context.empty());
}
/**
* Connect a {@link Channel} to the remote peer.
*
* @param config the transport configuration
* @param remoteAddress the {@link SocketAddress} to connect to
* @param resolverGroup the resolver which will resolve the address of the unresolved named address
* @param channelInitializer the {@link ChannelInitializer} that will be used for initializing the channel pipeline
* @param eventLoop the {@link EventLoop} to use for handling the channel.
* @param contextView the current {@link ContextView}
* @return a {@link Mono} of {@link Channel}
* @since 1.0.26
*/
public static Mono<Channel> connect(TransportConfig config, SocketAddress remoteAddress,
AddressResolverGroup<?> resolverGroup, ChannelInitializer<Channel> channelInitializer, EventLoop eventLoop,
ContextView contextView) {
Objects.requireNonNull(config, "config");
Objects.requireNonNull(remoteAddress, "remoteAddress");
Objects.requireNonNull(resolverGroup, "resolverGroup");
Objects.requireNonNull(channelInitializer, "channelInitializer");
Objects.requireNonNull(eventLoop, "eventLoop");
Objects.requireNonNull(contextView, "contextView");
boolean isDomainAddress = remoteAddress instanceof DomainSocketAddress;
return doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(channel -> doResolveAndConnect(channel, config, remoteAddress, resolverGroup, contextView)
.onErrorResume(RetryConnectException.class,
t -> {
AtomicInteger index = new AtomicInteger(1);
return Mono.defer(() ->
doInitAndRegister(config, channelInitializer, isDomainAddress, eventLoop)
.flatMap(ch -> {
MonoChannelPromise mono = new MonoChannelPromise(ch);
doConnect(t.addresses, config.bindAddress(), mono, index.get());
return mono;
}))
.retryWhen(Retry.max(t.addresses.size() - 1)
.filter(RETRY_PREDICATE)
.doBeforeRetry(sig -> index.incrementAndGet()));
}));
}
/**
* Set the channel attributes
*
* @param channel the channel
* @param attrs the attributes
*/
@SuppressWarnings("unchecked")
static void setAttributes(Channel channel, Map<AttributeKey<?>, ?> attrs) {
for (Map.Entry<AttributeKey<?>, ?> e : attrs.entrySet()) {
channel.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
}
/**
* Set the channel options
*
* @param channel the channel
* @param options the options
*/
@SuppressWarnings("unchecked")
static void setChannelOptions(Channel channel, Map<ChannelOption<?>, ?> options, boolean isDomainSocket) {
for (Map.Entry<ChannelOption<?>, ?> e : options.entrySet()) {
if (isDomainSocket &&
(ChannelOption.SO_REUSEADDR.equals(e.getKey()) || ChannelOption.TCP_NODELAY.equals(e.getKey()))) {
continue;
}
try {
if (!channel.config().setOption((ChannelOption<Object>) e.getKey(), e.getValue())) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Unknown channel option '{}' for channel '{}'"), e.getKey(), channel);
}
}
}
catch (Throwable t) {
if (log.isWarnEnabled()) {
log.warn(format(channel, "Failed to set channel option '{}' with value '{}' for channel '{}'"),
e.getKey(), e.getValue(), channel, t);
}
}
}
}
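	// Note: a failed connect attempt below only fails the promise; closing the channel is
	// performed centrally by MonoChannelPromise#tryFailure.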
static void doConnect(
List<SocketAddress> addresses,
@Nullable Supplier<? extends SocketAddress> bindAddress,
MonoChannelPromise connectPromise,
int index) {
Channel channel = connectPromise.channel();
channel.eventLoop().execute(() -> {
SocketAddress remoteAddress = addresses.get(index);
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connecting to [" + remoteAddress + "]."));
}
ChannelFuture f;
if (bindAddress == null) {
f = channel.connect(remoteAddress);
}
else {
SocketAddress local = Objects.requireNonNull(bindAddress.get(), "bindAddress");
f = channel.connect(remoteAddress, local);
}
f.addListener(future -> {
if (future.isSuccess()) {
connectPromise.setSuccess();
}
else {
Throwable cause = future.cause();
if (log.isDebugEnabled()) {
log.debug(format(channel, "Connect attempt to [" + remoteAddress + "] failed."), cause);
}
int next = index + 1;
if (next < addresses.size()) {
connectPromise.setFailure(new RetryConnectException(addresses));
}
else {
connectPromise.setFailure(cause);
}
}
});
});
}
@SuppressWarnings("FutureReturnValueIgnored")
static Mono<Channel> doInitAndRegister(
TransportConfig config,
ChannelInitializer<Channel> channelInitializer,
boolean isDomainSocket,
EventLoop eventLoop) {
ChannelFactory<? extends Channel> channelFactory = config.connectionFactory(config.eventLoopGroup(), isDomainSocket);
Channel channel = null;
try {
channel = channelFactory.newChannel();
if (channelInitializer instanceof ServerTransport.AcceptorInitializer) {
((ServerTransport.AcceptorInitializer) channelInitializer).acceptor.enableAutoReadTask(channel);
}
channel.pipeline().addLast(channelInitializer);
setChannelOptions(channel, config.options, isDomainSocket);
setAttributes(channel, config.attrs);
}
catch (Throwable t) {
if (channel != null) {
channel.unsafe().closeForcibly();
}
return Mono.error(t);
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
channel.unsafe().register(eventLoop, monoChannelPromise);
return monoChannelPromise;
}
@SuppressWarnings({"unchecked", "FutureReturnValueIgnored", "try"})
static Mono<Channel> doResolveAndConnect(Channel channel, TransportConfig config,
SocketAddress remoteAddress, AddressResolverGroup<?> resolverGroup, ContextView contextView) {
try {
AddressResolver<SocketAddress> resolver;
try {
resolver = (AddressResolver<SocketAddress>) resolverGroup.getResolver(channel.eventLoop());
}
catch (Throwable t) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(t);
}
if (!contextView.isEmpty()) {
setChannelContext(channel, contextView);
}
Supplier<? extends SocketAddress> bindAddress = config.bindAddress();
if (!resolver.isSupported(remoteAddress) || resolver.isResolved(remoteAddress)) {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(Collections.singletonList(remoteAddress), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolve != null) {
clientTransportConfig.doOnResolve.accept(Connection.from(channel));
}
}
Future<List<SocketAddress>> resolveFuture;
if (resolver instanceof MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver) {
resolveFuture = ((MicrometerAddressResolverGroupMetrics.MicrometerDelegatingAddressResolver<SocketAddress>) resolver)
.resolveAll(remoteAddress, contextView);
}
else {
resolveFuture = resolver.resolveAll(remoteAddress);
}
if (config instanceof ClientTransportConfig) {
final ClientTransportConfig<?> clientTransportConfig = (ClientTransportConfig<?>) config;
if (clientTransportConfig.doOnResolveError != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
clientTransportConfig.doOnResolveError.accept(Connection.from(channel), future.cause());
}
});
}
if (clientTransportConfig.doAfterResolve != null) {
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.isSuccess()) {
clientTransportConfig.doAfterResolve.accept(Connection.from(channel), future.getNow().get(0));
}
});
}
}
if (resolveFuture.isDone()) {
Throwable cause = resolveFuture.cause();
if (cause != null) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
return Mono.error(cause);
}
else {
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
doConnect(resolveFuture.getNow(), bindAddress, monoChannelPromise, 0);
return monoChannelPromise;
}
}
MonoChannelPromise monoChannelPromise = new MonoChannelPromise(channel);
resolveFuture.addListener((FutureListener<List<SocketAddress>>) future -> {
if (future.cause() != null) {
monoChannelPromise.tryFailure(future.cause());
}
else {
doConnect(future.getNow(), bindAddress, monoChannelPromise, 0);
}
});
return monoChannelPromise;
}
catch (Throwable t) {
return Mono.error(t);
}
}
static final class MonoChannelPromise extends Mono<Channel> implements ChannelPromise, Subscription {
final Channel channel;
CoreSubscriber<? super Channel> actual;
MonoChannelPromise(Channel channel) {
this.channel = channel;
}
@Override
public ChannelPromise addListener(GenericFutureListener<? extends Future<? super Void>> listener) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise addListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise await() {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean await(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise awaitUninterruptibly() {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeoutMillis) {
throw new UnsupportedOperationException();
}
@Override
public boolean awaitUninterruptibly(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void cancel() {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
@Override
public boolean cancel(boolean mayInterruptIfRunning) {
return false;
}
@Override
public Throwable cause() {
Object result = this.result;
return result == SUCCESS ? null : (Throwable) result;
}
@Override
public Channel channel() {
return channel;
}
@Override
public Void get() {
throw new UnsupportedOperationException();
}
@Override
public Void get(long timeout, TimeUnit unit) {
throw new UnsupportedOperationException();
}
@Override
public Void getNow() {
throw new UnsupportedOperationException();
}
@Override
public boolean isCancellable() {
return false;
}
@Override
public boolean isCancelled() {
return false;
}
@Override
public boolean isDone() {
Object result = this.result;
return result != null;
}
@Override
public boolean isSuccess() {
Object result = this.result;
return result == SUCCESS;
}
@Override
public boolean isVoid() {
return false;
}
@Override
public ChannelPromise removeListener(GenericFutureListener<? extends Future<? super Void>> listener) {
return this;
}
@Override
@SuppressWarnings("unchecked")
public ChannelPromise removeListeners(GenericFutureListener<? extends Future<? super Void>>... listeners) {
return this;
}
@Override
public void request(long n) {
// noop
}
@Override
public ChannelPromise setFailure(Throwable cause) {
tryFailure(cause);
return this;
}
@Override
public ChannelPromise setSuccess() {
trySuccess(null);
return this;
}
@Override
public ChannelPromise setSuccess(Void result) {
trySuccess(null);
return this;
}
@Override
public boolean setUncancellable() {
return true;
}
@Override
public void subscribe(CoreSubscriber<? super Channel> actual) {
EventLoop eventLoop = channel.eventLoop();
if (eventLoop.inEventLoop()) {
_subscribe(actual);
}
else {
eventLoop.execute(() -> _subscribe(actual));
}
}
@Override
public ChannelPromise sync() {
throw new UnsupportedOperationException();
}
@Override
public ChannelPromise syncUninterruptibly() {
throw new UnsupportedOperationException();
}
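		// Failure handling is centralized here: the first failure closes the channel
		// (forcibly when it was never registered), so registration, bind and connect
		// errors cannot leak the freshly created channel.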
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public boolean tryFailure(Throwable cause) {
if (RESULT_UPDATER.compareAndSet(this, null, cause)) {
if (channel.isRegistered()) {
// "FutureReturnValueIgnored" this is deliberate
channel.close();
}
else {
channel.unsafe().closeForcibly();
}
if (actual != null) {
actual.onError(cause);
}
return true;
}
return false;
}
@Override
public boolean trySuccess() {
return trySuccess(null);
}
@Override
public boolean trySuccess(Void result) {
if (RESULT_UPDATER.compareAndSet(this, null, SUCCESS)) {
if (actual != null) {
actual.onNext(channel);
actual.onComplete();
}
return true;
}
return false;
}
@Override
public ChannelPromise unvoid() {
return new DefaultChannelPromise(channel) {
@Override
public ChannelPromise setSuccess(Void result) {
super.trySuccess(null);
MonoChannelPromise.this.trySuccess(null);
return this;
}
@Override
public boolean trySuccess(Void result) {
super.trySuccess(null);
return MonoChannelPromise.this.trySuccess(null);
}
@Override
public ChannelPromise setFailure(Throwable cause) {
super.tryFailure(cause);
MonoChannelPromise.this.tryFailure(cause);
return this;
}
@Override
public boolean tryFailure(Throwable cause) {
super.tryFailure(cause);
return MonoChannelPromise.this.tryFailure(cause);
}
};
}
void _subscribe(CoreSubscriber<? super Channel> actual) {
this.actual = actual;
actual.onSubscribe(this);
if (isDone()) {
if (isSuccess()) {
actual.onNext(channel);
actual.onComplete();
}
else {
actual.onError(cause());
}
}
}
static final Object SUCCESS = new Object();
static final AtomicReferenceFieldUpdater<MonoChannelPromise, Object> RESULT_UPDATER =
AtomicReferenceFieldUpdater.newUpdater(MonoChannelPromise.class, Object.class, "result");
volatile Object result;
}
static final class RetryConnectException extends RuntimeException {
final List<SocketAddress> addresses;
RetryConnectException(List<SocketAddress> addresses) {
this.addresses = addresses;
}
@Override
public synchronized Throwable fillInStackTrace() {
// omit stacktrace for this exception
return this;
}
private static final long serialVersionUID = -207274323623692199L;
}
static final Logger log = Loggers.getLogger(TransportConnector.class);
static final Predicate<Throwable> RETRY_PREDICATE = t -> t instanceof RetryConnectException;
}
| SgtSilvio | c48f6a1bdb3a99b5ba8580eb0d2a19bca55be9d2 | 625633ec2abe82c0a07166213d0eea9c6c6e022b | yes thanks | violetagg | 5 |
reactor/reactor-netty | 2,836 | `HttpServer`: Add API for read related timeouts | Fixes #2770 | null | 2023-06-19 06:36:05+00:00 | 2023-06-20 16:47:29+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServer.java | /*
* Copyright (c) 2011-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import java.net.SocketAddress;
import java.time.Duration;
import java.util.Objects;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
import io.netty.channel.group.ChannelGroup;
import io.netty.handler.codec.DecoderException;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.handler.ssl.JdkSslContext;
import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.util.SelfSignedCertificate;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.netty.ConnectionObserver;
import reactor.netty.channel.ChannelMetricsRecorder;
import reactor.netty.http.Http2SettingsSpec;
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.logging.HttpMessageLogFactory;
import reactor.netty.http.logging.ReactorNettyHttpMessageLogFactory;
import reactor.netty.http.server.logging.AccessLog;
import reactor.netty.http.server.logging.AccessLogArgProvider;
import reactor.netty.http.server.logging.AccessLogFactory;
import reactor.netty.internal.util.Metrics;
import reactor.netty.tcp.SslProvider;
import reactor.netty.tcp.TcpServer;
import reactor.netty.transport.ServerTransport;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.context.Context;
import static reactor.netty.ReactorNetty.format;
/**
 * An HttpServer allows building, in a safe immutable way, an HTTP server that is
 * materialized and bound when {@link #bind()} is ultimately called.
 * <p>Examples:
* <pre>
* {@code
* HttpServer.create()
* .host("0.0.0.0")
* .handle((req, res) -> res.sendString(Flux.just("hello")))
* .bind()
* .block();
* }
* </pre>
*
* @author Stephane Maldini
* @author Violeta Georgieva
*/
public abstract class HttpServer extends ServerTransport<HttpServer, HttpServerConfig> {
/**
* Prepare an {@link HttpServer}
*
* @return a new {@link HttpServer}
*/
public static HttpServer create() {
return HttpServerBind.INSTANCE;
}
/**
* Prepare an {@link HttpServer}
* <p>
* <strong>Note:</strong>
 * No single method replaces this deprecated one. The configuration that can be done
 * with this deprecated method can also be done with the other methods exposed by
 * {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.from(...)' method</p>
* <pre>
* {@code
* HttpServer.from(
* TcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
public static HttpServer from(TcpServer tcpServer) {
Objects.requireNonNull(tcpServer, "tcpServer");
return HttpServerBind.applyTcpServerConfig(tcpServer.configuration());
}
/**
* Enable or disable the access log. If enabled, the default log system will be used.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true)
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable) {
HttpServer dup = duplicate();
dup.configuration().accessLog = null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Enable or disable the access log and customize it through an {@link AccessLogFactory}.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true, AccessLogFactory.createFilter(
* args -> String.valueOf(args.uri()).startsWith("/health"),
* args -> AccessLog.create("user-agent={}", args.requestHeader("user-agent"))
 *           ))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
* The {@link AccessLogFactory} class offers several helper methods to generate such a function,
* notably if one wants to {@link AccessLogFactory#createFilter(Predicate) filter} some requests out of the access log.
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @param accessLogFactory the {@link AccessLogFactory} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable, AccessLogFactory accessLogFactory) {
		Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = enable ? accessLogFactory : null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Customize the access log, provided access logging has been enabled through the
* {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(argProvider ->
* AccessLog.create("user-agent={}", argProvider.requestHeader("user-agent")))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* @param accessLogFactory the {@link Function} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.1
* @deprecated as of 1.0.3. Prefer the {@link #accessLog(boolean, AccessLogFactory) variant}
* with the {@link AccessLogFactory} interface instead. This method will be removed in version 1.2.0.
*/
@Deprecated
public final HttpServer accessLog(Function<AccessLogArgProvider, AccessLog> accessLogFactory) {
Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = accessLogFactory;
return dup;
}
@Override
public final HttpServer bindAddress(Supplier<? extends SocketAddress> bindAddressSupplier) {
return super.bindAddress(bindAddressSupplier);
}
@Override
public final HttpServer channelGroup(ChannelGroup channelGroup) {
return super.channelGroup(channelGroup);
}
/**
* Enable GZip response compression if the client request presents accept encoding
 * headers and the provided {@link java.util.function.BiPredicate} matches.
* <p>
* Note: the passed {@link HttpServerRequest} and {@link HttpServerResponse}
 * should be considered read-only and the implementation SHOULD NOT consume or
* write the request/response in this predicate.
* </p>
*
* @param predicate that returns true to compress the response.
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(BiPredicate<HttpServerRequest, HttpServerResponse> predicate) {
Objects.requireNonNull(predicate, "compressionPredicate");
HttpServer dup = duplicate();
dup.configuration().compressPredicate = predicate;
return dup;
}
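	/*
	 * Illustrative sketch (editorial, not part of the original source): compress only
	 * JSON responses; the content-type check is an assumption, not a recommendation.
	 *
	 *   HttpServer.create()
	 *             .compress((req, res) ->
	 *                 String.valueOf(res.responseHeaders().get("Content-Type")).contains("json"))
	 *             .handle((req, res) -> res.sendString(Mono.just("{}")))
	 *             .bindNow();
	 */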
/**
* Specifies whether GZip response compression is enabled if the client request
* presents accept encoding.
*
* @param compressionEnabled if true GZip response compression
* is enabled if the client request presents accept encoding, otherwise disabled.
* @return a new {@link HttpServer}
*/
public final HttpServer compress(boolean compressionEnabled) {
HttpServer dup = duplicate();
if (compressionEnabled) {
dup.configuration().minCompressionSize = 0;
}
else {
dup.configuration().minCompressionSize = -1;
dup.configuration().compressPredicate = null;
}
return dup;
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers AND the response reaches a minimum threshold
*
* @param minResponseSize compression is performed once response size exceeds the given
* value in bytes
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(int minResponseSize) {
if (minResponseSize < 0) {
throw new IllegalArgumentException("minResponseSize must be positive");
}
HttpServer dup = duplicate();
dup.configuration().minCompressionSize = minResponseSize;
return dup;
}
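	/*
	 * Illustrative sketch (editorial; the 1 KB threshold is an assumption): responses
	 * smaller than the threshold are sent uncompressed.
	 *
	 *   HttpServer.create()
	 *             .compress(1024)
	 *             .handle((req, res) -> res.sendString(Mono.just("hello")))
	 *             .bindNow();
	 */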
/**
* Configure the
* {@link ServerCookieEncoder}; {@link ServerCookieDecoder} will be
* chosen based on the encoder
*
* @param encoder the preferred ServerCookieEncoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder) {
Objects.requireNonNull(encoder, "encoder");
ServerCookieDecoder decoder = encoder == ServerCookieEncoder.LAX ?
ServerCookieDecoder.LAX : ServerCookieDecoder.STRICT;
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder} and {@link ServerCookieDecoder}
*
* @param encoder the preferred ServerCookieEncoder
* @param decoder the preferred ServerCookieDecoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder, ServerCookieDecoder decoder) {
Objects.requireNonNull(encoder, "encoder");
Objects.requireNonNull(decoder, "decoder");
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Specifies a custom request handler for deriving information about the connection.
*
* @param handler the forwarded header handler
* @return a new {@link HttpServer}
* @since 0.9.12
*/
public final HttpServer forwarded(BiFunction<ConnectionInfo, HttpRequest, ConnectionInfo> handler) {
Objects.requireNonNull(handler, "handler");
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = handler;
return dup;
}
/**
* Specifies whether support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled.
*
* @param forwardedEnabled if true support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled,
* otherwise disabled.
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer forwarded(boolean forwardedEnabled) {
if (forwardedEnabled) {
if (configuration().forwardedHeaderHandler == DefaultHttpForwardedHeaderHandler.INSTANCE) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = DefaultHttpForwardedHeaderHandler.INSTANCE;
return dup;
}
else if (configuration().forwardedHeaderHandler != null) {
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = null;
return dup;
}
return this;
}
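	/*
	 * Illustrative sketch (editorial): enable the built-in handling of "Forwarded" and
	 * "X-Forwarded-*" headers, e.g. when the server sits behind a reverse proxy.
	 *
	 *   HttpServer.create()
	 *             .forwarded(true)
	 *             .handle((req, res) -> res.sendString(Mono.just(String.valueOf(req.remoteAddress()))))
	 *             .bindNow();
	 */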
/**
* Attach an I/O handler to react on a connected client
*
* @param handler an I/O handler that can dispose underlying connection when {@link
* Publisher} terminates. Only the first registered handler will subscribe to the
* returned {@link Publisher} while other will immediately cancel given a same
* {@link Connection}
*
* @return a new {@link HttpServer}
*/
public final HttpServer handle(
BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
Objects.requireNonNull(handler, "handler");
return childObserve(new HttpServerHandle(handler));
}
@Override
public final HttpServer host(String host) {
return super.host(host);
}
/**
* Apply HTTP/2 configuration
*
* @param http2Settings configures {@link Http2SettingsSpec} before requesting
* @return a new {@link HttpServer}
*/
public final HttpServer http2Settings(Consumer<Http2SettingsSpec.Builder> http2Settings) {
Objects.requireNonNull(http2Settings, "http2Settings");
Http2SettingsSpec.Builder builder = Http2SettingsSpec.builder();
http2Settings.accept(builder);
Http2SettingsSpec settings = builder.build();
if (settings.equals(configuration().http2Settings)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().http2Settings = settings;
return dup;
}
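	/*
	 * Illustrative sketch (editorial; the limit is an assumption, and maxConcurrentStreams
	 * is assumed to be exposed by Http2SettingsSpec.Builder):
	 *
	 *   HttpServer.create()
	 *             .protocol(HttpProtocol.H2C)
	 *             .http2Settings(settings -> settings.maxConcurrentStreams(100))
	 *             .handle((req, res) -> res.sendString(Mono.just("hello")))
	 *             .bindNow();
	 */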
/**
* Apply HTTP form decoder configuration.
* The configuration is used when {@link HttpServerRequest#receiveForm()} is invoked.
* When a specific configuration per request is needed {@link HttpServerRequest#receiveForm(Consumer)}
* should be used.
*
* @param formDecoderBuilder {@link HttpServerFormDecoderProvider.Builder} for HTTP form decoder configuration
* @return a new {@link HttpServer}
* @since 1.0.11
*/
public final HttpServer httpFormDecoder(Consumer<HttpServerFormDecoderProvider.Builder> formDecoderBuilder) {
Objects.requireNonNull(formDecoderBuilder, "formDecoderBuilder");
HttpServerFormDecoderProvider.Build builder = new HttpServerFormDecoderProvider.Build();
formDecoderBuilder.accept(builder);
HttpServerFormDecoderProvider formDecoderProvider = builder.build();
if (formDecoderProvider.equals(configuration().formDecoderProvider)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().formDecoderProvider = formDecoderProvider;
return dup;
}
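	/*
	 * Illustrative sketch (editorial; the in-memory limit is an assumption, and
	 * maxInMemorySize is assumed to be available on the builder):
	 *
	 *   HttpServer.create()
	 *             .httpFormDecoder(builder -> builder.maxInMemorySize(64 * 1024))
	 *             .handle((req, res) -> req.receiveForm().thenEmpty(res.send()))
	 *             .bindNow();
	 */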
/**
* When {@link HttpMessage} is about to be logged the configured factory will be used for
* generating a sanitized log message.
* <p>
* Default to {@link ReactorNettyHttpMessageLogFactory}:
* <ul>
* <li>hides the query from the uri</li>
* <li>hides the headers values</li>
* <li>only {@link DecoderException} message is presented</li>
* </ul>
*
* @param httpMessageLogFactory the factory for generating the log message
* @return a new {@link HttpServer}
* @since 1.0.24
*/
public final HttpServer httpMessageLogFactory(HttpMessageLogFactory httpMessageLogFactory) {
Objects.requireNonNull(httpMessageLogFactory, "httpMessageLogFactory");
HttpServer dup = duplicate();
dup.configuration().httpMessageLogFactory = httpMessageLogFactory;
return dup;
}
/**
* Configure the {@link io.netty.handler.codec.http.HttpServerCodec}'s request decoding options.
*
* @param requestDecoderOptions a function to mutate the provided Http request decoder options
* @return a new {@link HttpServer}
*/
public final HttpServer httpRequestDecoder(Function<HttpRequestDecoderSpec, HttpRequestDecoderSpec> requestDecoderOptions) {
Objects.requireNonNull(requestDecoderOptions, "requestDecoderOptions");
HttpRequestDecoderSpec decoder = requestDecoderOptions.apply(new HttpRequestDecoderSpec()).build();
if (decoder.equals(configuration().decoder)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().decoder = decoder;
return dup;
}
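	/*
	 * Illustrative sketch (editorial; the 16 KB header limit is an assumption):
	 *
	 *   HttpServer.create()
	 *             .httpRequestDecoder(spec -> spec.maxHeaderSize(16 * 1024))
	 *             .handle((req, res) -> res.sendString(Mono.just("hello")))
	 *             .bindNow();
	 */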
/**
* Specifies an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms).
* Once the timeout is reached the connection will be closed.
* <p>If an {@code idleTimeout} is not specified, this indicates no timeout (i.e. infinite),
* which means the connection will be closed only if one of the peers decides to close it.
* <p>If the {@code idleTimeout} is less than {@code 1ms}, then {@code 1ms} will be the idle timeout.
* <p>By default {@code idleTimeout} is not specified.
*
* @param idleTimeout an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms)
* @return a new {@link HttpServer}
* @since 0.9.15
*/
public final HttpServer idleTimeout(Duration idleTimeout) {
Objects.requireNonNull(idleTimeout, "idleTimeout");
HttpServer dup = duplicate();
dup.configuration().idleTimeout = idleTimeout;
return dup;
}
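	/*
	 * Illustrative sketch (editorial; the 30-second value is an assumption): close
	 * connections that sit idle between requests for too long.
	 *
	 *   HttpServer.create()
	 *             .idleTimeout(Duration.ofSeconds(30))
	 *             .handle((req, res) -> res.sendString(Mono.just("hello")))
	 *             .bindNow();
	 */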
/**
* Decorate the configured I/O handler.
* See {@link #handle(BiFunction)}.
*
* @param mapHandle A {@link BiFunction} to decorate the configured I/O handler
* @return a new {@link HttpServer}
*/
public final HttpServer mapHandle(BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle) {
Objects.requireNonNull(mapHandle, "mapHandle");
HttpServer dup = duplicate();
dup.configuration().mapHandle = mapHandle;
return dup;
}
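	/*
	 * Illustrative sketch (editorial; the log message is an assumption): decorate the
	 * configured I/O handler, e.g. to observe how each invocation terminates.
	 *
	 *   HttpServer.create()
	 *             .handle((req, res) -> res.sendString(Mono.just("hello")))
	 *             .mapHandle((result, connection) ->
	 *                 result.doOnError(e -> log.error("handler failed", e)))
	 *             .bindNow();
	 */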
/**
* The maximum number of HTTP/1.1 requests which can be served until the connection is closed by the server.
* Setting this attribute to:
* <ul>
	 * <li><strong>-1</strong>: The connection serves an unlimited number of requests. It is up to the I/O handler to decide
	 * to close the connection. This is the default behaviour.</li>
	 * <li><strong>1</strong>: The connection is marked as non-persistent and serves just one request.</li>
	 * <li><strong>&gt;1</strong>: The connection serves a number of requests up to the specified maximum number,
	 * then the connection is closed by the server.</li>
* </ul>
* @param maxKeepAliveRequests the maximum number of HTTP/1.1 requests which can be served until
* the connection is closed by the server
* @return a new {@link HttpServer}
* @since 1.0.13
*/
public final HttpServer maxKeepAliveRequests(int maxKeepAliveRequests) {
if (maxKeepAliveRequests < -1 || maxKeepAliveRequests == 0) {
throw new IllegalArgumentException("maxKeepAliveRequests must be positive or -1");
}
HttpServer dup = duplicate();
dup.configuration().maxKeepAliveRequests = maxKeepAliveRequests;
return dup;
}
/**
* Whether to enable metrics to be collected and registered in Micrometer's
* {@link io.micrometer.core.instrument.Metrics#globalRegistry globalRegistry}
* under the name {@link reactor.netty.Metrics#HTTP_SERVER_PREFIX}.
* <p>{@code uriTagValue} function receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag.
* For example instead of using the actual uri {@code "/users/1"} as uri tag value, templated uri
* {@code "/users/{id}"} can be used.
* <p><strong>Note:</strong>
* It is strongly recommended to provide a template-like form for the URIs. Without a conversion to a template-like form,
* each distinct URI leads to the creation of a distinct tag, which takes a lot of memory for the metrics.
* <p><strong>Note:</strong>
* It is strongly recommended that applications configure an upper limit for the number of URI tags.
* For example:
* <pre class="code">
* Metrics.globalRegistry
* .config()
* .meterFilter(MeterFilter.maximumAllowableTags(HTTP_SERVER_PREFIX, URI, 100, MeterFilter.deny()));
* </pre>
* <p>By default metrics are not enabled.
*
* @param enable true enables metrics collection; false disables it
* @param uriTagValue a function that receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer metrics(boolean enable, Function<String, String> uriTagValue) {
if (enable) {
if (!Metrics.isMicrometerAvailable() && !Metrics.isTracingAvailable()) {
throw new UnsupportedOperationException(
"To enable metrics, you must add the dependencies to `io.micrometer:micrometer-core`" +
" and `io.micrometer:micrometer-tracing` to the class path first");
}
if (uriTagValue == Function.<String>identity()) {
log.debug("Metrics are enabled with [uriTagValue=Function#identity]. " +
"It is strongly recommended to provide template-like form for the URIs. " +
"Without a conversion to a template-like form, each distinct URI leads " +
"to the creation of a distinct tag, which takes a lot of memory for the metrics.");
}
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(() -> configuration().defaultMetricsRecorder());
dup.configuration().uriTagValue = uriTagValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
@Override
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder) {
return super.metrics(enable, recorder);
}
/**
* Specifies whether the metrics are enabled on the {@link HttpServer}.
* All generated metrics are provided to the specified recorder which is only
* instantiated if metrics are being enabled (the instantiation is not lazy,
* but happens immediately, while configuring the {@link HttpServer}).
* <p>{@code uriValue} function receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* For example instead of using the actual uri {@code "/users/1"} as uri value, templated uri
* {@code "/users/{id}"} can be used.
*
* @param enable true enables metrics collection; false disables it
* @param recorder a supplier for the metrics recorder that receives the collected metrics
* @param uriValue a function that receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* @return a new {@link HttpServer}
*/
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder, Function<String, String> uriValue) {
if (enable) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(recorder);
dup.configuration().uriTagValue = uriValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
/**
* Removes any previously applied SSL configuration customization
*
* @return a new {@link HttpServer}
*/
public final HttpServer noSSL() {
if (configuration().isSecure()) {
HttpServer dup = duplicate();
dup.configuration().sslProvider = null;
return dup;
}
return this;
}
@Override
public final HttpServer port(int port) {
return super.port(port);
}
/**
* The HTTP protocol to support. Default is {@link HttpProtocol#HTTP11}.
*
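* <p>
* For example (an illustrative sketch enabling HTTP/1.1 and cleartext HTTP/2):
* <pre>
* {@code
* HttpServer.create()
*           .protocol(HttpProtocol.HTTP11, HttpProtocol.H2C)
*           .bindNow();
* }
* </pre>
*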
* @param supportedProtocols The various {@link HttpProtocol} this server will support
*
* @return a new {@link HttpServer}
*/
public final HttpServer protocol(HttpProtocol... supportedProtocols) {
Objects.requireNonNull(supportedProtocols, "supportedProtocols");
HttpServer dup = duplicate();
dup.configuration().protocols(supportedProtocols);
return dup;
}
/**
* Specifies whether support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer is enabled.
*
* @param proxyProtocolSupportType
* <ul>
* <li>
* choose {@link ProxyProtocolSupportType#ON}
* to enable support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer.
* </li>
* <li>choose {@link ProxyProtocolSupportType#OFF} to disable the proxy protocol support.</li>
* <li>
* choose {@link ProxyProtocolSupportType#AUTO}
* then each connection of the same {@link HttpServer} will auto detect whether there is proxy protocol,
* so {@link HttpServer} can accept requests with or without proxy protocol at the same time.
* </li>
* </ul>
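* <p>
* For example (an illustrative sketch):
* <pre>
* {@code
* HttpServer.create()
*           .proxyProtocol(ProxyProtocolSupportType.AUTO) // auto detection per connection
*           .bindNow();
* }
* </pre>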
*
* @return a new {@link HttpServer}
*/
public final HttpServer proxyProtocol(ProxyProtocolSupportType proxyProtocolSupportType) {
Objects.requireNonNull(proxyProtocolSupportType, "The parameter: proxyProtocolSupportType must not be null.");
if (proxyProtocolSupportType == configuration().proxyProtocolSupportType) {
return this;
}
if (proxyProtocolSupportType == ProxyProtocolSupportType.ON ||
proxyProtocolSupportType == ProxyProtocolSupportType.AUTO) {
if (!HAProxyMessageReader.isProxyProtocolAvailable()) {
throw new UnsupportedOperationException(
"To enable proxyProtocol, you must add the dependency `io.netty:netty-codec-haproxy`" +
" to the class path first");
}
}
HttpServer dup = duplicate();
dup.configuration().proxyProtocolSupportType = proxyProtocolSupportType;
return dup;
}
/**
* Define routes for the server through the provided {@link HttpServerRoutes} builder.
*
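* <p>
* For example (an illustrative sketch):
* <pre>
* {@code
* HttpServer.create()
*           .route(routes ->
*               routes.get("/hello", (req, res) -> res.sendString(Mono.just("hello"))))
*           .bindNow();
* }
* </pre>
*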
* @param routesBuilder provides a route builder to be mutated in order to define routes.
* @return a new {@link HttpServer} starting the router on subscribe
*/
public final HttpServer route(Consumer<? super HttpServerRoutes> routesBuilder) {
Objects.requireNonNull(routesBuilder, "routesBuilder");
HttpServerRoutes routes = HttpServerRoutes.newRoutes();
routesBuilder.accept(routes);
return handle(routes);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
* will produce the {@link SslContext} to be used, with a default handshake timeout of
* {@code 10} seconds unless the environment property {@code
* reactor.netty.tcp.sslHandshakeTimeout} is set.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @return a new {@link HttpServer}
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder) {
return secure(sslProviderBuilder, false);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
* will produce the {@link SslContext} to be used, with a default handshake timeout of
* {@code 10} seconds unless the environment property {@code
* reactor.netty.tcp.sslHandshakeTimeout} is set.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProviderBuilder, "sslProviderBuilder");
HttpServer dup = duplicate();
SslProvider.SslContextSpec builder = SslProvider.builder();
sslProviderBuilder.accept(builder);
dup.configuration().sslProvider = ((SslProvider.Builder) builder).build();
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
*
* @return a new {@link HttpServer}
*/
public final HttpServer secure(SslProvider sslProvider) {
return secure(sslProvider, false);
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(SslProvider sslProvider, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProvider, "sslProvider");
HttpServer dup = duplicate();
dup.configuration().sslProvider = sslProvider;
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Apply a {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method,
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.tcpConfiguration(...)' method</p>
* <pre>
* {@code
* HttpServer.tcpConfiguration(tcpServer ->
* tcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @param tcpMapper A {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
@SuppressWarnings("ReturnValueIgnored")
public final HttpServer tcpConfiguration(Function<? super TcpServer, ? extends TcpServer> tcpMapper) {
Objects.requireNonNull(tcpMapper, "tcpMapper");
HttpServerTcpConfig tcpServer = new HttpServerTcpConfig(this);
// ReturnValueIgnored is deliberate
tcpMapper.apply(tcpServer);
return tcpServer.httpServer;
}
/**
* Based on the actual configuration, returns a {@link Mono} that triggers:
* <ul>
* <li>the initialization of the event loop groups</li>
* <li>the loading of the necessary native libraries for the transport</li>
* <li>the loading of the necessary native libraries for the security, if there are any</li>
* </ul>
* By default, when this method is not used, the {@code bind operation} absorbs the extra time needed to load resources.
*
* @return a {@link Mono} representing the completion of the warmup
* @since 1.0.3
*/
@Override
public Mono<Void> warmup() {
return Mono.when(
super.warmup(),
Mono.fromRunnable(() -> {
SslProvider provider = configuration().sslProvider();
if (provider != null && !(provider.getSslContext() instanceof JdkSslContext)) {
OpenSsl.version();
}
}));
}
@Override
public final HttpServer wiretap(boolean enable) {
return super.wiretap(enable);
}
static final Logger log = Loggers.getLogger(HttpServer.class);
static final class HttpServerHandle implements ConnectionObserver {
final BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler;
HttpServerHandle(BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
this.handler = handler;
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void onStateChange(Connection connection, State newState) {
if (newState == HttpServerState.REQUEST_RECEIVED) {
try {
if (log.isDebugEnabled()) {
log.debug(format(connection.channel(), "Handler is being applied: {}"), handler);
}
HttpServerOperations ops = (HttpServerOperations) connection;
Publisher<Void> publisher = handler.apply(ops, ops);
Mono<Void> mono = Mono.deferContextual(ctx -> {
ops.currentContext = Context.of(ctx);
return Mono.fromDirect(publisher);
});
if (ops.mapHandle != null) {
mono = ops.mapHandle.apply(mono, connection);
}
mono.subscribe(ops.disposeSubscriber());
}
catch (Throwable t) {
log.error(format(connection.channel(), ""), t);
//"FutureReturnValueIgnored" this is deliberate
connection.channel()
.close();
}
}
}
}
}
| /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import java.net.SocketAddress;
import java.time.Duration;
import java.util.Objects;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
import io.netty.channel.group.ChannelGroup;
import io.netty.handler.codec.DecoderException;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.handler.ssl.JdkSslContext;
import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.util.SelfSignedCertificate;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.netty.ConnectionObserver;
import reactor.netty.channel.ChannelMetricsRecorder;
import reactor.netty.http.Http2SettingsSpec;
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.logging.HttpMessageLogFactory;
import reactor.netty.http.logging.ReactorNettyHttpMessageLogFactory;
import reactor.netty.http.server.logging.AccessLog;
import reactor.netty.http.server.logging.AccessLogArgProvider;
import reactor.netty.http.server.logging.AccessLogFactory;
import reactor.netty.internal.util.Metrics;
import reactor.netty.tcp.SslProvider;
import reactor.netty.tcp.TcpServer;
import reactor.netty.transport.ServerTransport;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import static reactor.netty.ReactorNetty.format;
/**
* An HttpServer allows building, in a safe immutable way, an HTTP server that is
* materialized and bound when {@link #bind()} is ultimately called.
* <p>
* <p>Examples:
* <pre>
* {@code
* HttpServer.create()
* .host("0.0.0.0")
* .handle((req, res) -> res.sendString(Flux.just("hello")))
* .bind()
* .block();
* }
* </pre>
*
* @author Stephane Maldini
* @author Violeta Georgieva
*/
public abstract class HttpServer extends ServerTransport<HttpServer, HttpServerConfig> {
/**
* Prepare an {@link HttpServer}
*
* @return a new {@link HttpServer}
*/
public static HttpServer create() {
return HttpServerBind.INSTANCE;
}
/**
* Prepare an {@link HttpServer}
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method,
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.from(...)' method</p>
* <pre>
* {@code
* HttpServer.from(
* TcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
public static HttpServer from(TcpServer tcpServer) {
Objects.requireNonNull(tcpServer, "tcpServer");
return HttpServerBind.applyTcpServerConfig(tcpServer.configuration());
}
/**
* Enable or disable the access log. If enabled, the default log system will be used.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true)
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable) {
HttpServer dup = duplicate();
dup.configuration().accessLog = null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Enable or disable the access log and customize it through an {@link AccessLogFactory}.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true, AccessLogFactory.createFilter(
* args -> String.valueOf(args.uri()).startsWith("/health"),
* args -> AccessLog.create("user-agent={}", args.requestHeader("user-agent"))
*           ))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
* The {@link AccessLogFactory} class offers several helper methods to generate such a function,
* notably if one wants to {@link AccessLogFactory#createFilter(Predicate) filter} some requests out of the access log.
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @param accessLogFactory the {@link AccessLogFactory} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable, AccessLogFactory accessLogFactory) {
Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = enable ? accessLogFactory : null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Customize the access log, provided access logging has been enabled through the
* {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(argProvider ->
* AccessLog.create("user-agent={}", argProvider.requestHeader("user-agent")))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* @param accessLogFactory the {@link Function} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.1
* @deprecated as of 1.0.3. Prefer the {@link #accessLog(boolean, AccessLogFactory) variant}
* with the {@link AccessLogFactory} interface instead. This method will be removed in version 1.2.0.
*/
@Deprecated
public final HttpServer accessLog(Function<AccessLogArgProvider, AccessLog> accessLogFactory) {
Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = accessLogFactory;
return dup;
}
@Override
public final HttpServer bindAddress(Supplier<? extends SocketAddress> bindAddressSupplier) {
return super.bindAddress(bindAddressSupplier);
}
@Override
public final HttpServer channelGroup(ChannelGroup channelGroup) {
return super.channelGroup(channelGroup);
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers and the provided {@link java.util.function.Predicate} matches.
* <p>
* Note: the passed {@link HttpServerRequest} and {@link HttpServerResponse}
* should be considered read-only and the implementation SHOULD NOT consume or
* write the request/response in this predicate.
* </p>
*
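* <p>
* For example (an illustrative sketch; the URI prefix is arbitrary):
* <pre>
* {@code
* HttpServer.create()
*           .compress((req, res) -> req.uri().startsWith("/api")) // sketch: compress only API responses
*           .bindNow();
* }
* </pre>
*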
* @param predicate that returns true to compress the response.
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(BiPredicate<HttpServerRequest, HttpServerResponse> predicate) {
Objects.requireNonNull(predicate, "compressionPredicate");
HttpServer dup = duplicate();
dup.configuration().compressPredicate = predicate;
return dup;
}
/**
* Specifies whether GZip response compression is enabled if the client request
* presents accept encoding.
*
* @param compressionEnabled if true GZip response compression
* is enabled if the client request presents accept encoding, otherwise disabled.
* @return a new {@link HttpServer}
*/
public final HttpServer compress(boolean compressionEnabled) {
HttpServer dup = duplicate();
if (compressionEnabled) {
dup.configuration().minCompressionSize = 0;
}
else {
dup.configuration().minCompressionSize = -1;
dup.configuration().compressPredicate = null;
}
return dup;
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers AND the response reaches a minimum threshold.
*
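* <p>
* For example (an illustrative sketch; the threshold shown is arbitrary):
* <pre>
* {@code
* HttpServer.create()
*           .compress(1024) // compress responses once they exceed 1024 bytes
*           .bindNow();
* }
* </pre>
*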
* @param minResponseSize compression is performed once response size exceeds the given
* value in bytes
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(int minResponseSize) {
if (minResponseSize < 0) {
throw new IllegalArgumentException("minResponseSize must be positive");
}
HttpServer dup = duplicate();
dup.configuration().minCompressionSize = minResponseSize;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder}; {@link ServerCookieDecoder} will be
* chosen based on the encoder
*
* @param encoder the preferred ServerCookieEncoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder) {
Objects.requireNonNull(encoder, "encoder");
ServerCookieDecoder decoder = encoder == ServerCookieEncoder.LAX ?
ServerCookieDecoder.LAX : ServerCookieDecoder.STRICT;
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder} and {@link ServerCookieDecoder}
*
* @param encoder the preferred ServerCookieEncoder
* @param decoder the preferred ServerCookieDecoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder, ServerCookieDecoder decoder) {
Objects.requireNonNull(encoder, "encoder");
Objects.requireNonNull(decoder, "decoder");
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Specifies a custom request handler for deriving information about the connection.
*
* @param handler the forwarded header handler
* @return a new {@link HttpServer}
* @since 0.9.12
*/
public final HttpServer forwarded(BiFunction<ConnectionInfo, HttpRequest, ConnectionInfo> handler) {
Objects.requireNonNull(handler, "handler");
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = handler;
return dup;
}
/**
* Specifies whether support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled.
*
* @param forwardedEnabled if true support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled,
* otherwise disabled.
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer forwarded(boolean forwardedEnabled) {
if (forwardedEnabled) {
if (configuration().forwardedHeaderHandler == DefaultHttpForwardedHeaderHandler.INSTANCE) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = DefaultHttpForwardedHeaderHandler.INSTANCE;
return dup;
}
else if (configuration().forwardedHeaderHandler != null) {
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = null;
return dup;
}
return this;
}
/**
* Attach an I/O handler to react on a connected client
*
* @param handler an I/O handler that can dispose the underlying connection when the {@link
* Publisher} terminates. Only the first registered handler will subscribe to the
* returned {@link Publisher}, while others will immediately cancel given the same
* {@link Connection}
*
* @return a new {@link HttpServer}
*/
public final HttpServer handle(
BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
Objects.requireNonNull(handler, "handler");
return childObserve(new HttpServerHandle(handler));
}
@Override
public final HttpServer host(String host) {
return super.host(host);
}
/**
* Apply HTTP/2 configuration
*
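* <p>
* For example (an illustrative sketch; the value shown is arbitrary):
* <pre>
* {@code
* HttpServer.create()
*           .protocol(HttpProtocol.H2)
*           .http2Settings(settings -> settings.maxConcurrentStreams(100))
*           .bindNow();
* }
* </pre>
*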
* @param http2Settings configures {@link Http2SettingsSpec} before requesting
* @return a new {@link HttpServer}
*/
public final HttpServer http2Settings(Consumer<Http2SettingsSpec.Builder> http2Settings) {
Objects.requireNonNull(http2Settings, "http2Settings");
Http2SettingsSpec.Builder builder = Http2SettingsSpec.builder();
http2Settings.accept(builder);
Http2SettingsSpec settings = builder.build();
if (settings.equals(configuration().http2Settings)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().http2Settings = settings;
return dup;
}
/**
* Apply HTTP form decoder configuration.
* The configuration is used when {@link HttpServerRequest#receiveForm()} is invoked.
* When a specific configuration per request is needed, {@link HttpServerRequest#receiveForm(Consumer)}
* should be used.
*
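* <p>
* For example (an illustrative sketch; the limit shown is arbitrary):
* <pre>
* {@code
* HttpServer.create()
*           .httpFormDecoder(builder -> builder.maxInMemorySize(0x10000)) // illustrative in-memory limit
*           .bindNow();
* }
* </pre>
*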
* @param formDecoderBuilder {@link HttpServerFormDecoderProvider.Builder} for HTTP form decoder configuration
* @return a new {@link HttpServer}
* @since 1.0.11
*/
public final HttpServer httpFormDecoder(Consumer<HttpServerFormDecoderProvider.Builder> formDecoderBuilder) {
Objects.requireNonNull(formDecoderBuilder, "formDecoderBuilder");
HttpServerFormDecoderProvider.Build builder = new HttpServerFormDecoderProvider.Build();
formDecoderBuilder.accept(builder);
HttpServerFormDecoderProvider formDecoderProvider = builder.build();
if (formDecoderProvider.equals(configuration().formDecoderProvider)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().formDecoderProvider = formDecoderProvider;
return dup;
}
/**
* When {@link HttpMessage} is about to be logged the configured factory will be used for
* generating a sanitized log message.
* <p>
* Default to {@link ReactorNettyHttpMessageLogFactory}:
* <ul>
* <li>hides the query from the uri</li>
* <li>hides the headers values</li>
* <li>only {@link DecoderException} message is presented</li>
* </ul>
*
* @param httpMessageLogFactory the factory for generating the log message
* @return a new {@link HttpServer}
* @since 1.0.24
*/
public final HttpServer httpMessageLogFactory(HttpMessageLogFactory httpMessageLogFactory) {
Objects.requireNonNull(httpMessageLogFactory, "httpMessageLogFactory");
HttpServer dup = duplicate();
dup.configuration().httpMessageLogFactory = httpMessageLogFactory;
return dup;
}
/**
* Configure the {@link io.netty.handler.codec.http.HttpServerCodec}'s request decoding options.
*
* @param requestDecoderOptions a function to mutate the provided HTTP request decoder options
* @return a new {@link HttpServer}
*/
public final HttpServer httpRequestDecoder(Function<HttpRequestDecoderSpec, HttpRequestDecoderSpec> requestDecoderOptions) {
Objects.requireNonNull(requestDecoderOptions, "requestDecoderOptions");
HttpRequestDecoderSpec decoder = requestDecoderOptions.apply(new HttpRequestDecoderSpec()).build();
if (decoder.equals(configuration().decoder)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().decoder = decoder;
return dup;
}
/**
* Specifies an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms).
* Once the timeout is reached the connection will be closed.
* <p>If an {@code idleTimeout} is not specified, this indicates no timeout (i.e. infinite),
* which means the connection will be closed only if one of the peers decides to close it.
* <p>If the {@code idleTimeout} is less than {@code 1ms}, then {@code 1ms} will be the idle timeout.
* <p>By default {@code idleTimeout} is not specified.
*
* @param idleTimeout an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms)
* @return a new {@link HttpServer}
* @since 0.9.15
*/
public final HttpServer idleTimeout(Duration idleTimeout) {
Objects.requireNonNull(idleTimeout, "idleTimeout");
HttpServer dup = duplicate();
dup.configuration().idleTimeout = idleTimeout;
return dup;
}
/**
* Decorate the configured I/O handler.
* See {@link #handle(BiFunction)}.
*
* @param mapHandle A {@link BiFunction} to decorate the configured I/O handler
* @return a new {@link HttpServer}
*/
public final HttpServer mapHandle(BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle) {
Objects.requireNonNull(mapHandle, "mapHandle");
HttpServer dup = duplicate();
dup.configuration().mapHandle = mapHandle;
return dup;
}
/**
* The maximum number of HTTP/1.1 requests which can be served until the connection is closed by the server.
* Setting this attribute to:
* <ul>
* <li><strong>-1</strong>: The connection serves an unlimited number of requests. It is up to the I/O handler to decide
* to close the connection. This is the default behaviour.</li>
* <li><strong>1</strong>: The connection is marked as non-persistent and serves just one request.</li>
* <li><strong>>1</strong>: The connection serves a number of requests up to the specified maximum number
* then the connection is closed by the server.</li>
* </ul>
* @param maxKeepAliveRequests the maximum number of HTTP/1.1 requests which can be served until
* the connection is closed by the server
* @return a new {@link HttpServer}
* @since 1.0.13
*/
public final HttpServer maxKeepAliveRequests(int maxKeepAliveRequests) {
if (maxKeepAliveRequests < -1 || maxKeepAliveRequests == 0) {
throw new IllegalArgumentException("maxKeepAliveRequests must be positive or -1");
}
HttpServer dup = duplicate();
dup.configuration().maxKeepAliveRequests = maxKeepAliveRequests;
return dup;
}
/**
* Whether to enable metrics to be collected and registered in Micrometer's
* {@link io.micrometer.core.instrument.Metrics#globalRegistry globalRegistry}
* under the name {@link reactor.netty.Metrics#HTTP_SERVER_PREFIX}.
* <p>{@code uriTagValue} function receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag.
* For example instead of using the actual uri {@code "/users/1"} as uri tag value, templated uri
* {@code "/users/{id}"} can be used.
* <p><strong>Note:</strong>
* It is strongly recommended to provide a template-like form for the URIs. Without a conversion to a template-like form,
* each distinct URI leads to the creation of a distinct tag, which takes a lot of memory for the metrics.
* <p><strong>Note:</strong>
* It is strongly recommended that applications configure an upper limit for the number of URI tags.
* For example:
* <pre class="code">
* Metrics.globalRegistry
* .config()
* .meterFilter(MeterFilter.maximumAllowableTags(HTTP_SERVER_PREFIX, URI, 100, MeterFilter.deny()));
* </pre>
* <p>By default metrics are not enabled.
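* <p>
* For example (an illustrative sketch; the mapping shown is arbitrary):
* <pre>
* {@code
* HttpServer.create()
*           .metrics(true, uri -> uri.startsWith("/users/") ? "/users/{id}" : uri) // sketch mapping
*           .bindNow();
* }
* </pre>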
*
* @param enable true enables metrics collection; false disables it
* @param uriTagValue a function that receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer metrics(boolean enable, Function<String, String> uriTagValue) {
if (enable) {
if (!Metrics.isMicrometerAvailable() && !Metrics.isTracingAvailable()) {
throw new UnsupportedOperationException(
"To enable metrics, you must add the dependencies to `io.micrometer:micrometer-core`" +
" and `io.micrometer:micrometer-tracing` to the class path first");
}
if (uriTagValue == Function.<String>identity()) {
log.debug("Metrics are enabled with [uriTagValue=Function#identity]. " +
"It is strongly recommended to provide template-like form for the URIs. " +
"Without a conversion to a template-like form, each distinct URI leads " +
"to the creation of a distinct tag, which takes a lot of memory for the metrics.");
}
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(() -> configuration().defaultMetricsRecorder());
dup.configuration().uriTagValue = uriTagValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
@Override
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder) {
return super.metrics(enable, recorder);
}
/**
* Specifies whether the metrics are enabled on the {@link HttpServer}.
* All generated metrics are provided to the specified recorder which is only
* instantiated if metrics are being enabled (the instantiation is not lazy,
* but happens immediately, while configuring the {@link HttpServer}).
* <p>{@code uriValue} function receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* For example instead of using the actual uri {@code "/users/1"} as uri value, templated uri
* {@code "/users/{id}"} can be used.
*
* @param enable true enables metrics collection; false disables it
* @param recorder a supplier for the metrics recorder that receives the collected metrics
* @param uriValue a function that receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* @return a new {@link HttpServer}
*/
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder, Function<String, String> uriValue) {
if (enable) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(recorder);
dup.configuration().uriTagValue = uriValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
/**
* Removes any previously applied SSL configuration customization
*
* @return a new {@link HttpServer}
*/
public final HttpServer noSSL() {
if (configuration().isSecure()) {
HttpServer dup = duplicate();
dup.configuration().sslProvider = null;
return dup;
}
return this;
}
@Override
public final HttpServer port(int port) {
return super.port(port);
}
/**
* The HTTP protocol to support. Default is {@link HttpProtocol#HTTP11}.
*
* @param supportedProtocols The various {@link HttpProtocol} this server will support
*
* @return a new {@link HttpServer}
*/
public final HttpServer protocol(HttpProtocol... supportedProtocols) {
Objects.requireNonNull(supportedProtocols, "supportedProtocols");
HttpServer dup = duplicate();
dup.configuration().protocols(supportedProtocols);
return dup;
}
/**
* Specifies whether support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer is enabled.
*
* @param proxyProtocolSupportType
* <ul>
* <li>
* choose {@link ProxyProtocolSupportType#ON}
* to enable support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer.
* </li>
* <li>choose {@link ProxyProtocolSupportType#OFF} to disable the proxy protocol support.</li>
* <li>
* choose {@link ProxyProtocolSupportType#AUTO}
* then each connection of the same {@link HttpServer} will auto detect whether there is proxy protocol,
* so {@link HttpServer} can accept requests with or without proxy protocol at the same time.
* </li>
* </ul>
*
* @return a new {@link HttpServer}
*/
public final HttpServer proxyProtocol(ProxyProtocolSupportType proxyProtocolSupportType) {
Objects.requireNonNull(proxyProtocolSupportType, "The parameter: proxyProtocolSupportType must not be null.");
if (proxyProtocolSupportType == configuration().proxyProtocolSupportType) {
return this;
}
if (proxyProtocolSupportType == ProxyProtocolSupportType.ON ||
proxyProtocolSupportType == ProxyProtocolSupportType.AUTO) {
if (!HAProxyMessageReader.isProxyProtocolAvailable()) {
throw new UnsupportedOperationException(
"To enable proxyProtocol, you must add the dependency `io.netty:netty-codec-haproxy`" +
" to the class path first");
}
}
HttpServer dup = duplicate();
dup.configuration().proxyProtocolSupportType = proxyProtocolSupportType;
return dup;
}
/**
* Specifies the maximum duration allowed between each network-level read operation while reading a given request
* content (resolution: ms). In other words, {@link io.netty.handler.timeout.ReadTimeoutHandler} is added to the
* channel pipeline after all the request headers are received, and removed from the channel pipeline after the
* content is fully received.
* If the {@code readTimeout} is {@code null}, any previous setting will be removed and no
* {@code readTimeout} will be applied.
* If the {@code readTimeout} is less than {@code 1ms}, then {@code 1ms} will be the
* {@code readTimeout}.
*
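* <p>
* For example (an illustrative sketch; the duration shown is arbitrary):
* <pre>
* {@code
* HttpServer.create()
*           .readTimeout(Duration.ofSeconds(5)) // illustrative value
*           .bindNow();
* }
* </pre>
*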
* @param readTimeout the maximum duration allowed between each network-level read operation while reading a given
* request content (resolution: ms)
* @return a new {@link HttpServer}
* @since 1.1.9
* @see io.netty.handler.timeout.ReadTimeoutHandler
*/
public final HttpServer readTimeout(@Nullable Duration readTimeout) {
if (Objects.equals(readTimeout, configuration().readTimeout)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().readTimeout = readTimeout;
return dup;
}
/**
* Specifies the maximum duration for reading a given request content (resolution: ms).
* If the {@code requestTimeout} is {@code null}, any previous setting will be removed and no
* {@code requestTimeout} will be applied.
* If the {@code requestTimeout} is less than {@code 1ms}, then {@code 1ms} will be the
* {@code requestTimeout}.
*
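* <p>
* For example (an illustrative sketch; the duration shown is arbitrary):
* <pre>
* {@code
* HttpServer.create()
*           .requestTimeout(Duration.ofSeconds(30)) // illustrative value
*           .bindNow();
* }
* </pre>
*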
* @param requestTimeout the maximum duration for reading a given request content (resolution: ms)
* @return a new {@link HttpServer}
* @since 1.1.9
*/
public final HttpServer requestTimeout(@Nullable Duration requestTimeout) {
if (Objects.equals(requestTimeout, configuration().requestTimeout)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().requestTimeout = requestTimeout;
return dup;
}
/**
* Define routes for the server through the provided {@link HttpServerRoutes} builder.
*
* @param routesBuilder provides a route builder to be mutated in order to define routes.
* @return a new {@link HttpServer} starting the router on subscribe
*/
public final HttpServer route(Consumer<? super HttpServerRoutes> routesBuilder) {
Objects.requireNonNull(routesBuilder, "routesBuilder");
HttpServerRoutes routes = HttpServerRoutes.newRoutes();
routesBuilder.accept(routes);
return handle(routes);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
* will produce the {@link SslContext} to be used, with a default handshake timeout of
* {@code 10} seconds unless the environment property {@code
* reactor.netty.tcp.sslHandshakeTimeout} is set.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @return a new {@link HttpServer}
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder) {
return secure(sslProviderBuilder, false);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
* will produce the {@link SslContext} to be used, with a default handshake timeout of
* {@code 10} seconds unless the environment property {@code
* reactor.netty.tcp.sslHandshakeTimeout} is set.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProviderBuilder, "sslProviderBuilder");
HttpServer dup = duplicate();
SslProvider.SslContextSpec builder = SslProvider.builder();
sslProviderBuilder.accept(builder);
dup.configuration().sslProvider = ((SslProvider.Builder) builder).build();
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
*
* @return a new {@link HttpServer}
*/
public final HttpServer secure(SslProvider sslProvider) {
return secure(sslProvider, false);
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(SslProvider sslProvider, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProvider, "sslProvider");
HttpServer dup = duplicate();
dup.configuration().sslProvider = sslProvider;
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Apply a {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method,
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.tcpConfiguration(...)' method</p>
* <pre>
* {@code
* HttpServer.tcpConfiguration(tcpServer ->
* tcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @param tcpMapper A {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
@SuppressWarnings("ReturnValueIgnored")
public final HttpServer tcpConfiguration(Function<? super TcpServer, ? extends TcpServer> tcpMapper) {
Objects.requireNonNull(tcpMapper, "tcpMapper");
HttpServerTcpConfig tcpServer = new HttpServerTcpConfig(this);
// ReturnValueIgnored is deliberate
tcpMapper.apply(tcpServer);
return tcpServer.httpServer;
}
/**
* Based on the actual configuration, returns a {@link Mono} that triggers:
* <ul>
* <li>the initialization of the event loop groups</li>
* <li>the loading of the necessary native libraries for the transport</li>
* <li>the loading of the necessary native libraries for the security, if there are any</li>
* </ul>
* By default, when this method is not used, the {@code bind operation} absorbs the extra time needed to load resources.
*
* @return a {@link Mono} representing the completion of the warmup
* @since 1.0.3
*/
@Override
public Mono<Void> warmup() {
return Mono.when(
super.warmup(),
Mono.fromRunnable(() -> {
SslProvider provider = configuration().sslProvider();
if (provider != null && !(provider.getSslContext() instanceof JdkSslContext)) {
OpenSsl.version();
}
}));
}
@Override
public final HttpServer wiretap(boolean enable) {
return super.wiretap(enable);
}
static final Logger log = Loggers.getLogger(HttpServer.class);
static final class HttpServerHandle implements ConnectionObserver {
final BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler;
HttpServerHandle(BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
this.handler = handler;
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void onStateChange(Connection connection, State newState) {
if (newState == HttpServerState.REQUEST_RECEIVED) {
try {
if (log.isDebugEnabled()) {
log.debug(format(connection.channel(), "Handler is being applied: {}"), handler);
}
HttpServerOperations ops = (HttpServerOperations) connection;
Publisher<Void> publisher = handler.apply(ops, ops);
Mono<Void> mono = Mono.deferContextual(ctx -> {
ops.currentContext = Context.of(ctx);
return Mono.fromDirect(publisher);
});
if (ops.mapHandle != null) {
mono = ops.mapHandle.apply(mono, connection);
}
mono.subscribe(ops.disposeSubscriber());
}
catch (Throwable t) {
log.error(format(connection.channel(), ""), t);
//"FutureReturnValueIgnored" this is deliberate
connection.channel()
.close();
}
}
}
}
}
| violetagg | cc1d8e82d5fe578f1144f5aceb62a6554bbd5be2 | 70f5161fc5245774ac5d3491026af95952a72325 | > "once the request headers are received"
It's not clear to me from this description if the readTimeout takes effect _while_ reading headers, or only takes effect _after_ the headers are completely read, while reading the body.
Perhaps... "a ReadTimeoutHandler is added to the channel pipeline when staring to read a request, and removed from the channel pipeline after finishing reading the full request."
| philsttr | 6 |
reactor/reactor-netty | 2836 | `HttpServer`: Add API for read related timeouts | Fixes #2770 | null | 2023-06-19 06:36:05+00:00 | 2023-06-20 16:47:29+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServer.java | /*
* Copyright (c) 2011-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import java.net.SocketAddress;
import java.time.Duration;
import java.util.Objects;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
import io.netty.channel.group.ChannelGroup;
import io.netty.handler.codec.DecoderException;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.handler.ssl.JdkSslContext;
import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.util.SelfSignedCertificate;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.netty.ConnectionObserver;
import reactor.netty.channel.ChannelMetricsRecorder;
import reactor.netty.http.Http2SettingsSpec;
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.logging.HttpMessageLogFactory;
import reactor.netty.http.logging.ReactorNettyHttpMessageLogFactory;
import reactor.netty.http.server.logging.AccessLog;
import reactor.netty.http.server.logging.AccessLogArgProvider;
import reactor.netty.http.server.logging.AccessLogFactory;
import reactor.netty.internal.util.Metrics;
import reactor.netty.tcp.SslProvider;
import reactor.netty.tcp.TcpServer;
import reactor.netty.transport.ServerTransport;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.context.Context;
import static reactor.netty.ReactorNetty.format;
/**
* An HttpServer allows building, in a safe and immutable way, an HTTP server that is
* materialized and bound when {@link #bind()} is ultimately called.
* <p>
* <p>Examples:
* <pre>
* {@code
* HttpServer.create()
* .host("0.0.0.0")
* .handle((req, res) -> res.sendString(Flux.just("hello")))
* .bind()
* .block();
* }
* </pre>
*
* @author Stephane Maldini
* @author Violeta Georgieva
*/
public abstract class HttpServer extends ServerTransport<HttpServer, HttpServerConfig> {
/**
* Prepare an {@link HttpServer}
*
* @return a new {@link HttpServer}
*/
public static HttpServer create() {
return HttpServerBind.INSTANCE;
}
/**
* Prepare an {@link HttpServer}
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.from(...)' method</p>
* <pre>
* {@code
* HttpServer.from(
* TcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
public static HttpServer from(TcpServer tcpServer) {
Objects.requireNonNull(tcpServer, "tcpServer");
return HttpServerBind.applyTcpServerConfig(tcpServer.configuration());
}
/**
* Enable or disable the access log. If enabled, the default log system will be used.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true)
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable) {
HttpServer dup = duplicate();
dup.configuration().accessLog = null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Enable or disable the access log and customize it through an {@link AccessLogFactory}.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true, AccessLogFactory.createFilter(
* args -> String.valueOf(args.uri()).startsWith("/health"),
* args -> AccessLog.create("user-agent={}", args.requestHeader("user-agent"))
* ))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
* The {@link AccessLogFactory} class offers several helper methods to generate such a function,
* notably if one wants to {@link AccessLogFactory#createFilter(Predicate) filter} some requests out of the access log.
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @param accessLogFactory the {@link AccessLogFactory} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable, AccessLogFactory accessLogFactory) {
Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = enable ? accessLogFactory : null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Customize the access log, provided access logging has been enabled through the
* {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(argProvider ->
* AccessLog.create("user-agent={}", argProvider.requestHeader("user-agent")))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* @param accessLogFactory the {@link Function} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.1
* @deprecated as of 1.0.3. Prefer the {@link #accessLog(boolean, AccessLogFactory) variant}
* with the {@link AccessLogFactory} interface instead. This method will be removed in version 1.2.0.
*/
@Deprecated
public final HttpServer accessLog(Function<AccessLogArgProvider, AccessLog> accessLogFactory) {
Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = accessLogFactory;
return dup;
}
@Override
public final HttpServer bindAddress(Supplier<? extends SocketAddress> bindAddressSupplier) {
return super.bindAddress(bindAddressSupplier);
}
@Override
public final HttpServer channelGroup(ChannelGroup channelGroup) {
return super.channelGroup(channelGroup);
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers and the provided {@link java.util.function.Predicate} matches.
* <p>
* Note: the passed {@link HttpServerRequest} and {@link HttpServerResponse}
* should be considered read-only and the implementation SHOULD NOT consume or
* write the request/response in this predicate.
* </p>
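* <p>For example, a sketch (header handling illustrative) that compresses only
* textual responses:
* <pre>
* {@code
* compress((req, res) -> {
*     String contentType = res.responseHeaders().get(HttpHeaderNames.CONTENT_TYPE);
*     return contentType != null && contentType.startsWith("text/");
* })
* }
* </pre>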
*
* @param predicate that returns true to compress the response.
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(BiPredicate<HttpServerRequest, HttpServerResponse> predicate) {
Objects.requireNonNull(predicate, "compressionPredicate");
HttpServer dup = duplicate();
dup.configuration().compressPredicate = predicate;
return dup;
}
/**
* Specifies whether GZip response compression is enabled if the client request
* presents accept encoding.
*
* @param compressionEnabled if true GZip response compression
* is enabled if the client request presents accept encoding, otherwise disabled.
* @return a new {@link HttpServer}
*/
public final HttpServer compress(boolean compressionEnabled) {
HttpServer dup = duplicate();
if (compressionEnabled) {
dup.configuration().minCompressionSize = 0;
}
else {
dup.configuration().minCompressionSize = -1;
dup.configuration().compressPredicate = null;
}
return dup;
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers AND the response reaches a minimum threshold
*
* @param minResponseSize compression is performed once response size exceeds the given
* value in bytes
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(int minResponseSize) {
if (minResponseSize < 0) {
throw new IllegalArgumentException("minResponseSize must be positive");
}
HttpServer dup = duplicate();
dup.configuration().minCompressionSize = minResponseSize;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder}; {@link ServerCookieDecoder} will be
* chosen based on the encoder
*
* @param encoder the preferred ServerCookieEncoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder) {
Objects.requireNonNull(encoder, "encoder");
ServerCookieDecoder decoder = encoder == ServerCookieEncoder.LAX ?
ServerCookieDecoder.LAX : ServerCookieDecoder.STRICT;
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder} and {@link ServerCookieDecoder}
*
* @param encoder the preferred ServerCookieEncoder
* @param decoder the preferred ServerCookieDecoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder, ServerCookieDecoder decoder) {
Objects.requireNonNull(encoder, "encoder");
Objects.requireNonNull(decoder, "decoder");
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Specifies a custom request handler for deriving information about the connection.
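* <p>A minimal sketch (header name illustrative); the handler may inspect the
* request and return a, possibly adjusted, {@link ConnectionInfo}:
* <pre>
* {@code
* forwarded((connectionInfo, request) -> {
*     String proto = request.headers().get("X-Custom-Proto");
*     // derive the scheme/addresses from the trusted header, if present
*     return connectionInfo;
* })
* }
* </pre>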
*
* @param handler the forwarded header handler
* @return a new {@link HttpServer}
* @since 0.9.12
*/
public final HttpServer forwarded(BiFunction<ConnectionInfo, HttpRequest, ConnectionInfo> handler) {
Objects.requireNonNull(handler, "handler");
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = handler;
return dup;
}
/**
* Specifies whether support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled.
*
* @param forwardedEnabled if true support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled,
* otherwise disabled.
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer forwarded(boolean forwardedEnabled) {
if (forwardedEnabled) {
if (configuration().forwardedHeaderHandler == DefaultHttpForwardedHeaderHandler.INSTANCE) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = DefaultHttpForwardedHeaderHandler.INSTANCE;
return dup;
}
else if (configuration().forwardedHeaderHandler != null) {
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = null;
return dup;
}
return this;
}
/**
* Attach an I/O handler to react on a connected client
*
* @param handler an I/O handler that can dispose underlying connection when {@link
* Publisher} terminates. Only the first registered handler will subscribe to the
* returned {@link Publisher} while others will immediately cancel given the same
* {@link Connection}
*
* @return a new {@link HttpServer}
*/
public final HttpServer handle(
BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
Objects.requireNonNull(handler, "handler");
return childObserve(new HttpServerHandle(handler));
}
@Override
public final HttpServer host(String host) {
return super.host(host);
}
/**
* Apply HTTP/2 configuration
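* <p>For example, a sketch (value illustrative) that limits the concurrent streams:
* <pre>
* {@code
* http2Settings(settings -> settings.maxConcurrentStreams(100))
* }
* </pre>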
*
* @param http2Settings configures {@link Http2SettingsSpec} before requesting
* @return a new {@link HttpServer}
*/
public final HttpServer http2Settings(Consumer<Http2SettingsSpec.Builder> http2Settings) {
Objects.requireNonNull(http2Settings, "http2Settings");
Http2SettingsSpec.Builder builder = Http2SettingsSpec.builder();
http2Settings.accept(builder);
Http2SettingsSpec settings = builder.build();
if (settings.equals(configuration().http2Settings)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().http2Settings = settings;
return dup;
}
/**
* Apply HTTP form decoder configuration.
* The configuration is used when {@link HttpServerRequest#receiveForm()} is invoked.
* When a specific configuration per request is needed {@link HttpServerRequest#receiveForm(Consumer)}
* should be used.
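* <p>For example, a sketch (limit illustrative) that keeps at most 1 KB of an
* uploaded data item in memory:
* <pre>
* {@code
* httpFormDecoder(builder -> builder.maxInMemorySize(1024))
* }
* </pre>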
*
* @param formDecoderBuilder {@link HttpServerFormDecoderProvider.Builder} for HTTP form decoder configuration
* @return a new {@link HttpServer}
* @since 1.0.11
*/
public final HttpServer httpFormDecoder(Consumer<HttpServerFormDecoderProvider.Builder> formDecoderBuilder) {
Objects.requireNonNull(formDecoderBuilder, "formDecoderBuilder");
HttpServerFormDecoderProvider.Build builder = new HttpServerFormDecoderProvider.Build();
formDecoderBuilder.accept(builder);
HttpServerFormDecoderProvider formDecoderProvider = builder.build();
if (formDecoderProvider.equals(configuration().formDecoderProvider)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().formDecoderProvider = formDecoderProvider;
return dup;
}
/**
* When {@link HttpMessage} is about to be logged the configured factory will be used for
* generating a sanitized log message.
* <p>
* Default to {@link ReactorNettyHttpMessageLogFactory}:
* <ul>
* <li>hides the query from the uri</li>
* <li>hides the headers values</li>
* <li>only {@link DecoderException} message is presented</li>
* </ul>
*
* @param httpMessageLogFactory the factory for generating the log message
* @return a new {@link HttpServer}
* @since 1.0.24
*/
public final HttpServer httpMessageLogFactory(HttpMessageLogFactory httpMessageLogFactory) {
Objects.requireNonNull(httpMessageLogFactory, "httpMessageLogFactory");
HttpServer dup = duplicate();
dup.configuration().httpMessageLogFactory = httpMessageLogFactory;
return dup;
}
/**
* Configure the {@link io.netty.handler.codec.http.HttpServerCodec}'s request decoding options.
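* <p>For example, a sketch (limit illustrative) that raises the maximum allowed
* header size:
* <pre>
* {@code
* httpRequestDecoder(spec -> spec.maxHeaderSize(16384))
* }
* </pre>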
*
* @param requestDecoderOptions a function to mutate the provided Http request decoder options
* @return a new {@link HttpServer}
*/
public final HttpServer httpRequestDecoder(Function<HttpRequestDecoderSpec, HttpRequestDecoderSpec> requestDecoderOptions) {
Objects.requireNonNull(requestDecoderOptions, "requestDecoderOptions");
HttpRequestDecoderSpec decoder = requestDecoderOptions.apply(new HttpRequestDecoderSpec()).build();
if (decoder.equals(configuration().decoder)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().decoder = decoder;
return dup;
}
/**
* Specifies an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms).
* Once the timeout is reached the connection will be closed.
* <p>If an {@code idleTimeout} is not specified, this indicates no timeout (i.e. infinite),
* which means the connection will be closed only if one of the peers decides to close it.
* <p>If the {@code idleTimeout} is less than {@code 1ms}, then {@code 1ms} will be the idle timeout.
* <p>By default {@code idleTimeout} is not specified.
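* <p>For example (duration illustrative):
* <pre>
* {@code
* idleTimeout(Duration.ofSeconds(30))
* }
* </pre>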
*
* @param idleTimeout an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms)
* @return a new {@link HttpServer}
* @since 0.9.15
*/
public final HttpServer idleTimeout(Duration idleTimeout) {
Objects.requireNonNull(idleTimeout, "idleTimeout");
HttpServer dup = duplicate();
dup.configuration().idleTimeout = idleTimeout;
return dup;
}
/**
* Decorate the configured I/O handler.
* See {@link #handle(BiFunction)}.
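* <p>For example, a sketch (logging illustrative) that adds an error callback
* around every handler invocation:
* <pre>
* {@code
* mapHandle((mono, connection) ->
*         mono.doOnError(t -> log.error("Handler failed", t)))
* }
* </pre>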
*
* @param mapHandle A {@link BiFunction} to decorate the configured I/O handler
* @return a new {@link HttpServer}
*/
public final HttpServer mapHandle(BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle) {
Objects.requireNonNull(mapHandle, "mapHandle");
HttpServer dup = duplicate();
dup.configuration().mapHandle = mapHandle;
return dup;
}
/**
* The maximum number of HTTP/1.1 requests which can be served until the connection is closed by the server.
* Setting this attribute to:
* <ul>
* <li><strong>-1</strong>: The connection serves an unlimited number of requests. It is up to the I/O handler to decide
* to close the connection. This is the default behaviour.</li>
* <li><strong>1</strong>: The connection is marked as non persistent and serves just one request.</li>
* <li><strong>>1</strong>: The connection serves a number of requests up to the specified maximum number
* then the connection is closed by the server.</li>
* </ul>
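* <p>For example (value illustrative):
* <pre>
* {@code
* maxKeepAliveRequests(100) // close the connection after 100 requests
* }
* </pre>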
* @param maxKeepAliveRequests the maximum number of HTTP/1.1 requests which can be served until
* the connection is closed by the server
* @return a new {@link HttpServer}
* @since 1.0.13
*/
public final HttpServer maxKeepAliveRequests(int maxKeepAliveRequests) {
if (maxKeepAliveRequests < -1 || maxKeepAliveRequests == 0) {
throw new IllegalArgumentException("maxKeepAliveRequests must be positive or -1");
}
HttpServer dup = duplicate();
dup.configuration().maxKeepAliveRequests = maxKeepAliveRequests;
return dup;
}
/**
* Whether to enable metrics to be collected and registered in Micrometer's
* {@link io.micrometer.core.instrument.Metrics#globalRegistry globalRegistry}
* under the name {@link reactor.netty.Metrics#HTTP_SERVER_PREFIX}.
* <p>{@code uriTagValue} function receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag.
* For example instead of using the actual uri {@code "/users/1"} as uri tag value, templated uri
* {@code "/users/{id}"} can be used.
* <p><strong>Note:</strong>
* It is strongly recommended to provide template-like form for the URIs. Without a conversion to a template-like form,
* each distinct URI leads to the creation of a distinct tag, which takes a lot of memory for the metrics.
* <p><strong>Note:</strong>
* It is strongly recommended that applications configure an upper limit for the number of URI tags.
* For example:
* <pre class="code">
* Metrics.globalRegistry
* .config()
* .meterFilter(MeterFilter.maximumAllowableTags(HTTP_SERVER_PREFIX, URI, 100, MeterFilter.deny()));
* </pre>
* <p>By default metrics are not enabled.
*
* @param enable true enables metrics collection; false disables it
* @param uriTagValue a function that receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer metrics(boolean enable, Function<String, String> uriTagValue) {
if (enable) {
if (!Metrics.isMicrometerAvailable() && !Metrics.isTracingAvailable()) {
throw new UnsupportedOperationException(
"To enable metrics, you must add the dependencies to `io.micrometer:micrometer-core`" +
" and `io.micrometer:micrometer-tracing` to the class path first");
}
if (uriTagValue == Function.<String>identity()) {
log.debug("Metrics are enabled with [uriTagValue=Function#identity]. " +
"It is strongly recommended to provide template-like form for the URIs. " +
"Without a conversion to a template-like form, each distinct URI leads " +
"to the creation of a distinct tag, which takes a lot of memory for the metrics.");
}
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(() -> configuration().defaultMetricsRecorder());
dup.configuration().uriTagValue = uriTagValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
@Override
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder) {
return super.metrics(enable, recorder);
}
/**
* Specifies whether the metrics are enabled on the {@link HttpServer}.
* All generated metrics are provided to the specified recorder which is only
* instantiated if metrics are being enabled (the instantiation is not lazy,
* but happens immediately, while configuring the {@link HttpServer}).
* <p>{@code uriValue} function receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* For example instead of using the actual uri {@code "/users/1"} as uri value, templated uri
* {@code "/users/{id}"} can be used.
*
* @param enable true enables metrics collection; false disables it
* @param recorder a supplier for the metrics recorder that receives the collected metrics
* @param uriValue a function that receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* @return a new {@link HttpServer}
*/
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder, Function<String, String> uriValue) {
if (enable) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(recorder);
dup.configuration().uriTagValue = uriValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
/**
* Removes any previously applied SSL configuration customization
*
* @return a new {@link HttpServer}
*/
public final HttpServer noSSL() {
if (configuration().isSecure()) {
HttpServer dup = duplicate();
dup.configuration().sslProvider = null;
return dup;
}
return this;
}
@Override
public final HttpServer port(int port) {
return super.port(port);
}
/**
* The HTTP protocol to support. Default is {@link HttpProtocol#HTTP11}.
*
* @param supportedProtocols The various {@link HttpProtocol} this server will support
*
* @return a new {@link HttpServer}
*/
public final HttpServer protocol(HttpProtocol... supportedProtocols) {
Objects.requireNonNull(supportedProtocols, "supportedProtocols");
HttpServer dup = duplicate();
dup.configuration().protocols(supportedProtocols);
return dup;
}
/**
* Specifies whether support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer is enabled.
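* <p>For example:
* <pre>
* {@code
* proxyProtocol(ProxyProtocolSupportType.AUTO)
* }
* </pre>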
*
* @param proxyProtocolSupportType
* <ul>
* <li>
* choose {@link ProxyProtocolSupportType#ON}
* to enable support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer.
* </li>
* <li>choose {@link ProxyProtocolSupportType#OFF} to disable the proxy protocol support.</li>
* <li>
* choose {@link ProxyProtocolSupportType#AUTO}
* then each connection of the same {@link HttpServer} will auto detect whether there is proxy protocol,
* so {@link HttpServer} can accept requests with or without proxy protocol at the same time.
* </li>
* </ul>
*
* @return a new {@link HttpServer}
*/
public final HttpServer proxyProtocol(ProxyProtocolSupportType proxyProtocolSupportType) {
Objects.requireNonNull(proxyProtocolSupportType, "The parameter: proxyProtocolSupportType must not be null.");
if (proxyProtocolSupportType == configuration().proxyProtocolSupportType) {
return this;
}
if (proxyProtocolSupportType == ProxyProtocolSupportType.ON ||
proxyProtocolSupportType == ProxyProtocolSupportType.AUTO) {
if (!HAProxyMessageReader.isProxyProtocolAvailable()) {
throw new UnsupportedOperationException(
"To enable proxyProtocol, you must add the dependency `io.netty:netty-codec-haproxy`" +
" to the class path first");
}
}
HttpServer dup = duplicate();
dup.configuration().proxyProtocolSupportType = proxyProtocolSupportType;
return dup;
}
/**
* Define routes for the server through the provided {@link HttpServerRoutes} builder.
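* <p>For example, a sketch (paths and payloads illustrative):
* <pre>
* {@code
* route(routes ->
*         routes.get("/hello", (req, res) -> res.sendString(Mono.just("Hello World!")))
*               .post("/echo", (req, res) -> res.send(req.receive().retain())))
* }
* </pre>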
*
* @param routesBuilder provides a route builder to be mutated in order to define routes.
* @return a new {@link HttpServer} starting the router on subscribe
*/
public final HttpServer route(Consumer<? super HttpServerRoutes> routesBuilder) {
Objects.requireNonNull(routesBuilder, "routeBuilder");
HttpServerRoutes routes = HttpServerRoutes.newRoutes();
routesBuilder.accept(routes);
return handle(routes);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
* will produce the {@link SslContext} to be used, with a default handshake timeout
* of {@code 10} seconds unless the environment property {@code
* reactor.netty.tcp.sslHandshakeTimeout} is set.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @return a new {@link HttpServer}
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder) {
return secure(sslProviderBuilder, false);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
* will produce the {@link SslContext} to be used, with a default handshake timeout
* of {@code 10} seconds unless the environment property {@code
* reactor.netty.tcp.sslHandshakeTimeout} is set.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProviderBuilder, "sslProviderBuilder");
HttpServer dup = duplicate();
SslProvider.SslContextSpec builder = SslProvider.builder();
sslProviderBuilder.accept(builder);
dup.configuration().sslProvider = ((SslProvider.Builder) builder).build();
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
*
* @return a new {@link HttpServer}
*/
public final HttpServer secure(SslProvider sslProvider) {
return secure(sslProvider, false);
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(SslProvider sslProvider, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProvider, "sslProvider");
HttpServer dup = duplicate();
dup.configuration().sslProvider = sslProvider;
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Apply a {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.tcpConfiguration(...)' method</p>
* <pre>
* {@code
* HttpServer.tcpConfiguration(tcpServer ->
* tcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @param tcpMapper A {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
@SuppressWarnings("ReturnValueIgnored")
public final HttpServer tcpConfiguration(Function<? super TcpServer, ? extends TcpServer> tcpMapper) {
Objects.requireNonNull(tcpMapper, "tcpMapper");
HttpServerTcpConfig tcpServer = new HttpServerTcpConfig(this);
// ReturnValueIgnored is deliberate
tcpMapper.apply(tcpServer);
return tcpServer.httpServer;
}
/**
* Based on the actual configuration, returns a {@link Mono} that triggers:
* <ul>
* <li>an initialization of the event loop groups</li>
* <li>the loading of the necessary native libraries for the transport</li>
* <li>the loading of the necessary native libraries for security, if applicable</li>
* </ul>
* By default, when this method is not used, the {@code bind} operation absorbs the extra time needed to load resources.
*
* @return a {@link Mono} representing the completion of the warmup
* @since 1.0.3
*/
@Override
public Mono<Void> warmup() {
return Mono.when(
super.warmup(),
Mono.fromRunnable(() -> {
SslProvider provider = configuration().sslProvider();
if (provider != null && !(provider.getSslContext() instanceof JdkSslContext)) {
OpenSsl.version();
}
}));
}
@Override
public final HttpServer wiretap(boolean enable) {
return super.wiretap(enable);
}
static final Logger log = Loggers.getLogger(HttpServer.class);
static final class HttpServerHandle implements ConnectionObserver {
final BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler;
HttpServerHandle(BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
this.handler = handler;
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void onStateChange(Connection connection, State newState) {
if (newState == HttpServerState.REQUEST_RECEIVED) {
try {
if (log.isDebugEnabled()) {
log.debug(format(connection.channel(), "Handler is being applied: {}"), handler);
}
HttpServerOperations ops = (HttpServerOperations) connection;
Publisher<Void> publisher = handler.apply(ops, ops);
Mono<Void> mono = Mono.deferContextual(ctx -> {
ops.currentContext = Context.of(ctx);
return Mono.fromDirect(publisher);
});
if (ops.mapHandle != null) {
mono = ops.mapHandle.apply(mono, connection);
}
mono.subscribe(ops.disposeSubscriber());
}
catch (Throwable t) {
log.error(format(connection.channel(), ""), t);
//"FutureReturnValueIgnored" this is deliberate
connection.channel()
.close();
}
}
}
}
}
| /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import java.net.SocketAddress;
import java.time.Duration;
import java.util.Objects;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
import io.netty.channel.group.ChannelGroup;
import io.netty.handler.codec.DecoderException;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.handler.ssl.JdkSslContext;
import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.util.SelfSignedCertificate;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.netty.ConnectionObserver;
import reactor.netty.channel.ChannelMetricsRecorder;
import reactor.netty.http.Http2SettingsSpec;
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.logging.HttpMessageLogFactory;
import reactor.netty.http.logging.ReactorNettyHttpMessageLogFactory;
import reactor.netty.http.server.logging.AccessLog;
import reactor.netty.http.server.logging.AccessLogArgProvider;
import reactor.netty.http.server.logging.AccessLogFactory;
import reactor.netty.internal.util.Metrics;
import reactor.netty.tcp.SslProvider;
import reactor.netty.tcp.TcpServer;
import reactor.netty.transport.ServerTransport;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import static reactor.netty.ReactorNetty.format;
/**
* An HttpServer allows building, in a safe and immutable way, an HTTP server that is
* materialized and bound when {@link #bind()} is ultimately called.
* <p>
* <p>Examples:
* <pre>
* {@code
* HttpServer.create()
* .host("0.0.0.0")
* .handle((req, res) -> res.sendString(Flux.just("hello")))
* .bind()
* .block();
* }
* </pre>
*
* @author Stephane Maldini
* @author Violeta Georgieva
*/
public abstract class HttpServer extends ServerTransport<HttpServer, HttpServerConfig> {
/**
* Prepare an {@link HttpServer}
*
* @return a new {@link HttpServer}
*/
public static HttpServer create() {
return HttpServerBind.INSTANCE;
}
/**
* Prepare an {@link HttpServer}
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.from(...)' method</p>
* <pre>
* {@code
* HttpServer.from(
* TcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
public static HttpServer from(TcpServer tcpServer) {
Objects.requireNonNull(tcpServer, "tcpServer");
return HttpServerBind.applyTcpServerConfig(tcpServer.configuration());
}
/**
* Enable or disable the access log. If enabled, the default log system will be used.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true)
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable) {
HttpServer dup = duplicate();
dup.configuration().accessLog = null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Enable or disable the access log and customize it through an {@link AccessLogFactory}.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true, AccessLogFactory.createFilter(
* args -> String.valueOf(args.uri()).startsWith("/health"),
* args -> AccessLog.create("user-agent={}", args.requestHeader("user-agent"))
* ))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
* The {@link AccessLogFactory} class offers several helper methods to generate such a function,
* notably if one wants to {@link AccessLogFactory#createFilter(Predicate) filter} some requests out of the access log.
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @param accessLogFactory the {@link AccessLogFactory} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable, AccessLogFactory accessLogFactory) {
Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = enable ? accessLogFactory : null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Customize the access log, provided access logging has been enabled through the
* {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(argProvider ->
* AccessLog.create("user-agent={}", argProvider.requestHeader("user-agent")))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* @param accessLogFactory the {@link Function} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.1
* @deprecated as of 1.0.3. Prefer the {@link #accessLog(boolean, AccessLogFactory) variant}
* with the {@link AccessLogFactory} interface instead. This method will be removed in version 1.2.0.
*/
@Deprecated
public final HttpServer accessLog(Function<AccessLogArgProvider, AccessLog> accessLogFactory) {
Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = accessLogFactory;
return dup;
}
@Override
public final HttpServer bindAddress(Supplier<? extends SocketAddress> bindAddressSupplier) {
return super.bindAddress(bindAddressSupplier);
}
@Override
public final HttpServer channelGroup(ChannelGroup channelGroup) {
return super.channelGroup(channelGroup);
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers and the provided {@link java.util.function.Predicate} matches.
* <p>
* Note: the passed {@link HttpServerRequest} and {@link HttpServerResponse}
* should be considered read-only and the implementation SHOULD NOT consume or
* write the request/response in this predicate.
* </p>
*
* @param predicate that returns true to compress the response.
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(BiPredicate<HttpServerRequest, HttpServerResponse> predicate) {
Objects.requireNonNull(predicate, "compressionPredicate");
HttpServer dup = duplicate();
dup.configuration().compressPredicate = predicate;
return dup;
}
/**
* Specifies whether GZip response compression is enabled if the client request
* presents accept encoding.
*
* @param compressionEnabled if true GZip response compression
* is enabled if the client request presents accept encoding, otherwise disabled.
* @return a new {@link HttpServer}
*/
public final HttpServer compress(boolean compressionEnabled) {
HttpServer dup = duplicate();
if (compressionEnabled) {
dup.configuration().minCompressionSize = 0;
}
else {
dup.configuration().minCompressionSize = -1;
dup.configuration().compressPredicate = null;
}
return dup;
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers AND the response reaches a minimum threshold
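* <p>For example (threshold illustrative):
* <pre>
* {@code
* compress(1024) // compress only responses larger than 1 KB
* }
* </pre>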
*
* @param minResponseSize compression is performed once response size exceeds the given
* value in bytes
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(int minResponseSize) {
if (minResponseSize < 0) {
throw new IllegalArgumentException("minResponseSize must be positive");
}
HttpServer dup = duplicate();
dup.configuration().minCompressionSize = minResponseSize;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder}; {@link ServerCookieDecoder} will be
* chosen based on the encoder
*
* @param encoder the preferred ServerCookieEncoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder) {
Objects.requireNonNull(encoder, "encoder");
ServerCookieDecoder decoder = encoder == ServerCookieEncoder.LAX ?
ServerCookieDecoder.LAX : ServerCookieDecoder.STRICT;
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder} and {@link ServerCookieDecoder}
*
* @param encoder the preferred ServerCookieEncoder
* @param decoder the preferred ServerCookieDecoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder, ServerCookieDecoder decoder) {
Objects.requireNonNull(encoder, "encoder");
Objects.requireNonNull(decoder, "decoder");
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Specifies a custom request handler for deriving information about the connection.
*
* @param handler the forwarded header handler
* @return a new {@link HttpServer}
* @since 0.9.12
*/
public final HttpServer forwarded(BiFunction<ConnectionInfo, HttpRequest, ConnectionInfo> handler) {
Objects.requireNonNull(handler, "handler");
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = handler;
return dup;
}
/**
* Specifies whether support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled.
*
* @param forwardedEnabled if true support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled,
* otherwise disabled.
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer forwarded(boolean forwardedEnabled) {
if (forwardedEnabled) {
if (configuration().forwardedHeaderHandler == DefaultHttpForwardedHeaderHandler.INSTANCE) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = DefaultHttpForwardedHeaderHandler.INSTANCE;
return dup;
}
else if (configuration().forwardedHeaderHandler != null) {
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = null;
return dup;
}
return this;
}
/**
* Attach an I/O handler to react on a connected client
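* <p>For example, a minimal sketch (payload illustrative):
* <pre>
* {@code
* handle((request, response) -> response.sendString(Mono.just("hello")))
* }
* </pre>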
*
* @param handler an I/O handler that can dispose underlying connection when {@link
* Publisher} terminates. Only the first registered handler will subscribe to the
* returned {@link Publisher} while others will immediately cancel given the same
* {@link Connection}
*
* @return a new {@link HttpServer}
*/
public final HttpServer handle(
BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
Objects.requireNonNull(handler, "handler");
return childObserve(new HttpServerHandle(handler));
}
@Override
public final HttpServer host(String host) {
return super.host(host);
}
/**
* Apply HTTP/2 configuration
*
* @param http2Settings configures {@link Http2SettingsSpec} before requesting
* @return a new {@link HttpServer}
*/
public final HttpServer http2Settings(Consumer<Http2SettingsSpec.Builder> http2Settings) {
Objects.requireNonNull(http2Settings, "http2Settings");
Http2SettingsSpec.Builder builder = Http2SettingsSpec.builder();
http2Settings.accept(builder);
Http2SettingsSpec settings = builder.build();
if (settings.equals(configuration().http2Settings)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().http2Settings = settings;
return dup;
}
/**
* Apply HTTP form decoder configuration.
* The configuration is used when {@link HttpServerRequest#receiveForm()} is invoked.
* When a specific configuration per request is needed {@link HttpServerRequest#receiveForm(Consumer)}
* should be used.
*
* @param formDecoderBuilder {@link HttpServerFormDecoderProvider.Builder} for HTTP form decoder configuration
* @return a new {@link HttpServer}
* @since 1.0.11
*/
public final HttpServer httpFormDecoder(Consumer<HttpServerFormDecoderProvider.Builder> formDecoderBuilder) {
Objects.requireNonNull(formDecoderBuilder, "formDecoderBuilder");
HttpServerFormDecoderProvider.Build builder = new HttpServerFormDecoderProvider.Build();
formDecoderBuilder.accept(builder);
HttpServerFormDecoderProvider formDecoderProvider = builder.build();
if (formDecoderProvider.equals(configuration().formDecoderProvider)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().formDecoderProvider = formDecoderProvider;
return dup;
}
/**
* When {@link HttpMessage} is about to be logged the configured factory will be used for
* generating a sanitized log message.
* <p>
* Default to {@link ReactorNettyHttpMessageLogFactory}:
* <ul>
* <li>hides the query from the uri</li>
* <li>hides the headers values</li>
* <li>only {@link DecoderException} message is presented</li>
* </ul>
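* <p>For example, a sketch that sets the default factory explicitly (assuming
* the {@code INSTANCE} singleton of {@link ReactorNettyHttpMessageLogFactory}):
* <pre>
* {@code
* httpMessageLogFactory(ReactorNettyHttpMessageLogFactory.INSTANCE)
* }
* </pre>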
*
* @param httpMessageLogFactory the factory for generating the log message
* @return a new {@link HttpServer}
* @since 1.0.24
*/
public final HttpServer httpMessageLogFactory(HttpMessageLogFactory httpMessageLogFactory) {
Objects.requireNonNull(httpMessageLogFactory, "httpMessageLogFactory");
HttpServer dup = duplicate();
dup.configuration().httpMessageLogFactory = httpMessageLogFactory;
return dup;
}
/**
* Configure the {@link io.netty.handler.codec.http.HttpServerCodec}'s request decoding options.
*
* @param requestDecoderOptions a function to mutate the provided Http request decoder options
* @return a new {@link HttpServer}
*/
public final HttpServer httpRequestDecoder(Function<HttpRequestDecoderSpec, HttpRequestDecoderSpec> requestDecoderOptions) {
Objects.requireNonNull(requestDecoderOptions, "requestDecoderOptions");
HttpRequestDecoderSpec decoder = requestDecoderOptions.apply(new HttpRequestDecoderSpec()).build();
if (decoder.equals(configuration().decoder)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().decoder = decoder;
return dup;
}
/**
* Specifies an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms).
* Once the timeout is reached the connection will be closed.
* <p>If an {@code idleTimeout} is not specified, this indicates no timeout (i.e. infinite),
* which means the connection will be closed only if one of the peers decides to close it.
* <p>If the {@code idleTimeout} is less than {@code 1ms}, then {@code 1ms} will be the idle timeout.
* <p>By default {@code idleTimeout} is not specified.
*
* @param idleTimeout an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms)
* @return a new {@link HttpServer}
* @since 0.9.15
*/
public final HttpServer idleTimeout(Duration idleTimeout) {
Objects.requireNonNull(idleTimeout, "idleTimeout");
HttpServer dup = duplicate();
dup.configuration().idleTimeout = idleTimeout;
return dup;
}
/**
* Decorate the configured I/O handler.
* See {@link #handle(BiFunction)}.
*
* @param mapHandle A {@link BiFunction} to decorate the configured I/O handler
* @return a new {@link HttpServer}
*/
public final HttpServer mapHandle(BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle) {
Objects.requireNonNull(mapHandle, "mapHandle");
HttpServer dup = duplicate();
dup.configuration().mapHandle = mapHandle;
return dup;
}
/**
* The maximum number of HTTP/1.1 requests which can be served until the connection is closed by the server.
* Setting this attribute to:
* <ul>
* <li><strong>-1</strong>: The connection serves an unlimited number of requests. It is up to the I/O handler to decide
* to close the connection. This is the default behaviour.</li>
* <li><strong>1</strong>: The connection is marked as non persistent and serves just one request.</li>
* <li><strong>>1</strong>: The connection serves a number of requests up to the specified maximum number
* then the connection is closed by the server.</li>
* </ul>
* @param maxKeepAliveRequests the maximum number of HTTP/1.1 requests which can be served until
* the connection is closed by the server
* @return a new {@link HttpServer}
* @since 1.0.13
*/
public final HttpServer maxKeepAliveRequests(int maxKeepAliveRequests) {
if (maxKeepAliveRequests < -1 || maxKeepAliveRequests == 0) {
throw new IllegalArgumentException("maxKeepAliveRequests must be positive or -1");
}
HttpServer dup = duplicate();
dup.configuration().maxKeepAliveRequests = maxKeepAliveRequests;
return dup;
}
/**
* Whether to enable metrics to be collected and registered in Micrometer's
* {@link io.micrometer.core.instrument.Metrics#globalRegistry globalRegistry}
* under the name {@link reactor.netty.Metrics#HTTP_SERVER_PREFIX}.
* <p>{@code uriTagValue} function receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag.
* For example instead of using the actual uri {@code "/users/1"} as uri tag value, templated uri
* {@code "/users/{id}"} can be used.
* <p><strong>Note:</strong>
	 * It is strongly recommended to provide a template-like form for the URIs. Without a conversion to a template-like form,
* each distinct URI leads to the creation of a distinct tag, which takes a lot of memory for the metrics.
* <p><strong>Note:</strong>
	 * It is strongly recommended that applications configure an upper limit for the number of URI tags.
* For example:
* <pre class="code">
* Metrics.globalRegistry
* .config()
* .meterFilter(MeterFilter.maximumAllowableTags(HTTP_SERVER_PREFIX, URI, 100, MeterFilter.deny()));
* </pre>
* <p>By default metrics are not enabled.
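	 * <p>An illustrative sketch that templates user ids in the URI tag (the path
	 * pattern is an assumption made for the example):
	 * <pre>
	 * {@code
	 * HttpServer.create()
	 *           .metrics(true, uri -> uri.startsWith("/users/") ? "/users/{id}" : uri);
	 * }
	 * </pre>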
*
* @param enable true enables metrics collection; false disables it
* @param uriTagValue a function that receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer metrics(boolean enable, Function<String, String> uriTagValue) {
if (enable) {
if (!Metrics.isMicrometerAvailable() && !Metrics.isTracingAvailable()) {
throw new UnsupportedOperationException(
"To enable metrics, you must add the dependencies to `io.micrometer:micrometer-core`" +
" and `io.micrometer:micrometer-tracing` to the class path first");
}
if (uriTagValue == Function.<String>identity()) {
log.debug("Metrics are enabled with [uriTagValue=Function#identity]. " +
"It is strongly recommended to provide template-like form for the URIs. " +
"Without a conversion to a template-like form, each distinct URI leads " +
"to the creation of a distinct tag, which takes a lot of memory for the metrics.");
}
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(() -> configuration().defaultMetricsRecorder());
dup.configuration().uriTagValue = uriTagValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
@Override
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder) {
return super.metrics(enable, recorder);
}
/**
* Specifies whether the metrics are enabled on the {@link HttpServer}.
* All generated metrics are provided to the specified recorder which is only
* instantiated if metrics are being enabled (the instantiation is not lazy,
* but happens immediately, while configuring the {@link HttpServer}).
* <p>{@code uriValue} function receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* For example instead of using the actual uri {@code "/users/1"} as uri value, templated uri
* {@code "/users/{id}"} can be used.
*
* @param enable true enables metrics collection; false disables it
* @param recorder a supplier for the metrics recorder that receives the collected metrics
* @param uriValue a function that receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* @return a new {@link HttpServer}
*/
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder, Function<String, String> uriValue) {
if (enable) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(recorder);
dup.configuration().uriTagValue = uriValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
/**
* Removes any previously applied SSL configuration customization
*
* @return a new {@link HttpServer}
*/
public final HttpServer noSSL() {
if (configuration().isSecure()) {
HttpServer dup = duplicate();
dup.configuration().sslProvider = null;
return dup;
}
return this;
}
@Override
public final HttpServer port(int port) {
return super.port(port);
}
/**
* The HTTP protocol to support. Default is {@link HttpProtocol#HTTP11}.
*
* @param supportedProtocols The various {@link HttpProtocol} this server will support
*
* @return a new {@link HttpServer}
*/
public final HttpServer protocol(HttpProtocol... supportedProtocols) {
Objects.requireNonNull(supportedProtocols, "supportedProtocols");
HttpServer dup = duplicate();
dup.configuration().protocols(supportedProtocols);
return dup;
}
/**
* Specifies whether support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer is enabled.
*
* @param proxyProtocolSupportType
* <ul>
* <li>
* choose {@link ProxyProtocolSupportType#ON}
* to enable support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer.
* </li>
* <li>choose {@link ProxyProtocolSupportType#OFF} to disable the proxy protocol support.</li>
* <li>
* choose {@link ProxyProtocolSupportType#AUTO}
* then each connection of the same {@link HttpServer} will auto detect whether there is proxy protocol,
* so {@link HttpServer} can accept requests with or without proxy protocol at the same time.
* </li>
* </ul>
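	 * <p>For example (auto detection chosen purely for illustration):
	 * <pre>
	 * {@code
	 * HttpServer.create()
	 *           .proxyProtocol(ProxyProtocolSupportType.AUTO);
	 * }
	 * </pre>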
*
* @return a new {@link HttpServer}
*/
public final HttpServer proxyProtocol(ProxyProtocolSupportType proxyProtocolSupportType) {
Objects.requireNonNull(proxyProtocolSupportType, "The parameter: proxyProtocolSupportType must not be null.");
if (proxyProtocolSupportType == configuration().proxyProtocolSupportType) {
return this;
}
if (proxyProtocolSupportType == ProxyProtocolSupportType.ON ||
proxyProtocolSupportType == ProxyProtocolSupportType.AUTO) {
if (!HAProxyMessageReader.isProxyProtocolAvailable()) {
throw new UnsupportedOperationException(
"To enable proxyProtocol, you must add the dependency `io.netty:netty-codec-haproxy`" +
" to the class path first");
}
}
HttpServer dup = duplicate();
dup.configuration().proxyProtocolSupportType = proxyProtocolSupportType;
return dup;
}
/**
* Specifies the maximum duration allowed between each network-level read operation while reading a given request
* content (resolution: ms). In other words, {@link io.netty.handler.timeout.ReadTimeoutHandler} is added to the
* channel pipeline after all the request headers are received, and removed from the channel pipeline after the
* content is fully received.
* If the {@code readTimeout} is {@code null}, any previous setting will be removed and no
* {@code readTimeout} will be applied.
* If the {@code readTimeout} is less than {@code 1ms}, then {@code 1ms} will be the
* {@code readTimeout}.
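	 * <p>A minimal usage sketch (the five second value is illustrative only):
	 * <pre>
	 * {@code
	 * HttpServer.create()
	 *           .readTimeout(Duration.ofSeconds(5));
	 * }
	 * </pre>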
*
* @param readTimeout the maximum duration allowed between each network-level read operation while reading a given
* request content (resolution: ms)
* @return a new {@link HttpServer}
* @since 1.1.9
* @see io.netty.handler.timeout.ReadTimeoutHandler
*/
public final HttpServer readTimeout(@Nullable Duration readTimeout) {
if (Objects.equals(readTimeout, configuration().readTimeout)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().readTimeout = readTimeout;
return dup;
}
/**
* Specifies the maximum duration for reading a given request content (resolution: ms).
* If the {@code requestTimeout} is {@code null}, any previous setting will be removed and no
* {@code requestTimeout} will be applied.
* If the {@code requestTimeout} is less than {@code 1ms}, then {@code 1ms} will be the
* {@code requestTimeout}.
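	 * <p>A minimal usage sketch (the 30 second value is illustrative only):
	 * <pre>
	 * {@code
	 * HttpServer.create()
	 *           .requestTimeout(Duration.ofSeconds(30));
	 * }
	 * </pre>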
*
* @param requestTimeout the maximum duration for reading a given request content (resolution: ms)
* @return a new {@link HttpServer}
* @since 1.1.9
*/
public final HttpServer requestTimeout(@Nullable Duration requestTimeout) {
if (Objects.equals(requestTimeout, configuration().requestTimeout)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().requestTimeout = requestTimeout;
return dup;
}
/**
* Define routes for the server through the provided {@link HttpServerRoutes} builder.
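	 * <p>A small sketch (the path and response body are arbitrary):
	 * <pre>
	 * {@code
	 * HttpServer.create()
	 *           .route(routes -> routes.get("/hello",
	 *                   (req, res) -> res.sendString(Mono.just("Hello World!"))));
	 * }
	 * </pre>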
*
* @param routesBuilder provides a route builder to be mutated in order to define routes.
* @return a new {@link HttpServer} starting the router on subscribe
*/
public final HttpServer route(Consumer<? super HttpServerRoutes> routesBuilder) {
		Objects.requireNonNull(routesBuilder, "routesBuilder");
HttpServerRoutes routes = HttpServerRoutes.newRoutes();
routesBuilder.accept(routes);
return handle(routes);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
	 * will produce the {@link SslContext} to be passed to the server, with a default
	 * handshake timeout of {@code 10} seconds unless the environment property {@code
	 * reactor.netty.tcp.sslHandshakeTimeout} is set.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @return a new {@link HttpServer}
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder) {
return secure(sslProviderBuilder, false);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
	 * will produce the {@link SslContext} to be passed to the server, with a default
	 * handshake timeout of {@code 10} seconds unless the environment property {@code
	 * reactor.netty.tcp.sslHandshakeTimeout} is set.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProviderBuilder, "sslProviderBuilder");
HttpServer dup = duplicate();
SslProvider.SslContextSpec builder = SslProvider.builder();
sslProviderBuilder.accept(builder);
dup.configuration().sslProvider = ((SslProvider.Builder) builder).build();
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
*
* @return a new {@link HttpServer}
*/
public final HttpServer secure(SslProvider sslProvider) {
return secure(sslProvider, false);
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(SslProvider sslProvider, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProvider, "sslProvider");
HttpServer dup = duplicate();
dup.configuration().sslProvider = sslProvider;
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Apply a {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method,
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.tcpConfiguration(...)' method</p>
* <pre>
* {@code
* HttpServer.tcpConfiguration(tcpServer ->
* tcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @param tcpMapper A {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
@SuppressWarnings("ReturnValueIgnored")
public final HttpServer tcpConfiguration(Function<? super TcpServer, ? extends TcpServer> tcpMapper) {
Objects.requireNonNull(tcpMapper, "tcpMapper");
HttpServerTcpConfig tcpServer = new HttpServerTcpConfig(this);
// ReturnValueIgnored is deliberate
tcpMapper.apply(tcpServer);
return tcpServer.httpServer;
}
/**
* Based on the actual configuration, returns a {@link Mono} that triggers:
* <ul>
* <li>an initialization of the event loop groups</li>
* <li>loads the necessary native libraries for the transport</li>
* <li>loads the necessary native libraries for the security if there is such</li>
* </ul>
	 * By default, when this method is not used, the {@code bind} operation absorbs the extra time needed to load resources.
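	 * <p>A usage sketch (the port and handler are illustrative):
	 * <pre>
	 * {@code
	 * HttpServer server =
	 *         HttpServer.create()
	 *                   .port(8080)
	 *                   .handle((req, res) -> res.sendString(Mono.just("OK")));
	 * server.warmup()
	 *       .block();
	 * DisposableServer disposableServer = server.bindNow();
	 * }
	 * </pre>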
*
* @return a {@link Mono} representing the completion of the warmup
* @since 1.0.3
*/
@Override
public Mono<Void> warmup() {
return Mono.when(
super.warmup(),
Mono.fromRunnable(() -> {
SslProvider provider = configuration().sslProvider();
if (provider != null && !(provider.getSslContext() instanceof JdkSslContext)) {
OpenSsl.version();
}
}));
}
@Override
public final HttpServer wiretap(boolean enable) {
return super.wiretap(enable);
}
static final Logger log = Loggers.getLogger(HttpServer.class);
static final class HttpServerHandle implements ConnectionObserver {
final BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler;
HttpServerHandle(BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
this.handler = handler;
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void onStateChange(Connection connection, State newState) {
if (newState == HttpServerState.REQUEST_RECEIVED) {
try {
if (log.isDebugEnabled()) {
log.debug(format(connection.channel(), "Handler is being applied: {}"), handler);
}
HttpServerOperations ops = (HttpServerOperations) connection;
Publisher<Void> publisher = handler.apply(ops, ops);
Mono<Void> mono = Mono.deferContextual(ctx -> {
ops.currentContext = Context.of(ctx);
return Mono.fromDirect(publisher);
});
if (ops.mapHandle != null) {
mono = ops.mapHandle.apply(mono, connection);
}
mono.subscribe(ops.disposeSubscriber());
}
catch (Throwable t) {
log.error(format(connection.channel(), ""), t);
//"FutureReturnValueIgnored" this is deliberate
connection.channel()
.close();
}
}
}
}
}
| violetagg | cc1d8e82d5fe578f1144f5aceb62a6554bbd5be2 | 70f5161fc5245774ac5d3491026af95952a72325 | The `idleTimeout` stops when we have a request with headers; the `readTimeout` starts when we start reading the content. | violetagg | 7
reactor/reactor-netty | 2836 | `HttpServer`: Add API for read related timeouts | Fixes #2770 | null | 2023-06-19 06:36:05+00:00 | 2023-06-20 16:47:29+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServer.java | /*
* Copyright (c) 2011-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import java.net.SocketAddress;
import java.time.Duration;
import java.util.Objects;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
import io.netty.channel.group.ChannelGroup;
import io.netty.handler.codec.DecoderException;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.handler.ssl.JdkSslContext;
import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.util.SelfSignedCertificate;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.netty.ConnectionObserver;
import reactor.netty.channel.ChannelMetricsRecorder;
import reactor.netty.http.Http2SettingsSpec;
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.logging.HttpMessageLogFactory;
import reactor.netty.http.logging.ReactorNettyHttpMessageLogFactory;
import reactor.netty.http.server.logging.AccessLog;
import reactor.netty.http.server.logging.AccessLogArgProvider;
import reactor.netty.http.server.logging.AccessLogFactory;
import reactor.netty.internal.util.Metrics;
import reactor.netty.tcp.SslProvider;
import reactor.netty.tcp.TcpServer;
import reactor.netty.transport.ServerTransport;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.context.Context;
import static reactor.netty.ReactorNetty.format;
/**
* An HttpServer allows building in a safe immutable way an HTTP server that is
 * materialized and bound when {@link #bind()} is ultimately called.
* <p>
* <p>Examples:
* <pre>
* {@code
* HttpServer.create()
* .host("0.0.0.0")
* .handle((req, res) -> res.sendString(Flux.just("hello")))
* .bind()
* .block();
* }
* </pre>
*
* @author Stephane Maldini
* @author Violeta Georgieva
*/
public abstract class HttpServer extends ServerTransport<HttpServer, HttpServerConfig> {
/**
* Prepare an {@link HttpServer}
*
* @return a new {@link HttpServer}
*/
public static HttpServer create() {
return HttpServerBind.INSTANCE;
}
/**
* Prepare an {@link HttpServer}
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method,
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.from(...)' method</p>
* <pre>
* {@code
* HttpServer.from(
* TcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
	 * @param tcpServer a {@link TcpServer} whose configuration will be applied to the returned {@link HttpServer}
	 * @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
public static HttpServer from(TcpServer tcpServer) {
Objects.requireNonNull(tcpServer, "tcpServer");
return HttpServerBind.applyTcpServerConfig(tcpServer.configuration());
}
/**
* Enable or disable the access log. If enabled, the default log system will be used.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true)
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable) {
HttpServer dup = duplicate();
dup.configuration().accessLog = null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Enable or disable the access log and customize it through an {@link AccessLogFactory}.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true, AccessLogFactory.createFilter(
* args -> String.valueOf(args.uri()).startsWith("/health"),
* args -> AccessLog.create("user-agent={}", args.requestHeader("user-agent"))
	 *           ))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
* The {@link AccessLogFactory} class offers several helper methods to generate such a function,
* notably if one wants to {@link AccessLogFactory#createFilter(Predicate) filter} some requests out of the access log.
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @param accessLogFactory the {@link AccessLogFactory} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable, AccessLogFactory accessLogFactory) {
		Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = enable ? accessLogFactory : null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Customize the access log, provided access logging has been enabled through the
* {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(argProvider ->
* AccessLog.create("user-agent={}", argProvider.requestHeader("user-agent")))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* @param accessLogFactory the {@link Function} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.1
* @deprecated as of 1.0.3. Prefer the {@link #accessLog(boolean, AccessLogFactory) variant}
* with the {@link AccessLogFactory} interface instead. This method will be removed in version 1.2.0.
*/
@Deprecated
public final HttpServer accessLog(Function<AccessLogArgProvider, AccessLog> accessLogFactory) {
Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = accessLogFactory;
return dup;
}
@Override
public final HttpServer bindAddress(Supplier<? extends SocketAddress> bindAddressSupplier) {
return super.bindAddress(bindAddressSupplier);
}
@Override
public final HttpServer channelGroup(ChannelGroup channelGroup) {
return super.channelGroup(channelGroup);
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers and the provided {@link java.util.function.Predicate} matches.
* <p>
* Note: the passed {@link HttpServerRequest} and {@link HttpServerResponse}
	 * should be considered read-only and the implementation SHOULD NOT consume or
* write the request/response in this predicate.
* </p>
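	 * <p>A minimal sketch (the request header name is an arbitrary illustration):
	 * <pre>
	 * {@code
	 * HttpServer.create()
	 *           .compress((req, res) -> req.requestHeaders().contains("x-allow-compression"));
	 * }
	 * </pre>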
*
* @param predicate that returns true to compress the response.
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(BiPredicate<HttpServerRequest, HttpServerResponse> predicate) {
Objects.requireNonNull(predicate, "compressionPredicate");
HttpServer dup = duplicate();
dup.configuration().compressPredicate = predicate;
return dup;
}
/**
* Specifies whether GZip response compression is enabled if the client request
* presents accept encoding.
*
* @param compressionEnabled if true GZip response compression
* is enabled if the client request presents accept encoding, otherwise disabled.
* @return a new {@link HttpServer}
*/
public final HttpServer compress(boolean compressionEnabled) {
HttpServer dup = duplicate();
if (compressionEnabled) {
dup.configuration().minCompressionSize = 0;
}
else {
dup.configuration().minCompressionSize = -1;
dup.configuration().compressPredicate = null;
}
return dup;
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers AND the response reaches a minimum threshold
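	 * <p>For example (the 1 KB threshold is an arbitrary illustration):
	 * <pre>
	 * {@code
	 * HttpServer.create()
	 *           .compress(1024);
	 * }
	 * </pre>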
*
* @param minResponseSize compression is performed once response size exceeds the given
* value in bytes
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(int minResponseSize) {
if (minResponseSize < 0) {
throw new IllegalArgumentException("minResponseSize must be positive");
}
HttpServer dup = duplicate();
dup.configuration().minCompressionSize = minResponseSize;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder}; {@link ServerCookieDecoder} will be
* chosen based on the encoder
*
* @param encoder the preferred ServerCookieEncoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder) {
Objects.requireNonNull(encoder, "encoder");
ServerCookieDecoder decoder = encoder == ServerCookieEncoder.LAX ?
ServerCookieDecoder.LAX : ServerCookieDecoder.STRICT;
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder} and {@link ServerCookieDecoder}
*
* @param encoder the preferred ServerCookieEncoder
* @param decoder the preferred ServerCookieDecoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder, ServerCookieDecoder decoder) {
Objects.requireNonNull(encoder, "encoder");
Objects.requireNonNull(decoder, "decoder");
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Specifies a custom request handler for deriving information about the connection.
*
* @param handler the forwarded header handler
* @return a new {@link HttpServer}
* @since 0.9.12
*/
public final HttpServer forwarded(BiFunction<ConnectionInfo, HttpRequest, ConnectionInfo> handler) {
Objects.requireNonNull(handler, "handler");
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = handler;
return dup;
}
/**
* Specifies whether support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled.
*
* @param forwardedEnabled if true support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled,
* otherwise disabled.
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer forwarded(boolean forwardedEnabled) {
if (forwardedEnabled) {
if (configuration().forwardedHeaderHandler == DefaultHttpForwardedHeaderHandler.INSTANCE) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = DefaultHttpForwardedHeaderHandler.INSTANCE;
return dup;
}
else if (configuration().forwardedHeaderHandler != null) {
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = null;
return dup;
}
return this;
}
/**
* Attach an I/O handler to react on a connected client
*
* @param handler an I/O handler that can dispose underlying connection when {@link
* Publisher} terminates. Only the first registered handler will subscribe to the
	 * returned {@link Publisher} while the others will immediately cancel given the same
* {@link Connection}
*
* @return a new {@link HttpServer}
*/
public final HttpServer handle(
BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
Objects.requireNonNull(handler, "handler");
return childObserve(new HttpServerHandle(handler));
}
@Override
public final HttpServer host(String host) {
return super.host(host);
}
/**
* Apply HTTP/2 configuration
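	 * <p>For example (the stream limit is illustrative):
	 * <pre>
	 * {@code
	 * HttpServer.create()
	 *           .protocol(HttpProtocol.H2C)
	 *           .http2Settings(settings -> settings.maxConcurrentStreams(100));
	 * }
	 * </pre>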
*
	 * @param http2Settings configures the {@link Http2SettingsSpec} for the server
* @return a new {@link HttpServer}
*/
public final HttpServer http2Settings(Consumer<Http2SettingsSpec.Builder> http2Settings) {
Objects.requireNonNull(http2Settings, "http2Settings");
Http2SettingsSpec.Builder builder = Http2SettingsSpec.builder();
http2Settings.accept(builder);
Http2SettingsSpec settings = builder.build();
if (settings.equals(configuration().http2Settings)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().http2Settings = settings;
return dup;
}
/**
* Apply HTTP form decoder configuration.
* The configuration is used when {@link HttpServerRequest#receiveForm()} is invoked.
* When a specific configuration per request is needed {@link HttpServerRequest#receiveForm(Consumer)}
* should be used.
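	 * <p>An illustrative sketch that streams uploads to disk instead of memory
	 * (zero in-memory buffering is an example choice, not a recommendation):
	 * <pre>
	 * {@code
	 * HttpServer.create()
	 *           .httpFormDecoder(builder -> builder.maxInMemorySize(0));
	 * }
	 * </pre>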
*
* @param formDecoderBuilder {@link HttpServerFormDecoderProvider.Builder} for HTTP form decoder configuration
* @return a new {@link HttpServer}
* @since 1.0.11
*/
public final HttpServer httpFormDecoder(Consumer<HttpServerFormDecoderProvider.Builder> formDecoderBuilder) {
Objects.requireNonNull(formDecoderBuilder, "formDecoderBuilder");
HttpServerFormDecoderProvider.Build builder = new HttpServerFormDecoderProvider.Build();
formDecoderBuilder.accept(builder);
HttpServerFormDecoderProvider formDecoderProvider = builder.build();
if (formDecoderProvider.equals(configuration().formDecoderProvider)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().formDecoderProvider = formDecoderProvider;
return dup;
}
/**
* When {@link HttpMessage} is about to be logged the configured factory will be used for
* generating a sanitized log message.
* <p>
* Default to {@link ReactorNettyHttpMessageLogFactory}:
* <ul>
* <li>hides the query from the uri</li>
* <li>hides the headers values</li>
	 * <li>only the {@link DecoderException} message is presented</li>
* </ul>
*
* @param httpMessageLogFactory the factory for generating the log message
* @return a new {@link HttpServer}
* @since 1.0.24
*/
public final HttpServer httpMessageLogFactory(HttpMessageLogFactory httpMessageLogFactory) {
Objects.requireNonNull(httpMessageLogFactory, "httpMessageLogFactory");
HttpServer dup = duplicate();
dup.configuration().httpMessageLogFactory = httpMessageLogFactory;
return dup;
}
/**
* Configure the {@link io.netty.handler.codec.http.HttpServerCodec}'s request decoding options.
*
* @param requestDecoderOptions a function to mutate the provided Http request decoder options
* @return a new {@link HttpServer}
*/
public final HttpServer httpRequestDecoder(Function<HttpRequestDecoderSpec, HttpRequestDecoderSpec> requestDecoderOptions) {
Objects.requireNonNull(requestDecoderOptions, "requestDecoderOptions");
HttpRequestDecoderSpec decoder = requestDecoderOptions.apply(new HttpRequestDecoderSpec()).build();
if (decoder.equals(configuration().decoder)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().decoder = decoder;
return dup;
}
/**
* Specifies an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms).
* Once the timeout is reached the connection will be closed.
* <p>If an {@code idleTimeout} is not specified, this indicates no timeout (i.e. infinite),
* which means the connection will be closed only if one of the peers decides to close it.
* <p>If the {@code idleTimeout} is less than {@code 1ms}, then {@code 1ms} will be the idle timeout.
* <p>By default {@code idleTimeout} is not specified.
*
* @param idleTimeout an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms)
* @return a new {@link HttpServer}
* @since 0.9.15
*/
public final HttpServer idleTimeout(Duration idleTimeout) {
Objects.requireNonNull(idleTimeout, "idleTimeout");
HttpServer dup = duplicate();
dup.configuration().idleTimeout = idleTimeout;
return dup;
}
/**
* Decorate the configured I/O handler.
* See {@link #handle(BiFunction)}.
*
* @param mapHandle A {@link BiFunction} to decorate the configured I/O handler
* @return a new {@link HttpServer}
*/
public final HttpServer mapHandle(BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle) {
Objects.requireNonNull(mapHandle, "mapHandle");
HttpServer dup = duplicate();
dup.configuration().mapHandle = mapHandle;
return dup;
}
/**
* The maximum number of HTTP/1.1 requests which can be served until the connection is closed by the server.
* Setting this attribute to:
* <ul>
	 * <li><strong>-1</strong>: The connection serves an unlimited number of requests. It is up to the I/O handler to decide
* to close the connection. This is the default behaviour.</li>
* <li><strong>1</strong>: The connection is marked as non persistent and serves just one request.</li>
	 * <li><strong>&gt;1</strong>: The connection serves a number of requests up to the specified maximum number
* then the connection is closed by the server.</li>
* </ul>
* @param maxKeepAliveRequests the maximum number of HTTP/1.1 requests which can be served until
* the connection is closed by the server
* @return a new {@link HttpServer}
* @since 1.0.13
*/
public final HttpServer maxKeepAliveRequests(int maxKeepAliveRequests) {
if (maxKeepAliveRequests < -1 || maxKeepAliveRequests == 0) {
throw new IllegalArgumentException("maxKeepAliveRequests must be positive or -1");
}
HttpServer dup = duplicate();
dup.configuration().maxKeepAliveRequests = maxKeepAliveRequests;
return dup;
}
/**
* Whether to enable metrics to be collected and registered in Micrometer's
* {@link io.micrometer.core.instrument.Metrics#globalRegistry globalRegistry}
* under the name {@link reactor.netty.Metrics#HTTP_SERVER_PREFIX}.
* <p>{@code uriTagValue} function receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag.
* For example instead of using the actual uri {@code "/users/1"} as uri tag value, templated uri
* {@code "/users/{id}"} can be used.
* <p><strong>Note:</strong>
	 * It is strongly recommended to provide a template-like form for the URIs. Without a conversion to a template-like form,
* each distinct URI leads to the creation of a distinct tag, which takes a lot of memory for the metrics.
* <p><strong>Note:</strong>
	 * It is strongly recommended that applications configure an upper limit for the number of URI tags.
* For example:
* <pre class="code">
* Metrics.globalRegistry
* .config()
* .meterFilter(MeterFilter.maximumAllowableTags(HTTP_SERVER_PREFIX, URI, 100, MeterFilter.deny()));
* </pre>
* <p>By default metrics are not enabled.
*
* @param enable true enables metrics collection; false disables it
* @param uriTagValue a function that receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer metrics(boolean enable, Function<String, String> uriTagValue) {
if (enable) {
if (!Metrics.isMicrometerAvailable() && !Metrics.isTracingAvailable()) {
throw new UnsupportedOperationException(
"To enable metrics, you must add the dependencies to `io.micrometer:micrometer-core`" +
" and `io.micrometer:micrometer-tracing` to the class path first");
}
if (uriTagValue == Function.<String>identity()) {
log.debug("Metrics are enabled with [uriTagValue=Function#identity]. " +
"It is strongly recommended to provide template-like form for the URIs. " +
"Without a conversion to a template-like form, each distinct URI leads " +
"to the creation of a distinct tag, which takes a lot of memory for the metrics.");
}
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(() -> configuration().defaultMetricsRecorder());
dup.configuration().uriTagValue = uriTagValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
@Override
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder) {
return super.metrics(enable, recorder);
}
/**
* Specifies whether the metrics are enabled on the {@link HttpServer}.
* All generated metrics are provided to the specified recorder which is only
* instantiated if metrics are being enabled (the instantiation is not lazy,
* but happens immediately, while configuring the {@link HttpServer}).
* <p>{@code uriValue} function receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* For example instead of using the actual uri {@code "/users/1"} as uri value, templated uri
* {@code "/users/{id}"} can be used.
*
* @param enable true enables metrics collection; false disables it
* @param recorder a supplier for the metrics recorder that receives the collected metrics
* @param uriValue a function that receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* @return a new {@link HttpServer}
*/
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder, Function<String, String> uriValue) {
if (enable) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(recorder);
dup.configuration().uriTagValue = uriValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
/**
* Removes any previously applied SSL configuration customization
*
* @return a new {@link HttpServer}
*/
public final HttpServer noSSL() {
if (configuration().isSecure()) {
HttpServer dup = duplicate();
dup.configuration().sslProvider = null;
return dup;
}
return this;
}
@Override
public final HttpServer port(int port) {
return super.port(port);
}
/**
* The HTTP protocol to support. Default is {@link HttpProtocol#HTTP11}.
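	 * <p>For example, to serve both cleartext HTTP/2 and HTTP/1.1 (an illustrative
	 * combination):
	 * <pre>
	 * {@code
	 * HttpServer.create()
	 *           .protocol(HttpProtocol.H2C, HttpProtocol.HTTP11);
	 * }
	 * </pre>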
*
* @param supportedProtocols The various {@link HttpProtocol} this server will support
*
* @return a new {@link HttpServer}
*/
public final HttpServer protocol(HttpProtocol... supportedProtocols) {
Objects.requireNonNull(supportedProtocols, "supportedProtocols");
HttpServer dup = duplicate();
dup.configuration().protocols(supportedProtocols);
return dup;
}
/**
* Specifies whether support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer is enabled.
*
* @param proxyProtocolSupportType
* <ul>
* <li>
* choose {@link ProxyProtocolSupportType#ON}
* to enable support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer.
* </li>
* <li>choose {@link ProxyProtocolSupportType#OFF} to disable the proxy protocol support.</li>
* <li>
* choose {@link ProxyProtocolSupportType#AUTO}
* then each connection of the same {@link HttpServer} will auto detect whether there is proxy protocol,
* so {@link HttpServer} can accept requests with or without proxy protocol at the same time.
* </li>
* </ul>
*
* @return a new {@link HttpServer}
*/
public final HttpServer proxyProtocol(ProxyProtocolSupportType proxyProtocolSupportType) {
Objects.requireNonNull(proxyProtocolSupportType, "The parameter: proxyProtocolSupportType must not be null.");
if (proxyProtocolSupportType == configuration().proxyProtocolSupportType) {
return this;
}
if (proxyProtocolSupportType == ProxyProtocolSupportType.ON ||
proxyProtocolSupportType == ProxyProtocolSupportType.AUTO) {
if (!HAProxyMessageReader.isProxyProtocolAvailable()) {
throw new UnsupportedOperationException(
"To enable proxyProtocol, you must add the dependency `io.netty:netty-codec-haproxy`" +
" to the class path first");
}
}
HttpServer dup = duplicate();
dup.configuration().proxyProtocolSupportType = proxyProtocolSupportType;
return dup;
}
/**
* Define routes for the server through the provided {@link HttpServerRoutes} builder.
*
* @param routesBuilder provides a route builder to be mutated in order to define routes.
* @return a new {@link HttpServer} starting the router on subscribe
*/
public final HttpServer route(Consumer<? super HttpServerRoutes> routesBuilder) {
		Objects.requireNonNull(routesBuilder, "routesBuilder");
HttpServerRoutes routes = HttpServerRoutes.newRoutes();
routesBuilder.accept(routes);
return handle(routes);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
	 * will produce the {@link SslContext} to be passed to the server, with a default
	 * handshake timeout of {@code 10} seconds unless the environment property {@code
	 * reactor.netty.tcp.sslHandshakeTimeout} is set.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @return a new {@link HttpServer}
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder) {
return secure(sslProviderBuilder, false);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
	 * will produce the {@link SslContext} to be passed to the server, with a default
	 * handshake timeout of {@code 10} seconds unless the environment property {@code
	 * reactor.netty.tcp.sslHandshakeTimeout} is set.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProviderBuilder, "sslProviderBuilder");
HttpServer dup = duplicate();
SslProvider.SslContextSpec builder = SslProvider.builder();
sslProviderBuilder.accept(builder);
dup.configuration().sslProvider = ((SslProvider.Builder) builder).build();
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
*
* @return a new {@link HttpServer}
*/
public final HttpServer secure(SslProvider sslProvider) {
return secure(sslProvider, false);
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(SslProvider sslProvider, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProvider, "sslProvider");
HttpServer dup = duplicate();
dup.configuration().sslProvider = sslProvider;
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Apply a {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method,
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.tcpConfiguration(...)' method</p>
* <pre>
* {@code
* HttpServer.tcpConfiguration(tcpServer ->
* tcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @param tcpMapper A {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
@SuppressWarnings("ReturnValueIgnored")
public final HttpServer tcpConfiguration(Function<? super TcpServer, ? extends TcpServer> tcpMapper) {
Objects.requireNonNull(tcpMapper, "tcpMapper");
HttpServerTcpConfig tcpServer = new HttpServerTcpConfig(this);
// ReturnValueIgnored is deliberate
tcpMapper.apply(tcpServer);
return tcpServer.httpServer;
}
/**
* Based on the actual configuration, returns a {@link Mono} that triggers:
* <ul>
* <li>an initialization of the event loop groups</li>
* <li>loads the necessary native libraries for the transport</li>
* <li>loads the necessary native libraries for the security if there is such</li>
* </ul>
	 * By default, when this method is not used, the {@code bind} operation absorbs the extra time needed to load resources.
*
* @return a {@link Mono} representing the completion of the warmup
* @since 1.0.3
*/
@Override
public Mono<Void> warmup() {
return Mono.when(
super.warmup(),
Mono.fromRunnable(() -> {
SslProvider provider = configuration().sslProvider();
if (provider != null && !(provider.getSslContext() instanceof JdkSslContext)) {
OpenSsl.version();
}
}));
}
@Override
public final HttpServer wiretap(boolean enable) {
return super.wiretap(enable);
}
static final Logger log = Loggers.getLogger(HttpServer.class);
static final class HttpServerHandle implements ConnectionObserver {
final BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler;
HttpServerHandle(BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
this.handler = handler;
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void onStateChange(Connection connection, State newState) {
if (newState == HttpServerState.REQUEST_RECEIVED) {
try {
if (log.isDebugEnabled()) {
log.debug(format(connection.channel(), "Handler is being applied: {}"), handler);
}
HttpServerOperations ops = (HttpServerOperations) connection;
Publisher<Void> publisher = handler.apply(ops, ops);
Mono<Void> mono = Mono.deferContextual(ctx -> {
ops.currentContext = Context.of(ctx);
return Mono.fromDirect(publisher);
});
if (ops.mapHandle != null) {
mono = ops.mapHandle.apply(mono, connection);
}
mono.subscribe(ops.disposeSubscriber());
}
catch (Throwable t) {
log.error(format(connection.channel(), ""), t);
//"FutureReturnValueIgnored" this is deliberate
connection.channel()
.close();
}
}
}
}
}
| /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import java.net.SocketAddress;
import java.time.Duration;
import java.util.Objects;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
import io.netty.channel.group.ChannelGroup;
import io.netty.handler.codec.DecoderException;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.handler.ssl.JdkSslContext;
import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.util.SelfSignedCertificate;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.netty.ConnectionObserver;
import reactor.netty.channel.ChannelMetricsRecorder;
import reactor.netty.http.Http2SettingsSpec;
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.logging.HttpMessageLogFactory;
import reactor.netty.http.logging.ReactorNettyHttpMessageLogFactory;
import reactor.netty.http.server.logging.AccessLog;
import reactor.netty.http.server.logging.AccessLogArgProvider;
import reactor.netty.http.server.logging.AccessLogFactory;
import reactor.netty.internal.util.Metrics;
import reactor.netty.tcp.SslProvider;
import reactor.netty.tcp.TcpServer;
import reactor.netty.transport.ServerTransport;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import static reactor.netty.ReactorNetty.format;
/**
 * An HttpServer allows building an HTTP server in a safe immutable way; the
 * server is materialized and connecting when {@link #bind()} is ultimately called.
 * <p>Examples:
* <pre>
* {@code
* HttpServer.create()
* .host("0.0.0.0")
* .handle((req, res) -> res.sendString(Flux.just("hello")))
* .bind()
* .block();
* }
* </pre>
*
* @author Stephane Maldini
* @author Violeta Georgieva
*/
public abstract class HttpServer extends ServerTransport<HttpServer, HttpServerConfig> {
/**
* Prepare an {@link HttpServer}
*
* @return a new {@link HttpServer}
*/
public static HttpServer create() {
return HttpServerBind.INSTANCE;
}
/**
* Prepare an {@link HttpServer}
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method,
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.from(...)' method</p>
* <pre>
* {@code
* HttpServer.from(
* TcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
public static HttpServer from(TcpServer tcpServer) {
Objects.requireNonNull(tcpServer, "tcpServer");
return HttpServerBind.applyTcpServerConfig(tcpServer.configuration());
}
/**
* Enable or disable the access log. If enabled, the default log system will be used.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true)
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable) {
HttpServer dup = duplicate();
dup.configuration().accessLog = null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Enable or disable the access log and customize it through an {@link AccessLogFactory}.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true, AccessLogFactory.createFilter(
* args -> String.valueOf(args.uri()).startsWith("/health"),
* args -> AccessLog.create("user-agent={}", args.requestHeader("user-agent"))
	 * ))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
* The {@link AccessLogFactory} class offers several helper methods to generate such a function,
* notably if one wants to {@link AccessLogFactory#createFilter(Predicate) filter} some requests out of the access log.
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @param accessLogFactory the {@link AccessLogFactory} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable, AccessLogFactory accessLogFactory) {
Objects.requireNonNull(accessLogFactory);
HttpServer dup = duplicate();
dup.configuration().accessLog = enable ? accessLogFactory : null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Customize the access log, provided access logging has been enabled through the
* {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(argProvider ->
* AccessLog.create("user-agent={}", argProvider.requestHeader("user-agent")))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* @param accessLogFactory the {@link Function} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.1
* @deprecated as of 1.0.3. Prefer the {@link #accessLog(boolean, AccessLogFactory) variant}
* with the {@link AccessLogFactory} interface instead. This method will be removed in version 1.2.0.
*/
@Deprecated
public final HttpServer accessLog(Function<AccessLogArgProvider, AccessLog> accessLogFactory) {
Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = accessLogFactory;
return dup;
}
@Override
public final HttpServer bindAddress(Supplier<? extends SocketAddress> bindAddressSupplier) {
return super.bindAddress(bindAddressSupplier);
}
@Override
public final HttpServer channelGroup(ChannelGroup channelGroup) {
return super.channelGroup(channelGroup);
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers and the provided {@link java.util.function.Predicate} matches.
* <p>
* Note: the passed {@link HttpServerRequest} and {@link HttpServerResponse}
	 * should be considered read-only and the implementation SHOULD NOT consume or
* write the request/response in this predicate.
* </p>
*
	 * @param predicate a predicate that returns true to compress the response.
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(BiPredicate<HttpServerRequest, HttpServerResponse> predicate) {
Objects.requireNonNull(predicate, "compressionPredicate");
HttpServer dup = duplicate();
dup.configuration().compressPredicate = predicate;
return dup;
}
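	/*
	 * A hedged sketch of a compression predicate: compress only textual responses.
	 * The header check is illustrative; a real predicate may inspect any read-only
	 * state of the request/response.
	 *
	 * HttpServer.create()
	 *           .compress((request, response) -> {
	 *               String contentType = response.responseHeaders().get("Content-Type");
	 *               return contentType != null && contentType.startsWith("text/");
	 *           });
	 */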
/**
* Specifies whether GZip response compression is enabled if the client request
* presents accept encoding.
*
* @param compressionEnabled if true GZip response compression
* is enabled if the client request presents accept encoding, otherwise disabled.
* @return a new {@link HttpServer}
*/
public final HttpServer compress(boolean compressionEnabled) {
HttpServer dup = duplicate();
if (compressionEnabled) {
dup.configuration().minCompressionSize = 0;
}
else {
dup.configuration().minCompressionSize = -1;
dup.configuration().compressPredicate = null;
}
return dup;
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers AND the response reaches a minimum threshold
*
* @param minResponseSize compression is performed once response size exceeds the given
* value in bytes
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(int minResponseSize) {
if (minResponseSize < 0) {
throw new IllegalArgumentException("minResponseSize must be positive");
}
HttpServer dup = duplicate();
dup.configuration().minCompressionSize = minResponseSize;
return dup;
}
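	/*
	 * A minimal sketch of the size-based variant: compress only responses larger
	 * than 1 KB (the threshold is illustrative):
	 *
	 * HttpServer.create()
	 *           .compress(1024);
	 */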
/**
* Configure the
* {@link ServerCookieEncoder}; {@link ServerCookieDecoder} will be
* chosen based on the encoder
*
* @param encoder the preferred ServerCookieEncoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder) {
Objects.requireNonNull(encoder, "encoder");
ServerCookieDecoder decoder = encoder == ServerCookieEncoder.LAX ?
ServerCookieDecoder.LAX : ServerCookieDecoder.STRICT;
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder} and {@link ServerCookieDecoder}
*
* @param encoder the preferred ServerCookieEncoder
* @param decoder the preferred ServerCookieDecoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder, ServerCookieDecoder decoder) {
Objects.requireNonNull(encoder, "encoder");
Objects.requireNonNull(decoder, "decoder");
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Specifies a custom request handler for deriving information about the connection.
*
* @param handler the forwarded header handler
* @return a new {@link HttpServer}
* @since 0.9.12
*/
public final HttpServer forwarded(BiFunction<ConnectionInfo, HttpRequest, ConnectionInfo> handler) {
Objects.requireNonNull(handler, "handler");
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = handler;
return dup;
}
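	/*
	 * A minimal sketch of a custom forwarded handler. This pass-through variant
	 * ignores all forwarding headers; a real handler would derive a new
	 * {@link ConnectionInfo} only from headers set by a trusted proxy.
	 *
	 * HttpServer.create()
	 *           .forwarded((connectionInfo, request) -> connectionInfo);
	 */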
/**
* Specifies whether support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled.
*
* @param forwardedEnabled if true support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled,
* otherwise disabled.
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer forwarded(boolean forwardedEnabled) {
if (forwardedEnabled) {
if (configuration().forwardedHeaderHandler == DefaultHttpForwardedHeaderHandler.INSTANCE) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = DefaultHttpForwardedHeaderHandler.INSTANCE;
return dup;
}
else if (configuration().forwardedHeaderHandler != null) {
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = null;
return dup;
}
return this;
}
/**
* Attach an I/O handler to react on a connected client
*
* @param handler an I/O handler that can dispose underlying connection when {@link
* Publisher} terminates. Only the first registered handler will subscribe to the
	 * returned {@link Publisher} while others will immediately cancel given the same
* {@link Connection}
*
* @return a new {@link HttpServer}
*/
public final HttpServer handle(
BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
Objects.requireNonNull(handler, "handler");
return childObserve(new HttpServerHandle(handler));
}
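	/*
	 * A minimal usage sketch of an I/O handler (the payload is illustrative):
	 *
	 * DisposableServer server =
	 *         HttpServer.create()
	 *                   .handle((request, response) -> response.sendString(Mono.just("hello")))
	 *                   .bindNow();
	 * server.onDispose().block();
	 */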
@Override
public final HttpServer host(String host) {
return super.host(host);
}
/**
* Apply HTTP/2 configuration
*
	 * @param http2Settings configures the {@link Http2SettingsSpec} for the server
* @return a new {@link HttpServer}
*/
public final HttpServer http2Settings(Consumer<Http2SettingsSpec.Builder> http2Settings) {
Objects.requireNonNull(http2Settings, "http2Settings");
Http2SettingsSpec.Builder builder = Http2SettingsSpec.builder();
http2Settings.accept(builder);
Http2SettingsSpec settings = builder.build();
if (settings.equals(configuration().http2Settings)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().http2Settings = settings;
return dup;
}
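	/*
	 * A hedged sketch: caps the concurrent streams and the initial flow-control
	 * window per connection. The values are illustrative; the builder methods
	 * mirror the HTTP/2 SETTINGS names exposed by {@link Http2SettingsSpec.Builder}.
	 *
	 * HttpServer.create()
	 *           .protocol(HttpProtocol.H2)
	 *           .http2Settings(settings -> settings.maxConcurrentStreams(100)
	 *                                              .initialWindowSize(1024 * 1024));
	 */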
/**
* Apply HTTP form decoder configuration.
* The configuration is used when {@link HttpServerRequest#receiveForm()} is invoked.
* When a specific configuration per request is needed {@link HttpServerRequest#receiveForm(Consumer)}
* should be used.
*
* @param formDecoderBuilder {@link HttpServerFormDecoderProvider.Builder} for HTTP form decoder configuration
* @return a new {@link HttpServer}
* @since 1.0.11
*/
public final HttpServer httpFormDecoder(Consumer<HttpServerFormDecoderProvider.Builder> formDecoderBuilder) {
Objects.requireNonNull(formDecoderBuilder, "formDecoderBuilder");
HttpServerFormDecoderProvider.Build builder = new HttpServerFormDecoderProvider.Build();
formDecoderBuilder.accept(builder);
HttpServerFormDecoderProvider formDecoderProvider = builder.build();
if (formDecoderProvider.equals(configuration().formDecoderProvider)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().formDecoderProvider = formDecoderProvider;
return dup;
}
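	/*
	 * A hedged sketch: moves uploads to disk past 64 KB and caps the total form
	 * size. The limits are illustrative and the builder method names are
	 * assumptions based on {@link HttpServerFormDecoderProvider.Builder}.
	 *
	 * HttpServer.create()
	 *           .httpFormDecoder(builder -> builder.maxInMemorySize(64 * 1024)
	 *                                              .maxSize(10 * 1024 * 1024));
	 */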
/**
* When {@link HttpMessage} is about to be logged the configured factory will be used for
* generating a sanitized log message.
* <p>
* Default to {@link ReactorNettyHttpMessageLogFactory}:
* <ul>
* <li>hides the query from the uri</li>
* <li>hides the headers values</li>
* <li>only {@link DecoderException} message is presented</li>
* </ul>
*
* @param httpMessageLogFactory the factory for generating the log message
* @return a new {@link HttpServer}
* @since 1.0.24
*/
public final HttpServer httpMessageLogFactory(HttpMessageLogFactory httpMessageLogFactory) {
Objects.requireNonNull(httpMessageLogFactory, "httpMessageLogFactory");
HttpServer dup = duplicate();
dup.configuration().httpMessageLogFactory = httpMessageLogFactory;
return dup;
}
/**
* Configure the {@link io.netty.handler.codec.http.HttpServerCodec}'s request decoding options.
*
* @param requestDecoderOptions a function to mutate the provided Http request decoder options
* @return a new {@link HttpServer}
*/
public final HttpServer httpRequestDecoder(Function<HttpRequestDecoderSpec, HttpRequestDecoderSpec> requestDecoderOptions) {
Objects.requireNonNull(requestDecoderOptions, "requestDecoderOptions");
HttpRequestDecoderSpec decoder = requestDecoderOptions.apply(new HttpRequestDecoderSpec()).build();
if (decoder.equals(configuration().decoder)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().decoder = decoder;
return dup;
}
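	/*
	 * A minimal sketch: relaxes the request line and header size limits (the
	 * values are illustrative):
	 *
	 * HttpServer.create()
	 *           .httpRequestDecoder(spec -> spec.maxInitialLineLength(8 * 1024)
	 *                                           .maxHeaderSize(16 * 1024));
	 */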
/**
* Specifies an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms).
* Once the timeout is reached the connection will be closed.
* <p>If an {@code idleTimeout} is not specified, this indicates no timeout (i.e. infinite),
* which means the connection will be closed only if one of the peers decides to close it.
* <p>If the {@code idleTimeout} is less than {@code 1ms}, then {@code 1ms} will be the idle timeout.
* <p>By default {@code idleTimeout} is not specified.
*
* @param idleTimeout an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms)
* @return a new {@link HttpServer}
* @since 0.9.15
*/
public final HttpServer idleTimeout(Duration idleTimeout) {
Objects.requireNonNull(idleTimeout, "idleTimeout");
HttpServer dup = duplicate();
dup.configuration().idleTimeout = idleTimeout;
return dup;
}
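	/*
	 * A minimal sketch: close connections that stay idle between requests for
	 * more than 60 seconds (the duration is illustrative):
	 *
	 * HttpServer.create()
	 *           .idleTimeout(Duration.ofSeconds(60));
	 */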
/**
* Decorate the configured I/O handler.
* See {@link #handle(BiFunction)}.
*
* @param mapHandle A {@link BiFunction} to decorate the configured I/O handler
* @return a new {@link HttpServer}
*/
public final HttpServer mapHandle(BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle) {
Objects.requireNonNull(mapHandle, "mapHandle");
HttpServer dup = duplicate();
dup.configuration().mapHandle = mapHandle;
return dup;
}
/**
* The maximum number of HTTP/1.1 requests which can be served until the connection is closed by the server.
* Setting this attribute to:
* <ul>
	 * <li><strong>-1</strong>: The connection serves an unlimited number of requests. It is up to the I/O handler to decide
* to close the connection. This is the default behaviour.</li>
* <li><strong>1</strong>: The connection is marked as non persistent and serves just one request.</li>
* <li><strong>>1</strong>: The connection serves a number of requests up to the specified maximum number
* then the connection is closed by the server.</li>
* </ul>
* @param maxKeepAliveRequests the maximum number of HTTP/1.1 requests which can be served until
* the connection is closed by the server
* @return a new {@link HttpServer}
* @since 1.0.13
*/
public final HttpServer maxKeepAliveRequests(int maxKeepAliveRequests) {
if (maxKeepAliveRequests < -1 || maxKeepAliveRequests == 0) {
throw new IllegalArgumentException("maxKeepAliveRequests must be positive or -1");
}
HttpServer dup = duplicate();
dup.configuration().maxKeepAliveRequests = maxKeepAliveRequests;
return dup;
}
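	/*
	 * A minimal sketch: serve at most 100 requests per HTTP/1.1 connection, then
	 * close it (the limit is illustrative):
	 *
	 * HttpServer.create()
	 *           .maxKeepAliveRequests(100);
	 */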
/**
* Whether to enable metrics to be collected and registered in Micrometer's
* {@link io.micrometer.core.instrument.Metrics#globalRegistry globalRegistry}
* under the name {@link reactor.netty.Metrics#HTTP_SERVER_PREFIX}.
* <p>{@code uriTagValue} function receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag.
* For example instead of using the actual uri {@code "/users/1"} as uri tag value, templated uri
* {@code "/users/{id}"} can be used.
* <p><strong>Note:</strong>
	 * It is strongly recommended to provide a template-like form for the URIs. Without a conversion to a template-like form,
* each distinct URI leads to the creation of a distinct tag, which takes a lot of memory for the metrics.
* <p><strong>Note:</strong>
	 * It is strongly recommended that applications configure an upper limit for the number of URI tags.
* For example:
* <pre class="code">
* Metrics.globalRegistry
* .config()
* .meterFilter(MeterFilter.maximumAllowableTags(HTTP_SERVER_PREFIX, URI, 100, MeterFilter.deny()));
* </pre>
* <p>By default metrics are not enabled.
*
* @param enable true enables metrics collection; false disables it
* @param uriTagValue a function that receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer metrics(boolean enable, Function<String, String> uriTagValue) {
if (enable) {
if (!Metrics.isMicrometerAvailable() && !Metrics.isTracingAvailable()) {
throw new UnsupportedOperationException(
"To enable metrics, you must add the dependencies to `io.micrometer:micrometer-core`" +
" and `io.micrometer:micrometer-tracing` to the class path first");
}
if (uriTagValue == Function.<String>identity()) {
log.debug("Metrics are enabled with [uriTagValue=Function#identity]. " +
"It is strongly recommended to provide template-like form for the URIs. " +
"Without a conversion to a template-like form, each distinct URI leads " +
"to the creation of a distinct tag, which takes a lot of memory for the metrics.");
}
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(() -> configuration().defaultMetricsRecorder());
dup.configuration().uriTagValue = uriTagValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
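	/*
	 * A minimal sketch: enables Micrometer metrics and maps id-bearing URIs to a
	 * template-like form so each id does not create a distinct tag (the mapping
	 * is illustrative):
	 *
	 * HttpServer.create()
	 *           .metrics(true, uri -> uri.startsWith("/users/") ? "/users/{id}" : uri);
	 */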
@Override
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder) {
return super.metrics(enable, recorder);
}
/**
* Specifies whether the metrics are enabled on the {@link HttpServer}.
* All generated metrics are provided to the specified recorder which is only
* instantiated if metrics are being enabled (the instantiation is not lazy,
* but happens immediately, while configuring the {@link HttpServer}).
* <p>{@code uriValue} function receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* For example instead of using the actual uri {@code "/users/1"} as uri value, templated uri
* {@code "/users/{id}"} can be used.
*
* @param enable true enables metrics collection; false disables it
* @param recorder a supplier for the metrics recorder that receives the collected metrics
* @param uriValue a function that receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* @return a new {@link HttpServer}
*/
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder, Function<String, String> uriValue) {
if (enable) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(recorder);
dup.configuration().uriTagValue = uriValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
/**
* Removes any previously applied SSL configuration customization
*
* @return a new {@link HttpServer}
*/
public final HttpServer noSSL() {
if (configuration().isSecure()) {
HttpServer dup = duplicate();
dup.configuration().sslProvider = null;
return dup;
}
return this;
}
@Override
public final HttpServer port(int port) {
return super.port(port);
}
/**
* The HTTP protocol to support. Default is {@link HttpProtocol#HTTP11}.
*
* @param supportedProtocols The various {@link HttpProtocol} this server will support
*
* @return a new {@link HttpServer}
*/
public final HttpServer protocol(HttpProtocol... supportedProtocols) {
Objects.requireNonNull(supportedProtocols, "supportedProtocols");
HttpServer dup = duplicate();
dup.configuration().protocols(supportedProtocols);
return dup;
}
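	/*
	 * A minimal sketch: accept both cleartext HTTP/2 (H2C) and HTTP/1.1 on the
	 * same port:
	 *
	 * HttpServer.create()
	 *           .protocol(HttpProtocol.H2C, HttpProtocol.HTTP11);
	 */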
/**
* Specifies whether support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer is enabled.
*
* @param proxyProtocolSupportType
* <ul>
* <li>
* choose {@link ProxyProtocolSupportType#ON}
* to enable support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer.
* </li>
* <li>choose {@link ProxyProtocolSupportType#OFF} to disable the proxy protocol support.</li>
* <li>
* choose {@link ProxyProtocolSupportType#AUTO}
* then each connection of the same {@link HttpServer} will auto detect whether there is proxy protocol,
* so {@link HttpServer} can accept requests with or without proxy protocol at the same time.
* </li>
* </ul>
*
* @return a new {@link HttpServer}
*/
public final HttpServer proxyProtocol(ProxyProtocolSupportType proxyProtocolSupportType) {
Objects.requireNonNull(proxyProtocolSupportType, "The parameter: proxyProtocolSupportType must not be null.");
if (proxyProtocolSupportType == configuration().proxyProtocolSupportType) {
return this;
}
if (proxyProtocolSupportType == ProxyProtocolSupportType.ON ||
proxyProtocolSupportType == ProxyProtocolSupportType.AUTO) {
if (!HAProxyMessageReader.isProxyProtocolAvailable()) {
throw new UnsupportedOperationException(
"To enable proxyProtocol, you must add the dependency `io.netty:netty-codec-haproxy`" +
" to the class path first");
}
}
HttpServer dup = duplicate();
dup.configuration().proxyProtocolSupportType = proxyProtocolSupportType;
return dup;
}
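	/*
	 * A minimal sketch: auto-detect the HAProxy proxy protocol per connection
	 * (requires io.netty:netty-codec-haproxy on the class path):
	 *
	 * HttpServer.create()
	 *           .proxyProtocol(ProxyProtocolSupportType.AUTO);
	 */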
/**
* Specifies the maximum duration allowed between each network-level read operation while reading a given request
* content (resolution: ms). In other words, {@link io.netty.handler.timeout.ReadTimeoutHandler} is added to the
* channel pipeline after all the request headers are received, and removed from the channel pipeline after the
* content is fully received.
* If the {@code readTimeout} is {@code null}, any previous setting will be removed and no
* {@code readTimeout} will be applied.
* If the {@code readTimeout} is less than {@code 1ms}, then {@code 1ms} will be the
* {@code readTimeout}.
*
* @param readTimeout the maximum duration allowed between each network-level read operation while reading a given
* request content (resolution: ms)
* @return a new {@link HttpServer}
* @since 1.1.9
* @see io.netty.handler.timeout.ReadTimeoutHandler
*/
public final HttpServer readTimeout(@Nullable Duration readTimeout) {
if (Objects.equals(readTimeout, configuration().readTimeout)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().readTimeout = readTimeout;
return dup;
}
/**
* Specifies the maximum duration for reading a given request content (resolution: ms).
* If the {@code requestTimeout} is {@code null}, any previous setting will be removed and no
* {@code requestTimeout} will be applied.
* If the {@code requestTimeout} is less than {@code 1ms}, then {@code 1ms} will be the
* {@code requestTimeout}.
*
* @param requestTimeout the maximum duration for reading a given request content (resolution: ms)
* @return a new {@link HttpServer}
* @since 1.1.9
*/
public final HttpServer requestTimeout(@Nullable Duration requestTimeout) {
if (Objects.equals(requestTimeout, configuration().requestTimeout)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().requestTimeout = requestTimeout;
return dup;
}
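	/*
	 * A minimal sketch combining the two timeouts above: at most 5 seconds between
	 * successive network-level reads of a request, and at most 30 seconds to read
	 * a request in full (the durations are illustrative):
	 *
	 * HttpServer.create()
	 *           .readTimeout(Duration.ofSeconds(5))
	 *           .requestTimeout(Duration.ofSeconds(30));
	 */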
/**
* Define routes for the server through the provided {@link HttpServerRoutes} builder.
*
* @param routesBuilder provides a route builder to be mutated in order to define routes.
* @return a new {@link HttpServer} starting the router on subscribe
*/
public final HttpServer route(Consumer<? super HttpServerRoutes> routesBuilder) {
		Objects.requireNonNull(routesBuilder, "routesBuilder");
HttpServerRoutes routes = HttpServerRoutes.newRoutes();
routesBuilder.accept(routes);
return handle(routes);
}
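	/*
	 * A minimal usage sketch of the routes builder (paths and payloads are
	 * illustrative):
	 *
	 * HttpServer.create()
	 *           .route(routes ->
	 *               routes.get("/hello", (request, response) -> response.sendString(Mono.just("Hello World!")))
	 *                     .post("/echo", (request, response) -> response.send(request.receive().retain())))
	 *           .bindNow();
	 */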
/**
* Apply an SSL configuration customization via the passed builder. The builder
	 * will produce the {@link SslContext} to be used, with a default handshake
	 * timeout of {@code 10} seconds unless the environment property {@code
* reactor.netty.tcp.sslHandshakeTimeout} is set.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @return a new {@link HttpServer}
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder) {
return secure(sslProviderBuilder, false);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
	 * will produce the {@link SslContext} to be used, with a default handshake
	 * timeout of {@code 10} seconds unless the environment property {@code
* reactor.netty.tcp.sslHandshakeTimeout} is set.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProviderBuilder, "sslProviderBuilder");
HttpServer dup = duplicate();
SslProvider.SslContextSpec builder = SslProvider.builder();
sslProviderBuilder.accept(builder);
dup.configuration().sslProvider = ((SslProvider.Builder) builder).build();
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
*
* @return a new {@link HttpServer}
*/
public final HttpServer secure(SslProvider sslProvider) {
return secure(sslProvider, false);
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(SslProvider sslProvider, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProvider, "sslProvider");
HttpServer dup = duplicate();
dup.configuration().sslProvider = sslProvider;
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Apply a {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method,
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.tcpConfiguration(...)' method</p>
* <pre>
* {@code
* HttpServer.tcpConfiguration(tcpServer ->
* tcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @param tcpMapper A {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
@SuppressWarnings("ReturnValueIgnored")
public final HttpServer tcpConfiguration(Function<? super TcpServer, ? extends TcpServer> tcpMapper) {
Objects.requireNonNull(tcpMapper, "tcpMapper");
HttpServerTcpConfig tcpServer = new HttpServerTcpConfig(this);
// ReturnValueIgnored is deliberate
tcpMapper.apply(tcpServer);
return tcpServer.httpServer;
}
/**
* Based on the actual configuration, returns a {@link Mono} that triggers:
* <ul>
* <li>an initialization of the event loop groups</li>
	 * <li>loading of the necessary native libraries for the transport</li>
	 * <li>loading of the necessary native libraries for the security, if any</li>
	 * </ul>
	 * By default, when this method is not used, the {@code bind operation} absorbs the extra time needed to load resources.
*
* @return a {@link Mono} representing the completion of the warmup
* @since 1.0.3
*/
@Override
public Mono<Void> warmup() {
return Mono.when(
super.warmup(),
Mono.fromRunnable(() -> {
SslProvider provider = configuration().sslProvider();
if (provider != null && !(provider.getSslContext() instanceof JdkSslContext)) {
OpenSsl.version();
}
}));
}
@Override
public final HttpServer wiretap(boolean enable) {
return super.wiretap(enable);
}
static final Logger log = Loggers.getLogger(HttpServer.class);
static final class HttpServerHandle implements ConnectionObserver {
final BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler;
HttpServerHandle(BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
this.handler = handler;
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void onStateChange(Connection connection, State newState) {
if (newState == HttpServerState.REQUEST_RECEIVED) {
try {
if (log.isDebugEnabled()) {
log.debug(format(connection.channel(), "Handler is being applied: {}"), handler);
}
HttpServerOperations ops = (HttpServerOperations) connection;
Publisher<Void> publisher = handler.apply(ops, ops);
Mono<Void> mono = Mono.deferContextual(ctx -> {
ops.currentContext = Context.of(ctx);
return Mono.fromDirect(publisher);
});
if (ops.mapHandle != null) {
mono = ops.mapHandle.apply(mono, connection);
}
mono.subscribe(ops.disposeSubscriber());
}
catch (Throwable t) {
log.error(format(connection.channel(), ""), t);
//"FutureReturnValueIgnored" this is deliberate
connection.channel()
.close();
}
}
}
}
}
| violetagg | cc1d8e82d5fe578f1144f5aceb62a6554bbd5be2 | 70f5161fc5245774ac5d3491026af95952a72325 | Ah, ok. In that case, I'd change "once" to "after"
"In other words, {@link io.netty.handler.timeout.ReadTimeoutHandler} is added to the channel pipeline after all the request headers are received, and removed from the channel pipeline after the content is fully received." | philsttr | 8 |
reactor/reactor-netty | 2,836 | `HttpServer`: Add API for read related timeouts | Fixes #2770 | null | 2023-06-19 06:36:05+00:00 | 2023-06-20 16:47:29+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServer.java | /*
* Copyright (c) 2011-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import java.net.SocketAddress;
import java.time.Duration;
import java.util.Objects;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
import io.netty.channel.group.ChannelGroup;
import io.netty.handler.codec.DecoderException;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.handler.ssl.JdkSslContext;
import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.util.SelfSignedCertificate;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.netty.ConnectionObserver;
import reactor.netty.channel.ChannelMetricsRecorder;
import reactor.netty.http.Http2SettingsSpec;
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.logging.HttpMessageLogFactory;
import reactor.netty.http.logging.ReactorNettyHttpMessageLogFactory;
import reactor.netty.http.server.logging.AccessLog;
import reactor.netty.http.server.logging.AccessLogArgProvider;
import reactor.netty.http.server.logging.AccessLogFactory;
import reactor.netty.internal.util.Metrics;
import reactor.netty.tcp.SslProvider;
import reactor.netty.tcp.TcpServer;
import reactor.netty.transport.ServerTransport;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.context.Context;
import static reactor.netty.ReactorNetty.format;
/**
 * An HttpServer allows building an HTTP server in a safe immutable way; the
 * server is materialized and connecting when {@link #bind()} is ultimately called.
 * <p>Examples:
* <pre>
* {@code
* HttpServer.create()
* .host("0.0.0.0")
* .handle((req, res) -> res.sendString(Flux.just("hello")))
* .bind()
* .block();
* }
* </pre>
*
* @author Stephane Maldini
* @author Violeta Georgieva
*/
public abstract class HttpServer extends ServerTransport<HttpServer, HttpServerConfig> {
/**
* Prepare an {@link HttpServer}
*
* @return a new {@link HttpServer}
*/
public static HttpServer create() {
return HttpServerBind.INSTANCE;
}
/**
* Prepare an {@link HttpServer}
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method,
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.from(...)' method</p>
* <pre>
* {@code
* HttpServer.from(
* TcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
public static HttpServer from(TcpServer tcpServer) {
Objects.requireNonNull(tcpServer, "tcpServer");
return HttpServerBind.applyTcpServerConfig(tcpServer.configuration());
}
/**
* Enable or disable the access log. If enabled, the default log system will be used.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true)
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable) {
HttpServer dup = duplicate();
dup.configuration().accessLog = null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Enable or disable the access log and customize it through an {@link AccessLogFactory}.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true, AccessLogFactory.createFilter(
* args -> String.valueOf(args.uri()).startsWith("/health"),
* args -> AccessLog.create("user-agent={}", args.requestHeader("user-agent"))
	 * ))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
* The {@link AccessLogFactory} class offers several helper methods to generate such a function,
* notably if one wants to {@link AccessLogFactory#createFilter(Predicate) filter} some requests out of the access log.
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @param accessLogFactory the {@link AccessLogFactory} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable, AccessLogFactory accessLogFactory) {
Objects.requireNonNull(accessLogFactory);
HttpServer dup = duplicate();
dup.configuration().accessLog = enable ? accessLogFactory : null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Customize the access log, provided access logging has been enabled through the
* {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(argProvider ->
* AccessLog.create("user-agent={}", argProvider.requestHeader("user-agent")))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* @param accessLogFactory the {@link Function} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.1
* @deprecated as of 1.0.3. Prefer the {@link #accessLog(boolean, AccessLogFactory) variant}
* with the {@link AccessLogFactory} interface instead. This method will be removed in version 1.2.0.
*/
@Deprecated
public final HttpServer accessLog(Function<AccessLogArgProvider, AccessLog> accessLogFactory) {
Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = accessLogFactory;
return dup;
}
@Override
public final HttpServer bindAddress(Supplier<? extends SocketAddress> bindAddressSupplier) {
return super.bindAddress(bindAddressSupplier);
}
@Override
public final HttpServer channelGroup(ChannelGroup channelGroup) {
return super.channelGroup(channelGroup);
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers and the provided {@link java.util.function.Predicate} matches.
* <p>
* Note: the passed {@link HttpServerRequest} and {@link HttpServerResponse}
	 * should be considered read-only and the implementation SHOULD NOT consume or
* write the request/response in this predicate.
* </p>
*
	 * @param predicate a predicate that returns true to compress the response.
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(BiPredicate<HttpServerRequest, HttpServerResponse> predicate) {
Objects.requireNonNull(predicate, "compressionPredicate");
HttpServer dup = duplicate();
dup.configuration().compressPredicate = predicate;
return dup;
}
/**
* Specifies whether GZip response compression is enabled if the client request
* presents accept encoding.
*
* @param compressionEnabled if true GZip response compression
* is enabled if the client request presents accept encoding, otherwise disabled.
* @return a new {@link HttpServer}
*/
public final HttpServer compress(boolean compressionEnabled) {
HttpServer dup = duplicate();
if (compressionEnabled) {
dup.configuration().minCompressionSize = 0;
}
else {
dup.configuration().minCompressionSize = -1;
dup.configuration().compressPredicate = null;
}
return dup;
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers AND the response reaches a minimum threshold
*
* @param minResponseSize compression is performed once response size exceeds the given
* value in bytes
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(int minResponseSize) {
if (minResponseSize < 0) {
throw new IllegalArgumentException("minResponseSize must be positive");
}
HttpServer dup = duplicate();
dup.configuration().minCompressionSize = minResponseSize;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder}; {@link ServerCookieDecoder} will be
* chosen based on the encoder
*
* @param encoder the preferred ServerCookieEncoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder) {
Objects.requireNonNull(encoder, "encoder");
ServerCookieDecoder decoder = encoder == ServerCookieEncoder.LAX ?
ServerCookieDecoder.LAX : ServerCookieDecoder.STRICT;
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder} and {@link ServerCookieDecoder}
*
* @param encoder the preferred ServerCookieEncoder
* @param decoder the preferred ServerCookieDecoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder, ServerCookieDecoder decoder) {
Objects.requireNonNull(encoder, "encoder");
Objects.requireNonNull(decoder, "decoder");
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Specifies a custom request handler for deriving information about the connection.
*
* @param handler the forwarded header handler
* @return a new {@link HttpServer}
* @since 0.9.12
*/
public final HttpServer forwarded(BiFunction<ConnectionInfo, HttpRequest, ConnectionInfo> handler) {
Objects.requireNonNull(handler, "handler");
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = handler;
return dup;
}
/**
* Specifies whether support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled.
*
* @param forwardedEnabled if true support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled,
* otherwise disabled.
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer forwarded(boolean forwardedEnabled) {
if (forwardedEnabled) {
if (configuration().forwardedHeaderHandler == DefaultHttpForwardedHeaderHandler.INSTANCE) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = DefaultHttpForwardedHeaderHandler.INSTANCE;
return dup;
}
else if (configuration().forwardedHeaderHandler != null) {
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = null;
return dup;
}
return this;
}
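	/*
	 * A minimal sketch: honor the standard "Forwarded"/"X-Forwarded-*" request
	 * headers with the built-in handler:
	 *
	 * HttpServer.create()
	 *           .forwarded(true);
	 */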
/**
* Attach an I/O handler to react on a connected client
*
* @param handler an I/O handler that can dispose underlying connection when {@link
* Publisher} terminates. Only the first registered handler will subscribe to the
	 * returned {@link Publisher} while others will immediately cancel given the same
* {@link Connection}
*
* @return a new {@link HttpServer}
*/
public final HttpServer handle(
BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
Objects.requireNonNull(handler, "handler");
return childObserve(new HttpServerHandle(handler));
}
@Override
public final HttpServer host(String host) {
return super.host(host);
}
/**
* Apply HTTP/2 configuration
*
	 * @param http2Settings configures the {@link Http2SettingsSpec} for the server
* @return a new {@link HttpServer}
*/
public final HttpServer http2Settings(Consumer<Http2SettingsSpec.Builder> http2Settings) {
Objects.requireNonNull(http2Settings, "http2Settings");
Http2SettingsSpec.Builder builder = Http2SettingsSpec.builder();
http2Settings.accept(builder);
Http2SettingsSpec settings = builder.build();
if (settings.equals(configuration().http2Settings)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().http2Settings = settings;
return dup;
}
/**
* Apply HTTP form decoder configuration.
* The configuration is used when {@link HttpServerRequest#receiveForm()} is invoked.
* When a specific configuration per request is needed {@link HttpServerRequest#receiveForm(Consumer)}
* should be used.
*
* @param formDecoderBuilder {@link HttpServerFormDecoderProvider.Builder} for HTTP form decoder configuration
* @return a new {@link HttpServer}
* @since 1.0.11
*/
public final HttpServer httpFormDecoder(Consumer<HttpServerFormDecoderProvider.Builder> formDecoderBuilder) {
Objects.requireNonNull(formDecoderBuilder, "formDecoderBuilder");
HttpServerFormDecoderProvider.Build builder = new HttpServerFormDecoderProvider.Build();
formDecoderBuilder.accept(builder);
HttpServerFormDecoderProvider formDecoderProvider = builder.build();
if (formDecoderProvider.equals(configuration().formDecoderProvider)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().formDecoderProvider = formDecoderProvider;
return dup;
}
/**
* When {@link HttpMessage} is about to be logged the configured factory will be used for
* generating a sanitized log message.
* <p>
* Default to {@link ReactorNettyHttpMessageLogFactory}:
* <ul>
* <li>hides the query from the uri</li>
* <li>hides the headers values</li>
* <li>only {@link DecoderException} message is presented</li>
* </ul>
*
* @param httpMessageLogFactory the factory for generating the log message
* @return a new {@link HttpServer}
* @since 1.0.24
*/
public final HttpServer httpMessageLogFactory(HttpMessageLogFactory httpMessageLogFactory) {
Objects.requireNonNull(httpMessageLogFactory, "httpMessageLogFactory");
HttpServer dup = duplicate();
dup.configuration().httpMessageLogFactory = httpMessageLogFactory;
return dup;
}
/**
* Configure the {@link io.netty.handler.codec.http.HttpServerCodec}'s request decoding options.
*
* @param requestDecoderOptions a function to mutate the provided Http request decoder options
* @return a new {@link HttpServer}
*/
public final HttpServer httpRequestDecoder(Function<HttpRequestDecoderSpec, HttpRequestDecoderSpec> requestDecoderOptions) {
Objects.requireNonNull(requestDecoderOptions, "requestDecoderOptions");
HttpRequestDecoderSpec decoder = requestDecoderOptions.apply(new HttpRequestDecoderSpec()).build();
if (decoder.equals(configuration().decoder)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().decoder = decoder;
return dup;
}
/**
* Specifies an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms).
* Once the timeout is reached the connection will be closed.
* <p>If an {@code idleTimeout} is not specified, this indicates no timeout (i.e. infinite),
* which means the connection will be closed only if one of the peers decides to close it.
* <p>If the {@code idleTimeout} is less than {@code 1ms}, then {@code 1ms} will be the idle timeout.
* <p>By default {@code idleTimeout} is not specified.
*
* @param idleTimeout an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms)
* @return a new {@link HttpServer}
* @since 0.9.15
*/
public final HttpServer idleTimeout(Duration idleTimeout) {
Objects.requireNonNull(idleTimeout, "idleTimeout");
HttpServer dup = duplicate();
dup.configuration().idleTimeout = idleTimeout;
return dup;
}
/**
* Decorate the configured I/O handler.
* See {@link #handle(BiFunction)}.
*
* @param mapHandle A {@link BiFunction} to decorate the configured I/O handler
* @return a new {@link HttpServer}
*/
public final HttpServer mapHandle(BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle) {
Objects.requireNonNull(mapHandle, "mapHandle");
HttpServer dup = duplicate();
dup.configuration().mapHandle = mapHandle;
return dup;
}
/**
* The maximum number of HTTP/1.1 requests which can be served until the connection is closed by the server.
* Setting this attribute to:
* <ul>
* <li><strong>-1</strong>: The connection serves an unlimited number of requests. It is up to the I/O handler to decide
* to close the connection. This is the default behaviour.</li>
* <li><strong>1</strong>: The connection is marked as non-persistent and serves just one request.</li>
* <li><strong>&gt;1</strong>: The connection serves a number of requests up to the specified maximum number,
* then the connection is closed by the server.</li>
* </ul>
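* <p>For example, a sketch closing each connection after it has served 100 requests
* (the count is illustrative):</p>
* <pre>
* {@code
* HttpServer.create()
*           .maxKeepAliveRequests(100)
*           .bindNow();
* }
* </pre>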
* @param maxKeepAliveRequests the maximum number of HTTP/1.1 requests which can be served until
* the connection is closed by the server
* @return a new {@link HttpServer}
* @since 1.0.13
*/
public final HttpServer maxKeepAliveRequests(int maxKeepAliveRequests) {
if (maxKeepAliveRequests < -1 || maxKeepAliveRequests == 0) {
throw new IllegalArgumentException("maxKeepAliveRequests must be positive or -1");
}
HttpServer dup = duplicate();
dup.configuration().maxKeepAliveRequests = maxKeepAliveRequests;
return dup;
}
/**
* Whether to enable metrics to be collected and registered in Micrometer's
* {@link io.micrometer.core.instrument.Metrics#globalRegistry globalRegistry}
* under the name {@link reactor.netty.Metrics#HTTP_SERVER_PREFIX}.
* <p>{@code uriTagValue} function receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag.
* For example instead of using the actual uri {@code "/users/1"} as uri tag value, templated uri
* {@code "/users/{id}"} can be used.
* <p><strong>Note:</strong>
* It is strongly recommended to provide a template-like form for the URIs. Without a conversion to a template-like form,
* each distinct URI leads to the creation of a distinct tag, which takes a lot of memory for the metrics.
* <p><strong>Note:</strong>
* It is strongly recommended that applications configure an upper limit for the number of URI tags.
* For example:
* <pre class="code">
* Metrics.globalRegistry
* .config()
* .meterFilter(MeterFilter.maximumAllowableTags(HTTP_SERVER_PREFIX, URI, 100, MeterFilter.deny()));
* </pre>
* <p>By default metrics are not enabled.
*
* @param enable true enables metrics collection; false disables it
* @param uriTagValue a function that receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer metrics(boolean enable, Function<String, String> uriTagValue) {
if (enable) {
if (!Metrics.isMicrometerAvailable() && !Metrics.isTracingAvailable()) {
throw new UnsupportedOperationException(
"To enable metrics, you must add the dependencies to `io.micrometer:micrometer-core`" +
" and `io.micrometer:micrometer-tracing` to the class path first");
}
if (uriTagValue == Function.<String>identity()) {
log.debug("Metrics are enabled with [uriTagValue=Function#identity]. " +
"It is strongly recommended to provide template-like form for the URIs. " +
"Without a conversion to a template-like form, each distinct URI leads " +
"to the creation of a distinct tag, which takes a lot of memory for the metrics.");
}
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(() -> configuration().defaultMetricsRecorder());
dup.configuration().uriTagValue = uriTagValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
@Override
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder) {
return super.metrics(enable, recorder);
}
/**
* Specifies whether the metrics are enabled on the {@link HttpServer}.
* All generated metrics are provided to the specified recorder which is only
* instantiated if metrics are being enabled (the instantiation is not lazy,
* but happens immediately, while configuring the {@link HttpServer}).
* <p>{@code uriValue} function receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* For example instead of using the actual uri {@code "/users/1"} as uri value, templated uri
* {@code "/users/{id}"} can be used.
*
* @param enable true enables metrics collection; false disables it
* @param recorder a supplier for the metrics recorder that receives the collected metrics
* @param uriValue a function that receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* @return a new {@link HttpServer}
*/
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder, Function<String, String> uriValue) {
if (enable) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(recorder);
dup.configuration().uriTagValue = uriValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
/**
* Removes any previously applied SSL configuration customization.
*
* @return a new {@link HttpServer}
*/
public final HttpServer noSSL() {
if (configuration().isSecure()) {
HttpServer dup = duplicate();
dup.configuration().sslProvider = null;
return dup;
}
return this;
}
@Override
public final HttpServer port(int port) {
return super.port(port);
}
/**
* The HTTP protocol to support. Default is {@link HttpProtocol#HTTP11}.
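* <p>A sketch enabling both HTTP/1.1 and cleartext HTTP/2 (the protocol choice is illustrative):</p>
* <pre>
* {@code
* HttpServer.create()
*           .protocol(HttpProtocol.HTTP11, HttpProtocol.H2C)
*           .bindNow();
* }
* </pre>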
*
* @param supportedProtocols The various {@link HttpProtocol} this server will support
*
* @return a new {@link HttpServer}
*/
public final HttpServer protocol(HttpProtocol... supportedProtocols) {
Objects.requireNonNull(supportedProtocols, "supportedProtocols");
HttpServer dup = duplicate();
dup.configuration().protocols(supportedProtocols);
return dup;
}
/**
* Specifies whether support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer is enabled.
*
* @param proxyProtocolSupportType
* <ul>
* <li>
* choose {@link ProxyProtocolSupportType#ON}
* to enable support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer.
* </li>
* <li>choose {@link ProxyProtocolSupportType#OFF} to disable the proxy protocol support.</li>
* <li>
* choose {@link ProxyProtocolSupportType#AUTO}
* then each connection of the same {@link HttpServer} will auto detect whether there is proxy protocol,
* so {@link HttpServer} can accept requests with or without proxy protocol at the same time.
* </li>
* </ul>
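* <p>A minimal sketch; note that the optional {@code io.netty:netty-codec-haproxy}
* dependency must be on the class path:</p>
* <pre>
* {@code
* HttpServer.create()
*           .proxyProtocol(ProxyProtocolSupportType.AUTO)
*           .bindNow();
* }
* </pre>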
*
* @return a new {@link HttpServer}
*/
public final HttpServer proxyProtocol(ProxyProtocolSupportType proxyProtocolSupportType) {
Objects.requireNonNull(proxyProtocolSupportType, "The parameter: proxyProtocolSupportType must not be null.");
if (proxyProtocolSupportType == configuration().proxyProtocolSupportType) {
return this;
}
if (proxyProtocolSupportType == ProxyProtocolSupportType.ON ||
proxyProtocolSupportType == ProxyProtocolSupportType.AUTO) {
if (!HAProxyMessageReader.isProxyProtocolAvailable()) {
throw new UnsupportedOperationException(
"To enable proxyProtocol, you must add the dependency `io.netty:netty-codec-haproxy`" +
" to the class path first");
}
}
HttpServer dup = duplicate();
dup.configuration().proxyProtocolSupportType = proxyProtocolSupportType;
return dup;
}
/**
* Define routes for the server through the provided {@link HttpServerRoutes} builder.
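* <p>For example:</p>
* <pre>
* {@code
* HttpServer.create()
*           .route(r -> r.get("/hello",
*                   (req, res) -> res.sendString(Mono.just("Hello World!"))))
*           .bindNow();
* }
* </pre>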
*
* @param routesBuilder provides a route builder to be mutated in order to define routes.
* @return a new {@link HttpServer} starting the router on subscribe
*/
public final HttpServer route(Consumer<? super HttpServerRoutes> routesBuilder) {
Objects.requireNonNull(routesBuilder, "routesBuilder");
HttpServerRoutes routes = HttpServerRoutes.newRoutes();
routesBuilder.accept(routes);
return handle(routes);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
* will produce the {@link SslContext} to be used, with a default handshake timeout of
* {@code 10} seconds unless the environment property {@code
* reactor.netty.tcp.sslHandshakeTimeout} is set.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @return a new {@link HttpServer}
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder) {
return secure(sslProviderBuilder, false);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
* will produce the {@link SslContext} to be used, with a default handshake timeout of
* {@code 10} seconds unless the environment property {@code
* reactor.netty.tcp.sslHandshakeTimeout} is set.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProviderBuilder, "sslProviderBuilder");
HttpServer dup = duplicate();
SslProvider.SslContextSpec builder = SslProvider.builder();
sslProviderBuilder.accept(builder);
dup.configuration().sslProvider = ((SslProvider.Builder) builder).build();
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
*
* @return a new {@link HttpServer}
*/
public final HttpServer secure(SslProvider sslProvider) {
return secure(sslProvider, false);
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(SslProvider sslProvider, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProvider, "sslProvider");
HttpServer dup = duplicate();
dup.configuration().sslProvider = sslProvider;
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Apply a {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method,
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.tcpConfiguration(...)' method</p>
* <pre>
* {@code
* HttpServer.tcpConfiguration(tcpServer ->
* tcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @param tcpMapper A {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
@SuppressWarnings("ReturnValueIgnored")
public final HttpServer tcpConfiguration(Function<? super TcpServer, ? extends TcpServer> tcpMapper) {
Objects.requireNonNull(tcpMapper, "tcpMapper");
HttpServerTcpConfig tcpServer = new HttpServerTcpConfig(this);
// ReturnValueIgnored is deliberate
tcpMapper.apply(tcpServer);
return tcpServer.httpServer;
}
/**
* Based on the actual configuration, returns a {@link Mono} that triggers:
* <ul>
* <li>initializes the event loop groups</li>
* <li>loads the necessary native libraries for the transport</li>
* <li>loads the necessary native libraries for security, if any</li>
* </ul>
* By default, when this method is not used, the {@code bind operation} absorbs the extra time needed to load resources.
*
* @return a {@link Mono} representing the completion of the warmup
* @since 1.0.3
*/
@Override
public Mono<Void> warmup() {
return Mono.when(
super.warmup(),
Mono.fromRunnable(() -> {
SslProvider provider = configuration().sslProvider();
if (provider != null && !(provider.getSslContext() instanceof JdkSslContext)) {
OpenSsl.version();
}
}));
}
@Override
public final HttpServer wiretap(boolean enable) {
return super.wiretap(enable);
}
static final Logger log = Loggers.getLogger(HttpServer.class);
static final class HttpServerHandle implements ConnectionObserver {
final BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler;
HttpServerHandle(BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
this.handler = handler;
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void onStateChange(Connection connection, State newState) {
if (newState == HttpServerState.REQUEST_RECEIVED) {
try {
if (log.isDebugEnabled()) {
log.debug(format(connection.channel(), "Handler is being applied: {}"), handler);
}
HttpServerOperations ops = (HttpServerOperations) connection;
Publisher<Void> publisher = handler.apply(ops, ops);
Mono<Void> mono = Mono.deferContextual(ctx -> {
ops.currentContext = Context.of(ctx);
return Mono.fromDirect(publisher);
});
if (ops.mapHandle != null) {
mono = ops.mapHandle.apply(mono, connection);
}
mono.subscribe(ops.disposeSubscriber());
}
catch (Throwable t) {
log.error(format(connection.channel(), ""), t);
//"FutureReturnValueIgnored" this is deliberate
connection.channel()
.close();
}
}
}
}
}
| /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import java.net.SocketAddress;
import java.time.Duration;
import java.util.Objects;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
import io.netty.channel.group.ChannelGroup;
import io.netty.handler.codec.DecoderException;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.handler.ssl.JdkSslContext;
import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.util.SelfSignedCertificate;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.netty.ConnectionObserver;
import reactor.netty.channel.ChannelMetricsRecorder;
import reactor.netty.http.Http2SettingsSpec;
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.logging.HttpMessageLogFactory;
import reactor.netty.http.logging.ReactorNettyHttpMessageLogFactory;
import reactor.netty.http.server.logging.AccessLog;
import reactor.netty.http.server.logging.AccessLogArgProvider;
import reactor.netty.http.server.logging.AccessLogFactory;
import reactor.netty.internal.util.Metrics;
import reactor.netty.tcp.SslProvider;
import reactor.netty.tcp.TcpServer;
import reactor.netty.transport.ServerTransport;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import static reactor.netty.ReactorNetty.format;
/**
* An HttpServer allows building, in a safe immutable way, an HTTP server that is
* materialized and connecting when {@link #bind()} is ultimately called.
* <p>Examples:
* <pre>
* {@code
* HttpServer.create()
* .host("0.0.0.0")
* .handle((req, res) -> res.sendString(Flux.just("hello")))
* .bind()
* .block();
* }
* </pre>
*
* @author Stephane Maldini
* @author Violeta Georgieva
*/
public abstract class HttpServer extends ServerTransport<HttpServer, HttpServerConfig> {
/**
* Prepare an {@link HttpServer}
*
* @return a new {@link HttpServer}
*/
public static HttpServer create() {
return HttpServerBind.INSTANCE;
}
/**
* Prepare an {@link HttpServer}
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method,
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.from(...)' method</p>
* <pre>
* {@code
* HttpServer.from(
* TcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .childAttr(...) // configures the child channel attributes
* .childObserve() // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .channelGroup(...) // configures the channel group
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .metrics(...) // configures the metrics
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
public static HttpServer from(TcpServer tcpServer) {
Objects.requireNonNull(tcpServer, "tcpServer");
return HttpServerBind.applyTcpServerConfig(tcpServer.configuration());
}
/**
* Enable or disable the access log. If enabled, the default log system will be used.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true)
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable) {
HttpServer dup = duplicate();
dup.configuration().accessLog = null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Enable or disable the access log and customize it through an {@link AccessLogFactory}.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(true, AccessLogFactory.createFilter(
* args -> String.valueOf(args.uri()).startsWith("/health"),
* args -> AccessLog.create("user-agent={}", args.requestHeader("user-agent"))
* ))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
* The {@link AccessLogFactory} class offers several helper methods to generate such a function,
* notably if one wants to {@link AccessLogFactory#createFilter(Predicate) filter} some requests out of the access log.
*
* Note that this method takes precedence over the {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
*
* @param enable enable or disable the access log
* @param accessLogFactory the {@link AccessLogFactory} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.3
*/
public final HttpServer accessLog(boolean enable, AccessLogFactory accessLogFactory) {
Objects.requireNonNull(accessLogFactory);
HttpServer dup = duplicate();
dup.configuration().accessLog = enable ? accessLogFactory : null;
dup.configuration().accessLogEnabled = enable;
return dup;
}
/**
* Customize the access log, provided access logging has been enabled through the
* {@value reactor.netty.ReactorNetty#ACCESS_LOG_ENABLED} system property.
* <p>
* Example:
* <pre>
* {@code
* HttpServer.create()
* .port(8080)
* .route(r -> r.get("/hello",
* (req, res) -> res.header(CONTENT_TYPE, TEXT_PLAIN)
* .sendString(Mono.just("Hello World!"))))
* .accessLog(argProvider ->
* AccessLog.create("user-agent={}", argProvider.requestHeader("user-agent")))
* .bindNow()
* .onDispose()
* .block();
* }
* </pre>
* <p>
*
* @param accessLogFactory the {@link Function} that creates an {@link AccessLog} given an {@link AccessLogArgProvider}
* @return a new {@link HttpServer}
* @since 1.0.1
* @deprecated as of 1.0.3. Prefer the {@link #accessLog(boolean, AccessLogFactory) variant}
* with the {@link AccessLogFactory} interface instead. This method will be removed in version 1.2.0.
*/
@Deprecated
public final HttpServer accessLog(Function<AccessLogArgProvider, AccessLog> accessLogFactory) {
Objects.requireNonNull(accessLogFactory, "accessLogFactory");
HttpServer dup = duplicate();
dup.configuration().accessLog = accessLogFactory;
return dup;
}
@Override
public final HttpServer bindAddress(Supplier<? extends SocketAddress> bindAddressSupplier) {
return super.bindAddress(bindAddressSupplier);
}
@Override
public final HttpServer channelGroup(ChannelGroup channelGroup) {
return super.channelGroup(channelGroup);
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers and the provided {@link java.util.function.BiPredicate} matches.
* <p>
* Note: the passed {@link HttpServerRequest} and {@link HttpServerResponse}
* should be considered read-only and the implementation SHOULD NOT consume or
* write the request/response in this predicate.
* </p>
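* <p>A sketch that compresses only when the client sends an (illustrative)
* {@code x-compress} request header:</p>
* <pre>
* {@code
* HttpServer.create()
*           .compress((req, res) -> req.requestHeaders().contains("x-compress"))
*           .bindNow();
* }
* </pre>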
*
* @param predicate that returns true to compress the response.
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(BiPredicate<HttpServerRequest, HttpServerResponse> predicate) {
Objects.requireNonNull(predicate, "compressionPredicate");
HttpServer dup = duplicate();
dup.configuration().compressPredicate = predicate;
return dup;
}
/**
* Specifies whether GZip response compression is enabled if the client request
* presents accept encoding.
*
* @param compressionEnabled if true GZip response compression
* is enabled if the client request presents accept encoding, otherwise disabled.
* @return a new {@link HttpServer}
*/
public final HttpServer compress(boolean compressionEnabled) {
HttpServer dup = duplicate();
if (compressionEnabled) {
dup.configuration().minCompressionSize = 0;
}
else {
dup.configuration().minCompressionSize = -1;
dup.configuration().compressPredicate = null;
}
return dup;
}
/**
* Enable GZip response compression if the client request presents accept encoding
* headers AND the response reaches a minimum threshold.
*
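* <p>For example, a sketch compressing responses larger than 1 KB (the threshold is illustrative):</p>
* <pre>
* {@code
* HttpServer.create()
*           .compress(1024)
*           .bindNow();
* }
* </pre>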
* @param minResponseSize compression is performed once response size exceeds the given
* value in bytes
*
* @return a new {@link HttpServer}
*/
public final HttpServer compress(int minResponseSize) {
if (minResponseSize < 0) {
throw new IllegalArgumentException("minResponseSize must be positive");
}
HttpServer dup = duplicate();
dup.configuration().minCompressionSize = minResponseSize;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder}; {@link ServerCookieDecoder} will be
* chosen based on the encoder
*
* @param encoder the preferred ServerCookieEncoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder) {
Objects.requireNonNull(encoder, "encoder");
ServerCookieDecoder decoder = encoder == ServerCookieEncoder.LAX ?
ServerCookieDecoder.LAX : ServerCookieDecoder.STRICT;
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Configure the
* {@link ServerCookieEncoder} and {@link ServerCookieDecoder}
*
* @param encoder the preferred ServerCookieEncoder
* @param decoder the preferred ServerCookieDecoder
*
* @return a new {@link HttpServer}
* @deprecated as of 1.1.0. This will be removed in 2.0.0 as Netty 5 supports only strict validation.
*/
@Deprecated
public final HttpServer cookieCodec(ServerCookieEncoder encoder, ServerCookieDecoder decoder) {
Objects.requireNonNull(encoder, "encoder");
Objects.requireNonNull(decoder, "decoder");
HttpServer dup = duplicate();
dup.configuration().cookieEncoder = encoder;
dup.configuration().cookieDecoder = decoder;
return dup;
}
/**
* Specifies a custom request handler for deriving information about the connection.
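* <p>A shape-only sketch: the handler below returns the derived information unchanged;
* a real handler would inspect the request headers:</p>
* <pre>
* {@code
* HttpServer.create()
*           .forwarded((connectionInfo, request) -> connectionInfo)
*           .bindNow();
* }
* </pre>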
*
* @param handler the forwarded header handler
* @return a new {@link HttpServer}
* @since 0.9.12
*/
public final HttpServer forwarded(BiFunction<ConnectionInfo, HttpRequest, ConnectionInfo> handler) {
Objects.requireNonNull(handler, "handler");
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = handler;
return dup;
}
/**
* Specifies whether support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled.
*
* @param forwardedEnabled if true support for the {@code "Forwarded"} and {@code "X-Forwarded-*"}
* HTTP request headers for deriving information about the connection is enabled,
* otherwise disabled.
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer forwarded(boolean forwardedEnabled) {
if (forwardedEnabled) {
if (configuration().forwardedHeaderHandler == DefaultHttpForwardedHeaderHandler.INSTANCE) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = DefaultHttpForwardedHeaderHandler.INSTANCE;
return dup;
}
else if (configuration().forwardedHeaderHandler != null) {
HttpServer dup = duplicate();
dup.configuration().forwardedHeaderHandler = null;
return dup;
}
return this;
}
/**
* Attach an I/O handler to react on a connected client
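* <p>For example:</p>
* <pre>
* {@code
* HttpServer.create()
*           .handle((req, res) -> res.sendString(Mono.just("hello")))
*           .bindNow();
* }
* </pre>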
*
* @param handler an I/O handler that can dispose the underlying connection when the {@link
* Publisher} terminates. Only the first registered handler will subscribe to the
* returned {@link Publisher}, while the others will immediately cancel for the same
* {@link Connection}
*
* @return a new {@link HttpServer}
*/
public final HttpServer handle(
BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
Objects.requireNonNull(handler, "handler");
return childObserve(new HttpServerHandle(handler));
}
@Override
public final HttpServer host(String host) {
return super.host(host);
}
/**
* Apply HTTP/2 configuration
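* <p>A sketch enabling cleartext HTTP/2 and capping concurrent streams (the limit is illustrative):</p>
* <pre>
* {@code
* HttpServer.create()
*           .protocol(HttpProtocol.H2C)
*           .http2Settings(settings -> settings.maxConcurrentStreams(100))
*           .bindNow();
* }
* </pre>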
*
* @param http2Settings configures {@link Http2SettingsSpec} before requesting
* @return a new {@link HttpServer}
*/
public final HttpServer http2Settings(Consumer<Http2SettingsSpec.Builder> http2Settings) {
Objects.requireNonNull(http2Settings, "http2Settings");
Http2SettingsSpec.Builder builder = Http2SettingsSpec.builder();
http2Settings.accept(builder);
Http2SettingsSpec settings = builder.build();
if (settings.equals(configuration().http2Settings)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().http2Settings = settings;
return dup;
}
/**
* Apply HTTP form decoder configuration.
* The configuration is used when {@link HttpServerRequest#receiveForm()} is invoked.
* When a specific configuration per request is needed {@link HttpServerRequest#receiveForm(Consumer)}
* should be used.
*
* @param formDecoderBuilder {@link HttpServerFormDecoderProvider.Builder} for HTTP form decoder configuration
* @return a new {@link HttpServer}
* @since 1.0.11
*/
public final HttpServer httpFormDecoder(Consumer<HttpServerFormDecoderProvider.Builder> formDecoderBuilder) {
Objects.requireNonNull(formDecoderBuilder, "formDecoderBuilder");
HttpServerFormDecoderProvider.Build builder = new HttpServerFormDecoderProvider.Build();
formDecoderBuilder.accept(builder);
HttpServerFormDecoderProvider formDecoderProvider = builder.build();
if (formDecoderProvider.equals(configuration().formDecoderProvider)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().formDecoderProvider = formDecoderProvider;
return dup;
}
/**
* When {@link HttpMessage} is about to be logged the configured factory will be used for
* generating a sanitized log message.
* <p>
* Defaults to {@link ReactorNettyHttpMessageLogFactory}:
* <ul>
* <li>hides the query from the uri</li>
* <li>hides the headers values</li>
* <li>only {@link DecoderException} message is presented</li>
* </ul>
*
* @param httpMessageLogFactory the factory for generating the log message
* @return a new {@link HttpServer}
* @since 1.0.24
*/
public final HttpServer httpMessageLogFactory(HttpMessageLogFactory httpMessageLogFactory) {
Objects.requireNonNull(httpMessageLogFactory, "httpMessageLogFactory");
HttpServer dup = duplicate();
dup.configuration().httpMessageLogFactory = httpMessageLogFactory;
return dup;
}
/**
* Configure the {@link io.netty.handler.codec.http.HttpServerCodec}'s request decoding options.
*
* @param requestDecoderOptions a function to mutate the provided Http request decoder options
* @return a new {@link HttpServer}
*/
public final HttpServer httpRequestDecoder(Function<HttpRequestDecoderSpec, HttpRequestDecoderSpec> requestDecoderOptions) {
Objects.requireNonNull(requestDecoderOptions, "requestDecoderOptions");
HttpRequestDecoderSpec decoder = requestDecoderOptions.apply(new HttpRequestDecoderSpec()).build();
if (decoder.equals(configuration().decoder)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().decoder = decoder;
return dup;
}
/**
* Specifies an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms).
* Once the timeout is reached the connection will be closed.
* <p>If an {@code idleTimeout} is not specified, this indicates no timeout (i.e. infinite),
* which means the connection will be closed only if one of the peers decides to close it.
* <p>If the {@code idleTimeout} is less than {@code 1ms}, then {@code 1ms} will be the idle timeout.
* <p>By default {@code idleTimeout} is not specified.
*
* @param idleTimeout an idle timeout on the connection when it is waiting for an HTTP request (resolution: ms)
* @return a new {@link HttpServer}
* @since 0.9.15
*/
public final HttpServer idleTimeout(Duration idleTimeout) {
Objects.requireNonNull(idleTimeout, "idleTimeout");
HttpServer dup = duplicate();
dup.configuration().idleTimeout = idleTimeout;
return dup;
}
/**
* Decorate the configured I/O handler.
* See {@link #handle(BiFunction)}.
*
* @param mapHandle A {@link BiFunction} to decorate the configured I/O handler
* @return a new {@link HttpServer}
*/
public final HttpServer mapHandle(BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle) {
Objects.requireNonNull(mapHandle, "mapHandle");
HttpServer dup = duplicate();
dup.configuration().mapHandle = mapHandle;
return dup;
}
/**
* The maximum number of HTTP/1.1 requests which can be served until the connection is closed by the server.
* Setting this attribute to:
* <ul>
* <li><strong>-1</strong>: The connection serves an unlimited number of requests. It is up to the I/O handler to decide
* to close the connection. This is the default behaviour.</li>
* <li><strong>1</strong>: The connection is marked as non-persistent and serves just one request.</li>
* <li><strong>&gt;1</strong>: The connection serves a number of requests up to the specified maximum number,
* then the connection is closed by the server.</li>
* </ul>
* @param maxKeepAliveRequests the maximum number of HTTP/1.1 requests which can be served until
* the connection is closed by the server
* @return a new {@link HttpServer}
* @since 1.0.13
*/
public final HttpServer maxKeepAliveRequests(int maxKeepAliveRequests) {
if (maxKeepAliveRequests < -1 || maxKeepAliveRequests == 0) {
throw new IllegalArgumentException("maxKeepAliveRequests must be positive or -1");
}
HttpServer dup = duplicate();
dup.configuration().maxKeepAliveRequests = maxKeepAliveRequests;
return dup;
}
/**
* Whether to enable metrics to be collected and registered in Micrometer's
* {@link io.micrometer.core.instrument.Metrics#globalRegistry globalRegistry}
* under the name {@link reactor.netty.Metrics#HTTP_SERVER_PREFIX}.
* <p>{@code uriTagValue} function receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag.
* For example instead of using the actual uri {@code "/users/1"} as uri tag value, templated uri
* {@code "/users/{id}"} can be used.
* <p><strong>Note:</strong>
* It is strongly recommended to provide a template-like form for the URIs. Without a conversion to a template-like form,
* each distinct URI leads to the creation of a distinct tag, which takes a lot of memory for the metrics.
* <p><strong>Note:</strong>
* It is strongly recommended that applications configure an upper limit for the number of URI tags.
* For example:
* <pre class="code">
* Metrics.globalRegistry
* .config()
* .meterFilter(MeterFilter.maximumAllowableTags(HTTP_SERVER_PREFIX, URI, 100, MeterFilter.deny()));
* </pre>
* <p>By default metrics are not enabled.
*
* @param enable true enables metrics collection; false disables it
* @param uriTagValue a function that receives the actual uri and returns the uri tag value
* that will be used for the metrics with {@link reactor.netty.Metrics#URI} tag
* @return a new {@link HttpServer}
* @since 0.9.7
*/
public final HttpServer metrics(boolean enable, Function<String, String> uriTagValue) {
if (enable) {
if (!Metrics.isMicrometerAvailable() && !Metrics.isTracingAvailable()) {
throw new UnsupportedOperationException(
"To enable metrics, you must add the dependencies to `io.micrometer:micrometer-core`" +
" and `io.micrometer:micrometer-tracing` to the class path first");
}
if (uriTagValue == Function.<String>identity()) {
log.debug("Metrics are enabled with [uriTagValue=Function#identity]. " +
"It is strongly recommended to provide template-like form for the URIs. " +
"Without a conversion to a template-like form, each distinct URI leads " +
"to the creation of a distinct tag, which takes a lot of memory for the metrics.");
}
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(() -> configuration().defaultMetricsRecorder());
dup.configuration().uriTagValue = uriTagValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
@Override
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder) {
return super.metrics(enable, recorder);
}
/**
* Specifies whether the metrics are enabled on the {@link HttpServer}.
* All generated metrics are provided to the specified recorder which is only
* instantiated if metrics are being enabled (the instantiation is not lazy,
* but happens immediately, while configuring the {@link HttpServer}).
* <p>{@code uriValue} function receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* For example instead of using the actual uri {@code "/users/1"} as uri value, templated uri
* {@code "/users/{id}"} can be used.
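* <p>A sketch with a custom recorder; {@code MyRecorder} is a hypothetical
* {@link ChannelMetricsRecorder} implementation with a no-arg constructor:</p>
* <pre>
* {@code
* HttpServer.create()
*           .metrics(true, MyRecorder::new,
*                   uri -> uri.startsWith("/users/") ? "/users/{id}" : uri)
*           .bindNow();
* }
* </pre>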
*
* @param enable true enables metrics collection; false disables it
* @param recorder a supplier for the metrics recorder that receives the collected metrics
* @param uriValue a function that receives the actual uri and returns the uri value
* that will be used when the metrics are propagated to the recorder.
* @return a new {@link HttpServer}
*/
public final HttpServer metrics(boolean enable, Supplier<? extends ChannelMetricsRecorder> recorder, Function<String, String> uriValue) {
if (enable) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(recorder);
dup.configuration().uriTagValue = uriValue;
return dup;
}
else if (configuration().metricsRecorder() != null) {
HttpServer dup = duplicate();
dup.configuration().metricsRecorder(null);
dup.configuration().uriTagValue = null;
return dup;
}
else {
return this;
}
}
/**
* Removes any previously applied SSL configuration customization.
*
* @return a new {@link HttpServer}
*/
public final HttpServer noSSL() {
if (configuration().isSecure()) {
HttpServer dup = duplicate();
dup.configuration().sslProvider = null;
return dup;
}
return this;
}
@Override
public final HttpServer port(int port) {
return super.port(port);
}
/**
* The HTTP protocol to support. Default is {@link HttpProtocol#HTTP11}.
*
* @param supportedProtocols The various {@link HttpProtocol} this server will support
*
* @return a new {@link HttpServer}
*/
public final HttpServer protocol(HttpProtocol... supportedProtocols) {
Objects.requireNonNull(supportedProtocols, "supportedProtocols");
HttpServer dup = duplicate();
dup.configuration().protocols(supportedProtocols);
return dup;
}
/**
* Specifies whether support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer is enabled.
*
* @param proxyProtocolSupportType
* <ul>
* <li>
* choose {@link ProxyProtocolSupportType#ON}
* to enable support for the {@code "HAProxy proxy protocol"}
* for deriving information about the address of the remote peer.
* </li>
* <li>choose {@link ProxyProtocolSupportType#OFF} to disable the proxy protocol support.</li>
* <li>
* choose {@link ProxyProtocolSupportType#AUTO}
* then each connection of the same {@link HttpServer} will auto detect whether there is proxy protocol,
* so {@link HttpServer} can accept requests with or without proxy protocol at the same time.
* </li>
* </ul>
*
* @return a new {@link HttpServer}
*/
public final HttpServer proxyProtocol(ProxyProtocolSupportType proxyProtocolSupportType) {
Objects.requireNonNull(proxyProtocolSupportType, "The parameter: proxyProtocolSupportType must not be null.");
if (proxyProtocolSupportType == configuration().proxyProtocolSupportType) {
return this;
}
if (proxyProtocolSupportType == ProxyProtocolSupportType.ON ||
proxyProtocolSupportType == ProxyProtocolSupportType.AUTO) {
if (!HAProxyMessageReader.isProxyProtocolAvailable()) {
throw new UnsupportedOperationException(
"To enable proxyProtocol, you must add the dependency `io.netty:netty-codec-haproxy`" +
" to the class path first");
}
}
HttpServer dup = duplicate();
dup.configuration().proxyProtocolSupportType = proxyProtocolSupportType;
return dup;
}
/**
* Specifies the maximum duration allowed between each network-level read operation while reading a given request
* content (resolution: ms). In other words, {@link io.netty.handler.timeout.ReadTimeoutHandler} is added to the
* channel pipeline after all the request headers are received, and removed from the channel pipeline after the
* content is fully received.
* If the {@code readTimeout} is {@code null}, any previous setting will be removed and no
* {@code readTimeout} will be applied.
* If the {@code readTimeout} is less than {@code 1ms}, then {@code 1ms} will be the
* {@code readTimeout}.
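* <p>For example (the 5-second value is illustrative):</p>
* <pre>
* {@code
* HttpServer.create()
*           .readTimeout(Duration.ofSeconds(5))
*           .bindNow();
* }
* </pre>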
*
* @param readTimeout the maximum duration allowed between each network-level read operation while reading a given
* request content (resolution: ms)
* @return a new {@link HttpServer}
* @since 1.1.9
* @see io.netty.handler.timeout.ReadTimeoutHandler
*/
public final HttpServer readTimeout(@Nullable Duration readTimeout) {
if (Objects.equals(readTimeout, configuration().readTimeout)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().readTimeout = readTimeout;
return dup;
}
/**
* Specifies the maximum duration for reading a given request content (resolution: ms).
* If the {@code requestTimeout} is {@code null}, any previous setting will be removed and no
* {@code requestTimeout} will be applied.
* If the {@code requestTimeout} is less than {@code 1ms}, then {@code 1ms} will be the
* {@code requestTimeout}.
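* <p>For example (the 30-second value is illustrative):</p>
* <pre>
* {@code
* HttpServer.create()
*           .requestTimeout(Duration.ofSeconds(30))
*           .bindNow();
* }
* </pre>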
*
* @param requestTimeout the maximum duration for reading a given request content (resolution: ms)
* @return a new {@link HttpServer}
* @since 1.1.9
*/
public final HttpServer requestTimeout(@Nullable Duration requestTimeout) {
if (Objects.equals(requestTimeout, configuration().requestTimeout)) {
return this;
}
HttpServer dup = duplicate();
dup.configuration().requestTimeout = requestTimeout;
return dup;
}
/**
* Define routes for the server through the provided {@link HttpServerRoutes} builder.
*
* @param routesBuilder provides a route builder to be mutated in order to define routes.
* @return a new {@link HttpServer} starting the router on subscribe
*/
public final HttpServer route(Consumer<? super HttpServerRoutes> routesBuilder) {
Objects.requireNonNull(routesBuilder, "routesBuilder");
HttpServerRoutes routes = HttpServerRoutes.newRoutes();
routesBuilder.accept(routes);
return handle(routes);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
* will produce the {@link SslContext} to be used, with a default handshake timeout of
* {@code 10} seconds unless the environment property {@code
* reactor.netty.tcp.sslHandshakeTimeout} is set.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @return a new {@link HttpServer}
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder) {
return secure(sslProviderBuilder, false);
}
/**
* Apply an SSL configuration customization via the passed builder. The builder
* will produce the {@link SslContext} to be used, with a default handshake timeout of
* {@code 10} seconds unless the environment property {@code
* reactor.netty.tcp.sslHandshakeTimeout} is set.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProviderBuilder builder callback for further customization of SslContext.
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(Consumer<? super SslProvider.SslContextSpec> sslProviderBuilder, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProviderBuilder, "sslProviderBuilder");
HttpServer dup = duplicate();
SslProvider.SslContextSpec builder = SslProvider.builder();
sslProviderBuilder.accept(builder);
dup.configuration().sslProvider = ((SslProvider.Builder) builder).build();
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
*
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec));
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
*
* @return a new {@link HttpServer}
*/
public final HttpServer secure(SslProvider sslProvider) {
return secure(sslProvider, false);
}
/**
* Applies an SSL configuration via the passed {@link SslProvider}.
* <p>
* If {@link SelfSignedCertificate} needs to be used, the sample below can be
* used. Note that {@link SelfSignedCertificate} should not be used in production.
* <pre>
* {@code
* SelfSignedCertificate cert = new SelfSignedCertificate();
* Http11SslContextSpec http11SslContextSpec =
* Http11SslContextSpec.forServer(cert.certificate(), cert.privateKey());
* secure(sslContextSpec -> sslContextSpec.sslContext(http11SslContextSpec), true);
* }
* </pre>
*
* @param sslProvider The provider to set when configuring SSL
* @param redirectHttpToHttps true enables redirecting HTTP to HTTPS by changing the
* scheme only but otherwise leaving the port the same.
* This configuration is applicable only for HTTP 1.x.
* @return a new {@link HttpServer}
* @since 1.0.5
*/
public final HttpServer secure(SslProvider sslProvider, boolean redirectHttpToHttps) {
Objects.requireNonNull(sslProvider, "sslProvider");
HttpServer dup = duplicate();
dup.configuration().sslProvider = sslProvider;
dup.configuration().redirectHttpToHttps = redirectHttpToHttps;
return dup;
}
/**
* Apply a {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* <p>
* <strong>Note:</strong>
* There isn't only one method that replaces this deprecated method.
* The configuration that can be done with this deprecated method,
* can also be done with the other methods exposed by {@link HttpServer}.
* </p>
* <p>Examples:</p>
* <p>Configuration via the deprecated '.tcpConfiguration(...)' method</p>
* <pre>
* {@code
* HttpServer.tcpConfiguration(tcpServer ->
* tcpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap()) // configures the wire logging
* }
* </pre>
*
* <p>Configuration via the other methods exposed by {@link HttpServer}</p>
* <pre>
* {@code
* HttpServer.attr(...) // configures the channel attributes
* .bindAddress(...) // configures the bind (local) address
* .channelGroup(...) // configures the channel group
* .childAttr(...) // configures the child channel attributes
* .childObserve(...) // configures the child channel connection observer
* .childOption(...) // configures the child channel options
* .doOnBound(...) // configures the doOnBound callback
* .doOnChannelInit(...) // configures the channel handler
* .doOnConnection(...) // configures the doOnConnection callback
* .doOnUnbound(...) // configures the doOnUnbound callback
* .handle(...) // configures the I/O handler
* .host(...) // configures the host name
* .metrics(...) // configures the metrics
* .noSSL() // removes SSL configuration
* .observe() // configures the connection observer
* .option(...) // configures the channel options
* .port(...) // configures the port
* .runOn(...) // configures the event loop group
* .secure() // configures the SSL
* .wiretap() // configures the wire logging
* }
* </pre>
*
* <p>Wire logging in plain text</p>
* <pre>
* {@code
* HttpServer.wiretap("logger", LogLevel.DEBUG, AdvancedByteBufFormat.TEXTUAL)
* }
* </pre>
*
* @param tcpMapper A {@link TcpServer} mapping function to update TCP configuration and
* return an enriched {@link HttpServer} to use.
* @return a new {@link HttpServer}
* @deprecated Use the other methods exposed by {@link HttpServer} to achieve the same configurations.
* This method will be removed in version 1.1.0.
*/
@Deprecated
@SuppressWarnings("ReturnValueIgnored")
public final HttpServer tcpConfiguration(Function<? super TcpServer, ? extends TcpServer> tcpMapper) {
Objects.requireNonNull(tcpMapper, "tcpMapper");
HttpServerTcpConfig tcpServer = new HttpServerTcpConfig(this);
// ReturnValueIgnored is deliberate
tcpMapper.apply(tcpServer);
return tcpServer.httpServer;
}
/**
* Based on the actual configuration, returns a {@link Mono} that triggers:
* <ul>
* <li>initializes the event loop groups</li>
* <li>loads the necessary native libraries for the transport</li>
* <li>loads the necessary native libraries for security, if any</li>
* </ul>
* By default, when this method is not used, the {@code bind operation} absorbs the extra time needed to load resources.
*
* @return a {@link Mono} representing the completion of the warmup
* @since 1.0.3
*/
@Override
public Mono<Void> warmup() {
return Mono.when(
super.warmup(),
Mono.fromRunnable(() -> {
SslProvider provider = configuration().sslProvider();
if (provider != null && !(provider.getSslContext() instanceof JdkSslContext)) {
OpenSsl.version();
}
}));
}
@Override
public final HttpServer wiretap(boolean enable) {
return super.wiretap(enable);
}
static final Logger log = Loggers.getLogger(HttpServer.class);
static final class HttpServerHandle implements ConnectionObserver {
final BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler;
HttpServerHandle(BiFunction<? super HttpServerRequest, ? super HttpServerResponse, ? extends Publisher<Void>> handler) {
this.handler = handler;
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void onStateChange(Connection connection, State newState) {
if (newState == HttpServerState.REQUEST_RECEIVED) {
try {
if (log.isDebugEnabled()) {
log.debug(format(connection.channel(), "Handler is being applied: {}"), handler);
}
HttpServerOperations ops = (HttpServerOperations) connection;
Publisher<Void> publisher = handler.apply(ops, ops);
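					// Capture the subscriber's Context at subscription time so that the
					// operations' currentContext reflects the I/O handler's Reactor Context.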
Mono<Void> mono = Mono.deferContextual(ctx -> {
ops.currentContext = Context.of(ctx);
return Mono.fromDirect(publisher);
});
if (ops.mapHandle != null) {
mono = ops.mapHandle.apply(mono, connection);
}
mono.subscribe(ops.disposeSubscriber());
}
catch (Throwable t) {
log.error(format(connection.channel(), ""), t);
//"FutureReturnValueIgnored" this is deliberate
connection.channel()
.close();
}
}
}
}
}
| violetagg | cc1d8e82d5fe578f1144f5aceb62a6554bbd5be2 | 70f5161fc5245774ac5d3491026af95952a72325 | Fix. Thanks. | violetagg | 9 |
reactor/reactor-netty | 2,836 | `HttpServer`: Add API for read related timeouts | Fixes #2770 | null | 2023-06-19 06:36:05+00:00 | 2023-06-20 16:47:29+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServerOperations.java | /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.ZonedDateTime;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.function.Consumer;
import java.util.function.Function;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.DefaultHeaders;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.DefaultHttpHeaders;
import io.netty.handler.codec.http.DefaultHttpResponse;
import io.netty.handler.codec.http.DefaultLastHttpContent;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.codec.http.LastHttpContent;
import io.netty.handler.codec.http.TooLongHttpHeaderException;
import io.netty.handler.codec.http.TooLongHttpLineException;
import io.netty.handler.codec.http.cookie.Cookie;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.handler.codec.http.multipart.HttpData;
import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
import io.netty.handler.codec.http.websocketx.CloseWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketCloseStatus;
import io.netty.util.AsciiString;
import io.netty.util.ReferenceCountUtil;
import org.reactivestreams.Publisher;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.netty.ConnectionObserver;
import reactor.netty.FutureMono;
import reactor.netty.NettyOutbound;
import reactor.netty.NettyPipeline;
import reactor.netty.ReactorNetty;
import reactor.netty.channel.AbortedException;
import reactor.netty.channel.ChannelOperations;
import reactor.netty.http.HttpOperations;
import reactor.netty.http.logging.HttpMessageArgProviderFactory;
import reactor.netty.http.logging.HttpMessageLogFactory;
import reactor.netty.http.websocket.WebsocketInbound;
import reactor.netty.http.websocket.WebsocketOutbound;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import static io.netty.buffer.Unpooled.EMPTY_BUFFER;
import static io.netty.handler.codec.http.HttpUtil.isTransferEncodingChunked;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.http.server.HttpServerFormDecoderProvider.DEFAULT_FORM_DECODER_SPEC;
import static reactor.netty.http.server.HttpServerState.REQUEST_DECODING_FAILED;
/**
 * Conversion between Netty types and Reactor types ({@link HttpOperations}).
 *
 * @author Stephane Maldini
*/
class HttpServerOperations extends HttpOperations<HttpServerRequest, HttpServerResponse>
implements HttpServerRequest, HttpServerResponse {
final BiPredicate<HttpServerRequest, HttpServerResponse> configuredCompressionPredicate;
final ConnectionInfo connectionInfo;
final ServerCookieDecoder cookieDecoder;
final ServerCookieEncoder cookieEncoder;
final ServerCookies cookieHolder;
final HttpServerFormDecoderProvider formDecoderProvider;
final boolean isHttp2;
final BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle;
final HttpRequest nettyRequest;
final HttpResponse nettyResponse;
final HttpHeaders responseHeaders;
final String scheme;
final ZonedDateTime timestamp;
BiPredicate<HttpServerRequest, HttpServerResponse> compressionPredicate;
Function<? super String, Map<String, String>> paramsResolver;
String path;
Consumer<? super HttpHeaders> trailerHeadersConsumer;
volatile Context currentContext;
HttpServerOperations(HttpServerOperations replaced) {
super(replaced);
this.compressionPredicate = replaced.compressionPredicate;
this.configuredCompressionPredicate = replaced.configuredCompressionPredicate;
this.connectionInfo = replaced.connectionInfo;
this.cookieDecoder = replaced.cookieDecoder;
this.cookieEncoder = replaced.cookieEncoder;
this.cookieHolder = replaced.cookieHolder;
this.currentContext = replaced.currentContext;
this.formDecoderProvider = replaced.formDecoderProvider;
this.isHttp2 = replaced.isHttp2;
this.mapHandle = replaced.mapHandle;
this.nettyRequest = replaced.nettyRequest;
this.nettyResponse = replaced.nettyResponse;
this.paramsResolver = replaced.paramsResolver;
this.path = replaced.path;
this.responseHeaders = replaced.responseHeaders;
this.scheme = replaced.scheme;
this.timestamp = replaced.timestamp;
this.trailerHeadersConsumer = replaced.trailerHeadersConsumer;
}
HttpServerOperations(Connection c, ConnectionObserver listener, HttpRequest nettyRequest,
@Nullable BiPredicate<HttpServerRequest, HttpServerResponse> compressionPredicate,
ConnectionInfo connectionInfo,
ServerCookieDecoder decoder,
ServerCookieEncoder encoder,
HttpServerFormDecoderProvider formDecoderProvider,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
@Nullable BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle,
boolean secured,
ZonedDateTime timestamp) {
this(c, listener, nettyRequest, compressionPredicate, connectionInfo, decoder, encoder, formDecoderProvider,
httpMessageLogFactory, isHttp2, mapHandle, true, secured, timestamp);
}
HttpServerOperations(Connection c, ConnectionObserver listener, HttpRequest nettyRequest,
@Nullable BiPredicate<HttpServerRequest, HttpServerResponse> compressionPredicate,
ConnectionInfo connectionInfo,
ServerCookieDecoder decoder,
ServerCookieEncoder encoder,
HttpServerFormDecoderProvider formDecoderProvider,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
@Nullable BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle,
boolean resolvePath,
boolean secured,
ZonedDateTime timestamp) {
super(c, listener, httpMessageLogFactory);
this.compressionPredicate = compressionPredicate;
this.configuredCompressionPredicate = compressionPredicate;
this.connectionInfo = connectionInfo;
this.cookieDecoder = decoder;
this.cookieEncoder = encoder;
this.cookieHolder = ServerCookies.newServerRequestHolder(nettyRequest.headers(), decoder);
this.currentContext = Context.empty();
this.formDecoderProvider = formDecoderProvider;
this.isHttp2 = isHttp2;
this.mapHandle = mapHandle;
this.nettyRequest = nettyRequest;
this.nettyResponse = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
if (resolvePath) {
this.path = resolvePath(nettyRequest.uri());
}
else {
this.path = null;
}
this.responseHeaders = nettyResponse.headers();
this.responseHeaders.set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
this.scheme = secured ? "https" : "http";
this.timestamp = timestamp;
}
@Override
public NettyOutbound sendHeaders() {
if (hasSentHeaders()) {
return this;
}
return then(Mono.empty());
}
@Override
public HttpServerOperations withConnection(Consumer<? super Connection> withConnection) {
Objects.requireNonNull(withConnection, "withConnection");
withConnection.accept(this);
return this;
}
@Override
protected HttpMessage newFullBodyMessage(ByteBuf body) {
HttpResponse res =
new DefaultFullHttpResponse(version(), status(), body);
if (!HttpMethod.HEAD.equals(method())) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING);
int code = status().code();
if (!(HttpResponseStatus.NOT_MODIFIED.code() == code ||
HttpResponseStatus.NO_CONTENT.code() == code)) {
if (HttpUtil.getContentLength(nettyResponse, -1) == -1) {
responseHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, body.readableBytes());
}
}
}
// For HEAD requests:
// - if there is Transfer-Encoding and Content-Length, Transfer-Encoding will be removed
// - if there is only Transfer-Encoding, it will be kept and not replaced by
// Content-Length: body.readableBytes()
// For HEAD requests, the I/O handler may decide to provide only the headers and complete
		// the response. In that case the body will be EMPTY_BUFFER, and setting Content-Length: 0
		// would not be correct.
// https://github.com/reactor/reactor-netty/issues/1333
else if (HttpUtil.getContentLength(nettyResponse, -1) != -1) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING);
}
res.headers().set(responseHeaders);
return res;
}
@Override
public HttpServerResponse addCookie(Cookie cookie) {
if (!hasSentHeaders()) {
this.responseHeaders.add(HttpHeaderNames.SET_COOKIE,
cookieEncoder.encode(cookie));
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerResponse addHeader(CharSequence name, CharSequence value) {
if (!hasSentHeaders()) {
this.responseHeaders.add(name, value);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerOperations chunkedTransfer(boolean chunked) {
if (!hasSentHeaders() && isTransferEncodingChunked(nettyResponse) != chunked) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING);
HttpUtil.setTransferEncodingChunked(nettyResponse, chunked);
}
return this;
}
@Override
public Map<CharSequence, Set<Cookie>> cookies() {
if (cookieHolder != null) {
return cookieHolder.getCachedCookies();
}
throw new IllegalStateException("request not parsed");
}
@Override
public Map<CharSequence, List<Cookie>> allCookies() {
if (cookieHolder != null) {
return cookieHolder.getAllCachedCookies();
}
throw new IllegalStateException("request not parsed");
}
@Override
public Context currentContext() {
return currentContext;
}
@Override
public HttpServerResponse header(CharSequence name, CharSequence value) {
if (!hasSentHeaders()) {
this.responseHeaders.set(name, value);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerResponse headers(HttpHeaders headers) {
if (!hasSentHeaders()) {
this.responseHeaders.set(headers);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public boolean isFormUrlencoded() {
CharSequence mimeType = HttpUtil.getMimeType(nettyRequest);
return mimeType != null &&
HttpHeaderValues.APPLICATION_X_WWW_FORM_URLENCODED.contentEqualsIgnoreCase(mimeType.toString().trim());
}
@Override
public boolean isKeepAlive() {
return HttpUtil.isKeepAlive(nettyRequest);
}
@Override
public boolean isMultipart() {
return HttpPostRequestDecoder.isMultipart(nettyRequest);
}
@Override
public boolean isWebsocket() {
return get(channel()) instanceof WebsocketServerOperations;
}
final boolean isHttp2() {
return isHttp2;
}
@Override
public HttpServerResponse keepAlive(boolean keepAlive) {
HttpUtil.setKeepAlive(nettyResponse, keepAlive);
return this;
}
@Override
public HttpMethod method() {
return nettyRequest.method();
}
@Override
@Nullable
public String param(CharSequence key) {
Objects.requireNonNull(key, "key");
Map<String, String> params = null;
if (paramsResolver != null) {
params = this.paramsResolver.apply(uri());
}
return null != params ? params.get(key.toString()) : null;
}
@Override
@Nullable
public Map<String, String> params() {
return null != paramsResolver ? paramsResolver.apply(uri()) : null;
}
@Override
public HttpServerRequest paramsResolver(Function<? super String, Map<String, String>> paramsResolver) {
this.paramsResolver = paramsResolver;
return this;
}
@Override
public Flux<HttpData> receiveForm() {
return receiveFormInternal(formDecoderProvider);
}
@Override
public Flux<HttpData> receiveForm(Consumer<HttpServerFormDecoderProvider.Builder> formDecoderBuilder) {
Objects.requireNonNull(formDecoderBuilder, "formDecoderBuilder");
HttpServerFormDecoderProvider.Build builder = new HttpServerFormDecoderProvider.Build();
formDecoderBuilder.accept(builder);
HttpServerFormDecoderProvider config = builder.build();
return receiveFormInternal(config);
}
@Override
public Flux<?> receiveObject() {
// Handle the 'Expect: 100-continue' header if necessary.
// TODO: Respond with 413 Request Entity Too Large
// and discard the traffic or close the connection.
// No need to notify the upstream handlers - just log.
// If decoding a response, just throw an error.
if (HttpUtil.is100ContinueExpected(nettyRequest)) {
return FutureMono.deferFuture(() -> {
if (!hasSentHeaders()) {
return channel().writeAndFlush(CONTINUE);
}
return channel().newSucceededFuture();
})
.thenMany(super.receiveObject());
}
else {
return super.receiveObject();
}
}
@Override
@Nullable
public InetSocketAddress hostAddress() {
return this.connectionInfo.getHostAddress();
}
final SocketAddress hostSocketAddress() {
return this.connectionInfo.hostAddress;
}
@Override
@Nullable
public SocketAddress connectionHostAddress() {
return channel().localAddress();
}
@Override
@Nullable
public InetSocketAddress remoteAddress() {
return this.connectionInfo.getRemoteAddress();
}
final SocketAddress remoteSocketAddress() {
return this.connectionInfo.remoteAddress;
}
@Override
@Nullable
public SocketAddress connectionRemoteAddress() {
return channel().remoteAddress();
}
@Override
public HttpHeaders requestHeaders() {
if (nettyRequest != null) {
return nettyRequest.headers();
}
throw new IllegalStateException("request not parsed");
}
@Override
public String scheme() {
return this.connectionInfo.getScheme();
}
@Override
public String connectionScheme() {
return scheme;
}
@Override
public String hostName() {
return connectionInfo.getHostName();
}
@Override
public int hostPort() {
return connectionInfo.getHostPort();
}
@Override
public HttpHeaders responseHeaders() {
return responseHeaders;
}
@Override
public String protocol() {
return nettyRequest.protocolVersion().text();
}
@Override
public ZonedDateTime timestamp() {
return timestamp;
}
@Override
public Mono<Void> send() {
return FutureMono.deferFuture(() -> markSentHeaderAndBody() ?
channel().writeAndFlush(newFullBodyMessage(EMPTY_BUFFER)) :
channel().newSucceededFuture());
}
@Override
public NettyOutbound sendFile(Path file) {
try {
return sendFile(file, 0L, Files.size(file));
}
catch (IOException e) {
if (log.isDebugEnabled()) {
log.debug(format(channel(), "Path not resolved"), e);
}
return then(sendNotFound());
}
}
@Override
public Mono<Void> sendNotFound() {
return this.status(HttpResponseStatus.NOT_FOUND)
.send();
}
@Override
public Mono<Void> sendRedirect(String location) {
Objects.requireNonNull(location, "location");
return this.status(HttpResponseStatus.FOUND)
.header(HttpHeaderNames.LOCATION, location)
.send();
}
/**
	 * @return this response with the {@code Content-Type} header set to {@code text/event-stream} (Server-Sent Events)
*/
@Override
public HttpServerResponse sse() {
header(HttpHeaderNames.CONTENT_TYPE, EVENT_STREAM);
return this;
}
@Override
public HttpResponseStatus status() {
return this.nettyResponse.status();
}
@Override
public HttpServerResponse status(HttpResponseStatus status) {
if (!hasSentHeaders()) {
this.nettyResponse.setStatus(status);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerResponse trailerHeaders(Consumer<? super HttpHeaders> trailerHeaders) {
this.trailerHeadersConsumer = Objects.requireNonNull(trailerHeaders, "trailerHeaders");
return this;
}
@Override
public Mono<Void> sendWebsocket(
BiFunction<? super WebsocketInbound, ? super WebsocketOutbound, ? extends Publisher<Void>> websocketHandler,
WebsocketServerSpec configurer) {
return withWebsocketSupport(uri(), configurer, websocketHandler);
}
@Override
public String uri() {
if (nettyRequest != null) {
return nettyRequest.uri();
}
throw new IllegalStateException("request not parsed");
}
@Override
public String fullPath() {
if (path != null) {
return path;
}
throw new IllegalStateException("request not parsed");
}
@Override
public HttpVersion version() {
if (nettyRequest != null) {
return nettyRequest.protocolVersion();
}
throw new IllegalStateException("request not parsed");
}
@Override
public HttpServerResponse compression(boolean compress) {
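		// Re-enabling restores the predicate configured on the server; disabling swaps in a
		// predicate that always refuses, so afterMarkSentHeaders() cannot re-add compression.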
compressionPredicate = compress ? configuredCompressionPredicate : COMPRESSION_DISABLED;
if (!compress) {
removeHandler(NettyPipeline.CompressionHandler);
}
else if (channel().pipeline()
.get(NettyPipeline.CompressionHandler) == null) {
SimpleCompressionHandler handler = new SimpleCompressionHandler();
try {
//Do not invoke handler.channelRead as it will trigger ctx.fireChannelRead
handler.decode(channel().pipeline().context(NettyPipeline.ReactiveBridge), nettyRequest);
addHandlerFirst(NettyPipeline.CompressionHandler, handler);
}
catch (Throwable e) {
log.error(format(channel(), ""), e);
}
}
return this;
}
@Override
protected void onInboundNext(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof HttpRequest) {
try {
listener().onStateChange(this, HttpServerState.REQUEST_RECEIVED);
}
catch (Exception e) {
onInboundError(e);
ReferenceCountUtil.release(msg);
return;
}
if (msg instanceof FullHttpRequest) {
FullHttpRequest request = (FullHttpRequest) msg;
if (request.content().readableBytes() > 0) {
super.onInboundNext(ctx, msg);
}
else {
request.release();
}
if (isHttp2()) {
//force auto read to enable more accurate close selection now inbound is done
channel().config().setAutoRead(true);
onInboundComplete();
}
}
return;
}
if (msg instanceof HttpContent) {
if (msg != LastHttpContent.EMPTY_LAST_CONTENT) {
super.onInboundNext(ctx, msg);
}
if (msg instanceof LastHttpContent) {
//force auto read to enable more accurate close selection now inbound is done
channel().config().setAutoRead(true);
onInboundComplete();
}
}
else {
super.onInboundNext(ctx, msg);
}
}
@Override
protected void onInboundClose() {
discardWhenNoReceiver();
if (!(isInboundCancelled() || isInboundDisposed())) {
onInboundError(new AbortedException("Connection has been closed"));
}
terminate();
}
@Override
protected void afterMarkSentHeaders() {
if (compressionPredicate != null && compressionPredicate.test(this, this)) {
compression(true);
}
}
@Override
protected void beforeMarkSentHeaders() {
//noop
}
@Override
protected boolean isContentAlwaysEmpty() {
int code = status().code();
if (HttpResponseStatus.NOT_MODIFIED.code() == code) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING)
.remove(HttpHeaderNames.CONTENT_LENGTH);
return true;
}
return HttpResponseStatus.NO_CONTENT.code() == code ||
HttpResponseStatus.RESET_CONTENT.code() == code;
}
@Override
protected void onHeadersSent() {
//noop
}
@Override
protected void onOutboundComplete() {
if (isWebsocket()) {
return;
}
final ChannelFuture f;
if (log.isDebugEnabled()) {
log.debug(format(channel(), "Last HTTP response frame"));
}
if (markSentHeaderAndBody()) {
if (log.isDebugEnabled()) {
log.debug(format(channel(), "No sendHeaders() called before complete, sending " +
"zero-length header"));
}
f = channel().writeAndFlush(newFullBodyMessage(EMPTY_BUFFER));
}
else if (markSentBody()) {
HttpHeaders trailerHeaders = null;
// https://datatracker.ietf.org/doc/html/rfc7230#section-4.1.2
// A trailer allows the sender to include additional fields at the end
// of a chunked message in order to supply metadata that might be
// dynamically generated while the message body is sent, such as a
// message integrity check, digital signature, or post-processing
// status.
if (trailerHeadersConsumer != null && isTransferEncodingChunked(nettyResponse)) {
// https://datatracker.ietf.org/doc/html/rfc7230#section-4.4
// When a message includes a message body encoded with the chunked
// transfer coding and the sender desires to send metadata in the form
// of trailer fields at the end of the message, the sender SHOULD
// generate a Trailer header field before the message body to indicate
// which fields will be present in the trailers.
String declaredHeaderNames = responseHeaders.get(HttpHeaderNames.TRAILER);
if (declaredHeaderNames != null) {
trailerHeaders = new TrailerHeaders(declaredHeaderNames);
try {
trailerHeadersConsumer.accept(trailerHeaders);
}
catch (IllegalArgumentException e) {
// A sender MUST NOT generate a trailer when header names are
// HttpServerOperations.TrailerHeaders.DISALLOWED_TRAILER_HEADER_NAMES
log.error(format(channel(), "Cannot apply trailer headers [{}]"), declaredHeaderNames, e);
}
}
}
f = channel().writeAndFlush(trailerHeaders != null && !trailerHeaders.isEmpty() ?
new DefaultLastHttpContent(Unpooled.buffer(0), trailerHeaders) :
LastHttpContent.EMPTY_LAST_CONTENT);
}
else {
discard();
return;
}
f.addListener(s -> {
discard();
if (!s.isSuccess() && log.isDebugEnabled()) {
log.debug(format(channel(), "Failed flushing last frame"), s.cause());
}
});
}
static void cleanHandlerTerminate(Channel ch) {
ChannelOperations<?, ?> ops = get(ch);
if (ops == null) {
return;
}
ops.discard();
//Try to defer the disposing to leave a chance for any synchronous complete following this callback
if (!ops.isSubscriptionDisposed()) {
ch.eventLoop()
.execute(((HttpServerOperations) ops)::terminate);
}
else {
//if already disposed, we can immediately call terminate
((HttpServerOperations) ops).terminate();
}
}
static long requestsCounter(Channel channel) {
HttpServerOperations ops = Connection.from(channel).as(HttpServerOperations.class);
if (ops == null) {
return -1;
}
return ((AtomicLong) ops.connection()).get();
}
static void sendDecodingFailures(
ChannelHandlerContext ctx,
ConnectionObserver listener,
boolean secure,
Throwable t,
Object msg,
HttpMessageLogFactory httpMessageLogFactory,
@Nullable ZonedDateTime timestamp,
@Nullable ConnectionInfo connectionInfo,
SocketAddress remoteAddress) {
sendDecodingFailures(ctx, listener, secure, t, msg, httpMessageLogFactory, false, timestamp, connectionInfo, remoteAddress);
}
@SuppressWarnings("FutureReturnValueIgnored")
static void sendDecodingFailures(
ChannelHandlerContext ctx,
ConnectionObserver listener,
boolean secure,
Throwable t,
Object msg,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
@Nullable ZonedDateTime timestamp,
@Nullable ConnectionInfo connectionInfo,
SocketAddress remoteAddress) {
Throwable cause = t.getCause() != null ? t.getCause() : t;
if (log.isWarnEnabled()) {
log.warn(format(ctx.channel(), "Decoding failed: {}"),
msg instanceof HttpObject ?
httpMessageLogFactory.warn(HttpMessageArgProviderFactory.create(msg)) : msg);
}
ReferenceCountUtil.release(msg);
final HttpResponseStatus status;
if (cause instanceof TooLongHttpLineException) {
status = HttpResponseStatus.REQUEST_URI_TOO_LONG;
}
else if (cause instanceof TooLongHttpHeaderException) {
status = HttpResponseStatus.REQUEST_HEADER_FIELDS_TOO_LARGE;
}
else {
status = HttpResponseStatus.BAD_REQUEST;
}
HttpResponse response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, status);
response.headers()
.setInt(HttpHeaderNames.CONTENT_LENGTH, 0)
.set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
Connection ops = ChannelOperations.get(ctx.channel());
if (ops == null) {
Connection conn = Connection.from(ctx.channel());
if (msg instanceof HttpRequest) {
ops = new FailedHttpServerRequest(conn, listener, (HttpRequest) msg, response, httpMessageLogFactory, isHttp2,
secure, timestamp == null ? ZonedDateTime.now(ReactorNetty.ZONE_ID_SYSTEM) : timestamp,
connectionInfo == null ? new ConnectionInfo(ctx.channel().localAddress(), remoteAddress, secure) : connectionInfo);
ops.bind();
}
else {
ops = conn;
}
}
//"FutureReturnValueIgnored" this is deliberate
ctx.channel().writeAndFlush(response);
listener.onStateChange(ops, REQUEST_DECODING_FAILED);
}
/**
	 * There is no need to invoke {@link #discard()}; the inbound will
	 * be canceled on the channel inactive event if no subscriber is available.
*
* @param err the {@link Throwable} cause
*/
@Override
protected void onOutboundError(Throwable err) {
if (!channel().isActive()) {
super.onOutboundError(err);
return;
}
if (markSentHeaders()) {
log.error(format(channel(), "Error starting response. Replying error status"), err);
nettyResponse.setStatus(HttpResponseStatus.INTERNAL_SERVER_ERROR);
responseHeaders.set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
channel().writeAndFlush(newFullBodyMessage(EMPTY_BUFFER))
.addListener(ChannelFutureListener.CLOSE);
return;
}
markSentBody();
log.error(format(channel(), "Error finishing response. Closing connection"), err);
channel().writeAndFlush(EMPTY_BUFFER)
.addListener(ChannelFutureListener.CLOSE);
}
@Override
protected HttpMessage outboundHttpMessage() {
return nettyResponse;
}
final Flux<HttpData> receiveFormInternal(HttpServerFormDecoderProvider config) {
boolean isMultipart = isMultipart();
if (!Objects.equals(method(), HttpMethod.POST) || !(isFormUrlencoded() || isMultipart)) {
return Flux.error(new IllegalStateException(
"Request is not POST or does not have Content-Type " +
"with value 'application/x-www-form-urlencoded' or 'multipart/form-data'"));
}
return Flux.defer(() ->
config.newHttpPostRequestDecoder(nettyRequest, isMultipart).flatMapMany(decoder ->
					receiveObject() // receiveContent() uses the filter operator, which buffers; we don't want that here
.concatMap(object -> {
if (!(object instanceof HttpContent)) {
return Mono.empty();
}
HttpContent httpContent = (HttpContent) object;
if (config.maxInMemorySize > -1) {
httpContent.retain();
}
return config.maxInMemorySize == -1 ?
Flux.using(
() -> decoder.offer(httpContent),
d -> Flux.fromIterable(decoder.currentHttpData(!config.streaming)),
d -> decoder.cleanCurrentHttpData(!config.streaming)) :
Flux.usingWhen(
Mono.fromCallable(() -> decoder.offer(httpContent))
.subscribeOn(config.scheduler)
.doFinally(sig -> httpContent.release()),
d -> Flux.fromIterable(decoder.currentHttpData(true)),
// FIXME Can we have cancellation for the resourceSupplier that will
// cause this one to not be invoked?
d -> Mono.fromRunnable(() -> decoder.cleanCurrentHttpData(true)));
					}, 0) // There is no need for prefetch: the buffers are already in the Reactor Netty inbound queue
.doFinally(sig -> decoder.destroy())));
}
final Mono<Void> withWebsocketSupport(String url,
WebsocketServerSpec websocketServerSpec,
BiFunction<? super WebsocketInbound, ? super WebsocketOutbound, ? extends Publisher<Void>> websocketHandler) {
Objects.requireNonNull(websocketServerSpec, "websocketServerSpec");
Objects.requireNonNull(websocketHandler, "websocketHandler");
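		// The upgrade can proceed only if no response headers have been written yet;
		// markSentHeaders() also reserves the response so nothing else writes them concurrently.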
if (markSentHeaders()) {
WebsocketServerOperations ops = new WebsocketServerOperations(url, websocketServerSpec, this);
return FutureMono.from(ops.handshakerResult)
.doOnEach(signal -> {
if (!signal.hasError() && (websocketServerSpec.protocols() == null || ops.selectedSubprotocol() != null)) {
websocketHandler.apply(ops, ops)
.subscribe(new WebsocketSubscriber(ops, Context.of(signal.getContextView())));
}
});
}
else {
log.error(format(channel(), "Cannot enable websocket if headers have already been sent"));
}
return Mono.error(new IllegalStateException("Failed to upgrade to websocket"));
}
static final class WebsocketSubscriber implements CoreSubscriber<Void>, ChannelFutureListener {
final WebsocketServerOperations ops;
final Context context;
WebsocketSubscriber(WebsocketServerOperations ops, Context context) {
this.ops = ops;
this.context = context;
}
@Override
public void onSubscribe(Subscription s) {
s.request(Long.MAX_VALUE);
}
@Override
public void onNext(Void aVoid) {
}
@Override
public void onError(Throwable t) {
ops.onError(t);
}
@Override
public void operationComplete(ChannelFuture future) {
ops.terminate();
}
@Override
public void onComplete() {
if (ops.channel()
.isActive()) {
ops.sendCloseNow(new CloseWebSocketFrame(WebSocketCloseStatus.NORMAL_CLOSURE), this);
}
}
@Override
public Context currentContext() {
return context;
}
}
static final Logger log = Loggers.getLogger(HttpServerOperations.class);
	static final AsciiString EVENT_STREAM = new AsciiString("text/event-stream");
	static final BiPredicate<HttpServerRequest, HttpServerResponse> COMPRESSION_DISABLED = (req, res) -> false;
	static final FullHttpResponse CONTINUE =
new DefaultFullHttpResponse(HttpVersion.HTTP_1_1,
HttpResponseStatus.CONTINUE,
EMPTY_BUFFER);
static final class FailedHttpServerRequest extends HttpServerOperations {
final HttpResponse customResponse;
FailedHttpServerRequest(
Connection c,
ConnectionObserver listener,
HttpRequest nettyRequest,
HttpResponse nettyResponse,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
boolean secure,
ZonedDateTime timestamp,
ConnectionInfo connectionInfo) {
super(c, listener, nettyRequest, null, connectionInfo,
ServerCookieDecoder.STRICT, ServerCookieEncoder.STRICT, DEFAULT_FORM_DECODER_SPEC, httpMessageLogFactory, isHttp2,
null, false, secure, timestamp);
this.customResponse = nettyResponse;
String tempPath = "";
try {
tempPath = resolvePath(nettyRequest.uri());
}
catch (RuntimeException e) {
tempPath = "/bad-request";
}
finally {
this.path = tempPath;
}
}
@Override
protected HttpMessage outboundHttpMessage() {
return customResponse;
}
@Override
public HttpResponseStatus status() {
return customResponse.status();
}
}
static final class TrailerHeaders extends DefaultHttpHeaders {
static final Set<String> DISALLOWED_TRAILER_HEADER_NAMES = new HashSet<>(14);
static {
// https://datatracker.ietf.org/doc/html/rfc7230#section-4.1.2
// A sender MUST NOT generate a trailer that contains a field necessary
// for message framing (e.g., Transfer-Encoding and Content-Length),
// routing (e.g., Host), request modifiers (e.g., controls and
// conditionals in Section 5 of [RFC7231]), authentication (e.g., see
// [RFC7235] and [RFC6265]), response control data (e.g., see Section
// 7.1 of [RFC7231]), or determining how to process the payload (e.g.,
// Content-Encoding, Content-Type, Content-Range, and Trailer).
DISALLOWED_TRAILER_HEADER_NAMES.add("age");
DISALLOWED_TRAILER_HEADER_NAMES.add("cache-control");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-encoding");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-length");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-range");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-type");
DISALLOWED_TRAILER_HEADER_NAMES.add("date");
DISALLOWED_TRAILER_HEADER_NAMES.add("expires");
DISALLOWED_TRAILER_HEADER_NAMES.add("location");
DISALLOWED_TRAILER_HEADER_NAMES.add("retry-after");
DISALLOWED_TRAILER_HEADER_NAMES.add("trailer");
DISALLOWED_TRAILER_HEADER_NAMES.add("transfer-encoding");
DISALLOWED_TRAILER_HEADER_NAMES.add("vary");
DISALLOWED_TRAILER_HEADER_NAMES.add("warning");
}
TrailerHeaders(String declaredHeaderNames) {
super(true, new TrailerNameValidator(filterHeaderNames(declaredHeaderNames)));
}
static Set<String> filterHeaderNames(String declaredHeaderNames) {
Objects.requireNonNull(declaredHeaderNames, "declaredHeaderNames");
Set<String> result = new HashSet<>();
String[] names = declaredHeaderNames.split(",", -1);
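			// The Trailer header value is a comma-separated list of names; keep only the
			// non-empty ones that are allowed to appear in trailers.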
for (String name : names) {
String trimmedStr = name.trim();
if (trimmedStr.isEmpty() ||
DISALLOWED_TRAILER_HEADER_NAMES.contains(trimmedStr.toLowerCase(Locale.ENGLISH))) {
continue;
}
result.add(trimmedStr);
}
return result;
}
static final class TrailerNameValidator implements DefaultHeaders.NameValidator<CharSequence> {
/**
		 * Contains the header names declared with the {@link HttpHeaderNames#TRAILER} response header.
*/
final Set<String> declaredHeaderNames;
TrailerNameValidator(Set<String> declaredHeaderNames) {
this.declaredHeaderNames = declaredHeaderNames;
}
@Override
public void validateName(CharSequence name) {
if (!declaredHeaderNames.contains(name.toString())) {
throw new IllegalArgumentException("Trailer header name [" + name +
"] not declared with [Trailer] header, or it is not a valid trailer header name");
}
}
}
}
}
| /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.time.ZonedDateTime;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.function.Consumer;
import java.util.function.Function;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.DefaultHeaders;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.DefaultHttpHeaders;
import io.netty.handler.codec.http.DefaultHttpResponse;
import io.netty.handler.codec.http.DefaultLastHttpContent;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.codec.http.LastHttpContent;
import io.netty.handler.codec.http.TooLongHttpHeaderException;
import io.netty.handler.codec.http.TooLongHttpLineException;
import io.netty.handler.codec.http.cookie.Cookie;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.handler.codec.http.multipart.HttpData;
import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
import io.netty.handler.codec.http.websocketx.CloseWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketCloseStatus;
import io.netty.handler.timeout.ReadTimeoutHandler;
import io.netty.util.AsciiString;
import io.netty.util.ReferenceCountUtil;
import org.reactivestreams.Publisher;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.netty.ConnectionObserver;
import reactor.netty.FutureMono;
import reactor.netty.NettyOutbound;
import reactor.netty.NettyPipeline;
import reactor.netty.ReactorNetty;
import reactor.netty.channel.AbortedException;
import reactor.netty.channel.ChannelOperations;
import reactor.netty.http.HttpOperations;
import reactor.netty.http.logging.HttpMessageArgProviderFactory;
import reactor.netty.http.logging.HttpMessageLogFactory;
import reactor.netty.http.websocket.WebsocketInbound;
import reactor.netty.http.websocket.WebsocketOutbound;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import static io.netty.buffer.Unpooled.EMPTY_BUFFER;
import static io.netty.handler.codec.http.HttpUtil.isTransferEncodingChunked;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.http.server.HttpServerFormDecoderProvider.DEFAULT_FORM_DECODER_SPEC;
import static reactor.netty.http.server.HttpServerState.REQUEST_DECODING_FAILED;
/**
 * Conversion between Netty types and Reactor types ({@link HttpOperations}).
 *
 * @author Stephane Maldini
*/
class HttpServerOperations extends HttpOperations<HttpServerRequest, HttpServerResponse>
implements HttpServerRequest, HttpServerResponse {
final BiPredicate<HttpServerRequest, HttpServerResponse> configuredCompressionPredicate;
final ConnectionInfo connectionInfo;
final ServerCookieDecoder cookieDecoder;
final ServerCookieEncoder cookieEncoder;
final ServerCookies cookieHolder;
final HttpServerFormDecoderProvider formDecoderProvider;
final boolean isHttp2;
final BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle;
final HttpRequest nettyRequest;
final HttpResponse nettyResponse;
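	// readTimeout bounds the time allowed between consecutive reads of the request,
	// requestTimeout bounds the time allowed for receiving the complete request.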
final Duration readTimeout;
final Duration requestTimeout;
final HttpHeaders responseHeaders;
final String scheme;
final ZonedDateTime timestamp;
BiPredicate<HttpServerRequest, HttpServerResponse> compressionPredicate;
Function<? super String, Map<String, String>> paramsResolver;
String path;
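	// Scheduled request-timeout task; canceled once the last HTTP content has been received.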
Future<?> requestTimeoutFuture;
Consumer<? super HttpHeaders> trailerHeadersConsumer;
volatile Context currentContext;
HttpServerOperations(HttpServerOperations replaced) {
super(replaced);
this.compressionPredicate = replaced.compressionPredicate;
this.configuredCompressionPredicate = replaced.configuredCompressionPredicate;
this.connectionInfo = replaced.connectionInfo;
this.cookieDecoder = replaced.cookieDecoder;
this.cookieEncoder = replaced.cookieEncoder;
this.cookieHolder = replaced.cookieHolder;
this.currentContext = replaced.currentContext;
this.formDecoderProvider = replaced.formDecoderProvider;
this.isHttp2 = replaced.isHttp2;
this.mapHandle = replaced.mapHandle;
this.nettyRequest = replaced.nettyRequest;
this.nettyResponse = replaced.nettyResponse;
this.paramsResolver = replaced.paramsResolver;
this.path = replaced.path;
this.readTimeout = replaced.readTimeout;
this.requestTimeout = replaced.requestTimeout;
this.responseHeaders = replaced.responseHeaders;
this.scheme = replaced.scheme;
this.timestamp = replaced.timestamp;
this.trailerHeadersConsumer = replaced.trailerHeadersConsumer;
}
HttpServerOperations(Connection c, ConnectionObserver listener, HttpRequest nettyRequest,
@Nullable BiPredicate<HttpServerRequest, HttpServerResponse> compressionPredicate,
ConnectionInfo connectionInfo,
ServerCookieDecoder decoder,
ServerCookieEncoder encoder,
HttpServerFormDecoderProvider formDecoderProvider,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
@Nullable BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle,
@Nullable Duration readTimeout,
@Nullable Duration requestTimeout,
boolean secured,
ZonedDateTime timestamp) {
this(c, listener, nettyRequest, compressionPredicate, connectionInfo, decoder, encoder, formDecoderProvider,
httpMessageLogFactory, isHttp2, mapHandle, readTimeout, requestTimeout, true, secured, timestamp);
}
HttpServerOperations(Connection c, ConnectionObserver listener, HttpRequest nettyRequest,
@Nullable BiPredicate<HttpServerRequest, HttpServerResponse> compressionPredicate,
ConnectionInfo connectionInfo,
ServerCookieDecoder decoder,
ServerCookieEncoder encoder,
HttpServerFormDecoderProvider formDecoderProvider,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
@Nullable BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle,
@Nullable Duration readTimeout,
@Nullable Duration requestTimeout,
boolean resolvePath,
boolean secured,
ZonedDateTime timestamp) {
super(c, listener, httpMessageLogFactory);
this.compressionPredicate = compressionPredicate;
this.configuredCompressionPredicate = compressionPredicate;
this.connectionInfo = connectionInfo;
this.cookieDecoder = decoder;
this.cookieEncoder = encoder;
this.cookieHolder = ServerCookies.newServerRequestHolder(nettyRequest.headers(), decoder);
this.currentContext = Context.empty();
this.formDecoderProvider = formDecoderProvider;
this.isHttp2 = isHttp2;
this.mapHandle = mapHandle;
this.nettyRequest = nettyRequest;
this.nettyResponse = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
if (resolvePath) {
this.path = resolvePath(nettyRequest.uri());
}
else {
this.path = null;
}
this.readTimeout = readTimeout;
this.requestTimeout = requestTimeout;
this.responseHeaders = nettyResponse.headers();
this.responseHeaders.set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
this.scheme = secured ? "https" : "http";
this.timestamp = timestamp;
}
@Override
public NettyOutbound sendHeaders() {
if (hasSentHeaders()) {
return this;
}
return then(Mono.empty());
}
@Override
public HttpServerOperations withConnection(Consumer<? super Connection> withConnection) {
Objects.requireNonNull(withConnection, "withConnection");
withConnection.accept(this);
return this;
}
@Override
protected HttpMessage newFullBodyMessage(ByteBuf body) {
HttpResponse res =
new DefaultFullHttpResponse(version(), status(), body);
if (!HttpMethod.HEAD.equals(method())) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING);
int code = status().code();
if (!(HttpResponseStatus.NOT_MODIFIED.code() == code ||
HttpResponseStatus.NO_CONTENT.code() == code)) {
if (HttpUtil.getContentLength(nettyResponse, -1) == -1) {
responseHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, body.readableBytes());
}
}
}
// For HEAD requests:
// - if there is Transfer-Encoding and Content-Length, Transfer-Encoding will be removed
// - if there is only Transfer-Encoding, it will be kept and not replaced by
// Content-Length: body.readableBytes()
// For HEAD requests, the I/O handler may decide to provide only the headers and complete
		// the response. In that case the body will be EMPTY_BUFFER, and setting Content-Length: 0
		// would not be correct.
// https://github.com/reactor/reactor-netty/issues/1333
else if (HttpUtil.getContentLength(nettyResponse, -1) != -1) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING);
}
res.headers().set(responseHeaders);
return res;
}
@Override
public HttpServerResponse addCookie(Cookie cookie) {
if (!hasSentHeaders()) {
this.responseHeaders.add(HttpHeaderNames.SET_COOKIE,
cookieEncoder.encode(cookie));
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerResponse addHeader(CharSequence name, CharSequence value) {
if (!hasSentHeaders()) {
this.responseHeaders.add(name, value);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerOperations chunkedTransfer(boolean chunked) {
if (!hasSentHeaders() && isTransferEncodingChunked(nettyResponse) != chunked) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING);
HttpUtil.setTransferEncodingChunked(nettyResponse, chunked);
}
return this;
}
@Override
public Map<CharSequence, Set<Cookie>> cookies() {
if (cookieHolder != null) {
return cookieHolder.getCachedCookies();
}
throw new IllegalStateException("request not parsed");
}
@Override
public Map<CharSequence, List<Cookie>> allCookies() {
if (cookieHolder != null) {
return cookieHolder.getAllCachedCookies();
}
throw new IllegalStateException("request not parsed");
}
@Override
public Context currentContext() {
return currentContext;
}
@Override
public HttpServerResponse header(CharSequence name, CharSequence value) {
if (!hasSentHeaders()) {
this.responseHeaders.set(name, value);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerResponse headers(HttpHeaders headers) {
if (!hasSentHeaders()) {
this.responseHeaders.set(headers);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public boolean isFormUrlencoded() {
CharSequence mimeType = HttpUtil.getMimeType(nettyRequest);
return mimeType != null &&
HttpHeaderValues.APPLICATION_X_WWW_FORM_URLENCODED.contentEqualsIgnoreCase(mimeType.toString().trim());
}
@Override
public boolean isKeepAlive() {
return HttpUtil.isKeepAlive(nettyRequest);
}
@Override
public boolean isMultipart() {
return HttpPostRequestDecoder.isMultipart(nettyRequest);
}
@Override
public boolean isWebsocket() {
return get(channel()) instanceof WebsocketServerOperations;
}
final boolean isHttp2() {
return isHttp2;
}
@Override
public HttpServerResponse keepAlive(boolean keepAlive) {
HttpUtil.setKeepAlive(nettyResponse, keepAlive);
return this;
}
@Override
public HttpMethod method() {
return nettyRequest.method();
}
@Override
@Nullable
public String param(CharSequence key) {
Objects.requireNonNull(key, "key");
Map<String, String> params = null;
if (paramsResolver != null) {
params = this.paramsResolver.apply(uri());
}
return null != params ? params.get(key.toString()) : null;
}
@Override
@Nullable
public Map<String, String> params() {
return null != paramsResolver ? paramsResolver.apply(uri()) : null;
}
@Override
public HttpServerRequest paramsResolver(Function<? super String, Map<String, String>> paramsResolver) {
this.paramsResolver = paramsResolver;
return this;
}
@Override
public Flux<HttpData> receiveForm() {
return receiveFormInternal(formDecoderProvider);
}
@Override
public Flux<HttpData> receiveForm(Consumer<HttpServerFormDecoderProvider.Builder> formDecoderBuilder) {
Objects.requireNonNull(formDecoderBuilder, "formDecoderBuilder");
HttpServerFormDecoderProvider.Build builder = new HttpServerFormDecoderProvider.Build();
formDecoderBuilder.accept(builder);
HttpServerFormDecoderProvider config = builder.build();
return receiveFormInternal(config);
}
@Override
public Flux<?> receiveObject() {
// Handle the 'Expect: 100-continue' header if necessary.
// TODO: Respond with 413 Request Entity Too Large
// and discard the traffic or close the connection.
// No need to notify the upstream handlers - just log.
// If decoding a response, just throw an error.
if (HttpUtil.is100ContinueExpected(nettyRequest)) {
return FutureMono.deferFuture(() -> {
if (!hasSentHeaders()) {
return channel().writeAndFlush(CONTINUE);
}
return channel().newSucceededFuture();
})
.thenMany(super.receiveObject());
}
else {
return super.receiveObject();
}
}
@Override
@Nullable
public InetSocketAddress hostAddress() {
return this.connectionInfo.getHostAddress();
}
final SocketAddress hostSocketAddress() {
return this.connectionInfo.hostAddress;
}
@Override
@Nullable
public SocketAddress connectionHostAddress() {
return channel().localAddress();
}
@Override
@Nullable
public InetSocketAddress remoteAddress() {
return this.connectionInfo.getRemoteAddress();
}
final SocketAddress remoteSocketAddress() {
return this.connectionInfo.remoteAddress;
}
@Override
@Nullable
public SocketAddress connectionRemoteAddress() {
return channel().remoteAddress();
}
@Override
public HttpHeaders requestHeaders() {
if (nettyRequest != null) {
return nettyRequest.headers();
}
throw new IllegalStateException("request not parsed");
}
@Override
public String scheme() {
return this.connectionInfo.getScheme();
}
@Override
public String connectionScheme() {
return scheme;
}
@Override
public String hostName() {
return connectionInfo.getHostName();
}
@Override
public int hostPort() {
return connectionInfo.getHostPort();
}
@Override
public HttpHeaders responseHeaders() {
return responseHeaders;
}
@Override
public String protocol() {
return nettyRequest.protocolVersion().text();
}
@Override
public ZonedDateTime timestamp() {
return timestamp;
}
@Override
public Mono<Void> send() {
return FutureMono.deferFuture(() -> markSentHeaderAndBody() ?
channel().writeAndFlush(newFullBodyMessage(EMPTY_BUFFER)) :
channel().newSucceededFuture());
}
@Override
public NettyOutbound sendFile(Path file) {
try {
return sendFile(file, 0L, Files.size(file));
}
catch (IOException e) {
if (log.isDebugEnabled()) {
log.debug(format(channel(), "Path not resolved"), e);
}
return then(sendNotFound());
}
}
@Override
public Mono<Void> sendNotFound() {
return this.status(HttpResponseStatus.NOT_FOUND)
.send();
}
@Override
public Mono<Void> sendRedirect(String location) {
Objects.requireNonNull(location, "location");
return this.status(HttpResponseStatus.FOUND)
.header(HttpHeaderNames.LOCATION, location)
.send();
}
/**
	 * @return this response with the {@code Content-Type} header set to {@code text/event-stream} (Server-Sent Events)
*/
@Override
public HttpServerResponse sse() {
header(HttpHeaderNames.CONTENT_TYPE, EVENT_STREAM);
return this;
}
@Override
public HttpResponseStatus status() {
return this.nettyResponse.status();
}
@Override
public HttpServerResponse status(HttpResponseStatus status) {
if (!hasSentHeaders()) {
this.nettyResponse.setStatus(status);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerResponse trailerHeaders(Consumer<? super HttpHeaders> trailerHeaders) {
this.trailerHeadersConsumer = Objects.requireNonNull(trailerHeaders, "trailerHeaders");
return this;
}
@Override
public Mono<Void> sendWebsocket(
BiFunction<? super WebsocketInbound, ? super WebsocketOutbound, ? extends Publisher<Void>> websocketHandler,
WebsocketServerSpec configurer) {
return withWebsocketSupport(uri(), configurer, websocketHandler);
}
@Override
public String uri() {
if (nettyRequest != null) {
return nettyRequest.uri();
}
throw new IllegalStateException("request not parsed");
}
@Override
public String fullPath() {
if (path != null) {
return path;
}
throw new IllegalStateException("request not parsed");
}
@Override
public HttpVersion version() {
if (nettyRequest != null) {
return nettyRequest.protocolVersion();
}
throw new IllegalStateException("request not parsed");
}
@Override
public HttpServerResponse compression(boolean compress) {
compressionPredicate = compress ? configuredCompressionPredicate : COMPRESSION_DISABLED;
if (!compress) {
removeHandler(NettyPipeline.CompressionHandler);
}
else if (channel().pipeline()
.get(NettyPipeline.CompressionHandler) == null) {
SimpleCompressionHandler handler = new SimpleCompressionHandler();
try {
//Do not invoke handler.channelRead as it will trigger ctx.fireChannelRead
handler.decode(channel().pipeline().context(NettyPipeline.ReactiveBridge), nettyRequest);
addHandlerFirst(NettyPipeline.CompressionHandler, handler);
}
catch (Throwable e) {
log.error(format(channel(), ""), e);
}
}
return this;
}
@Override
protected void onInboundNext(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof HttpRequest) {
boolean isFullHttpRequest = msg instanceof FullHttpRequest;
if (!(isHttp2() && isFullHttpRequest)) {
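				// An HTTP/2 request that arrives as a FullHttpRequest is already complete,
				// so there is nothing more to read and no timeouts need to be installed.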
if (readTimeout != null) {
addHandlerFirst(NettyPipeline.ReadTimeoutHandler,
new ReadTimeoutHandler(readTimeout.toMillis(), TimeUnit.MILLISECONDS));
}
if (requestTimeout != null) {
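					// Schedule the request timeout, with a floor of 1ms so that very small
					// configured values still produce a future task that can be canceled.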
requestTimeoutFuture =
ctx.executor().schedule(new RequestTimeoutTask(ctx), Math.max(requestTimeout.toMillis(), 1), TimeUnit.MILLISECONDS);
}
}
try {
listener().onStateChange(this, HttpServerState.REQUEST_RECEIVED);
}
catch (Exception e) {
onInboundError(e);
ReferenceCountUtil.release(msg);
return;
}
if (isFullHttpRequest) {
FullHttpRequest request = (FullHttpRequest) msg;
if (request.content().readableBytes() > 0) {
super.onInboundNext(ctx, msg);
}
else {
request.release();
}
if (isHttp2()) {
//force auto read to enable more accurate close selection now inbound is done
channel().config().setAutoRead(true);
onInboundComplete();
}
}
return;
}
if (msg instanceof HttpContent) {
if (msg != LastHttpContent.EMPTY_LAST_CONTENT) {
super.onInboundNext(ctx, msg);
}
if (msg instanceof LastHttpContent) {
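				// The request has been fully received, so the read/request timeouts no longer apply.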
removeHandler(NettyPipeline.ReadTimeoutHandler);
if (requestTimeoutFuture != null) {
requestTimeoutFuture.cancel(false);
requestTimeoutFuture = null;
}
//force auto read to enable more accurate close selection now inbound is done
channel().config().setAutoRead(true);
onInboundComplete();
}
}
else {
super.onInboundNext(ctx, msg);
}
}
@Override
protected void onInboundClose() {
discardWhenNoReceiver();
if (!(isInboundCancelled() || isInboundDisposed())) {
onInboundError(new AbortedException("Connection has been closed"));
}
terminate();
}
@Override
protected void afterMarkSentHeaders() {
if (compressionPredicate != null && compressionPredicate.test(this, this)) {
compression(true);
}
}
@Override
protected void beforeMarkSentHeaders() {
//noop
}
@Override
protected boolean isContentAlwaysEmpty() {
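		// 204 (No Content), 205 (Reset Content) and 304 (Not Modified) responses carry no
		// message body; for 304 the framing headers are dropped as well.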
int code = status().code();
if (HttpResponseStatus.NOT_MODIFIED.code() == code) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING)
.remove(HttpHeaderNames.CONTENT_LENGTH);
return true;
}
return HttpResponseStatus.NO_CONTENT.code() == code ||
HttpResponseStatus.RESET_CONTENT.code() == code;
}
@Override
protected void onHeadersSent() {
//noop
}
@Override
protected void onOutboundComplete() {
if (isWebsocket()) {
return;
}
final ChannelFuture f;
if (log.isDebugEnabled()) {
log.debug(format(channel(), "Last HTTP response frame"));
}
if (markSentHeaderAndBody()) {
if (log.isDebugEnabled()) {
log.debug(format(channel(), "No sendHeaders() called before complete, sending " +
"zero-length header"));
}
f = channel().writeAndFlush(newFullBodyMessage(EMPTY_BUFFER));
}
else if (markSentBody()) {
HttpHeaders trailerHeaders = null;
// https://datatracker.ietf.org/doc/html/rfc7230#section-4.1.2
// A trailer allows the sender to include additional fields at the end
// of a chunked message in order to supply metadata that might be
// dynamically generated while the message body is sent, such as a
// message integrity check, digital signature, or post-processing
// status.
if (trailerHeadersConsumer != null && isTransferEncodingChunked(nettyResponse)) {
// https://datatracker.ietf.org/doc/html/rfc7230#section-4.4
// When a message includes a message body encoded with the chunked
// transfer coding and the sender desires to send metadata in the form
// of trailer fields at the end of the message, the sender SHOULD
// generate a Trailer header field before the message body to indicate
// which fields will be present in the trailers.
String declaredHeaderNames = responseHeaders.get(HttpHeaderNames.TRAILER);
if (declaredHeaderNames != null) {
trailerHeaders = new TrailerHeaders(declaredHeaderNames);
try {
trailerHeadersConsumer.accept(trailerHeaders);
}
catch (IllegalArgumentException e) {
// A sender MUST NOT generate a trailer when header names are
// HttpServerOperations.TrailerHeaders.DISALLOWED_TRAILER_HEADER_NAMES
log.error(format(channel(), "Cannot apply trailer headers [{}]"), declaredHeaderNames, e);
}
}
}
f = channel().writeAndFlush(trailerHeaders != null && !trailerHeaders.isEmpty() ?
new DefaultLastHttpContent(Unpooled.buffer(0), trailerHeaders) :
LastHttpContent.EMPTY_LAST_CONTENT);
}
else {
discard();
return;
}
f.addListener(s -> {
discard();
if (!s.isSuccess() && log.isDebugEnabled()) {
log.debug(format(channel(), "Failed flushing last frame"), s.cause());
}
});
}
static void cleanHandlerTerminate(Channel ch) {
ChannelOperations<?, ?> ops = get(ch);
if (ops == null) {
return;
}
ops.discard();
//Try to defer the disposing to leave a chance for any synchronous complete following this callback
if (!ops.isSubscriptionDisposed()) {
ch.eventLoop()
.execute(((HttpServerOperations) ops)::terminate);
}
else {
//if already disposed, we can immediately call terminate
((HttpServerOperations) ops).terminate();
}
}
static long requestsCounter(Channel channel) {
HttpServerOperations ops = Connection.from(channel).as(HttpServerOperations.class);
if (ops == null) {
return -1;
}
return ((AtomicLong) ops.connection()).get();
}
static void sendDecodingFailures(
ChannelHandlerContext ctx,
ConnectionObserver listener,
boolean secure,
Throwable t,
Object msg,
HttpMessageLogFactory httpMessageLogFactory,
@Nullable ZonedDateTime timestamp,
@Nullable ConnectionInfo connectionInfo,
SocketAddress remoteAddress) {
sendDecodingFailures(ctx, listener, secure, t, msg, httpMessageLogFactory, false, timestamp, connectionInfo, remoteAddress);
}
@SuppressWarnings("FutureReturnValueIgnored")
static void sendDecodingFailures(
ChannelHandlerContext ctx,
ConnectionObserver listener,
boolean secure,
Throwable t,
Object msg,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
@Nullable ZonedDateTime timestamp,
@Nullable ConnectionInfo connectionInfo,
SocketAddress remoteAddress) {
Throwable cause = t.getCause() != null ? t.getCause() : t;
if (log.isWarnEnabled()) {
log.warn(format(ctx.channel(), "Decoding failed: {}"),
msg instanceof HttpObject ?
httpMessageLogFactory.warn(HttpMessageArgProviderFactory.create(msg)) : msg);
}
ReferenceCountUtil.release(msg);
final HttpResponseStatus status;
if (cause instanceof TooLongHttpLineException) {
status = HttpResponseStatus.REQUEST_URI_TOO_LONG;
}
else if (cause instanceof TooLongHttpHeaderException) {
status = HttpResponseStatus.REQUEST_HEADER_FIELDS_TOO_LARGE;
}
else {
status = HttpResponseStatus.BAD_REQUEST;
}
HttpResponse response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, status);
response.headers()
.setInt(HttpHeaderNames.CONTENT_LENGTH, 0)
.set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
Connection ops = ChannelOperations.get(ctx.channel());
if (ops == null) {
Connection conn = Connection.from(ctx.channel());
if (msg instanceof HttpRequest) {
ops = new FailedHttpServerRequest(conn, listener, (HttpRequest) msg, response, httpMessageLogFactory, isHttp2,
secure, timestamp == null ? ZonedDateTime.now(ReactorNetty.ZONE_ID_SYSTEM) : timestamp,
connectionInfo == null ? new ConnectionInfo(ctx.channel().localAddress(), remoteAddress, secure) : connectionInfo);
ops.bind();
}
else {
ops = conn;
}
}
//"FutureReturnValueIgnored" this is deliberate
ctx.channel().writeAndFlush(response);
listener.onStateChange(ops, REQUEST_DECODING_FAILED);
}
/**
* There is no need to invoke {@link #discard()}; the inbound will
* be canceled on the channel inactive event if there is no subscriber available.
*
* @param err the {@link Throwable} cause
*/
@Override
protected void onOutboundError(Throwable err) {
if (!channel().isActive()) {
super.onOutboundError(err);
return;
}
if (markSentHeaders()) {
log.error(format(channel(), "Error starting response. Replying error status"), err);
nettyResponse.setStatus(HttpResponseStatus.INTERNAL_SERVER_ERROR);
responseHeaders.set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
channel().writeAndFlush(newFullBodyMessage(EMPTY_BUFFER))
.addListener(ChannelFutureListener.CLOSE);
return;
}
markSentBody();
log.error(format(channel(), "Error finishing response. Closing connection"), err);
channel().writeAndFlush(EMPTY_BUFFER)
.addListener(ChannelFutureListener.CLOSE);
}
@Override
protected HttpMessage outboundHttpMessage() {
return nettyResponse;
}
final Flux<HttpData> receiveFormInternal(HttpServerFormDecoderProvider config) {
boolean isMultipart = isMultipart();
if (!Objects.equals(method(), HttpMethod.POST) || !(isFormUrlencoded() || isMultipart)) {
return Flux.error(new IllegalStateException(
"Request is not POST or does not have Content-Type " +
"with value 'application/x-www-form-urlencoded' or 'multipart/form-data'"));
}
return Flux.defer(() ->
config.newHttpPostRequestDecoder(nettyRequest, isMultipart).flatMapMany(decoder ->
receiveObject() // receiveContent uses the filter operator, which buffers, but we don't want that here
.concatMap(object -> {
if (!(object instanceof HttpContent)) {
return Mono.empty();
}
HttpContent httpContent = (HttpContent) object;
if (config.maxInMemorySize > -1) {
httpContent.retain();
}
return config.maxInMemorySize == -1 ?
Flux.using(
() -> decoder.offer(httpContent),
d -> Flux.fromIterable(decoder.currentHttpData(!config.streaming)),
d -> decoder.cleanCurrentHttpData(!config.streaming)) :
Flux.usingWhen(
Mono.fromCallable(() -> decoder.offer(httpContent))
.subscribeOn(config.scheduler)
.doFinally(sig -> httpContent.release()),
d -> Flux.fromIterable(decoder.currentHttpData(true)),
// FIXME Can we have cancellation for the resourceSupplier that will
// cause this one to not be invoked?
d -> Mono.fromRunnable(() -> decoder.cleanCurrentHttpData(true)));
}, 0) // There is no need for prefetch; we already have the buffers in the Reactor Netty inbound queue
.doFinally(sig -> decoder.destroy())));
}
final Mono<Void> withWebsocketSupport(String url,
WebsocketServerSpec websocketServerSpec,
BiFunction<? super WebsocketInbound, ? super WebsocketOutbound, ? extends Publisher<Void>> websocketHandler) {
Objects.requireNonNull(websocketServerSpec, "websocketServerSpec");
Objects.requireNonNull(websocketHandler, "websocketHandler");
if (markSentHeaders()) {
WebsocketServerOperations ops = new WebsocketServerOperations(url, websocketServerSpec, this);
return FutureMono.from(ops.handshakerResult)
.doOnEach(signal -> {
if (!signal.hasError() && (websocketServerSpec.protocols() == null || ops.selectedSubprotocol() != null)) {
websocketHandler.apply(ops, ops)
.subscribe(new WebsocketSubscriber(ops, Context.of(signal.getContextView())));
}
});
}
else {
log.error(format(channel(), "Cannot enable websocket if headers have already been sent"));
}
return Mono.error(new IllegalStateException("Failed to upgrade to websocket"));
}
static final class WebsocketSubscriber implements CoreSubscriber<Void>, ChannelFutureListener {
final WebsocketServerOperations ops;
final Context context;
WebsocketSubscriber(WebsocketServerOperations ops, Context context) {
this.ops = ops;
this.context = context;
}
@Override
public void onSubscribe(Subscription s) {
s.request(Long.MAX_VALUE);
}
@Override
public void onNext(Void aVoid) {
}
@Override
public void onError(Throwable t) {
ops.onError(t);
}
@Override
public void operationComplete(ChannelFuture future) {
ops.terminate();
}
@Override
public void onComplete() {
if (ops.channel()
.isActive()) {
ops.sendCloseNow(new CloseWebSocketFrame(WebSocketCloseStatus.NORMAL_CLOSURE), this);
}
}
@Override
public Context currentContext() {
return context;
}
}
static final Logger log = Loggers.getLogger(HttpServerOperations.class);
static final AsciiString EVENT_STREAM = new AsciiString("text/event-stream");
static final BiPredicate<HttpServerRequest, HttpServerResponse> COMPRESSION_DISABLED = (req, res) -> false;
static final FullHttpResponse CONTINUE =
new DefaultFullHttpResponse(HttpVersion.HTTP_1_1,
HttpResponseStatus.CONTINUE,
EMPTY_BUFFER);
static final class FailedHttpServerRequest extends HttpServerOperations {
final HttpResponse customResponse;
FailedHttpServerRequest(
Connection c,
ConnectionObserver listener,
HttpRequest nettyRequest,
HttpResponse nettyResponse,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
boolean secure,
ZonedDateTime timestamp,
ConnectionInfo connectionInfo) {
super(c, listener, nettyRequest, null, connectionInfo,
ServerCookieDecoder.STRICT, ServerCookieEncoder.STRICT, DEFAULT_FORM_DECODER_SPEC, httpMessageLogFactory, isHttp2,
null, null, null, false, secure, timestamp);
this.customResponse = nettyResponse;
String tempPath = "";
try {
tempPath = resolvePath(nettyRequest.uri());
}
catch (RuntimeException e) {
tempPath = "/bad-request";
}
finally {
this.path = tempPath;
}
}
@Override
protected HttpMessage outboundHttpMessage() {
return customResponse;
}
@Override
public HttpResponseStatus status() {
return customResponse.status();
}
}
final class RequestTimeoutTask implements Runnable {
final ChannelHandlerContext ctx;
RequestTimeoutTask(ChannelHandlerContext ctx) {
this.ctx = ctx;
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void run() {
if (ctx.channel().isActive() && !(isInboundCancelled() || isInboundDisposed())) {
onInboundError(RequestTimeoutException.INSTANCE);
//"FutureReturnValueIgnored" this is deliberate
ctx.close();
}
}
}
static final class TrailerHeaders extends DefaultHttpHeaders {
static final Set<String> DISALLOWED_TRAILER_HEADER_NAMES = new HashSet<>(14);
static {
// https://datatracker.ietf.org/doc/html/rfc7230#section-4.1.2
// A sender MUST NOT generate a trailer that contains a field necessary
// for message framing (e.g., Transfer-Encoding and Content-Length),
// routing (e.g., Host), request modifiers (e.g., controls and
// conditionals in Section 5 of [RFC7231]), authentication (e.g., see
// [RFC7235] and [RFC6265]), response control data (e.g., see Section
// 7.1 of [RFC7231]), or determining how to process the payload (e.g.,
// Content-Encoding, Content-Type, Content-Range, and Trailer).
DISALLOWED_TRAILER_HEADER_NAMES.add("age");
DISALLOWED_TRAILER_HEADER_NAMES.add("cache-control");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-encoding");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-length");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-range");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-type");
DISALLOWED_TRAILER_HEADER_NAMES.add("date");
DISALLOWED_TRAILER_HEADER_NAMES.add("expires");
DISALLOWED_TRAILER_HEADER_NAMES.add("location");
DISALLOWED_TRAILER_HEADER_NAMES.add("retry-after");
DISALLOWED_TRAILER_HEADER_NAMES.add("trailer");
DISALLOWED_TRAILER_HEADER_NAMES.add("transfer-encoding");
DISALLOWED_TRAILER_HEADER_NAMES.add("vary");
DISALLOWED_TRAILER_HEADER_NAMES.add("warning");
}
TrailerHeaders(String declaredHeaderNames) {
super(true, new TrailerNameValidator(filterHeaderNames(declaredHeaderNames)));
}
static Set<String> filterHeaderNames(String declaredHeaderNames) {
Objects.requireNonNull(declaredHeaderNames, "declaredHeaderNames");
Set<String> result = new HashSet<>();
String[] names = declaredHeaderNames.split(",", -1);
for (String name : names) {
String trimmedStr = name.trim();
if (trimmedStr.isEmpty() ||
DISALLOWED_TRAILER_HEADER_NAMES.contains(trimmedStr.toLowerCase(Locale.ENGLISH))) {
continue;
}
result.add(trimmedStr);
}
return result;
}
static final class TrailerNameValidator implements DefaultHeaders.NameValidator<CharSequence> {
/**
* Contains the header names specified with {@link HttpHeaderNames#TRAILER}.
*/
final Set<String> declaredHeaderNames;
TrailerNameValidator(Set<String> declaredHeaderNames) {
this.declaredHeaderNames = declaredHeaderNames;
}
@Override
public void validateName(CharSequence name) {
if (!declaredHeaderNames.contains(name.toString())) {
throw new IllegalArgumentException("Trailer header name [" + name +
"] not declared with [Trailer] header, or it is not a valid trailer header name");
}
}
}
}
}
| violetagg | cc1d8e82d5fe578f1144f5aceb62a6554bbd5be2 | 70f5161fc5245774ac5d3491026af95952a72325 | Is it possible to avoid scheduling the timers in case we receive a full H2 or H2C request?
If I'm correct, when receiving for example a full H2/H2C GET request, the msg may be an instance of **DefaultFullHttpRequest**, meaning that we will first arm the timers here:
```
protected void onInboundNext(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof HttpRequest) {
if (readTimeout != null) {
--> addHandlerFirst(NettyPipeline.ReadTimeoutHandler,
new ReadTimeoutHandler(readTimeout.toMillis(), TimeUnit.MILLISECONDS));
}
if (requestTimeout != null) {
--> requestTimeoutFuture =
ctx.executor().schedule(new RequestTimeoutTask(ctx), Math.max(requestTimeout.toMillis(), 1), TimeUnit.MILLISECONDS);
}
```
and then we will immediately cancel the timers here:
```
@Override
protected void onInboundNext(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof HttpRequest) {
if (readTimeout != null) {
addHandlerFirst(NettyPipeline.ReadTimeoutHandler,
new ReadTimeoutHandler(readTimeout.toMillis(), TimeUnit.MILLISECONDS));
}
if (requestTimeout != null) {
requestTimeoutFuture =
ctx.executor().schedule(new RequestTimeoutTask(ctx), Math.max(requestTimeout.toMillis(), 1), TimeUnit.MILLISECONDS);
}
try {
listener().onStateChange(this, HttpServerState.REQUEST_RECEIVED);
}
catch (Exception e) {
onInboundError(e);
ReferenceCountUtil.release(msg);
return;
}
if (msg instanceof FullHttpRequest) {
FullHttpRequest request = (FullHttpRequest) msg;
if (request.content().readableBytes() > 0) {
super.onInboundNext(ctx, msg);
}
else {
request.release();
}
if (isHttp2()) {
--> removeHandler(NettyPipeline.ReadTimeoutHandler);
if (requestTimeoutFuture != null) {
--> requestTimeoutFuture.cancel(false);
requestTimeoutFuture = null;
}
```
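For illustration, a rough sketch of what I mean (untested, not the actual patch; `inboundAlreadyDone` is just a made-up local) that would skip arming the timers when the inbound is already complete:
```java
// Hypothetical guard: an HTTP/2 request that arrives as a single
// FullHttpRequest has a complete inbound, so arming the read/request
// timers only to cancel them a few lines later is wasted work.
boolean inboundAlreadyDone = isHttp2() && msg instanceof FullHttpRequest;
if (!inboundAlreadyDone) {
    if (readTimeout != null) {
        addHandlerFirst(NettyPipeline.ReadTimeoutHandler,
                new ReadTimeoutHandler(readTimeout.toMillis(), TimeUnit.MILLISECONDS));
    }
    if (requestTimeout != null) {
        requestTimeoutFuture =
                ctx.executor().schedule(new RequestTimeoutTask(ctx),
                        Math.max(requestTimeout.toMillis(), 1), TimeUnit.MILLISECONDS);
    }
}
```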
Maybe we could avoid scheduling and cancelling the timers in this case, along the lines of the sketch above? | pderop | 10
reactor/reactor-netty | 2836 | `HttpServer`: Add API for read related timeouts | Fixes #2770 | null | 2023-06-19 06:36:05+00:00 | 2023-06-20 16:47:29+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/HttpServerOperations.java | /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.ZonedDateTime;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.function.Consumer;
import java.util.function.Function;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.DefaultHeaders;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.DefaultHttpHeaders;
import io.netty.handler.codec.http.DefaultHttpResponse;
import io.netty.handler.codec.http.DefaultLastHttpContent;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.codec.http.LastHttpContent;
import io.netty.handler.codec.http.TooLongHttpHeaderException;
import io.netty.handler.codec.http.TooLongHttpLineException;
import io.netty.handler.codec.http.cookie.Cookie;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.handler.codec.http.multipart.HttpData;
import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
import io.netty.handler.codec.http.websocketx.CloseWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketCloseStatus;
import io.netty.util.AsciiString;
import io.netty.util.ReferenceCountUtil;
import org.reactivestreams.Publisher;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.netty.ConnectionObserver;
import reactor.netty.FutureMono;
import reactor.netty.NettyOutbound;
import reactor.netty.NettyPipeline;
import reactor.netty.ReactorNetty;
import reactor.netty.channel.AbortedException;
import reactor.netty.channel.ChannelOperations;
import reactor.netty.http.HttpOperations;
import reactor.netty.http.logging.HttpMessageArgProviderFactory;
import reactor.netty.http.logging.HttpMessageLogFactory;
import reactor.netty.http.websocket.WebsocketInbound;
import reactor.netty.http.websocket.WebsocketOutbound;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import static io.netty.buffer.Unpooled.EMPTY_BUFFER;
import static io.netty.handler.codec.http.HttpUtil.isTransferEncodingChunked;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.http.server.HttpServerFormDecoderProvider.DEFAULT_FORM_DECODER_SPEC;
import static reactor.netty.http.server.HttpServerState.REQUEST_DECODING_FAILED;
/**
* Conversion between Netty types and Reactor types ({@link HttpOperations}).
*
* @author Stephane Maldini
*/
class HttpServerOperations extends HttpOperations<HttpServerRequest, HttpServerResponse>
implements HttpServerRequest, HttpServerResponse {
final BiPredicate<HttpServerRequest, HttpServerResponse> configuredCompressionPredicate;
final ConnectionInfo connectionInfo;
final ServerCookieDecoder cookieDecoder;
final ServerCookieEncoder cookieEncoder;
final ServerCookies cookieHolder;
final HttpServerFormDecoderProvider formDecoderProvider;
final boolean isHttp2;
final BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle;
final HttpRequest nettyRequest;
final HttpResponse nettyResponse;
final HttpHeaders responseHeaders;
final String scheme;
final ZonedDateTime timestamp;
BiPredicate<HttpServerRequest, HttpServerResponse> compressionPredicate;
Function<? super String, Map<String, String>> paramsResolver;
String path;
Consumer<? super HttpHeaders> trailerHeadersConsumer;
volatile Context currentContext;
HttpServerOperations(HttpServerOperations replaced) {
super(replaced);
this.compressionPredicate = replaced.compressionPredicate;
this.configuredCompressionPredicate = replaced.configuredCompressionPredicate;
this.connectionInfo = replaced.connectionInfo;
this.cookieDecoder = replaced.cookieDecoder;
this.cookieEncoder = replaced.cookieEncoder;
this.cookieHolder = replaced.cookieHolder;
this.currentContext = replaced.currentContext;
this.formDecoderProvider = replaced.formDecoderProvider;
this.isHttp2 = replaced.isHttp2;
this.mapHandle = replaced.mapHandle;
this.nettyRequest = replaced.nettyRequest;
this.nettyResponse = replaced.nettyResponse;
this.paramsResolver = replaced.paramsResolver;
this.path = replaced.path;
this.responseHeaders = replaced.responseHeaders;
this.scheme = replaced.scheme;
this.timestamp = replaced.timestamp;
this.trailerHeadersConsumer = replaced.trailerHeadersConsumer;
}
HttpServerOperations(Connection c, ConnectionObserver listener, HttpRequest nettyRequest,
@Nullable BiPredicate<HttpServerRequest, HttpServerResponse> compressionPredicate,
ConnectionInfo connectionInfo,
ServerCookieDecoder decoder,
ServerCookieEncoder encoder,
HttpServerFormDecoderProvider formDecoderProvider,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
@Nullable BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle,
boolean secured,
ZonedDateTime timestamp) {
this(c, listener, nettyRequest, compressionPredicate, connectionInfo, decoder, encoder, formDecoderProvider,
httpMessageLogFactory, isHttp2, mapHandle, true, secured, timestamp);
}
HttpServerOperations(Connection c, ConnectionObserver listener, HttpRequest nettyRequest,
@Nullable BiPredicate<HttpServerRequest, HttpServerResponse> compressionPredicate,
ConnectionInfo connectionInfo,
ServerCookieDecoder decoder,
ServerCookieEncoder encoder,
HttpServerFormDecoderProvider formDecoderProvider,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
@Nullable BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle,
boolean resolvePath,
boolean secured,
ZonedDateTime timestamp) {
super(c, listener, httpMessageLogFactory);
this.compressionPredicate = compressionPredicate;
this.configuredCompressionPredicate = compressionPredicate;
this.connectionInfo = connectionInfo;
this.cookieDecoder = decoder;
this.cookieEncoder = encoder;
this.cookieHolder = ServerCookies.newServerRequestHolder(nettyRequest.headers(), decoder);
this.currentContext = Context.empty();
this.formDecoderProvider = formDecoderProvider;
this.isHttp2 = isHttp2;
this.mapHandle = mapHandle;
this.nettyRequest = nettyRequest;
this.nettyResponse = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
if (resolvePath) {
this.path = resolvePath(nettyRequest.uri());
}
else {
this.path = null;
}
this.responseHeaders = nettyResponse.headers();
this.responseHeaders.set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
this.scheme = secured ? "https" : "http";
this.timestamp = timestamp;
}
@Override
public NettyOutbound sendHeaders() {
if (hasSentHeaders()) {
return this;
}
return then(Mono.empty());
}
@Override
public HttpServerOperations withConnection(Consumer<? super Connection> withConnection) {
Objects.requireNonNull(withConnection, "withConnection");
withConnection.accept(this);
return this;
}
@Override
protected HttpMessage newFullBodyMessage(ByteBuf body) {
HttpResponse res =
new DefaultFullHttpResponse(version(), status(), body);
if (!HttpMethod.HEAD.equals(method())) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING);
int code = status().code();
if (!(HttpResponseStatus.NOT_MODIFIED.code() == code ||
HttpResponseStatus.NO_CONTENT.code() == code)) {
if (HttpUtil.getContentLength(nettyResponse, -1) == -1) {
responseHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, body.readableBytes());
}
}
}
// For HEAD requests:
// - if there is Transfer-Encoding and Content-Length, Transfer-Encoding will be removed
// - if there is only Transfer-Encoding, it will be kept and not replaced by
// Content-Length: body.readableBytes()
// For HEAD requests, the I/O handler may decide to provide only the headers and complete
// the response. In that case body will be EMPTY_BUFFER and if we set Content-Length: 0,
// this will not be correct
// https://github.com/reactor/reactor-netty/issues/1333
else if (HttpUtil.getContentLength(nettyResponse, -1) != -1) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING);
}
res.headers().set(responseHeaders);
return res;
}
@Override
public HttpServerResponse addCookie(Cookie cookie) {
if (!hasSentHeaders()) {
this.responseHeaders.add(HttpHeaderNames.SET_COOKIE,
cookieEncoder.encode(cookie));
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerResponse addHeader(CharSequence name, CharSequence value) {
if (!hasSentHeaders()) {
this.responseHeaders.add(name, value);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerOperations chunkedTransfer(boolean chunked) {
if (!hasSentHeaders() && isTransferEncodingChunked(nettyResponse) != chunked) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING);
HttpUtil.setTransferEncodingChunked(nettyResponse, chunked);
}
return this;
}
@Override
public Map<CharSequence, Set<Cookie>> cookies() {
if (cookieHolder != null) {
return cookieHolder.getCachedCookies();
}
throw new IllegalStateException("request not parsed");
}
@Override
public Map<CharSequence, List<Cookie>> allCookies() {
if (cookieHolder != null) {
return cookieHolder.getAllCachedCookies();
}
throw new IllegalStateException("request not parsed");
}
@Override
public Context currentContext() {
return currentContext;
}
@Override
public HttpServerResponse header(CharSequence name, CharSequence value) {
if (!hasSentHeaders()) {
this.responseHeaders.set(name, value);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerResponse headers(HttpHeaders headers) {
if (!hasSentHeaders()) {
this.responseHeaders.set(headers);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public boolean isFormUrlencoded() {
CharSequence mimeType = HttpUtil.getMimeType(nettyRequest);
return mimeType != null &&
HttpHeaderValues.APPLICATION_X_WWW_FORM_URLENCODED.contentEqualsIgnoreCase(mimeType.toString().trim());
}
@Override
public boolean isKeepAlive() {
return HttpUtil.isKeepAlive(nettyRequest);
}
@Override
public boolean isMultipart() {
return HttpPostRequestDecoder.isMultipart(nettyRequest);
}
@Override
public boolean isWebsocket() {
return get(channel()) instanceof WebsocketServerOperations;
}
final boolean isHttp2() {
return isHttp2;
}
@Override
public HttpServerResponse keepAlive(boolean keepAlive) {
HttpUtil.setKeepAlive(nettyResponse, keepAlive);
return this;
}
@Override
public HttpMethod method() {
return nettyRequest.method();
}
@Override
@Nullable
public String param(CharSequence key) {
Objects.requireNonNull(key, "key");
Map<String, String> params = null;
if (paramsResolver != null) {
params = this.paramsResolver.apply(uri());
}
return null != params ? params.get(key.toString()) : null;
}
@Override
@Nullable
public Map<String, String> params() {
return null != paramsResolver ? paramsResolver.apply(uri()) : null;
}
@Override
public HttpServerRequest paramsResolver(Function<? super String, Map<String, String>> paramsResolver) {
this.paramsResolver = paramsResolver;
return this;
}
@Override
public Flux<HttpData> receiveForm() {
return receiveFormInternal(formDecoderProvider);
}
@Override
public Flux<HttpData> receiveForm(Consumer<HttpServerFormDecoderProvider.Builder> formDecoderBuilder) {
Objects.requireNonNull(formDecoderBuilder, "formDecoderBuilder");
HttpServerFormDecoderProvider.Build builder = new HttpServerFormDecoderProvider.Build();
formDecoderBuilder.accept(builder);
HttpServerFormDecoderProvider config = builder.build();
return receiveFormInternal(config);
}
@Override
public Flux<?> receiveObject() {
// Handle the 'Expect: 100-continue' header if necessary.
// TODO: Respond with 413 Request Entity Too Large
// and discard the traffic or close the connection.
// No need to notify the upstream handlers - just log.
// If decoding a response, just throw an error.
if (HttpUtil.is100ContinueExpected(nettyRequest)) {
return FutureMono.deferFuture(() -> {
if (!hasSentHeaders()) {
return channel().writeAndFlush(CONTINUE);
}
return channel().newSucceededFuture();
})
.thenMany(super.receiveObject());
}
else {
return super.receiveObject();
}
}
@Override
@Nullable
public InetSocketAddress hostAddress() {
return this.connectionInfo.getHostAddress();
}
final SocketAddress hostSocketAddress() {
return this.connectionInfo.hostAddress;
}
@Override
@Nullable
public SocketAddress connectionHostAddress() {
return channel().localAddress();
}
@Override
@Nullable
public InetSocketAddress remoteAddress() {
return this.connectionInfo.getRemoteAddress();
}
final SocketAddress remoteSocketAddress() {
return this.connectionInfo.remoteAddress;
}
@Override
@Nullable
public SocketAddress connectionRemoteAddress() {
return channel().remoteAddress();
}
@Override
public HttpHeaders requestHeaders() {
if (nettyRequest != null) {
return nettyRequest.headers();
}
throw new IllegalStateException("request not parsed");
}
@Override
public String scheme() {
return this.connectionInfo.getScheme();
}
@Override
public String connectionScheme() {
return scheme;
}
@Override
public String hostName() {
return connectionInfo.getHostName();
}
@Override
public int hostPort() {
return connectionInfo.getHostPort();
}
@Override
public HttpHeaders responseHeaders() {
return responseHeaders;
}
@Override
public String protocol() {
return nettyRequest.protocolVersion().text();
}
@Override
public ZonedDateTime timestamp() {
return timestamp;
}
@Override
public Mono<Void> send() {
return FutureMono.deferFuture(() -> markSentHeaderAndBody() ?
channel().writeAndFlush(newFullBodyMessage(EMPTY_BUFFER)) :
channel().newSucceededFuture());
}
@Override
public NettyOutbound sendFile(Path file) {
try {
return sendFile(file, 0L, Files.size(file));
}
catch (IOException e) {
if (log.isDebugEnabled()) {
log.debug(format(channel(), "Path not resolved"), e);
}
return then(sendNotFound());
}
}
@Override
public Mono<Void> sendNotFound() {
return this.status(HttpResponseStatus.NOT_FOUND)
.send();
}
@Override
public Mono<Void> sendRedirect(String location) {
Objects.requireNonNull(location, "location");
return this.status(HttpResponseStatus.FOUND)
.header(HttpHeaderNames.LOCATION, location)
.send();
}
/**
* @return this response, with the Content-Type header set for Server-Sent Events (text/event-stream)
*/
@Override
public HttpServerResponse sse() {
header(HttpHeaderNames.CONTENT_TYPE, EVENT_STREAM);
return this;
}
@Override
public HttpResponseStatus status() {
return this.nettyResponse.status();
}
@Override
public HttpServerResponse status(HttpResponseStatus status) {
if (!hasSentHeaders()) {
this.nettyResponse.setStatus(status);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerResponse trailerHeaders(Consumer<? super HttpHeaders> trailerHeaders) {
this.trailerHeadersConsumer = Objects.requireNonNull(trailerHeaders, "trailerHeaders");
return this;
}
@Override
public Mono<Void> sendWebsocket(
BiFunction<? super WebsocketInbound, ? super WebsocketOutbound, ? extends Publisher<Void>> websocketHandler,
WebsocketServerSpec configurer) {
return withWebsocketSupport(uri(), configurer, websocketHandler);
}
@Override
public String uri() {
if (nettyRequest != null) {
return nettyRequest.uri();
}
throw new IllegalStateException("request not parsed");
}
@Override
public String fullPath() {
if (path != null) {
return path;
}
throw new IllegalStateException("request not parsed");
}
@Override
public HttpVersion version() {
if (nettyRequest != null) {
return nettyRequest.protocolVersion();
}
throw new IllegalStateException("request not parsed");
}
@Override
public HttpServerResponse compression(boolean compress) {
compressionPredicate = compress ? configuredCompressionPredicate : COMPRESSION_DISABLED;
if (!compress) {
removeHandler(NettyPipeline.CompressionHandler);
}
else if (channel().pipeline()
.get(NettyPipeline.CompressionHandler) == null) {
SimpleCompressionHandler handler = new SimpleCompressionHandler();
try {
//Do not invoke handler.channelRead as it will trigger ctx.fireChannelRead
handler.decode(channel().pipeline().context(NettyPipeline.ReactiveBridge), nettyRequest);
addHandlerFirst(NettyPipeline.CompressionHandler, handler);
}
catch (Throwable e) {
log.error(format(channel(), ""), e);
}
}
return this;
}
@Override
protected void onInboundNext(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof HttpRequest) {
try {
listener().onStateChange(this, HttpServerState.REQUEST_RECEIVED);
}
catch (Exception e) {
onInboundError(e);
ReferenceCountUtil.release(msg);
return;
}
if (msg instanceof FullHttpRequest) {
FullHttpRequest request = (FullHttpRequest) msg;
if (request.content().readableBytes() > 0) {
super.onInboundNext(ctx, msg);
}
else {
request.release();
}
if (isHttp2()) {
//force auto read to enable more accurate close selection now inbound is done
channel().config().setAutoRead(true);
onInboundComplete();
}
}
return;
}
if (msg instanceof HttpContent) {
if (msg != LastHttpContent.EMPTY_LAST_CONTENT) {
super.onInboundNext(ctx, msg);
}
if (msg instanceof LastHttpContent) {
//force auto read to enable more accurate close selection now inbound is done
channel().config().setAutoRead(true);
onInboundComplete();
}
}
else {
super.onInboundNext(ctx, msg);
}
}
@Override
protected void onInboundClose() {
discardWhenNoReceiver();
if (!(isInboundCancelled() || isInboundDisposed())) {
onInboundError(new AbortedException("Connection has been closed"));
}
terminate();
}
@Override
protected void afterMarkSentHeaders() {
if (compressionPredicate != null && compressionPredicate.test(this, this)) {
compression(true);
}
}
@Override
protected void beforeMarkSentHeaders() {
//noop
}
@Override
protected boolean isContentAlwaysEmpty() {
int code = status().code();
if (HttpResponseStatus.NOT_MODIFIED.code() == code) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING)
.remove(HttpHeaderNames.CONTENT_LENGTH);
return true;
}
return HttpResponseStatus.NO_CONTENT.code() == code ||
HttpResponseStatus.RESET_CONTENT.code() == code;
}
@Override
protected void onHeadersSent() {
//noop
}
@Override
protected void onOutboundComplete() {
if (isWebsocket()) {
return;
}
final ChannelFuture f;
if (log.isDebugEnabled()) {
log.debug(format(channel(), "Last HTTP response frame"));
}
if (markSentHeaderAndBody()) {
if (log.isDebugEnabled()) {
log.debug(format(channel(), "No sendHeaders() called before complete, sending " +
"zero-length header"));
}
f = channel().writeAndFlush(newFullBodyMessage(EMPTY_BUFFER));
}
else if (markSentBody()) {
HttpHeaders trailerHeaders = null;
// https://datatracker.ietf.org/doc/html/rfc7230#section-4.1.2
// A trailer allows the sender to include additional fields at the end
// of a chunked message in order to supply metadata that might be
// dynamically generated while the message body is sent, such as a
// message integrity check, digital signature, or post-processing
// status.
if (trailerHeadersConsumer != null && isTransferEncodingChunked(nettyResponse)) {
// https://datatracker.ietf.org/doc/html/rfc7230#section-4.4
// When a message includes a message body encoded with the chunked
// transfer coding and the sender desires to send metadata in the form
// of trailer fields at the end of the message, the sender SHOULD
// generate a Trailer header field before the message body to indicate
// which fields will be present in the trailers.
String declaredHeaderNames = responseHeaders.get(HttpHeaderNames.TRAILER);
if (declaredHeaderNames != null) {
trailerHeaders = new TrailerHeaders(declaredHeaderNames);
try {
trailerHeadersConsumer.accept(trailerHeaders);
}
catch (IllegalArgumentException e) {
// A sender MUST NOT generate a trailer when header names are
// HttpServerOperations.TrailerHeaders.DISALLOWED_TRAILER_HEADER_NAMES
log.error(format(channel(), "Cannot apply trailer headers [{}]"), declaredHeaderNames, e);
}
}
}
f = channel().writeAndFlush(trailerHeaders != null && !trailerHeaders.isEmpty() ?
new DefaultLastHttpContent(Unpooled.buffer(0), trailerHeaders) :
LastHttpContent.EMPTY_LAST_CONTENT);
}
else {
discard();
return;
}
f.addListener(s -> {
discard();
if (!s.isSuccess() && log.isDebugEnabled()) {
log.debug(format(channel(), "Failed flushing last frame"), s.cause());
}
});
}
static void cleanHandlerTerminate(Channel ch) {
ChannelOperations<?, ?> ops = get(ch);
if (ops == null) {
return;
}
ops.discard();
//Try to defer the disposing to leave a chance for any synchronous complete following this callback
if (!ops.isSubscriptionDisposed()) {
ch.eventLoop()
.execute(((HttpServerOperations) ops)::terminate);
}
else {
//if already disposed, we can immediately call terminate
((HttpServerOperations) ops).terminate();
}
}
static long requestsCounter(Channel channel) {
HttpServerOperations ops = Connection.from(channel).as(HttpServerOperations.class);
if (ops == null) {
return -1;
}
return ((AtomicLong) ops.connection()).get();
}
static void sendDecodingFailures(
ChannelHandlerContext ctx,
ConnectionObserver listener,
boolean secure,
Throwable t,
Object msg,
HttpMessageLogFactory httpMessageLogFactory,
@Nullable ZonedDateTime timestamp,
@Nullable ConnectionInfo connectionInfo,
SocketAddress remoteAddress) {
sendDecodingFailures(ctx, listener, secure, t, msg, httpMessageLogFactory, false, timestamp, connectionInfo, remoteAddress);
}
@SuppressWarnings("FutureReturnValueIgnored")
static void sendDecodingFailures(
ChannelHandlerContext ctx,
ConnectionObserver listener,
boolean secure,
Throwable t,
Object msg,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
@Nullable ZonedDateTime timestamp,
@Nullable ConnectionInfo connectionInfo,
SocketAddress remoteAddress) {
Throwable cause = t.getCause() != null ? t.getCause() : t;
if (log.isWarnEnabled()) {
log.warn(format(ctx.channel(), "Decoding failed: {}"),
msg instanceof HttpObject ?
httpMessageLogFactory.warn(HttpMessageArgProviderFactory.create(msg)) : msg);
}
ReferenceCountUtil.release(msg);
final HttpResponseStatus status;
if (cause instanceof TooLongHttpLineException) {
status = HttpResponseStatus.REQUEST_URI_TOO_LONG;
}
else if (cause instanceof TooLongHttpHeaderException) {
status = HttpResponseStatus.REQUEST_HEADER_FIELDS_TOO_LARGE;
}
else {
status = HttpResponseStatus.BAD_REQUEST;
}
HttpResponse response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, status);
response.headers()
.setInt(HttpHeaderNames.CONTENT_LENGTH, 0)
.set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
Connection ops = ChannelOperations.get(ctx.channel());
if (ops == null) {
Connection conn = Connection.from(ctx.channel());
if (msg instanceof HttpRequest) {
ops = new FailedHttpServerRequest(conn, listener, (HttpRequest) msg, response, httpMessageLogFactory, isHttp2,
secure, timestamp == null ? ZonedDateTime.now(ReactorNetty.ZONE_ID_SYSTEM) : timestamp,
connectionInfo == null ? new ConnectionInfo(ctx.channel().localAddress(), remoteAddress, secure) : connectionInfo);
ops.bind();
}
else {
ops = conn;
}
}
//"FutureReturnValueIgnored" this is deliberate
ctx.channel().writeAndFlush(response);
listener.onStateChange(ops, REQUEST_DECODING_FAILED);
}
/**
* There is no need to invoke {@link #discard()}; the inbound will
* be canceled on the channel inactive event if there is no subscriber available.
*
* @param err the {@link Throwable} cause
*/
@Override
protected void onOutboundError(Throwable err) {
if (!channel().isActive()) {
super.onOutboundError(err);
return;
}
if (markSentHeaders()) {
log.error(format(channel(), "Error starting response. Replying error status"), err);
nettyResponse.setStatus(HttpResponseStatus.INTERNAL_SERVER_ERROR);
responseHeaders.set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
channel().writeAndFlush(newFullBodyMessage(EMPTY_BUFFER))
.addListener(ChannelFutureListener.CLOSE);
return;
}
markSentBody();
log.error(format(channel(), "Error finishing response. Closing connection"), err);
channel().writeAndFlush(EMPTY_BUFFER)
.addListener(ChannelFutureListener.CLOSE);
}
@Override
protected HttpMessage outboundHttpMessage() {
return nettyResponse;
}
final Flux<HttpData> receiveFormInternal(HttpServerFormDecoderProvider config) {
boolean isMultipart = isMultipart();
if (!Objects.equals(method(), HttpMethod.POST) || !(isFormUrlencoded() || isMultipart)) {
return Flux.error(new IllegalStateException(
"Request is not POST or does not have Content-Type " +
"with value 'application/x-www-form-urlencoded' or 'multipart/form-data'"));
}
return Flux.defer(() ->
config.newHttpPostRequestDecoder(nettyRequest, isMultipart).flatMapMany(decoder ->
receiveObject() // receiveContent uses the filter operator, which buffers, but we don't want that here
.concatMap(object -> {
if (!(object instanceof HttpContent)) {
return Mono.empty();
}
HttpContent httpContent = (HttpContent) object;
if (config.maxInMemorySize > -1) {
httpContent.retain();
}
return config.maxInMemorySize == -1 ?
Flux.using(
() -> decoder.offer(httpContent),
d -> Flux.fromIterable(decoder.currentHttpData(!config.streaming)),
d -> decoder.cleanCurrentHttpData(!config.streaming)) :
Flux.usingWhen(
Mono.fromCallable(() -> decoder.offer(httpContent))
.subscribeOn(config.scheduler)
.doFinally(sig -> httpContent.release()),
d -> Flux.fromIterable(decoder.currentHttpData(true)),
// FIXME Can we have cancellation for the resourceSupplier that will
// cause this one to not be invoked?
d -> Mono.fromRunnable(() -> decoder.cleanCurrentHttpData(true)));
}, 0) // There is no need for prefetch; we already have the buffers in the Reactor Netty inbound queue
.doFinally(sig -> decoder.destroy())));
}
final Mono<Void> withWebsocketSupport(String url,
WebsocketServerSpec websocketServerSpec,
BiFunction<? super WebsocketInbound, ? super WebsocketOutbound, ? extends Publisher<Void>> websocketHandler) {
Objects.requireNonNull(websocketServerSpec, "websocketServerSpec");
Objects.requireNonNull(websocketHandler, "websocketHandler");
if (markSentHeaders()) {
WebsocketServerOperations ops = new WebsocketServerOperations(url, websocketServerSpec, this);
return FutureMono.from(ops.handshakerResult)
.doOnEach(signal -> {
if (!signal.hasError() && (websocketServerSpec.protocols() == null || ops.selectedSubprotocol() != null)) {
websocketHandler.apply(ops, ops)
.subscribe(new WebsocketSubscriber(ops, Context.of(signal.getContextView())));
}
});
}
else {
log.error(format(channel(), "Cannot enable websocket if headers have already been sent"));
}
return Mono.error(new IllegalStateException("Failed to upgrade to websocket"));
}
static final class WebsocketSubscriber implements CoreSubscriber<Void>, ChannelFutureListener {
final WebsocketServerOperations ops;
final Context context;
WebsocketSubscriber(WebsocketServerOperations ops, Context context) {
this.ops = ops;
this.context = context;
}
@Override
public void onSubscribe(Subscription s) {
s.request(Long.MAX_VALUE);
}
@Override
public void onNext(Void aVoid) {
}
@Override
public void onError(Throwable t) {
ops.onError(t);
}
@Override
public void operationComplete(ChannelFuture future) {
ops.terminate();
}
@Override
public void onComplete() {
if (ops.channel()
.isActive()) {
ops.sendCloseNow(new CloseWebSocketFrame(WebSocketCloseStatus.NORMAL_CLOSURE), this);
}
}
@Override
public Context currentContext() {
return context;
}
}
static final Logger log = Loggers.getLogger(HttpServerOperations.class);
static final AsciiString EVENT_STREAM = new AsciiString("text/event-stream");
static final BiPredicate<HttpServerRequest, HttpServerResponse> COMPRESSION_DISABLED = (req, res) -> false;
static final FullHttpResponse CONTINUE =
new DefaultFullHttpResponse(HttpVersion.HTTP_1_1,
HttpResponseStatus.CONTINUE,
EMPTY_BUFFER);
static final class FailedHttpServerRequest extends HttpServerOperations {
final HttpResponse customResponse;
FailedHttpServerRequest(
Connection c,
ConnectionObserver listener,
HttpRequest nettyRequest,
HttpResponse nettyResponse,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
boolean secure,
ZonedDateTime timestamp,
ConnectionInfo connectionInfo) {
super(c, listener, nettyRequest, null, connectionInfo,
ServerCookieDecoder.STRICT, ServerCookieEncoder.STRICT, DEFAULT_FORM_DECODER_SPEC, httpMessageLogFactory, isHttp2,
null, false, secure, timestamp);
this.customResponse = nettyResponse;
String tempPath = "";
try {
tempPath = resolvePath(nettyRequest.uri());
}
catch (RuntimeException e) {
tempPath = "/bad-request";
}
finally {
this.path = tempPath;
}
}
@Override
protected HttpMessage outboundHttpMessage() {
return customResponse;
}
@Override
public HttpResponseStatus status() {
return customResponse.status();
}
}
static final class TrailerHeaders extends DefaultHttpHeaders {
static final Set<String> DISALLOWED_TRAILER_HEADER_NAMES = new HashSet<>(14);
static {
// https://datatracker.ietf.org/doc/html/rfc7230#section-4.1.2
// A sender MUST NOT generate a trailer that contains a field necessary
// for message framing (e.g., Transfer-Encoding and Content-Length),
// routing (e.g., Host), request modifiers (e.g., controls and
// conditionals in Section 5 of [RFC7231]), authentication (e.g., see
// [RFC7235] and [RFC6265]), response control data (e.g., see Section
// 7.1 of [RFC7231]), or determining how to process the payload (e.g.,
// Content-Encoding, Content-Type, Content-Range, and Trailer).
DISALLOWED_TRAILER_HEADER_NAMES.add("age");
DISALLOWED_TRAILER_HEADER_NAMES.add("cache-control");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-encoding");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-length");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-range");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-type");
DISALLOWED_TRAILER_HEADER_NAMES.add("date");
DISALLOWED_TRAILER_HEADER_NAMES.add("expires");
DISALLOWED_TRAILER_HEADER_NAMES.add("location");
DISALLOWED_TRAILER_HEADER_NAMES.add("retry-after");
DISALLOWED_TRAILER_HEADER_NAMES.add("trailer");
DISALLOWED_TRAILER_HEADER_NAMES.add("transfer-encoding");
DISALLOWED_TRAILER_HEADER_NAMES.add("vary");
DISALLOWED_TRAILER_HEADER_NAMES.add("warning");
}
TrailerHeaders(String declaredHeaderNames) {
super(true, new TrailerNameValidator(filterHeaderNames(declaredHeaderNames)));
}
static Set<String> filterHeaderNames(String declaredHeaderNames) {
Objects.requireNonNull(declaredHeaderNames, "declaredHeaderNames");
Set<String> result = new HashSet<>();
String[] names = declaredHeaderNames.split(",", -1);
for (String name : names) {
String trimmedStr = name.trim();
if (trimmedStr.isEmpty() ||
DISALLOWED_TRAILER_HEADER_NAMES.contains(trimmedStr.toLowerCase(Locale.ENGLISH))) {
continue;
}
result.add(trimmedStr);
}
return result;
}
static final class TrailerNameValidator implements DefaultHeaders.NameValidator<CharSequence> {
/**
* Contains the header names specified with {@link HttpHeaderNames#TRAILER}.
*/
final Set<String> declaredHeaderNames;
TrailerNameValidator(Set<String> declaredHeaderNames) {
this.declaredHeaderNames = declaredHeaderNames;
}
@Override
public void validateName(CharSequence name) {
if (!declaredHeaderNames.contains(name.toString())) {
throw new IllegalArgumentException("Trailer header name [" + name +
"] not declared with [Trailer] header, or it is not a valid trailer header name");
}
}
}
}
}
| /*
* Copyright (c) 2011-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.Duration;
import java.time.ZonedDateTime;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;
import java.util.function.Consumer;
import java.util.function.Function;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.DefaultHeaders;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.DefaultHttpHeaders;
import io.netty.handler.codec.http.DefaultHttpResponse;
import io.netty.handler.codec.http.DefaultLastHttpContent;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpMessage;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.codec.http.LastHttpContent;
import io.netty.handler.codec.http.TooLongHttpHeaderException;
import io.netty.handler.codec.http.TooLongHttpLineException;
import io.netty.handler.codec.http.cookie.Cookie;
import io.netty.handler.codec.http.cookie.ServerCookieDecoder;
import io.netty.handler.codec.http.cookie.ServerCookieEncoder;
import io.netty.handler.codec.http.multipart.HttpData;
import io.netty.handler.codec.http.multipart.HttpPostRequestDecoder;
import io.netty.handler.codec.http.websocketx.CloseWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketCloseStatus;
import io.netty.handler.timeout.ReadTimeoutHandler;
import io.netty.util.AsciiString;
import io.netty.util.ReferenceCountUtil;
import org.reactivestreams.Publisher;
import org.reactivestreams.Subscription;
import reactor.core.CoreSubscriber;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.netty.Connection;
import reactor.netty.ConnectionObserver;
import reactor.netty.FutureMono;
import reactor.netty.NettyOutbound;
import reactor.netty.NettyPipeline;
import reactor.netty.ReactorNetty;
import reactor.netty.channel.AbortedException;
import reactor.netty.channel.ChannelOperations;
import reactor.netty.http.HttpOperations;
import reactor.netty.http.logging.HttpMessageArgProviderFactory;
import reactor.netty.http.logging.HttpMessageLogFactory;
import reactor.netty.http.websocket.WebsocketInbound;
import reactor.netty.http.websocket.WebsocketOutbound;
import reactor.util.Logger;
import reactor.util.Loggers;
import reactor.util.annotation.Nullable;
import reactor.util.context.Context;
import static io.netty.buffer.Unpooled.EMPTY_BUFFER;
import static io.netty.handler.codec.http.HttpUtil.isTransferEncodingChunked;
import static reactor.netty.ReactorNetty.format;
import static reactor.netty.http.server.HttpServerFormDecoderProvider.DEFAULT_FORM_DECODER_SPEC;
import static reactor.netty.http.server.HttpServerState.REQUEST_DECODING_FAILED;
/**
* Conversion between Netty types and Reactor types ({@link HttpOperations}).
*
* @author Stephane Maldini
*/
class HttpServerOperations extends HttpOperations<HttpServerRequest, HttpServerResponse>
implements HttpServerRequest, HttpServerResponse {
final BiPredicate<HttpServerRequest, HttpServerResponse> configuredCompressionPredicate;
final ConnectionInfo connectionInfo;
final ServerCookieDecoder cookieDecoder;
final ServerCookieEncoder cookieEncoder;
final ServerCookies cookieHolder;
final HttpServerFormDecoderProvider formDecoderProvider;
final boolean isHttp2;
final BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle;
final HttpRequest nettyRequest;
final HttpResponse nettyResponse;
final Duration readTimeout;
final Duration requestTimeout;
final HttpHeaders responseHeaders;
final String scheme;
final ZonedDateTime timestamp;
BiPredicate<HttpServerRequest, HttpServerResponse> compressionPredicate;
Function<? super String, Map<String, String>> paramsResolver;
String path;
Future<?> requestTimeoutFuture;
Consumer<? super HttpHeaders> trailerHeadersConsumer;
volatile Context currentContext;
HttpServerOperations(HttpServerOperations replaced) {
super(replaced);
this.compressionPredicate = replaced.compressionPredicate;
this.configuredCompressionPredicate = replaced.configuredCompressionPredicate;
this.connectionInfo = replaced.connectionInfo;
this.cookieDecoder = replaced.cookieDecoder;
this.cookieEncoder = replaced.cookieEncoder;
this.cookieHolder = replaced.cookieHolder;
this.currentContext = replaced.currentContext;
this.formDecoderProvider = replaced.formDecoderProvider;
this.isHttp2 = replaced.isHttp2;
this.mapHandle = replaced.mapHandle;
this.nettyRequest = replaced.nettyRequest;
this.nettyResponse = replaced.nettyResponse;
this.paramsResolver = replaced.paramsResolver;
this.path = replaced.path;
this.readTimeout = replaced.readTimeout;
this.requestTimeout = replaced.requestTimeout;
this.responseHeaders = replaced.responseHeaders;
this.scheme = replaced.scheme;
this.timestamp = replaced.timestamp;
this.trailerHeadersConsumer = replaced.trailerHeadersConsumer;
}
HttpServerOperations(Connection c, ConnectionObserver listener, HttpRequest nettyRequest,
@Nullable BiPredicate<HttpServerRequest, HttpServerResponse> compressionPredicate,
ConnectionInfo connectionInfo,
ServerCookieDecoder decoder,
ServerCookieEncoder encoder,
HttpServerFormDecoderProvider formDecoderProvider,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
@Nullable BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle,
@Nullable Duration readTimeout,
@Nullable Duration requestTimeout,
boolean secured,
ZonedDateTime timestamp) {
this(c, listener, nettyRequest, compressionPredicate, connectionInfo, decoder, encoder, formDecoderProvider,
httpMessageLogFactory, isHttp2, mapHandle, readTimeout, requestTimeout, true, secured, timestamp);
}
HttpServerOperations(Connection c, ConnectionObserver listener, HttpRequest nettyRequest,
@Nullable BiPredicate<HttpServerRequest, HttpServerResponse> compressionPredicate,
ConnectionInfo connectionInfo,
ServerCookieDecoder decoder,
ServerCookieEncoder encoder,
HttpServerFormDecoderProvider formDecoderProvider,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
@Nullable BiFunction<? super Mono<Void>, ? super Connection, ? extends Mono<Void>> mapHandle,
@Nullable Duration readTimeout,
@Nullable Duration requestTimeout,
boolean resolvePath,
boolean secured,
ZonedDateTime timestamp) {
super(c, listener, httpMessageLogFactory);
this.compressionPredicate = compressionPredicate;
this.configuredCompressionPredicate = compressionPredicate;
this.connectionInfo = connectionInfo;
this.cookieDecoder = decoder;
this.cookieEncoder = encoder;
this.cookieHolder = ServerCookies.newServerRequestHolder(nettyRequest.headers(), decoder);
this.currentContext = Context.empty();
this.formDecoderProvider = formDecoderProvider;
this.isHttp2 = isHttp2;
this.mapHandle = mapHandle;
this.nettyRequest = nettyRequest;
this.nettyResponse = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
if (resolvePath) {
this.path = resolvePath(nettyRequest.uri());
}
else {
this.path = null;
}
this.readTimeout = readTimeout;
this.requestTimeout = requestTimeout;
this.responseHeaders = nettyResponse.headers();
this.responseHeaders.set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
this.scheme = secured ? "https" : "http";
this.timestamp = timestamp;
}
@Override
public NettyOutbound sendHeaders() {
if (hasSentHeaders()) {
return this;
}
return then(Mono.empty());
}
@Override
public HttpServerOperations withConnection(Consumer<? super Connection> withConnection) {
Objects.requireNonNull(withConnection, "withConnection");
withConnection.accept(this);
return this;
}
@Override
protected HttpMessage newFullBodyMessage(ByteBuf body) {
HttpResponse res =
new DefaultFullHttpResponse(version(), status(), body);
if (!HttpMethod.HEAD.equals(method())) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING);
int code = status().code();
if (!(HttpResponseStatus.NOT_MODIFIED.code() == code ||
HttpResponseStatus.NO_CONTENT.code() == code)) {
if (HttpUtil.getContentLength(nettyResponse, -1) == -1) {
responseHeaders.setInt(HttpHeaderNames.CONTENT_LENGTH, body.readableBytes());
}
}
}
// For HEAD requests:
// - if there is Transfer-Encoding and Content-Length, Transfer-Encoding will be removed
// - if there is only Transfer-Encoding, it will be kept and not replaced by
// Content-Length: body.readableBytes()
// For HEAD requests, the I/O handler may decide to provide only the headers and complete
// the response. In that case the body will be EMPTY_BUFFER, and setting Content-Length: 0
// would not be correct
// https://github.com/reactor/reactor-netty/issues/1333
else if (HttpUtil.getContentLength(nettyResponse, -1) != -1) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING);
}
res.headers().set(responseHeaders);
return res;
}
@Override
public HttpServerResponse addCookie(Cookie cookie) {
if (!hasSentHeaders()) {
this.responseHeaders.add(HttpHeaderNames.SET_COOKIE,
cookieEncoder.encode(cookie));
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerResponse addHeader(CharSequence name, CharSequence value) {
if (!hasSentHeaders()) {
this.responseHeaders.add(name, value);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerOperations chunkedTransfer(boolean chunked) {
if (!hasSentHeaders() && isTransferEncodingChunked(nettyResponse) != chunked) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING);
HttpUtil.setTransferEncodingChunked(nettyResponse, chunked);
}
return this;
}
@Override
public Map<CharSequence, Set<Cookie>> cookies() {
if (cookieHolder != null) {
return cookieHolder.getCachedCookies();
}
throw new IllegalStateException("request not parsed");
}
@Override
public Map<CharSequence, List<Cookie>> allCookies() {
if (cookieHolder != null) {
return cookieHolder.getAllCachedCookies();
}
throw new IllegalStateException("request not parsed");
}
@Override
public Context currentContext() {
return currentContext;
}
@Override
public HttpServerResponse header(CharSequence name, CharSequence value) {
if (!hasSentHeaders()) {
this.responseHeaders.set(name, value);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerResponse headers(HttpHeaders headers) {
if (!hasSentHeaders()) {
this.responseHeaders.set(headers);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public boolean isFormUrlencoded() {
CharSequence mimeType = HttpUtil.getMimeType(nettyRequest);
return mimeType != null &&
HttpHeaderValues.APPLICATION_X_WWW_FORM_URLENCODED.contentEqualsIgnoreCase(mimeType.toString().trim());
}
@Override
public boolean isKeepAlive() {
return HttpUtil.isKeepAlive(nettyRequest);
}
@Override
public boolean isMultipart() {
return HttpPostRequestDecoder.isMultipart(nettyRequest);
}
@Override
public boolean isWebsocket() {
return get(channel()) instanceof WebsocketServerOperations;
}
final boolean isHttp2() {
return isHttp2;
}
@Override
public HttpServerResponse keepAlive(boolean keepAlive) {
HttpUtil.setKeepAlive(nettyResponse, keepAlive);
return this;
}
@Override
public HttpMethod method() {
return nettyRequest.method();
}
@Override
@Nullable
public String param(CharSequence key) {
Objects.requireNonNull(key, "key");
Map<String, String> params = null;
if (paramsResolver != null) {
params = this.paramsResolver.apply(uri());
}
return null != params ? params.get(key.toString()) : null;
}
@Override
@Nullable
public Map<String, String> params() {
return null != paramsResolver ? paramsResolver.apply(uri()) : null;
}
@Override
public HttpServerRequest paramsResolver(Function<? super String, Map<String, String>> paramsResolver) {
this.paramsResolver = paramsResolver;
return this;
}
@Override
public Flux<HttpData> receiveForm() {
return receiveFormInternal(formDecoderProvider);
}
@Override
public Flux<HttpData> receiveForm(Consumer<HttpServerFormDecoderProvider.Builder> formDecoderBuilder) {
Objects.requireNonNull(formDecoderBuilder, "formDecoderBuilder");
HttpServerFormDecoderProvider.Build builder = new HttpServerFormDecoderProvider.Build();
formDecoderBuilder.accept(builder);
HttpServerFormDecoderProvider config = builder.build();
return receiveFormInternal(config);
}
@Override
public Flux<?> receiveObject() {
// Handle the 'Expect: 100-continue' header if necessary.
// TODO: Respond with 413 Request Entity Too Large
// and discard the traffic or close the connection.
// No need to notify the upstream handlers - just log.
// If decoding a response, just throw an error.
if (HttpUtil.is100ContinueExpected(nettyRequest)) {
return FutureMono.deferFuture(() -> {
if (!hasSentHeaders()) {
return channel().writeAndFlush(CONTINUE);
}
return channel().newSucceededFuture();
})
.thenMany(super.receiveObject());
}
else {
return super.receiveObject();
}
}
@Override
@Nullable
public InetSocketAddress hostAddress() {
return this.connectionInfo.getHostAddress();
}
final SocketAddress hostSocketAddress() {
return this.connectionInfo.hostAddress;
}
@Override
@Nullable
public SocketAddress connectionHostAddress() {
return channel().localAddress();
}
@Override
@Nullable
public InetSocketAddress remoteAddress() {
return this.connectionInfo.getRemoteAddress();
}
final SocketAddress remoteSocketAddress() {
return this.connectionInfo.remoteAddress;
}
@Override
@Nullable
public SocketAddress connectionRemoteAddress() {
return channel().remoteAddress();
}
@Override
public HttpHeaders requestHeaders() {
if (nettyRequest != null) {
return nettyRequest.headers();
}
throw new IllegalStateException("request not parsed");
}
@Override
public String scheme() {
return this.connectionInfo.getScheme();
}
@Override
public String connectionScheme() {
return scheme;
}
@Override
public String hostName() {
return connectionInfo.getHostName();
}
@Override
public int hostPort() {
return connectionInfo.getHostPort();
}
@Override
public HttpHeaders responseHeaders() {
return responseHeaders;
}
@Override
public String protocol() {
return nettyRequest.protocolVersion().text();
}
@Override
public ZonedDateTime timestamp() {
return timestamp;
}
@Override
public Mono<Void> send() {
return FutureMono.deferFuture(() -> markSentHeaderAndBody() ?
channel().writeAndFlush(newFullBodyMessage(EMPTY_BUFFER)) :
channel().newSucceededFuture());
}
@Override
public NettyOutbound sendFile(Path file) {
try {
return sendFile(file, 0L, Files.size(file));
}
catch (IOException e) {
if (log.isDebugEnabled()) {
log.debug(format(channel(), "Path not resolved"), e);
}
return then(sendNotFound());
}
}
@Override
public Mono<Void> sendNotFound() {
return this.status(HttpResponseStatus.NOT_FOUND)
.send();
}
@Override
public Mono<Void> sendRedirect(String location) {
Objects.requireNonNull(location, "location");
return this.status(HttpResponseStatus.FOUND)
.header(HttpHeaderNames.LOCATION, location)
.send();
}
/**
* @return this response configured for Server-Sent Events (Content-Type: text/event-stream)
*/
@Override
public HttpServerResponse sse() {
header(HttpHeaderNames.CONTENT_TYPE, EVENT_STREAM);
return this;
}
@Override
public HttpResponseStatus status() {
return this.nettyResponse.status();
}
@Override
public HttpServerResponse status(HttpResponseStatus status) {
if (!hasSentHeaders()) {
this.nettyResponse.setStatus(status);
}
else {
throw new IllegalStateException("Status and headers already sent");
}
return this;
}
@Override
public HttpServerResponse trailerHeaders(Consumer<? super HttpHeaders> trailerHeaders) {
this.trailerHeadersConsumer = Objects.requireNonNull(trailerHeaders, "trailerHeaders");
return this;
}
@Override
public Mono<Void> sendWebsocket(
BiFunction<? super WebsocketInbound, ? super WebsocketOutbound, ? extends Publisher<Void>> websocketHandler,
WebsocketServerSpec configurer) {
return withWebsocketSupport(uri(), configurer, websocketHandler);
}
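// Usage sketch (hypothetical I/O handler code, not part of this class): an echo websocket
//   response.sendWebsocket((in, out) -> out.send(in.receive().retain()),
//           WebsocketServerSpec.builder().build());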
@Override
public String uri() {
if (nettyRequest != null) {
return nettyRequest.uri();
}
throw new IllegalStateException("request not parsed");
}
@Override
public String fullPath() {
if (path != null) {
return path;
}
throw new IllegalStateException("request not parsed");
}
@Override
public HttpVersion version() {
if (nettyRequest != null) {
return nettyRequest.protocolVersion();
}
throw new IllegalStateException("request not parsed");
}
@Override
public HttpServerResponse compression(boolean compress) {
compressionPredicate = compress ? configuredCompressionPredicate : COMPRESSION_DISABLED;
if (!compress) {
removeHandler(NettyPipeline.CompressionHandler);
}
else if (channel().pipeline()
.get(NettyPipeline.CompressionHandler) == null) {
SimpleCompressionHandler handler = new SimpleCompressionHandler();
try {
//Do not invoke handler.channelRead as it will trigger ctx.fireChannelRead
handler.decode(channel().pipeline().context(NettyPipeline.ReactiveBridge), nettyRequest);
addHandlerFirst(NettyPipeline.CompressionHandler, handler);
}
catch (Throwable e) {
log.error(format(channel(), ""), e);
}
}
return this;
}
@Override
protected void onInboundNext(ChannelHandlerContext ctx, Object msg) {
if (msg instanceof HttpRequest) {
boolean isFullHttpRequest = msg instanceof FullHttpRequest;
if (!(isHttp2() && isFullHttpRequest)) {
if (readTimeout != null) {
addHandlerFirst(NettyPipeline.ReadTimeoutHandler,
new ReadTimeoutHandler(readTimeout.toMillis(), TimeUnit.MILLISECONDS));
}
if (requestTimeout != null) {
requestTimeoutFuture =
ctx.executor().schedule(new RequestTimeoutTask(ctx), Math.max(requestTimeout.toMillis(), 1), TimeUnit.MILLISECONDS);
}
}
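// Note: ReadTimeoutHandler bounds the gap between inbound reads, while the scheduled
// requestTimeoutFuture bounds arrival of the complete request; both are cleared in the
// LastHttpContent branch below.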
try {
listener().onStateChange(this, HttpServerState.REQUEST_RECEIVED);
}
catch (Exception e) {
onInboundError(e);
ReferenceCountUtil.release(msg);
return;
}
if (isFullHttpRequest) {
FullHttpRequest request = (FullHttpRequest) msg;
if (request.content().readableBytes() > 0) {
super.onInboundNext(ctx, msg);
}
else {
request.release();
}
if (isHttp2()) {
//force auto read to enable more accurate close selection now inbound is done
channel().config().setAutoRead(true);
onInboundComplete();
}
}
return;
}
if (msg instanceof HttpContent) {
if (msg != LastHttpContent.EMPTY_LAST_CONTENT) {
super.onInboundNext(ctx, msg);
}
if (msg instanceof LastHttpContent) {
removeHandler(NettyPipeline.ReadTimeoutHandler);
if (requestTimeoutFuture != null) {
requestTimeoutFuture.cancel(false);
requestTimeoutFuture = null;
}
//force auto read to enable more accurate close selection now inbound is done
channel().config().setAutoRead(true);
onInboundComplete();
}
}
else {
super.onInboundNext(ctx, msg);
}
}
@Override
protected void onInboundClose() {
discardWhenNoReceiver();
if (!(isInboundCancelled() || isInboundDisposed())) {
onInboundError(new AbortedException("Connection has been closed"));
}
terminate();
}
@Override
protected void afterMarkSentHeaders() {
if (compressionPredicate != null && compressionPredicate.test(this, this)) {
compression(true);
}
}
@Override
protected void beforeMarkSentHeaders() {
//noop
}
@Override
protected boolean isContentAlwaysEmpty() {
int code = status().code();
if (HttpResponseStatus.NOT_MODIFIED.code() == code) {
responseHeaders.remove(HttpHeaderNames.TRANSFER_ENCODING)
.remove(HttpHeaderNames.CONTENT_LENGTH);
return true;
}
return HttpResponseStatus.NO_CONTENT.code() == code ||
HttpResponseStatus.RESET_CONTENT.code() == code;
}
@Override
protected void onHeadersSent() {
//noop
}
@Override
protected void onOutboundComplete() {
if (isWebsocket()) {
return;
}
final ChannelFuture f;
if (log.isDebugEnabled()) {
log.debug(format(channel(), "Last HTTP response frame"));
}
if (markSentHeaderAndBody()) {
if (log.isDebugEnabled()) {
log.debug(format(channel(), "No sendHeaders() called before complete, sending " +
"zero-length header"));
}
f = channel().writeAndFlush(newFullBodyMessage(EMPTY_BUFFER));
}
else if (markSentBody()) {
HttpHeaders trailerHeaders = null;
// https://datatracker.ietf.org/doc/html/rfc7230#section-4.1.2
// A trailer allows the sender to include additional fields at the end
// of a chunked message in order to supply metadata that might be
// dynamically generated while the message body is sent, such as a
// message integrity check, digital signature, or post-processing
// status.
if (trailerHeadersConsumer != null && isTransferEncodingChunked(nettyResponse)) {
// https://datatracker.ietf.org/doc/html/rfc7230#section-4.4
// When a message includes a message body encoded with the chunked
// transfer coding and the sender desires to send metadata in the form
// of trailer fields at the end of the message, the sender SHOULD
// generate a Trailer header field before the message body to indicate
// which fields will be present in the trailers.
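// Usage sketch (hypothetical handler code; "x-checksum" is a made-up trailer name):
//   response.header(HttpHeaderNames.TRAILER, "x-checksum")
//           .trailerHeaders(h -> h.set("x-checksum", checksum))
//           .sendString(body);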
String declaredHeaderNames = responseHeaders.get(HttpHeaderNames.TRAILER);
if (declaredHeaderNames != null) {
trailerHeaders = new TrailerHeaders(declaredHeaderNames);
try {
trailerHeadersConsumer.accept(trailerHeaders);
}
catch (IllegalArgumentException e) {
// A sender MUST NOT generate a trailer when header names are
// HttpServerOperations.TrailerHeaders.DISALLOWED_TRAILER_HEADER_NAMES
log.error(format(channel(), "Cannot apply trailer headers [{}]"), declaredHeaderNames, e);
}
}
}
f = channel().writeAndFlush(trailerHeaders != null && !trailerHeaders.isEmpty() ?
new DefaultLastHttpContent(Unpooled.buffer(0), trailerHeaders) :
LastHttpContent.EMPTY_LAST_CONTENT);
}
else {
discard();
return;
}
f.addListener(s -> {
discard();
if (!s.isSuccess() && log.isDebugEnabled()) {
log.debug(format(channel(), "Failed flushing last frame"), s.cause());
}
});
}
static void cleanHandlerTerminate(Channel ch) {
ChannelOperations<?, ?> ops = get(ch);
if (ops == null) {
return;
}
ops.discard();
//Try to defer the disposing to leave a chance for any synchronous complete following this callback
if (!ops.isSubscriptionDisposed()) {
ch.eventLoop()
.execute(((HttpServerOperations) ops)::terminate);
}
else {
//if already disposed, we can immediately call terminate
((HttpServerOperations) ops).terminate();
}
}
static long requestsCounter(Channel channel) {
HttpServerOperations ops = Connection.from(channel).as(HttpServerOperations.class);
if (ops == null) {
return -1;
}
return ((AtomicLong) ops.connection()).get();
}
static void sendDecodingFailures(
ChannelHandlerContext ctx,
ConnectionObserver listener,
boolean secure,
Throwable t,
Object msg,
HttpMessageLogFactory httpMessageLogFactory,
@Nullable ZonedDateTime timestamp,
@Nullable ConnectionInfo connectionInfo,
SocketAddress remoteAddress) {
sendDecodingFailures(ctx, listener, secure, t, msg, httpMessageLogFactory, false, timestamp, connectionInfo, remoteAddress);
}
@SuppressWarnings("FutureReturnValueIgnored")
static void sendDecodingFailures(
ChannelHandlerContext ctx,
ConnectionObserver listener,
boolean secure,
Throwable t,
Object msg,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
@Nullable ZonedDateTime timestamp,
@Nullable ConnectionInfo connectionInfo,
SocketAddress remoteAddress) {
Throwable cause = t.getCause() != null ? t.getCause() : t;
if (log.isWarnEnabled()) {
log.warn(format(ctx.channel(), "Decoding failed: {}"),
msg instanceof HttpObject ?
httpMessageLogFactory.warn(HttpMessageArgProviderFactory.create(msg)) : msg);
}
ReferenceCountUtil.release(msg);
final HttpResponseStatus status;
if (cause instanceof TooLongHttpLineException) {
status = HttpResponseStatus.REQUEST_URI_TOO_LONG;
}
else if (cause instanceof TooLongHttpHeaderException) {
status = HttpResponseStatus.REQUEST_HEADER_FIELDS_TOO_LARGE;
}
else {
status = HttpResponseStatus.BAD_REQUEST;
}
HttpResponse response = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, status);
response.headers()
.setInt(HttpHeaderNames.CONTENT_LENGTH, 0)
.set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
Connection ops = ChannelOperations.get(ctx.channel());
if (ops == null) {
Connection conn = Connection.from(ctx.channel());
if (msg instanceof HttpRequest) {
ops = new FailedHttpServerRequest(conn, listener, (HttpRequest) msg, response, httpMessageLogFactory, isHttp2,
secure, timestamp == null ? ZonedDateTime.now(ReactorNetty.ZONE_ID_SYSTEM) : timestamp,
connectionInfo == null ? new ConnectionInfo(ctx.channel().localAddress(), remoteAddress, secure) : connectionInfo);
ops.bind();
}
else {
ops = conn;
}
}
//"FutureReturnValueIgnored" this is deliberate
ctx.channel().writeAndFlush(response);
listener.onStateChange(ops, REQUEST_DECODING_FAILED);
}
/**
* There is no need to invoke {@link #discard()}; the inbound will
* be canceled on the channel inactive event if there is no subscriber available.
*
* @param err the {@link Throwable} cause
*/
@Override
protected void onOutboundError(Throwable err) {
if (!channel().isActive()) {
super.onOutboundError(err);
return;
}
if (markSentHeaders()) {
log.error(format(channel(), "Error starting response. Replying error status"), err);
nettyResponse.setStatus(HttpResponseStatus.INTERNAL_SERVER_ERROR);
responseHeaders.set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
channel().writeAndFlush(newFullBodyMessage(EMPTY_BUFFER))
.addListener(ChannelFutureListener.CLOSE);
return;
}
markSentBody();
log.error(format(channel(), "Error finishing response. Closing connection"), err);
channel().writeAndFlush(EMPTY_BUFFER)
.addListener(ChannelFutureListener.CLOSE);
}
@Override
protected HttpMessage outboundHttpMessage() {
return nettyResponse;
}
final Flux<HttpData> receiveFormInternal(HttpServerFormDecoderProvider config) {
boolean isMultipart = isMultipart();
if (!Objects.equals(method(), HttpMethod.POST) || !(isFormUrlencoded() || isMultipart)) {
return Flux.error(new IllegalStateException(
"Request is not POST or does not have Content-Type " +
"with value 'application/x-www-form-urlencoded' or 'multipart/form-data'"));
}
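// Usage sketch (hypothetical I/O handler): stream the decoded form field names back
//   (req, res) -> res.sendString(req.receiveForm().map(HttpData::getName))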
return Flux.defer(() ->
config.newHttpPostRequestDecoder(nettyRequest, isMultipart).flatMapMany(decoder ->
receiveObject() // receiveContent() uses the filter operator, which buffers; we don't want that here
.concatMap(object -> {
if (!(object instanceof HttpContent)) {
return Mono.empty();
}
HttpContent httpContent = (HttpContent) object;
if (config.maxInMemorySize > -1) {
httpContent.retain();
}
return config.maxInMemorySize == -1 ?
Flux.using(
() -> decoder.offer(httpContent),
d -> Flux.fromIterable(decoder.currentHttpData(!config.streaming)),
d -> decoder.cleanCurrentHttpData(!config.streaming)) :
Flux.usingWhen(
Mono.fromCallable(() -> decoder.offer(httpContent))
.subscribeOn(config.scheduler)
.doFinally(sig -> httpContent.release()),
d -> Flux.fromIterable(decoder.currentHttpData(true)),
// FIXME Can we have cancellation for the resourceSupplier that will
// cause this one to not be invoked?
d -> Mono.fromRunnable(() -> decoder.cleanCurrentHttpData(true)));
}, 0) // No need for prefetch; the buffers are already in the Reactor Netty inbound queue
.doFinally(sig -> decoder.destroy())));
}
final Mono<Void> withWebsocketSupport(String url,
WebsocketServerSpec websocketServerSpec,
BiFunction<? super WebsocketInbound, ? super WebsocketOutbound, ? extends Publisher<Void>> websocketHandler) {
Objects.requireNonNull(websocketServerSpec, "websocketServerSpec");
Objects.requireNonNull(websocketHandler, "websocketHandler");
if (markSentHeaders()) {
WebsocketServerOperations ops = new WebsocketServerOperations(url, websocketServerSpec, this);
return FutureMono.from(ops.handshakerResult)
.doOnEach(signal -> {
if (!signal.hasError() && (websocketServerSpec.protocols() == null || ops.selectedSubprotocol() != null)) {
websocketHandler.apply(ops, ops)
.subscribe(new WebsocketSubscriber(ops, Context.of(signal.getContextView())));
}
});
}
else {
log.error(format(channel(), "Cannot enable websocket if headers have already been sent"));
}
return Mono.error(new IllegalStateException("Failed to upgrade to websocket"));
}
static final class WebsocketSubscriber implements CoreSubscriber<Void>, ChannelFutureListener {
final WebsocketServerOperations ops;
final Context context;
WebsocketSubscriber(WebsocketServerOperations ops, Context context) {
this.ops = ops;
this.context = context;
}
@Override
public void onSubscribe(Subscription s) {
s.request(Long.MAX_VALUE);
}
@Override
public void onNext(Void aVoid) {
}
@Override
public void onError(Throwable t) {
ops.onError(t);
}
@Override
public void operationComplete(ChannelFuture future) {
ops.terminate();
}
@Override
public void onComplete() {
if (ops.channel()
.isActive()) {
ops.sendCloseNow(new CloseWebSocketFrame(WebSocketCloseStatus.NORMAL_CLOSURE), this);
}
}
@Override
public Context currentContext() {
return context;
}
}
static final Logger log = Loggers.getLogger(HttpServerOperations.class);
final static AsciiString EVENT_STREAM = new AsciiString("text/event-stream");
static final BiPredicate<HttpServerRequest, HttpServerResponse> COMPRESSION_DISABLED = (req, res) -> false;
final static FullHttpResponse CONTINUE =
new DefaultFullHttpResponse(HttpVersion.HTTP_1_1,
HttpResponseStatus.CONTINUE,
EMPTY_BUFFER);
static final class FailedHttpServerRequest extends HttpServerOperations {
final HttpResponse customResponse;
FailedHttpServerRequest(
Connection c,
ConnectionObserver listener,
HttpRequest nettyRequest,
HttpResponse nettyResponse,
HttpMessageLogFactory httpMessageLogFactory,
boolean isHttp2,
boolean secure,
ZonedDateTime timestamp,
ConnectionInfo connectionInfo) {
super(c, listener, nettyRequest, null, connectionInfo,
ServerCookieDecoder.STRICT, ServerCookieEncoder.STRICT, DEFAULT_FORM_DECODER_SPEC, httpMessageLogFactory, isHttp2,
null, null, null, false, secure, timestamp);
this.customResponse = nettyResponse;
String tempPath = "";
try {
tempPath = resolvePath(nettyRequest.uri());
}
catch (RuntimeException e) {
tempPath = "/bad-request";
}
finally {
this.path = tempPath;
}
}
@Override
protected HttpMessage outboundHttpMessage() {
return customResponse;
}
@Override
public HttpResponseStatus status() {
return customResponse.status();
}
}
final class RequestTimeoutTask implements Runnable {
final ChannelHandlerContext ctx;
RequestTimeoutTask(ChannelHandlerContext ctx) {
this.ctx = ctx;
}
@Override
@SuppressWarnings("FutureReturnValueIgnored")
public void run() {
if (ctx.channel().isActive() && !(isInboundCancelled() || isInboundDisposed())) {
onInboundError(RequestTimeoutException.INSTANCE);
//"FutureReturnValueIgnored" this is deliberate
ctx.close();
}
}
}
static final class TrailerHeaders extends DefaultHttpHeaders {
static final Set<String> DISALLOWED_TRAILER_HEADER_NAMES = new HashSet<>(14);
static {
// https://datatracker.ietf.org/doc/html/rfc7230#section-4.1.2
// A sender MUST NOT generate a trailer that contains a field necessary
// for message framing (e.g., Transfer-Encoding and Content-Length),
// routing (e.g., Host), request modifiers (e.g., controls and
// conditionals in Section 5 of [RFC7231]), authentication (e.g., see
// [RFC7235] and [RFC6265]), response control data (e.g., see Section
// 7.1 of [RFC7231]), or determining how to process the payload (e.g.,
// Content-Encoding, Content-Type, Content-Range, and Trailer).
DISALLOWED_TRAILER_HEADER_NAMES.add("age");
DISALLOWED_TRAILER_HEADER_NAMES.add("cache-control");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-encoding");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-length");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-range");
DISALLOWED_TRAILER_HEADER_NAMES.add("content-type");
DISALLOWED_TRAILER_HEADER_NAMES.add("date");
DISALLOWED_TRAILER_HEADER_NAMES.add("expires");
DISALLOWED_TRAILER_HEADER_NAMES.add("location");
DISALLOWED_TRAILER_HEADER_NAMES.add("retry-after");
DISALLOWED_TRAILER_HEADER_NAMES.add("trailer");
DISALLOWED_TRAILER_HEADER_NAMES.add("transfer-encoding");
DISALLOWED_TRAILER_HEADER_NAMES.add("vary");
DISALLOWED_TRAILER_HEADER_NAMES.add("warning");
}
TrailerHeaders(String declaredHeaderNames) {
super(true, new TrailerNameValidator(filterHeaderNames(declaredHeaderNames)));
}
static Set<String> filterHeaderNames(String declaredHeaderNames) {
Objects.requireNonNull(declaredHeaderNames, "declaredHeaderNames");
Set<String> result = new HashSet<>();
String[] names = declaredHeaderNames.split(",", -1);
for (String name : names) {
String trimmedStr = name.trim();
if (trimmedStr.isEmpty() ||
DISALLOWED_TRAILER_HEADER_NAMES.contains(trimmedStr.toLowerCase(Locale.ENGLISH))) {
continue;
}
result.add(trimmedStr);
}
return result;
}
static final class TrailerNameValidator implements DefaultHeaders.NameValidator<CharSequence> {
/**
* Contains the header names declared with {@link HttpHeaderNames#TRAILER}.
*/
final Set<String> declaredHeaderNames;
TrailerNameValidator(Set<String> declaredHeaderNames) {
this.declaredHeaderNames = declaredHeaderNames;
}
@Override
public void validateName(CharSequence name) {
if (!declaredHeaderNames.contains(name.toString())) {
throw new IllegalArgumentException("Trailer header name [" + name +
"] not declared with [Trailer] header, or it is not a valid trailer header name");
}
}
}
}
}
| violetagg | cc1d8e82d5fe578f1144f5aceb62a6554bbd5be2 | 70f5161fc5245774ac5d3491026af95952a72325 | A `FullHttpRequest` arrives when we receive an `Http2HeadersFrame` with `endStream=true`.
I'll fix it. Thanks. | violetagg | 11 |
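To illustrate the exchange above (a minimal sketch, not reactor-netty code; the handler name is made up): over HTTP/2, a headers frame with `endStream=true` means the request is already complete, so the decoded message is a `FullHttpRequest` and no `HttpContent` will follow, whereas a plain `HttpRequest` signals a body still in flight.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpRequest;

// Hypothetical demo handler distinguishing the two cases
class EndStreamDemoHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof FullHttpRequest) {
            // endStream=true: headers and a (possibly empty) body in one message,
            // so read/request timeouts for the inbound are unnecessary
        }
        else if (msg instanceof HttpRequest) {
            // headers only: HttpContent/LastHttpContent frames may still arrive
        }
        ctx.fireChannelRead(msg);
    }
}
```

This mirrors the `isHttp2() && isFullHttpRequest` check in `onInboundNext` above, which skips installing the read/request timeout handlers when the request arrived complete.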
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
throw new InvalidUserDataException("invalid -PforceTransport option " + forceTransport + ", should be native|io_uring|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
throw new InvalidUserDataException("invalid -PforceTransport option " + forceTransport + ", should be native|io_uring|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for Brotli compression
testImplementation "com.aayushatharva.brotli4j:brotli4j:$brotli4jVersion"
if (osdetector.classifier == "linux-aarch_64" || osdetector.classifier == "osx-aarch_64") {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.os}-aarch64:$brotli4jVersion"
}
else {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.classifier}:$brotli4jVersion"
}
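// Note: osdetector reports aarch64 classifiers with an underscore ("linux-aarch_64",
// "osx-aarch_64"), while the Brotli4j natives are published without it
// (e.g. "native-linux-aarch64"), hence the special case above.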
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | Instead of importing the natives for all OSes, can we do something like this?
```suggestion
if (osdetector.classifier in ["linux-aarch_64", "osx-aarch_64"]) {
testRuntimeOnly "com.aayushatharva.brotli4j:native-$osdetector.os-aarch64:$brotli4jVersion"
}
else {
testRuntimeOnly "com.aayushatharva.brotli4j:native-$osdetector.classifier:$brotli4jVersion"
}
``` | violetagg | 12 |
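Following the note above that Brotli is usable only when Brotli4j is on the runtime classpath, a test can guard on that at runtime. A minimal sketch (the test class and method names are made up; it relies on Netty's `io.netty.handler.codec.compression.Brotli` availability check and JUnit 5 assumptions):

```java
import io.netty.handler.codec.compression.Brotli;
import org.junit.jupiter.api.Assumptions;
import org.junit.jupiter.api.Test;

class BrotliAvailabilityTest {

    @Test
    void brotliCompressionIsUsable() throws Throwable {
        // Skip (rather than fail) when the Brotli4j natives are absent
        Assumptions.assumeTrue(Brotli.isAvailable(), "Brotli4j is not on the runtime classpath");
        // Rethrows the underlying cause if the native library failed to load
        Brotli.ensureAvailability();
    }
}
```

Declaring the natives as `testRuntimeOnly`, as the change above does, keeps Brotli4j off the published compile classpath while still letting such tests run.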
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
throw new InvalidUserDataException("invalid -PforceTranport option " + forceTransport + ", should be native|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
			throw new InvalidUserDataException("invalid -PforceTransport option " + forceTransport + ", should be native|io_uring|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for Brotli compression
testImplementation "com.aayushatharva.brotli4j:brotli4j:$brotli4jVersion"
if (osdetector.classifier == "linux-aarch_64" || osdetector.classifier == "osx-aarch_64") {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.os}-aarch64:$brotli4jVersion"
}
else {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.classifier}:$brotli4jVersion"
}
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | I think we need this change:
```suggestion
if (osdetector.classifier == "linux-aarch_64" || osdetector.classifier == "osx-aarch_64") {
``` | violetagg | 13 |
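The suggestion above widens the aarch64 branch so that both `linux-aarch_64` and `osx-aarch_64` builds pull the matching Brotli4j native: with the wrong classifier the jar still resolves, but the native library is missing and Brotli quietly reports itself unavailable at runtime. A minimal Java sketch of the guard a test could run first, using only Netty's `io.netty.handler.codec.compression.Brotli` facade (the class name and printed messages are illustrative):

```java
import io.netty.handler.codec.compression.Brotli;

// Checks whether Netty could load Brotli4j and its platform-specific
// native library from the runtime classpath.
public final class BrotliAvailabilityCheck {

    public static void main(String[] args) throws Throwable {
        if (Brotli.isAvailable()) {
            System.out.println("Brotli natives loaded for "
                    + System.getProperty("os.name") + "/" + System.getProperty("os.arch"));
        }
        else {
            // Usually means the brotli4j native artifact matching this
            // platform's classifier is not on the runtime classpath.
            Brotli.ensureAvailability(); // rethrows the underlying cause
        }
    }
}
```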
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
			throw new InvalidUserDataException("invalid -PforceTransport option " + forceTransport + ", should be native|io_uring|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
			throw new InvalidUserDataException("invalid -PforceTransport option " + forceTransport + ", should be native|io_uring|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for Brotli compression
testImplementation "com.aayushatharva.brotli4j:brotli4j:$brotli4jVersion"
if (osdetector.classifier == "linux-aarch_64" || osdetector.classifier == "osx-aarch_64") {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.os}-aarch64:$brotli4jVersion"
}
else {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.classifier}:$brotli4jVersion"
}
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | the classifiers for the Brotli natives are an interesting decision | violetagg | 14
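The premise stated in the PR description — Brotli is active if and only if Brotli4j sits on the runtime classpath — is exactly what the `testImplementation`/`testRuntimeOnly` lines above provide for, and it dictates the shape of the compression test. A hedged sketch of such a round trip against Reactor Netty's public API (class name and handler payload are illustrative, not the PR's actual test):

```java
import io.netty.handler.codec.http.HttpHeaderNames;
import reactor.core.publisher.Mono;
import reactor.netty.DisposableServer;
import reactor.netty.http.client.HttpClient;
import reactor.netty.http.server.HttpServer;

// Illustrative Brotli round trip, not the PR's actual test.
public class BrotliRoundTripSketch {

    public static void main(String[] args) {
        // Server side: compress responses for clients that advertise support.
        DisposableServer server =
                HttpServer.create()
                          .compress(true)
                          .handle((req, res) -> res.sendString(Mono.just("hello brotli")))
                          .bindNow();

        // Client side: advertise Brotli only; with no decoder configured we
        // can assert on the raw Content-Encoding response header.
        String encoding =
                HttpClient.create()
                          .port(server.port())
                          .headers(h -> h.set(HttpHeaderNames.ACCEPT_ENCODING, "br"))
                          .get()
                          .uri("/")
                          .response()
                          .map(res -> res.responseHeaders()
                                         .get(HttpHeaderNames.CONTENT_ENCODING, "identity"))
                          .block();

        // Expected "br" when the Brotli4j natives are present; without them
        // the response goes out uncompressed and "identity" is printed.
        System.out.println("Content-Encoding: " + encoding);
        server.disposeNow();
    }
}
```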
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
			throw new InvalidUserDataException("invalid -PforceTransport option " + forceTransport + ", should be native|io_uring|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
			throw new InvalidUserDataException("invalid -PforceTransport option " + forceTransport + ", should be native|io_uring|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for Brotli compression
testImplementation "com.aayushatharva.brotli4j:brotli4j:$brotli4jVersion"
if (osdetector.classifier == "linux-aarch_64" || osdetector.classifier == "osx-aarch_64") {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.os}-aarch64:$brotli4jVersion"
}
else {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.classifier}:$brotli4jVersion"
}
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | What about Armv7? | hyperxpro | 15 |
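hyperxpro's question points at the gap in the conditional above: only aarch64 is special-cased, so a 32-bit ARM build falls through to `native-${osdetector.classifier}` (for Armv7 Linux, osdetector would report `linux-arm_32`), and that only resolves if Brotli4j publishes a native under exactly that name. A Java rendering of the selection logic that makes the fall-through explicit; every coordinate beyond the script's own branches is an assumption to verify against Brotli4j's published classifiers:

```java
// Mirrors the dependency-selection logic from the build script so the
// Armv7 fall-through is visible. Coordinates outside the aarch64 and
// default branches are assumptions, not verified artifact names.
public final class BrotliNativeSelection {

    static String brotliNative(String osdetectorClassifier, String brotli4jVersion) {
        if ("linux-aarch_64".equals(osdetectorClassifier)
                || "osx-aarch_64".equals(osdetectorClassifier)) {
            String os = osdetectorClassifier.substring(0, osdetectorClassifier.indexOf('-'));
            return "com.aayushatharva.brotli4j:native-" + os + "-aarch64:" + brotli4jVersion;
        }
        // Everything else, including Armv7 ("linux-arm_32"), lands here and
        // only resolves if an artifact with exactly this classifier exists.
        return "com.aayushatharva.brotli4j:native-" + osdetectorClassifier + ":" + brotli4jVersion;
    }

    public static void main(String[] args) {
        // Hypothetical Armv7 case; "<brotli4jVersion>" is a placeholder.
        System.out.println(brotliNative("linux-arm_32", "<brotli4jVersion>"));
    }
}
```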
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
			throw new InvalidUserDataException("invalid -PforceTransport option " + forceTransport + ", should be native|io_uring|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
throw new InvalidUserDataException("invalid -PforceTranport option " + forceTransport + ", should be native|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for Brotli compression
testImplementation "com.aayushatharva.brotli4j:brotli4j:$brotli4jVersion"
if (osdetector.classifier == "linux-aarch_64" || osdetector.classifier == "osx-aarch_64") {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.os}-aarch64:$brotli4jVersion"
}
else {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.classifier}:$brotli4jVersion"
}
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | @hyperxpro This dependency is just for tests. What's the classifier for Armv7? | violetagg | 16 |
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
throw new InvalidUserDataException("invalid -PforceTranport option " + forceTransport + ", should be native|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
throw new InvalidUserDataException("invalid -PforceTranport option " + forceTransport + ", should be native|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for Brotli compression
testImplementation "com.aayushatharva.brotli4j:brotli4j:$brotli4jVersion"
if (osdetector.classifier == "linux-aarch_64" || osdetector.classifier == "osx-aarch_64") {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.os}-aarch64:$brotli4jVersion"
}
else {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.classifier}:$brotli4jVersion"
}
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | `native-linux-armv7`
If it's just for tests then it should be fine, but I'd recommend adding it just in case someone compiles and tests on Armv7. | hyperxpro | 17 |
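Note: the test class this PR adds is not part of these rows. As a rough, hypothetical sketch of what a Brotli round trip with Reactor Netty involves — not the PR's actual test; the class name and payload are illustrative — the server negotiates `br` via `compress(...)` once the Brotli4j native is loadable:

```java
import io.netty.handler.codec.compression.Brotli;
import io.netty.handler.codec.http.HttpHeaderNames;
import reactor.core.publisher.Mono;
import reactor.netty.DisposableServer;
import reactor.netty.http.client.HttpClient;
import reactor.netty.http.server.HttpServer;

public class BrotliRoundTripSketch {
    public static void main(String[] args) {
        // True only when brotli4j plus a matching native artifact is on the classpath
        System.out.println("Brotli available: " + Brotli.isAvailable());

        DisposableServer server = HttpServer.create()
                .port(0)        // ephemeral port
                .compress(true) // enables content-encoding negotiation (br/gzip/deflate)
                .handle((req, res) -> res.sendString(Mono.just("some compressible payload")))
                .bindNow();

        String encoding = HttpClient.create()
                .port(server.port())
                .headers(h -> h.set(HttpHeaderNames.ACCEPT_ENCODING, "br"))
                .get()
                .uri("/")
                .response()
                .map(res -> String.valueOf(res.responseHeaders().get(HttpHeaderNames.CONTENT_ENCODING)))
                .block();

        // Expected "br" when the native is present; no encoding header otherwise
        System.out.println("Content-Encoding: " + encoding);
        server.disposeNow();
    }
}
```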
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/build.gradle | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
throw new InvalidUserDataException("invalid -PforceTranport option " + forceTransport + ", should be native|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | /*
* Copyright (c) 2020-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import me.champeau.gradle.japicmp.JapicmpTask
apply plugin: 'io.spring.javadoc'
apply plugin: 'me.champeau.gradle.japicmp'
apply plugin: 'de.undercouch.download'
apply plugin: 'biz.aQute.bnd.builder'
ext {
bndOptions = [
"Export-Package" : "reactor.netty.http*;version=$osgiVersion;-noimport:=true",
"Import-Package": [
"!javax.annotation",
"io.netty.channel.kqueue;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.handler.codec.haproxy;resolution:=optional;version=\"[4.1,5)\"",
"io.netty.incubator.channel.uring;resolution:=optional",
"io.micrometer.*;resolution:=optional",
"*"
].join(","),
"Bundle-Name" : "reactor-netty-http",
"Bundle-SymbolicName" : "io.projectreactor.netty.reactor-netty-http",
"Bundle-Version" : "$osgiVersion"
]
}
sourceSets {
noMicrometerTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
contextPropagationTest {
compileClasspath += sourceSets.main.output
runtimeClasspath += sourceSets.main.output
}
}
configurations {
noMicrometerTestImplementation {
extendsFrom implementation
exclude group: 'io.micrometer'
}
noMicrometerTestRuntimeOnly.extendsFrom(runtimeOnly)
contextPropagationTestImplementation.extendsFrom(implementation)
contextPropagationTestRuntimeOnly.extendsFrom(runtimeOnly)
}
dependencies {
api project(path: ':reactor-netty-core', configuration: 'shadow')
// JSR-305 annotations
compileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
api "io.netty:netty-codec-http:$nettyVersion"
api "io.netty:netty-codec-http2:$nettyVersion"
api "io.netty:netty-resolver-dns:$nettyVersion"
// MacOS binaries are not available for Netty SNAPSHOT version
if (!"$nettyVersion".endsWithAny("SNAPSHOT")) {
if (osdetector.classifier == "osx-x86_64" || osdetector.classifier == "osx-aarch_64") {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion$os_suffix"
}
else {
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion:osx-x86_64"
}
}
else {
// MacOS binaries are not available for Netty SNAPSHOT version
api "io.netty:netty-resolver-dns-native-macos:$nettyVersion"
}
compileOnly "io.netty:netty-codec-haproxy:$nettyVersion"
//transport resolution: typical build forces epoll but not kqueue transitively
//on the other hand, if we want to make transport-specific tests, we'll make all
// native optional at compile time and add correct native/nio to testRuntime
if (project.hasProperty("forceTransport")) {
//so that the main code compiles
compileOnly "io.netty:netty-transport-native-epoll:$nettyVersion"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
//now we explicitly add correctly qualified native, or do nothing if we want to test NIO
if (forceTransport == "native") {
if (osdetector.os == "osx") {
testRuntimeOnly "io.netty:netty-transport-native-kqueue:$nettyVersion$os_suffix"
}
else if (osdetector.os == "linux") {
testRuntimeOnly "io.netty:netty-transport-native-epoll:$nettyVersion$os_suffix"
}
}
else if (forceTransport == "io_uring" && osdetector.os == "linux") {
testRuntimeOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion$os_suffix"
}
else if (forceTransport != "nio") {
throw new InvalidUserDataException("invalid -PforceTranport option " + forceTransport + ", should be native|nio")
}
}
else {
//classic build to be distributed
api "io.netty:netty-transport-native-epoll:$nettyVersion:linux-x86_64"
compileOnly "io.netty:netty-transport-native-kqueue:$nettyVersion"
compileOnly "io.netty.incubator:netty-incubator-transport-native-io_uring:$nettyIoUringVersion"
}
//Metrics
compileOnly "io.micrometer:micrometer-core:$micrometerVersion"
compileOnly "io.micrometer:micrometer-tracing:$micrometerTracingVersion"
// Logging
compileOnly "org.slf4j:slf4j-api:$slf4jVersion"
api "io.projectreactor:reactor-core:$reactorCoreVersion"
testImplementation(testFixtures(project(':reactor-netty-core'))) {
exclude module: "reactor-netty-core"
}
// Testing
// JSR-305 annotations
testCompileOnly "com.google.code.findbugs:jsr305:$jsr305Version"
testImplementation "org.mockito:mockito-core:$mockitoVersion"
testImplementation "io.specto:hoverfly-java-junit5:$hoverflyJavaVersion"
testImplementation "org.apache.tomcat.embed:tomcat-embed-core:$tomcatVersion"
testImplementation "io.projectreactor:reactor-test:$testAddonVersion"
testImplementation "org.assertj:assertj-core:$assertJVersion"
testImplementation "org.awaitility:awaitility:$awaitilityVersion"
testImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
testImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
testImplementation "io.projectreactor.tools:blockhound-junit-platform:$blockHoundVersion"
testImplementation "io.micrometer:micrometer-core:$micrometerVersion"
testImplementation "io.micrometer:micrometer-test:$micrometerVersion"
testImplementation("io.micrometer:micrometer-tracing-integration-test:$micrometerTracingVersion") {
exclude module: "context-propagation"
}
testImplementation "org.reflections:reflections:$reflectionsVersion"
testRuntimeOnly "org.junit.platform:junit-platform-launcher:$junitPlatformLauncherVersion"
testRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
testRuntimeOnly "org.slf4j:jcl-over-slf4j:$slf4jVersion"
testRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
// Needed for Brotli compression
testImplementation "com.aayushatharva.brotli4j:brotli4j:$brotli4jVersion"
if (osdetector.classifier == "linux-aarch_64" || osdetector.classifier == "osx-aarch_64") {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.os}-aarch64:$brotli4jVersion"
}
else {
testRuntimeOnly "com.aayushatharva.brotli4j:native-${osdetector.classifier}:$brotli4jVersion"
}
// Needed for proxy testing
testRuntimeOnly "io.netty:netty-handler-proxy:$nettyVersion"
testRuntimeOnly "io.netty:netty-codec-haproxy:$nettyVersion"
// Needed for HTTP/2 testing
testRuntimeOnly "io.netty:netty-tcnative-boringssl-static:$boringSslVersion$os_suffix"
// noMicrometerTest sourceSet (must not include Micrometer)
noMicrometerTestImplementation "org.assertj:assertj-core:$assertJVersion"
noMicrometerTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
noMicrometerTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
noMicrometerTestRuntimeOnly "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.projectreactor:reactor-test:$testAddonVersion"
contextPropagationTestImplementation "org.assertj:assertj-core:$assertJVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-api:$junitVersion"
contextPropagationTestImplementation "org.junit.jupiter:junit-jupiter-params:$junitVersion"
contextPropagationTestImplementation "ch.qos.logback:logback-classic:$logbackVersion"
contextPropagationTestImplementation "io.micrometer:context-propagation:$contextPropagationVersion"
contextPropagationTestRuntimeOnly "org.junit.jupiter:junit-jupiter-engine:$junitVersion"
}
jar {
manifest {
attributes("Automatic-Module-Name": "reactor.netty.http")
}
bnd(bndOptions)
}
task downloadBaseline(type: Download) {
onlyIf {
if (project.gradle.startParameter.isOffline()) {
println "Offline: skipping downloading of baseline and JAPICMP"
return false
}
else if ("$compatibleVersion" == "SKIP") {
println "SKIP: Instructed to skip the baseline comparison"
return false
}
else {
println "Will download and perform baseline comparison with ${compatibleVersion}"
return true
}
}
onlyIfNewer true
compress true
src "${repositories.mavenCentral().url}io/projectreactor/netty/reactor-netty-http/$compatibleVersion/reactor-netty-http-${compatibleVersion}.jar"
dest "${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"
}
def japicmpReport = tasks.register('japicmpReport') {
onlyIf {
japicmp.state.failure != null
}
doLast {
def reportFile = file("${project.buildDir}/reports/japi.txt")
if (reportFile.exists()) {
println "\n **********************************"
println " * /!\\ API compatibility failures *"
println " **********************************"
println "Japicmp report was filtered and interpreted to find the following incompatibilities:"
reportFile.eachLine {
if (it.contains("*") && (!it.contains("***") || it.contains("****")))
println "source incompatible change: $it"
else if (it.contains("!"))
println "binary incompatible change: $it"
}
}
else println "No incompatible change to report"
}
}
task japicmp(type: JapicmpTask) {
finalizedBy(japicmpReport)
onlyIf { "$compatibleVersion" != "SKIP" }
oldClasspath.from(files("${buildDir}/baselineLibs/reactor-netty-http-${compatibleVersion}.jar"))
newClasspath.from(files(jar.archiveFile))
// these onlyXxx parameters result in a report that is slightly too noisy, but better than
// onlyBinaryIncompatibleModified = true which masks source-incompatible-only changes
onlyBinaryIncompatibleModified = false
onlyModified = true
failOnModification = true
failOnSourceIncompatibility = true
txtOutputFile = file("${project.buildDir}/reports/japi.txt")
ignoreMissingClasses = true
includeSynthetic = true
compatibilityChangeExcludes = [ "METHOD_NEW_DEFAULT" ]
methodExcludes = [
]
}
tasks.japicmp.dependsOn(downloadBaseline)
tasks.check.dependsOn(japicmp)
task noMicrometerTest(type: Test) {
testClassesDirs = sourceSets.noMicrometerTest.output.classesDirs
classpath = sourceSets.noMicrometerTest.runtimeClasspath
}
tasks.check.dependsOn(noMicrometerTest)
task contextPropagationTest(type: Test) {
testClassesDirs = sourceSets.contextPropagationTest.output.classesDirs
classpath = sourceSets.contextPropagationTest.runtimeClasspath
}
tasks.check.dependsOn(contextPropagationTest)
description = "HTTP functionality for the Reactor Netty library" | sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | let's skip it for the moment; we can add this later if needed | violetagg | 18
reactor/reactor-netty | 2,815 | Add `Brotli` compression test | Note: Netty 4.x supports Brotli compression. Brotli compression is available if and only if the Brotli4j library is on the runtime classpath.
| null | 2023-05-26 18:24:25+00:00 | 2023-05-31 09:22:27+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/server/SimpleCompressionHandler.java | /*
* Copyright (c) 2018-2021 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.DecoderException;
import io.netty.handler.codec.http.DefaultHttpContent;
import io.netty.handler.codec.http.DefaultHttpRequest;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpContentCompressor;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.util.ReferenceCountUtil;
import java.util.ArrayList;
import java.util.List;
/**
* @author Stephane Maldini
*/
final class SimpleCompressionHandler extends HttpContentCompressor {
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)
throws Exception {
if (msg instanceof ByteBuf) {
super.write(ctx, new DefaultHttpContent((ByteBuf) msg), promise);
}
else {
super.write(ctx, msg, promise);
}
}
void decode(ChannelHandlerContext ctx, HttpRequest msg) {
List<Object> out = new ArrayList<>();
HttpRequest request = msg;
try {
if (msg instanceof FullHttpRequest && ((FullHttpRequest) msg).content().refCnt() == 0) {
// This can happen only in HTTP2 use case and delayed response
// When the incoming FullHttpRequest content is with 0 readableBytes it is released immediately
// decode(...) will observe a freed content
request = new DefaultHttpRequest(msg.protocolVersion(), msg.method(), msg.uri(), msg.headers());
}
super.decode(ctx, request, out);
}
catch (DecoderException e) {
throw e;
}
catch (Exception e) {
throw new DecoderException(e);
}
finally {
// ReferenceCountUtil.retain(...) is invoked in decode(...) so release(...) here
ReferenceCountUtil.release(request);
out.clear();
}
}
}
| /*
* Copyright (c) 2018-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.server;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.DecoderException;
import io.netty.handler.codec.compression.CompressionOptions;
import io.netty.handler.codec.http.DefaultHttpContent;
import io.netty.handler.codec.http.DefaultHttpRequest;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpContentCompressor;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.util.ReferenceCountUtil;
import java.util.ArrayList;
import java.util.List;
/**
* @author Stephane Maldini
*/
final class SimpleCompressionHandler extends HttpContentCompressor {
SimpleCompressionHandler() {
super((CompressionOptions[]) null);
}
@Override
public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise)
throws Exception {
if (msg instanceof ByteBuf) {
super.write(ctx, new DefaultHttpContent((ByteBuf) msg), promise);
}
else {
super.write(ctx, msg, promise);
}
}
void decode(ChannelHandlerContext ctx, HttpRequest msg) {
List<Object> out = new ArrayList<>();
HttpRequest request = msg;
try {
if (msg instanceof FullHttpRequest && ((FullHttpRequest) msg).content().refCnt() == 0) {
// This can happen only in HTTP2 use case and delayed response
// When the incoming FullHttpRequest content is with 0 readableBytes it is released immediately
// decode(...) will observe a freed content
request = new DefaultHttpRequest(msg.protocolVersion(), msg.method(), msg.uri(), msg.headers());
}
super.decode(ctx, request, out);
}
catch (DecoderException e) {
throw e;
}
catch (Exception e) {
throw new DecoderException(e);
}
finally {
// ReferenceCountUtil.retain(...) is invoked in decode(...) so release(...) here
ReferenceCountUtil.release(request);
out.clear();
}
}
}
| sullis | 8e535c269944cfae75a8b4ba3de1066aa4eb3202 | 8f0e73669722239585db02cbd4c1d59fce5c3fd5 | ```suggestion
SimpleCompressionHandler() {
``` | violetagg | 19 |
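A note on the availability condition stated in the PR 2,815 description above: the added constructor passes `(CompressionOptions[]) null` to `HttpContentCompressor`, which, as far as I can tell, selects Netty's options-based compression path where Brotli is considered whenever Brotli4j can be loaded. Below is a minimal, self-contained sketch of checking that condition at runtime; the class name is my own, and it assumes Netty 4.1.65+ (where `io.netty.handler.codec.compression.Brotli` exists) on the classpath:

```java
import io.netty.handler.codec.compression.Brotli;

public class BrotliAvailabilityCheck {

	public static void main(String[] args) {
		// Netty reports Brotli as available only when the Brotli4j library
		// can be loaded from the runtime classpath.
		if (Brotli.isAvailable()) {
			System.out.println("Brotli compression is available");
		}
		else {
			System.out.println("Brotli4j is not on the classpath; Brotli stays disabled");
		}
	}
}
```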
reactor/reactor-netty | 2,792 | `responseContent()` remove excess `ByteBufFlux` conversion | While reading the **HttpClientFinalizer** implementation, I noticed that the logic of the **responseContent** method is redundant. HttpClientFinalizer.contentReceiver refers to **ChannelOperations::receive**, whose implementation already calls **ByteBufFlux.fromInbound** and returns a **ByteBufFlux**; responseContent then wraps that result in a second ByteBufFlux.fromInbound call. The second conversion is unnecessary: responseContent can follow the internal implementation of ChannelOperations::receive and build the ByteBufFlux directly, which gives better performance and clearer logic. **WebsocketFinalizer#responseContent** has the same issue, hence this PR. | null | 2023-05-03 03:50:45+00:00 | 2023-05-05 06:23:08+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/client/HttpClientFinalizer.java | /*
* Copyright (c) 2017-2022 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.client;
import java.net.URI;
import java.util.Objects;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.channel.ChannelOption;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.netty.ByteBufFlux;
import reactor.netty.ByteBufMono;
import reactor.netty.Connection;
import reactor.netty.NettyOutbound;
import reactor.netty.channel.ChannelOperations;
import reactor.util.annotation.Nullable;
/**
* Configures the HTTP request before calling one of the terminal,
* {@link Publisher} based, {@link ResponseReceiver} API.
*
* @author Stephane Maldini
* @author Violeta Georgieva
*/
final class HttpClientFinalizer extends HttpClientConnect implements HttpClient.RequestSender {
HttpClientFinalizer(HttpClientConfig config) {
super(config);
}
// UriConfiguration methods
@Override
public HttpClient.RequestSender uri(Mono<String> uri) {
Objects.requireNonNull(uri, "uri");
HttpClient dup = duplicate();
dup.configuration().deferredConf(config -> uri.map(s -> {
config.uriStr = s;
config.uri = null;
return config;
}));
return (HttpClientFinalizer) dup;
}
@Override
public HttpClient.RequestSender uri(String uri) {
Objects.requireNonNull(uri, "uri");
HttpClient dup = duplicate();
dup.configuration().uriStr = uri;
dup.configuration().uri = null;
return (HttpClientFinalizer) dup;
}
@Override
public RequestSender uri(URI uri) {
Objects.requireNonNull(uri, "uri");
if (!uri.isAbsolute()) {
throw new IllegalArgumentException("URI is not absolute: " + uri);
}
HttpClient dup = duplicate();
dup.configuration().uriStr = null;
dup.configuration().uri = uri;
return (HttpClientFinalizer) dup;
}
// ResponseReceiver methods
@Override
public Mono<HttpClientResponse> response() {
return _connect().map(RESPONSE_ONLY);
}
@Override
public <V> Flux<V> response(BiFunction<? super HttpClientResponse, ? super ByteBufFlux, ? extends Publisher<V>> receiver) {
return _connect().flatMapMany(resp -> Flux.from(receiver.apply(resp, resp.receive()))
.doFinally(s -> discard(resp))
.contextWrite(resp.currentContextView()));
}
@Override
public <V> Flux<V> responseConnection(BiFunction<? super HttpClientResponse, ? super Connection, ? extends Publisher<V>> receiver) {
return _connect().flatMapMany(resp -> Flux.from(receiver.apply(resp, resp))
.contextWrite(resp.currentContextView()));
}
@Override
public ByteBufFlux responseContent() {
ByteBufAllocator alloc = (ByteBufAllocator) configuration().options()
.get(ChannelOption.ALLOCATOR);
if (alloc == null) {
alloc = ByteBufAllocator.DEFAULT;
}
@SuppressWarnings("unchecked")
Mono<ChannelOperations<?, ?>> connector = (Mono<ChannelOperations<?, ?>>) connect();
return ByteBufFlux.fromInbound(connector.flatMapMany(contentReceiver), alloc);
}
@Override
public <V> Mono<V> responseSingle(BiFunction<? super HttpClientResponse, ? super ByteBufMono, ? extends Mono<V>> receiver) {
return _connect().flatMap(resp -> receiver.apply(resp, resp.receive().aggregate())
.doFinally(s -> discard(resp))
.contextWrite(resp.currentContextView()));
}
// RequestSender methods
@Override
public HttpClientFinalizer send(
BiFunction<? super HttpClientRequest, ? super NettyOutbound, ? extends Publisher<Void>> sender) {
Objects.requireNonNull(sender, "requestBody");
HttpClient dup = duplicate();
dup.configuration().body = sender;
return (HttpClientFinalizer) dup;
}
@Override
public HttpClientFinalizer send(Publisher<? extends ByteBuf> requestBody) {
Objects.requireNonNull(requestBody, "requestBody");
return send((req, out) -> out.send(requestBody));
}
@Override
public HttpClientFinalizer sendForm(BiConsumer<? super HttpClientRequest, HttpClientForm> formCallback, @Nullable Consumer<Flux<Long>> progress) {
Objects.requireNonNull(formCallback, "formCallback");
return send((req, out) -> {
HttpClientOperations ops = (HttpClientOperations) out;
return new HttpClientOperations.SendForm(ops, formCallback, progress);
});
}
@Override
protected HttpClient duplicate() {
return new HttpClientFinalizer(new HttpClientConfig(config));
}
@SuppressWarnings("unchecked")
Mono<HttpClientOperations> _connect() {
return (Mono<HttpClientOperations>) connect();
}
static void discard(HttpClientOperations c) {
if (!c.isInboundDisposed()) {
c.discard();
}
}
static final Function<ChannelOperations<?, ?>, Publisher<ByteBuf>> contentReceiver = ChannelOperations::receive;
static final Function<HttpClientOperations, HttpClientResponse> RESPONSE_ONLY = ops -> {
//defer the dispose to avoid over disposing on receive
discard(ops);
return ops;
};
}
| /*
* Copyright (c) 2017-2023 VMware, Inc. or its affiliates, All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package reactor.netty.http.client;
import java.net.URI;
import java.util.Objects;
import java.util.function.BiConsumer;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.channel.ChannelOption;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.netty.ByteBufFlux;
import reactor.netty.ByteBufMono;
import reactor.netty.Connection;
import reactor.netty.NettyOutbound;
import reactor.netty.channel.ChannelOperations;
import reactor.util.annotation.Nullable;
/**
* Configures the HTTP request before calling one of the terminal,
* {@link Publisher} based, {@link ResponseReceiver} API.
*
* @author Stephane Maldini
* @author Violeta Georgieva
*/
final class HttpClientFinalizer extends HttpClientConnect implements HttpClient.RequestSender {
HttpClientFinalizer(HttpClientConfig config) {
super(config);
}
// UriConfiguration methods
@Override
public HttpClient.RequestSender uri(Mono<String> uri) {
Objects.requireNonNull(uri, "uri");
HttpClient dup = duplicate();
dup.configuration().deferredConf(config -> uri.map(s -> {
config.uriStr = s;
config.uri = null;
return config;
}));
return (HttpClientFinalizer) dup;
}
@Override
public HttpClient.RequestSender uri(String uri) {
Objects.requireNonNull(uri, "uri");
HttpClient dup = duplicate();
dup.configuration().uriStr = uri;
dup.configuration().uri = null;
return (HttpClientFinalizer) dup;
}
@Override
public RequestSender uri(URI uri) {
Objects.requireNonNull(uri, "uri");
if (!uri.isAbsolute()) {
throw new IllegalArgumentException("URI is not absolute: " + uri);
}
HttpClient dup = duplicate();
dup.configuration().uriStr = null;
dup.configuration().uri = uri;
return (HttpClientFinalizer) dup;
}
// ResponseReceiver methods
@Override
public Mono<HttpClientResponse> response() {
return _connect().map(RESPONSE_ONLY);
}
@Override
public <V> Flux<V> response(BiFunction<? super HttpClientResponse, ? super ByteBufFlux, ? extends Publisher<V>> receiver) {
return _connect().flatMapMany(resp -> Flux.from(receiver.apply(resp, resp.receive()))
.doFinally(s -> discard(resp))
.contextWrite(resp.currentContextView()));
}
@Override
public <V> Flux<V> responseConnection(BiFunction<? super HttpClientResponse, ? super Connection, ? extends Publisher<V>> receiver) {
return _connect().flatMapMany(resp -> Flux.from(receiver.apply(resp, resp))
.contextWrite(resp.currentContextView()));
}
@Override
public ByteBufFlux responseContent() {
ByteBufAllocator alloc = (ByteBufAllocator) configuration().options()
.get(ChannelOption.ALLOCATOR);
if (alloc == null) {
alloc = ByteBufAllocator.DEFAULT;
}
@SuppressWarnings("unchecked")
Mono<ChannelOperations<?, ?>> connector = (Mono<ChannelOperations<?, ?>>) connect();
return ByteBufFlux.fromInbound(connector.flatMapMany(contentReceiver), alloc);
}
@Override
public <V> Mono<V> responseSingle(BiFunction<? super HttpClientResponse, ? super ByteBufMono, ? extends Mono<V>> receiver) {
return _connect().flatMap(resp -> receiver.apply(resp, resp.receive().aggregate())
.doFinally(s -> discard(resp))
.contextWrite(resp.currentContextView()));
}
// RequestSender methods
@Override
public HttpClientFinalizer send(
BiFunction<? super HttpClientRequest, ? super NettyOutbound, ? extends Publisher<Void>> sender) {
Objects.requireNonNull(sender, "requestBody");
HttpClient dup = duplicate();
dup.configuration().body = sender;
return (HttpClientFinalizer) dup;
}
@Override
public HttpClientFinalizer send(Publisher<? extends ByteBuf> requestBody) {
Objects.requireNonNull(requestBody, "requestBody");
return send((req, out) -> out.send(requestBody));
}
@Override
public HttpClientFinalizer sendForm(BiConsumer<? super HttpClientRequest, HttpClientForm> formCallback, @Nullable Consumer<Flux<Long>> progress) {
Objects.requireNonNull(formCallback, "formCallback");
return send((req, out) -> {
HttpClientOperations ops = (HttpClientOperations) out;
return new HttpClientOperations.SendForm(ops, formCallback, progress);
});
}
@Override
protected HttpClient duplicate() {
return new HttpClientFinalizer(new HttpClientConfig(config));
}
@SuppressWarnings("unchecked")
Mono<HttpClientOperations> _connect() {
return (Mono<HttpClientOperations>) connect();
}
static void discard(HttpClientOperations c) {
if (!c.isInboundDisposed()) {
c.discard();
}
}
static final Function<ChannelOperations<?, ?>, Flux<?>> contentReceiver = ChannelOperations::receiveObject;
static final Function<HttpClientOperations, HttpClientResponse> RESPONSE_ONLY = ops -> {
//defer the dispose to avoid over disposing on receive
discard(ops);
return ops;
};
}
| manzhizhen | d76a54ec2fcb18496fd74873769c747c6f7c2160 | 162c11e1d4fcb358ee4c1816e69c102365eb6afd | @manzhizhen @pderop I'm wondering why we don't want to keep using this constant.
Of course it would need a modification, something like this:
`static final Function<ChannelOperations<?, ?>, Publisher<?>> contentReceiver = ChannelOperations::receiveObject;` | violetagg | 20 |
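To make the redundancy described in this PR concrete, here is a minimal sketch of the double `ByteBufFlux.fromInbound` conversion that the change removes. It is deliberately simplified: `inbound` is a stand-in for a connection's inbound object stream, not the actual Reactor Netty internals.

```java
import io.netty.buffer.ByteBufAllocator;
import reactor.core.publisher.Flux;
import reactor.netty.ByteBufFlux;

public class DoubleWrapSketch {

	public static void main(String[] args) {
		// Stand-in for the raw inbound object stream of a connection.
		Flux<Object> inbound = Flux.empty();

		// ChannelOperations::receive already performs this conversion internally:
		ByteBufFlux received = ByteBufFlux.fromInbound(inbound, ByteBufAllocator.DEFAULT);

		// Before the PR, responseContent() wrapped the already-converted flux a second time:
		ByteBufFlux doubleWrapped = ByteBufFlux.fromInbound(received, ByteBufAllocator.DEFAULT);

		// After the PR, responseContent() feeds the raw objects (ChannelOperations::receiveObject)
		// into a single fromInbound call, so only one conversion happens:
		ByteBufFlux singleWrapped = ByteBufFlux.fromInbound(inbound, ByteBufAllocator.DEFAULT);
	}
}
```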
reactor/reactor-netty | 2,792 | `responseContent()` remove excess `ByteBufFlux` conversion | (duplicate row: PR description, before_content, and after_content are identical to the first PR 2,792 row above) | null | 2023-05-03 03:50:45+00:00 | 2023-05-05 06:23:08+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/client/HttpClientFinalizer.java
| manzhizhen | d76a54ec2fcb18496fd74873769c747c6f7c2160 | 162c11e1d4fcb358ee4c1816e69c102365eb6afd | I do agree: if we keep the constant (but change it to ChannelOperations::receiveObject), the impact of the patch is reduced and the change stays centralized in a single place.
| pderop | 21 |
reactor/reactor-netty | 2,792 | `responseContent()` remove excess `ByteBufFlux` conversion | (duplicate row: PR description, before_content, and after_content are identical to the first PR 2,792 row above) | null | 2023-05-03 03:50:45+00:00 | 2023-05-05 06:23:08+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/client/HttpClientFinalizer.java
| manzhizhen | d76a54ec2fcb18496fd74873769c747c6f7c2160 | 162c11e1d4fcb358ee4c1816e69c102365eb6afd | I deleted this constant because I found no other references to it. Should I restore it, or change its type so that it refers to ChannelOperations::receiveObject? | manzhizhen | 22
reactor/reactor-netty | 2,792 | `responseContent()` remove excess `ByteBufFlux` conversion | (duplicate row: PR description, before_content, and after_content are identical to the first PR 2,792 row above) | null | 2023-05-03 03:50:45+00:00 | 2023-05-05 06:23:08+00:00 | reactor-netty-http/src/main/java/reactor/netty/http/client/HttpClientFinalizer.java
| manzhizhen | d76a54ec2fcb18496fd74873769c747c6f7c2160 | 162c11e1d4fcb358ee4c1816e69c102365eb6afd | It was used in `WebsocketFinalizer` | violetagg | 23 |