status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 36,219 | ["airflow/www/static/js/dag/details/taskInstance/taskActions/MarkInstanceAs.tsx"] | "Mark state as..." button options grayed out | ### Apache Airflow version
2.7.3
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
Since a few versions ago, the button to mark a task state as success is grayed out when the task is already in a success state. Conversely, whenever a task is in a failed state, the button to mark it as failed is grayed out.

### What you think should happen instead?
This is inconvenient. These buttons bring up another dialog where you may select past/future/downstream/upstream tasks. These tasks may not match the state of the task you currently have selected. Frequently it is useful to be able to set all downstream tasks of an already succeeded task to success.

The current workaround is to first set the task to the opposite of the desired state, then to mark it as the desired state with added past/future/downstream/upstream tasks. This is clunky.
The buttons should not be grayed out depending on the current task state.
### How to reproduce
Mark a task as success. Then try to do it again.
### Operating System
n/a
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/36219 | https://github.com/apache/airflow/pull/36254 | a68b4194fe7201bba0544856b60c7d6724da60b3 | 20d547ecd886087cd89bcdf0015ce71dd0a12cef | "2023-12-14T10:26:39Z" | python | "2023-12-16T14:25:25Z" |
closed | apache/airflow | https://github.com/apache/airflow | 36,187 | ["airflow/io/__init__.py", "tests/io/test_path.py"] | Add unit tests to retrieve fsspec from providers including backwards compatibility | ### Body
We currently miss unit tests for fsspec retrieval from providers, and #36186 fixed a compatibility issue with it, so we should add unit tests covering it.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/36187 | https://github.com/apache/airflow/pull/36199 | 97e8f58673769d3c06bce397882375020a139cee | 6c94ddf2bc123bfc7a59df4ce05f2b4e980f7a15 | "2023-12-12T15:58:07Z" | python | "2023-12-13T17:56:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 36,132 | ["airflow/providers/google/cloud/operators/cloud_run.py", "tests/providers/google/cloud/operators/test_cloud_run.py"] | Add overrides in the template field for the Google Cloud Run Jobs Execute operator | ### Description
The `overrides` parameter is not in the list of template fields, so it's impossible to pass runtime values to Cloud Run (start/end dates, custom DAG parameters, ...).
### Use case/motivation
I would like to use Cloud Run Jobs with DBT and pass Airflow parameters (start/end dates) to the Cloud Run jobs. For that, I need to use the context (**kwargs) in a template field.
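As a stopgap, one possible workaround is a small subclass that extends `template_fields` so Jinja renders the overrides at runtime. This is only a sketch: the project/region/job values and the exact shape of the overrides dict are illustrative and follow the Cloud Run API, not anything defined in this issue.
```python
from airflow.providers.google.cloud.operators.cloud_run import CloudRunExecuteJobOperator


class TemplatedCloudRunExecuteJobOperator(CloudRunExecuteJobOperator):
    # Add "overrides" to the fields Airflow renders with Jinja before execute().
    template_fields = (*CloudRunExecuteJobOperator.template_fields, "overrides")


run_dbt = TemplatedCloudRunExecuteJobOperator(
    task_id="run_dbt",
    project_id="my-project",   # illustrative values
    region="europe-west1",
    job_name="dbt-job",
    overrides={
        # structure follows the Cloud Run RunJob API; adjust for your job
        "container_overrides": [
            {"args": ["build", "--vars", "start: '{{ data_interval_start }}'"]}
        ]
    },
)
```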
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/36132 | https://github.com/apache/airflow/pull/36133 | df23df53155c7a3a9b30d206c962913d74ad3754 | 3dddfb4a4ae112544fd02e09a5633961fa725a36 | "2023-12-08T23:54:53Z" | python | "2023-12-11T15:27:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 36,102 | ["airflow/decorators/branch_external_python.py", "airflow/decorators/branch_python.py", "airflow/decorators/branch_virtualenv.py", "airflow/decorators/external_python.py", "airflow/decorators/python_virtualenv.py", "airflow/decorators/short_circuit.py", "airflow/models/abstractoperator.py", "tests/decorators/test_branch_virtualenv.py", "tests/decorators/test_external_python.py", "tests/decorators/test_python_virtualenv.py"] | Using requirements file in VirtualEnvPythonOperation appears to be broken | ### Discussed in https://github.com/apache/airflow/discussions/36076
_Originally posted by **timc**, December 5, 2023_
### Apache Airflow version
2.7.3
### What happened
When creating a virtual env task and passing in a requirements file like this:
```python
@task.virtualenv(
    use_dill=True,
    system_site_packages=False,
    requirements='requirements.txt')
```
The result is that the content of the requirements file used to populate the venv is literally `requirements.txt`, which is wrong, and you get this:
```
[2023-12-05, 12:33:06 UTC] {{process_utils.py:181}} INFO - Executing cmd: python3 /usr/local/***/.local/lib/python3.10/site-packages/virtualenv /tmp/venv3cdlqjlq
[2023-12-05, 12:33:06 UTC] {{process_utils.py:185}} INFO - Output:
[2023-12-05, 12:33:07 UTC] {{process_utils.py:189}} INFO - created virtual environment CPython3.10.9.final.0-64 in 397ms
[2023-12-05, 12:33:07 UTC] {{process_utils.py:189}} INFO - creator CPython3Posix(dest=/tmp/venv3cdlqjlq, clear=False, no_vcs_ignore=False, global=False)
[2023-12-05, 12:33:07 UTC] {{process_utils.py:189}} INFO - seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/usr/local/***/.local/share/virtualenv)
[2023-12-05, 12:33:07 UTC] {{process_utils.py:189}} INFO - added seed packages: pip==23.3.1, setuptools==69.0.2, wheel==0.42.0
[2023-12-05, 12:33:07 UTC] {{process_utils.py:189}} INFO - activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
[2023-12-05, 12:33:07 UTC] {{process_utils.py:181}} INFO - Executing cmd: /tmp/venv3cdlqjlq/bin/pip install -r /tmp/venv3cdlqjlq/requirements.txt
[2023-12-05, 12:33:07 UTC] {{process_utils.py:185}} INFO - Output:
[2023-12-05, 12:33:09 UTC] {{process_utils.py:189}} INFO - ERROR: Could not find a version that satisfies the requirement requirements.txt (from versions: none)
[2023-12-05, 12:33:09 UTC] {{process_utils.py:189}} INFO - HINT: You are attempting to install a package literally named "requirements.txt" (which cannot exist). Consider using the '-r' flag to install the packages listed in requirements.txt
[2023-12-05, 12:33:09 UTC] {{process_utils.py:189}} INFO - ERROR: No matching distribution found for requirements.txt
[2023-12-05, 12:33:09 UTC] {{taskinstance.py:1824}} ERROR - Task failed with exception
```
The issue appears to be that the requirements parameter is added to a list on construction of the operator so the templating never happens.
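Until that is fixed, a workaround sketch that sidesteps the templating path (assuming the packages can be listed inline rather than kept in a file) is to pass the requirements as a list:
```python
from airflow.decorators import task


@task.virtualenv(
    use_dill=True,
    system_site_packages=False,
    requirements=["pandas"],  # the list form is used as-is, no file/template lookup
)
def extract():
    import pandas

    x = pandas.DataFrame()
```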
### What you think should happen instead
The provided requirements file should be used in the pip command to set up the venv.
### How to reproduce
Create a dag:
```
from datetime import datetime
from airflow.decorators import dag, task
@dag(schedule_interval=None, start_date=datetime(2021, 1, 1), catchup=False, tags=['example'])
def virtualenv_task():
@task.virtualenv(
use_dill=True,
system_site_packages=False,
requirements='requirements.txt',
)
def extract():
import pandas
x = pandas.DataFrame()
extract()
dag = virtualenv_task()
```
And a requirements.txt file
```
pandas
```
Run AirFlow
### Operating System
Ubuntu 23.04
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.2.0
apache-airflow-providers-celery==3.2.1
apache-airflow-providers-common-sql==1.5.2
apache-airflow-providers-ftp==3.4.2
apache-airflow-providers-http==4.4.2
apache-airflow-providers-imap==3.2.2
apache-airflow-providers-postgres==5.5.1
apache-airflow-providers-sqlite==3.4.2
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
Every time.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/36102 | https://github.com/apache/airflow/pull/36103 | 76d26f453000aa67f4e755c5e8f4ccc0eac7b5a4 | 3904206b69428525db31ff7813daa0322f7b83e8 | "2023-12-07T06:49:53Z" | python | "2023-12-07T09:19:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,949 | ["airflow/dag_processing/manager.py", "airflow/dag_processing/processor.py", "airflow/migrations/versions/0133_2_8_0_add_processor_subdir_import_error.py", "airflow/models/errors.py", "airflow/utils/db.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/img/airflow_erd.svg", "docs/apache-airflow/migrations-ref.rst", "tests/dag_processing/test_job_runner.py"] | dag processor deletes import errors of other dag processors thinking the files don't exist | ### Apache Airflow version
main (development)
### What happened
When a dag processor starts with a subdirectory to process, the import errors are recorded with that path. So when there is a processor for the airflow-dag-processor-0 folder, in order to remove stale import errors it lists all files under the airflow-dag-processor-0 folder and deletes the import errors whose files are not present. This becomes an issue when there is an airflow-dag-processor-1 processor that records import errors whose files won't be part of the airflow-dag-processor-0 folder.
### What you think should happen instead
The fix would be to have processor_subdir stored in the ImportError table so that during querying we only look at import errors relevant to the dag processor and don't delete other processors' items. A fix similar to https://github.com/apache/airflow/pull/33357 needs to be applied for import errors as well.
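A rough sketch of the scoping this implies, mirroring what #33357 did for DAGs. The `processor_subdir` column is an assumption here (it does not exist yet); the rest uses the existing import-error model and plain SQLAlchemy:
```python
from airflow.models.errors import ImportError as ParseImportError


def clear_stale_import_errors(session, processor_subdir, existing_files):
    # Only touch import errors recorded under this processor's subdir;
    # entries belonging to other dag processors are left alone.
    query = session.query(ParseImportError).filter(
        ParseImportError.processor_subdir == processor_subdir,  # assumed new column
        ParseImportError.filename.notin_(existing_files),
    )
    query.delete(synchronize_session="fetch")
```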
### How to reproduce
1. create a dag file with import error at `~/airflow/dags/airflow-dag-processor-0/sample_sleep.py` . Start a dag processor with -S to process "~/airflow/dags/airflow-dag-processor-0/" . Import error should be present.
2. create a dag file with import error at `~/airflow/dags/airflow-dag-processor-1/sample_sleep.py` . Start a dag processor with -S to process "~/airflow/dags/airflow-dag-processor-1/". Import error for airflow-dag-processor-0 is deleted.
3. The dag file with the import error used in the steps above:
```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.decorators import task
from datetime import timedelta, invalid
with DAG(
dag_id="task_duration",
start_date=datetime(2023, 1, 1),
catchup=True,
schedule_interval="@daily",
) as dag:
@task
def sleeper():
pass
sleeper()
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35949 | https://github.com/apache/airflow/pull/35956 | 9c1c9f450e289b40f94639db3f0686f592c8841e | 1a3eeab76cdb6d0584452e3065aee103ad9ab641 | "2023-11-29T11:06:51Z" | python | "2023-11-30T13:29:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,911 | ["airflow/providers/apache/spark/hooks/spark_submit.py", "airflow/providers/apache/spark/operators/spark_submit.py", "tests/providers/apache/spark/operators/test_spark_submit.py"] | Adding Support for Yarn queue and other extras in SparkSubmit Operator and Hook | ### Description
The spark-submit `--queue thequeue` option specifies the YARN queue to which the application should be submitted.
More: https://spark.apache.org/docs/3.2.0/running-on-yarn.html
### Use case/motivation
The --queue option is particularly useful in a multi-tenant environment where different users or groups have resources allocated in specific YARN queues.
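Until a dedicated argument exists, a workaround sketch (assuming a YARN master) is to route the queue through the existing `conf` argument, since `spark.yarn.queue` is the configuration equivalent of `--queue`; the application path and connection id below are placeholders:
```python
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

submit = SparkSubmitOperator(
    task_id="spark_job",
    application="/jobs/app.py",            # illustrative path
    conn_id="spark_default",
    conf={"spark.yarn.queue": "thequeue"},  # submits the application to this YARN queue
)
```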
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35911 | https://github.com/apache/airflow/pull/36151 | 4c73d613b11107eb8ee3cc70fe6233d5ee3a0b29 | 1b4a7edc545be6d6e9b8f00c243beab215e562b7 | "2023-11-28T09:05:59Z" | python | "2023-12-13T14:54:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,889 | ["airflow/www/static/js/dag/details/taskInstance/Logs/index.tsx"] | New logs tab is broken for tasks with high retries | ### Apache Airflow version
2.7.3
### What happened
One of our users had a high number of retries (around 600), and the operator was like a sensor that retries on failure until the retry limit is reached. The new log page renders the log tab pushed to the bottom, making it unusable. On the old page there is still a display of buttons for every retry, but scrolling is enabled. To fix this we had to change the log attempt selector from buttons to a dropdown where the attempt can be selected, placing the dropdown before the element used to select the log level. This is an edge case, but we thought to file it anyway in case someone else is facing it. We are happy to upstream one of the solutions below:
1. Use a dropdown only for a high number of attempts (e.g. after 50) and fall back to buttons otherwise. But this is a UX change (buttons in one case, a dropdown in another) that users need to be educated about.
2. Always use a dropdown, even for a low number of attempts, defaulting to the latest attempt.
Attaching sample dag code that could lead to this scenario.
Sample scenario :

### What you think should happen instead
_No response_
### How to reproduce
```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.decorators import task
from airflow.models.param import Param
from airflow.operators.empty import EmptyOperator
from datetime import timedelta
with DAG(
dag_id="retry_ui_issue",
start_date=datetime(2023, 1, 1),
catchup=False,
schedule_interval="@once",
) as dag:
@task(retries=400, retry_delay=timedelta(seconds=1))
def fail_always():
raise Exception("fail")
fail_always()
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35889 | https://github.com/apache/airflow/pull/36025 | 9c168b76e8b0c518b75a6d4226489f68d7a6987f | fd0988369b3a94be01a994e46b7993e2d97b2028 | "2023-11-27T13:31:33Z" | python | "2023-12-03T01:09:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,874 | ["airflow/providers/common/sql/doc/adr/0001-record-architecture-decisions.md", "airflow/providers/common/sql/doc/adr/0002-return-common-data-structure-from-dbapihook-derived-hooks.md", "scripts/ci/pre_commit/pre_commit_check_providers_subpackages_all_have_init.py"] | Document the purpose of having common.sql | ### Body
The Common.sql package was created in order to provide a common interface for DBApiHooks to return the data that will be universally used in a number of cases:
* CommonSQL Operators and Sensors
* (future) lineage data where returned hook results can follow the returned data for column lineage information
This should be better documented in common.sql so that it is clear this is the goal that common.sql achieves.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35874 | https://github.com/apache/airflow/pull/36015 | ef5eebdb26ca9ddb49c529625660b72b6c9b55b4 | 3bb5978e63f3be21a5bb7ae89e7e3ce9d06a4ab8 | "2023-11-26T23:11:48Z" | python | "2023-12-06T20:36:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,812 | ["docs/apache-airflow/howto/docker-compose/docker-compose.yaml", "docs/apache-airflow/howto/docker-compose/index.rst"] | Add path to airflow.cfg in docker-compose.yml | ### Description
Adding a commented line in the compose file like `- ${AIRFLOW_PROJ_DIR:-.}/airflow.cfg:/opt/airflow/airflow.cfg` would save new users tons of time when customizing the configuration file. Also, the current default bind `- ${AIRFLOW_PROJ_DIR:-.}/config:/opt/airflow/config` is misleading as to where the airflow.cfg file should be stored in the container.
Another solution is to simply add similar information here https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html
### Use case/motivation
I was setting up email notifications and didn’t understand why SMTP server configuration from airflow.cfg didn’t work
### Related issues
https://github.com/puckel/docker-airflow/issues/338
https://forum.astronomer.io/t/airflow-up-and-running-but-airflow-cfg-file-not-found/1931
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35812 | https://github.com/apache/airflow/pull/36289 | aed3c922402121c64264654f8dd77dbfc0168cbb | 36cb20af218919bcd821688e91245ffbe3fcfc16 | "2023-11-23T10:03:05Z" | python | "2023-12-19T12:49:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,703 | ["airflow/providers/amazon/aws/operators/ec2.py", "docs/apache-airflow-providers-amazon/operators/ec2.rst", "tests/providers/amazon/aws/operators/test_ec2.py", "tests/system/providers/amazon/aws/example_ec2.py"] | Add EC2RebootInstanceOperator and EC2HibernateInstanceOperator to Amazon Provider | ### Description
The Amazon Airflow Provider lacks operators for "Reboot Instance" and "Hibernate Instance," two states available in the AWS UI. Achieving feature parity would provide a seamless experience, aligning Airflow with AWS capabilities.
I'd like to see the EC2RebootInstanceOperator and EC2HibernateInstanceOperator added to Amazon Provider.
### Use case/motivation
This enhancement ensures users can manage EC2 instances in Airflow the same way they do in the AWS UI.
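For context, a minimal sketch of what a reboot task looks like today with plain boto3 inside a TaskFlow task; a first-class `EC2RebootInstanceOperator` would wrap roughly this call. The instance id and region are placeholders:
```python
import boto3

from airflow.decorators import task


@task
def reboot_instance(instance_id: str, region_name: str = "us-east-1"):
    # boto3's EC2 client exposes reboot_instances; hibernation goes through
    # stop_instances(Hibernate=True) on instances launched with hibernation enabled.
    ec2 = boto3.client("ec2", region_name=region_name)
    ec2.reboot_instances(InstanceIds=[instance_id])
```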
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35703 | https://github.com/apache/airflow/pull/35790 | ca97feed1883dc8134404b017d7f725a4f1010f6 | ca1202fd31f0ea8c25833cf11a5f7aa97c1db87b | "2023-11-17T14:23:00Z" | python | "2023-11-23T17:58:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,699 | ["tests/conftest.py", "tests/providers/cncf/kubernetes/executors/test_kubernetes_executor.py", "tests/providers/openlineage/extractors/test_bash.py", "tests/providers/openlineage/extractors/test_python.py", "tests/serialization/test_serde.py", "tests/utils/log/test_secrets_masker.py"] | Flaky TestSerializers.test_params test | ### Body
Recently we started to have a flaky TestSerializers.test_params
This seems to be a problem in either the tests or implementation of `serde` - seems like discovery of classes that are serializable in some cases is not working well while the import of serde happens.
It happens rarely and it's not easy to reproduce locallly, by a quick look it might be a side effect from another test - I have a feeling that when tests are run, some other test might leave behind a thread that cleans the list of classes that have been registered with serde and that cleanup happens somewhat randomly.
cc: @bolkedebruin - maybe you can take a look or have an idea where it can come from - might be fastest for you as you know the discovery mechanism best and you wrote most of the tests there ? Maybe there are some specially crafted test cases somewhere that do a setup/teardown or just cleanup of the serde-registered classes that could cause such an effect?
Example error: https://github.com/apache/airflow/actions/runs/6898122803/job/18767848684?pr=35693#step:5:754
Error:
```
_________________________ TestSerializers.test_params __________________________
[gw3] linux -- Python 3.8.18 /usr/local/bin/python
self = <tests.serialization.serializers.test_serializers.TestSerializers object at 0x7fb113165550>
def test_params(self):
i = ParamsDict({"x": Param(default="value", description="there is a value", key="test")})
e = serialize(i)
> d = deserialize(e)
tests/serialization/serializers/test_serializers.py:173:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
o = {'__classname__': 'airflow.models.param.ParamsDict', '__data__': {'x': 'value'}, '__version__': 1}
full = True, type_hint = None
def deserialize(o: T | None, full=True, type_hint: Any = None) -> object:
"""
Deserialize an object of primitive type and uses an allow list to determine if a class can be loaded.
:param o: primitive to deserialize into an arbitrary object.
:param full: if False it will return a stringified representation
of an object and will not load any classes
:param type_hint: if set it will be used to help determine what
object to deserialize in. It does not override if another
specification is found
:return: object
"""
if o is None:
return o
if isinstance(o, _primitives):
return o
# tuples, sets are included here for backwards compatibility
if isinstance(o, _builtin_collections):
col = [deserialize(d) for d in o]
if isinstance(o, tuple):
return tuple(col)
if isinstance(o, set):
return set(col)
return col
if not isinstance(o, dict):
# if o is not a dict, then it's already deserialized
# in this case we should return it as is
return o
o = _convert(o)
# plain dict and no type hint
if CLASSNAME not in o and not type_hint or VERSION not in o:
return {str(k): deserialize(v, full) for k, v in o.items()}
# custom deserialization starts here
cls: Any
version = 0
value: Any = None
classname = ""
if type_hint:
cls = type_hint
classname = qualname(cls)
version = 0 # type hinting always sets version to 0
value = o
if CLASSNAME in o and VERSION in o:
classname, version, value = decode(o)
if not classname:
raise TypeError("classname cannot be empty")
# only return string representation
if not full:
return _stringify(classname, version, value)
if not _match(classname) and classname not in _extra_allowed:
> raise ImportError(
f"{classname} was not found in allow list for deserialization imports. "
f"To allow it, add it to allowed_deserialization_classes in the configuration"
)
E ImportError: airflow.models.param.ParamsDict was not found in allow list for deserialization imports. To allow it, add it to allowed_deserialization_classes in the configuration
airflow/serialization/serde.py:246: ImportError
```
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35699 | https://github.com/apache/airflow/pull/35746 | 4d72bf1a89d07d34d29b7899a1f3c61abc717486 | 7e7ac10947554f2b993aa1947f8e2ca5bc35f23e | "2023-11-17T11:14:33Z" | python | "2023-11-20T08:24:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,599 | ["airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py", "tests/providers/cncf/kubernetes/executors/test_kubernetes_executor.py"] | Kubernetes Executor List Pods Performance Improvement | ### Apache Airflow version
main (development)
### What happened
The _list_pods function uses the kube list_namespaced_pod and list_pod_for_all_namespaces functions. Right now, these Kube functions get the entire pod spec even though we are interested in the pod metadata alone. This _list_pods is referred to in the clear_not_launched_queued_tasks, try_adopt_task_instances and _adopt_completed_pods functions.
When we run airflow at large scale (with more than ~500 worker pods), the _list_pods function takes a significant amount of time (up to 15-30 seconds with 500 worker pods) due to unnecessary data transfer (a V1PodList of up to a few tens of MBs) and JSON deserialization overhead. This is blocking us from scaling airflow to run at large scale.
### What you think should happen instead
Request the Pod metadata instead of entire Pod payload. It will help to reduce significant network data transfer and JSON deserialization overhead.
### How to reproduce
I have reproduced the performance issue while running 500 concurrent jobs. Monitor kubernetes_executor.clear_not_launched_queued_tasks.duration and kubernetes_executor.adopt_task_instances.duration metrics.
### Operating System
CentOS 6
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes
### Deployment
Other Docker-based deployment
### Deployment details
Terraform based airflow deployment
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35599 | https://github.com/apache/airflow/pull/36092 | 8d0c5d900875ce3b9dda1a86f1de534759e9d7f6 | b9c574c61ae42481b9d2c9ce7c42c93dc44b9507 | "2023-11-13T12:06:28Z" | python | "2023-12-10T11:49:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,526 | ["airflow/api_internal/endpoints/rpc_api_endpoint.py", "airflow/cli/commands/task_command.py", "airflow/jobs/local_task_job_runner.py", "airflow/models/taskinstance.py", "airflow/serialization/pydantic/dag.py", "airflow/serialization/pydantic/dag_run.py", "airflow/serialization/pydantic/taskinstance.py", "tests/serialization/test_pydantic_models.py"] | AIP-44 Migrate TaskInstance._run_task_by_local_task_job to Internal API | null | https://github.com/apache/airflow/issues/35526 | https://github.com/apache/airflow/pull/35527 | 054904bb9a68eb50070a14fe7300cb1e78e2c579 | 3c0a714cb57894b0816bf39079e29d79ea0b1d0a | "2023-11-08T12:23:53Z" | python | "2023-11-15T18:41:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,335 | ["airflow/www/extensions/init_security.py", "tests/www/views/test_session.py"] | Infinite UI redirection loop when user is changed to "inactive" while having a session opened | ### Body
When a user is logged in with a valid session and is then deactivated, refreshing the browser/reusing the session leads to an infinite redirection loop (which is stopped quickly by the browser detecting the situation).
## Steps to Produce:
Make sure you are using two different browsers.
In Browser A:
Login as the normal user.
In Browser B:
1. Login as admin.
2. Go to Security > List Users
3. Disable a user by unchecking this box:

4. Now in browser A, refresh the page.
You'll see a message like this:

In the server logs, you'll see that a lot of requests have been made to the server.

# Expected behaviour
There should be no infinite redirection, but the request for the inactive user should be rejected and the user should be redirected to the login page.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35335 | https://github.com/apache/airflow/pull/35486 | 3fbd9d6b18021faa08550532241515d75fbf3b83 | e512a72c334708ff5d839e16ba8dc5906c744570 | "2023-11-01T09:31:27Z" | python | "2023-11-07T19:59:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,288 | ["airflow/www/static/js/dag/details/gantt/GanttTooltip.tsx", "airflow/www/static/js/dag/details/gantt/Row.tsx"] | Incorrect queued duration for deferred tasks in gantt view | ### Apache Airflow version
main (development)
### What happened
The Gantt view calculates the diff between the start date and queued at values to show the queued duration. In the case of deferred tasks, the task gets re-queued when the triggerer returns an event, causing queued at to be greater than the start date. This causes incorrect values to be shown in the UI. I am not sure how to fix this. Maybe the queued duration can be omitted from the tooltip when the queued time is greater than the start time.

### What you think should happen instead
_No response_
### How to reproduce
1. Trigger the below dag
2. `touch /tmp/a` to ensure triggerer returns an event.
3. Check for queued duration value in gantt view.
```python
from __future__ import annotations
from datetime import datetime
from airflow import DAG
from airflow.models.baseoperator import BaseOperator
from airflow.triggers.file import FileTrigger
class FileCheckOperator(BaseOperator):
def __init__(self, filepath, **kwargs):
self.filepath = filepath
super().__init__(**kwargs)
def execute(self, context):
self.defer(
trigger=FileTrigger(filepath=self.filepath),
method_name="execute_complete",
)
def execute_complete(self, context, event=None):
pass
with DAG(
dag_id="file_trigger",
start_date=datetime(2021, 1, 1),
catchup=False,
schedule_interval=None,
) as dag:
t1 = FileCheckOperator(task_id="t1", filepath="/tmp/a")
t2 = FileCheckOperator(task_id="t2", filepath="/tmp/b")
t1
t2
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35288 | https://github.com/apache/airflow/pull/35984 | 1264316fe7ab15eba3be6c985a28bb573c85c92b | 0376e9324af7dfdafd246e31827780e855078d68 | "2023-10-31T03:52:56Z" | python | "2023-12-05T14:03:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,261 | ["airflow/providers/atlassian/jira/notifications/__init__.py", "airflow/providers/atlassian/jira/notifications/jira.py", "airflow/providers/atlassian/jira/provider.yaml", "docs/apache-airflow-providers-atlassian-jira/index.rst", "docs/apache-airflow-providers-atlassian-jira/notifications/index.rst", "docs/apache-airflow-providers-atlassian-jira/notifications/jira-notifier-howto-guide.rst", "tests/providers/atlassian/jira/notifications/__init__.py", "tests/providers/atlassian/jira/notifications/test_jira.py"] | Add `JiraNotifier` | ### Body
Similar to the [notifiers we already have](https://airflow.apache.org/docs/apache-airflow-providers/core-extensions/notifications.html) we want to add `JiraNotifier` to cut a Jira ticket.
This is very useful to be set with `on_failure_callback`.
You can view other PRs that added similar functionality: [ChimeNotifier](https://github.com/apache/airflow/pull/32222), [SmtpNotifier](https://github.com/apache/airflow/pull/31359)
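A rough sketch of the shape this could take; the `BaseNotifier` part follows the existing notifiers, while the exact Jira call is an assumption about what `JiraHook.get_conn()` returns (the atlassian-python-api client) and the field names may need adjusting per instance:
```python
from airflow.notifications.basenotifier import BaseNotifier
from airflow.providers.atlassian.jira.hooks.jira import JiraHook


class JiraNotifier(BaseNotifier):
    template_fields = ("summary", "description")

    def __init__(self, jira_conn_id="jira_default", project_key="OPS", summary="", description=""):
        super().__init__()
        self.jira_conn_id = jira_conn_id
        self.project_key = project_key
        self.summary = summary
        self.description = description

    def notify(self, context):
        # issue_create is the atlassian-python-api call for creating a Jira issue.
        client = JiraHook(jira_conn_id=self.jira_conn_id).get_conn()
        client.issue_create(
            fields={
                "project": {"key": self.project_key},
                "issuetype": {"name": "Task"},
                "summary": self.summary,
                "description": self.description,
            }
        )
```
It would then be attached as, e.g., `on_failure_callback=JiraNotifier(summary="{{ dag.dag_id }} failed")` on a DAG or task.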
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35261 | https://github.com/apache/airflow/pull/35397 | ce16963e9d69849309aa0a7cf978ed85ab741439 | 110bb0e74451e3106c4a5567a00453e564926c50 | "2023-10-30T06:53:23Z" | python | "2023-11-17T16:22:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,199 | ["airflow/models/dag.py", "airflow/models/dagrun.py", "tests/models/test_dag.py", "tests/providers/google/cloud/sensors/test_gcs.py"] | Relax mandatory requirement for `start_date` when `schedule=None` | ### Body
Currently `start_date` is mandatory parameter.
For DAGs with `schedule=None` we can relax this requirement as no scheduling calculation needed so the `start_date` parameter isn't used.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/35199 | https://github.com/apache/airflow/pull/35356 | 16585b178fab53b7c6d063426105664e55b14bfe | 930f165db11e611887275dce17f10eab102f0910 | "2023-10-26T15:04:53Z" | python | "2023-11-28T06:14:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,137 | ["airflow/providers/amazon/aws/transfers/http_to_s3.py", "airflow/providers/amazon/provider.yaml", "docs/apache-airflow-providers-amazon/transfer/http_to_s3.rst", "tests/providers/amazon/aws/transfers/test_http_to_s3.py", "tests/system/providers/amazon/aws/example_http_to_s3.py"] | Add HttpToS3Operator | ### Description
This operator allows users to effortlessly transfer data from HTTP sources to Amazon S3, with minimal coding effort. Whether you need to ingest web-based content, receive data from external APIs, or simply move data from a web resource to an S3 bucket, the HttpToS3Operator simplifies the process, enabling efficient data flow and integration in a wide range of use cases.
### Use case/motivation
The motivation for introducing the HttpToS3Operator stems from the need to streamline data transfer and integration between HTTP sources and Amazon S3. While the SimpleHttpOperator has proven to be a valuable tool for executing HTTP requests, it has certain limitations, particularly in scenarios where users require data to be efficiently stored in an Amazon S3 bucket.
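For illustration, a minimal sketch of the behaviour such an operator would package up, built from the existing hooks; the connection ids, endpoint, bucket and key below are placeholders:
```python
from airflow.decorators import task
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
from airflow.providers.http.hooks.http import HttpHook


@task
def http_to_s3(endpoint: str, bucket: str, key: str):
    # Fetch the payload over HTTP, then hand the raw bytes to S3.
    response = HttpHook(method="GET", http_conn_id="http_default").run(endpoint)
    S3Hook(aws_conn_id="aws_default").load_bytes(
        response.content, key=key, bucket_name=bucket, replace=True
    )
```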
### Related issues
Only issue that mentions this operator is [here](https://github.com/apache/airflow/pull/22758#discussion_r849820953)
### Are you willing to submit a PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35137 | https://github.com/apache/airflow/pull/35176 | e3b3d786787597e417f3625c6e9e617e4b3e5073 | 86640d166c8d5b3c840bf98e5c6db0d91392fde3 | "2023-10-23T16:44:15Z" | python | "2023-10-26T10:56:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,131 | ["docs/apache-airflow/security/webserver.rst"] | Support for general OIDC providers or making it clear in document | ### Description
I tried hard to configure airflow with authentik OIDC but airflow kept complaining about empty userinfo. There are very limited tutorials online. After reading some source code of authlib, flask-appbuilder and airflow, I found in [airflow/airflow/auth/managers/fab/security_manager/override.py](https://github.com/apache/airflow/blob/ef497bc3412273c3a45f43f40e69c9520c7cc74c/airflow/auth/managers/fab/security_manager/override.py) that only a selection of providers are supported (github twitter linkedin google azure openshift okta keycloak). If the provider name is not within this list, it will always return an empty userinfo at [line 1475](https://github.com/apache/airflow/blob/ef497bc3412273c3a45f43f40e69c9520c7cc74c/airflow/auth/managers/fab/security_manager/override.py#L1475C22-L1475C22).
For others who try to integrate OpenID Connect, I would recommend reading the code in [airflow/airflow/auth/managers/fab/security_manager/override.py starting from line 1398](https://github.com/apache/airflow/blob/ef497bc3412273c3a45f43f40e69c9520c7cc74c/airflow/auth/managers/fab/security_manager/override.py#L1398)
### Use case/motivation
This behaviour should be documented. Otherwise, there should be a way to configure other OIDC providers like other projects that support OIDC in a general manner.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35131 | https://github.com/apache/airflow/pull/35237 | 554e3c9c27d76280d131d1ddbfa807d7b8006943 | 283fb9fd317862e5b375dbcc126a660fe8a22e11 | "2023-10-23T14:32:49Z" | python | "2023-11-01T23:35:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,095 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | Assigning not json serializable value to dagrun.conf cause an error in UI | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Greetings!
Recently I’ve faced a problem. It seems that assigning an object which can’t be serialized to JSON to the dag_run.conf dict causes critical errors in the UI.
After executing code example in "How to reproduce":
Grid representation of the DAG breaks with following result:
<img width="800" alt="Pasted Graphic" src="https://github.com/apache/airflow/assets/79107237/872ccde5-500f-4484-a36c-dce6b7112286">
Browse -> DAG Runs also becomes unavailable.
<img width="800" alt="Pasted Graphic 1" src="https://github.com/apache/airflow/assets/79107237/b4e3df0c-5324-41dd-96f3-032e706ab7a9">
The DAG itself continues to work correctly; this affects only the UI graph and dagrun/list/.
I suggest using a custom dictionary with a restriction on setting non-JSON-serializable values.
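Something along these lines (just a sketch of the idea, not the actual Airflow code):
```python
import json


class JsonSafeConf(dict):
    """Dict that rejects values which cannot be round-tripped through JSON."""

    def __setitem__(self, key, value):
        json.dumps(value)  # raises TypeError for things like np.int64(1234)
        super().__setitem__(key, value)
```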
### What you think should happen instead
Raise an error
### How to reproduce
Execute following task.
Composer 2 version 2.4.2
Airflow version 2.5.3
```
@task
def test_task(**context):
context['dag_run'].conf["test"] = np.int64(1234)
```
### Operating System
Ubuntu 20.04.6 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Google Cloud Composer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35095 | https://github.com/apache/airflow/pull/35096 | 9ae57d023b84907c6c6ec62a7d43f2d41cb2ebca | 84c40a7877e5ea9dbee03b707065cb590f872111 | "2023-10-20T23:09:11Z" | python | "2023-11-14T20:46:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,074 | ["airflow/www/static/js/dag/grid/index.tsx"] | Grid UI Scrollbar / Cell Click Issue | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
This is in 2.5.2; we're in the midst of upgrading to 2.7 but haven't tested thoroughly whether this happens there, as we don't have the volume of historic runs.
On the DAG main landing page, if you have a long DAG with multiple sub-groups and a number of runs recorded, the UI, in its default listing of `25` previous runs, causes the scrollbar for the grid to overlay the right-most column, making it impossible to click on the cells for the right-most DAG run.

This is specifically in Firefox.
It's not world-ending, just a bit annoying at times.
### What you think should happen instead
The Grid area should have enough pad to the right of the rightmost column to clear the scrollbar area.
You can get around this by altering the number of runs down to 5 in the dropdown above the grid; this seems to fix the issue and lets you access the cells.
### How to reproduce
This seems to be the scenario it happens under:
- DAG with long list of tasks, including sub-groups, and moderately long labels
- Show 25 runs on the dag screen

### Operating System
Ubuntu 22.04 / macOS
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Custom deployment.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35074 | https://github.com/apache/airflow/pull/35346 | bcb5eebd6247d4eec15bf5cce98ccedaad629661 | b06c4b0f04122b5f7d30db275a20f7f254c02bef | "2023-10-20T08:39:40Z" | python | "2023-11-22T16:58:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 35,062 | ["docs/apache-airflow/core-concepts/dags.rst"] | Task dependency upstream/downstream setting error | ### What do you see as an issue?
I'm using Airflow 2.7.2 and following the [documentation](https://airflow.apache.org/docs/apache-airflow/2.7.2/core-concepts/dags.html#task-dependencies) to define the dependency relationship between tasks. I tried the explicit way suggested by the doc but it failed.
```
first_task.set_downstream(second_task, third_task)
third_task.set_upstream(fourth_task)
```
### Solving the problem
It seems it doesn't work if we want to attach multiple tasks downstream to one in a one-line manner. So I suggest currently we should break it down. Or resolve it.
```
first_task.set_downstream(second_task)
first_task.set_downstream(third_task)
```
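For what it's worth, the list form also appears to work, since set_downstream accepts either a task or a list of tasks; using the same tasks as above, a sketch would be:
```python
first_task.set_downstream([second_task, third_task])
third_task.set_upstream(fourth_task)
```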
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/35062 | https://github.com/apache/airflow/pull/35075 | 551886eb263c8df0b2eee847dd6725de78bc25fc | a4ab95abf91aaff0aaf8f0e393a2346f5529a6d2 | "2023-10-19T16:19:26Z" | python | "2023-10-20T09:27:27Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,953 | ["airflow/operators/trigger_dagrun.py", "tests/operators/test_trigger_dagrun.py"] | TriggerDagRunOperator does not trigger dag on subsequent run even with reset_dag_run=True | ### Discussed in https://github.com/apache/airflow/discussions/24548
_Originally posted by **don1uppa**, June 16, 2022_
### Apache Airflow version
2.2.5
### What happened
I have a dag that starts another dag with a conf.
I am attempting to start the initiating dag a second time with different configuration parameters. I want it to start the other dag with the new parameters.
## What you think should happen instead
It should use the new conf when starting the called dag
### How to reproduce
See code in subsequent message
### Operating System
windows and ubuntu
### Versions of Apache Airflow Providers
N/A
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
**Work around for now is to delete the previous "child" dag runs.**
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34953 | https://github.com/apache/airflow/pull/35429 | ea8eabc1e7fc3c5f602a42d567772567b4be05ac | 94f9d798a88e76bce3e42da9d2da7844ecf7c017 | "2023-10-15T14:49:00Z" | python | "2023-11-04T15:44:30Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,906 | ["airflow/utils/db_cleanup.py", "tests/utils/test_db_cleanup.py"] | Clear old Triggers when Triggerer is not running | ### Apache Airflow version
main (development)
### What happened
When a deferrable operator or sensor is run without a Triggerer process, the task gets stuck in the deferred state, and will eventually fail. A banner will show up in the home page saying that the Triggerer is not running. There is no way to remove this message. In the Triggers menu, the trigger that activated is listed there, and there is no way to remove that Trigger from that list.
### What you think should happen instead
If the trigger fails, the trigger should be removed from the Trigger menu, and the message should go away.
### How to reproduce
The bug occurs when no Triggerer is running.
In order to reproduce,
1) Run any DAG that uses a deferrable operator or sensor.
2) Allow the task to reach the deferred state.
3) Allow the task to fail on its own (i.e. timeout), or mark it as success or failure.
A message will show up on the DAGs page that the Triggerer is dead. This message does not go away
```
The triggerer does not appear to be running. Last heartbeat was received 6 minutes ago.
Triggers will not run, and any deferred operator will remain deferred until it times out and fails.
```
A Trigger will show up in the Triggers menu.
### Operating System
ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Using breeze for testing
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34906 | https://github.com/apache/airflow/pull/34908 | ebcb16201af08f9815124f27e2fba841c2b9cd9f | d07e66a5624faa28287ba01aad7e41c0f91cc1e8 | "2023-10-13T04:46:24Z" | python | "2023-10-30T17:09:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,889 | ["airflow/providers/amazon/aws/operators/ecs.py", "tests/providers/amazon/aws/operators/test_ecs.py"] | EcsRunTaskOperator -`date value out of range` on deferrable execution - default waiter_max_attempts | ### Apache Airflow version
2.7.1
### What happened
Trying to test **EcsRunTaskOperator** in deferrable mode resulted in an unexpected error at the `_start_task()` step of the Operator's `execute` method. The return error log was
`{standard_task_runner.py:104} ERROR - Failed to execute job 28 for task hello-world-defer (date value out of range; 77)`
After a lot of research to understand the `date value out of range` specific error, I found [this PR](https://github.com/apache/airflow/pull/33712) in which I found from the [change log](https://github.com/apache/airflow/pull/33712/files#diff-4dba25d07d7d8c4cb47ef85e814f123c9171072b240d605fffd59b29ee3b31eb) that the `waiter_max_attempts` was switched to `1000000 * 365 * 24 * 60 * 10` (Which results in 1M years). This change cannot work properly with an internal Airflow date calculation, related to the Waiter's retries.
### What you think should happen instead
Unfortunately, I haven't been able to track the error further but by changing to a lower limit of 100000 waiter_max_attempts it worked as expected.
My suggestion would be to decrease the default value of **waiter_max_attempts**; maybe 1000000 (1M) retries is a valid number. That would set the default expected running-attempt time to 1000000 * 6 s, roughly 70 days.
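In the meantime, explicitly passing a smaller value worked for me. A sketch (cluster, task definition and the other values are placeholders for my setup):
```python
from airflow.providers.amazon.aws.operators.ecs import EcsRunTaskOperator

run_task = EcsRunTaskOperator(
    task_id="hello-world-defer",
    cluster="my-cluster",              # illustrative values
    task_definition="hello-world",
    launch_type="FARGATE",
    overrides={},
    deferrable=True,
    waiter_delay=6,
    waiter_max_attempts=1_000_000,     # ~70 days at 6s per attempt, instead of the huge default
)
```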
### How to reproduce
By keeping the default values of **EcsRunTaskOperator** while trying to use it in deferrable mode.
### Operating System
Debian
### Versions of Apache Airflow Providers
apache-airflow-providers-airbyte==3.3.2
apache-airflow-providers-amazon==8.7.1
apache-airflow-providers-celery==3.3.4
apache-airflow-providers-common-sql==1.7.2
apache-airflow-providers-docker==3.7.5
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-http==4.5.2
apache-airflow-providers-imap==3.3.0
apache-airflow-providers-postgres==5.6.1
apache-airflow-providers-redis==3.3.2
apache-airflow-providers-snowflake==4.4.2
apache-airflow-providers-sqlite==3.2.1
### Deployment
Other Docker-based deployment
### Deployment details
- Custom Deploy using ECS and Task Definition Services on EC2 for running AIrflow services.
- Extending Base Airflow Image to run on each Container Service (_apache/airflow:latest-python3.11_)
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34889 | https://github.com/apache/airflow/pull/34928 | b1196460db1a21b2c6c3ef2e841fc6d0c22afe97 | b392f66c424fc3b8cbc957e02c67847409551cab | "2023-10-12T12:29:50Z" | python | "2023-10-16T20:27:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,878 | ["chart/templates/redis/redis-statefulset.yaml", "chart/values.schema.json", "chart/values.yaml", "helm_tests/other/test_redis.py"] | [helm] Redis does not include priorityClassName key | ### Official Helm Chart version
1.11.0 (latest released)
### Apache Airflow version
2.x
### Kubernetes Version
1.25+
### What happened
There is no way to configure via parent values the priorityClassName for Redis, which is a workload with PV constraints that usually needs increased priority to be scheduled wherever its PV lives.
### What you think should happen instead
Able to include priorityClassName
### How to reproduce
Install via Helm Chart
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34878 | https://github.com/apache/airflow/pull/34879 | 6f3d294645153db914be69cd2b2a49f12a18280c | 14341ff6ea176f2325ebfd3f9b734a3635078cf4 | "2023-10-12T07:06:43Z" | python | "2023-10-28T07:51:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,877 | ["airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst"] | Scheduler is spending most of its time in clear_not_launched_queued_tasks function | ### Apache Airflow version
2.7.1
### What happened
Airflow runs the clear_not_launched_queued_tasks function at a certain frequency (default 30 seconds). Internally the clear_not_launched_queued_tasks function loops through each queued task and checks the corresponding worker pod's existence in the Kube cluster. Right now this existence check uses the list pods Kube API, and the API call takes more than 1s. If there are 120 queued tasks, it will take ~120 seconds (1s * 120). So this leads the scheduler to spend most of its time in this function rather than scheduling the tasks. It results in none of the jobs being scheduled, or degraded scheduler performance.
### What you think should happen instead
It would be nice to get all the airflow worker pods in one or a few batch calls rather than one call per task. These batch calls would help speed up the processing of the clear_not_launched_queued_tasks function call.
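For example, a sketch of the batch approach using the kubernetes client directly; the label selector and annotation names are assumptions about how the worker pods are created:
```python
from kubernetes import client, config


def queued_tasks_with_missing_pods(queued_task_keys, namespace="airflow"):
    # One list call for all worker pods instead of one call per queued task.
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace, label_selector="kubernetes_executor=True")

    launched = set()
    for pod in pods.items:
        annotations = pod.metadata.annotations or {}
        launched.add((annotations.get("dag_id"), annotations.get("task_id"), annotations.get("run_id")))

    return [key for key in queued_task_keys if key not in launched]
```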
### How to reproduce
Run the airflow on large Kube clusters (> 5K pods). Simulate the airflow to run the 100 parallel DAG runs for every minute.
### Operating System
Cent OS 7
### Versions of Apache Airflow Providers
2.3.3, 2.7.1
### Deployment
Other Docker-based deployment
### Deployment details
Terraform based airflow deployment
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34877 | https://github.com/apache/airflow/pull/35579 | 5a6dcfd8655c9622f3838a0e66948dc3091afccb | cd296d2068b005ebeb5cdc4509e670901bf5b9f3 | "2023-10-12T06:03:28Z" | python | "2023-11-12T17:41:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,838 | ["airflow/providers/apache/spark/hooks/spark_submit.py", "airflow/providers/apache/spark/operators/spark_submit.py", "tests/providers/apache/spark/operators/test_spark_submit.py"] | Adding property files option in the Spark Submit command | ### Description
The spark-submit command has an option to pass a properties file as an argument. Instead of loading multiple key/value pairs via the --conf option, this helps load extra properties from a file path. While we have support for most of the arguments supported by the spark-submit command in SparkSubmitOperator, this one, `--properties-file`, is missing. Could that be included?
```
[root@airflow ~]# spark-submit --help
Usage: spark-submit [options] <app jar | python file | R file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]
Options:
--conf, -c PROP=VALUE Arbitrary Spark configuration property.
--properties-file FILE Path to a file from which to load extra properties. If not
specified, this will look for conf/spark-defaults.conf.
```
### Use case/motivation
Add the property-files as one of the options to pass in the SparkSubmitOperator to load the extra config properties as file
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34838 | https://github.com/apache/airflow/pull/36164 | 3dddfb4a4ae112544fd02e09a5633961fa725a36 | 195abf8f7116c9e37fd3dc69bfee8cbf546c5a3f | "2023-10-09T16:21:55Z" | python | "2023-12-11T16:32:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,816 | ["airflow/cli/commands/triggerer_command.py"] | Airflow 2.7.1 can not start Scheduler & trigger | ### Apache Airflow version
2.7.1
### What happened
After upgrading from 2.6.0 to 2.7.1 (I tried pip uninstall apache-airflow and cleared the airflow dir, removing airflow.cfg), I cannot start the scheduler & triggerer as daemons.
I tried starting them with the plain commands; they start, but when I log out of the console they are killed.
I tried `airflow scheduler` or `airflow triggerer`: they work, but are killed when I log out of the console.
`airflow scheduler --daemon && airflow triggerer --daemon`: fails, cannot start scheduler & triggerer (2.6.0 ran OK). But starting the webserver & celery worker as daemons is fine.
Help me
### What you think should happen instead
_No response_
### How to reproduce
1. run airflow 2.6.0 fine on ubuntu server 22.04.3 lts
2. install airflow 2.7.1
3. can not start daemon triggerer & scheduler
### Operating System
ubuntu server 22.04.3 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34816 | https://github.com/apache/airflow/pull/34931 | b067051d3bcec36187c159073ecebc0fc048c99b | 9c1e8c28307cc808739a3535e0d7901d0699dcf4 | "2023-10-07T17:18:30Z" | python | "2023-10-14T15:56:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,799 | ["airflow/providers/postgres/hooks/postgres.py", "docs/apache-airflow-providers-postgres/connections/postgres.rst"] | Airflow postgres connection field schema points to database name | ### Apache Airflow version
2.7.1
### What happened
Airflow's postgres connection configuration form has a field called 'schema' which is misleading, as the value entered here is used to refer to the database name instead of the schema name. It should be renamed to 'database' or 'dbname'.
### What you think should happen instead
_No response_
### How to reproduce
Create a connection on the web UI and choose the connection type postgres.
Have a dag connect to a postgres server with multiple databases.
Provide the database name in the 'schema' field of the connection form; this will work if nothing else is incorrect in the ETL.
Now change the value in the schema field of the connection form to refer to a schema; this will fail unexpectedly, as the schema field actually points to the database name.
### Operating System
Windows and Linux
### Versions of Apache Airflow Providers
2.71
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34799 | https://github.com/apache/airflow/pull/34811 | 530ebb58b6b85444b62618684b5741b9d6dd715e | 39cbd6b231c75ec432924d8508f15a4fe3c68757 | "2023-10-06T09:24:48Z" | python | "2023-10-08T19:24:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,795 | ["airflow/www/jest-setup.js", "airflow/www/static/js/api/index.ts", "airflow/www/static/js/dag/nav/FilterBar.tsx", "airflow/www/static/js/dag/useFilters.test.tsx", "airflow/www/static/js/dag/useFilters.tsx", "airflow/www/views.py", "tests/www/views/test_views_grid.py"] | Support multi-select state filtering on grid view | ### Description
Replace the existing selects with multi-selects that allow you to filter multiple DAG run states and types at the same time, somewhat similar to my prototype:

### Use case/motivation
I'm not sure if it is just me, but sometimes I wish I was able to show multiple DAG run states, especially `running` and `failed`, at the same time.
This would be especially helpful for busy DAGs on which I want to clear a few failed runs. Without the multi-select, if I switch from `failed` to `running` DAG runs, I need to orient myself again to find the run I just cleared (assuming there are lots of other running DAG runs). _With_ the multi-select, the DAG run I just cleared stays in the same spot and I can check the logs, while clearing some other failed runs.
I'm not sure we need a multi-select for DAG run types as well. I'd tend to say no, but maybe someone else has a use-case for that.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34795 | https://github.com/apache/airflow/pull/35403 | 1c6bbe2841fe846957f7a1ce68eb978c30669896 | 9e28475402a3fc6cbd0fedbcb3253ebff1b244e3 | "2023-10-06T03:34:31Z" | python | "2023-12-01T17:38:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,756 | ["airflow/cli/cli_config.py", "airflow/cli/commands/variable_command.py", "tests/cli/commands/test_variable_command.py"] | CLI: Variables set should allow to set description | ### Body
The CLI:
`airflow variables set [-h] [-j] [-v] key VALUE`
https://airflow.apache.org/docs/apache-airflow/stable/cli-and-env-variables-ref.html#set_repeat1
Doesn't support adding a description, even though the column exists and the REST API supports it:
https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/post_variables
**The Task:**
Allow setting the description from the CLI command
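For context, the description is already accepted at the model level (which, to my understanding, is what the REST API relies on), so the CLI only needs to expose it. A minimal sketch of the existing Python-level call:
```python
from airflow.models import Variable

# The ORM API already takes a description today; only the CLI lacks an option for it.
Variable.set("my_key", "my_value", description="What this variable is used for")
```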
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34756 | https://github.com/apache/airflow/pull/34791 | c70f298ec3ae65f510ea5b48c6568b1734b58c2d | 77ae1defd9282f7dd71a9a61cf7627162a25feb6 | "2023-10-04T14:13:52Z" | python | "2023-10-29T18:42:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,751 | ["airflow/www/templates/airflow/pool_list.html", "airflow/www/views.py", "tests/www/views/test_views_pool.py"] | Expose description columns of Pools in the UI | ### Body
In the Pools UI we don't show the description column though we do have it:
https://github.com/apache/airflow/blob/99eeb84c820c8a380721e5d40f5917a01616b943/airflow/models/pool.py#L56
and we have it in the API:
https://github.com/apache/airflow/pull/19841
The task:
Expose the column in the UI

### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34751 | https://github.com/apache/airflow/pull/34862 | 8d067129d5ba20a9847d5d70b368b3dffc42fe6e | 0583150aaca9452e02b8d15b613bfb2451b8e062 | "2023-10-04T11:00:17Z" | python | "2023-10-20T04:14:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,748 | ["airflow/providers/snowflake/provider.yaml", "docs/apache-airflow-providers-snowflake/index.rst", "generated/provider_dependencies.json"] | Upgrade the Snowflake Python Connector to version 2.7.8 or later | ### Description
As per the change made by Snowflake (affecting customers on GCP), kindly update the Snowflake Python Connector to version 2.7.8 or later.
Please note that all recent versions of the Snowflake SQLAlchemy connector already support this change, as they depend on a Python Connector more recent than the version above.
Here is the complete information on the change reasons and recommendations - https://community.snowflake.com/s/article/faq-2023-client-driver-deprecation-for-GCP-customers
### Use case/motivation
If this change is not made Airflow customers on GCP will not be able to perform PUT operations to their Snowflake account.
Soft Cutover enforced by Snowflake is Oct 30, 2023.
Hard Cutover enforced by Google is Jan 15, 2024
https://community.snowflake.com/s/article/faq-2023-client-driver-deprecation-for-GCP-customers
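A quick way to check what a given environment currently ships with (a diagnostic sketch that relies only on package metadata, not on any connector API):
```python
from importlib.metadata import version

# Should be >= 2.7.8 according to the linked Snowflake FAQ.
print(version("snowflake-connector-python"))
print(version("apache-airflow-providers-snowflake"))
```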
### Related issues
https://community.snowflake.com/s/article/faq-2023-client-driver-deprecation-for-GCP-customers
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34748 | https://github.com/apache/airflow/pull/35440 | 7352839e851cdbee0d15f0f8ff7ee26ed821b8e3 | a6a717385416a3468b09577dfe1d7e0702b5a0df | "2023-10-04T07:22:29Z" | python | "2023-11-04T18:43:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,740 | ["chart/templates/secrets/metadata-connection-secret.yaml", "helm_tests/other/test_keda.py"] | not using pgbouncer connection values still use pgbouncer ports/names for keda in helm chart | ### Apache Airflow version
2.7.1
### What happened
I missed the rc test window last week, sorry was out of town. When you use:
`values.keda.usePgbouncer: false`
The settings still re-use the pgbouncer port instead of port 5432 for Postgres. You can work around this by overriding the
`values.ports.pgbouncer: 5432` setting, but the database name is also incorrect: it references the name of the database from the pgbouncer.ini file, which has an additional `-metadata` appended to the database connection name.
### What you think should happen instead
The chart should not use the pgbouncer-specific values (port and database name) for the connection when pgbouncer is not enabled.
### How to reproduce
deploy the chart using the indicated values
### Operating System
gke
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34740 | https://github.com/apache/airflow/pull/34741 | 1f3525fd93554e66f6c3f2d965a0dbf6dcd82724 | 38e6607cc855f55666b817177103585f080d6173 | "2023-10-03T19:59:46Z" | python | "2023-10-07T17:53:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,726 | ["airflow/www/templates/airflow/trigger.html", "airflow/www/templates/appbuilder/dag_docs.html", "docs/apache-airflow/img/trigger-dag-tutorial-form.png"] | Hiding Run Id and Logical date from trigger DAG UI | ### Description
With the new more user friendly Trigger UI (`/dags/mydag/trigger`), one can direct more users to using Airflow UI directly.
However, the first two questions a user has to answer is **Logical date** and **Run ID**, which are very confusing and in most cases make no sense to override, even for administrators these values should be rare edge cases to override.

**Is it possible to?**
* Make these inputs opt-in on a per DAG level?
* Global config to hide them from all DAGs?
* Hide under "advanced and optional" section?
Airflow version 2.7.1
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34726 | https://github.com/apache/airflow/pull/35284 | 9990443fa154e3e1e5576b68c14fe375f0f76645 | 62bdf11fdc49c501ccf5571ab765c51363fa1cc7 | "2023-10-03T07:43:51Z" | python | "2023-11-08T22:29:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,720 | ["docs/apache-airflow/security/webserver.rst"] | AUTH_REMOTE_USER is gone | ### Apache Airflow version
2.7.1
### What happened
We upgraded from 2.6.3 to 2.7.1 and the webserver stopped working due to our config with error:
```
ImportError: cannot import name 'AUTH_REMOTE_USER' from 'airflow.www.fab_security.manager'
```
There's a commit called [Fix inheritance chain in security manager (https://github.com/apache/airflow/pull/33901)](https://github.com/apache/airflow/commit/d3ce44236895e9e1779ea39d7681b59a25af0509) which sounds suspicious around magic security imports.
### What you think should happen instead
This is [still documented](https://airflow.apache.org/docs/apache-airflow/stable/security/webserver.html#other-methods) and it wasn't noted in the changelog as removed, so it shouldn't have broken our upgrade.
### How to reproduce
it's not there, so try to import it and... it's not there.
for now I just switched it to importing directly `from flask_appbuilder.const import AUTH_DB, AUTH_LDAP, AUTH_OAUTH, AUTH_OID, AUTH_REMOTE_USER`
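For anyone hitting the same error, a minimal sketch of that workaround in `webserver_config.py` (it simply imports the constant from Flask-AppBuilder, where it lives anyway):
```python
# webserver_config.py
from flask_appbuilder.const import AUTH_REMOTE_USER

AUTH_TYPE = AUTH_REMOTE_USER
```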
### Operating System
all of them
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34720 | https://github.com/apache/airflow/pull/34721 | 08bfa08273822ce18e01f70f9929130735022583 | feaa5087e6a6b89d9d3ac7eaf9872d5b626bf1ce | "2023-10-02T20:58:31Z" | python | "2023-10-04T09:36:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,623 | ["airflow/www/static/js/api/useExtraLinks.ts", "airflow/www/static/js/dag/details/taskInstance/ExtraLinks.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx"] | Extra Links not refresh by the "Auto-refresh" | ### Apache Airflow version
2.7.1
### What happened
The Extra Links buttons are not refreshed by the "auto-refresh" feature.
That means that if you clear a task and the second run is in the running state, the buttons under Extra Links still link to the first run of the task.
### What you think should happen instead
_No response_
### How to reproduce
Run a task that has Extra Links, such as the `GlueJobOperator`, and wait for it to finish.
Clear the task, wait for it to be running again, then click the Extra Link: it opens a new tab for the first run and not the new run.
### Operating System
ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34623 | https://github.com/apache/airflow/pull/35317 | 9877f36cc0dc25cffae34322a19275acf5c83662 | be6e2cd0d42abc1b3099910c91982f31a98f4c3d | "2023-09-26T08:48:41Z" | python | "2023-11-16T15:30:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,612 | ["airflow/providers/celery/executors/celery_executor_utils.py"] | BROKER_URL_SECRET Not working as of Airflow 2.7 | ### Apache Airflow version
2.7.1
### What happened
Hi team,
In the past you could use `AIRFLOW__CELERY__BROKER_URL_SECRET` as a way to retrieve the credentials from a `SecretBackend` at runtime. However, as of Airflow 2.7, this technique appears to be broken. I believe this is related to the discussion [34030 - Celery configuration elements not shown to be fetched with _CMD pattern](https://github.com/apache/airflow/discussions/34030). The irony is that the pattern works when using the `config get-value` command, but does not work when using the actual `airflow celery` command. I suspect this has something to do with when the wrapper calls `ProvidersManager().initialize_providers_configuration()`.
```cmd
unset AIRFLOW__CELERY__BROKER_URL
AIRFLOW__CELERY__BROKER_URL_SECRET=broker_url_east airflow config get-value celery broker_url
```
This correctly prints the secret from the backend!
```
redis://:<long password>@<my url>:6379/1
```
However, actually executing Celery with the same methodology results in the default Redis:
```cmd
AIRFLOW__CELERY__BROKER_URL_SECRET=broker_url_east airflow celery worker
```
Relevant output
```
- ** ---------- [config]
- ** ---------- .> app: airflow.providers.celery.executors.celery_executor:0x7f4110be1e50
- ** ---------- .> transport: redis://redis:6379/0
```
Notice the redis/redis and default port with no password.
### What you think should happen instead
I would expect the airflow celery command to be able to leverage the `_secret` API similar to the `config` command.
### How to reproduce
You must use a secret backend to reproduce this, as described above. Alternatively you can do
```cmd
AIRFLOW__CELERY__BROKER_URL_CMD='/usr/bin/env bash -c "echo -n ZZZ"' airflow celery worker
```
And you will see the ZZZ is disregarded
```
- ** ---------- .> app: airflow.providers.celery.executors.celery_executor:0x7f0506d49e20
- ** ---------- .> transport: redis://redis:6379/0
```
It appears neither historical _CMD or _SECRET APIs work after the refactor to move celery to the providers.
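A quick diagnostic sketch to confirm the discrepancy without starting a worker - note the second import is an assumption on my part (my reading is that the provider keeps its Celery app as `app` in `celery_executor_utils`, but that is not a documented API):
```python
from airflow.configuration import conf

# What Airflow's config layer resolves (this honours the _CMD / _SECRET lookups):
print(conf.get("celery", "broker_url"))

# What the Celery app was actually configured with (assumed import path):
from airflow.providers.celery.executors.celery_executor_utils import app
print(app.conf.broker_url)
```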
### Operating System
ubuntu20.04
### Versions of Apache Airflow Providers
Relevant ones
apache-airflow-providers-celery 3.3.3
celery 5.3.4
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
I know this has something to do with when `ProvidersManager().initialize_providers_configuration()` is executed but I don't know the right place to put the fix.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34612 | https://github.com/apache/airflow/pull/34782 | d72131f952836a3134c90805ef7c3bcf82ea93e9 | 1ae9279346315d99e7f7c546fbcd335aa5a871cd | "2023-09-25T20:56:20Z" | python | "2023-10-17T17:58:52Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,595 | ["chart/templates/dag-processor/dag-processor-deployment.yaml", "chart/values.yaml", "helm_tests/airflow_core/test_dag_processor.py"] | helm chart doesn't support securityContext for airflow-run-airflow-migration and dag-processor init container | ### Official Helm Chart version
1.10.0 (latest released)
### Apache Airflow version
2.6.3
### Kubernetes Version
1.27
### Helm Chart configuration
helm chart doesn't support securityContext for airflow-run-airflow-migration and dag-processor init container
### Docker Image customizations
_No response_
### What happened
_No response_
### What you think should happen instead
_No response_
### How to reproduce
helm chart doesn't support securityContext for airflow-run-airflow-migration and dag-processor init container.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34595 | https://github.com/apache/airflow/pull/35593 | 1a5a272312f31ff8481b647ea1f4616af7e5b4fe | 0a93e2e28baa282e20e2a68dcb718e3708048a47 | "2023-09-25T10:47:28Z" | python | "2023-11-14T00:21:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,535 | ["airflow/www/views.py"] | Unable to retrieve logs for nested task group when parent is mapped | ### Apache Airflow version
2.7.1
### What happened
Unable to retrieve logs for task inside task group inside mapped task group.
Got `404 "TaskInstance not found"` in network requests
### What you think should happen instead
_No response_
### How to reproduce
```
from datetime import datetime
from airflow import DAG
from airflow.decorators import task, task_group
from airflow.operators.bash import BashOperator
from airflow.utils.task_group import TaskGroup
with DAG("mapped_task_group_bug", schedule=None, start_date=datetime(1970, 1, 1)):
@task
def foo():
return ["a", "b", "c"]
@task_group
def bar(x):
with TaskGroup("baz"):
# If child task group exists, logs 404
# "TaskInstance not found"
# http://localhost:8080/api/v1/dags/mapped_task_group_bug/dagRuns/manual__2023-09-21T22:31:56.863704+00:00/taskInstances/bar.baz.bop/logs/2?full_content=false
# if it is removed, logs appear
BashOperator(task_id="bop", bash_command="echo hi $x", env={"x": x})
bar.partial().expand(x=foo())
```
### Operating System
debian 11 / `astro dev start`
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34535 | https://github.com/apache/airflow/pull/34587 | 556791b13d4e4c10f95f3cb4c6079f548447e1b8 | 97916ba45ccf73185a5fbf50270a493369da0344 | "2023-09-21T22:48:50Z" | python | "2023-09-25T16:26:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,498 | ["chart/templates/dag-processor/dag-processor-deployment.yaml", "chart/values.yaml", "helm_tests/security/test_security_context.py"] | Add securityContexts in dagProcessor.logGroomerSidecar | ### Official Helm Chart version
1.10.0 (latest released)
### Apache Airflow version
2.7.1
### Kubernetes Version
1.26.7
### Helm Chart configuration
_No response_
### Docker Image customizations
_No response_
### What happened
When enabling `dagProcessor.logGroomerSidecar`, our OPA Gatekeeper flags the `dag-processor-log-groomer` container for not having the appropriate non-root permissions. There is no way to set the `securityContexts` for this sidecar, as the setting is not even exposed.
### What you think should happen instead
The `securityContexts` setting for the `dag-processor-log-groomer` container should be configurable.
### How to reproduce
In the Helm values, set `dagProcessor.logGroomerSidecar` to `true`.
### Anything else
This problem occurs when there are OPA policies in place pertaining to strict `securityContexts` settings.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34498 | https://github.com/apache/airflow/pull/34499 | 6393b3515fbb7aabb1613f61204686e89479a5a0 | 92cc2ffd863b8925ed785d5e8b02ac38488e835e | "2023-09-20T09:42:39Z" | python | "2023-11-29T03:00:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,455 | ["airflow/www/static/js/dag/details/graph/Node.tsx", "docs/apache-airflow/img/demo_graph_view.png"] | Graph view task name & status visibility | ### Description
I have had complaints from coworkers that it is harder to see the status of Airflow tasks at a glance in the new graph view. They miss the colored border from the old graph view that made the status of a task very clear. They have also mentioned that the names of the tasks feel a lot smaller and are harder to read without zooming in.
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34455 | https://github.com/apache/airflow/pull/34486 | 404666ded04d60de050c0984d113b594aee50c71 | d0ae60f77e1472585d62a3eb44d64d9da974a199 | "2023-09-18T13:40:04Z" | python | "2023-09-25T18:06:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,394 | ["docs/apache-airflow/templates-ref.rst"] | var.json.get not working correctly with nested JSON objects | ### Apache Airflow version
2.7.1
### What happened
According to [Airflow Documentation on Airflow Variables in Templates](https://airflow.apache.org/docs/apache-airflow/stable/templates-ref.html#airflow-variables-in-templates) there are two ways of accessing the JSON variables in templates:
- using the direct `{{ var.json.my_dict_var.key1 }}`
- using a getter with a default fallback value `{{ var.json.get('my.dict.var', {'key1': 'val1'}) }}`
However, only the first approach works for nested variables, as demonstrated in the example.
Or is it me not understanding this documentation? Alternatively it could get updated to make it clearer.
### What you think should happen instead
_No response_
### How to reproduce
The following test demonstrates the issue:
```
from datetime import datetime, timezone
import pytest
from airflow import DAG
from airflow.models.baseoperator import BaseOperator
from airflow.models.dag import DAG
from airflow.utils import db
from airflow.utils.state import DagRunState
from airflow.utils.types import DagRunType
from pytest import MonkeyPatch
from airflow.models import Variable
class TemplatedBaseOperator(BaseOperator):
template_fields = BaseOperator.template_fields + (
"templated_direct", "templated_getter")
def __init__(
self,
*args,
**kwargs,
):
self.templated_direct = "{{ var.json.somekey.somecontent }}"
self.templated_getter = "{{ var.json.get('somekey.somecontent', false) }}"
super().__init__(
*args,
**kwargs,
)
@pytest.fixture()
def reset_db():
db.resetdb()
yield
@pytest.fixture
def dag() -> DAG:
with DAG(
dag_id="templating_dag",
schedule="@daily",
start_date=datetime(2023, 1, 1, tzinfo=timezone.utc),
render_template_as_native_obj=True,
) as dag:
TemplatedBaseOperator(
task_id="templating_task"
)
return dag
def test_templating(dag: DAG, reset_db: None, monkeypatch: MonkeyPatch):
"""Test if the templated values get intialized from environment variables when rendered"""
# setting env variables
monkeypatch.setenv(
"AIRFLOW_VAR_SOMEKEY",
'{"somecontent": true}',
)
dagrun = dag.create_dagrun(
state=DagRunState.RUNNING,
execution_date=datetime(2023, 1, 1, tzinfo=timezone.utc),
data_interval=(datetime(2023, 1, 1, tzinfo=timezone.utc), datetime(2023, 1, 7, tzinfo=timezone.utc)),
start_date=datetime(2023, 1, 7, tzinfo=timezone.utc),
run_type=DagRunType.MANUAL,
)
ti = dagrun.get_task_instance(task_id="templating_task")
ti.task = dag.get_task(task_id="templating_task")
rendered_template = ti.render_templates()
assert {'somecontent': True} == Variable.get("somekey", deserialize_json=True)
assert getattr(rendered_template, "templated_direct") == True
# the following test is failing, getting default "False" instead of Variable 'True'
assert getattr(rendered_template, "templated_getter") == True
```
### Operating System
Ubuntu 22.04.3 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34394 | https://github.com/apache/airflow/pull/34411 | b23d3f964b2699d4c7f579e22d50fabc9049d1b6 | 03db0f6b785a4983c09d6eec7433cf28f7759610 | "2023-09-15T14:02:52Z" | python | "2023-09-16T18:24:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,327 | ["airflow/datasets/manager.py", "airflow/listeners/listener.py", "airflow/listeners/spec/dataset.py", "airflow/models/dag.py", "docs/apache-airflow/administration-and-deployment/listeners.rst", "tests/datasets/test_manager.py", "tests/listeners/dataset_listener.py", "tests/listeners/test_dataset_listener.py"] | Listeners for Datasets | ### Description
Add listeners for Datasets (events)
### Use case/motivation
As Airflow administrators, we would like to trigger some external processes based on all datasets being created/updated by our users. We came across the listeners for the dag runs and task instances (which are also useful), but are still missing listeners for datasets.
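For illustration, the kind of hook we have in mind, mirroring the existing dag-run/task-instance listener specs - the hook name `on_dataset_changed` below is made up, since no such spec exists yet:
```python
from airflow.listeners import hookimpl

class DatasetAuditListener:
    @hookimpl
    def on_dataset_changed(self, dataset):  # hypothetical spec name
        # Placeholder for kicking off whatever external process we need.
        print(f"dataset updated: {dataset.uri}")
```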
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34327 | https://github.com/apache/airflow/pull/34418 | 31450bbe3c91246f3eedd6a808e60d5355d81171 | 9439111e739e24f0e3751350186b0e2130d2c821 | "2023-09-13T07:16:27Z" | python | "2023-11-13T14:15:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,283 | ["airflow/providers/docker/operators/docker.py", "docs/spelling_wordlist.txt", "tests/providers/docker/operators/test_docker.py"] | DockerOperator Expose `ulimits` create host config parameter | ### Description
This issue should be resolved by simply adding a `ulimits` parameter to `DockerOperator` that is passed directly to [`create_host_config`](https://docker-py.readthedocs.io/en/stable/api.html#docker.api.container.ContainerApiMixin.create_host_config).
### Use case/motivation
Currently applying custom `ulimits` is not possible with `DockerOperator`, making it necessary to do some hacky entrypoint workarounds instead.
By implementing this feature, one should be able to set ulimits directly by giving a list of [`Ulimit`](https://docker-py.readthedocs.io/en/stable/api.html#docker.types.Ulimit) instances to `DockerOperator`.
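A sketch of what the usage could look like - the `ulimits` parameter itself is the proposal here and does not exist on `DockerOperator` yet, while `docker.types.Ulimit` is existing docker-py API:
```python
from docker.types import Ulimit
from airflow.providers.docker.operators.docker import DockerOperator

run_task = DockerOperator(
    task_id="run_with_custom_ulimits",
    image="python:3.11-slim",
    command="python -c 'print(1)'",
    # Proposed parameter, passed straight through to create_host_config(ulimits=...):
    ulimits=[Ulimit(name="nofile", soft=65536, hard=65536)],
)
```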
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34283 | https://github.com/apache/airflow/pull/34284 | 3fa9d46ec74ef8453fcf17fbd49280cb6fb37cef | c668245b5740279c08cbd2bda1588acd44388eb3 | "2023-09-11T21:10:57Z" | python | "2023-09-12T21:25:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,232 | ["airflow/cli/commands/connection_command.py"] | Standardise output of "airflow connections export" to those of "airflow variables export" and "airflow users export" | ### Description
The three commands:
```
airflow users export /opt/airflow/dags/users.json &&
airflow variables export /opt/airflow/dags/variables.json &&
airflow connections export /opt/airflow/dags/connections.json
```
give back:
```
7 users successfully exported to /opt/airflow/dags/users.json
36 variables successfully exported to /opt/airflow/dags/variables.json
Connections successfully exported to /opt/airflow/dags/connections.json.
```
### Use case/motivation
Standardise output of "airflow connections export" to those of "airflow variables export" and "airflow users export"
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34232 | https://github.com/apache/airflow/pull/34640 | 3189ebee181beecd5a1243a4998bc355b648dc6b | a5f5e2fc7f7b7f461458645c8826f015c1fa8d78 | "2023-09-09T07:33:50Z" | python | "2023-09-27T09:15:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,210 | ["airflow/api_connexion/endpoints/event_log_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_event_log_endpoint.py"] | Add filters to get event logs | ### Description
In the REST API, there is an endpoint to get event (audit) logs. It has sorting and pagination but not filtering. It would be useful to filter by `when` (before and after), `dag_id`, `task_id`, `owner`, `event`
<img width="891" alt="Screenshot 2023-09-08 at 1 30 35 PM" src="https://github.com/apache/airflow/assets/4600967/4280b9ed-2c73-4a9c-962c-4ede5cb140fe">
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34210 | https://github.com/apache/airflow/pull/34417 | db89a33b60f46975850e3f696a7e05e61839befc | 3189ebee181beecd5a1243a4998bc355b648dc6b | "2023-09-08T12:31:57Z" | python | "2023-09-27T07:58:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,147 | ["docs/apache-airflow/administration-and-deployment/cluster-policies.rst"] | Update docs to mark Pluggy interface as stable rather than experimental | ### What do you see as an issue?
[Pluggy interface has been listed as experimental](https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/cluster-policies.html#:~:text=By%20using%20a,experimental%20feature.) since Airflow 2.6. Requesting it be listed as stable
### Solving the problem
Change the documentation to say it is now stable
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34147 | https://github.com/apache/airflow/pull/34174 | d29c3485609d28e14a032ae5256998fb75e8d49f | 88623acae867c2a9d34f5030809102379080641a | "2023-09-06T22:37:12Z" | python | "2023-09-15T06:49:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,132 | ["airflow/www/static/js/dag/Main.tsx", "airflow/www/static/js/dag/details/index.tsx"] | Always show details panel when user navigates to Graph tab | ### Description
When a user navigates to `/grid?tab=graph` we should show the details panel with the graph tab selected, even when the user hid the details panel previously.
### Use case/motivation
People already complained asking me where the graph view went after the latest Airflow upgrade (2.7.0). I can understand that this may be confusing, especially when they navigate to the graph from a DAG run link.

It would be more user friendly if we could auto-open the details panel when a user navigates to the graph tab, so they immediately get what they expect.
Happy to create the PR for this, I believe it should be just a tiny change. However, I'm opening this feature request first, just to make sure I'm not conflicting with any other plans ([AIP-39?](https://github.com/apache/airflow/projects/9)).
### Related issues
https://github.com/apache/airflow/issues/29852
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34132 | https://github.com/apache/airflow/pull/34136 | 5eea4e632c8ae50812e07b1d844ea4f52e0d6fe1 | 0b319e79ec97f6f4a8f8ce55119b6539138481cd | "2023-09-06T10:15:30Z" | python | "2023-09-07T09:58:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,092 | ["airflow/www/views.py"] | Can Delete on Task Instances permission is required to clear task | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
I was changing the roles to remove all delete access. But as soon as I remove the "Can Delete on Task Instances" permission, clearing a task takes me to a blank page.
Shouldn't clearing a task work without delete access?
### What you think should happen instead
The "Can Delete on Task Instances" permission should not affect access to clearing tasks.
### How to reproduce
Copy the Op role, remove all "Can Delete" permissions, and assign that role to a user.
That user will not be able to clear task instances now.
Add the "Can Delete on Task Instances" permission to the role and the user will now have access to the clear task instances page.
### Operating System
Redhat 8
### Versions of Apache Airflow Providers
2.6.1
### Deployment
Docker-Compose
### Deployment details
airflow 2.6.1 running on Docker with postgres 13 and pgbouncer.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34092 | https://github.com/apache/airflow/pull/34123 | ba524ce4b2c80976fdb7ff710750416bc01d250d | 2f5777c082189e6495f0fea44bb9050549c0056b | "2023-09-05T00:48:13Z" | python | "2023-09-05T23:36:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,086 | [".github/workflows/ci.yml", "dev/breeze/SELECTIVE_CHECKS.md", "dev/breeze/src/airflow_breeze/utils/selective_checks.py", "dev/breeze/tests/test_selective_checks.py"] | Replace the usage of `--package-filter` with short package ids in CI | ### Body
Our CI generates a `--package-filter` list in selective checks in order to pass it to the `build-docs` and `publish-docs` commands. However, after implementing #33876 and #34085 we could switch both commands to use the short package ids which we already have (almost - there should most likely be a slight modification for `apache-airflow`, `apache-airflow-providers` and `helm-chart`). This should also simplify some of the unit tests we have.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34086 | https://github.com/apache/airflow/pull/35068 | d87ca903742cc0f75df165377e5f8d343df6dc03 | 8d067129d5ba20a9847d5d70b368b3dffc42fe6e | "2023-09-04T19:31:19Z" | python | "2023-10-20T00:01:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,085 | ["BREEZE.rst", "dev/breeze/src/airflow_breeze/commands/release_management_commands.py", "dev/breeze/src/airflow_breeze/params/doc_build_params.py", "dev/breeze/src/airflow_breeze/utils/general_utils.py", "dev/breeze/src/airflow_breeze/utils/publish_docs_helpers.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_release-management.svg", "images/breeze/output_release-management_publish-docs.svg"] | Replace --package-filter usage for `publish-docs` breeze command with short package names | ### Body
Same as https://github.com/apache/airflow/issues/33876 but for `publish-docs` command.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/34085 | https://github.com/apache/airflow/pull/34253 | 58cce7d88b1d57e815e04e5460eaaa71173af14a | 18eed9191dbcb84cd6099a97d5ad778ac561cd4d | "2023-09-04T19:25:13Z" | python | "2023-09-10T11:45:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,069 | ["airflow/www/views.py"] | Delete record of dag run is not being logged in audit logs when clicking on delete button in list dag runs | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
When listing DAG runs, selecting them via the check boxes and performing "Delete" from the Actions menu, the audit logs record an event with "action_muldelete".
But if, instead of using the check boxes and the Actions menu, we click the delete button provided next to the record, there is no audit log for the deleted DAG run.
### What you think should happen instead
There should at least be an identifiable audit event in both scenarios. For one it is present, but for the other there is no audit entry.
### How to reproduce
Go to the List Dag Runs view.
1. Select one DAG run using the check box, go to Actions > Delete and confirm. Check the audit logs: there will be an "action_muldelete" event.
2. Go to the List Dag Runs view and click the delete button which is present beside the check box, adjacent to the edit-record button. Check the audit logs: there will be no event for this action.
### Operating System
Redhat 8
### Versions of Apache Airflow Providers
airflow 2.6.1 running on Docker with postgres 13 and pgbouncer.
### Deployment
Docker-Compose
### Deployment details
airflow 2.6.1 running on Docker with postgres 13 and pgbouncer.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34069 | https://github.com/apache/airflow/pull/34090 | d81fe093c266ef63d4e3f0189eb8c867bff865f4 | 988632fd67abc10375ad9fe2cbd8c9edccc609a5 | "2023-09-04T08:07:37Z" | python | "2023-09-05T07:49:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,066 | ["airflow/www/static/js/dag/details/gantt/Row.tsx", "airflow/www/static/js/dag/details/graph/utils.ts", "airflow/www/static/js/dag/grid/TaskName.test.tsx", "airflow/www/static/js/dag/grid/TaskName.tsx", "airflow/www/static/js/dag/grid/ToggleGroups.tsx", "airflow/www/static/js/dag/grid/index.test.tsx", "airflow/www/static/js/dag/grid/renderTaskRows.tsx", "airflow/www/static/js/utils/graph.ts"] | Toggling TaskGroup toggles all TaskGroups with the same label on Graph/Grid | ### Apache Airflow version
main (development)
### What happened
When you have 2 TaskGroups with the same `group_id` (nested under different parents), toggling either of them on the UI (graph or grid) toggles both.
<img src="https://cdn-std.droplr.net/files/acc_1153680/9Z1Nvs" alt="image" width="50%">
### What you think should happen instead
Only the clicked TaskGroup should be toggled. They should be distinguishable since they have the parent's group_id as prefix.
### How to reproduce
```
from datetime import datetime
from airflow.models import DAG
from airflow.operators.empty import EmptyOperator
from airflow.utils.task_group import TaskGroup
with DAG(
"my_dag",
start_date=datetime(2023, 9, 4),
):
with TaskGroup(group_id="a"):
with TaskGroup(group_id="inner"):
EmptyOperator(task_id="dummy")
with TaskGroup(group_id="b"):
with TaskGroup(group_id="inner"):
EmptyOperator(task_id="dummy")
```
### Operating System
-
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34066 | https://github.com/apache/airflow/pull/34072 | e403c74524a980030ba120c3602de0c3dc867d86 | b9acffa81bf61dcf0c5553942c52629c7f75ebe2 | "2023-09-04T07:13:16Z" | python | "2023-09-06T10:23:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,058 | ["airflow/www/views.py"] | UI Grid error when DAG has been removed | **Reproduce on main:**
- Run a DAG
- Rename the DAG
- Wait for dag file processor to remove the deleted DAG and add the new one. (Old Dag should not appear on the home page anymore)
- Go to the Browse DAGRUN page.
- Click on a DagRun of the deleted DAG, as if you want to see the 'details/grid' of that old dag run
More info and screenshots here:
https://github.com/notifications#discussioncomment-6898570

| https://github.com/apache/airflow/issues/34058 | https://github.com/apache/airflow/pull/36028 | fd0988369b3a94be01a994e46b7993e2d97b2028 | 549fac30eeefaa449df9bfdf58eb40a008e9fe75 | "2023-09-03T19:36:42Z" | python | "2023-12-03T01:11:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,023 | ["airflow/ti_deps/deps/trigger_rule_dep.py", "tests/models/test_mappedoperator.py", "tests/ti_deps/deps/test_trigger_rule_dep.py"] | Trigger Rule ONE_FAILED does not work in task group with mapped tasks | ### Apache Airflow version
2.7.0
### What happened
I have the following DAG:
```python
from __future__ import annotations
from datetime import datetime
from airflow.decorators import dag, task, task_group
from airflow.utils.trigger_rule import TriggerRule
@task
def get_records() -> list[str]:
return ["a", "b", "c"]
@task
def submit_job(record: str) -> None:
pass
@task
def fake_sensor(record: str) -> bool:
raise RuntimeError("boo")
@task
def deliver_record(record: str) -> None:
pass
@task(trigger_rule=TriggerRule.ONE_FAILED)
def handle_failed_delivery(record: str) -> None:
pass
@task_group(group_id="deliver_records")
def deliver_record_task_group(record: str):
(
submit_job(record=record)
>> fake_sensor(record=record)
>> deliver_record(record=record)
>> handle_failed_delivery(record=record)
)
@dag(
dag_id="demo_trigger_one_failed",
schedule=None,
start_date=datetime(2023, 1, 1),
)
def demo_trigger_one_failed() -> None:
records = get_records()
deliver_record_task_group.expand(record=records)
demo_trigger_one_failed()
```
- `fake_sensor` is simulating a task that raises an exception. (It could be a `@task.sensor` raising a `AirflowSensorTimeout`; it doesn't matter, the behavior is the same.)
- `handle_failed_delivery`'s `TriggerRule.ONE_FAILED` means **it is supposed to run whenever any task upstream fails.** So when `fake_sensor` fails, `handle_failed_delivery` should run.
But this does not work. `handle_failed_delivery` is skipped, and (based on the UI) it's skipped very early, before it can know if the upstream tasks have completed successfully or errored.
Here's what I see, progressively (see `How to reproduce` below for how I got this):
| started ... | skipped too early ... | fake sensor about to fail... | ... done, didn't run |
|--------|--------|--------|--------|
| <img width="312" alt="Screenshot 2023-09-01 at 3 26 49 PM" src="https://github.com/apache/airflow/assets/354655/2a9bb897-dd02-4c03-a381-2deb774d1072"> | <img width="310" alt="Screenshot 2023-09-01 at 3 26 50 PM" src="https://github.com/apache/airflow/assets/354655/11d0f8c5-c7c0-400f-95dd-4ed3992701d0"> | <img width="308" alt="Screenshot 2023-09-01 at 3 26 53 PM" src="https://github.com/apache/airflow/assets/354655/dd81e42e-ca24-45fa-a18d-df2b435c3d82"> | <img width="309" alt="Screenshot 2023-09-01 at 3 26 56 PM" src="https://github.com/apache/airflow/assets/354655/d3a3303c-91d9-498a-88c3-f1aa1e8580b6"> |
If I remove the task group and instead do,
```python
@dag(
dag_id="demo_trigger_one_failed",
schedule=None,
start_date=datetime(2023, 1, 1),
)
def demo_trigger_one_failed() -> None:
records = get_records()
(
submit_job(record=records)
>> fake_sensor.expand(record=records)
>> deliver_record.expand(record=records)
>> handle_failed_delivery.expand(record=records)
)
```
then it does the right thing:
| started ... | waiting ... | ... done, triggered correctly |
|--------|--------|--------|
| <img width="301" alt="Screenshot 2023-09-01 at 3 46 48 PM" src="https://github.com/apache/airflow/assets/354655/7e52979b-0161-4469-b284-3411a0b1b1c4"> | <img width="306" alt="Screenshot 2023-09-01 at 3 46 50 PM" src="https://github.com/apache/airflow/assets/354655/733654f3-8cb0-4181-b6b7-bad02994469d"> | <img width="304" alt="Screenshot 2023-09-01 at 3 46 53 PM" src="https://github.com/apache/airflow/assets/354655/13ffb46f-d5ca-4e7a-8d60-caad2e4a7827"> |
### What you think should happen instead
The behavior with the task group should be the same as without the task group: the `handle_failed_delivery` task with `trigger_rule=TriggerRule.ONE_FAILED` should be run when the upstream `fake_sensor` task fails.
### How to reproduce
1. Put the above DAG at a local path, `/tmp/dags/demo_trigger_one_failed.py`.
2. `docker run -it --rm --mount type=bind,source="/tmp/dags",target=/opt/airflow/dags -p 8080:8080 apache/airflow:2.7.0-python3.10 bash`
3. In the container:
```
airflow db init
airflow users create --role Admin --username airflow --email airflow --firstname airflow --lastname airflow --password airflow
airflow scheduler --daemon
airflow webserver
```
4. Open `http://localhost:8080` on the host. Login with `airflow` / `airflow`. Run the DAG.
I tested this with:
- `apache/airflow:2.6.2-python3.10`
- `apache/airflow:2.6.3-python3.10`
- `apache/airflow:2.7.0-python3.10`
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
n/a
### Deployment
Other Docker-based deployment
### Deployment details
This can be reproduced using standalone Docker images, see Repro steps above.
### Anything else
I wonder if this is related to (or fixed by?) https://github.com/apache/airflow/issues/33446 -> https://github.com/apache/airflow/pull/33732 ? (The latter was "added to the `Airflow 2.7.1` milestone 3 days ago." I can try to install that pre-release code in the container and see if it's fixed.)
_edit_: nope, [not fixed](https://github.com/apache/airflow/issues/34023#issuecomment-1703298280)
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34023 | https://github.com/apache/airflow/pull/34337 | c2046245c07fdd6eb05b996cc67c203c5ac456b6 | 69938fd163045d750b8c218500d79bc89858f9c1 | "2023-09-01T19:58:24Z" | python | "2023-11-01T20:37:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,019 | ["airflow/www/static/css/main.css"] | please disable pinwheel animation as it violates ada guidelines | ### Description
_No response_
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34019 | https://github.com/apache/airflow/pull/34020 | 7bf933192d845f85abce56483e3f395247e60b68 | f8a5c8bf2b23b8a5a69b00e21ff37b58559c9dd6 | "2023-09-01T17:13:38Z" | python | "2023-09-02T07:40:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 34,005 | ["docs/conf.py"] | `version_added` field in configuration option doesn't work correctly in providers documentation | ### Apache Airflow version
2.7.0
### What happened
Initial finding: https://github.com/apache/airflow/pull/33960#discussion_r1312748153
Since [Airflow 2.7.0](https://github.com/apache/airflow/pull/32629) we have the ability to store configuration options in providers; everything works fine except the `version_added` field.
The logic around this field expects an Airflow version and not a provider version. Any attempt to set this field to a value greater than the current version of Airflow (2.7.0 at the moment) results in the configuration option not being rendered in the documentation. It does not seem to prevent adding the configuration itself - at least the `airflow config get-value` command returns the expected option.
### What you think should happen instead
Various, depending on the final solution and decision.
_Option 1_:
In case we do not want to use this field for providers, we could ignore it in provider configurations.
For Community Providers we could always set it to `~`
_Option 2_:
Dynamically resolve it depending on the source of the configuration, Core/Provider
_Option 3_:
Add `provider_version_added` and use it to show in which version of the provider this configuration option was added
We could keep `version_added` for cases where a provider's configuration option relates to an Airflow version
_Option 4_:
Suggest your own 😺
### How to reproduce
Create a configuration option with `version_added` greater than the current version of Airflow (2.7.0 for stable, 2.8.0 for dev):
```yaml
config:
aws:
description: This section contains settings for Amazon Web Services (AWS) integration.
options:
session_factory:
description: |
Full import path to the class which implements a custom session factory for
``boto3.session.Session``. For more details please have a look at
:ref:`howto/connection:aws:session-factory`.
default: ~
example: my_company.aws.MyCustomSessionFactory
type: string
version_added: 3.1.1
```
### Operating System
n/a
### Versions of Apache Airflow Providers
n/a
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/34005 | https://github.com/apache/airflow/pull/34011 | 04e9b0bd784e7c0045e029c6ed4ec0ac4ad6066f | 559507558b1dff591f549dc8b24092d900ffb0fa | "2023-09-01T11:52:09Z" | python | "2023-09-01T14:28:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,895 | ["setup.cfg", "setup.py"] | Search Function Not Working in Airflow UI | ### Apache Airflow version
2.7.0
### What happened
Actual Behavior:
Upon entering a keyword in the search bar, no search results are shown, and the UI remains unchanged. The search function seems to be non-responsive.
You can see in the address bar, `undefined` appears instead of valid value.
<img width="1427" alt="step-1" src="https://github.com/apache/airflow/assets/17428690/10da4758-a614-48c8-a926-6bc69f8595f1">
<img width="1433" alt="step-2" src="https://github.com/apache/airflow/assets/17428690/be8912f2-e49e-40ca-86f7-1803b516482e">
### What you think should happen instead
Expected Behavior:
When I use the search function in the Airflow UI, I expect to see a list of results that match the entered keyword. This should help me quickly locate specific DAGs or tasks within the UI.
### How to reproduce
Steps to Reproduce:
1. Log in to the Airflow UI.
2. Navigate to the "Connections" list view by clicking on the "Connections" link in the navigation menu or by directly visiting the URL: <your_airflow_ui_url>/connection/list/.
3. Attempt to use the search function by entering a keyword.
4. Observe that no search results are displayed, and the search appears to do nothing.
### Operating System
MacOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Standalone Airflow via official Quickstart documentation
### Anything else
I have been encountering this issue since `Airflow 2.6.0`. It was fine on `Airflow 2.5.3`.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33895 | https://github.com/apache/airflow/pull/33931 | 3b868421208f171dd44733c6a3376037b388bcef | ba261923d4de90d0609344843554bf7dfdab11c6 | "2023-08-29T16:59:40Z" | python | "2023-08-31T06:03:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,887 | ["setup.cfg"] | Airflow db migrate AttributeError: 'Session' object has no attribute 'scalars' | ### Apache Airflow version
2.7.0
### What happened
I tried to execute `airflow db migrate`:
```console
/opt/miniconda3/envs/pytorch/lib/python3.8/site-packages/airflow/configuration.py:751 UserWarning: Config scheduler.max_tis_per_query (value: 512) should NOT be greater than core.parallelism (value: 32). Will now use core.parallelism as the max task instances per query instead of specified value.
/opt/miniconda3/envs/pytorch/lib/python3.8/site-packages/airflow/configuration.py:857 FutureWarning: The 'log_id_template' setting in [elasticsearch] has the old default value of '{dag_id}-{task_id}-{execution_date}-{try_number}'. This value has been changed to '{dag_id}-{task_id}-{run_id}-{map_index}-{try_number}' in the running config, but please update your config before Apache Airflow 3.0.
/opt/miniconda3/envs/pytorch/lib/python3.8/site-packages/airflow/cli/cli_config.py:974 DeprecationWarning: The namespace option in [kubernetes] has been moved to the namespace option in [kubernetes_executor] - the old setting has been used, but please update your config.
DB: postgresql+psycopg2://airflow:***@localhost/airflow
Performing upgrade to the metadata database postgresql+psycopg2://airflow:***@localhost/airflow
Traceback (most recent call last):
File "/opt/miniconda3/envs/pytorch/bin/airflow", line 8, in <module>
sys.exit(main())
File "/opt/miniconda3/envs/pytorch/lib/python3.8/site-packages/airflow/__main__.py", line 60, in main
args.func(args)
File "/opt/miniconda3/envs/pytorch/lib/python3.8/site-packages/airflow/cli/cli_config.py", line 49, in command
return func(*args, **kwargs)
File "/opt/miniconda3/envs/pytorch/lib/python3.8/site-packages/airflow/utils/cli.py", line 113, in wrapper
return f(*args, **kwargs)
File "/opt/miniconda3/envs/pytorch/lib/python3.8/site-packages/airflow/utils/providers_configuration_loader.py", line 56, in wrapped_function
return func(*args, **kwargs)
File "/opt/miniconda3/envs/pytorch/lib/python3.8/site-packages/airflow/cli/commands/db_command.py", line 104, in migratedb
db.upgradedb(
File "/opt/miniconda3/envs/pytorch/lib/python3.8/site-packages/airflow/utils/session.py", line 77, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/miniconda3/envs/pytorch/lib/python3.8/site-packages/airflow/utils/db.py", line 1616, in upgradedb
for err in _check_migration_errors(session=session):
File "/opt/miniconda3/envs/pytorch/lib/python3.8/site-packages/airflow/utils/db.py", line 1499, in _check_migration_errors
yield from check_fn(session=session)
File "/opt/miniconda3/envs/pytorch/lib/python3.8/site-packages/airflow/utils/db.py", line 979, in check_conn_id_duplicates
dups = session.scalars(
AttributeError: 'Session' object has no attribute 'scalars'
```
### What you think should happen instead
_No response_
### How to reproduce
Execute `airflow db migrate` against an existing PostgreSQL database.
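A check that usually explains this traceback (diagnostic sketch): `Session.scalars()` only exists in reasonably recent SQLAlchemy 1.4 releases (1.4.24+, if I remember correctly), so an older SQLAlchemy pinned in the environment produces exactly this `AttributeError`.
```python
import sqlalchemy

print(sqlalchemy.__version__)  # Session.scalars() needs a recent 1.4.x release
```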
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33887 | https://github.com/apache/airflow/pull/33892 | fe27031382e2034b59a23db1c6b9bdbfef259137 | bfab7daffedd189a85214165cfc34944e2bf11c1 | "2023-08-29T12:56:49Z" | python | "2023-08-29T17:07:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,876 | ["BREEZE.rst", "dev/README_RELEASE_PROVIDER_PACKAGES.md", "dev/breeze/src/airflow_breeze/commands/developer_commands.py", "dev/breeze/src/airflow_breeze/params/doc_build_params.py", "docs/README.rst", "docs/build_docs.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_build-docs.svg"] | Replace `--package-filter` usage for docs breeze command with short package names | ### Body
The `--package-filter` while nice in theory to specify which packages to build, has quite bad UX (lots of repetitions when specifying multiple packages, long package names. We practically (except `--package-filter apache-airflow-providers-*` never use the functionality of the filter with glob patterns.
It's much more practical to use "short" package names ("apache.hdfs" rather that `--package-filter apache-airflow-providers-apache-hdfs` and we already use it in a few places in Breeze.
We should likely replace all the places when we use `--package-filter` with those short names, add a special alias for `all-providers` and this should help our users who build documentation and release manager to do their work faster and nicer.
This would also allow to remove the separate ./dev/provider_packages/publish_provider_documentation.sh bash script that is aimed to do somethign similar in a "hacky way".
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/33876 | https://github.com/apache/airflow/pull/34004 | 3d27504a6232cacb12a9e3dc5837513e558bd52b | e4b3c9e54481d2a6e2de75f73130a321e1ba426c | "2023-08-29T10:23:24Z" | python | "2023-09-04T19:23:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,850 | ["airflow/providers/google/cloud/transfers/azure_fileshare_to_gcs.py", "airflow/providers/microsoft/azure/CHANGELOG.rst", "airflow/providers/microsoft/azure/hooks/fileshare.py", "airflow/providers/microsoft/azure/provider.yaml", "docs/apache-airflow-providers-microsoft-azure/connections/azure_fileshare.rst", "generated/provider_dependencies.json", "tests/providers/google/cloud/transfers/test_azure_fileshare_to_gcs.py", "tests/providers/microsoft/azure/hooks/test_azure_fileshare.py", "tests/test_utils/azure_system_helpers.py"] | Upgrade Azure File Share to v12 | ### Description
In November 2019 the Azure File Share python package was "renamed from `azure-storage-file` to `azure-storage-file-share` along with renamed client modules":
https://azure.github.io/azure-sdk/releases/2019-11/python.html
Yet it is 2023 and we still have `azure-storage-file>=2.1.0` as a dependency for `apache-airflow-providers-microsoft-azure`. I am opening this issue to propose removing this over three year old deprecated package.
I am aware of the challenges with earlier attempts to upgrade Azure Storage packages to v12 as discussed in https://github.com/apache/airflow/pull/8184. I hope those challenges are gone by now? Especially since `azure-storage-blob` already has been upgraded to v12 in this provider (https://github.com/apache/airflow/pull/12188).
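For context, here is a hedged sketch of what the client-side change looks like when moving from the old 2.x package to the v12 SDK (placeholder account/share names; this only illustrates the renamed module and client classes, not the provider's actual hook code):
```python
# Old, deprecated 2.x package (azure-storage-file):
# from azure.storage.file import FileService
# service = FileService(account_name="myaccount", account_key="...")

# v12 package (azure-storage-file-share): renamed module and client classes.
from azure.storage.fileshare import ShareServiceClient

service = ShareServiceClient(
    account_url="https://myaccount.file.core.windows.net",
    credential="...",  # account key, SAS token, or a TokenCredential
)
share = service.get_share_client("myshare")
for item in share.list_directories_and_files():
    print(item.name)
```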
Also, I believe this is why `azure-storage-common>=2.1.0` is also still a dependency, which is listed as deprecated on https://azure.github.io/azure-sdk/releases/deprecated/python.html:
- I have not fully investigated, but I believe that once we upgrade to `azure-storage-file-share` v12 this provider will no longer need `azure-storage-common` as a dependency, as it just contains the common code shared by the old 2.x versions of "blob", "file" and "queue". We already upgraded "blob" to v12, and we don't have "queue" support, so "file" is the last remaining.
- Also removing "azure-storage-common" will remove the warning:
```
... site-packages/azure/storage/common/_connection.py:82 SyntaxWarning: "is" with a literal. Did you mean "=="?
```
(a fix was merged to "main" in 2020; however, Microsoft will no longer release new versions of this package)
I _used_ to be an active Azure Storage user (up until last year), but I am now mainly an AWS user, so I would appreciate it if someone else would create a PR for this; if nobody does, I suppose I could look into it.
### Use case/motivation
Mainly to remove deprecated packages, and secondly to remove one SyntaxWarning
### Related issues
This is the related issue to upgrade `azure-storage-blob` to v12: https://github.com/apache/airflow/issues/11968
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33850 | https://github.com/apache/airflow/pull/33904 | caf135f7c40ff07b31a9a026282695ac6202e6aa | b7f84e913b6aa4cee7fa63009082b0608b3a0bf1 | "2023-08-28T20:08:20Z" | python | "2023-09-02T12:15:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,744 | ["airflow/providers/redis/provider.yaml", "generated/provider_dependencies.json"] | Celery Executor is not working with redis-py 5.0.0 | ### Apache Airflow version
2.7.0
### What happened
After upgrading to Airflow 2.7.0 in my local environment my Airflow DAGs won't run with Celery Executor using Redis even after changing `celery_app_name` configuration in `celery` section from `airflow.executors.celery_executor` to `airflow.providers.celery.executors.celery_executor`.
I see the error actually is unrelated to the recent Airflow Celery provider changes, but is related to Celery's Redis support. What is happening is Airflow fails to send jobs to the worker as the Kombu module is not compatible with Redis 5.0.0 (released last week). It gives this error (I will update this to the full traceback once I can reproduce this error one more time):
```
AttributeError: module 'redis' has no attribute 'client'
```
Celery actually is limiting redis-py to 4.x in an upcoming version of Celery 5.3.x (it is merged to main on August 17, 2023 but it is not yet released: https://github.com/celery/celery/pull/8442 . The latest version is v5.3.1 released on June 18, 2023).
Kombu is also going to match Celery and limit redis-py to 4.x in an upcoming version as well (the PR is draft, I am assuming they are waiting for the Celery change to be released: https://github.com/celery/kombu/pull/1776)
For now there is not really a way to fix this, unless we add a redis constraint to avoid 5.x. Or maybe, once the next Celery 5.3.x release limits redis-py to 4.x, we can limit the Celery provider to that version of Celery?
### What you think should happen instead
Airflow should be able to send jobs to workers when using Celery Executor with Redis
### How to reproduce
1. Start Airflow 2.7.0 with Celery Executor with Redis 5.0.0 installed by default (at the time of this writing)
2. Run a DAG task
3. The scheduler fails to send the job to the worker
Workaround:
1. Limit redis-py to 4.x the same way the upcoming release of Celery 5.3.x does, by using this in requirements.txt: `redis>=4.5.2,<5.0.0,!=4.5.5`
2. Start Airflow 2.7.0 with Celery Executor
3. Run a DAG task
4. The task runs successfully
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-celery==3.3.2
### Deployment
Docker-Compose
### Deployment details
I am using `bitnami/airflow:2.7.0` image in Docker Compose when I first encountered this issue, but I will test with Breeze as well shortly and then update this issue.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33744 | https://github.com/apache/airflow/pull/33773 | 42bc8fcb6bab2b02ef2ff62c3015b54a1ad2df62 | 3ba994d8f4c4b5ce3828bebcff28bbfc25170004 | "2023-08-25T20:06:08Z" | python | "2023-08-26T16:02:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,699 | ["airflow/www/static/js/dag/grid/index.tsx"] | Scrolling issues on DAG page | ### Apache Airflow version
2.7.0
### What happened
When on a DAG page, there's an issue with scrolling behavior on the Grid and Gantt tabs:
While my pointer is over the grid, the entire page should scroll once you get to the bottom of the grid, but instead I cannot scroll any further. This means that not only can't I get to the bottom of the page (where the Airflow version, etc., is), but I can't even see the bottom of the grid if there are enough rows.
Details, Graph, and Code tabs scroll fine.
Important to note - this seems to only happen when there are enough DAG runs to require horizontal scrolling to be activated.
### What you think should happen instead
Instead of stopping here:

I should be able to scroll all the way down to here:

### How to reproduce
You should be able to see this behavior with any DAG that has enough DAG runs to cause the horizontal scroll bar to appear. I was able to replicate it with this DAG and triggering it 10 times. I have the vertical divider moved almost all the way to the left.
```
from airflow import DAG
from airflow.operators.empty import EmptyOperator
from datetime import datetime
with DAG(
dag_id='bad_scrolling',
default_args={'start_date': datetime(2023, 8, 23, 14, 0, 0)},
) as dag:
t1 = EmptyOperator(
task_id='fairly_long_name_here'
)
```
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```apache-airflow-providers-amazon==8.5.1
apache-airflow-providers-celery==3.3.2
apache-airflow-providers-cncf-kubernetes==7.4.2
apache-airflow-providers-common-sql==1.7.0
apache-airflow-providers-datadog==3.3.1
apache-airflow-providers-elasticsearch==5.0.0
apache-airflow-providers-ftp==3.5.0
apache-airflow-providers-google==10.6.0
apache-airflow-providers-http==4.5.0
apache-airflow-providers-imap==3.3.0
apache-airflow-providers-microsoft-azure==6.2.4
apache-airflow-providers-postgres==5.6.0
apache-airflow-providers-redis==3.3.1
apache-airflow-providers-salesforce==5.4.1
apache-airflow-providers-slack==7.3.2
apache-airflow-providers-snowflake==4.4.2
apache-airflow-providers-sqlite==3.4.3
```
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33699 | https://github.com/apache/airflow/pull/35717 | 4f060a482c3233504e7905b3ab2d00fe56ea43cd | d37b91c102856e62322450606474aebd74ddf376 | "2023-08-24T16:48:17Z" | python | "2023-11-28T21:37:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,698 | ["airflow/www/views.py"] | UI DAG counts including deleted DAGs | ### Apache Airflow version
2.7.0
### What happened
On the DAGs page, the All, Active, and Paused counts include deleted DAGs. This is different from <= 2.6.1 (at least), where they were not included in the totals. Specifically this is for DAGs for which the DAG files have been removed, not DAGs that have been deleted via the UI.
### What you think should happen instead
Including deleted DAGs in those counts is confusing, and this behavior should revert to the previous behavior.
### How to reproduce
Create a DAG.
Wait for totals to increment.
Remove the DAG file.
The totals will not change.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```apache-airflow-providers-amazon==8.5.1
apache-airflow-providers-celery==3.3.2
apache-airflow-providers-cncf-kubernetes==7.4.2
apache-airflow-providers-common-sql==1.7.0
apache-airflow-providers-datadog==3.3.1
apache-airflow-providers-elasticsearch==5.0.0
apache-airflow-providers-ftp==3.5.0
apache-airflow-providers-google==10.6.0
apache-airflow-providers-http==4.5.0
apache-airflow-providers-imap==3.3.0
apache-airflow-providers-microsoft-azure==6.2.4
apache-airflow-providers-postgres==5.6.0
apache-airflow-providers-redis==3.3.1
apache-airflow-providers-salesforce==5.4.1
apache-airflow-providers-slack==7.3.2
apache-airflow-providers-snowflake==4.4.2
apache-airflow-providers-sqlite==3.4.3
```
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
I suspect the issue is with [DagModel.deactivate_deleted_dags](https://github.com/apache/airflow/blob/f971ba2f2f9703d0e1954e52aaded52a83c2f844/airflow/models/dag.py#L3564), but I'm unable to verify.
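If it helps to confirm that suspicion, here is a small diagnostic sketch (my own, assuming ORM access to the metadata DB) to compare what the `dag` table holds with the totals the UI shows:
```python
# Hedged diagnostic sketch: compare active vs. inactive rows in the dag table
# with the All/Active/Paused totals shown in the UI.
from airflow.models import DagModel
from airflow.utils.session import create_session

with create_session() as session:
    total = session.query(DagModel).count()
    active = session.query(DagModel).filter(DagModel.is_active).count()
    print(f"rows in dag table: {total}, is_active=True: {active}")
```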
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33698 | https://github.com/apache/airflow/pull/33778 | 02af225e7b75552e074d7dfcfc1af5336c42b84d | 64948fa7824d004e65089c2d159c5e6074727826 | "2023-08-24T16:01:16Z" | python | "2023-08-27T17:02:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,693 | ["airflow/www/static/js/dag/details/graph/Node.tsx"] | Long custom operator name overflows in graph view | ### Apache Airflow version
main (development)
### What happened
1. There was support added to configure UI elements in graph view in https://github.com/apache/airflow/issues/31949
2. Long custom operator names overflow out of the box, while long task ids are truncated with an ellipsis. I guess the same could be done for operator names by removing the width attribute that is set to "fit-content" and adding "noOfLines".
Originally wrapped, before the commit:

main :

### What you think should happen instead
_No response_
### How to reproduce
Sample dag to reproduce the issue in UI
```python
from datetime import datetime
from airflow.decorators import dag, task
from airflow.models.baseoperator import BaseOperator
from airflow.operators.bash import BashOperator
class HelloOperator(BashOperator):
custom_operator_name = "SampleLongNameOfOperator123456789"
@dag(dag_id="gh32757", start_date=datetime(2023, 1, 1), catchup=False)
def mydag():
bash = BashOperator(task_id="t1", bash_command="echo hello")
hello = HelloOperator(task_id="SampleLongTaskId1234567891234890", bash_command="echo test")
bash2 = BashOperator(task_id="t3", bash_command="echo bye")
bash >> hello >> bash2
mydag()
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33693 | https://github.com/apache/airflow/pull/35382 | aaed909344b12aa4691a9e23ea9f9c98d641d853 | 4d872b87efac9950f125aff676b30f0a637b471e | "2023-08-24T13:00:06Z" | python | "2023-11-17T20:32:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,679 | ["airflow/providers/snowflake/operators/snowflake.py"] | SnowflakeCheckOperator connection id template issue | ### Apache Airflow version
2.7.0
### What happened
When upgrading to apache-airflow-providers-snowflake==4.4.2, our SnowflakeCheckOperators are all failing with similar messages. The affected code seems to be from [this PR](https://github.com/apache/airflow/pull/30784).
Code:
```
check_order_load = SnowflakeCheckOperator(
task_id="check_row_count",
sql='check_orders_load.sql',
snowflake_conn_id=SF_CONNECTION_ID,
)
```
Errors:
```
[2023-08-23, 20:58:23 UTC] {taskinstance.py:1943} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/airflow/models/abstractoperator.py", line 664, in _do_render_template_fields
value = getattr(parent, attr_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'SnowflakeCheckOperator' object has no attribute 'snowflake_conn_id'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/airflow/models/taskinstance.py", line 1518, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode, session=session)
File "/usr/local/lib/python3.11/site-packages/airflow/models/taskinstance.py", line 1646, in _execute_task_with_callbacks
task_orig = self.render_templates(context=context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airflow/models/taskinstance.py", line 2291, in render_templates
original_task.render_template_fields(context)
File "/usr/local/lib/python3.11/site-packages/airflow/models/baseoperator.py", line 1244, in render_template_fields
self._do_render_template_fields(self, self.template_fields, context, jinja_env, set())
File "/usr/local/lib/python3.11/site-packages/airflow/utils/session.py", line 77, in wrapper
return func(*args, session=session, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/airflow/models/abstractoperator.py", line 666, in _do_render_template_fields
raise AttributeError(
AttributeError: 'snowflake_conn_id' is configured as a template field but SnowflakeCheckOperator does not have this attribute.```
```
### What you think should happen instead
This works fine in apache-airflow-providers-snowflake==4.4.1 - no errors.
### How to reproduce
With `apache-airflow-providers-snowflake==4.4.2`
Try running this code:
```
from airflow.providers.snowflake.operators.snowflake import SnowflakeCheckOperator
check_task = SnowflakeCheckOperator(
task_id='check_gmv_yoy',
sql='select 1',
snowflake_conn_id='NAME_OF_CONNECTION_ID',
)
```
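One possible stop-gap until a fixed provider release, assuming the problem is only that `snowflake_conn_id` is listed in `template_fields` without ever being set as an attribute (this subclass is my own sketch, not an official fix):
```python
from airflow.providers.snowflake.operators.snowflake import SnowflakeCheckOperator


class PatchedSnowflakeCheckOperator(SnowflakeCheckOperator):
    # Drop the field that 4.4.2 declares as templated but never assigns,
    # so template rendering no longer raises AttributeError.
    template_fields = tuple(
        f for f in SnowflakeCheckOperator.template_fields if f != "snowflake_conn_id"
    )
```
Pinning the provider back to 4.4.1 is of course the simpler alternative.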
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-snowflake==4.4.2
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
This happens every time with 4.4.2, never with <= 4.4.1.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33679 | https://github.com/apache/airflow/pull/33681 | 2dbb9633240777d658031d32217255849150684b | d06c14f52757321f2049bb54212421f68bf3ed06 | "2023-08-23T22:02:11Z" | python | "2023-08-24T07:22:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,667 | ["airflow/providers/google/cloud/operators/dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py"] | Google Cloud Dataproc cluster creation should eagerly delete ERROR state clusters. | ### Description
Google Cloud Dataproc cluster creation should eagerly delete ERROR state clusters.
It is possible for Google Cloud Dataproc clusters to be created in the ERROR state. The current operator (DataprocCreateClusterOperator) will require three total task attempts (the original plus two retries) in order to create the cluster, assuming the underlying GCE infrastructure resolves itself between task attempts. This can be reduced to two total attempts by eagerly deleting a cluster in the ERROR state before failing the current task attempt.
Clusters in the ERROR state are not usable for submitting Dataproc based jobs via the Dataproc API.
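To illustrate the proposed behaviour, a hedged sketch of what an eager clean-up could look like using the provider's hook (the project/region/cluster names and the exact status check are illustrative assumptions, not the operator's actual implementation):
```python
# Rough sketch only: detect an ERROR-state cluster right after creation and delete it,
# so the next retry does not inherit a broken, billable cluster.
from airflow.providers.google.cloud.hooks.dataproc import DataprocHook

hook = DataprocHook(gcp_conn_id="google_cloud_default")
cluster = hook.get_cluster(project_id="my-project", region="us-central1", cluster_name="my-cluster")

if cluster.status.state.name == "ERROR":
    hook.delete_cluster(project_id="my-project", region="us-central1", cluster_name="my-cluster")
    raise RuntimeError("Cluster came up in ERROR state; deleted it so the retry starts clean")
```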
### Use case/motivation
Reducing the number of task attempts can reduce GCP based cost as delays between retry attempts could be minutes. There's no reason to keep a running, costly cluster in the ERROR state if it can be detected in the initial create task.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33667 | https://github.com/apache/airflow/pull/33668 | 075afe5a2add74d9e4e9fd57768b8354489cdb2b | d361761deeffe628f3c17ab0debd0e11515c22da | "2023-08-23T18:21:03Z" | python | "2023-08-30T05:29:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,661 | ["airflow/jobs/scheduler_job_runner.py", "airflow/utils/state.py", "tests/jobs/test_scheduler_job.py"] | Zombie tasks in RESTARTING state are not cleaned | ### Apache Airflow version
2.7.0
Also reproduced on 2.5.0
### What happened
Recently we added some automation for restarting Airflow tasks with the "clear" command, so we use this feature a lot. We often clear tasks in the RUNNING state, which means that they go into the RESTARTING state. We noticed that a lot of those tasks get stuck in the RESTARTING state. Our Airflow infrastructure runs in an environment where any process can get suddenly killed without a graceful shutdown.
We run Airflow on GKE but I managed to reproduce this behaviour on local environment with SequentialExecutor. See **"How to reproduce"** below for details.
### What you think should happen instead
Tasks should get cleaned after scheduler restart and eventually get scheduled and executed.
### How to reproduce
After some code investigation, I reproduced this kind of behaviour on a local environment, and it seems that RESTARTING tasks are only properly handled if the original restarting task is gracefully shut down so it can mark the task as UP_FOR_RETRY, or at least if there is a healthy scheduler to do it when the task fails for any other reason. The problem is with the following scenario:
1. Task is initially in RUNNING state.
2. Scheduler process dies suddenly.
3. The task process also dies suddenly.
4. "clear" command is executed on the task so the state is changed to RESTARTING state by webserver process.
5. From now on, even if we restart scheduler, the task will never get scheduled or change its state. It needs to have its state manually fixed, e.g. by clearing it again.
A recording of steps to reproduce on local environment:
https://vimeo.com/857192666?share=copy
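As a side note, here is a small hedged sketch I use to spot task instances stuck in RESTARTING (purely diagnostic, not part of any fix):
```python
# Hedged diagnostic sketch: list task instances stuck in RESTARTING so they can be
# cleared or reset manually. Run it where the Airflow metadata DB is reachable.
from airflow.models import TaskInstance
from airflow.utils.session import create_session
from airflow.utils.state import TaskInstanceState

with create_session() as session:
    stuck = session.query(TaskInstance).filter(
        TaskInstance.state == TaskInstanceState.RESTARTING
    ).all()
    for ti in stuck:
        print(ti.dag_id, ti.task_id, ti.run_id, ti.state)
```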
### Operating System
MacOS Ventura 13.4.1
### Versions of Apache Airflow Providers
N/A
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
N/A
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33661 | https://github.com/apache/airflow/pull/33706 | 3f984edd0009ad4e3177a3c95351c563a6ac00da | 5c35786ca29aa53ec08232502fc8a16fb1ef847a | "2023-08-23T15:41:09Z" | python | "2023-08-24T23:55:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,577 | ["airflow/providers/celery/provider.yaml", "dev/breeze/src/airflow_breeze/utils/path_utils.py", "generated/provider_dependencies.json", "setup.cfg", "setup.py"] | Airflow 2.7 is incompatible with SodaCore versions 3.0.24 and beyond | ### Apache Airflow version
2.7.0
### What happened
When trying to install SodaCore on Airflow 2.7, the following error is received due to a conflict with `opentelemetry-api`.
```
ERROR: Cannot install apache-airflow==2.7.0 and soda-core==3.0.48 because these package versions have conflicting dependencies.
The conflict is caused by:
apache-airflow 2.7.0 depends on opentelemetry-api==1.15.0
soda-core 3.0.48 depends on opentelemetry-api~=1.16.0
```
SodaCore has depended on `opentelemetry-api~=1.16.0` ever since v[3.0.24](https://github.com/sodadata/soda-core/releases/tag/v3.0.24).
### What you think should happen instead
Airflow needs to support versions of `opentelemetry-api` 1.16.x.
### How to reproduce
Simply running the following commands to install the two packages should reproduce the error.
```
$ python3 -m venv /tmp/soda
$ /tmp/soda/bin/pip install apache-airflow==2.7.0 soda-core-bigquery==3.0.48
```
### Operating System
n/a
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33577 | https://github.com/apache/airflow/pull/33579 | 73a37333918abe0612120d95169b9e377274810b | ae25a52ae342c9e0bc3afdb21d613447c3687f6c | "2023-08-21T12:10:44Z" | python | "2023-08-21T15:49:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,497 | ["airflow/www/jest-setup.js", "airflow/www/static/js/cluster-activity/live-metrics/Health.tsx", "airflow/www/static/js/index.d.ts", "airflow/www/templates/airflow/cluster_activity.html", "airflow/www/views.py"] | DAG Processor should not be visible in the Cluster Activity Page if there is no stand alone processor | ### Apache Airflow version
2.7.0rc2
### What happened
In the Airflow UI, currently, the DAG Processor is visible in the Cluster Activity page even if there is no stand-alone dag processor.
### What you think should happen instead
It should be hidden if there is no stand-alone dag processor.
### How to reproduce
Run Airflow 2.7
### Operating System
Mac OS
### Versions of Apache Airflow Providers
Airflow > 2.7
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33497 | https://github.com/apache/airflow/pull/33611 | b6318ffabce8cc3fdb02c30842726476b7e1fcca | c055e1da0b50e98820ffff8f8d10d0882f753384 | "2023-08-18T15:59:33Z" | python | "2023-09-02T13:56:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,482 | ["airflow/api_connexion/endpoints/dag_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/dag_schema.py", "airflow/models/dag.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_dag_endpoint.py", "tests/api_connexion/schemas/test_dag_schema.py"] | The `/dags/{dag_id}/details` endpoint returns less data than is documented | ### What do you see as an issue?
The [/dags/{dag_id}/details](https://airflow.apache.org/docs/apache-airflow/stable/stable-rest-api-ref.html#operation/post_set_task_instances_state) endpoint of the REST API does not return all of the keys that are listed in the documentation. If I run `curl -X GET localhost:8080/api/v1/dags/{my_dag}/details`, then compare the results with the results in the documentation, you can see the following missing keys:
```python
>>> for key in docs.keys():
... if not key in actual.keys():
... print(key)
...
root_dag_id
last_parsed_time
last_pickled
last_expired
scheduler_lock
timetable_description
has_task_concurrency_limits
has_import_errors
next_dagrun
next_dagrun_data_interval_start
next_dagrun_data_interval_end
next_dagrun_create_after
template_search_path
```
### Solving the problem
Either remove these keys from the documentation or fix the API endpoint
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33482 | https://github.com/apache/airflow/pull/34947 | 0e157b38a3e44b5a6fc084c581a025434a97a4c0 | e8f62e8ee56519459d8282dadb1d8c198ea5b9f5 | "2023-08-17T19:15:44Z" | python | "2023-11-24T09:47:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,478 | ["airflow/www/views.py"] | Rendered template malfunction when `execution_date` parameter is malformed | null | https://github.com/apache/airflow/issues/33478 | https://github.com/apache/airflow/pull/33516 | 533afb5128383958889bc653226f46947c642351 | d9814eb3a2fc1dbbb885a0a2c1b7a23ce1cfa148 | "2023-08-17T16:46:41Z" | python | "2023-08-19T16:03:39Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,461 | ["airflow/providers/amazon/aws/waiters/appflow.json"] | AppflowHook with wait_for_completion = True does not finish executing the task although the appflow flow does. | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
I'm using airflow 2.6.2 with apache-airflow-providers-amazon 8.5.1
When I use AppflowHook with the wait_for_completion parameter set to True the task execution never finishes.
I have checked in Appflow and the flow executes correctly and finishes in a couple of seconds, however, AppflowHook does not finish responding.
If I change wait_for_completion to False, everything works correctly.
The logs show a "403 FORBIDDEN" error and marking the task as success or failed fixes the logs.
**Logs during task execution:**
```console
470b2412b735
*** Found local files:
*** * /opt/airflow/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe/attempt=1.log ***
!!!! Please make sure that all your Airflow components (e.g. schedulers, webservers, workers and triggerer) have the same 'secret_key' configured in 'webserver' section and time is synchronized on all your machines (for example with ntpd) See more at https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#secret-key
*** Could not read served logs: Client error '403 FORBIDDEN' for url 'http://470b2412b735:8793/log/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe/attempt=1.log' For more information check: https://httpstatuses.com/403
[2023-08-16, 19:04:44 CST] {logging_mixin.py:149} INFO - Changing /opt/***/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe permission to 509
[2023-08-16, 19:04:44 CST] {logging_mixin.py:149} INFO - Changing /opt/***/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe permission to 509
[2023-08-16, 19:04:44 CST] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=non-requeueable deps ti=<TaskInstance: stripe_ingest_flow.extract_from_stripe manual__2023-08-17T01:04:41.723261+00:00
[queued]>
[2023-08-16, 19:04:44 CST] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=requeueable deps ti=<TaskInstance: stripe_ingest_flow.extract_from_stripe manual__2023-08-17T01:04:41.723261+00:00
[queued]>
[2023-08-16, 19:04:44 CST] {taskinstance.py:1308} INFO - Starting attempt 1 of 1
[2023-08-16, 19:04:44 CST] {taskinstance.py:1327} INFO - Executing <Task(_PythonDecoratedOperator): extract_from_stripe> on 2023-08-17 01:04:41.723261+00:00
[2023-08-16, 19:04:44 CST] {standard_task_runner.py:57} INFO - Started process 796 to run task
[2023-08-16, 19:04:44 CST] {standard_task_runner.py:84} INFO - Running:
['***', 'tasks', 'run', 'stripe_ingest_flow', 'extract_from_stripe', 'manual__2023-08-17T01:04:41.723261+00:00', '--job-id', '903', '--raw', '--subdir', 'DAGS_FOLDER/stripe_ingest_flow_to_lakehouse/dag.py', '--cfg-path', '/tmp/tmpqz8uvben']
[2023-08-16, 19:04:44 CST] {standard_task_runner.py:85} INFO - Job 903: Subtask extract_from_stripe
[2023-08-16, 19:04:44 CST] {logging_mixin.py:149} INFO - Changing /opt/***/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe permission to 509
[2023-08-16, 19:04:44 CST] {task_command.py:410} INFO - Running <TaskInstance: stripe_ingest_flow.extract_from_stripe manual__2023-08-17T01:04:41.723261+00:00
[running]> on host 470b2412b735
[2023-08-16, 19:04:44 CST] {taskinstance.py:1545} INFO - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='dhernandez' AIRFLOW_CTX_DAG_ID='stripe_ingest_flow' AIRFLOW_CTX_TASK_ID='extract_from_stripe' AIRFLOW_CTX_EXECUTION_DATE='2023-08-17T01:04:41.723261+00:00' AIRFLOW_CTX_TRY_NUMBER='1' AIRFLOW_CTX_DAG_RUN_ID='manual__2023-08-17T01:04:41.723261+00:00'
[2023-08-16, 19:04:44 CST] {crypto.py:83} WARNING - empty cryptography key - values will not be stored encrypted.
[2023-08-16, 19:04:44 CST] {base.py:73} INFO - Using connection ID 'siclo_***_lakehouse_conn' for task execution.
[2023-08-16, 19:04:44 CST] {connection_wrapper.py:340} INFO - AWS Connection (conn_id='siclo_***_lakehouse_conn', conn_type='aws') credentials retrieved from login and password.
[2023-08-16, 19:04:45 CST] {appflow.py:63} INFO - executionId: 58ad6275-0a70-48d9-8414-f0215924c876
```
**Logs when marking the task as success or failed**
```console
470b2412b735
*** Found local files:
*** * /opt/airflow/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe/attempt=1.log
[2023-08-16, 19:04:44 CST] {logging_mixin.py:149} INFO - Changing /opt/***/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe permission to 509
[2023-08-16, 19:04:44 CST] {logging_mixin.py:149} INFO - Changing /opt/***/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe permission to 509
[2023-08-16, 19:04:44 CST] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=non-requeueable deps ti=<TaskInstance: stripe_ingest_flow.extract_from_stripe manual__2023-08-17T01:04:41.723261+00:00 [queued]>
[2023-08-16, 19:04:44 CST] {taskinstance.py:1103} INFO - Dependencies all met for dep_context=requeueable deps ti=<TaskInstance: stripe_ingest_flow.extract_from_stripe manual__2023-08-17T01:04:41.723261+00:00 [queued]>
[2023-08-16, 19:04:44 CST] {taskinstance.py:1308} INFO - Starting attempt 1 of 1
[2023-08-16, 19:04:44 CST] {taskinstance.py:1327} INFO - Executing <Task(_PythonDecoratedOperator): extract_from_stripe> on 2023-08-17 01:04:41.723261+00:00
[2023-08-16, 19:04:44 CST] {standard_task_runner.py:57} INFO - Started process 796 to run task
[2023-08-16, 19:04:44 CST] {standard_task_runner.py:84} INFO - Running: ['***', 'tasks', 'run', 'stripe_ingest_flow', 'extract_from_stripe', 'manual__2023-08-17T01:04:41.723261+00:00', '--job-id', '903', '--raw', '--subdir', 'DAGS_FOLDER/stripe_ingest_flow_to_lakehouse/dag.py', '--cfg-path', '/tmp/tmpqz8uvben']
[2023-08-16, 19:04:44 CST] {standard_task_runner.py:85} INFO - Job 903: Subtask extract_from_stripe
[2023-08-16, 19:04:44 CST] {logging_mixin.py:149} INFO - Changing /opt/***/logs/dag_id=stripe_ingest_flow/run_id=manual__2023-08-17T01:04:41.723261+00:00/task_id=extract_from_stripe permission to 509
[2023-08-16, 19:04:44 CST] {task_command.py:410} INFO - Running <TaskInstance: stripe_ingest_flow.extract_from_stripe manual__2023-08-17T01:04:41.723261+00:00 [running]> on host 470b2412b735
[2023-08-16, 19:04:44 CST] {taskinstance.py:1545} INFO - Exporting env vars: AIRFLOW_CTX_DAG_OWNER='dhernandez' AIRFLOW_CTX_DAG_ID='stripe_ingest_flow' AIRFLOW_CTX_TASK_ID='extract_from_stripe' AIRFLOW_CTX_EXECUTION_DATE='2023-08-17T01:04:41.723261+00:00' AIRFLOW_CTX_TRY_NUMBER='1' AIRFLOW_CTX_DAG_RUN_ID='manual__2023-08-17T01:04:41.723261+00:00'
[2023-08-16, 19:04:44 CST] {crypto.py:83} WARNING - empty cryptography key - values will not be stored encrypted.
[2023-08-16, 19:04:44 CST] {base.py:73} INFO - Using connection ID 'siclo_***_lakehouse_conn' for task execution.
[2023-08-16, 19:04:44 CST] {connection_wrapper.py:340} INFO - AWS Connection (conn_id='siclo_***_lakehouse_conn', conn_type='aws') credentials retrieved from login and password.
[2023-08-16, 19:04:45 CST] {appflow.py:63} INFO - executionId: 58ad6275-0a70-48d9-8414-f0215924c876
[2023-08-16, 19:05:24 CST] {local_task_job_runner.py:291} WARNING - State of this instance has been externally set to failed. Terminating instance.
[2023-08-16, 19:05:24 CST] {process_utils.py:131} INFO - Sending 15 to group 796. PIDs of all processes in the group: [796]
[2023-08-16, 19:05:24 CST] {process_utils.py:86} INFO - Sending the signal 15 to group 796
[2023-08-16, 19:05:24 CST] {taskinstance.py:1517} ERROR - Received SIGTERM. Terminating subprocesses.
```
### What you think should happen instead
With wait_for_completion set to True, the task should finish successfully and retrieve the execution id from Appflow.
### How to reproduce
With a dag that has the following task
```python
@task
def extract():
appflow = AppflowHook(
aws_conn_id='conn_id'
)
execution_id = appflow.run_flow(
flow_name='flow_name',
wait_for_completion=True
# with wait_for_completion=False if it works
)
return execution_id
```
The aws connection has the following permissions
- "appflow:DescribeFlow",
- "appflow:StartFlow",
- "appflow:RunFlow",
- "appflow:ListFlows",
- "appflow:DescribeFlowExecutionRecords"
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```
apache-airflow==2.6.2
apache-airflow-providers-amazon==8.5.1
apache-airflow-providers-common-sql==1.5.1
apache-airflow-providers-http==4.4.1
boto3==1.26.76
asgiref==3.7.2
watchtower==2.0.1
jsonpath-ng==1.5.3
redshift-connector==2.0.911
sqlalchemy-redshift==0.8.14
mypy-boto3-appflow==1.28.16
mypy-boto3-rds==1.26.144
mypy-boto3-redshift-data==1.26.109
mypy-boto3-s3==1.26.153
celery==5.3.0
```
### Deployment
Docker-Compose
### Deployment details
Docker 4.10.1 (82475)
Airflow image apache/airflow:2.6.2-python3.11
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33461 | https://github.com/apache/airflow/pull/33613 | 2363fb562db1abaa5bc3bc93b67c96e018c1d78a | 41d9be072abacc47393f700aa8fb98bc2b9a3713 | "2023-08-17T01:47:56Z" | python | "2023-08-22T15:31:02Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,446 | ["airflow/utils/task_group.py", "tests/decorators/test_task_group.py"] | Task group gets marked as upstream_failed when dynamically mapped with expand_kwargs even though all upstream tasks were skipped or successfully finished. | ### Apache Airflow version
2.6.3
### What happened
I am writing a DAG that transfers data from MSSQL to BigQuery. The part of the ETL process that actually fetches the data from MSSQL and moves it to BQ needs to be parallelized.
I am trying to write it as a task group where the first task moves data from MSSQL to GCS, and the second task loads the file into BQ.
For some odd reason, when I expand the task group it is automatically marked as upstream_failed, at the very first moment the DAG is triggered.
I have tested this with a simple dag (provided below) as well and the bug was reproduced.
I found a similar issue [here](https://github.com/apache/airflow/issues/27449) but the bug seems to persist even after configuring `AIRFLOW__SCHEDULER__SCHEDULE_AFTER_TASK_EXECUTION=False`
### What you think should happen instead
The task group should be dynamically expanded **after all upstream tasks have finished** since `expand_kwargs` needs the previous task's output.
### How to reproduce
```python
from datetime import timedelta
from airflow.decorators import dag, task, task_group
from airflow.operators.bash import BashOperator
from pendulum import datetime
@dag(
dag_id="example_task_group_expansion",
schedule="@once",
default_args={
"depends_on_past": False,
"email": ["airflow@example.com"],
"email_on_failure": True,
"email_on_retry": True,
"retries": 0,
"retry_delay": timedelta(minutes=5),
},
start_date=datetime(2023, 8, 1),
catchup=False,
)
def example_dag():
@task(task_id="TaskDistributer")
def task_distributer():
step = 10_000
return [dict(interval_start=i, interval_end=i + step) for i in range(0, 100_000, step)]
@task_group(group_id="tg1")
def tg(interval_start, interval_end):
task1 = BashOperator(
task_id="task1",
bash_command="echo $interval_start -- $interval_end",
env={"interval_start": str(interval_start), "interval_end": str(interval_end)},
)
task2 = BashOperator(
task_id="task2",
bash_command="echo $interval_start -- $interval_end",
env={"interval_start": str(interval_start), "interval_end": str(interval_end)},
)
task1 >> task2
return task2
tg.expand_kwargs(task_distributer())
example_dag()
```
### Operating System
MacOS 13.4.1
### Versions of Apache Airflow Providers
No providers needed to reproduce
### Deployment
Docker-Compose
### Deployment details
Docker-compose
Airflow image: apache/airflow:2.6.3-python3.9
Executor: Celery
Messaging queue: redis
Metadata DB: MySQL 5.7
### Anything else
The problem occurs every time.
Here are some of the scheduler logs that may be relevant.
```
docker logs 3d4e47791238 | grep example_task_group_expansion
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
/usr/local/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py:189 DeprecationWarning: The '[celery] stalled_task_timeout' config option is deprecated. Please update your config to use '[scheduler] task_queued_timeout' instead.
[2023-08-16 14:09:33 +0000] [15] [INFO] Starting gunicorn 20.1.0
[2023-08-16 14:09:33 +0000] [15] [INFO] Listening at: http://[::]:8793 (15)
[2023-08-16 14:09:33 +0000] [15] [INFO] Using worker: sync
[2023-08-16 14:09:33 +0000] [16] [INFO] Booting worker with pid: 16
[2023-08-16 14:09:33 +0000] [17] [INFO] Booting worker with pid: 17
[2023-08-16T14:10:04.870+0000] {dag.py:3504} INFO - Setting next_dagrun for example_task_group_expansion to None, run_after=None
[2023-08-16T14:10:04.881+0000] {scheduler_job_runner.py:1449} DEBUG - DAG example_task_group_expansion not changed structure, skipping dagrun.verify_integrity
[2023-08-16T14:10:04.883+0000] {dagrun.py:711} DEBUG - number of tis tasks for <DagRun example_task_group_expansion @ 2023-08-01 00:00:00+00:00: scheduled__2023-08-01T00:00:00+00:00, state:running, queued_at: 2023-08-16 14:10:04.858967+00:00. externally triggered: False>: 3 task(s)
[2023-08-16T14:10:04.883+0000] {dagrun.py:732} DEBUG - number of scheduleable tasks for <DagRun example_task_group_expansion @ 2023-08-01 00:00:00+00:00: scheduled__2023-08-01T00:00:00+00:00, state:running, queued_at: 2023-08-16 14:10:04.858967+00:00. externally triggered: False>: 3 task(s)
[2023-08-16T14:10:04.883+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1103} DEBUG - Dependencies all met for dep_context=None ti=<TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [None]>
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task1 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task1 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task1 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Trigger Rule' PASSED: True, The task instance did not have any upstream tasks.
[2023-08-16T14:10:04.884+0000] {taskinstance.py:1103} DEBUG - Dependencies all met for dep_context=None ti=<TaskInstance: example_task_group_expansion.tg1.task1 scheduled__2023-08-01T00:00:00+00:00 [None]>
[2023-08-16T14:10:04.895+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2023-08-16T14:10:04.895+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2023-08-16T14:10:04.897+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Trigger Rule' PASSED: False, Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_states=_UpstreamTIStates(success=0, skipped=0, failed=0, upstream_failed=0, removed=0, done=0), upstream_task_ids={'tg1.task1'}
[2023-08-16T14:10:04.897+0000] {taskinstance.py:1093} DEBUG - Dependencies not met for <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_states=_UpstreamTIStates(success=0, skipped=0, failed=0, upstream_failed=0, removed=0, done=0), upstream_task_ids={'tg1.task1'}
[2023-08-16T14:10:04.902+0000] {scheduler_job_runner.py:1476} DEBUG - Skipping SLA check for <DAG: example_task_group_expansion> because no tasks in DAG have SLAs
<TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [scheduled]>
[2023-08-16T14:10:04.910+0000] {scheduler_job_runner.py:476} INFO - DAG example_task_group_expansion has 0/16 running and queued tasks
<TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [scheduled]>
[2023-08-16T14:10:04.911+0000] {scheduler_job_runner.py:625} INFO - Sending TaskInstanceKey(dag_id='example_task_group_expansion', task_id='TaskDistributer', run_id='scheduled__2023-08-01T00:00:00+00:00', try_number=1, map_index=-1) to executor with priority 1 and queue default
[2023-08-16T14:10:04.911+0000] {base_executor.py:147} INFO - Adding to queue: ['airflow', 'tasks', 'run', 'example_task_group_expansion', 'TaskDistributer', 'scheduled__2023-08-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/example.py']
[2023-08-16T14:10:04.915+0000] {local_executor.py:86} INFO - QueuedLocalWorker running ['airflow', 'tasks', 'run', 'example_task_group_expansion', 'TaskDistributer', 'scheduled__2023-08-01T00:00:00+00:00', '--local', '--subdir', 'DAGS_FOLDER/example.py']
[2023-08-16T14:10:04.948+0000] {scheduler_job_runner.py:1449} DEBUG - DAG example_task_group_expansion not changed structure, skipping dagrun.verify_integrity
[2023-08-16T14:10:04.954+0000] {dagrun.py:711} DEBUG - number of tis tasks for <DagRun example_task_group_expansion @ 2023-08-01 00:00:00+00:00: scheduled__2023-08-01T00:00:00+00:00, state:running, queued_at: 2023-08-16 14:10:04.858967+00:00. externally triggered: False>: 3 task(s)
[2023-08-16T14:10:04.954+0000] {dagrun.py:732} DEBUG - number of scheduleable tasks for <DagRun example_task_group_expansion @ 2023-08-01 00:00:00+00:00: scheduled__2023-08-01T00:00:00+00:00, state:running, queued_at: 2023-08-16 14:10:04.858967+00:00. externally triggered: False>: 1 task(s)
[2023-08-16T14:10:04.954+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Not In Retry Period' PASSED: True, The task instance was not marked for retrying.
[2023-08-16T14:10:04.954+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]> dependency 'Previous Dagrun State' PASSED: True, The task did not have depends_on_past set.
[2023-08-16T14:10:04.958+0000] {taskinstance.py:899} DEBUG - Setting task state for <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [None]> to upstream_failed
[2023-08-16T14:10:04.958+0000] {taskinstance.py:1112} DEBUG - <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [upstream_failed]> dependency 'Trigger Rule' PASSED: False, Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_states=_UpstreamTIStates(success=0, skipped=0, failed=0, upstream_failed=1, removed=0, done=1), upstream_task_ids={'tg1.task1'}
[2023-08-16T14:10:04.958+0000] {taskinstance.py:1093} DEBUG - Dependencies not met for <TaskInstance: example_task_group_expansion.tg1.task2 scheduled__2023-08-01T00:00:00+00:00 [upstream_failed]>, dependency 'Trigger Rule' FAILED: Task's trigger rule 'all_success' requires all upstream tasks to have succeeded, but found 1 non-success(es). upstream_states=_UpstreamTIStates(success=0, skipped=0, failed=0, upstream_failed=1, removed=0, done=1), upstream_task_ids={'tg1.task1'}
[2023-08-16T14:10:04.963+0000] {scheduler_job_runner.py:1476} DEBUG - Skipping SLA check for <DAG: example_task_group_expansion> because no tasks in DAG have SLAs
[2023-08-16T14:10:05.236+0000] {dagbag.py:506} DEBUG - Loaded DAG <DAG: example_task_group_expansion>
Changing /usr/local/airflow/logs/dag_id=example_task_group_expansion/run_id=scheduled__2023-08-01T00:00:00+00:00/task_id=TaskDistributer permission to 509
[2023-08-16T14:10:05.265+0000] {task_command.py:410} INFO - Running <TaskInstance: example_task_group_expansion.TaskDistributer scheduled__2023-08-01T00:00:00+00:00 [queued]> on host 3d4e47791238
[2023-08-16T14:10:05.453+0000] {listener.py:32} INFO - TaskInstance Details: dag_id=example_task_group_expansion, task_id=TaskDistributer, dagrun_id=scheduled__2023-08-01T00:00:00+00:00, map_index=-1, run_start_date=2023-08-16 14:10:05.346669+00:00, try_number=1, job_id=302, op_classpath=airflow.decorators.python._PythonDecoratedOperator, airflow.decorators.base.DecoratedOperator, airflow.operators.python.PythonOperator
[2023-08-16T14:10:06.001+0000] {scheduler_job_runner.py:1449} DEBUG - DAG example_task_group_expansion not changed structure, skipping dagrun.verify_integrity
[2023-08-16T14:10:06.002+0000] {dagrun.py:711} DEBUG - number of tis tasks for <DagRun example_task_group_expansion @ 2023-08-01 00:00:00+00:00: scheduled__2023-08-01T00:00:00+00:00, state:running, queued_at: 2023-08-16 14:10:04.858967+00:00. externally triggered: False>: 3 task(s)
[2023-08-16T14:10:06.002+0000] {dagrun.py:609} ERROR - Marking run <DagRun example_task_group_expansion @ 2023-08-01 00:00:00+00:00: scheduled__2023-08-01T00:00:00+00:00, state:running, queued_at: 2023-08-16 14:10:04.858967+00:00. externally triggered: False> failed
[2023-08-16T14:10:06.002+0000] {dagrun.py:681} INFO - DagRun Finished: dag_id=example_task_group_expansion, execution_date=2023-08-01 00:00:00+00:00, run_id=scheduled__2023-08-01T00:00:00+00:00, run_start_date=2023-08-16 14:10:04.875813+00:00, run_end_date=2023-08-16 14:10:06.002810+00:00, run_duration=1.126997, state=failed, external_trigger=False, run_type=scheduled, data_interval_start=2023-08-01 00:00:00+00:00, data_interval_end=2023-08-01 00:00:00+00:00, dag_hash=a89f91f4d5dab071c49b1d98a4bd5c13
[2023-08-16T14:10:06.004+0000] {dag.py:3504} INFO - Setting next_dagrun for example_task_group_expansion to None, run_after=None
[2023-08-16T14:10:06.005+0000] {scheduler_job_runner.py:1476} DEBUG - Skipping SLA check for <DAG: example_task_group_expansion> because no tasks in DAG have SLAs
[2023-08-16T14:10:06.010+0000] {base_executor.py:299} DEBUG - Changing state: TaskInstanceKey(dag_id='example_task_group_expansion', task_id='TaskDistributer', run_id='scheduled__2023-08-01T00:00:00+00:00', try_number=1, map_index=-1)
[2023-08-16T14:10:06.011+0000] {scheduler_job_runner.py:677} INFO - Received executor event with state success for task instance TaskInstanceKey(dag_id='example_task_group_expansion', task_id='TaskDistributer', run_id='scheduled__2023-08-01T00:00:00+00:00', try_number=1, map_index=-1)
[2023-08-16T14:10:06.012+0000] {scheduler_job_runner.py:713} INFO - TaskInstance Finished: dag_id=example_task_group_expansion, task_id=TaskDistributer, run_id=scheduled__2023-08-01T00:00:00+00:00, map_index=-1, run_start_date=2023-08-16 14:10:05.346669+00:00, run_end_date=2023-08-16 14:10:05.518275+00:00, run_duration=0.171606, state=success, executor_state=success, try_number=1, max_tries=0, job_id=302, pool=default_pool, queue=default, priority_weight=1, operator=_PythonDecoratedOperator, queued_dttm=2023-08-16 14:10:04.910449+00:00, queued_by_job_id=289, pid=232
```
As can be seen from the logs, no upstream tasks are in `done` state yet the expanded task is set as `upstream_failed`.
[slack discussion](https://apache-airflow.slack.com/archives/CCQ7EGB1P/p1692107385230939)
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33446 | https://github.com/apache/airflow/pull/33732 | 869f84e9c398dba453456e89357876ed8a11c547 | fe27031382e2034b59a23db1c6b9bdbfef259137 | "2023-08-16T15:21:54Z" | python | "2023-08-29T16:48:43Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,377 | ["docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst"] | Statsd metrics description is incorrect | ### What do you see as an issue?
<img width="963" alt="Screenshot 2023-08-14 at 9 55 11 AM" src="https://github.com/apache/airflow/assets/10162465/bb493eb2-1cfd-45bb-928a-a4e21e015251">
Here the dagrun duration success and failure metrics have descriptions where success is said to be stored in seconds while failure is said to be stored in milliseconds.
But when checking the code where these two metrics are recorded, it is the same duration value that gets recorded for both; only which metric it goes to depends on the DAG run state.
<img width="963" alt="Screenshot 2023-08-14 at 10 00 28 AM" src="https://github.com/apache/airflow/assets/10162465/53d5aaa8-4c57-4357-b9c2-8d64164fad7c">
### Solving the problem
It looks like the documentation description for these statsd metrics is misleading.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33377 | https://github.com/apache/airflow/pull/34532 | 08729eddbd7414b932a654763bf62c6221a0e397 | 117e40490865f04aed38a18724fc88a8cf94aacc | "2023-08-14T04:31:46Z" | python | "2023-09-21T18:53:34Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,375 | ["airflow/models/taskinstance.py", "airflow/operators/python.py", "airflow/utils/context.py", "airflow/utils/context.pyi", "tests/operators/test_python.py"] | Ability to retrieve prev_end_date_success | ### Discussed in https://github.com/apache/airflow/discussions/33345
<div type='discussions-op-text'>
<sup>Originally posted by **vuphamcs** August 11, 2023</sup>
### Description
Is there a variable similar to `prev_start_date_success` but for the previous DAG run’s completion date? The value I’m hoping to retrieve to use within the next DAG run is `2023-08-10 16:04:30`

### Use case/motivation
One particular use case is to help guarantee that the next DAG run only queries data that was inserted during the existing DAG run, and not the past DAG run.
```python
prev_ts = context['prev_end_date_success']
sql = f"SELECT * FROM results WHERE created_at > {context['prev_end_date_success']}"
```
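Until something like `prev_end_date_success` exists, one workaround is to look the value up from the metadata DB; a rough sketch (the helper name is mine, and this assumes ORM access from within a task):
```python
# Hedged workaround sketch: fetch the end_date of the last successful run of this DAG.
from airflow.models import DagRun
from airflow.utils.session import provide_session
from airflow.utils.state import DagRunState


@provide_session
def get_prev_end_date_success(dag_id, session=None):
    last_success = (
        session.query(DagRun)
        .filter(DagRun.dag_id == dag_id, DagRun.state == DagRunState.SUCCESS)
        .order_by(DagRun.end_date.desc())
        .first()
    )
    return last_success.end_date if last_success else None
```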
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</div> | https://github.com/apache/airflow/issues/33375 | https://github.com/apache/airflow/pull/34528 | 2bcd450e84426fd678b3fa2e4a15757af234e98a | 61a9ab7600a856bb2b1031419561823e227331da | "2023-08-13T23:52:10Z" | python | "2023-11-03T18:31:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,344 | ["airflow/config_templates/config.yml", "airflow/www/views.py", "newsfragments/33351.significant.rst"] | Not able to trigger DAG with config from UI if param is not defined in a DAG and dag_run.conf is used | ### Apache Airflow version
2.7.0rc1
### What happened
As per https://github.com/apache/airflow/pull/31583, we can now only run a DAG with config from the UI if the DAG has params. However, if a DAG uses dag_run.conf there is no way to run it with config from the UI, and since dag_run.conf is not deprecated, most users will be impacted by this.
@hussein-awala also mentioned it in his [voting](https://lists.apache.org/thread/zd9ppxw1xwxsl66w0tyw1wch9flzb03w)
### What you think should happen instead
I think there should be a way to provide param values from the UI when dag_run.conf is used in a DAG.
### How to reproduce
Use the DAG below in 2.7.0rc and you will notice there is no way to provide a conf value from the Airflow UI.
DAG CODE
```
from airflow.models import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator
from airflow.utils.dates import days_ago
dag = DAG(
dag_id="trigger_target_dag",
default_args={"start_date": days_ago(2), "owner": "Airflow"},
tags=["core"],
schedule_interval=None, # This must be none so it's triggered by the controller
is_paused_upon_creation=False, # This must be set so other workers can pick this dag up. Maybe it's a bug, idk
)
def run_this_func(**context):
print(
f"Remotely received value of {context['dag_run'].conf['message']} for key=message "
)
run_this = PythonOperator(
task_id="run_this",
python_callable=run_this_func,
dag=dag,
)
bash_task = BashOperator(
task_id="bash_task",
bash_command='echo "Here is the message: $message"',
env={"message": '{{ dag_run.conf["message"] if dag_run else "" }}'},
dag=dag,
)
```
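As a point of comparison, adding even a single `params` entry to such a DAG brings the trigger form back while `dag_run.conf` keeps working; a minimal sketch of that workaround (the default value, and the claim that form values land in the run conf, are my assumptions):
```python
# Hedged workaround sketch: declaring params re-enables "Trigger DAG w/ config" in 2.7,
# and (assumption) the values submitted through the form are still visible via dag_run.conf.
from airflow.models import DAG
from airflow.utils.dates import days_ago

dag_with_params = DAG(
    dag_id="trigger_target_dag_with_params",
    default_args={"start_date": days_ago(2), "owner": "Airflow"},
    schedule_interval=None,
    params={"message": "hello"},
)
```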
### Operating System
macOS Monterey
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33344 | https://github.com/apache/airflow/pull/33351 | 45713446f37ee4b1ee972ab8b5aa1ac0b2482197 | c0362923fd8250328eab6e60f0cf7e855bfd352e | "2023-08-12T09:34:55Z" | python | "2023-08-13T12:57:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,325 | ["airflow/www/views.py"] | providers view shows description with HTML element | ### Body
In Admin -> Providers view
The description shows a `<br>`
<img width="1286" alt="Screenshot 2023-08-11 at 21 54 25" src="https://github.com/apache/airflow/assets/45845474/2cdba81e-9cea-4ed4-8420-1e9ab4c2eee2">
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/33325 | https://github.com/apache/airflow/pull/33326 | 682176d57263aa2aab1aa8703723270ab3148af4 | 23d542462a1aaa5afcd36dedc3c2a12c840e1d2c | "2023-08-11T19:04:02Z" | python | "2023-08-11T22:58:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,323 | ["tests/jobs/test_triggerer_job.py"] | Flaky test `test_trigger_firing` with ' SQLite objects created in a thread can only be used in that same thread.' | ### Body
Observed in https://github.com/apache/airflow/actions/runs/5835505313/job/15827357798?pr=33309
```
___________________ ERROR at teardown of test_trigger_firing ___________________
self = <sqlalchemy.future.engine.Connection object at 0x7f92d3327910>
def _rollback_impl(self):
assert not self.__branch_from
if self._has_events or self.engine._has_events:
self.dispatch.rollback(self)
if self._still_open_and_dbapi_connection_is_valid:
if self._echo:
if self._is_autocommit_isolation():
self._log_info(
"ROLLBACK using DBAPI connection.rollback(), "
"DBAPI should ignore due to autocommit mode"
)
else:
self._log_info("ROLLBACK")
try:
> self.engine.dialect.do_rollback(self.connection)
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1062:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <sqlalchemy.dialects.sqlite.pysqlite.SQLiteDialect_pysqlite object at 0x7f92d9698650>
dbapi_connection = <sqlalchemy.pool.base._ConnectionFairy object at 0x7f92d33839d0>
def do_rollback(self, dbapi_connection):
> dbapi_connection.rollback()
E sqlite3.ProgrammingError: SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140269149157120 and this is thread id 140269532822400.
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py:683: ProgrammingError
The above exception was the direct cause of the following exception:
@pytest.fixture(autouse=True, scope="function")
def close_all_sqlalchemy_sessions():
from sqlalchemy.orm import close_all_sessions
close_all_sessions()
yield
> close_all_sessions()
tests/conftest.py:953:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py:4315: in close_all_sessions
sess.close()
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py:1816: in close
self._close_impl(invalidate=False)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py:1858: in _close_impl
transaction.close(invalidate)
/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py:926: in close
transaction.close()
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:2426: in close
self._do_close()
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:2649: in _do_close
self._close_impl()
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:2635: in _close_impl
self._connection_rollback_impl()
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:2627: in _connection_rollback_impl
self.connection._rollback_impl()
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1064: in _rollback_impl
self._handle_dbapi_exception(e, None, None, None, None)
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:2134: in _handle_dbapi_exception
util.raise_(
/usr/local/lib/python3.11/site-packages/sqlalchemy/util/compat.py:211: in raise_
raise exception
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1062: in _rollback_impl
self.engine.dialect.do_rollback(self.connection)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <sqlalchemy.dialects.sqlite.pysqlite.SQLiteDialect_pysqlite object at 0x7f92d9698650>
dbapi_connection = <sqlalchemy.pool.base._ConnectionFairy object at 0x7f92d33839d0>
def do_rollback(self, dbapi_connection):
> dbapi_connection.rollback()
E sqlalchemy.exc.ProgrammingError: (sqlite3.ProgrammingError) SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 140269149157120 and this is thread id 140269532822400.
E (Background on this error at: https://sqlalche.me/e/14/f405)
/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py:683: ProgrammingError
------------------------------ Captured log call -------------------------------
INFO airflow.jobs.triggerer_job_runner:triggerer_job_runner.py:171 Setting up TriggererHandlerWrapper with handler <FileTaskHandler (NOTSET)>
INFO airflow.jobs.triggerer_job_runner:triggerer_job_runner.py:227 Setting up logging queue listener with handlers [<LocalQueueHandler (NOTSET)>, <TriggererHandlerWrapper (NOTSET)>]
INFO airflow.jobs.triggerer_job_runner.TriggerRunner:triggerer_job_runner.py:596 trigger test_dag/test_run/test_ti/-1/1 (ID 1) starting
INFO airflow.jobs.triggerer_job_runner.TriggerRunner:triggerer_job_runner.py:600 Trigger test_dag/test_run/test_ti/-1/1 (ID 1) fired: TriggerEvent<True>
Level 100 airflow.triggers.testing.SuccessTrigger:triggerer_job_runner.py:633 trigger end
INFO airflow.jobs.triggerer_job_runner.TriggerRunner:triggerer_job_runner.py:622 trigger test_dag/test_run/test_ti/-1/1 (ID 1) completed
```
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/33323 | https://github.com/apache/airflow/pull/34075 | 601b9cd33c5f1a92298eabb3934a78fb10ca9a98 | 47f79b9198f3350951dc21808c36f889bee0cd06 | "2023-08-11T18:50:00Z" | python | "2023-09-04T14:40:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,319 | [".github/workflows/release_dockerhub_image.yml"] | Documentation outdated on dockerhub | ### What do you see as an issue?
On:
https://hub.docker.com/r/apache/airflow
It says in several places that the last version is 2.3.3 like here:

### Solving the problem
Update the version and requirements to 2.6.3.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33319 | https://github.com/apache/airflow/pull/33348 | 50765eb0883652c16b40d69d8a1ac78096646610 | 98fb7d6e009aaf4bd06ffe35e526af2718312607 | "2023-08-11T17:06:15Z" | python | "2023-08-12T14:22:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,300 | ["airflow/providers/mysql/hooks/mysql.py", "tests/providers/mysql/hooks/test_mysql.py", "tests/providers/mysql/hooks/test_mysql_connector_python.py"] | MySqlHook add support for init_command | ### Description
There is currently no way to pass an `init_command` connection argument for a mysql connection when using either the `mysqlclient` or `mysql-connector-python` libraries with the MySql provider's `MySqlHook`.
Documentation for connection arguments for `mysqlclient` library, listing `init_command`:
https://mysqlclient.readthedocs.io/user_guide.html?highlight=init_command#functions-and-attributes
Documentation for connection arguments for `mysql-connector-python` library, listing `init_command`:
https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html
There can be many uses for `init_command`, and it also raises the question of why we explicitly support certain connection arguments and not others.
### Use case/motivation
For my own use right now I am currently am subclassing the hook and then altering the connection arguments to pass in `init_command` to set the `time_zone` session variable to UTC at connection time, like so:
```python
conn_config['init_command'] = r"""SET time_zone = '+00:00';"""
```
Note: This is just an example, there can be many other uses for `init_command` besides the example above. Also, I am aware there is a `time_zone` argument for connections via the `mysql-connector-python` library, however that argument is not supported by connections made with `mysqlclient` library. Both libraries do support the `init_command` argument.
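For reference, a minimal sketch of the underlying client capability this maps to (illustrative only, not a `MySqlHook` API; the connection values are placeholders):
```python
# Illustrative sketch of mysqlclient's init_command argument; host/user/passwd/db
# are placeholders, and this is not how MySqlHook builds its connection today.
import MySQLdb

conn = MySQLdb.connect(
    host="localhost",
    user="airflow",
    passwd="airflow",
    db="airflow",
    init_command="SET time_zone = '+00:00';",  # executed right after the connection is established
)
```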
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33300 | https://github.com/apache/airflow/pull/33359 | ea8519c0554d16b13d330a686f8479fc10cc58f2 | dce9796861e0a535952f79b0e2a7d5a012fcc01b | "2023-08-11T00:41:55Z" | python | "2023-08-18T05:58:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,256 | ["airflow/sensors/time_sensor.py", "tests/sensors/test_time_sensor.py"] | TimeSensorAsync does not use DAG timezone to convert naive time input | ### Apache Airflow version
2.6.3
### What happened
TimeSensor and TimeSensorAsync convert timezones differently.
TimeSensor converts a naive time into a tz-aware time with `self.dag.timezone`. TimeSensorAsync does not, and erroneously converts it to UTC instead.
### What you think should happen instead
TimeSensor and TimeSensorAsync should behave the same.
### How to reproduce
Compare the logic of TimeSensor versus TimeSensorAsync, given a DAG with a UTC+2 (for example `Europe/Berlin`) timezone and the target_time input of `datetime.time(9, 0)`.
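For example, a minimal reproduction sketch (my own, with placeholder ids):
```python
# Reproduction sketch: both sensors get the same naive 09:00, but only TimeSensor
# interprets it in the DAG timezone; TimeSensorAsync treats it as 09:00 UTC.
import datetime
import pendulum
from airflow import DAG
from airflow.sensors.time_sensor import TimeSensor, TimeSensorAsync

with DAG(
    dag_id="time_sensor_tz_repro",
    start_date=pendulum.datetime(2023, 8, 1, tz="Europe/Berlin"),
    schedule=None,
    catchup=False,
):
    TimeSensor(task_id="sync_9am", target_time=datetime.time(9, 0))        # fires at 09:00 Berlin time
    TimeSensorAsync(task_id="async_9am", target_time=datetime.time(9, 0))  # fires at 09:00 UTC (the bug)
```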
### Operating System
Official container image, Debian GNU/Linux 11 (bullseye), Python 3.10.12
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
EKS + Kustomize stack with airflow-ui, airflow-scheduler, and airflow-triggerer.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33256 | https://github.com/apache/airflow/pull/33406 | 84a3daed8691d5e129eaf3e02061efb8b6ca56cb | 6c50ef59cc4f739f126e5b123775340a3351a3e8 | "2023-08-09T11:52:35Z" | python | "2023-10-12T03:27:19Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,255 | ["airflow/providers/microsoft/azure/secrets/key_vault.py"] | Azure KeyVault Backend logging level | ### Apache Airflow version
2.6.3
### What happened
I've set up Azure Key Vault as a [backend](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/secrets-backends/azure-key-vault.html) for fetching connections, and it works fine. However, there's just too much logging and it makes it hard for our users to read the logs. For example:
```
[2023-08-09, 13:32:30 CEST] {_universal.py:513} INFO - Request URL: 'https://REDACTED.vault.azure.net/secrets/airflow-connections-ode-odbc-dev-dw/?api-version=REDACTED'
Request method: 'GET'
Request headers:
'Accept': 'application/json'
'x-ms-client-request-id': '6cdf2a74-36a8-11ee-8cac-6ac595ee5ea6'
'User-Agent': 'azsdk-python-keyvault-secrets/4.7.0 Python/3.9.17 (Linux-5.4.0-1111-azure-x86_64-with-glibc2.31)'
No body was attached to the request
[2023-08-09, 13:32:30 CEST] {_universal.py:549} INFO - Response status: 401
Response headers:
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Content-Length': '97'
'Content-Type': 'application/json; charset=utf-8'
'Expires': '-1'
'WWW-Authenticate': 'Bearer authorization="https://login.microsoftonline.com/100b3c99-f3e2-4da0-9c8a-b9d345742c36", resource="https://vault.azure.net"'
'x-ms-keyvault-region': 'REDACTED'
'x-ms-client-request-id': '6cdf2a74-36a8-11ee-8cac-6ac595ee5ea6'
'x-ms-request-id': '563d7428-9df4-4d6a-9766-19626395056f'
'x-ms-keyvault-service-version': '1.9.908.1'
'x-ms-keyvault-network-info': 'conn_type=Ipv4;addr=20.76.1.64;act_addr_fam=InterNetwork;'
'X-Content-Type-Options': 'REDACTED'
'Strict-Transport-Security': 'REDACTED'
'Date': 'Wed, 09 Aug 2023 11:32:30 GMT'
[2023-08-09, 13:32:30 CEST] {_universal.py:513} INFO - Request URL: 'https://login.microsoftonline.com/100b3c99-f3e2-4da0-9c8a-b9d345742c36/v2.0/.well-known/openid-configuration'
Request method: 'GET'
Request headers:
'User-Agent': 'azsdk-python-identity/1.13.0 Python/3.9.17 (Linux-5.4.0-1111-azure-x86_64-with-glibc2.31)'
No body was attached to the request
[2023-08-09, 13:32:30 CEST] {_universal.py:549} INFO - Response status: 200
Response headers:
'Cache-Control': 'max-age=86400, private'
'Content-Type': 'application/json; charset=utf-8'
'Strict-Transport-Security': 'REDACTED'
'X-Content-Type-Options': 'REDACTED'
'Access-Control-Allow-Origin': 'REDACTED'
'Access-Control-Allow-Methods': 'REDACTED'
'P3P': 'REDACTED'
'x-ms-request-id': '80869b0e-4cde-47f7-8721-3f430a8c3600'
'x-ms-ests-server': 'REDACTED'
'X-XSS-Protection': 'REDACTED'
'Set-Cookie': 'REDACTED'
'Date': 'Wed, 09 Aug 2023 11:32:30 GMT'
'Content-Length': '1753'
[2023-08-09, 13:32:30 CEST] {_universal.py:513} INFO - Request URL: 'https://login.microsoftonline.com/common/discovery/instance?api-version=REDACTED&authorization_endpoint=REDACTED'
Request method: 'GET'
Request headers:
'Accept': 'application/json'
'User-Agent': 'azsdk-python-identity/1.13.0 Python/3.9.17 (Linux-5.4.0-1111-azure-x86_64-with-glibc2.31)'
No body was attached to the request
[2023-08-09, 13:32:30 CEST] {_universal.py:549} INFO - Response status: 200
Response headers:
'Cache-Control': 'max-age=86400, private'
'Content-Type': 'application/json; charset=utf-8'
'Strict-Transport-Security': 'REDACTED'
'X-Content-Type-Options': 'REDACTED'
'Access-Control-Allow-Origin': 'REDACTED'
'Access-Control-Allow-Methods': 'REDACTED'
'P3P': 'REDACTED'
'x-ms-request-id': '93b3dfad-72c7-4629-8625-d2b335363a00'
'x-ms-ests-server': 'REDACTED'
'X-XSS-Protection': 'REDACTED'
'Set-Cookie': 'REDACTED'
'Date': 'Wed, 09 Aug 2023 11:32:30 GMT'
'Content-Length': '945'
[2023-08-09, 13:32:30 CEST] {_universal.py:510} INFO - Request URL: 'https://login.microsoftonline.com/100b3c99-f3e2-4da0-9c8a-b9d345742c36/oauth2/v2.0/token'
Request method: 'POST'
Request headers:
'Accept': 'application/json'
'x-client-sku': 'REDACTED'
'x-client-ver': 'REDACTED'
'x-client-os': 'REDACTED'
'x-client-cpu': 'REDACTED'
'x-ms-lib-capability': 'REDACTED'
'client-request-id': 'REDACTED'
'x-client-current-telemetry': 'REDACTED'
'x-client-last-telemetry': 'REDACTED'
'User-Agent': 'azsdk-python-identity/1.13.0 Python/3.9.17 (Linux-5.4.0-1111-azure-x86_64-with-glibc2.31)'
A body is sent with the request
[2023-08-09, 13:32:30 CEST] {_universal.py:549} INFO - Response status: 200
Response headers:
'Cache-Control': 'no-store, no-cache'
'Pragma': 'no-cache'
'Content-Type': 'application/json; charset=utf-8'
'Expires': '-1'
'Strict-Transport-Security': 'REDACTED'
'X-Content-Type-Options': 'REDACTED'
'P3P': 'REDACTED'
'client-request-id': 'REDACTED'
'x-ms-request-id': '79cf595b-4f41-47c3-a370-f9321c533a00'
'x-ms-ests-server': 'REDACTED'
'x-ms-clitelem': 'REDACTED'
'X-XSS-Protection': 'REDACTED'
'Set-Cookie': 'REDACTED'
'Date': 'Wed, 09 Aug 2023 11:32:30 GMT'
'Content-Length': '1313'
[2023-08-09, 13:32:30 CEST] {chained.py:87} INFO - DefaultAzureCredential acquired a token from EnvironmentCredential
[2023-08-09, 13:32:30 CEST] {_universal.py:513} INFO - Request URL: 'https://REDACTED.vault.azure.net/secrets/airflow-connections-ode-odbc-dev-dw/?api-version=REDACTED'
Request method: 'GET'
Request headers:
'Accept': 'application/json'
'x-ms-client-request-id': '6cdf2a74-36a8-11ee-8cac-6ac595ee5ea6'
'User-Agent': 'azsdk-python-keyvault-secrets/4.7.0 Python/3.9.17 (Linux-5.4.0-1111-azure-x86_64-with-glibc2.31)'
'Authorization': 'REDACTED'
No body was attached to the request
[2023-08-09, 13:32:30 CEST] {_universal.py:549} INFO - Response status: 404
Response headers:
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Content-Length': '332'
'Content-Type': 'application/json; charset=utf-8'
'Expires': '-1'
'x-ms-keyvault-region': 'REDACTED'
'x-ms-client-request-id': '6cdf2a74-36a8-11ee-8cac-6ac595ee5ea6'
'x-ms-request-id': 'ac41c47c-30f0-46cf-9157-6e5dba031ffa'
'x-ms-keyvault-service-version': '1.9.908.1'
'x-ms-keyvault-network-info': 'conn_type=Ipv4;addr=20.76.1.64;act_addr_fam=InterNetwork;'
'x-ms-keyvault-rbac-assignment-id': 'REDACTED'
'x-ms-keyvault-rbac-cache': 'REDACTED'
'X-Content-Type-Options': 'REDACTED'
'Strict-Transport-Security': 'REDACTED'
'Date': 'Wed, 09 Aug 2023 11:32:30 GMT'
[2023-08-09, 13:32:30 CEST] {base.py:73} INFO - Using connection ID 'ode-odbc-dev-dw' for task execution.
[2023-08-09, 13:32:30 CEST] {_universal.py:513} INFO - Request URL: 'https://REDACTED.vault.azure.net/secrets/airflow-connections-ode-odbc-dev-dw/?api-version=REDACTED'
Request method: 'GET'
Request headers:
'Accept': 'application/json'
'x-ms-client-request-id': '6d2b797e-36a8-11ee-8cac-6ac595ee5ea6'
'User-Agent': 'azsdk-python-keyvault-secrets/4.7.0 Python/3.9.17 (Linux-5.4.0-1111-azure-x86_64-with-glibc2.31)'
'Authorization': 'REDACTED'
No body was attached to the request
[2023-08-09, 13:32:30 CEST] {_universal.py:549} INFO - Response status: 404
Response headers:
'Cache-Control': 'no-cache'
'Pragma': 'no-cache'
'Content-Length': '332'
'Content-Type': 'application/json; charset=utf-8'
'Expires': '-1'
'x-ms-keyvault-region': 'REDACTED'
'x-ms-client-request-id': '6d2b797e-36a8-11ee-8cac-6ac595ee5ea6'
'x-ms-request-id': 'ac37f859-14cc-48e3-8a88-6214b96ef75e'
'x-ms-keyvault-service-version': '1.9.908.1'
'x-ms-keyvault-network-info': 'conn_type=Ipv4;addr=20.76.1.64;act_addr_fam=InterNetwork;'
'x-ms-keyvault-rbac-assignment-id': 'REDACTED'
'x-ms-keyvault-rbac-cache': 'REDACTED'
'X-Content-Type-Options': 'REDACTED'
'Strict-Transport-Security': 'REDACTED'
'Date': 'Wed, 09 Aug 2023 11:32:30 GMT'
[2023-08-09, 13:32:31 CEST] {base.py:73} INFO - Using connection ID 'ode-odbc-dev-dw' for task execution.
```
Changing airflow logging_level to WARNING/ERROR is one way, but then the task logs don't have sufficient information. Is it possible to influence just the logging level on SecretClient?
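A hedged workaround sketch until such a kwarg exists (the logger names are Azure SDK conventions and are an assumption on my side): raise the level of the Azure SDK loggers from a logging customization, e.g. a custom `log_config`:
```python
# Hedged workaround sketch: quieten the Azure SDK's request/response logging without
# lowering Airflow's overall logging level. Logger names may differ per SDK version.
import logging

for name in (
    "azure.core.pipeline.policies.http_logging_policy",  # request/response header dumps
    "azure.identity",                                     # credential chain chatter
):
    logging.getLogger(name).setLevel(logging.WARNING)
```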
### What you think should happen instead
Ideally, it should be possible to set logging level specifically for the keyvault backend in the backend_kwargs:
```
backend_kwargs = {"connections_prefix": "airflow-connections", "variables_prefix": "airflow-variables", "vault_url": "https://example-akv-resource-name.vault.azure.net/", "logging_level": "WARNING"}
```
### How to reproduce
Set up KV backend as described [here](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/secrets-backends/azure-key-vault.html).
### Operating System
Debian GNU/Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Deployed with Helm chart on AKS.
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33255 | https://github.com/apache/airflow/pull/33314 | dfb2403ec4b6d147ac31125631677cee9e12347e | 4460356c03e5c1dedd72ce87a8ccfb9b19a33d76 | "2023-08-09T11:31:35Z" | python | "2023-08-13T22:40:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,248 | ["airflow/providers/amazon/aws/hooks/glue.py", "airflow/providers/amazon/aws/operators/glue.py", "tests/providers/amazon/aws/hooks/test_glue.py", "tests/providers/amazon/aws/operators/test_glue.py"] | GlueOperator: iam_role_arn as a parameter | ### Description
Hi,
There is a mandatory `iam_role_name` parameter for GlueJobOperator/GlueJobHook.
It adds an additional step of translating the role name into the ARN, which requires connectivity to the global AWS IAM endpoint (no PrivateLink available).
For private setups this means opening connectivity and adding proxy configuration to make it work.
It would be great to also have the possibility to pass `iam_role_arn` directly and avoid this additional step.
### Use case/motivation
Role assignment does not need external connectivity; being able to pass the ARN directly instead of the name would avoid the extra IAM call.
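A hedged sketch of what this could look like from the DAG author's side (`iam_role_arn` is the proposed parameter, everything else is the existing interface with placeholder values):
```python
# Sketch of the proposal: iam_role_arn is hypothetical; job/script/role values are placeholders.
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

run_glue_job = GlueJobOperator(
    task_id="run_glue_job",
    job_name="example-job",
    script_location="s3://example-bucket/scripts/job.py",
    iam_role_name="example-glue-role",  # today: resolved to an ARN via a call to the global IAM endpoint
    # iam_role_arn="arn:aws:iam::123456789012:role/example-glue-role",  # proposed: no IAM lookup needed
)
```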
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33248 | https://github.com/apache/airflow/pull/33408 | cc360b73c904b7f24a229282458ee05112468f5d | 60df70526a00fb9a3e245bb3ffb2a9faa23582e7 | "2023-08-09T07:59:10Z" | python | "2023-08-15T21:20:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,217 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | get_current_context not present in user_defined_macros | ### Apache Airflow version
2.6.3
### What happened
get_current_context() fails inside a user_defined_macros function, giving:
```
{abstractoperator.py:594} ERROR - Exception rendering Jinja template for task 'toot', field 'op_kwargs'. Template: {'arg': '{{ macro_run_id() }}'}
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/abstractoperator.py", line 586, in _do_render_template_fields
rendered_content = self.render_template(
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/template/templater.py", line 168, in render_template
return {k: self.render_template(v, context, jinja_env, oids) for k, v in value.items()}
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/template/templater.py", line 168, in <dictcomp>
return {k: self.render_template(v, context, jinja_env, oids) for k, v in value.items()}
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/template/templater.py", line 156, in render_template
return self._render(template, context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/abstractoperator.py", line 540, in _render
return super()._render(template, context, dag=dag)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/template/templater.py", line 113, in _render
return render_template_to_string(template, context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/helpers.py", line 288, in render_template_to_string
return render_template(template, cast(MutableMapping[str, Any], context), native=False)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/helpers.py", line 283, in render_template
return "".join(nodes)
File "<template>", line 12, in root
File "/home/airflow/.local/lib/python3.8/site-packages/jinja2/sandbox.py", line 393, in call
return __context.call(__obj, *args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/jinja2/runtime.py", line 298, in call
return __obj(*args, **kwargs)
File "/opt/airflow/dags/dags/exporter/finance_closing.py", line 7, in macro_run_id
schedule_interval = get_current_context()["dag"].schedule_interval.replace("@", "")
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 784, in get_current_context
raise AirflowException(
airflow.exceptions.AirflowException: Current context was requested but no context was found! Are you running within an airflow task?
```
### What you think should happen instead
User macros should be able to access the current context instead of raising:
```
airflow.exceptions.AirflowException: Current context was requested but no context was found! Are you running within an airflow task?
```
### How to reproduce
```python
from airflow.models import DAG
from airflow.operators.python import PythonOperator
from airflow.utils.dates import days_ago
def macro_run_id():
    from airflow.operators.python import get_current_context

    a = get_current_context()["dag"].schedule_interval.replace("@", "")
    if a == "None":
        a = "manual"
    return a


with DAG(dag_id="example2",
         start_date=days_ago(61),
         user_defined_macros={"macro_run_id": macro_run_id},
         schedule_interval="@monthly"):

    def toto(arg):
        print(arg)

    PythonOperator(task_id="toot", python_callable=toto,
                   op_kwargs={"arg": "{{ macro_run_id() }}"})
```
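A hedged workaround sketch (mine, not part of the report): pass the `dag` object into the macro from the template, where it is already available in the Jinja context, instead of calling `get_current_context()`:
```python
# Hedged workaround sketch: receive the dag explicitly instead of using get_current_context().
def macro_run_id(dag):
    interval = str(dag.schedule_interval).replace("@", "")
    return "manual" if interval == "None" else interval

# user_defined_macros={"macro_run_id": macro_run_id}
# op_kwargs={"arg": "{{ macro_run_id(dag) }}"}
```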
### Operating System
ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33217 | https://github.com/apache/airflow/pull/33645 | 47682042a45501ab235d612580b8284a8957523e | 9fa782f622ad9f6e568f0efcadf93595f67b8a20 | "2023-08-08T17:17:47Z" | python | "2023-08-24T13:33:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,203 | ["airflow/providers/microsoft/azure/hooks/wasb.py", "tests/providers/microsoft/azure/hooks/test_wasb.py"] | Provider apache-airflow-providers-microsoft-azure no longer==6.2.3 expose `account_name` | ### Apache Airflow version
2.6.3
### What happened
Up to `apache-airflow-providers-microsoft-azure==6.2.2`, if you do `WasbHook(wasb_conn_id=self.conn_id).get_conn().account_name` you will get the `account_name`. But in version `apache-airflow-providers-microsoft-azure==6.2.3` this no longer works for the connection below:
```
- conn_id: wasb_conn_with_access_key
  conn_type: wasb
  host: astrosdk.blob.core.windows.net
  description: null
  extra:
    shared_access_key: $AZURE_WASB_ACCESS_KEY
```
### What you think should happen instead
We should get the `account_name` for `apache-airflow-providers-microsoft-azure==6.2.3`.
### How to reproduce
Try installing `apache-airflow-providers-microsoft-azure==6.2.3` and run `WasbHook(wasb_conn_id=self.conn_id).get_conn().account_name`.
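For example, a minimal reproduction sketch (the connection id matches the YAML above):
```python
# Reproduction sketch: prints the storage account name on 6.2.2, fails to expose it on 6.2.3.
from airflow.providers.microsoft.azure.hooks.wasb import WasbHook

client = WasbHook(wasb_conn_id="wasb_conn_with_access_key").get_conn()
print(client.account_name)
```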
### Operating System
Mac
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33203 | https://github.com/apache/airflow/pull/33457 | 8b7e0babe1c3e9bef6e934d1e362564bc73fda4d | bd608a56abd1a6c2a98987daf7f092d2dabea555 | "2023-08-08T12:00:13Z" | python | "2023-08-17T07:55:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,138 | ["airflow/providers/redis/sensors/redis_pub_sub.py", "tests/providers/redis/sensors/test_redis_pub_sub.py"] | Move redis subscribe to poke() method in Redis Sensor (#32984): @potiuk | The fix has a bug (subscribe happens too frequently) | https://github.com/apache/airflow/issues/33138 | https://github.com/apache/airflow/pull/33139 | 76ca94d2f23de298bb46668998c227a86b4ecbd0 | 29a59de237ccd42a3a5c20b10fc4c92b82ff4475 | "2023-08-05T09:05:21Z" | python | "2023-08-05T10:28:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,099 | ["chart/templates/_helpers.yaml", "chart/templates/configmaps/configmap.yaml", "chart/templates/scheduler/scheduler-deployment.yaml", "chart/templates/webserver/webserver-deployment.yaml", "chart/values.schema.json", "chart/values.yaml", "helm_tests/airflow_core/test_scheduler.py", "helm_tests/webserver/test_webserver.py"] | Add startupProbe to airflow helm charts | ### Description
Introducing a startupProbe on the Airflow services would be useful for slow-starting containers and, most of all, it doesn't have side effects.
### Use case/motivation
We have an internal feature where we copy a venv from the Airflow services to cloud storage, which can sometimes take a few minutes. Copying a venv is a metadata-heavy load: https://learn.microsoft.com/en-us/troubleshoot/azure/azure-storage/files-troubleshoot-performance?tabs=linux#cause-2-metadata-or-namespace-heavy-workload.
Introducing a startupProbe on the Airflow services would be useful for slow-starting containers and, most of all, it doesn't have side effects.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33099 | https://github.com/apache/airflow/pull/33107 | 9736143468cfe034e65afb3df3031ab3626f0f6d | ca5acda1617a5cdb1d04f125568ffbd264209ec7 | "2023-08-04T07:14:41Z" | python | "2023-08-07T20:03:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,061 | ["airflow/utils/log/secrets_masker.py", "tests/utils/log/test_secrets_masker.py"] | TriggerDagRunOperator DAG task log showing Warning: Unable to redact <DagRunState.SUCCESS: 'success'> | ### Apache Airflow version
main (development)
### What happened
When a TriggerDagRunOperator task runs, its log shows the warning below:
`WARNING - Unable to redact <DagRunState.SUCCESS: 'success'>, please report this via <https://github.com/apache/airflow/issues>. Error was: TypeError: EnumMeta.__call__() missing 1 required positional argument: 'value'`
<img width="1479" alt="image" src="https://github.com/apache/airflow/assets/43964496/0c183ffc-2440-49ee-b8d0-951ddc078c36">
### What you think should happen instead
There should not be any warning in logs
### How to reproduce
Steps to Repo:
1. Launch airflow using Breeze with main
2. Trigger any TriggerDagRunOperator
3. Check logs
DAG :
```python
from airflow import DAG
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow.operators.dummy import DummyOperator
from airflow.utils.dates import days_ago
"""This example illustrates the use of the TriggerDagRunOperator. There are 2
entities at work in this scenario:
1. The Controller DAG - the DAG that conditionally executes the trigger
2. The Target DAG - DAG being triggered (in trigger_dagrun_target.py)
"""
dag = DAG(
dag_id="trigger_controller_dag",
default_args={"owner": "airflow", "start_date": days_ago(2)},
schedule_interval=None,
tags=["core"],
)
trigger = TriggerDagRunOperator(
task_id="test_trigger_dagrun",
trigger_dag_id="trigger_target_dag",
reset_dag_run=True,
wait_for_completion=True,
conf={"message": "Hello World"},
dag=dag,
)
```
Note: create a target DAG named `trigger_target_dag` that, for example, sleeps for some time
### Operating System
OS x
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33061 | https://github.com/apache/airflow/pull/33065 | 1ff33b800246fdbfa7aebe548055409d64307f46 | b0f61be2f9791b75da3bca0bc30fdbb88e1e0a8a | "2023-08-03T05:57:24Z" | python | "2023-08-03T13:30:18Z" |
closed | apache/airflow | https://github.com/apache/airflow | 33,014 | ["airflow/www/views.py", "tests/www/views/test_views_tasks.py"] | Clearing task from List Task Instance page in UI does not also clear downstream tasks? | ### Apache Airflow version
2.6.3
### What happened
Select tasks from the List Task Instance page in the UI and select Clear.
Only those tasks are cleared; downstream tasks are not also cleared, as they are when clearing from the DAG graph view.
### What you think should happen instead
downstream tasks should also be cleared
### How to reproduce
Select tasks from List Task Instance page in UI for which there are downstream tasks and select clear
### Operating System
rocky
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/33014 | https://github.com/apache/airflow/pull/34529 | 541c9addb6b2ee56244793503cbf5c218e80dec8 | 5b0ce3db4d36e2a7f20a78903daf538bbde5e38a | "2023-08-01T19:39:33Z" | python | "2023-09-22T17:54:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,993 | ["airflow/providers/vertica/hooks/vertica.py", "tests/providers/vertica/hooks/test_vertica.py"] | Error not detected in multi-statement vertica query | ### Apache Airflow version
2.6.3
### What happened
Hello,
There is a problem with multi-statement queries and Vertica: an error will be detected only if it happens in the first statement of the SQL.
For example, if I run the following SQL with the default SQLExecuteQueryOperator options:
INSERT INTO MyTable (Key, Label) values (1, 'test 1');
INSERT INTO MyTable (Key, Label) values (1, 'test 2');
INSERT INTO MyTable (Key, Label) values (3, 'test 3');
the first insert will be committed, the next ones won't, and no error will be returned.
The same SQL run on MySQL will return an error and no row will be inserted.
It seems to be linked to the way the vertica-python client works (an issue was opened on their GitHub 4 years ago, [Duplicate key values error is not thrown as exception and is getting ignored](https://github.com/vertica/vertica-python/issues/255)), but since a workaround was provided there, I don't think it will be fixed in the near future.
For the moment, as a workaround I use the split-statements option with auto-commit disabled, but I think it's dangerous to leave this behaviour as is.
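For reference, a hedged sketch of that workaround (connection id and table are placeholders):
```python
# Hedged workaround sketch: split the script into single statements and disable
# autocommit so a failing statement raises instead of being silently swallowed.
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator

insert_rows = SQLExecuteQueryOperator(
    task_id="insert_rows",
    conn_id="vertica_default",
    sql="""
        INSERT INTO MyTable (Key, Label) VALUES (1, 'test 1');
        INSERT INTO MyTable (Key, Label) VALUES (1, 'test 2');
        INSERT INTO MyTable (Key, Label) VALUES (3, 'test 3');
    """,
    split_statements=True,
    autocommit=False,
)
```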
### What you think should happen instead
_No response_
### How to reproduce
Create a table MyTable with two columns, Key and Label, and declare Key as the primary key.
Run the following query with SQLExecuteQueryOperator:
INSERT INTO MyTable (Key, Label) values (1, 'test 1');
INSERT INTO MyTable (Key, Label) values (1, 'test 2');
INSERT INTO MyTable (Key, Label) values (3, 'test 3');
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32993 | https://github.com/apache/airflow/pull/34041 | 6b2a0cb3c84eeeaec013c96153c6b9538c6e74c4 | 5f47e60962b3123b1e6c8b42bef2c2643f54b601 | "2023-08-01T08:06:25Z" | python | "2023-09-06T21:09:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,969 | ["airflow/providers/databricks/hooks/databricks_base.py", "docs/apache-airflow-providers-databricks/connections/databricks.rst", "tests/providers/databricks/hooks/test_databricks.py"] | Databricks support for Service Principal Oauth | ### Description
Authentication using OAuth for Databricks Service Principals is now in Public Preview. I would like to implement this into the Databricks Hook. By adding "service_principal_oauth" as a boolean value set to `true` in the extra configuration, the Client Id and Client Secret can be supplied as a username and password.
https://docs.databricks.com/dev-tools/authentication-oauth.html
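A hedged sketch of what the proposed connection shape could look like (the `service_principal_oauth` flag is the proposal, not an existing option; host and credentials are placeholders):
```python
# Sketch of the proposed connection; service_principal_oauth does not exist yet.
import json
from airflow.models.connection import Connection

conn = Connection(
    conn_id="databricks_sp_oauth",
    conn_type="databricks",
    host="https://adb-1234567890123456.7.azuredatabricks.net",
    login="<service-principal-client-id>",
    password="<oauth-client-secret>",
    extra=json.dumps({"service_principal_oauth": True}),
)
```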
### Use case/motivation
Before authentication using OAuth, the only way to use Databricks Service Principals was for another user account to perform a token request on behalf of the Service Principal. This process is difficult to use in the real world, but this new way of obtaining access tokens changes the process and should make a big difference.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32969 | https://github.com/apache/airflow/pull/33005 | a1b5bdb25a6f9565ac5934a9a458e9b079ccf3ae | 8bf53dd5545ecda0e5bbffbc4cc803cbbde719a9 | "2023-07-31T13:43:43Z" | python | "2023-08-14T10:16:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,897 | ["airflow/providers/amazon/aws/hooks/logs.py", "airflow/providers/amazon/aws/log/cloudwatch_task_handler.py", "airflow/providers/amazon/aws/utils/__init__.py", "tests/providers/amazon/aws/hooks/test_logs.py", "tests/providers/amazon/aws/log/test_cloudwatch_task_handler.py"] | Enhance Airflow Logs API to fetch logs from Amazon Cloudwatch with time range | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
MWAA Version: 2.4.3
Airflow Version: 2.4.3
Airflow currently fetches logs from CloudWatch without a time range, so when the CloudWatch logs are large and the log streams are old, the Airflow UI cannot display the logs and shows the error message:
```
*** Reading remote log from Cloudwatch log_group: airflow-cdp-airflow243-XXXX-Task log_stream: dag_id=<DAG_NAME>/run_id=scheduled__2023-07-27T07_25_00+00_00/task_id=<TASK_ID>/attempt=1.log.
Could not read remote logs from log_group: airflow-cdp-airflow243-XXXXXX-Task log_group: airflow-cdp-airflow243-XXXX-Task log_stream: dag_id=<DAG_NAME>/run_id=scheduled__2023-07-27T07_25_00+00_00/task_id=<TASK_ID>/attempt=1.log
```
The Airflow log handler needs to pass start and end timestamps to Amazon CloudWatch's GetLogEvents API to resolve this error; it would also improve the performance of fetching logs.
This is a critical issue for customers when they want to fetch logs to investigate failed pipelines that are a few days to weeks old.
### What you think should happen instead
The Airflow log handler needs to pass start and end timestamps to Amazon CloudWatch's GetLogEvents API to resolve this error.
This should also improve the performance of fetching logs.
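For illustration, a hedged boto3 sketch of the call shape being asked for (log group/stream names and the time window are placeholders):
```python
# Hedged sketch: bound the CloudWatch query to a known window (milliseconds since
# epoch) instead of scanning the whole stream from the beginning.
from datetime import datetime, timedelta, timezone
import boto3

logs = boto3.client("logs")
end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

response = logs.get_log_events(
    logGroupName="airflow-cdp-airflow243-XXXX-Task",
    logStreamName="dag_id=<DAG_NAME>/run_id=<RUN_ID>/task_id=<TASK_ID>/attempt=1.log",
    startTime=int(start.timestamp() * 1000),
    endTime=int(end.timestamp() * 1000),
    startFromHead=True,
)
```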
### How to reproduce
This issue is intermittent and happens mostly on FAILED tasks.
1. Log onto Amazon MWAA Service
2. Open Airflow UI
3. Select DAG
4. Select the Failed Tasks
5. Select Logs
You should see an error message like the one below in the logs:
```
*** Reading remote log from Cloudwatch log_group: airflow-cdp-airflow243-XXXX-Task log_stream: dag_id=<DAG_NAME>/run_id=scheduled__2023-07-27T07_25_00+00_00/task_id=<TASK_ID>/attempt=1.log.
Could not read remote logs from log_group: airflow-cdp-airflow243-XXXXXX-Task log_group: airflow-cdp-airflow243-XXXX-Task log_stream: dag_id=<DAG_NAME>/run_id=scheduled__2023-07-27T07_25_00+00_00/task_id=<TASK_ID>/attempt=1.log
```
### Operating System
Running with Amazon MWAA
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.3.1
apache-airflow==2.4.3
### Deployment
Amazon (AWS) MWAA
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32897 | https://github.com/apache/airflow/pull/33231 | 5707103f447be818ad4ba0c34874b822ffeefc09 | c14cb85f16b6c9befd35866327fecb4ab9bc0fc4 | "2023-07-27T21:01:44Z" | python | "2023-08-10T17:30:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,890 | ["airflow/www/static/js/connection_form.js"] | Airflow UI ignoring extra connection field during test connection | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
In Airflow 2.6.1 I can no longer use the `extra` field in any `http` based connection when testing the connection.
Inspecting the web request for testing the connection I see that the `extra` field is empty, even though I have data in there:
```json
{
"connection_id": "",
"conn_type": "http",
"extra": "{}"
}
```
<img width="457" alt="image" src="https://github.com/apache/airflow/assets/6411855/d6bab951-5d03-4695-a397-8bf6989d93a7">
I saw [this issue](https://github.com/apache/airflow/issues/31330#issuecomment-1558315370) which seems related. It was closed because the opener worked around the issue by creating the connection in code instead of the Airflow UI.
I couldn't find any other issues mentioning this problem.
### What you think should happen instead
The `extra` field should be included in the test connection request.
### How to reproduce
Create an `http` connection in the Airflow UI using at least version 2.6.1. Put any value in the `extra` field and test the connection while inspecting the network request. Notice that the `extra` field value is not supplied in the request.
### Operating System
N/A
### Versions of Apache Airflow Providers
N/A
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
If I had to guess, I think it might be related to [this PR](https://github.com/apache/airflow/pull/28583) where a json linter was added to the extra field.
Saving the connection seems to work fine, just not testing it.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32890 | https://github.com/apache/airflow/pull/35122 | ef497bc3412273c3a45f43f40e69c9520c7cc74c | 789222cb1378079e2afd24c70c1a6783b57e27e6 | "2023-07-27T17:45:31Z" | python | "2023-10-23T15:18:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,877 | ["dev/README_RELEASE_AIRFLOW.md"] | Wrong version in Dockerfile | ### Apache Airflow version
2.6.3
### What happened
I want to use the `2.6.3` stable version of Airflow. I cloned the project and checked out `tags/2.6.3`.
```bash
git checkout tags/2.6.3 -b my_custom_branch
```
After checkout I checked the `Dockerfile`, and this is what I see:
```bash
ARG AIRFLOW_VERSION="2.6.2"
```
Then I downloaded the code as a `zip` ([2.6.3 link](https://github.com/apache/airflow/releases/tag/2.6.3)) and I see the same in the `Dockerfile`.
Does `AIRFLOW_VERSION` have the wrong value?
Thanks !
### What you think should happen instead
_No response_
### How to reproduce
I need confirmation that the version is definitely wrong in the `Dockerfile`.
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32877 | https://github.com/apache/airflow/pull/32888 | db8d737ad690b721270d0c2fd3a83f08d7ce5c3f | 7ba7fb1173e55c24c94fe01f0742fd00cd9c0d82 | "2023-07-27T07:47:07Z" | python | "2023-07-28T04:53:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,866 | ["airflow/providers/databricks/hooks/databricks.py", "airflow/providers/databricks/operators/databricks.py", "docs/apache-airflow-providers-databricks/operators/submit_run.rst", "tests/providers/databricks/operators/test_databricks.py"] | DatabricksSubmitRunOperator should accept a pipeline name for a pipeline_task | ### Description
It would be nice if we could give the DatabricksSubmitRunOperator a pipeline name instead of a pipeline_id for cases when you do not already know the pipeline_id but do know the name. I'm not sure if there's an easy way to fetch a pipeline_id.
### Use case/motivation
Avoid hardcoding pipeline IDs, storing the IDs elsewhere, or fetching the pipeline list and filtering it manually when the pipeline name is known but the ID is not.
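As a hedged illustration of the manual workaround mentioned above (host, token and response field names are assumptions based on the Delta Live Tables list endpoint; pagination is ignored):
```python
# Hedged sketch: list pipelines via the REST API and pick the id whose name matches.
import requests

def pipeline_id_by_name(host: str, token: str, name: str) -> str:
    resp = requests.get(
        f"{host}/api/2.0/pipelines",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    for pipeline in resp.json().get("statuses", []):
        if pipeline.get("name") == name:
            return pipeline["pipeline_id"]
    raise ValueError(f"No pipeline named {name!r}")
```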
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32866 | https://github.com/apache/airflow/pull/32903 | f7f3b675ecd40e32e458b71b5066864f866a60c8 | c45617c4d5988555f2f52684e082b96b65ca6c17 | "2023-07-26T15:33:05Z" | python | "2023-09-07T00:44:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,862 | ["airflow/jobs/triggerer_job_runner.py"] | Change log level of message for event loop block | ### Description
Currently, when the event loop is blocked for more than 0.2 seconds, an error message is logged to the Triggerer notifying the user that the async thread was blocked, likely due to a badly written trigger.
The issue with this message is that there is currently no support for async DB reads. So whenever a DB read is performed (for getting connection information etc.) the event loop is blocked for a short while (~0.3 - 0.8 seconds). This usually only happens once during a Trigger execution, and is not an issue at all in terms of performance.
Based on our internal user testing, I noticed that this error message causes confusion for a lot of users who are new to Deferrable operators. As such, I am proposing that we change the log level of that message to `INFO` so that the message is retained, but does not cause confusion. Until a method is available that would allow us to read from the database asynchronously, there is nothing that can be done about the message.
### Use case/motivation

I'd like the user to see this message as an INFO rather than an ERROR, because it is not something that can be addressed at the moment, and it does not cause any noticeable impact to the user.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32862 | https://github.com/apache/airflow/pull/32979 | 6ada88a407a91a3e1d42ab8a30769a4a6f55588b | 9cbe494e231a5b2e92e6831a4be25802753f03e5 | "2023-07-26T12:57:35Z" | python | "2023-08-02T10:23:08Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,839 | ["airflow/www/security.py", "docs/apache-airflow/security/access-control.rst", "tests/www/test_security.py"] | DAG-level permissions set in Web UI disappear from roles on DAG sync | ### Apache Airflow version
2.6.3
### What happened
Versions: 2.6.2, 2.6.3, main
PR [#30340](https://github.com/apache/airflow/pull/30340) introduced a bug that happens whenever a DAG gets updated or a new DAG is added
**Potential fix:** Adding the code that was removed in PR [#30340](https://github.com/apache/airflow/pull/30340) back to `airflow/models/dagbag.py` fixes the issue. I've tried it on the current main branch using Breeze.
### What you think should happen instead
Permissions set in the Web UI should persist whenever a DAG sync happens
### How to reproduce
1. Download `docker-compose.yaml`:
```bash
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.6.2/docker-compose.yaml'
```
2. Create dirs and set the right Airflow user:
```bash
mkdir -p ./dags ./logs ./plugins ./config && \
echo -e "AIRFLOW_UID=$(id -u)" > .env
```
3. Add `test_dag.py` to ./dags:
```python
import datetime
import pendulum
from airflow import DAG
from airflow.operators.bash import BashOperator
with DAG(
    dag_id="test",
    schedule="0 0 * * *",
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
    catchup=False,
    dagrun_timeout=datetime.timedelta(minutes=60),
) as dag:
    test = BashOperator(
        task_id="test",
        bash_command="echo 1",
    )

if __name__ == "__main__":
    dag.test()
```
4. Run docker compose: `docker compose up`
5. Create role in Web UI: Security > List Roles > Add a new record:
Name: test
Permissions: `can read on DAG:test`
6. Update `test_dag.py`: change `bash_command="echo 1"` to `bash_command="echo 2"`
7. Check test role's permissions: `can read on DAG:test` will be removed
Another option is to add a new dag instead of changing the existing one:
1. Add another dag to ./dags, code doesn't matter
2. Restart scheduler: `docker restart [scheduler container name]`
3. Check test role's permissions: `can read on DAG:test` will be removed
### Operating System
Ubuntu 22.04.1 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Docker 24.0.2
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32839 | https://github.com/apache/airflow/pull/33632 | 83efcaa835c4316efe2f45fd9cfb619295b25a4f | 370348a396b5ddfe670e78ad3ab87d01f6d0107f | "2023-07-25T19:13:12Z" | python | "2023-08-24T19:20:13Z" |