Columns: status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
closed | apache/airflow | https://github.com/apache/airflow | 32,708 | ["Dockerfile", "Dockerfile.ci", "docs/docker-stack/build-arg-ref.rst", "docs/docker-stack/changelog.rst", "scripts/docker/install_mysql.sh"] | MYSQL_OPT_RECONNECT is deprecated. When exec airflow db upgrade. |

### Apache Airflow version
2.6.3
### What happened
After installing Airflow and configuring MySQL as my backend database, I ran `airflow db upgrade`.
It printed many warnings containing "WARNING: MYSQL_OPT_RECONNECT is deprecated and will be removed in a future version."
### What you think should happen instead
_No response_
### How to reproduce
```
mysql_config --version
8.0.34
mysql --version
mysql  Ver 8.0.34 for Linux on x86_64 (MySQL Community Server - GPL)
```
Set up the Airflow MySQL backend and run `airflow db upgrade`.
### Operating System
CentOS 7
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32708 | https://github.com/apache/airflow/pull/35070 | dcb72b5a4661223c9de7beea40264a152298f24b | 1f26ae13cf974a0b2af6d8bc94c601d65e2bd98a | "2023-07-20T07:13:49Z" | python | "2023-10-24T08:54:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,702 | ["airflow/providers/amazon/aws/operators/sagemaker.py", "docs/apache-airflow-providers-amazon/operators/sagemaker.rst", "tests/providers/amazon/aws/operators/test_sagemaker_notebook.py", "tests/system/providers/amazon/aws/example_sagemaker_notebook.py"] | Support for SageMaker Notebook Operators |

### Description
Today, the Amazon provider package supports SageMaker operators for a few operations, such as training, tuning, and pipelines, but it lacks support for SageMaker notebook instances. Boto3 provides the necessary APIs to [create a notebook instance](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker/client/create_notebook_instance.html), [start a notebook instance](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker/client/start_notebook_instance.html), [stop a notebook instance](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker/client/stop_notebook_instance.html), and [delete a notebook instance](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker/client/delete_notebook_instance.html). Leveraging these APIs, we should add new operators to the SageMaker set under the Amazon provider. At the same time, a sensor (synchronous as well as deferrable) for notebook instance execution that uses [describe notebook instance](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker/client/describe_notebook_instance.html) and waits for a Stopped/Failed status would help with observability of the execution.
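As a sketch of what such a sensor's poke logic might look like — the `NotebookInstanceStatus` field follows the `describe_notebook_instance` response shape, but the function name and structure here are assumptions, not the eventual provider API:

```python
# Illustrative only: decide whether a notebook instance has reached a
# terminal state, given a describe_notebook_instance-style response dict.
TERMINAL_STATES = {"Stopped", "Failed"}


def notebook_instance_done(describe_response: dict) -> bool:
    """Return True once the instance is Stopped; raise if it Failed."""
    status = describe_response["NotebookInstanceStatus"]
    if status == "Failed":
        raise RuntimeError(
            f"Notebook instance failed: {describe_response.get('FailureReason')}"
        )
    return status in TERMINAL_STATES
```

A deferrable sensor would poll this condition via the describe API until it returns True or raises.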
### Use case/motivation
Data scientists orchestrate ML use cases via Apache Airflow. A key component of ML use cases is running Jupyter notebooks on SageMaker. Built-in operators and sensors would make it easy for Airflow users to run notebook instances on SageMaker.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32702 | https://github.com/apache/airflow/pull/33219 | 45d5f6412731f81002be7e9c86c11060394875cf | 223b41d68f53e7aa76588ffb8ba1e37e780d9e3b | "2023-07-19T19:27:24Z" | python | "2023-08-16T16:53:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,657 | ["airflow/migrations/versions/0131_2_8_0_make_connection_login_password_text.py", "airflow/models/connection.py", "airflow/utils/db.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/img/airflow_erd.svg", "docs/apache-airflow/migrations-ref.rst"] | Increase connections HTTP login length to 5000 characters |

### Description
The current length limit for the `login` parameter in an HTTP connection is 500 characters. It'd be nice if this was 5000 characters like the `password` parameter.
### Use case/motivation
We've run into an issue with an API we need to integrate with. It uses basic HTTP authentication, and both username and password are about 900 characters long each. We don't have any control over this API, so we cannot change the authentication method, nor the length of these usernames and passwords.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32657 | https://github.com/apache/airflow/pull/32815 | a169cf2c2532a8423196c8d98eede86029a9de9a | 8e38c5a4d74b86af25b018b19f7a7d90d3e7610f | "2023-07-17T17:20:44Z" | python | "2023-09-26T17:00:36Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,622 | ["airflow/decorators/base.py", "tests/decorators/test_python.py"] | When multiple-outputs gets None as return value it crashes |

### Body
Currently, when you use multiple_outputs in a decorator and it gets a None value, it crashes.
As explained in https://github.com/apache/airflow/issues/32553, a workaround for ShortCircuitOperator has been implemented in https://github.com/apache/airflow/pull/32569.
But a more complete fix for multiple_outputs handling None is needed.
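A minimal sketch of the kind of guard needed — illustrative only, not the actual decorator code: when the return value is None, skip the per-key unpacking entirely instead of iterating over it.

```python
def push_multiple_outputs(return_value, xcom_push):
    """Unpack a task's return value into per-key XComs, tolerating None."""
    if return_value is None:
        return  # nothing to unpack -- this is the case that used to crash
    if not isinstance(return_value, dict):
        raise TypeError(
            f"Returned output was type {type(return_value)}, "
            "expected dictionary for multiple_outputs"
        )
    for key, value in return_value.items():
        xcom_push(key, value)
```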
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
| https://github.com/apache/airflow/issues/32622 | https://github.com/apache/airflow/pull/32625 | ea0deaa993674ad0e4ef777d687dc13809b0ec5d | a5dd08a9302acca77c39e9552cde8ef501fd788f | "2023-07-15T07:25:42Z" | python | "2023-07-16T14:31:32Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,621 | ["docs/apache-airflow-providers-apache-beam/operators.rst"] | Apache beam operators that submits to Dataflow requires gcloud CLI |

### What do you see as an issue?
The [apache beam provider documentation](https://airflow.apache.org/docs/apache-airflow-providers-apache-beam/stable/index.html) on [Apache Beam Operators](https://airflow.apache.org/docs/apache-airflow-providers-apache-beam/stable/operators.html) does not make it clear that the operators require the gcloud CLI.
For example, `BeamRunPythonPipelineOperator` calls [provide_authorized_gcloud](https://github.com/apache/airflow/blob/providers-apache-beam/5.1.1/airflow/providers/apache/beam/operators/beam.py#L303C41-L303C66), which executes a [bash command that uses gcloud](https://github.com/apache/airflow/blob/main/airflow/providers/google/common/hooks/base_google.py#L545-L552).
### Solving the problem
A callout box in the apache beam provider documentation would be very helpful.
Something like this [callout](https://airflow.apache.org/docs/apache-airflow-providers-google/10.3.0/operators/cloud/dataflow.html) in the google provider documentation.
```
This operator requires the gcloud command (Google Cloud SDK), which `must be installed on the Airflow worker <https://cloud.google.com/sdk/docs/install>`__.
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32621 | https://github.com/apache/airflow/pull/32663 | f6bff828af28a9f7f25ef35ec77da4ca26388258 | 52d932f659d881a0b17bc1c1ba7e7bfd87d45848 | "2023-07-15T07:13:45Z" | python | "2023-07-18T11:22:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,590 | ["chart/templates/_helpers.yaml", "chart/templates/secrets/metadata-connection-secret.yaml", "chart/templates/workers/worker-kedaautoscaler.yaml", "chart/values.schema.json", "chart/values.yaml", "helm_tests/other/test_keda.py"] | When using KEDA and pgbouncer together, KEDA logs repeated prepared statement errors |

### Official Helm Chart version
1.10.0 (latest released)
### Apache Airflow version
2.6.2
### Kubernetes Version
v1.26.5-gke.1200
### Helm Chart configuration
values.pgbouncer.enabled: true
workers.keda.enabled: true
And configure a postgres database of any sort.
### Docker Image customizations
_No response_
### What happened
If KEDA is enabled in the helm chart, and pgbouncer is also enabled, KEDA will be configured to use the connection string from the worker pod to connect to the postgres database. That means it will connect to pgbouncer. Pgbouncer is configured in transaction pool mode according to the secret:
```
[pgbouncer]
pool_mode = transaction
```
And it appears that KEDA uses prepared statements in it's queries to postgres, resulting in numerous repeated errors in the KEDA logs:
```
2023-07-13T18:21:35Z ERROR postgresql_scaler could not query postgreSQL: ERROR: prepared statement "stmtcache_47605" does not exist (SQLSTATE 26000) {"type": "ScaledObject", "namespace": "airflow-sae-int", "name": "airflow-sae-int-worker", "error": "ERROR: prepared statement \"stmtcache_47605\" does not exist (SQLSTATE 26000)"}
```
Now KEDA still works, as it does the query again without the prepared statement, but this is not ideal and results in a ton of error logging.
### What you think should happen instead
I suggest having the KEDA connection go direct to the upstream configured postgresql server, as it's only one connection, instead of using pgbouncer.
### How to reproduce
Enabled KEDA for workers, and pgbouncer at the same time.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32590 | https://github.com/apache/airflow/pull/32608 | 51052bbbce159340e962e9fe40b6cae6ce05ab0c | f7ad549f2d7119a6496e3e66c43f078fbcc98ec1 | "2023-07-13T18:25:32Z" | python | "2023-07-15T20:52:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,551 | ["airflow/providers/snowflake/operators/snowflake.py", "tests/providers/snowflake/operators/test_snowflake.py"] | SnowflakeValueCheckOperator - database, warehouse, schema parameters doesn't ovveride connection |

### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We are using Airflow 2.6.0 with Airflow Snowflake Provider 4.3.0.
When we add the database, schema, and warehouse parameters to SnowflakeOperator, they all override the extra part of the Snowflake connection definition. With the same set of parameters, SnowflakeValueCheckOperator overrides none of them.
### What you think should happen instead
Going through the Snowflake provider source code, we found that for SnowflakeOperator the hook_params are created before the parent class init. It looks like this:
```
if any([warehouse, database, role, schema, authenticator, session_parameters]):
hook_params = kwargs.pop("hook_params", {})
kwargs["hook_params"] = {
"warehouse": warehouse,
"database": database,
"role": role,
"schema": schema,
"authenticator": authenticator,
"session_parameters": session_parameters,
**hook_params,
}
super().__init__(conn_id=snowflake_conn_id, **kwargs)
```
For SnowflakeValueCheckOperator, the parent class init runs before the class arguments are initialized:
```
super().__init__(sql=sql, parameters=parameters, conn_id=snowflake_conn_id, **kwargs)
self.snowflake_conn_id = snowflake_conn_id
self.sql = sql
self.autocommit = autocommit
self.do_xcom_push = do_xcom_push
self.parameters = parameters
self.warehouse = warehouse
self.database = database
```
The hook used in SnowflakeValueCheckOperator (and probably in the rest of the classes) is likely initialized from the connection values, so the overrides never take effect.
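The order-of-initialization problem can be demonstrated with a minimal stand-in for the two patterns above (hypothetical classes, not the provider code):

```python
class Base:
    """Stand-in for the SQL operator base: hook config is fixed at __init__."""

    def __init__(self, hook_params=None, **kwargs):
        self.hook_params = hook_params or {}


class BrokenOperator(Base):
    """Mirrors SnowflakeValueCheckOperator: attrs set after super().__init__()."""

    def __init__(self, database=None, **kwargs):
        super().__init__(**kwargs)  # hook_params are already settled here
        self.database = database    # assigned too late to reach the hook


class FixedOperator(Base):
    """Mirrors SnowflakeOperator: hook_params built before super().__init__()."""

    def __init__(self, database=None, **kwargs):
        hook_params = kwargs.pop("hook_params", {})
        if database:
            kwargs["hook_params"] = {"database": database, **hook_params}
        super().__init__(**kwargs)
```

With this stand-in, `FixedOperator(database="TEST_DB")` ends up with the override in its hook config, while `BrokenOperator` does not.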
### How to reproduce
Create a connection with a database and warehouse different from TEST_DB and TEST_WH. The table `dual` should exist only in TEST_DB.TEST_SCHEMA and not in the connection's db/schema.
```python
import jinja2
import pendulum
from datetime import timedelta
from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator
from airflow.providers.snowflake.operators.snowflake import SnowflakeValueCheckOperator

warehouse = 'TEST_WH'
database = 'TEST_DB'
schema = 'TEST_SCHEMA'

args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email_on_failure': True,
    'email_on_retry': False,
    'start_date': pendulum.now(tz='Europe/Warsaw').add(months=-1),
    'retries': 0,
    'concurrency': 10,
    'dagrun_timeout': timedelta(hours=24)
}

# dag_id, sequence_id, schedule and tags are defined elsewhere
with DAG(
    dag_id=dag_id,
    template_undefined=jinja2.Undefined,
    default_args=args,
    description='Sequence ' + sequence_id,
    schedule=schedule,
    max_active_runs=10,
    catchup=False,
    tags=tags
) as dag:
    value_check_task = SnowflakeValueCheckOperator(
        task_id='value_check_task',
        sql='select 1 from dual',
        snowflake_conn_id='con_snowflake_zabka',
        warehouse=warehouse,
        database=database,
        schema=schema,
        pass_value=1
    )
    snowflake_export_data_task = SnowflakeOperator(
        task_id='snowflake_export_data_task',
        snowflake_conn_id='con_snowflake',
        sql=f"select 1 from dual",
        warehouse=warehouse,
        database=database,
        schema=schema
    )
```
### Operating System
Ubuntu 20.04.5 LTS
### Versions of Apache Airflow Providers
apache-airflow 2.6.0
apache-airflow-providers-celery 3.1.0
apache-airflow-providers-common-sql 1.4.0
apache-airflow-providers-ftp 3.3.1
apache-airflow-providers-http 4.3.0
apache-airflow-providers-imap 3.1.1
apache-airflow-providers-microsoft-azure 6.1.1
apache-airflow-providers-odbc 3.2.1
apache-airflow-providers-oracle 3.6.0
apache-airflow-providers-postgres 5.4.0
apache-airflow-providers-redis 3.1.0
apache-airflow-providers-snowflake 4.3.0
apache-airflow-providers-sqlite 3.3.2
apache-airflow-providers-ssh 3.6.0
### Deployment
Virtualenv installation
### Deployment details
Python 3.9.5
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32551 | https://github.com/apache/airflow/pull/32605 | 2b0d88e450f11af8e447864ca258142a6756126d | 2ab78ec441a748ae4d99e429fe336b80a601d7b1 | "2023-07-12T11:00:55Z" | python | "2023-07-31T19:21:00Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,503 | ["airflow/www/utils.py", "tests/www/views/test_views_tasks.py"] | execution date is missing from Task Instance tooltip |

### Apache Airflow version
main (development)
### What happened
It seems [this](https://github.com/apache/airflow/commit/e16207409998b38b91c1f1697557d5c229ed32d1) commit has made the task instance execution date disappear from the task instance tooltip completely:

Note the missing `Run: <execution date>` between Task_id and Run_id.
I think there's a problem with the task instance execution date, because it's always `undefined`. In an older version of Airflow (2.4.3), I can see that the tooltip always shows the **current** datetime instead of the actual execution date, which is what the author of the commit identified in the first place I think.
### What you think should happen instead
The tooltip should properly show the task instance's execution date, not the current datetime (or nothing). There's a deeper problem here that causes `ti.execution_date` to be `undefined`.
### How to reproduce
Run the main branch of Airflow, with a simple DAG that finishes a run successfully. Go to the Graph view of a DAG and hover over any completed task with the mouse.
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32503 | https://github.com/apache/airflow/pull/32527 | 58e21c66fdcc8a416a697b4efa852473ad8bd6fc | ed689f2be90cc8899438be66e3c75c3921e156cb | "2023-07-10T21:02:35Z" | python | "2023-07-25T06:53:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,499 | ["airflow/providers/google/cloud/hooks/dataproc.py", "airflow/providers/google/cloud/operators/dataproc.py", "tests/providers/google/cloud/hooks/test_dataproc.py", "tests/providers/google/cloud/operators/test_dataproc.py"] | Add filtering to DataprocListBatchesOperator |

### Description
The Python Google Cloud Dataproc API version has been updated in Airflow to support filtering on the Dataproc Batches API. The DataprocListBatchesOperator can be updated to make use of this filtering.
Currently, DataprocListBatchesOperator returns, and populates XCom with, every job run in the project. This will almost surely fail, as the returned object is large and XCom storage is limited, especially with MySQL. Filtering on job prefix and create_time would be much more useful.
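For illustration, a filter string could be composed along these lines — the field names (`batch_id`, `create_time`) and comparison syntax here are assumptions for the sketch, not the confirmed Dataproc API surface:

```python
# Hypothetical helper composing a list-batches filter string.
def build_batch_filter(batch_id_prefix=None, created_after=None):
    """Combine optional prefix and create-time clauses into one filter."""
    clauses = []
    if batch_id_prefix:
        clauses.append(f'batch_id = "{batch_id_prefix}*"')
    if created_after:
        clauses.append(f'create_time > "{created_after}"')
    return " AND ".join(clauses)
```

The operator would pass the resulting string through to the list-batches call, so only matching jobs land in XCom.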
### Use case/motivation
The ability to filter lists of GCP Dataproc Batches jobs.
### Related issues
None
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32499 | https://github.com/apache/airflow/pull/32500 | 3c14753b03872b259ce2248eda92f7fb6f4d751b | 99b8a90346b8826756ac165b73464a701e2c33aa | "2023-07-10T19:47:11Z" | python | "2023-07-20T18:24:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,491 | ["BREEZE.rst", "dev/breeze/src/airflow_breeze/commands/release_management_commands.py", "dev/breeze/src/airflow_breeze/commands/release_management_commands_config.py", "dev/breeze/src/airflow_breeze/utils/add_back_references.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_release-management.svg", "images/breeze/output_release-management_add-back-references.svg", "images/breeze/output_setup_check-all-params-in-groups.svg", "images/breeze/output_setup_regenerate-command-images.svg"] | Implement `breeze publish-docs` command |

### Body
We need a small improvement for our docs publishing process.
We currently have those two scripts:
* [x] docs/publish_docs.py https://github.com/apache/airflow/blob/main/docs/publish_docs.py in airflow repo
* [ ] post-docs/ in airflow-site https://github.com/apache/airflow-site/blob/main/post-docs/add-back-references.py
We have currently the steps that are describing how to publish the documentation in our release documentation:
* https://github.com/apache/airflow/blob/main/dev/README_RELEASE_AIRFLOW.md
* https://github.com/apache/airflow/blob/main/dev/README_RELEASE_PROVIDER_PACKAGES.md
* https://github.com/apache/airflow/blob/main/dev/README_RELEASE_HELM_CHART.md
This is the "Publish documentation" chapter
They currently consists of few steps:
1) checking out the main in "airflow-sites"
2) setting the AIRFLOW_SITE_DIRECTORY env variable to the checked out repo
3) building docs (with `breeze build-docs`)
4) Running publish_docs.py scripts in docs that copies the generated docs to "AIRFLOW_SITE_DIRECTORY"
5) **I just added those** running post-docs post-processing for back references
6) Committing the changes and pushing them to airflow-site
(there are a few variants of these, depending on which docs you are building).
The problem with that is that it requires several venvs to setup independently (and they might sometimes miss stuff) and those commands are distributed across repositories.
The goal of the change is to replace publish + post-docs with single, new breeze command - similarly as we have "build-docs" now.
I imagine this command should be similar to:
```
breeze publish-docs --airflow-site-directory DIRECTORY --package-filter .... and the rest of other arguments that publish_docs.py has
```
This command should copy the files and run post-processing on back-references (depending which package documentation we publish).
Then the process of publish docs should like:
1) checking out the main in "airflow-sites"
2) setting the AIRFLOW_SITE_DIRECTORY env variable to the checked out repo
3) building docs (with `breeze build-docs`)
4) publishing docs (with `breeze publish-docs`)
5) Committing the changes and pushing them to airflow-site
The benefits:
* no separate venvs to manage (all done in breeze's env) - automatically managed
* nicer integration in our dev/CI environment
* all code for publishing docs in one place - in breeze
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
| https://github.com/apache/airflow/issues/32491 | https://github.com/apache/airflow/pull/32594 | 1a1753c7246a2b35b993aad659f5551afd7e0215 | 945f48a1fdace8825f3949e5227bed0af2fd38ff | "2023-07-10T13:20:05Z" | python | "2023-07-14T16:36:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,412 | ["setup.py"] | Click 8.1.4 breaks mypy typing checks |

### Body
Click 8.1.4, released 06.06.2023, broke a number of mypy checks. Until the problem is fixed, we need to limit click to unbreak main.
Example failing job: https://github.com/apache/airflow/actions/runs/5480089808/jobs/9983219862
Example failures:
```
dev/breeze/src/airflow_breeze/utils/common_options.py:78: error: Need type
annotation for "option_verbose" [var-annotated]
option_verbose = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:89: error: Need type
annotation for "option_dry_run" [var-annotated]
option_dry_run = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:100: error: Need type
annotation for "option_answer" [var-annotated]
option_answer = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:109: error: Need type
annotation for "option_github_repository" [var-annotated]
option_github_repository = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:118: error: Need type
annotation for "option_python" [var-annotated]
option_python = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:127: error: Need type
annotation for "option_backend" [var-annotated]
option_backend = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:136: error: Need type
annotation for "option_integration" [var-annotated]
option_integration = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:142: error: Need type
annotation for "option_postgres_version" [var-annotated]
option_postgres_version = click.option(
^
dev/breeze/src/airflow_breeze/utils/common_options.py:150: error: Need type
annotation for "option_mysql_version" [var-annotated]
option_mysql_version = click.option(
^
```
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
| https://github.com/apache/airflow/issues/32412 | https://github.com/apache/airflow/pull/32634 | 7092cfdbbfcfd3c03909229daa741a5bcd7ccc64 | 7123dc162bb222fdee7e4c50ae8a448c43cdd7d3 | "2023-07-06T21:54:23Z" | python | "2023-07-20T04:30:54Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,367 | ["airflow/api_connexion/endpoints/xcom_endpoint.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/xcom_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_xcom_endpoint.py", "tests/api_connexion/schemas/test_xcom_schema.py", "tests/conftest.py"] | Unable to get mapped task xcom value via REST API. Getting MultipleResultsFound error |

### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow 2.3.4 (but the current code seems to have the same behaviour).
I have a mapped task with an XCom value.
I want to get the XCom value of a particular task instance, or the XCom values of all task instances.
I am using the standard REST API method /dags/{dag_id}/dagRuns/{dag_run_id}/taskInstances/{task_id}/xcomEntries/{xcom_key},
and it throws an error:
```
  File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2850, in one_or_none
    return self._iter().one_or_none()
  File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/result.py", line 1510, in one_or_none
    return self._only_one_row(
  File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/result.py", line 614, in _only_one_row
    raise exc.MultipleResultsFound(
sqlalchemy.exc.MultipleResultsFound: Multiple rows were found when one or none was required
```
Is there any way of getting the XCom of mapped tasks via the API?
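The failure mode can be illustrated with plain Python, using dicts as stand-ins for the XCom rows: the endpoint's query expected at most one row per key, but a mapped task stores one row per map_index, so the lookup needs a map_index filter (this is a sketch of the logic, not the endpoint code):

```python
def get_xcom_value(rows, map_index=None):
    """rows: list of dicts with 'map_index' and 'value' (stand-ins for DB rows)."""
    if map_index is None:
        # The endpoint's original one-or-none behaviour: fine for unmapped
        # tasks (a single row with map_index == -1), crashes for mapped ones.
        if len(rows) > 1:
            raise RuntimeError(
                "Multiple rows were found when one or none was required"
            )
        return rows[0]["value"] if rows else None
    # Filtering by map_index restores a unique result for mapped tasks.
    matches = [r["value"] for r in rows if r["map_index"] == map_index]
    return matches[0] if matches else None
```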
### What you think should happen instead
_No response_
### How to reproduce
Make a DAG with a mapped task that returns an XCom value from every task instance, then try to get an XCom value via the API.
### Operating System
ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Standard
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32367 | https://github.com/apache/airflow/pull/32453 | 2aa3cfb6abd10779029b0c072493a1c1ed820b77 | bc97646b262e7f338b4f3d4dce199e640e24e250 | "2023-07-05T10:16:02Z" | python | "2023-07-10T08:34:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,330 | ["airflow/providers/amazon/aws/hooks/glue_crawler.py", "tests/providers/amazon/aws/hooks/test_glue_crawler.py"] | AWS GlueCrawlerOperator deletes existing tags on run |

### Apache Airflow version
2.6.2
### What happened
We are currently on AWS provider 6.0.0 and looking to upgrade to the latest version, 8.2.0. However, there are some issues with the GlueCrawlerOperator making the upgrade challenging, namely that the operator attempts to update the crawler tags on every run. Because we manage our resource tagging through Terraform, we do not provide any tags to the operator, which results in all of the tags being deleted (and additional `glue:GetTags` and `glue:UntagResource` permissions having to be added to the relevant IAM roles just to run the crawler).
It seems strange that the default behaviour of the operator has been changed to make modifications to infrastructure, especially as this differs from the GlueJobOperator, which only performs updates when certain parameters are set. Potentially something similar could be done here, where if no `Tags` key is present in the `config` dict they aren't modified at all. Not sure what the best approach is.
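The proposed guard could look like this minimal sketch — illustrative only, not the provider implementation: tags are only touched when the user explicitly passes a `Tags` key in the config.

```python
def resolve_crawler_tags(config: dict, current_tags: dict) -> dict:
    """Return the tags the crawler should end up with after an update."""
    if "Tags" not in config:
        return current_tags   # no opinion expressed -> leave existing tags alone
    return config["Tags"]     # explicit request, even {} (clear all tags)
```

With this behaviour, an operator invoked without tags leaves Terraform-managed tags intact, while a user who genuinely wants to clear tags can still pass `Tags: {}`.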
### What you think should happen instead
The crawler should run without any alterations to the existing infrastructure
### How to reproduce
Run a GlueCrawlerOperator without tags in config, against a crawler with tags present
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
Amazon 8.2.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32330 | https://github.com/apache/airflow/pull/32331 | 9a0f41ba53185031bc2aa56ead2928ae4b20de99 | 7a3bc8d7c85448447abd39287ef6a3704b237a90 | "2023-07-03T13:40:23Z" | python | "2023-07-06T11:09:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,301 | ["airflow/serialization/pydantic/dataset.py"] | = instead of : in type hints - failing Pydantic 2 |

### Apache Airflow version
2.6.2
### What happened
Airflow doesn't work correctly with Pydantic 2 (released on 30th of June); it raises:
```
pydantic.errors.PydanticUserError: A non-annotated attribute was detected: `dag_id = <class 'str'>`. All model fields require a type annotation; if `dag_id` is not meant to be a field, you may be able to resolve this error by annotating it as a `ClassVar` or updating `model_config['ignored_types']`.
```
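The distinction Pydantic 2 is complaining about can be shown with plain Python classes (no Pydantic needed for the illustration): `=` creates an ordinary class attribute whose value happens to be a type, while `:` creates the type annotation Pydantic 2 requires for model fields.

```python
class Assigned:
    dag_id = str    # class attribute whose *value* is the str type -> rejected

class Annotated:
    dag_id: str     # a real type annotation, which Pydantic 2 requires
```

In the failing serialization models, `dag_id = str` needs to become `dag_id: str`.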
### What you think should happen instead
_No response_
### How to reproduce
just install apache-airflow and run `airflow db init`
### Operating System
Ubuntu 22.04.2 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32301 | https://github.com/apache/airflow/pull/32307 | df4c8837d022e66921bc0cf33f3249b235de6fdd | 4d84e304b86c97d0437fddbc6b6757b5201eefcc | "2023-07-01T12:00:53Z" | python | "2023-07-01T21:41:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,294 | ["airflow/models/renderedtifields.py"] | K8 executor throws MySQL DB error 'Deadlock found when trying to get lock; try restarting transaction' |

### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Apache Airflow version: 2.6.1
When multiple runs for a DAG execute simultaneously, the K8s executor fails with the following MySQL exception:
```
Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 1407, in _run_raw_task
    self._execute_task_with_callbacks(context, test_mode)
  File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 1534, in _execute_task_with_callbacks
    RenderedTaskInstanceFields.write(rtif)
  File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 75, in wrapper
    with create_session() as session:
  File "/usr/local/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 37, in create_session
    session.commit()
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1454, in commit
    self._transaction.commit(_to_root=self.future)
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 832, in commit
    self._prepare_impl()
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 811, in _prepare_impl
    self.session.flush()
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3449, in flush
    self._flush(objects)
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3588, in _flush
    with util.safe_reraise():
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
    compat.raise_(
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
    raise exception
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3549, in _flush
    flush_context.execute()
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
    rec.execute(self)
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
    util.preloaded.orm_persistence.save_obj(
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 237, in save_obj
    _emit_update_statements(
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 1001, in _emit_update_statements
    c = connection._execute_20(
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1710, in _execute_20
    return meth(self, args_10style, kwargs_10style, execution_options)
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
    return connection._execute_clauseelement(
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1577, in _execute_clauseelement
    ret = self._execute_context(
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1948, in _execute_context
    self._handle_dbapi_exception(
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2129, in _handle_dbapi_exception
    util.raise_(
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
    raise exception
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1905, in _execute_context
    self.dialect.do_execute(
  File "/home/airflow/.local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
    cursor.execute(statement, parameters)
  File "/home/airflow/.local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 206, in execute
    res = self._query(query)
  File "/home/airflow/.local/lib/python3.10/site-packages/MySQLdb/cursors.py", line 319, in _query
    db.query(q)
  File "/home/airflow/.local/lib/python3.10/site-packages/MySQLdb/connections.py", line 254, in query
    _mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
SQL: UPDATE rendered_task_instance_fields SET k8s_pod_yaml=%s WHERE rendered_task_instance_fields.dag_id = %s AND rendered_task_instance_fields.task_id = %s AND rendered_task_instance_fields.run_id = %s AND rendered_task_instance_fields.map_index = %s
```
### What you think should happen instead
K8 executor should complete processing successfully without error
### How to reproduce
Trigger multiple runs of the same dag simultaneously so that tasks under the dag get executed around the same time
### Operating System
Airflow docker image tag 2.6.1-python3.10
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
User community airflow-helm chart
https://github.com/airflow-helm
### Anything else
This occurs consistently. If multiple runs of the DAG are executed a few minutes apart, the K8s executor completes successfully.
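For context on the "try restarting transaction" hint in the error above: MySQL deadlocks are transient, so the transaction that loses the lock can simply be retried. A minimal retry sketch (hypothetical helper, not Airflow's actual code) looks like this:

```python
import functools
import time


class DeadlockError(Exception):
    """Stand-in for MySQLdb.OperationalError (1213, 'Deadlock found ...')."""


def retry_on_deadlock(retries=3, backoff=0.01):
    # Hypothetical helper: retry the losing transaction a few times with a
    # short, growing backoff before giving up and re-raising.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return fn(*args, **kwargs)
                except DeadlockError:
                    if attempt == retries - 1:
                        raise
                    time.sleep(backoff * (attempt + 1))
        return wrapper
    return decorator


calls = {"n": 0}


@retry_on_deadlock(retries=3, backoff=0)
def write_rendered_fields():
    # Simulates the UPDATE on rendered_task_instance_fields losing the
    # deadlock twice before the third attempt commits.
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError("1213, Deadlock found when trying to get lock")
    return "committed"
```

The real fix would belong inside Airflow's session handling rather than in user code, but the retry shape is the same.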
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32294 | https://github.com/apache/airflow/pull/32341 | e53320d62030a53c6ffe896434bcf0fc85803f31 | c8a3c112a7bae345d37bb8b90d68c8d6ff2ef8fc | "2023-06-30T22:51:45Z" | python | "2023-07-05T11:28:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,290 | ["airflow/www/views.py", "tests/www/views/test_views_tasks.py"] | Try number is incorrect | ### Apache Airflow version
2.6.2
### What happened
All tasks ran once, yet the try number shown is 2 for all tasks.
### What you think should happen instead
Try number should be 1 if only tried 1 time
### How to reproduce
Run a task and use the UI to look up try number
### Operating System
centos
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32290 | https://github.com/apache/airflow/pull/32361 | 43f3e57bf162293b92154f16a8ce33e6922fbf4e | a8e4b8aee602e8c672ab879b7392a65b5c2bb34e | "2023-06-30T16:45:01Z" | python | "2023-07-05T08:30:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,283 | ["airflow/models/dagrun.py", "tests/models/test_dagrun.py"] | EmptyOperator in dynamically mapped TaskGroups does not respect upstream dependencies | ### Apache Airflow version
2.6.2
### What happened
When using an EmptyOperator in dynamically mapped TaskGroups (https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/dynamic-task-mapping.html#mapping-over-a-task-group), the EmptyOperator of all branches starts as soon as the first upstream task dependency of the EmptyOperator **in any branch** completes. This causes downstream tasks of the EmptyOperator to start prematurely in all branches, breaking depth-first execution of the mapped TaskGroup.
I have provided a test for this behavior below, by introducing an artificial wait time in a `variable_task`, followed by an `EmptyOperator` in `checkpoint` and a `final` dependent task .

Running this test, during the execution I see this: The `checkpoint` and `final` tasks are already complete, while the upstream `variable_task` in the group is still running.

I have measured the difference of time when of each the branches' `final` tasks execute, and compared them, to cause a failure condition, which you can see failing here in the `assert_branch_waited` task.
By using just a regular Task, one gets the correct behavior.
### What you think should happen instead
In each branch separately, the `EmptyOperator` should wait for its dependency to complete, before it starts. This would be the same behavior as using a regular `Task` for `checkpoint`.
### How to reproduce
Here are test cases in two dags, one with an `EmptyOperator`, showing incorrect behavior, one with a `Task` in sequence instead of the `EmptyOperator`, that has correct behavior.
```python
import time
from datetime import datetime
from airflow import DAG
from airflow.decorators import task, task_group
from airflow.models import TaskInstance
from airflow.operators.empty import EmptyOperator
branches = [1, 2]
seconds_difference_expected = 10
for use_empty_operator in [False, True]:
dag_id = "test-mapped-group"
if use_empty_operator:
dag_id += "-with-emptyoperator"
else:
dag_id += "-no-emptyoperator"
with DAG(
dag_id=dag_id,
schedule=None,
catchup=False,
start_date=datetime(2023, 1, 1),
default_args={"retries": 0},
) as dag:
@task_group(group_id="branch_run")
def mapped_group(branch_number):
"""Branch 2 will take > `seconds_difference_expected` seconds, branch 1 will be immediately executed"""
@task(dag=dag)
def variable_task(branch_number):
"""Waits `seconds_difference_expected` seconds for branch 2"""
if branch_number == 2:
time.sleep(seconds_difference_expected)
return branch_number
variable_task_result = variable_task(branch_number)
if use_empty_operator:
# emptyoperator as a checkpoint
checkpoint_result = EmptyOperator(task_id="checkpoint")
else:
@task
def checkpoint():
pass
checkpoint_result = checkpoint()
@task(dag=dag)
def final(ti: TaskInstance = None):
"""Return the time at the task execution"""
return datetime.now()
final_result = final()
variable_task_result >> checkpoint_result >> final_result
return final_result
@task(dag=dag)
def assert_branch_waited(times):
"""Check that the difference of the start times of the final step in each branch
are at least `seconds_difference_expected`, i.e. the branch waited for all steps
"""
seconds_difference = abs(times[1] - times[0]).total_seconds()
if not seconds_difference >= seconds_difference_expected:
raise RuntimeError(
"Branch 2 completed too fast with respect to branch 1: "
+ f"observed [seconds difference]: {seconds_difference}; "
+ f"expected [seconds difference]: >= {seconds_difference_expected}"
)
mapping_results = mapped_group.expand(branch_number=branches)
assert_branch_waited(mapping_results)
```
### Operating System
Debian GNU/Linux 11 (bullseye) on docker (official image)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32283 | https://github.com/apache/airflow/pull/32354 | a8e4b8aee602e8c672ab879b7392a65b5c2bb34e | 7722b6f226e9db3a89b01b89db5fdb7a1ab2256f | "2023-06-30T11:22:15Z" | python | "2023-07-05T08:38:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,280 | ["airflow/providers/amazon/aws/hooks/redshift_data.py", "airflow/providers/amazon/aws/operators/redshift_data.py", "tests/providers/amazon/aws/hooks/test_redshift_data.py", "tests/providers/amazon/aws/operators/test_redshift_data.py"] | RedshiftDataOperator: Add support for Redshift serverless clusters | ### Description
This feature adds support for Redshift Serverless clusters for the given operator.
### Use case/motivation
RedshiftDataOperator currently only supports provisioned clusters since it has the capability of adding `ClusterIdentifier` as a parameter but not `WorkgroupName`. The addition of this feature would help users to use this operator for their serverless cluster as well.
### Related issues
None
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32280 | https://github.com/apache/airflow/pull/32785 | d05e42e5d2081909c9c33de4bd4dfb759ac860c1 | 8012c9fce64f152b006f88497d65ea81d29571b8 | "2023-06-30T08:51:53Z" | python | "2023-07-24T17:09:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,279 | ["airflow/api/common/airflow_health.py", "airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/health_schema.py", "airflow/www/static/js/types/api-generated.ts", "docs/apache-airflow/administration-and-deployment/logging-monitoring/check-health.rst", "tests/api/common/test_airflow_health.py"] | Add DagProcessor status to health endpoint. | ### Description
Add DagProcessor status including latest heartbeat to health endpoint similar to Triggerer status added recently. Related PRs.
https://github.com/apache/airflow/pull/31529
https://github.com/apache/airflow/pull/27755
### Use case/motivation
It helps in dag processor monitoring
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32279 | https://github.com/apache/airflow/pull/32382 | bb97bf21fd320c593b77246590d4f8d2b0369c24 | b3db4de4985eccb859a30a07a2350499370c6a9a | "2023-06-30T08:42:33Z" | python | "2023-07-06T23:10:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,260 | ["airflow/models/expandinput.py", "tests/models/test_mappedoperator.py"] | Apparently the Jinja template does not work when using dynamic task mapping with SQLExecuteQueryOperator | ### Apache Airflow version
2.6.2
### What happened
We are trying to use dynamic task mapping with SQLExecuteQueryOperator on Trino. Our use case is to expand the sql parameter to the operator by calling some SQL files.
Without dynamic task mapping it works perfectly, but with dynamic task mapping it does not recognize the path as a templated SQL file and instead tries to execute the path string itself as the query.
I believe it has some relation with the template_searchpath parameter.
### What you think should happen instead
It should work the same with or without dynamic task mapping: the `.sql` file should be resolved and rendered in both cases.
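Conceptually, the expected behavior hinges on Airflow's template-extension handling: fields whose value ends in a registered `template_ext` are replaced by the file's contents, looked up under `template_searchpath`. A stdlib-only sketch of that resolution (not Airflow's actual implementation) is:

```python
import tempfile
from pathlib import Path

# SQLExecuteQueryOperator registers ".sql" as a template extension.
TEMPLATE_EXT = (".sql",)


def render_sql_field(value, searchpath):
    # Conceptual sketch: when a templated field ends with a registered
    # template extension, the file's contents (resolved against
    # template_searchpath) replace the path before execution.
    if value.endswith(TEMPLATE_EXT):
        return Path(searchpath, value).read_text()
    return value  # otherwise the literal string is sent to the database as-is


# Demo setup: a throwaway search path containing queries/insert.sql.
root = Path(tempfile.mkdtemp())
(root / "queries").mkdir()
(root / "queries" / "insert.sql").write_text("SELECT 1")
```

The bug report amounts to the second branch being taken for values supplied through `expand_kwargs`, so the raw path reaches Trino as the query text.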
### How to reproduce
Deployed the following DAG in Airflow
```
from airflow.models import DAG
from datetime import datetime, timedelta
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator
DEFAULT_ARGS = {
'start_date': datetime(2023, 7, 16),
}
with DAG (dag_id= 'trino_dinamic_map',
template_searchpath = '/opt/airflow',
description = "This is a DAG for the example project",
schedule = None,
default_args = DEFAULT_ARGS,
catchup=False,
) as dag:
trino_call = SQLExecuteQueryOperator(
task_id= 'trino_call',
conn_id='con_id',
sql = 'queries/insert_delta_dp_raw_table1.sql',
handler=list
)
trino_insert = SQLExecuteQueryOperator.partial(
task_id="trino_insert_table",
conn_id='con_id',
handler=list
).expand_kwargs([{'sql': 'queries/insert_delta_dp_raw_table1.sql'}, {'sql': 'queries/insert_delta_dp_raw_table2.sql'}, {'sql': 'queries/insert_delta_dp_raw_table3.sql'}])
trino_call >> trino_insert
```
In the sql file it can be any query, for the test I used a create table. Queries are located in /opt/airflow/queries
```
CREATE TABLE database_config.data_base_name.TABLE_NAME (
"JOB_NAME" VARCHAR(60) NOT NULL,
"JOB_ID" DECIMAL NOT NULL,
"JOB_STATUS" VARCHAR(10),
"JOB_STARTED_DATE" VARCHAR(10),
"JOB_STARTED_TIME" VARCHAR(10),
"JOB_FINISHED_DATE" VARCHAR(10),
"JOB_FINISHED_TIME" VARCHAR(10)
)
```
task_1 (without dynamic task mapping) completes successfully, while task_2 (with dynamic task mapping) fails.
Looking at the error logs, the failure occurred while executing the query: the engine received the path string itself rather than the rendered file contents.
Here is the traceback:
trino.exceptions.TrinoUserError: TrinoUserError(type=USER_ERROR, name=SYNTAX_ERROR, message="line 1:1: mismatched input 'queries'. Expecting: 'ALTER', 'ANALYZE', 'CALL', 'COMMENT', 'COMMIT', 'CREATE', 'DEALLOCATE', 'DELETE', 'DENY', 'DESC', 'DESCRIBE', 'DROP', 'EXECUTE', 'EXPLAIN', 'GRANT', 'INSERT', 'MERGE', 'PREPARE', 'REFRESH', 'RESET', 'REVOKE', 'ROLLBACK', 'SET', 'SHOW', 'START', 'TRUNCATE', 'UPDATE', 'USE', <query>", query_id=20230629_114146_04418_qbcnd)
### Operating System
Red Hat Enterprise Linux 8.8
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32260 | https://github.com/apache/airflow/pull/32272 | 58eb19fe7669b61d0a00bcc82df16adee379a233 | d1e6a5c48d03322dda090113134f745d1f9c34d4 | "2023-06-29T12:31:44Z" | python | "2023-08-18T19:17:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,227 | ["airflow/providers/amazon/aws/hooks/lambda_function.py", "airflow/providers/amazon/aws/operators/lambda_function.py", "tests/providers/amazon/aws/hooks/test_lambda_function.py", "tests/providers/amazon/aws/operators/test_lambda_function.py"] | LambdaInvokeFunctionOperator expects wrong type for payload arg | ### Apache Airflow version
2.6.2
### What happened
I instantiate LambdaInvokeFunctionOperator in my DAG.
I want to call the lambda function with some payload. Following the code example from the official documentation, I created a dict and passed its JSON string version to the operator:
```
d = {'key': 'value'}
invoke_lambda_task = LambdaInvokeFunctionOperator(..., payload=json.dumps(d))
```
When I executed the DAG, this task failed. I got the following error message:
```
Invalid type for parameter Payload, value: {'key': 'value'}, type: <class 'dict'>, valid types: <class 'bytes'>, <class 'bytearray'>, file-like object
```
Then I went to official boto3 documentation, and found out that indeed, the payload parameter type is `bytes` or file. See https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda/client/invoke.html
### What you think should happen instead
To preserve backward compatibility, the operator should encode the payload argument if it is a str, but also accept bytes or a file-like object, in which case it should be passed through as is.
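The proposed handling can be sketched as follows (hypothetical helper, not the provider's code): str payloads from existing DAGs keep working by being encoded, while bytes and bytearray pass through untouched, matching what boto3's `invoke()` accepts for `Payload`.

```python
def normalize_payload(payload):
    # Encode str payloads for backward compatibility; leave bytes-like
    # values (and file-like objects) untouched for boto3 to consume.
    if isinstance(payload, str):
        return payload.encode("utf-8")
    return payload
```

With this in place, both `payload=json.dumps(d)` and `payload=json.dumps(d).encode()` would be valid.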
### How to reproduce
1. Create lambda function in AWS
2. Create a simple DAG with LambdaInvokeFunctionOperator
3. Pass a str value in the payload parameter; use json.dumps() with a simple dict, as the Lambda expects a JSON payload
4. Run the DAG; the task is expected to fail
### Operating System
ubuntu
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==7.3.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32227 | https://github.com/apache/airflow/pull/32259 | 88da71ed1fdffc558de28d5c3fd78e5ae1ac4e8c | 5c72befcfde63ade2870491cfeb708675399d9d6 | "2023-06-28T09:13:24Z" | python | "2023-07-03T06:45:24Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,215 | ["airflow/providers/google/cloud/operators/dataproc.py"] | DataprocCreateBatchOperator in deferrable mode doesn't reattach with deferment. | ### Apache Airflow version
main (development)
### What happened
The DataprocCreateBatchOperator (Google provider) handles the case when a batch_id already exists in the Dataproc API by 'reattaching' to a potentially running job.
Current reattachment logic uses the non-deferrable method even when the operator is in deferrable mode.
### What you think should happen instead
The operator should reattach in deferrable mode.
### How to reproduce
Create a DAG with a task of DataprocCreateBatchOperator that is long running. Make DataprocCreateBatchOperator deferrable in the constructor.
Restart local Airflow to simulate having to 'reattach' to a running job in Google Cloud Dataproc.
The operator resumes using the running job, but through the code path for the non-deferrable logic.
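The expected control flow can be sketched as follows (hypothetical names, not the provider's actual code): on reattachment the operator should branch on its own `deferrable` flag instead of always falling back to synchronous polling.

```python
class StubOperator:
    # Minimal stand-in for DataprocCreateBatchOperator's relevant state.
    def __init__(self, deferrable):
        self.deferrable = deferrable


def reattach_mode(operator):
    # Desired behavior in sketch form: when resuming an already-submitted
    # batch, honor the deferrable flag rather than always polling inline.
    return "defer_to_trigger" if operator.deferrable else "poll_inline"
```

Deferring on reattach frees the worker slot while a trigger waits for the batch, which is the whole point of deferrable mode.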
### Operating System
macOS 13.4.1 (22F82)
### Versions of Apache Airflow Providers
Current main.
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32215 | https://github.com/apache/airflow/pull/32216 | f2e2125b070794b6a66fb3e2840ca14d07054cf2 | 7d2ec76c72f70259b67af0047aa785b28668b411 | "2023-06-27T20:09:11Z" | python | "2023-06-29T13:51:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,203 | ["airflow/auth/managers/fab/views/roles_list.py", "airflow/www/fab_security/manager.py", "airflow/www/fab_security/views.py", "airflow/www/security.py"] | AIP-56 - FAB AM - Role views | Move role related views to FAB Auth manager:
- List roles
- Edit role
- Create role
- View role | https://github.com/apache/airflow/issues/32203 | https://github.com/apache/airflow/pull/33043 | 90fb482cdc6a6730a53a82ace49d42feb57466e5 | 5707103f447be818ad4ba0c34874b822ffeefc09 | "2023-06-27T18:16:54Z" | python | "2023-08-10T14:15:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,202 | ["airflow/auth/managers/fab/views/user.py", "airflow/auth/managers/fab/views/user_details.py", "airflow/auth/managers/fab/views/user_edit.py", "airflow/auth/managers/fab/views/user_stats.py", "airflow/www/fab_security/views.py", "airflow/www/security.py"] | AIP-56 - FAB AM - User views | Move user related views to FAB Auth manager:
- List users
- Edit user
- Create user
- View user | https://github.com/apache/airflow/issues/32202 | https://github.com/apache/airflow/pull/33055 | 2d7460450dda5cc2f20d1e8cd9ead9e4d1946909 | 66254e42962f63d6bba3370fea40e082233e153d | "2023-06-27T18:16:48Z" | python | "2023-08-07T17:40:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,201 | ["airflow/auth/managers/fab/views/user.py", "airflow/auth/managers/fab/views/user_details.py", "airflow/auth/managers/fab/views/user_edit.py", "airflow/auth/managers/fab/views/user_stats.py", "airflow/www/fab_security/views.py", "airflow/www/security.py"] | AIP-56 - FAB AM - User's statistics view | Move user's statistics view to FAB Auth manager | https://github.com/apache/airflow/issues/32201 | https://github.com/apache/airflow/pull/33055 | 2d7460450dda5cc2f20d1e8cd9ead9e4d1946909 | 66254e42962f63d6bba3370fea40e082233e153d | "2023-06-27T18:16:43Z" | python | "2023-08-07T17:40:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,199 | ["airflow/auth/managers/fab/views/permissions.py", "airflow/www/security.py"] | AIP-56 - FAB AM - Permissions view | Move permissions view to FAB Auth manager | https://github.com/apache/airflow/issues/32199 | https://github.com/apache/airflow/pull/33277 | 5f8f25b34c9e8c0d4845b014fc8f1b00cc2e766f | 39aee60b33a56eee706af084ed1c600b12a0dd57 | "2023-06-27T18:16:38Z" | python | "2023-08-11T15:11:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,198 | ["airflow/auth/managers/fab/views/permissions.py", "airflow/www/security.py"] | AIP-56 - FAB AM - Actions view | Move actions view to FAB Auth manager | https://github.com/apache/airflow/issues/32198 | https://github.com/apache/airflow/pull/33277 | 5f8f25b34c9e8c0d4845b014fc8f1b00cc2e766f | 39aee60b33a56eee706af084ed1c600b12a0dd57 | "2023-06-27T18:16:33Z" | python | "2023-08-11T15:11:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,197 | ["airflow/auth/managers/fab/views/permissions.py", "airflow/www/security.py"] | AIP-56 - FAB AM - Resources view | Move resources view to FAB Auth manager | https://github.com/apache/airflow/issues/32197 | https://github.com/apache/airflow/pull/33277 | 5f8f25b34c9e8c0d4845b014fc8f1b00cc2e766f | 39aee60b33a56eee706af084ed1c600b12a0dd57 | "2023-06-27T18:16:27Z" | python | "2023-08-11T15:11:55Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,196 | ["airflow/auth/managers/base_auth_manager.py", "airflow/auth/managers/fab/fab_auth_manager.py", "airflow/auth/managers/fab/views/__init__.py", "airflow/auth/managers/fab/views/user_details.py", "airflow/www/extensions/init_appbuilder.py", "airflow/www/fab_security/views.py", "airflow/www/security.py", "airflow/www/templates/appbuilder/navbar_right.html"] | AIP-56 - FAB AM - Move profile view into auth manager | The profile view (`/users/userinfo/`) needs to be moved to FAB auth manager. The profile URL needs to be returned as part of `get_url_account()` as specified in the AIP | https://github.com/apache/airflow/issues/32196 | https://github.com/apache/airflow/pull/32756 | f17bc0f4bf15504833f2c8fd72d947c2ddfa55ed | f2e93310c43b7e9df1cbe33350b91a8a84e938a2 | "2023-06-27T18:16:22Z" | python | "2023-07-26T14:20:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,193 | ["airflow/auth/managers/base_auth_manager.py", "airflow/auth/managers/fab/fab_auth_manager.py", "airflow/www/auth.py", "airflow/www/extensions/init_appbuilder.py", "airflow/www/extensions/init_security.py", "airflow/www/templates/appbuilder/navbar_right.html", "tests/auth/managers/fab/test_fab_auth_manager.py", "tests/www/views/test_session.py"] | AIP-56 - FAB AM - Logout | Move the logout logic to the auth manager | https://github.com/apache/airflow/issues/32193 | https://github.com/apache/airflow/pull/32819 | 86193f560815507b9abf1008c19b133d95c4da9f | 2b0d88e450f11af8e447864ca258142a6756126d | "2023-06-27T18:16:04Z" | python | "2023-07-31T19:20:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,153 | ["airflow/www/static/js/api/useExtraLinks.ts", "airflow/www/static/js/dag/details/taskInstance/ExtraLinks.tsx", "airflow/www/static/js/dag/details/taskInstance/index.tsx"] | Support extra link per mapped task in grid view | ### Description
Currently, extra links are disabled in the mapped-task summary, but even when a specific mapped task is selected by its map_index the extra link remains disabled. Since passing a map_index to fetch the relevant extra link is already supported, it would be helpful to display the appropriate link in that case.
### Use case/motivation
This will be useful for scenarios where there is a high number of mapped tasks that are linked to an external URL or resource.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32153 | https://github.com/apache/airflow/pull/32154 | 5c0fca6440fae3ece915b365e1f06eb30db22d81 | 98c47f48e1b292d535d39cc3349660aa736d76cd | "2023-06-26T19:15:37Z" | python | "2023-06-28T15:22:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,106 | ["airflow/providers/google/cloud/transfers/bigquery_to_gcs.py", "airflow/providers/google/cloud/transfers/gcs_to_bigquery.py", "tests/providers/google/cloud/transfers/test_bigquery_to_gcs.py", "tests/providers/google/cloud/transfers/test_gcs_to_bigquery.py"] | GCSToBigQueryOperator and BigQueryToGCSOperator do not respect their project_id arguments | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We experienced this issue on Airflow 2.6.1, but the problem exists in the Google provider rather than core Airflow, and was introduced with [these changes](https://github.com/apache/airflow/pull/30053/files). We are using version 10.0.0 of the provider.
The [issue](https://github.com/apache/airflow/issues/29958) that resulted in these changes seems to be based on an incorrect understanding of how projects interact in BigQuery -- namely that the project used for storage and the project used for compute can be separate. The user reporting the issue appears to mistake an error about compute (`User does not have bigquery.jobs.create permission in project {project-A}.` for an error about storage, and this incorrect diagnosis resulted in a fix that inappropriately defaults the compute project to the project named in destination/source (depending on the operator) table.
The change attempts to allow users to override this (imo incorrect) default, but unfortunately this does not currently work because `self.project_id` gets overridden with the named table's project [here](https://github.com/apache/airflow/pull/30053/files#diff-875bf3d1bfbba7067dc754732c0e416b8ebe7a5b722bc9ac428b98934f04a16fR512) and [here](https://github.com/apache/airflow/pull/30053/files#diff-875bf3d1bfbba7067dc754732c0e416b8ebe7a5b722bc9ac428b98934f04a16fR587).
### What you think should happen instead
I think that the easiest fix would be to revert the change, and return to defaulting the compute project to the one specified in the default google cloud connection. However, since I can understand the desire to override the `project_id`, I think handling it correctly, and clearly distinguishing between the concepts of storage and compute w/r/t projects would also work.
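The storage/compute distinction argued above can be sketched as follows (hypothetical helper, not the operator's code): the table reference fixes the *storage* project, while the *compute* project used to run the job should come from the connection unless the caller overrides it explicitly.

```python
def split_projects(destination_table, job_project, connection_project):
    # The project embedded in a fully-qualified table name
    # ("project.dataset.table") only determines where the data lives.
    storage_project = destination_table.split(".")[0]
    # The project that is billed for and authorizes the job should default
    # to the connection's project, not the table's.
    compute_project = job_project or connection_project
    return storage_project, compute_project
```

Under the current behavior both values collapse to the table's project, which is exactly what produces the `bigquery.jobs.create` permission error described in the linked issue.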
### How to reproduce
Attempt to use any other project for running the job, besides the one named in the source/destination table
### Operating System
debian
### Versions of Apache Airflow Providers
apache-airflow-providers-google==10.0.0
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32106 | https://github.com/apache/airflow/pull/32232 | b3db4de4985eccb859a30a07a2350499370c6a9a | 2d690de110825ba09b9445967b47c44edd8f151c | "2023-06-23T19:08:10Z" | python | "2023-07-06T23:12:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,045 | ["airflow/executors/celery_executor_utils.py", "tests/integration/executors/test_celery_executor.py"] | Celery Executor cannot connect to the database to get information, resulting in a scheduler exit abnormally | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
We use the Celery executor with RabbitMQ as the broker and PostgreSQL as the result backend.
Airflow Version: 2.2.3
Celery Version: 5.2.3
apache-airflow-providers-celery==2.1.0
Below is the error message:
```
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/airflow2.2.3/airflow/airflow/jobs/scheduler_job.py", line 672, in _execute
    self._run_scheduler_loop()
  File "/app/airflow2.2.3/airflow/airflow/jobs/scheduler_job.py", line 754, in _run_scheduler_loop
    self.executor.heartbeat()
  File "/app/airflow2.2.3/airflow/airflow/executors/base_executor.py", line 168, in heartbeat
    self.sync()
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 330, in sync
    self.update_all_task_states()
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 442, in update_all_task_states
    state_and_info_by_celery_task_id = self.bulk_state_fetcher.get_many(self.tasks.values())
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 598, in get_many
    result = self._get_many_from_db_backend(async_results)
  File "/app/airflow2.2.3/airflow/airflow/executors/celery_executor.py", line 618, in _get_many_from_db_backend
    tasks = session.query(task_cls).filter(task_cls.task_id.in_(task_ids)).all()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3373, in all
    return list(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3535, in __iter__
    return self._execute_and_instances(context)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3556, in _execute_and_instances
    conn = self._get_bind_args(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3571, in _get_bind_args
    return fn(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3550, in _connection_from_session
    conn = self.session.connection(**kw)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1142, in connection
    return self._connection_for_bind(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 1150, in _connection_for_bind
    return self.transaction._connection_for_bind(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/orm/session.py", line 433, in _connection_for_bind
    conn = bind._contextual_connect()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2302, in _contextual_connect
    self._wrap_pool_connect(self.pool.connect, None),
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2339, in _wrap_pool_connect
    Connection._handle_dbapi_exception_noconnection(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1583, in _handle_dbapi_exception_noconnection
    util.raise_(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
    raise exception
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect
    return fn()

2023-06-05 16:39:05.069 ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2336, in _wrap_pool_connect
    return fn()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 364, in connect
    return _ConnectionFairy._checkout(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 778, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 495, in checkout
    rec = pool._do_get()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/impl.py", line 241, in _do_get
    return self._create_connection()
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 309, in _create_connection
    return _ConnectionRecord(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 440, in __init__
    self.__connect(first_connect_check=True)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 661, in __connect
    pool.logger.debug("Error on connect(): %s", e)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/util/langhelpers.py", line 68, in __exit__
    compat.raise_(
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
    raise exception
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/pool/base.py", line 656, in __connect
    connection = pool._invoke_creator(self)
  File "/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/sqlalchemy/engine/strategies.py", line 114, in connect
    return dialect.connect(*cargs, **cparams)
File"/app/airflow2.2.3/airflow2_env/1ib/python3.8/site-packages/sqlalchemy/engine/default.py”,line 508, in connect return self.dbapi.connect(*cargs, **cparams)
File"/app/airflow2.2.3/airflow2_env/lib/python3.8/site-packages/psycopg2/init.py”, line 126, in connect conn=connect(dsn,connection_factory=connection_factory, **kwasync) psycopg2.0perationalError: could not connect to server: Connection timed out
Is the server running on host"xxxxxxxxxx”and accepting TCP/IP connections on port 5432?
```
### What you think should happen instead
I think it may be caused by network jitter issues; adding retries could solve it.
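A minimal sketch of the kind of retry wrapper being proposed — the names and retry policy here are illustrative, not Airflow's eventual implementation:

```python
import time


def with_retries(fn, attempts=3, delay=1.0, exceptions=(Exception,)):
    """Call fn(), retrying on transient errors such as network jitter."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except exceptions:
            if attempt == attempts:
                raise
            time.sleep(delay)


# Hypothetical usage around the failing metadata query, e.g. in
# _get_many_from_db_backend (OperationalError is what psycopg2 raises here):
# tasks = with_retries(
#     lambda: session.query(task_cls).filter(task_cls.task_id.in_(task_ids)).all(),
#     exceptions=(OperationalError,),
# )
```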
### How to reproduce
The CeleryExecutor fails to create a PG connection while retrieving metadata information, and the error can be reproduced.
### Operating System
NAME="RedFlag Asianux" VERSION="7 (Lotus)"
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32045 | https://github.com/apache/airflow/pull/31998 | de585f521b5898ba7687072a7717fd3b67fa8c5c | c3df47efc2911706897bf577af8a475178de4b1b | "2023-06-21T08:09:17Z" | python | "2023-06-26T17:01:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,020 | ["airflow/cli/cli_config.py", "airflow/cli/commands/task_command.py", "airflow/utils/cli.py", "tests/cli/commands/test_task_command.py"] | Airflow tasks run -m cli command giving 504 response | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Hello,
we are facing a "504 Gateway Time-out" error when running the "tasks run -m" CLI command. We are trying to create a complex DAG and run tasks from the CLI. When we try to run "tasks run -m", we receive a gateway timeout error.
We also observed a high resource spike on the web server when we executed this CLI command. After looking into it further: when we run the "tasks run -m" Airflow CLI command, it parses the list of DAGs and then parses through the task list. Because of this, we observed high resource usage on the webserver and received the gateway timeout error.
### What you think should happen instead
We expect that when executing the "tasks run" CLI command, it should only parse the DAG name and task name provided in the command, and not parse the full DAG list followed by the task list.
### How to reproduce
Please follow the steps below to reproduce this issue.
1. We have 900 DAGs in the Airflow environment.
2. We created a web login token to access the web server.
3. After that we tried to run "tasks run" using a Python script.
### Operating System
Amazon linux
### Versions of Apache Airflow Providers
Airflow version 2.2.2
### Deployment
Amazon (AWS) MWAA
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32020 | https://github.com/apache/airflow/pull/32038 | d49fa999a94a2269dd6661fe5eebbb4c768c7848 | 05a67efe32af248ca191ea59815b3b202f893f46 | "2023-06-20T08:51:13Z" | python | "2023-06-23T22:31:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 32,002 | ["setup.cfg", "setup.py"] | log url breaks on login redirect | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
on 2.5.3:
log url is https://airflow.hostname.de/log?execution_date=2023-06-19T04%3A00%3A00%2B00%3A00&task_id=task_name&dag_id=dag_name&map_index=-1
this url works when I am logged in.
If I am logged out, the login screen will redirect me to https://airflow.hostname.de/log?execution_date=2023-06-19T04%3A00%3A00+00%3A00&task_id=task_name&dag_id=dag_name&map_index=-1 which shows me an empty log.
the redirect seems to convert the `%2B` back to a `+` in the timezone component of the execution_date, while leaving all other escaped characters untouched.
### What you think should happen instead
log url works correctly after login redirect
### How to reproduce
https://airflow.hostname.de/log?execution_date=2023-06-19T04%3A00%3A00%2B00%3A00&task_id=task_name&dag_id=dag_name&map_index=-1
Have a log URL with an execution date using a timezone with a positive UTC offset.
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/32002 | https://github.com/apache/airflow/pull/32054 | e39362130b8659942672a728a233887f0b02dc8b | 92497fa727a23ef65478ef56572c7d71427c4a40 | "2023-06-19T11:14:59Z" | python | "2023-07-08T19:18:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,957 | ["airflow/providers/cncf/kubernetes/executors/kubernetes_executor.py", "docs/apache-airflow/administration-and-deployment/logging-monitoring/metrics.rst"] | Airflow Observability Improvement Request | ### Description
The scheduler has housekeeping work (adopt_or_reset_orphaned_tasks, check_trigger_timeouts, _emit_pool_metrics, _find_zombies, clear_not_launched_queued_tasks and _check_worker_pods_pending_timeout) that runs at a certain frequency. Right now, we don't have any latency metrics on this housekeeping work. These tasks can impact the scheduler heartbeat. It's a good idea to capture these latency metrics to identify issues and tune the Airflow configuration.
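A minimal sketch of the kind of latency instrumentation being requested — the metric names and the emit hook are illustrative (Airflow's `Stats.timing` would be the natural sink):

```python
import time
from functools import wraps


def timed(metric_name, emit):
    """Wrap a housekeeping function so its wall-clock latency is emitted."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                emit(metric_name, time.monotonic() - start)
        return wrapper
    return decorator


# Hypothetical usage on one of the housekeeping methods listed above:
# @timed("scheduler.adopt_or_reset_orphaned_tasks.duration", Stats.timing)
# def adopt_or_reset_orphaned_tasks(self, ...): ...
```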
### Use case/motivation
As we run Airflow at a large scale, we have found that the adopt_or_reset_orphaned_tasks and clear_not_launched_queued_tasks functions can take a few minutes (> 5 minutes). This delays the scheduler heartbeat and leads to the scheduler instance being restarted/killed. In order to detect these latency issues, we need better metrics to capture these latencies.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31957 | https://github.com/apache/airflow/pull/35579 | 5a6dcfd8655c9622f3838a0e66948dc3091afccb | cd296d2068b005ebeb5cdc4509e670901bf5b9f3 | "2023-06-16T10:19:09Z" | python | "2023-11-12T17:41:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,949 | ["airflow/www/static/js/dag/details/graph/Node.tsx", "airflow/www/static/js/types/index.ts", "airflow/www/static/js/utils/graph.ts"] | Support Operator User Interface Elements in new graph view | ### Description
The new graph UI looks good but currently doesn't support the color options mentioned here https://airflow.apache.org/docs/apache-airflow/stable/howto/custom-operator.html#user-interface
### Use case/motivation
It would be great for these features to be supported in the new grid view as they are in the old one
### Related issues
[slack](https://apache-airflow.slack.com/archives/CCPRP7943/p1686866630874739?thread_ts=1686865767.351809&cid=CCPRP7943)
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31949 | https://github.com/apache/airflow/pull/32822 | 12a760f6df831c1d53d035e4d169a69887e8bb26 | 3bb63f1087176b24e9dc8f4cc51cf44ce9986d34 | "2023-06-15T22:54:54Z" | python | "2023-08-03T09:06:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,907 | ["dev/breeze/src/airflow_breeze/commands/testing_commands.py", "dev/breeze/src/airflow_breeze/commands/testing_commands_config.py", "images/breeze/output-commands-hash.txt", "images/breeze/output-commands.svg", "images/breeze/output_testing.svg", "images/breeze/output_testing_tests.svg"] | Add `--use-airflow-version` option to `breeze testing tests` command | ### Description
The option `--use-airflow-version` is available under the command `start-airflow` in `Breeze`. As an example, this is used when testing a release candidate as specified in [documentation](https://github.com/apache/airflow/blob/main/dev/README_RELEASE_AIRFLOW.md#verify-release-candidates-by-contributors): `breeze start-airflow --use-airflow-version <VERSION>rc<X> --python 3.8 --backend postgres`.
The idea I have is to add that option as well under the command `testing tests` in `Breeze`.
### Use case/motivation
Having the option `--use-airflow-version` available under the command `testing tests` in `Breeze` would make it possible to run system tests against a specific version of Airflow and a provider. This could be helpful when releasing new versions of Airflow and Airflow providers. As such, providers could run all system tests of their provider package on demand and share these results (somehow — a dashboard? another way?) with the community/release manager. This would not replace the manual testing already in place that is needed when releasing such a new version, but would give more information/pointers to the release manager.
Before submitting a PR, I wanted to first get some feedback about this idea. This might not be possible? This might not be a good idea? This might not be useful at all?
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31907 | https://github.com/apache/airflow/pull/31914 | 518b93c24fda6e7a1df0acf0f4dd1921967dc8f6 | b07a26523fad4f17ceb4e3a2f88e043dcaff5e53 | "2023-06-14T18:35:02Z" | python | "2023-06-14T23:44:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,891 | ["docs/apache-airflow-providers-google/api-auth-backend/google-openid.rst"] | Incorrect audience argument in Google OpenID authentication doc | ### What do you see as an issue?
I followed the [Google OpenID authentication doc](https://airflow.apache.org/docs/apache-airflow-providers-google/stable/api-auth-backend/google-openid.html) and got this error:
```
$ ID_TOKEN="$(gcloud auth print-identity-token "--audience=${AUDIENCE}")"
ERROR: (gcloud.auth.print-identity-token) unrecognized arguments: --audience=759115288429-c1v16874eprg4455kt1di902b3vkjho2.apps.googleusercontent.com (did you mean '--audiences'?)
To search the help text of gcloud commands, run:
gcloud help -- SEARCH_TERMS
```
Perhaps the gcloud CLI parameter changed since this doc was written.
### Solving the problem
Update the CLI argument in the doc.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31891 | https://github.com/apache/airflow/pull/31893 | ca13c7b77ea0e7d37bfe893871bab565d26884d0 | fa07812d1013f964a4736eade3ba3e1a60f12692 | "2023-06-14T09:05:50Z" | python | "2023-06-23T10:23:44Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,873 | ["airflow/models/variable.py", "tests/models/test_variable.py"] | KubernetesPodOperator doesn't mask variables in Rendered Template that are used as arguments | ### Apache Airflow version
2.6.1
### What happened
I am pulling a variable from Google Secret Manager and I'm using it as an argument in a KubernetesPodOperator task. I've also tried it with the KubernetesPodOperatorAsync operator and I'm getting the same behaviour.
The variable value is not masked on the Rendered Template page. If I use the exact same variable in a different operator, like HttpSensorAsync, it is properly masked. That is quite critical, and I can't deploy the DAG to production.
### What you think should happen instead
The variable in the KubernetesPodOperator should be masked, and only '***' should be shown on the Rendered Template page.
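The expectation can be illustrated with a toy version of secret masking (Airflow's real implementation lives in `airflow.utils.log.secrets_masker`; this is only a sketch with a made-up secret value):

```python
def mask(rendered, secrets, replacement="***"):
    """Replace every known secret value in a rendered template field."""
    for secret in secrets:
        rendered = rendered.replace(secret, replacement)
    return rendered


# A hypothetical rendered KubernetesPodOperator argument:
args = ["-ksuper-secret-api-key"]
masked = [mask(a, ["super-secret-api-key"]) for a in args]
assert masked == ["-k***"]
```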
### How to reproduce
Here's the example of code where I use the exact same variable in two different Operators. It's in the arguments of the Kubernetes Operator and then used in a different operator next.
```
my_changeset = KubernetesPodOperator(
task_id='my_load',
namespace=kubernetes_namespace,
service_account_name=service_account_name,
image='my-feed:latest',
name='changeset_load',
in_cluster=in_cluster,
cluster_context='docker-desktop', # is ignored when in_cluster is set to True
is_delete_operator_pod=True,
get_logs=True,
image_pull_policy=image_pull_policy,
arguments=[
'-k{{ var.json.faros_api_key.faros_api_key }}',
],
container_resources=k8s.V1ResourceRequirements(requests=requests, limits=limits),
volumes=volumes,
volume_mounts=volume_mounts,
log_events_on_failure=True,
startup_timeout_seconds=60 * 5,
)
test_var = HttpSensorAsync(
task_id=f'wait_for_my_file',
http_conn_id='my_paymentreports_http',
endpoint='{{ var.json.my_paymentreports_http.client_id }}/report',
headers={'user-agent': 'King'},
request_params={
'access_token': '{{ var.json.faros_api_key.faros_api_key }}',
},
response_check=lambda response: True if response.status_code == 200 else False,
extra_options={'check_response': False},
timeout=60 * 60 * 8,
)
```
The same {{ var.json.faros_api_key.faros_api_key }} is used in both operators, but only masked in the HttpSensorAsync operator.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow==2.6.1+astro.3
apache-airflow-providers-amazon==8.1.0
apache-airflow-providers-celery==3.2.0
apache-airflow-providers-cncf-kubernetes==7.0.0
apache-airflow-providers-common-sql==1.5.1
apache-airflow-providers-datadog==3.3.0
apache-airflow-providers-elasticsearch==4.5.0
apache-airflow-providers-ftp==3.4.1
apache-airflow-providers-github==2.3.0
apache-airflow-providers-google==10.0.0
apache-airflow-providers-hashicorp==3.4.0
apache-airflow-providers-http==4.4.1
apache-airflow-providers-imap==3.2.1
apache-airflow-providers-microsoft-azure==6.1.1
apache-airflow-providers-mysql==5.1.0
apache-airflow-providers-postgres==5.5.0
apache-airflow-providers-redis==3.2.0
apache-airflow-providers-samba==4.2.0
apache-airflow-providers-sendgrid==3.2.0
apache-airflow-providers-sftp==4.3.0
apache-airflow-providers-slack==7.3.0
apache-airflow-providers-sqlite==3.4.1
apache-airflow-providers-ssh==3.7.0
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31873 | https://github.com/apache/airflow/pull/31964 | fc0e5a4d42ee882ca5bc20ea65be38b2c739644d | e22ce9baed19ddf771db59b7da1d25e240430625 | "2023-06-13T11:25:23Z" | python | "2023-06-16T19:05:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,851 | ["airflow/cli/cli_config.py", "airflow/cli/commands/connection_command.py", "airflow/cli/commands/variable_command.py", "airflow/cli/utils.py"] | Allow variables to be printed to STDOUT | ### Description
Currently, the `airflow variables export` command requires an explicit file path and does not support output to stdout. However, connections can be printed to stdout using `airflow connections export -`. This inconsistency between the two export commands can lead to confusion and limits the flexibility of the variables export command.
### Use case/motivation
To bring some consistency, similar to connections, variables should also be printed to STDOUT, using `-` instead of filename.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31851 | https://github.com/apache/airflow/pull/33279 | bfa09da1380f0f1e0727dbbc9f1878bd44eb848d | 09d478ec671f8017294d4e15d75db1f40b8cc404 | "2023-06-12T02:56:23Z" | python | "2023-08-11T09:02:48Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,834 | ["airflow/providers/microsoft/azure/log/wasb_task_handler.py", "airflow/providers/redis/log/__init__.py", "airflow/providers/redis/log/redis_task_handler.py", "airflow/providers/redis/provider.yaml", "airflow/utils/log/file_task_handler.py", "docs/apache-airflow-providers-redis/index.rst", "docs/apache-airflow-providers-redis/logging/index.rst", "tests/providers/redis/log/__init__.py", "tests/providers/redis/log/test_redis_task_handler.py"] | Redis task handler for logs | ### Discussed in https://github.com/apache/airflow/discussions/31832
<div type='discussions-op-text'>
<sup>Originally posted by **michalc** June 10, 2023</sup>
Should something like the below be in the codebase? It's a simple handler for storing Airflow task logs in Redis, enforcing a max number of entries per try, and an expiry time for the logs
Happy to raise a PR (and I guessed a lot at how things should be... so suspect can be improved upon...)
```python
import logging

import redis

from airflow.utils.log.file_task_handler import FileTaskHandler
from airflow.utils.log.logging_mixin import LoggingMixin


class RedisHandler(logging.Handler):
def __init__(self, client, key):
super().__init__()
self.client = client
self.key = key
def emit(self, record):
p = self.client.pipeline()
p.rpush(self.key, self.format(record))
p.ltrim(self.key, start=-10000, end=-1)
p.expire(self.key, time=60 * 60 * 24 * 28)
p.execute()
class RedisTaskHandler(FileTaskHandler, LoggingMixin):
"""
RedisTaskHandler is a python log handler that handles and reads
task instance logs. It extends airflow FileTaskHandler and
uploads to and reads from Redis.
"""
trigger_should_wrap = True
def __init__(self, base_log_folder: str, redis_url):
super().__init__(base_log_folder)
self.handler = None
self.client = redis.Redis.from_url(redis_url)
def _read(
self,
ti,
try_number,
metadata=None,
):
log_str = b"\n".join(
self.client.lrange(self._render_filename(ti, try_number), start=0, end=-1)
).decode("utf-8")
return log_str, {"end_of_log": True}
def set_context(self, ti):
super().set_context(ti)
self.handler = RedisHandler(
self.client, self._render_filename(ti, ti.try_number)
)
self.handler.setFormatter(self.formatter)
```
</div>
closed | apache/airflow | https://github.com/apache/airflow | 31,819 | ["docs/apache-airflow/authoring-and-scheduling/deferring.rst"] | Improve the docs around deferrable mode for Sensors | ### Body
With Operators the use case of deferrable operators is pretty clear.
However with Sensor new questions are raised.
All Sensors inherit from `BaseSensorOperator` which adds [mode](https://github.com/apache/airflow/blob/a98621f4facabc207b4d6b6968e6863845e1f90f/airflow/sensors/base.py#L93) parameter a question comes to mind what is the difference between:
`SomeSensor(..., mode='reschedule')`
to:
`SomeSensor(..., deferrable=true)`
Thus unlike Operators, when working with Sensors and assuming the sensor has defer implemented the users have a choice of what to use and both can be explained as "something is not ready, lets wait without consuming resources".
The docs should clarify the difference and compare between the two options that might look the same but are different.
We should explain it in two fronts:
1. What happens in Airflow for each one of the options (task state in `defer` mode vs `up_for_reschedule`) etc...
2. What is the motivation/justification for each one. pros and cons.
3. Do we have some kind of general recommendation as "always prefer X over Y" or "In executor X better to use one of the options" etc...
also wondering what `SomeSensor(..., , mode='reschedule', deferrable=true)` means and if we are protected against such usage.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/31819 | https://github.com/apache/airflow/pull/31840 | 371833e076d033be84f109cce980a6275032833c | 0db0ff14da449dc3dbfe9577ccdb12db946b9647 | "2023-06-09T18:33:27Z" | python | "2023-06-24T16:40:26Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,818 | ["airflow/cli/cli_config.py", "airflow/cli/commands/db_command.py", "tests/cli/commands/test_db_command.py"] | Add retry + timeout to Airflow db check | ### Description
In my company usage of Airflow, developmental instances of Airflow run on containerized PostgreSQL that are spawned at the same time the Airflow container is spawned. Before the Airflow container runs its initialization scripts, it needs to make sure that the PostgreSQL instance can be reached, for which `airflow db check` is a great option.
However, there is a non-deterministic race condition between the PSQL container and the Airflow containers (not sure which will reach readiness first, and by how much), so calling the `airflow db check` command once is not sufficient, and implementing a retry-timeout in shell script is feasible but unpleasant.
It would be great if the `airflow db check` command can take two additional optional arguments: `--retry` and `--retry-delay` (just like with `curl`) so that the database connection can be checked repeatedly for up to a specified number of times. This command should exit with `0` exit code if any of the retries succeeds, and `1` if all of the retries failed.
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31818 | https://github.com/apache/airflow/pull/31836 | a81ac70b33a589c58b59864df931d3293fada382 | 1b35a077221481e9bf4aeea07d1264973e7f3bf6 | "2023-06-09T18:07:59Z" | python | "2023-06-15T08:54:09Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,795 | ["airflow/providers/apache/kafka/triggers/await_message.py"] | AwaitMessageTriggerFunctionSensor not firing all eligble messages | ### Apache Airflow version
2.6.1
### What happened
The AwaitMessageTriggerFunctionSensor is showing some buggy behaviour.
When consuming from a topic, it is correctly applying the apply_function in order to yield a TriggerEvent.
However, it is consuming multiple messages at a time and not yielding a trigger for the correct amount of messages that would be eligble (return a value in the apply_function). The observed behaviour is as follows:
- Sensor is deferred and messages start getting consumed
- Multiple eligible messages trigger a single TriggerEvent instead of multiple TriggerEvents.
- The sensor returns to a deferred state, repeating the cycle.
The event_triggered_function is being called correctly. However, due to the issue in consuming messages and correctly generating the appropriate TriggerEvents, some of them are missed.
### What you think should happen instead
Each eligible message should create an individual TriggerEvent to be consumed by the event_triggered_function.
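The expectation above can be sketched as one event per eligible message — pure illustration, since the real trigger is an async generator yielding `TriggerEvent`s:

```python
def events_for(messages, apply_function):
    """Yield one event per message for which apply_function returns a value."""
    for message in messages:
        result = apply_function(message)
        if result is not None:
            yield result


messages = ["ignore", "fire:1", "ignore", "fire:2", "fire:3"]
events = list(events_for(messages, lambda m: m if m.startswith("fire") else None))
assert events == ["fire:1", "fire:2", "fire:3"]  # not a single collapsed event
```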
### How to reproduce
- Use a producer DAG to produce a set amount of messages on your kafka topic
- Use a listener DAG to consume this topic, screening for eligible messages (apply_function), and use the event_triggered_function to monitor the amount of events that are being consumed.
### Operating System
Kubernetes cluster - Linux
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-kafka==1.1.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
helm chart version 1.9.0
### Anything else
Every time (independent of topic, message content, apply_function and event_triggered_function)
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31795 | https://github.com/apache/airflow/pull/31803 | ead2530d3500dd27df54383a0802b6c94828c359 | 1b599c9fbfb6151a41a588edaa786745f50eec38 | "2023-06-08T14:24:33Z" | python | "2023-06-30T09:26:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,753 | ["airflow/providers/databricks/operators/databricks_sql.py", "tests/providers/databricks/operators/test_databricks_sql.py"] | AttributeError exception when returning result to XCom | ### Apache Airflow version
2.6.1
### What happened
When I use _do_xcom_push=True_ in **DatabricksSqlOperator**, an exception with the following stack trace is thrown:
```
[2023-06-06, 08:52:24 UTC] {sql.py:375} INFO - Running statement: SELECT cast(max(id) as STRING) FROM prod.unified.sessions, parameters: None
[2023-06-06, 08:52:25 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 2354, in xcom_push
XCom.set(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 237, in set
value = cls.serialize_value(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 632, in serialize_value
return json.dumps(value, cls=XComEncoder).encode("UTF-8")
File "/usr/local/lib/python3.10/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 102, in encode
o = self.default(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 91, in default
return serialize(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 144, in serialize
return encode(classname, version, serialize(data, depth + 1))
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in serialize
return [serialize(d, depth + 1) for d in o]
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in <listcomp>
return [serialize(d, depth + 1) for d in o]
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 132, in serialize
qn = qualname(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/module_loading.py", line 47, in qualname
return f"{o.__module__}.{o.__name__}"
File "/home/airflow/.local/lib/python3.10/site-packages/databricks/sql/types.py", line 161, in __getattr__
raise AttributeError(item)
AttributeError: __name__. Did you mean: '__ne__'?
```
### What you think should happen instead
In _process_output(), if self._output_path is falsy, a list of tuples is returned:
```
def _process_output(self, results: list[Any], descriptions: list[Sequence[Sequence] | None]) -> list[Any]:
if not self._output_path:
return list(zip(descriptions, results))
```
I suspect this breaks the serialization somehow, which might be related to my own meta database (Postgres).
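The failure is actually reproducible without Databricks: any object whose `__getattr__` raises for unknown attributes (as `databricks.sql.types.Row` does, per the traceback) breaks the naive `qualname` lookup used during XCom serialization. A stand-in sketch:

```python
class FakeRow:
    """Mimics databricks.sql.types.Row, which raises on unknown attributes."""

    def __getattr__(self, item):
        raise AttributeError(item)


def qualname(o):
    # Simplified from airflow.utils.module_loading.qualname
    return f"{o.__module__}.{o.__name__}"


try:
    qualname(FakeRow())
except AttributeError as exc:
    error = str(exc)
assert error == "__name__"  # same AttributeError as in the stack trace above
```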
Replacing the DatabricksSqlOperator with a simple **PythonOperator** and **DatabricksSqlHook** works just fine:
```
def get_max_id(ti):
hook = DatabricksSqlHook(databricks_conn_id=databricks_sql_conn_id, sql_endpoint_name='sql_endpoint')
sql = "SELECT cast(max(id) as STRING) FROM prod.unified.sessions"
return str(hook.get_first(sql)[0])
```
### How to reproduce
```
get_max_id_task = DatabricksSqlOperator(
databricks_conn_id=databricks_sql_conn_id,
sql_endpoint_name='sql_endpoint',
task_id='get_max_id',
sql="SELECT cast(max(id) as STRING) FROM prod.unified.sessions",
do_xcom_push=True
)
```
### Operating System
Debian GNU/Linux 11 (bullseye) docker image, python 3.10
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.5.1
databricks-sql-connector==2.5.2
apache-airflow-providers-databricks==4.2.0
### Deployment
Docker-Compose
### Deployment details
Using extended Airflow image, LocalExecutor, Postgres 13 meta db as container in the same stack.
docker-compose version 1.29.2, build 5becea4c
Docker version 23.0.5, build bc4487a
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31753 | https://github.com/apache/airflow/pull/31780 | 1aa9e803c26b8e86ab053cfe760153fc286e177c | 049c6184b730a7ede41db9406654f054ddc8cc5f | "2023-06-07T06:44:13Z" | python | "2023-06-08T10:49:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,750 | ["airflow/providers/google/cloud/transfers/sql_to_gcs.py", "tests/providers/google/cloud/transfers/test_sql_to_gcs.py"] | BaseSQLToGCSOperator creates row group for each rows during parquet generation | ### Apache Airflow version
Other Airflow 2 version (please specify below)
Airflow 2.4.2
### What happened
BaseSQLToGCSOperator creates a row group for each row during parquet generation, which causes compression to be ineffective and increases the file size.

### What you think should happen instead
_No response_
### How to reproduce
```python
OracleToGCSOperator(
    task_id='oracle_to_gcs_parquet_test',
    gcp_conn_id=GCP_CONNECTION,
    oracle_conn_id=ORACLE_CONNECTION,
    sql='',
    bucket=GCS_BUCKET_NAME,
    filename='',
    export_format='parquet',
)
```
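A common remedy for this class of problem is to buffer fetched rows and flush them in batches, so that each Parquet row group spans many rows instead of one. A stdlib-only sketch of the batching idea (illustrative only, not the operator's actual code):

```python
from itertools import islice
from typing import Iterable, Iterator

def batched(rows: Iterable, size: int) -> Iterator[list]:
    """Yield lists of up to `size` rows, suitable for writing one row group each."""
    it = iter(rows)
    while batch := list(islice(it, size)):
        yield batch

# 10 rows flushed in batches of 4 -> 3 row groups instead of 10
print([len(b) for b in batched(range(10), 4)])  # [4, 4, 2]
```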
### Operating System
CentOS Linux 7
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-hive 2.1.0
apache-airflow-providers-apache-sqoop 2.0.2
apache-airflow-providers-celery 3.0.0
apache-airflow-providers-common-sql 1.2.0
apache-airflow-providers-ftp 3.1.0
apache-airflow-providers-google 8.4.0
apache-airflow-providers-http 4.0.0
apache-airflow-providers-imap 3.0.0
apache-airflow-providers-mysql 3.0.0
apache-airflow-providers-oracle 2.1.0
apache-airflow-providers-salesforce 5.3.0
apache-airflow-providers-sqlite 3.2.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31750 | https://github.com/apache/airflow/pull/31831 | ee83a2fbd1a65e6a5c7d550a39e1deee49856270 | b502e665d633262f3ce52d9c002c0a25e6e4ec9d | "2023-06-07T03:06:11Z" | python | "2023-06-14T12:05:05Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,745 | ["airflow/providers/cncf/kubernetes/operators/pod.py", "airflow/providers/cncf/kubernetes/utils/pod_manager.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "tests/providers/cncf/kubernetes/utils/test_pod_manager.py"] | Add a process_line callback to KubernetesPodOperator | ### Description
Add a process_line callback to KubernetesPodOperator
Like the `BeamRunPythonPipelineOperator` (https://github.com/apache/airflow/blob/main/airflow/providers/apache/beam/operators/beam.py#LL304C36-L304C57), which allows the user to add stateful plugins based on the log output of the Docker job.
### Use case/motivation
This would let me add a plugin driven by the log output, and also allow cleanup in `on_kill` based on the job-creation log.
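As a sketch of the intended use (the `process_line_callback` parameter on `KubernetesPodOperator` is hypothetical here, mirroring the Beam operator's name), a stateful callback could capture a job id from the log stream for later cleanup in `on_kill`:

```python
import re

class JobIdCapture:
    """Stateful log-line callback; a hypothetical sketch, not an existing Airflow API."""
    def __init__(self) -> None:
        self.job_id = None

    def __call__(self, line: str) -> None:
        # remember the id once it appears in the pod's log output
        if m := re.search(r"job_id=(\S+)", line):
            self.job_id = m.group(1)

cb = JobIdCapture()
for line in ["starting pod...", "submitted job_id=abc123", "running"]:
    cb(line)
print(cb.job_id)  # abc123
# e.g. KubernetesPodOperator(..., process_line_callback=cb)  # hypothetical parameter
```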
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31745 | https://github.com/apache/airflow/pull/34153 | d800a0de5194bb1ef3cfad44c874abafcc78efd6 | b5057e0e1fc6b7a47e38037a97cac862706747f0 | "2023-06-06T18:40:42Z" | python | "2023-09-09T18:08:29Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,726 | ["airflow/models/taskinstance.py", "airflow/www/extensions/init_wsgi_middlewares.py", "tests/www/test_app.py"] | redirect to same url after set base_url | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
```
$ curl localhost:8080/airflow/
<!doctype html>
<html lang=en>
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to the target URL: <a href="http://localhost:8080/airflow/">http://localhost:8080/airflow/</a>. If not, click the link.
```
### What you think should happen instead
At the very least, it should not redirect to itself in a loop.
### How to reproduce
generate yaml:
```
helm template --name-template=airflow ~/downloads/airflow > airflow.yaml
```
Add `base_url` to the `[webserver]` section and remove the health and readiness checks from the webserver (to keep the pod alive):
```
[webserver]
enable_proxy_fix = True
rbac = True
base_url = http://my.domain.com/airflow/
```
### Operating System
Ubuntu 22.04.2 LTS
### Versions of Apache Airflow Providers
apache-airflow==2.5.3
apache-airflow-providers-amazon==7.3.0
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==5.2.2
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-docker==3.5.1
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==8.11.0
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==3.3.0
apache-airflow-providers-http==4.2.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==5.2.1
apache-airflow-providers-mysql==4.0.2
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-snowflake==4.0.4
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-ssh==3.5.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31726 | https://github.com/apache/airflow/pull/31833 | 69bc90b82403b705b3c30176cc3d64b767f2252e | fe4a6c843acd97c776d5890116bfa85356a54eee | "2023-06-06T02:39:47Z" | python | "2023-06-19T07:29:11Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,668 | ["docs/apache-airflow/core-concepts/dags.rst"] | Schedule "@daily" is wrongly declared in the "DAG/Core Concepts" | ### What do you see as an issue?
I found a small bug in the DAG Core Concepts documentation regarding the `@daily` schedule:
https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html#running-dags
DAGs do not require a schedule, but it’s very common to define one. You define it via the `schedule` argument, like this:
```python
with DAG("my_daily_dag", schedule="@daily"):
...
```
The `schedule` argument takes any value that is a valid [Crontab](https://en.wikipedia.org/wiki/Cron) schedule value, so you could also do:
```python
with DAG("my_daily_dag", schedule="0 * * * *"):
...
```
If I'm not mistaken, the daily crontab notation should be `0 0 * * *` instead of `0 * * * *`, otherwise the DAG would run every hour.
The second `0`, of course, needs to be replaced with the hour at which the DAG should run daily.
### Solving the problem
I would change the documentation at the marked location:
The `schedule` argument takes any value that is a valid [Crontab](https://en.wikipedia.org/wiki/Cron) schedule value, so for a daily run at 00:00, you could also do:
```python
with DAG("my_daily_dag", schedule="0 0 * * *"):
...
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31668 | https://github.com/apache/airflow/pull/31666 | 4ebf1c814c6e382169db00493a897b11c680e72b | 6a69fbb10c08f30c0cb22e2ba68f56f3a5d465aa | "2023-06-01T12:35:33Z" | python | "2023-06-01T14:36:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,656 | ["airflow/decorators/base.py", "tests/decorators/test_setup_teardown.py"] | Param on_failure_fail_dagrun should be overridable through `task.override` | Currently when you define a teardown
e.g.
```python
@teardown(on_failure_fail_dagrun=True)
def my_teardown():
...
```
You cannot change this when you instantiate the task,
e.g. with
```python
my_teardown.override(on_failure_fail_dagrun=True)()
```
I don't think this is good, because if you define a reusable taskflow function, the desired value might depend on the context.
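A pure-Python sketch of the override pattern being asked for (this is not Airflow's implementation; it only illustrates how a per-call-site `override` can shadow a decorator default):

```python
import functools

def task(**defaults):
    """Toy decorator: captures defaults and exposes .override() to rebind them."""
    def deco(fn):
        @functools.wraps(fn)
        def bound(*args, _opts=None, **kwargs):
            opts = {**defaults, **(_opts or {})}
            return opts, fn(*args, **kwargs)
        bound.override = lambda **opts: functools.partial(bound, _opts=opts)
        return bound
    return deco

@task(on_failure_fail_dagrun=False)
def my_teardown():
    return "cleaned"

print(my_teardown()[0])                                        # {'on_failure_fail_dagrun': False}
print(my_teardown.override(on_failure_fail_dagrun=True)()[0])  # {'on_failure_fail_dagrun': True}
```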
| https://github.com/apache/airflow/issues/31656 | https://github.com/apache/airflow/pull/31665 | 29d2a31dc04471fc92cbfb2943ca419d5d8a6ab0 | 8dd194493d6853c2de80faee60d124b5d54ec3a6 | "2023-05-31T21:45:59Z" | python | "2023-06-02T05:26:40Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,636 | ["airflow/providers/amazon/aws/operators/ecs.py", "airflow/providers/amazon/aws/triggers/ecs.py", "airflow/providers/amazon/aws/utils/task_log_fetcher.py", "airflow/providers/amazon/provider.yaml", "tests/providers/amazon/aws/operators/test_ecs.py", "tests/providers/amazon/aws/triggers/test_ecs.py", "tests/providers/amazon/aws/utils/test_task_log_fetcher.py"] | Add deferrable mode to EcsRunTaskOperator | ### Description
I would greatly appreciate it if the `EcsRunTaskOperator` could incorporate the `deferrable` mode. Currently, this operator significantly affects the performance of my workers, and running multiple instances simultaneously proves to be inefficient. I have noticed that the `KubernetesPodOperator` already supports this functionality, so having a similar feature available for ECS would be a valuable addition.
Note: This feature request relates to the `amazon` provider.
### Use case/motivation
Reduce resource utilisation of my worker when running multiple `EcsRunTaskOperator` tasks in parallel.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31636 | https://github.com/apache/airflow/pull/31881 | e4ca68818eec0f29ef04a1a5bfec3241ea03bf8c | 415e0767616121854b6a29b3e44387f708cdf81e | "2023-05-31T09:40:58Z" | python | "2023-06-23T17:13:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,612 | ["airflow/providers/presto/provider.yaml", "generated/provider_dependencies.json"] | [airflow 2.4.3] presto queries returning none following upgrade to common.sql provider | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
After upgrading apache-airflow-providers-common-sql from 1.2.0 to 1.3.0 or above, Presto queries using the `get_records()` and/or `get_first()` functions return `None`.
Using the same query, `select 1`:
1.2.0: `Done. Returned value was: [[1]]`
1.3.0 and above:
```
Running statement: select 1, parameters: None
[2023-05-30, 11:57:37 UTC] {{python.py:177}} INFO - Done. Returned value was: None
```
### What you think should happen instead
I would expect that running the query `select 1` on Presto would provide the same result whether the environment is running apache-airflow-providers-common-sql 1.2.0 or 1.5.1.
### How to reproduce
Run the following query: `PrestoHook(conn_id).get_records("select 1")`.
Ensure that the requirements are as listed below.
### Operating System
NAME="Amazon Linux" VERSION="2" ID="amzn" ID_LIKE="centos rhel fedora"
### Versions of Apache Airflow Providers
apache-airflow==2.4.3
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-common-sql==1.5.1
apache-airflow-providers-ftp==3.1.0
apache-airflow-providers-google==8.4.0
apache-airflow-providers-http==4.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-jenkins==3.0.0
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-presto==5.1.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-snowflake==3.3.0
apache-airflow-providers-sqlite==3.2.1
apache-airflow-providers-trino==4.1.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31612 | https://github.com/apache/airflow/pull/35132 | 789222cb1378079e2afd24c70c1a6783b57e27e6 | 8ef2a9997d8b6633ba04dd9f752f504a2ce93e25 | "2023-05-30T12:19:40Z" | python | "2023-10-23T15:40:20Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,584 | ["airflow/providers/google/cloud/hooks/bigquery.py", "airflow/providers/google/cloud/triggers/bigquery.py", "tests/providers/google/cloud/hooks/test_bigquery.py", "tests/providers/google/cloud/triggers/test_bigquery.py"] | BigQueryInsertJobOperator not exiting deferred state | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Using Apache Airflow 2.4.3 and apache airflow google provider 8.4 (also tried with 10.1.0).
We have a query that in production should run for a long time, so we wanted to make the BigQueryInsertJobOperator deferrable.
Making the operator deferrable runs the job, but the UI and triggerer process don't seem to be notified that the operator has finished.
I have validated that the query is actually run as the data appears in the table, but the operator gets stuck in a deferred state.
### What you think should happen instead
After the BigQuery job is finished, the operator should exit its deferred state.
### How to reproduce
Skeleton of the code used
```python
with DAG(
    dag_id="some_dag_id",
    schedule="@daily",
    catchup=False,
    start_date=pendulum.datetime(2023, 5, 8),
):
    extract_data = BigQueryInsertJobOperator(
        task_id="extract_data",
        impersonation_chain=GCP_ASTRO_TEAM_SA.get(),
        params={"dst_table": _DST_TABLE, "lookback_days": _LOOKBACK_DAYS},
        configuration={
            "query": {
                "query": "{% include 'sql/sql_file.sql' %}",
                "useLegacySql": False,
            }
        },
        outlets=DATASET,
        execution_timeout=timedelta(hours=2, minutes=30),
        deferrable=True,
    )
```
### Operating System
Mac OS Ventura 13.3.1
### Versions of Apache Airflow Providers
apache airflow google provider 8.4 (also tried with 10.1.0).
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
We're using astro, and locally the Airflow environment is started using `astro dev start`. The issue appears when running the DAG locally.
An entirely different issue (possibly unrelated) appears on the Sandbox deployment.
### Anything else
Every time the operator is marked as deferrable.
I noticed this a few days ago (the week starting Monday, May 22nd, 2023).
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31584 | https://github.com/apache/airflow/pull/31591 | fcbbf47864c251046de108aafdad394d66e1df23 | 81b85ebcbd241e1909793d7480aabc81777b225c | "2023-05-28T13:15:37Z" | python | "2023-07-29T07:33:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,547 | ["airflow/www/views.py"] | Tag filter doesn't sort tags alphabetically | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow v: 2.6.0
This has been an issue since 2.4.0 for us at least. We recently did a refactor of many of our 160+ DAGs, and part of that process was to remove some tags that we didn't want anymore. Unfortunately, the old tags were still left behind when we deployed our new image with the updated DAGs (it's been a consistent thing across several Airflow versions for us). There is also the issue that the tag filter doesn't sort our tags alphabetically.
I tried to truncate the dag_tag table, and that did help to get rid of the old tags. However, the sorting issue remains. Example:

On one of our dev environments, we have just about 10 DAGs with a similar sorting problem, and the dag_tag table had 18 rows. I took a backup of it and truncated the dag_tag table, which was almost instantly refilled (I guess logs are DEBUG level on that, so I saw nothing). This did not initially fix the sorting problem, but after a couple of truncates, things got weird, and all the tags were sorted as expected, and the row count in the dag_tag table was now just 15, so 3 rows were removed in all. We also added a new DAG in there with a tag "arjun", which also got listed first - so all sorted on that environment.
Summary:
1. Truncating of the dag_tag table got rid of the old tags that we no longer have in our DAGs.
2. The tags are still sorted incorrectly in the filter (see image).
It seems that the logic here is contained in `www/static/js/dags.js`. I am willing to submit a PR if I can get some guidance :)
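For what it's worth, the symptom matches plain lexicographic ordering, where all uppercase letters sort before lowercase ones; a case-insensitive sort key would fix it. A minimal Python illustration (the web UI's actual fix may differ):

```python
tags = ["UPDATED", "app-sre", "Alert", "arjun"]

# default ordering compares code points, so 'U' (85) sorts before 'a' (97)
print(sorted(tags))                    # ['Alert', 'UPDATED', 'app-sre', 'arjun']
print(sorted(tags, key=str.casefold))  # ['Alert', 'app-sre', 'arjun', 'UPDATED']
```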
### What you think should happen instead
_No response_
### How to reproduce
N/A
### Operating System
debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31547 | https://github.com/apache/airflow/pull/31553 | 6f86b6cd070097dafca196841c82de91faa882f4 | 24e52f92bd9305bf534c411f9455460060515ea7 | "2023-05-25T16:08:43Z" | python | "2023-05-26T16:31:37Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,526 | ["airflow/models/skipmixin.py", "airflow/operators/python.py", "airflow/operators/subdag.py", "tests/operators/test_python.py", "tests/operators/test_subdag_operator.py"] | Short circuit task in expanded task group fails when it returns false | ### Apache Airflow version
2.6.1
### What happened
I have a short-circuit task which is in a task group that is expanded. The task works correctly when it returns true, but fails when it returns false, with the following error:
```
sqlalchemy.exc.IntegrityError: (psycopg2.errors.ForeignKeyViolation) insert or update on table "xcom" violates foreign key constraint "xcom_task_instance_fkey"
DETAIL: Key (dag_id, task_id, run_id, map_index)=(pipeline_output_to_s3, transfer_output_file.already_in_manifest_short_circuit, manual__2023-05-24T20:21:35.420606+00:00, -1) is not present in table "task_instance".
```
It looks like it sets the map index to -1 when false is returned, which is causing the issue.
If one task fails this way in the task group, all other mapped tasks fail as well, even if the short circuit returns true.
When this error occurs, all subsequent DAG runs will be stuck indefinitely in the running state unless the DAG is deleted.
### What you think should happen instead
Returning false from the short circuit operator should skip downstream tasks without affecting other task groups / map indexes or subsequent DAG runs.
### How to reproduce
Include a short-circuit operator in a mapped task group and have it return false.
### Operating System
Red Hat Enterprise Linux 7
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31526 | https://github.com/apache/airflow/pull/31541 | c356e4fc22abc77f05aa136700094a882f2ca8c0 | e2da3151d49dae636cb6901f3d3e124a49cbf514 | "2023-05-24T20:37:27Z" | python | "2023-05-30T10:42:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,522 | ["airflow/api/common/airflow_health.py", "airflow/api_connexion/endpoints/health_endpoint.py", "airflow/www/views.py", "tests/api/__init__.py", "tests/api/common/__init__.py", "tests/api/common/test_airflow_health.py"] | `/health` endpoint missed when adding triggerer health status reporting | ### Apache Airflow version
main (development)
### What happened
https://github.com/apache/airflow/pull/27755 added the triggerer to the REST API health endpoint, but not to the main one served on `/health`.
### What you think should happen instead
As documented [here](https://airflow.apache.org/docs/apache-airflow/2.6.1/administration-and-deployment/logging-monitoring/check-health.html#webserver-health-check-endpoint), the `/health` endpoint should include triggerer info, as shown on `/api/v1/health`.
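For reference, the REST endpoint's payload has roughly this shape (heartbeat timestamps are illustrative); the `triggerer` block is what the plain `/health` endpoint was missing:

```json
{
  "metadatabase": {"status": "healthy"},
  "scheduler": {
    "status": "healthy",
    "latest_scheduler_heartbeat": "2023-05-24T20:00:00+00:00"
  },
  "triggerer": {
    "status": "healthy",
    "latest_triggerer_heartbeat": "2023-05-24T20:00:00+00:00"
  }
}
```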
### How to reproduce
Compare `/api/v1/health` and `/health` responses.
### Operating System
mac os
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31522 | https://github.com/apache/airflow/pull/31529 | afa9ead4cea767dfc4b43e6f301e6204f7521e3f | f048aba47e079e0c81417170a5ac582ed00595c4 | "2023-05-24T20:08:34Z" | python | "2023-05-26T20:22:10Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,509 | ["airflow/cli/commands/user_command.py"] | Unable to delete user via CLI | ### Apache Airflow version
2.6.1
### What happened
I am unable to delete users via the "delete" command.
I am trying to create a new user and delete the default admin user, so I tried running the command `airflow users delete -u admin`. Running this command gave the following error output:
```
Feature not implemented,tasks route disabled
/usr/local/lib/python3.10/site-packages/flask_limiter/extension.py:293 UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified. This is not recommended for production use. See: https://flask-limiter.readthedocs.io#configuring-a-storage-backend for documentation about configuring the storage backend.
/usr/local/lib/python3.10/site-packages/astronomer/airflow/version_check/update_checks.py:440 UserWarning: The setup method 'app_context_processor' can no longer be called on the blueprint 'UpdateAvailableView'. It has already been registered at least once, any changes will not be applied consistently.
Make sure all imports, decorators, functions, etc. needed to set up the blueprint are done before registering it.
This warning will become an exception in Flask 2.3.
/usr/local/lib/python3.10/site-packages/flask/blueprints.py:673 UserWarning: The setup method 'record_once' can no longer be called on the blueprint 'UpdateAvailableView'. It has already been registered at least once, any changes will not be applied consistently.
Make sure all imports, decorators, functions, etc. needed to set up the blueprint are done before registering it.
This warning will become an exception in Flask 2.3.
/usr/local/lib/python3.10/site-packages/flask/blueprints.py:321 UserWarning: The setup method 'record' can no longer be called on the blueprint 'UpdateAvailableView'. It has already been registered at least once, any changes will not be applied consistently.
Make sure all imports, decorators, functions, etc. needed to set up the blueprint are done before registering it.
This warning will become an exception in Flask 2.3.
/usr/local/lib/python3.10/site-packages/airflow/www/fab_security/sqla/manager.py:151 SAWarning: Object of type <Role> not in session, delete operation along 'User.roles' won't proceed
[2023-05-24T12:34:19.438+0000] {manager.py:154} ERROR - Remove Register User Error: (psycopg2.errors.ForeignKeyViolation) update or delete on table "ab_user" violates foreign key constraint "ab_user_role_user_id_fkey" on table "ab_user_role"
DETAIL: Key (id)=(4) is still referenced from table "ab_user_role".
[SQL: DELETE FROM ab_user WHERE ab_user.id = %(id)s]
[parameters: {'id': 4}]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
Failed to delete user
```
Deleting via the UI works fine.
The error also occurs for users that have a different role, such as Viewer.
### What you think should happen instead
No error should occur and the specified user should be deleted.
### How to reproduce
Run a Dag with a task that is a BashOperator that deletes the user e.g.:
```python
remove_default_admin_user_task = BashOperator(task_id="remove_default_admin_user",
bash_command="airflow users delete -u admin")
```
### Operating System
Docker containers run in Debian 10
### Versions of Apache Airflow Providers
Astronomer
### Deployment
Astronomer
### Deployment details
I use Astronomer 8.0.0
### Anything else
Always. It also occurs when run inside the webserver Docker container, or when run via the astro CLI with the command `astro dev run airflow users delete -u admin`.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31509 | https://github.com/apache/airflow/pull/31539 | 0fd42ff015be02d1a6a6c2e1a080f8267194b3a5 | 3ec66bb7cc686d060ff728bb6bf4d4e70e387ae3 | "2023-05-24T12:40:00Z" | python | "2023-05-25T19:45:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,499 | ["airflow/providers/databricks/operators/databricks_sql.py", "tests/providers/databricks/operators/test_databricks_sql.py"] | XCom - Attribute Error when serializing output of `Merge Into` databricks sql command. | ### Apache Airflow version
2.6.1
### What happened
After upgrading from Airflow 2.5.3 to 2.6.1, the DAG started to fail, and it's related to XCom serialization.
I noticed that something has changed with regard to serializing XCom:
key | Value Version 2.5.3 | Value Version 2.6.1 | result
-- | -- | -- | --
return_value | [[['Result', 'string', None, None, None, None, None]], []] | (['(num_affected_rows,bigint,None,None,None,None,None)', '(num_inserted_rows,bigint,None,None,None,None,None)'],[]) | ✅
return_value | [[['Result', 'string', None, None, None, None, None]], []] | (['(Result,string,None,None,None,None,None)'],[]) | ✅
return_value | [[['num_affected_rows', 'bigint', None, None, None, None, None], ['num_updated_rows', 'bigint', None, None, None, None, None], ['num_deleted_rows', 'bigint', None, None, None, None, None], ['num_inserted_rows', 'bigint', None, None, None, None, None]], [[1442, 605, 0, 837]]] | `AttributeError: __name__. Did you mean: '__ne__'?` | ❌
Query syntax that produced an error: MERGE INTO https://docs.databricks.com/sql/language-manual/delta-merge-into.html
Stacktrace included below:
```
[2023-05-24, 01:12:43 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 2354, in xcom_push
XCom.set(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/session.py", line 73, in wrapper
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 237, in set
value = cls.serialize_value(
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/xcom.py", line 632, in serialize_value
return json.dumps(value, cls=XComEncoder).encode("UTF-8")
File "/usr/local/lib/python3.10/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 102, in encode
o = self.default(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/json.py", line 91, in default
return serialize(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 144, in serialize
return encode(classname, version, serialize(data, depth + 1))
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in serialize
return [serialize(d, depth + 1) for d in o]
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in <listcomp>
return [serialize(d, depth + 1) for d in o]
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in serialize
return [serialize(d, depth + 1) for d in o]
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 123, in <listcomp>
return [serialize(d, depth + 1) for d in o]
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/serialization/serde.py", line 132, in serialize
qn = qualname(o)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/utils/module_loading.py", line 47, in qualname
return f"{o.__module__}.{o.__name__}"
File "/home/airflow/.local/lib/python3.10/site-packages/databricks/sql/types.py", line 161, in __getattr__
raise AttributeError(item)
AttributeError: __name__. Did you mean: '__ne__'?
```
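The traceback bottoms out in `qualname`, and the failure mode can be reproduced with the standard library alone (a sketch of the mechanism, not Airflow's or Databricks' actual code): serde builds `f"{o.__module__}.{o.__name__}"`, and the Databricks `Row` type implements `__getattr__` for column access, raising `AttributeError` for any attribute it doesn't know, including `__name__`:

```python
class Row:
    """Stand-in for databricks.sql.types.Row: column access goes through __getattr__."""
    def __getattr__(self, item):
        raise AttributeError(item)

def qualname(o):
    # same expression as in the traceback (airflow.utils.module_loading.qualname)
    return f"{o.__module__}.{o.__name__}"

try:
    qualname(Row())
except AttributeError as exc:
    print(exc)  # __name__
```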
### What you think should happen instead
Serialization should finish without an exception raised.
### How to reproduce
1. DAG file with the declared operator:
```python
task = DatabricksSqlOperator(
task_id="task",
databricks_conn_id="databricks_conn_id",
sql_endpoint_name="name",
sql="file.sql"
)
```
file.sql
```sql
MERGE INTO table_name
ON condition
WHEN MATCHED THEN UPDATE
WHEN NOT MATCHED THEN INSERT
```
https://docs.databricks.com/sql/language-manual/delta-merge-into.html
Query output is a table
num_affected_rows | num_updated_rows | num_deleted_rows | num_inserted_rows
-- | -- | -- | --
0 | 0 | 0 | 0
EDIT: It also happens for a SELECT command.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==8.0.0
apache-airflow-providers-cncf-kubernetes==6.1.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-databricks==4.1.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==10.0.0
apache-airflow-providers-hashicorp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-mysql==5.0.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.6.0
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31499 | https://github.com/apache/airflow/pull/31780 | 1aa9e803c26b8e86ab053cfe760153fc286e177c | 049c6184b730a7ede41db9406654f054ddc8cc5f | "2023-05-24T08:32:41Z" | python | "2023-06-08T10:49:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,460 | ["airflow/models/connection.py", "tests/models/test_connection.py"] | Add capability in Airflow connections to validate host | ### Apache Airflow version
2.6.1
### What happened
While creating connections in Airflow, the form doesn't check the correctness of the format of the host provided. For instance, we can proceed with something like this, which is not a valid URL: `spark://k8s://100.68.0.1:443?deploy-mode=cluster`. It won't fail immediately, but will return faulty hosts and other details if called later.
Motivation:
https://github.com/apache/airflow/pull/31376#discussion_r1200112106
### What you think should happen instead
The Connection form can have a validator which checks for these scenarios and reports any issue up front, to save developers' and users' time later.
### How to reproduce
1. Go to airflow connections form
2. Fill in connection host as: `spark://k8s://100.68.0.1:443?deploy-mode=cluster`, other details can be anything
3. Create the connection
4. Run `airflow connections get <name>`
5. The host and schema will be wrong
### Operating System
Macos
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31460 | https://github.com/apache/airflow/pull/31465 | 232771869030d708c57f840aea735b18bd4bffb2 | 0560881f0eaef9c583b11e937bf1f79d13e5ac7c | "2023-05-22T09:50:46Z" | python | "2023-06-19T09:32:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,440 | ["airflow/example_dags/example_params_ui_tutorial.py", "airflow/www/static/js/trigger.js", "airflow/www/templates/airflow/trigger.html", "docs/apache-airflow/core-concepts/params.rst"] | Multi-Select, Text Proposals and Value Labels for Trigger Forms | ### Description
After the release of Airflow 2.6.0 I was integrating some forms into our setup and was missing some option selections, and some nice features to make selections user friendly.
I'd like to contribute a few features to the user forms:
* A select box option where proposals are made but the user is not limited to the hard `enum` list (`enum` restricts user input to only the options provided)
* A multi-option pick list (because sometimes a single selection is just not enough)
* Labels, so that the technical values used to control the DAG can differ from what is presented as options to the user
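For context, the hard `enum` restriction mentioned in the first bullet behaves roughly like this toy check (illustrative only; Airflow's actual `Param` validation is JSON-Schema based):

```python
# Minimal sketch of enum-style validation: any value outside the options
# list is rejected outright, which is what the proposal above wants to
# relax into non-binding "proposals" and multi-select.
def validate(value, schema):
    enum = schema.get("enum")
    if enum is not None and value not in enum:
        raise ValueError(f"{value!r} is not one of {enum}")
    return value

schema = {"type": "string", "enum": ["red", "blue"]}
print(validate("blue", schema))   # blue
# validate("green", schema) would raise ValueError
```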
### Use case/motivation
After the initial release of UI trigger forms, add more features (incrementally)
### Related issues
Relates or potentially has a conflict with #31299, so this should be merged before.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31440 | https://github.com/apache/airflow/pull/31441 | 1ac35e710afc6cf5ea4466714b18efacdc44e1f7 | c25251cde620481592392e5f82f9aa8a259a2f06 | "2023-05-20T15:31:28Z" | python | "2023-05-22T14:33:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,432 | ["airflow/providers/google/cloud/operators/bigquery.py", "airflow/providers/google/cloud/triggers/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py", "tests/providers/google/cloud/triggers/test_bigquery.py"] | `BigQueryGetDataOperator`'s query job is bugged in deferrable mode | ### Apache Airflow version
main (development)
### What happened
1. When not providing `project_id` to `BigQueryGetDataOperator` in deferrable mode (`project_id=None`), the query generated by `generate_query` method is bugged, i.e.,:
````sql
from `None.DATASET.TABLE_ID` limit ...
````
2. The `as_dict` param does not work in `BigQueryGetDataOperator`.
### What you think should happen instead
1. When `project_id` is `None` - it should be removed from the query along with the trailing dot, i.e.,:
````sql
from `DATASET.TABLE_ID` limit ...
````
2. `as_dict` should be added to the serialization method of `BigQueryGetDataTrigger`.
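A minimal sketch of how the table reference could be assembled so the `None.` prefix disappears (a hypothetical helper for illustration, not the operator's actual code):

```python
def build_table_reference(dataset_id, table_id, project_id=None):
    # Drop the project segment (and its trailing dot) when no project_id
    # is given, instead of interpolating the string "None" into the query.
    parts = [p for p in (project_id, dataset_id, table_id) if p]
    return "`{}`".format(".".join(parts))

print(build_table_reference("DATASET", "TABLE_ID"))
# `DATASET.TABLE_ID`
print(build_table_reference("DATASET", "TABLE_ID", project_id="PROJECT"))
# `PROJECT.DATASET.TABLE_ID`
```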
### How to reproduce
1. Create a DAG file with `BigQueryGetDataOperator` defined as follows:
```python
BigQueryGetDataOperator(
task_id="bq_get_data_op",
# project_id="PROJECT_ID", <-- Not provided
dataset_id="DATASET",
table_id="TABLE",
use_legacy_sql=False,
deferrable=True
)
````
2. Create a DAG file with `BigQueryGetDataOperator` defined as follows:
```python
BigQueryGetDataOperator(
task_id="bq_get_data_op",
project_id="PROJECT_ID",
dataset_id="DATASET",
table_id="TABLE",
use_legacy_sql=False,
deferrable=True,
as_dict=True
)
````
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
The `generate_query` method is not unit tested (a unit test would have caught this bug in the first place); it would be better to add one.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31432 | https://github.com/apache/airflow/pull/31433 | 0e8bff9c4ec837d086dbe49b3d583a8d23f49e0e | 0d6e626b050a860462224ad64dc5e9831fe8624d | "2023-05-19T18:20:45Z" | python | "2023-05-22T18:20:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,431 | ["airflow/migrations/versions/0125_2_6_2_add_onupdate_cascade_to_taskmap.py", "airflow/migrations/versions/0126_2_7_0_add_index_to_task_instance_table.py", "airflow/models/taskmap.py", "docs/apache-airflow/img/airflow_erd.sha256", "docs/apache-airflow/img/airflow_erd.svg", "docs/apache-airflow/migrations-ref.rst", "tests/models/test_taskinstance.py"] | Clearing a task flow function executed earlier with task changed to mapped task crashes scheduler | ### Apache Airflow version
main (development)
### What happened
Clearing a task flow function that was executed earlier as a regular (non-mapped) task, after it has been changed to a mapped task, crashes the scheduler. It seems the stored TaskMap has a foreign key reference by map_index which needs to be cleared before execution.
```
airflow scheduler
/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/cli/cli_config.py:1001 DeprecationWarning: The namespace option in [kubernetes] has been moved to the namespace option in [kubernetes_executor] - the old setting has been used, but please update your config.
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py:196 DeprecationWarning: The '[celery] task_adoption_timeout' config option is deprecated. Please update your config to use '[scheduler] task_queued_timeout' instead.
/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py:201 DeprecationWarning: The worker_pods_pending_timeout option in [kubernetes] has been moved to the worker_pods_pending_timeout option in [kubernetes_executor] - the old setting has been used, but please update your config.
/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py:206 DeprecationWarning: The '[kubernetes_executor] worker_pods_pending_timeout' config option is deprecated. Please update your config to use '[scheduler] task_queued_timeout' instead.
[2023-05-19T23:41:07.907+0530] {executor_loader.py:114} INFO - Loaded executor: SequentialExecutor
[2023-05-19 23:41:07 +0530] [15527] [INFO] Starting gunicorn 20.1.0
[2023-05-19 23:41:07 +0530] [15527] [INFO] Listening at: http://[::]:8793 (15527)
[2023-05-19 23:41:07 +0530] [15527] [INFO] Using worker: sync
[2023-05-19 23:41:07 +0530] [15528] [INFO] Booting worker with pid: 15528
[2023-05-19T23:41:07.952+0530] {scheduler_job_runner.py:789} INFO - Starting the scheduler
[2023-05-19T23:41:07.952+0530] {scheduler_job_runner.py:796} INFO - Processing each file at most -1 times
[2023-05-19T23:41:07.954+0530] {scheduler_job_runner.py:1542} INFO - Resetting orphaned tasks for active dag runs
[2023-05-19 23:41:07 +0530] [15529] [INFO] Booting worker with pid: 15529
[2023-05-19T23:41:08.567+0530] {scheduler_job_runner.py:853} ERROR - Exception when executing SchedulerJob._run_scheduler_loop
Traceback (most recent call last):
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 836, in _execute
self._run_scheduler_loop()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 970, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 1052, in _do_scheduling
callback_tuples = self._schedule_all_dag_runs(guard, dag_runs, session)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/retries.py", line 90, in wrapped_function
for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs):
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __iter__
do = self.iter(retry_state=retry_state)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/tenacity/__init__.py", line 349, in iter
return fut.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/retries.py", line 99, in wrapped_function
return func(*args, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 1347, in _schedule_all_dag_runs
callback_tuples = [(run, self._schedule_dag_run(run, session=session)) for run in dag_runs]
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2811, in __iter__
return self._iter().__iter__()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2818, in _iter
result = self.session.execute(
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1669, in execute
conn = self._connection_for_bind(bind, close_with_result=True)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1519, in _connection_for_bind
return self._transaction._connection_for_bind(
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 721, in _connection_for_bind
self._assert_active()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 601, in _assert_active
raise sa_exc.PendingRollbackError(
sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (psycopg2.errors.ForeignKeyViolation) update or delete on table "task_instance" violates foreign key constraint "task_map_task_instance_fkey" on table "task_map"
DETAIL: Key (dag_id, task_id, run_id, map_index)=(bash_simple, get_command, manual__2023-05-18T13:54:01.345016+00:00, -1) is still referenced from table "task_map".
[SQL: UPDATE task_instance SET map_index=%(map_index)s, updated_at=%(updated_at)s WHERE task_instance.dag_id = %(task_instance_dag_id)s AND task_instance.task_id = %(task_instance_task_id)s AND task_instance.run_id = %(task_instance_run_id)s AND task_instance.map_index = %(task_instance_map_index)s]
[parameters: {'map_index': 0, 'updated_at': datetime.datetime(2023, 5, 19, 18, 11, 8, 90512, tzinfo=Timezone('UTC')), 'task_instance_dag_id': 'bash_simple', 'task_instance_task_id': 'get_command', 'task_instance_run_id': 'manual__2023-05-18T13:54:01.345016+00:00', 'task_instance_map_index': -1}]
(Background on this error at: http://sqlalche.me/e/14/gkpj) (Background on this error at: http://sqlalche.me/e/14/7s2a)
[2023-05-19T23:41:08.572+0530] {scheduler_job_runner.py:865} INFO - Exited execute loop
Traceback (most recent call last):
File "/home/karthikeyan/stuff/python/airflow/.env/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/cli/cli_config.py", line 51, in command
return func(*args, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/cli.py", line 112, in wrapper
return f(*args, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 77, in scheduler
_run_scheduler_job(job_runner, skip_serve_logs=args.skip_serve_logs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/cli/commands/scheduler_command.py", line 42, in _run_scheduler_job
run_job(job=job_runner.job, execute_callable=job_runner._execute)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/session.py", line 76, in wrapper
return func(*args, session=session, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/job.py", line 284, in run_job
return execute_job(job, execute_callable=execute_callable)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/job.py", line 313, in execute_job
ret = execute_callable()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 836, in _execute
self._run_scheduler_loop()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 970, in _run_scheduler_loop
num_queued_tis = self._do_scheduling(session)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 1052, in _do_scheduling
callback_tuples = self._schedule_all_dag_runs(guard, dag_runs, session)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/retries.py", line 90, in wrapped_function
for attempt in run_with_db_retries(max_retries=retries, logger=logger, **retry_kwargs):
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __iter__
do = self.iter(retry_state=retry_state)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/tenacity/__init__.py", line 349, in iter
return fut.result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/utils/retries.py", line 99, in wrapped_function
return func(*args, **kwargs)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/airflow/jobs/scheduler_job_runner.py", line 1347, in _schedule_all_dag_runs
callback_tuples = [(run, self._schedule_dag_run(run, session=session)) for run in dag_runs]
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2811, in __iter__
return self._iter().__iter__()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2818, in _iter
result = self.session.execute(
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1669, in execute
conn = self._connection_for_bind(bind, close_with_result=True)
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1519, in _connection_for_bind
return self._transaction._connection_for_bind(
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 721, in _connection_for_bind
self._assert_active()
File "/home/karthikeyan/stuff/python/airflow/.env/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 601, in _assert_active
raise sa_exc.PendingRollbackError(
sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (psycopg2.errors.ForeignKeyViolation) update or delete on table "task_instance" violates foreign key constraint "task_map_task_instance_fkey" on table "task_map"
DETAIL: Key (dag_id, task_id, run_id, map_index)=(bash_simple, get_command, manual__2023-05-18T13:54:01.345016+00:00, -1) is still referenced from table "task_map".
[SQL: UPDATE task_instance SET map_index=%(map_index)s, updated_at=%(updated_at)s WHERE task_instance.dag_id = %(task_instance_dag_id)s AND task_instance.task_id = %(task_instance_task_id)s AND task_instance.run_id = %(task_instance_run_id)s AND task_instance.map_index = %(task_instance_map_index)s]
[parameters: {'map_index': 0, 'updated_at': datetime.datetime(2023, 5, 19, 18, 11, 8, 90512, tzinfo=Timezone('UTC')), 'task_instance_dag_id': 'bash_simple', 'task_instance_task_id': 'get_command', 'task_instance_run_id': 'manual__2023-05-18T13:54:01.345016+00:00', 'task_instance_map_index': -1}]
(Background on this error at: http://sqlalche.me/e/14/gkpj) (Background on this error at: http://sqlalche.me/e/14/7s2a)
```
### What you think should happen instead
_No response_
### How to reproduce
1. Create the dag with `command = get_command(1, 1)` and trigger a dagrun waiting for it to complete
2. Now change this to `command = get_command.partial(arg1=[1]).expand(arg2=[1, 2, 3, 4])` so that the task is now mapped.
3. Clear the existing task that causes the scheduler to crash.
```python
import datetime, time
from airflow.operators.bash import BashOperator
from airflow import DAG
from airflow.decorators import task
with DAG(
dag_id="bash_simple",
start_date=datetime.datetime(2022, 1, 1),
schedule=None,
catchup=False,
) as dag:
@task
def get_command(arg1, arg2):
for i in range(10):
time.sleep(1)
print(i)
return ["echo hello"]
command = get_command(1, 1)
# command = get_command.partial(arg1=[1]).expand(arg2=[1, 2, 3, 4])
t1 = BashOperator.partial(task_id="bash").expand(bash_command=command)
if __name__ == "__main__":
dag.test()
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31431 | https://github.com/apache/airflow/pull/31445 | adf0cae48ad4e87612c00fe9facffca9b5728e7d | f6bb4746efbc6a94fa17b6c77b31d9fb17305ffc | "2023-05-19T18:12:39Z" | python | "2023-05-24T10:54:45Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,420 | ["airflow/api_connexion/openapi/v1.yaml", "airflow/api_connexion/schemas/task_instance_schema.py", "airflow/www/static/js/types/api-generated.ts", "tests/api_connexion/endpoints/test_task_instance_endpoint.py"] | allow other states than success/failed in tasks by REST API | ### Apache Airflow version
main (development)
### What happened
see also: https://github.com/apache/airflow/issues/25463
From the conversation there, it sounds like it's intended to be possible to set a task to "skipped" via REST API, but it's not.
Instead, the next best thing we have is marking the task as success and adding a note.
### What you think should happen instead
I see no reason why users shouldn't just be able to set tasks to "skipped". I could imagine reasons to avoid "queued" and some other more "internal" states, but skipped makes perfect sense to me for
- tasks that failed and got fixed externally
- tasks that failed but are now irrelevant because of a newer run
in case you still want to be able to see those in the future (actually, a custom state would be even better)
### How to reproduce
```python
# task is just a dictionary with the right values for below
r = requests.patch(
f"{base_url}/api/v1/dags/{task['dag_id']}/dagRuns/{task['dag_run_id']}/taskInstances/{task['task_id']}",
json={
"new_state": "skipped",
},
headers={"Authorization": token},
)
```
-> r.json() gives
`{'detail': "'skipped' is not one of ['success', 'failed'] - 'new_state'", 'status': 400, 'title': 'Bad Request', 'type': 'http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/stable-rest-api-ref.html#section/Errors/BadRequest'}`
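The 400 above boils down to a membership check against the schema's two allowed values; a toy re-check of the contract (not the API's actual validation code):

```python
# new_state values the endpoint's schema accepts today, per the error
# message above; "skipped" is the value this issue asks to add.
allowed_new_states = {"success", "failed"}

requested = "skipped"
print(requested in allowed_new_states)  # False -> 400 Bad Request
```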
### Operating System
/
### Versions of Apache Airflow Providers
/
### Deployment
Astronomer
### Deployment details
/
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31420 | https://github.com/apache/airflow/pull/31421 | 233663046d5210359ce9f4db2fe3db4f5c38f6ee | fba6f86ed7e59c166d0cf7717f1734ae30ba4d9c | "2023-05-19T14:28:31Z" | python | "2023-06-08T20:57:17Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,407 | ["airflow/jobs/scheduler_job_runner.py"] | Future DagRun rarely triggered by Race Condition when max_active_runs has reached its upper limit | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
In rare cases, the scheduler triggers a DagRun that is scheduled to run in the future.
Here are the conditions as I understand them.
- max_active_runs is set and upper limit is reached
- The preceding DagRun completes very slightly earlier than the following DagRun
Details in "Anything else".
### What you think should happen instead
DagRun should wait until scheduled
### How to reproduce
I have confirmed reproduction in Airflow 2.2.2 with the following code.
I reproduced it in my environment after running it for about half a day.
``` python
import copy
import logging
import time
from datetime import datetime, timedelta
import pendulum
from airflow import DAG, AirflowException
from airflow.sensors.python import PythonSensor
from airflow.utils import timezone
logger = logging.getLogger(__name__)
# very small min_file_process_interval may help to reproduce more. e.g. min_file_process_interval=3
def create_dag(interval):
with DAG(
dag_id=f"example_reproduce_{interval:0>2}",
schedule_interval=f"*/{interval} * * * *",
start_date=datetime(2021, 1, 1),
catchup=False,
max_active_runs=2,
tags=["example_race_condition"],
) as dag:
target_s = 10
def raise_if_future(context):
now = timezone.utcnow() - timedelta(seconds=30)
if context["data_interval_start"] > now:
raise AirflowException("DagRun supposed to be triggered in the future triggered")
def wait_sync():
now_dt = pendulum.now()
if now_dt.minute % (interval * 2) == 0:
# wait until target time to synchronize end time with the preceding job
target_dt = copy.copy(now_dt).replace(second=target_s + 2)
wait_sec = (target_dt - now_dt).total_seconds()
logger.info(f"sleep {now_dt} -> {target_dt} in {wait_sec} seconds")
if wait_sec > 0:
time.sleep(wait_sec)
return True
PythonSensor(
task_id="t2",
python_callable=wait_sync,
# To avoid getting stuck in SequentialExecutor, try to re-poke after the next job starts
poke_interval=interval * 60 + target_s,
mode="reschedule",
pre_execute=raise_if_future,
)
return dag
for i in [1, 2]:
globals()[i] = create_dag(i)
```
### Operating System
Amazon Linux 2
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
MWAA 2.2.2
### Anything else
The assumed flow and the associated actual query logs for the case max_active_runs=2 are shown below.
**The assumed flow**
1. The first DagRun (DR1) starts
2. The subsequent DagRun (DR2) starts
3. DR2 completes; the scheduler sets `next_dagrun_create_after=null` if max_active_runs is exceeded
    - https://github.com/apache/airflow/blob/2.2.2/airflow/jobs/scheduler_job.py#L931
4. DR1 completes; the scheduler calls dag_model.calculate_dagrun_date_fields() in SchedulerJobRunner._schedule_dag_run(). The session is NOT committed yet
    - note: the result of `calculate_dagrun_date_fields` is the old DR1-based value from `dag.get_run_data_interval(DR2)`.
    - https://github.com/apache/airflow/blob/2.2.2/airflow/jobs/scheduler_job.py#L1017
5. DagFileProcessorProcess modifies next_dagrun_create_after
    - note: the dag record fetched in step 4 is not locked, so the `Processor` can select it and update it.
    - https://github.com/apache/airflow/blob/2.2.2/airflow/dag_processing/processor.py#L646
6. The scheduler reflects the calculation result of DR1 to the DB via `guard.commit()`
    - note: only the `next_dagrun_create_after` column set to null in step 3 is updated, because SQLAlchemy only writes the difference between the record retrieved in step 4 and the calculation result
    - https://github.com/apache/airflow/blob/2.2.2/airflow/jobs/scheduler_job.py#L795
7. The scheduler triggers a future DagRun because the current time satisfies next_dagrun_create_after updated in step 6
**The associated query log**
``` sql
bb55c5b0bdce: /# grep "future_dagrun_00" /var/lib/postgresql/data/log/postgresql-2023-03-08_210056.log | grep "next_dagrun"
2023-03-08 22:00:01.678 UTC [57378] LOG: statement: UPDATE dag SET next_dagrun_create_after = NULL WHERE dag.dag_id = 'future_dagrun_00' # set in step 3
2023-03-08 22:00:08.162 UTC [57472] LOG: statement: UPDATE dag SET last_parsed_time = '2023-03-08T22:00:07.683786+00:00'::timestamptz, next_dagrun = '2023-03-08T22:00:00+00:00'::timestamptz, next_dagrun_data_interval_start = '2023-03-08T22:00:00+00:00'::timestamptz, next_dagrun_data_interval_end = '2023-03-08T23:00:00+00:00'::timestamptz, next_dagrun_create_after = '2023-03-08T23:00:00+00:00'::timestamptz WHERE dag.dag_id = 'future_dagrun_00' # set in step 5
2023-03-08 22:00:09.137 UTC [57475] LOG: statement: UPDATE dag SET next_dagrun_create_after = '2023-03-08T22:00:00+00:00'::timestamptz WHERE dag.dag_id = 'future_dagrun_00' # set in step 6
2023-03-08 22:00:10.418 UTC [57479] LOG: statement: UPDATE dag SET next_dagrun = '2023-03-08T23:00:00+00:00'::timestamptz, next_dagrun_data_interval_start = '2023-03-08T23:00:00+00:00'::timestamptz, next_dagrun_data_interval_end = '2023-03-09T00:00:00+00:00'::timestamptz, next_dagrun_create_after = '2023-03-09T00:00:00+00:00'::timestamptz WHERE dag.dag_id = 'future_dagrun_00' # set in step 7
```
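The diff-only commit in step 6 is the crux; the lost update can be sketched with a toy, stdlib-only simulation (the dict stands in for the `dag` row; none of this is Airflow code):

```python
# Step 3: the row's next_dagrun_create_after is nulled at max_active_runs.
db = {"next_dagrun_create_after": None}

# Step 4: the scheduler snapshots the row (unlocked) and computes a
# stale, DR1-based value without committing yet.
scheduler_snapshot = dict(db)
scheduler_result = {"next_dagrun_create_after": "2023-03-08T22:00:00"}

# Step 5: the DAG processor writes the fresh value concurrently.
db["next_dagrun_create_after"] = "2023-03-08T23:00:00"

# Step 6: an ORM-style commit writes only columns that differ from the
# scheduler's snapshot, silently clobbering the processor's update.
for col, new_value in scheduler_result.items():
    if scheduler_snapshot[col] != new_value:
        db[col] = new_value

print(db["next_dagrun_create_after"])  # 2023-03-08T22:00:00 (stale)
```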
From what I've read of the relevant code in the latest v2.6.1, I believe the problem continues.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31407 | https://github.com/apache/airflow/pull/31414 | e43206eb2e055a78814fcff7e8c35c6fd9c11e85 | b53e2aeefc1714d306f93e58d211ad9d52356470 | "2023-05-19T09:07:10Z" | python | "2023-08-08T12:22:13Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,387 | ["airflow/providers/google/cloud/operators/kubernetes_engine.py", "tests/providers/google/cloud/operators/test_kubernetes_engine.py"] | GKEStartPodOperator cannot connect to Private IP after upgrade to 2.6.x | ### Apache Airflow version
2.6.1
### What happened
After upgrading to 2.6.1, GKEStartPodOperator stopped creating pods. According to the release notes, we created a specific GCP connection. But the connection defaults to the GKE public endpoint (masked as XX.XX.XX.XX in the error message) instead of the private IP, which is preferable since our cluster does not have public internet access.
[2023-05-17T07:02:33.834+0000] {connectionpool.py:812} WARNING - Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f0e47049ba0>, 'Connection to XX.XX.XX.XX timed out. (connect timeout=None)')': /api/v1/namespaces/airflow/pods?labelSelector=dag_id%3Dmytask%2Ckubernetes_pod_operator%3DTrue%2Crun_id%3Dscheduled__2023-05-16T0700000000-8fb0e9fa9%2Ctask_id%3Dmytask%2Calready_checked%21%3DTrue%2C%21airflow-sa
It seems that with this change "use_private_ip" has been deprecated. What would be the workaround in this case to connect using the private endpoint?
Also, the doc has not been updated to reflect this change in behaviour: https://airflow.apache.org/docs/apache-airflow-providers-google/stable/operators/cloud/kubernetes_engine.html#using-with-private-cluster
### What you think should happen instead
There should still be an option to connect using the previous method with the "--private-ip" option, so that API calls to Kubernetes use the private endpoint of the GKE cluster.
### How to reproduce
1. Create DAG file with GKEStartPodOperator.
2. Deploy said DAG in an environment with no access to the public internet.
### Operating System
cos_containerd
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==5.2.2
apache-airflow-providers-google==8.11.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31387 | https://github.com/apache/airflow/pull/31391 | 45b6cfa138ae23e39802b493075bd5b7531ccdae | c082aec089405ed0399cfee548011b0520be0011 | "2023-05-18T13:53:30Z" | python | "2023-05-23T11:40:07Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,384 | ["dev/breeze/src/airflow_breeze/utils/run_utils.py"] | Breeze asset compilation causing OOM on dev-mode | ### Apache Airflow version
main (development)
### What happened
The asset compilation background thread is not killed when running `stop_airflow` or `breeze stop`.
Each webpack process takes a lot of memory, and each `start-airflow` starts 4-5 of them.
After a few breeze starts, we end up with 15+ webpack background processes that take more than 20G of RAM.
### What you think should happen instead
`run_compile_www_assets` should stop when running `stop_airflow` from tmux. It looks like it spawns a `compile-www-assets-dev` pre-commit in a subprocess that doesn't get killed when stopping breeze.
### How to reproduce
```
breeze start-airflow --backend postgres --python 3.8 --dev-mode
# Wait for tmux session to start
breeze_stop
breeze start-airflow --backend postgres --python 3.8 --dev-mode
# Wait for tmux session to start
breeze_stop
# do a couple more if needed
```
Open tmux and monitor your memory, and specifically webpack processes.
### Operating System
Ubuntu 20.04.6 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31384 | https://github.com/apache/airflow/pull/31403 | c63b7774cdba29394ec746b381f45e816dcb0830 | ac00547512f33b1222d699c7857108360d99b233 | "2023-05-18T11:42:08Z" | python | "2023-05-19T09:58:16Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,365 | ["airflow/www/templates/airflow/dags.html"] | The `Next Run` column name and tooltip is misleading | ### Description
> Expected date/time of the next DAG Run, or for dataset triggered DAGs, how many datasets have been updated since the last DAG Run
"Expected date/time of the next DAG Run" to me sounds like Run After.
Should the tooltip indicate something along the lines of "start interval of the next dagrun" or maybe the header Next Run is outdated? Something like "Next Data Interval"?
In the same vein, "Last Run" is equally confusing. The header could be "Last Data Interval", in addition to a tooltip that describes it as the data interval start of the last dagrun.
### Use case/motivation
Users confuse "Next Run" with when the next dagrun will be queued and run, and do not interpret it as the next dagrun's data interval start.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31365 | https://github.com/apache/airflow/pull/31467 | f1d484c46c18a83e0b8bc010044126dafe4467bc | 7db42fe6655c28330e80b8a062ef3e07968d6e76 | "2023-05-17T17:56:37Z" | python | "2023-06-01T15:54:42Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,351 | ["airflow/models/taskinstance.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py"] | Changing task from unmapped to mapped task with task instance note and task reschedule | ### Apache Airflow version
main (development)
### What happened
Changing a non-mapped task with a task instance note and task reschedule to a mapped task crashes the scheduler when the task is cleared for rerun. Related commit where a similar fix was done:
```
commit a770edfac493f3972c10a43e45bcd0e7cfaea65f
Author: Ephraim Anierobi <splendidzigy24@gmail.com>
Date:   Mon Feb 20 20:45:25 2023 +0100

    Fix Scheduler crash when clear a previous run of a normal task that is now a mapped task (#29645)

    The fix was to clear the db references of the taskinstances in XCom, RenderedTaskInstanceFields
    and TaskFail. That way, we are able to run the mapped tasks
```
### What you think should happen instead
_No response_
### How to reproduce
1. Create below dag file with BashOperator non-mapped.
2. Schedule a dag run and wait for it to finish.
3. Add a task instance note to bash operator.
4. Change t1 to ` t1 = BashOperator.partial(task_id="bash").expand(bash_command=command)` and return `["echo hello"]` from get_command.
5. Restart the scheduler and clear the task.
6. The scheduler crashes trying to use map_index, even though foreign key references to the task instance note and task reschedule exist.
```python
import datetime

from airflow.operators.bash import BashOperator
from airflow import DAG
from airflow.decorators import task

with DAG(dag_id="bash_simple", start_date=datetime.datetime(2022, 1, 1), schedule=None, catchup=False) as dag:

    @task
    def get_command(arg1, arg2):
        return "echo hello"

    command = get_command(1, 1)
    t1 = BashOperator(task_id="bash", bash_command=command)

if __name__ == '__main__':
    dag.test()
```
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31351 | https://github.com/apache/airflow/pull/31352 | b1ea3f32f9284c6f53bab343bdf79ab3081276a8 | f82246abe9491a49701abdb647be001d95db7e9f | "2023-05-17T11:59:30Z" | python | "2023-05-31T03:08:47Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,337 | ["airflow/providers/google/cloud/hooks/gcs.py"] | GCSHook support for cacheControl | ### Description
When a file is uploaded to GCS, [by default](https://cloud.google.com/storage/docs/metadata#cache-control), public files will get `Cache-Control: public, max-age=3600`.
I've tried setting `cors` for the whole bucket (didn't work) and setting `Cache-Control` on individual file (disappears on file re-upload from airflow)
Setting `metadata` on GCSHook is for a different field (it can't be used to set cache-control)
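To illustrate why a dedicated parameter is needed (the helper below is hypothetical, not part of GCSHook — it just shows that cache control and custom metadata are separate blob settings):

```python
def build_upload_settings(cache_control=None, metadata=None):
    """Sketch: settings a cache-control parameter on GCSHook.upload would carry.

    Cache control and custom metadata are distinct blob fields, which is
    why the existing `metadata` argument cannot express cache behavior.
    """
    settings = {}
    if cache_control is not None:
        settings["cache_control"] = cache_control  # e.g. "no-cache, max-age=0"
    if metadata is not None:
        settings["metadata"] = metadata  # custom metadata stays a separate field
    return settings
```

Re-applying the cache-control setting on every upload is what would keep it from disappearing on re-upload.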
### Use case/motivation
Allow GCSHook to set cache control rather than overriding the `upload` function
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31337 | https://github.com/apache/airflow/pull/31338 | ba3665f76a2205bad4553ba00537026a1346e9ae | 233663046d5210359ce9f4db2fe3db4f5c38f6ee | "2023-05-17T04:51:33Z" | python | "2023-06-08T20:51:38Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,335 | ["airflow/providers/cncf/kubernetes/triggers/pod.py", "tests/providers/cncf/kubernetes/triggers/test_pod.py", "tests/providers/google/cloud/triggers/test_kubernetes_engine.py"] | KPO deferable "random" false fail | ### Apache Airflow version
2.6.1
### What happened
With the KPO, and only in deferrable mode, I get "random" false failures.
The DAG:
```python
from pendulum import today
from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

dag = DAG(
    dag_id="kubernetes_dag",
    schedule_interval="0 0 * * *",
    start_date=today("UTC").add(days=-1)
)

with dag:
    cmd = "echo toto && sleep 22 && echo finish"
    KubernetesPodOperator.partial(
        task_id="task-one",
        namespace="default",
        kubernetes_conn_id="kubernetes_default",
        config_file="/opt/airflow/include/.kube/config",  # bug of deferrable corrected in 6.2.0
        name="airflow-test-pod",
        image="alpine:3.16.2",
        cmds=["sh", "-c", cmd],
        is_delete_operator_pod=True,
        deferrable=True,
        get_logs=True,
    ).expand(env_vars=[{"a": "a"} for _ in range(8)])
```

the log of the task in error :
[dag_id=kubernetes_dag_run_id=scheduled__2023-05-16T00_00_00+00_00_task_id=task-one_map_index=2_attempt=1.log](https://github.com/apache/airflow/files/11492973/dag_id.kubernetes_dag_run_id.scheduled__2023-05-16T00_00_00%2B00_00_task_id.task-one_map_index.2_attempt.1.log)
### What you think should happen instead
KPO should not fail (in deferrable mode) if the container successfully runs in K8s.
### How to reproduce
If I remove the **sleep 22** from the cmd, then I no longer see any random task failures.
### Operating System
ubuntu 22.04
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==6.1.0
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31335 | https://github.com/apache/airflow/pull/31348 | 57b7ba16a3d860268f03cd2619e5d029c7994013 | 8f5de83ee68c28100efc085add40ae4702bc3de1 | "2023-05-17T00:06:22Z" | python | "2023-06-29T14:55:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,311 | ["chart/files/pod-template-file.kubernetes-helm-yaml", "tests/charts/airflow_aux/test_pod_template_file.py"] | Worker pod template file doesn't have option to add priorityClassName | ### Apache Airflow version
2.6.0
### What happened
Worker pod template file doesn't have an option to add priorityClassName.
### What you think should happen instead
The Airflow workers deployment, however, has the option to add it via the override `airflow.workers.priorityClassName`. We should reuse this for the worker pod template file too.
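For illustration, the kind of chart values this would let users set for worker pods (a sketch; `high-priority` is a placeholder PriorityClass name):

```yaml
workers:
  priorityClassName: high-priority
```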
### How to reproduce
Trying to add a priorityClassName for the airflow worker pod doesn't work, unless we override the whole worker pod template with our own. But that is not preferable, as we would need to duplicate a lot of the existing template file.
### Operating System
Rhel8
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31311 | https://github.com/apache/airflow/pull/31328 | fbb095605ab009869ef021535c16b62a3d18a562 | 2c9ce803d744949674e4ec9ac88f73ad0a361399 | "2023-05-16T07:33:59Z" | python | "2023-06-01T00:27:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,304 | ["docs/apache-airflow/administration-and-deployment/logging-monitoring/logging-tasks.rst"] | Outdated 'airflow info' output in Logging for Tasks page | ### What do you see as an issue?
https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/logging-monitoring/logging-tasks.html#troubleshooting
The referenced `airflow info` format is very outdated.
### Solving the problem
Current output format is something like this:
```
Apache Airflow
version | 2.7.0.dev0
executor | LocalExecutor
task_logging_handler | airflow.utils.log.file_task_handler.FileTaskHandler
sql_alchemy_conn | postgresql+psycopg2://postgres:airflow@postgres/airflow
dags_folder | /files/dags
plugins_folder | /root/airflow/plugins
base_log_folder | /root/airflow/logs
remote_base_log_folder |
System info
OS | Linux
architecture | arm
uname | uname_result(system='Linux', node='fe54afd888cd', release='5.15.68-0-virt', version='#1-Alpine SMP Fri, 16 Sep
| 2022 06:29:31 +0000', machine='aarch64', processor='')
locale | ('en_US', 'UTF-8')
python_version | 3.7.16 (default, May 3 2023, 09:44:48) [GCC 10.2.1 20210110]
python_location | /usr/local/bin/python
Tools info
git | git version 2.30.2
ssh | OpenSSH_8.4p1 Debian-5+deb11u1, OpenSSL 1.1.1n 15 Mar 2022
kubectl | NOT AVAILABLE
gcloud | NOT AVAILABLE
cloud_sql_proxy | NOT AVAILABLE
mysql | mysql Ver 15.1 Distrib 10.5.19-MariaDB, for debian-linux-gnu (aarch64) using EditLine wrapper
sqlite3 | 3.34.1 2021-01-20 14:10:07 10e20c0b43500cfb9bbc0eaa061c57514f715d87238f4d835880cd846b9ealt1
psql | psql (PostgreSQL) 15.2 (Debian 15.2-1.pgdg110+1)
Paths info
airflow_home | /root/airflow
system_path | /files/bin/:/opt/airflow/scripts/in_container/bin/:/root/.local/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:
| /usr/sbin:/usr/bin:/sbin:/bin:/opt/airflow
python_path | /usr/local/bin:/opt/airflow:/usr/local/lib/python37.zip:/usr/local/lib/python3.7:/usr/local/lib/python3.7/lib-dynl
| oad:/usr/local/lib/python3.7/site-packages:/files/dags:/root/airflow/config:/root/airflow/plugins
airflow_on_path | True
Providers info
[too long to include]
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31304 | https://github.com/apache/airflow/pull/31336 | 6d184d3a589b988c306aa3614e0f83e514b3f526 | fc4f37b105ca0f03de7cc49ab4f00751287ae145 | "2023-05-16T01:46:27Z" | python | "2023-05-18T07:44:53Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,238 | ["airflow/providers/discord/notifications/__init__.py", "airflow/providers/discord/notifications/discord.py", "airflow/providers/discord/provider.yaml", "tests/providers/discord/notifications/__init__.py", "tests/providers/discord/notifications/test_discord.py"] | Discord notification | ### Description
The new [Slack notification](https://airflow.apache.org/docs/apache-airflow-providers-slack/stable/notifications/slack_notifier_howto_guide.html) feature allows users to send messages to a slack channel using the various [on_*_callbacks](https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/logging-monitoring/callbacks.html) at both the DAG level and Task level.
However, this solution needs a little more implementation (on top of the existing [Webhook](https://airflow.apache.org/docs/apache-airflow-providers-discord/stable/_api/airflow/providers/discord/hooks/discord_webhook/index.html) hook) to perform notifications on Discord.
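As an illustration of how small the missing piece is, a sketch of the webhook payload such a notifier would POST (`build_discord_payload` is hypothetical; the `content`/`username` fields and the 2000-character cap come from Discord's webhook API):

```python
def build_discord_payload(message: str, username: str = "Airflow"):
    # Discord webhook bodies use "content" for the message text and an
    # optional "username" override; content is capped at 2000 characters
    return {"content": message[:2000], "username": username}
```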
### Use case/motivation
Send Task/DAG status or other messages as notifications to a discord channel.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31238 | https://github.com/apache/airflow/pull/31273 | 3689cee485215651bdb5ef434f24ab8774995a37 | bdfebad5c9491234a78453856bd8c3baac98f75e | "2023-05-12T03:31:07Z" | python | "2023-06-16T05:49:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,236 | ["docs/apache-airflow/core-concepts/dags.rst"] | The @task.branch inside the dags.html seems to be incorrect | ### What do you see as an issue?
In the documentation page [https://airflow.apache.org/docs/apache-airflow/2.6.0/core-concepts/dags.html#branching](https://airflow.apache.org/docs/apache-airflow/2.6.0/core-concepts/dags.html#branching), there is an incorrect usage of the @task.branch decorator.
```python
@task.branch(task_id="branch_task")
def branch_func(ti):
    xcom_value = int(ti.xcom_pull(task_ids="start_task"))
    if xcom_value >= 5:
        return "continue_task"
    elif xcom_value >= 3:
        return "stop_task"
    else:
        return None

branch_op = branch_func()
```
This code snippet is incorrect, as it calls branch_func() without providing the required `ti` parameter.
### Solving the problem
The correct version may be something like this:
```python
def branch_func(**kwargs):
    ti: TaskInstance = kwargs["ti"]
```
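Whichever way `ti` is obtained, the branching decision itself can be factored into a plain function and checked in isolation (a sketch; `choose_branch` is an illustrative name, not part of the documentation):

```python
def choose_branch(xcom_value: int):
    # pure decision logic the corrected branch_func would delegate to
    if xcom_value >= 5:
        return "continue_task"
    elif xcom_value >= 3:
        return "stop_task"
    return None  # returning None skips all downstream branches
```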
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31236 | https://github.com/apache/airflow/pull/31265 | d6051fd10a0949264098af23ce74c76129cfbcf4 | d59b0533e18c7cf0ff17f8af50731d700a2e4b4d | "2023-05-12T01:53:07Z" | python | "2023-05-13T12:21:56Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,200 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/jobs/job.py", "airflow/jobs/scheduler_job_runner.py", "newsfragments/31277.significant.rst", "tests/jobs/test_base_job.py"] | Constant "The scheduler does not appear to be running" warning on the UI following 2.6.0 upgrade | ### Apache Airflow version
2.6.0
### What happened
Ever since we upgraded to Airflow 2.6.0 from 2.5.2, we have seen that there is a warning stating "The scheduler does not appear to be running" intermittently.
This warning goes away by simply refreshing the page, which conforms with our findings that the scheduler has not been down at all, at any point. By calling the /health endpoint constantly, we can get it to show an "unhealthy" status:
These are just approx. 6 seconds apart:
```
{"metadatabase": {"status": "healthy"}, "scheduler": {"latest_scheduler_heartbeat": "2023-05-11T07:42:36.857007+00:00", "status": "healthy"}}
{"metadatabase": {"status": "healthy"}, "scheduler": {"latest_scheduler_heartbeat": "2023-05-11T07:42:42.409344+00:00", "status": "unhealthy"}}
```
This causes no operational issues, but it is misleading for end-users. What could be causing this?
### What you think should happen instead
The warning should not be shown unless the last heartbeat was at least 30 sec earlier (default config).
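A minimal sketch of the check the `/health` endpoint is expected to perform, assuming the default 30-second `scheduler_health_check_threshold` — the function is illustrative, not Airflow's actual implementation:

```python
from datetime import datetime, timedelta

SCHEDULER_HEALTH_CHECK_THRESHOLD = 30  # seconds; Airflow's documented default

def scheduler_status(latest_heartbeat: datetime, now: datetime) -> str:
    # only report "unhealthy" once the heartbeat is older than the threshold
    if now - latest_heartbeat <= timedelta(seconds=SCHEDULER_HEALTH_CHECK_THRESHOLD):
        return "healthy"
    return "unhealthy"
```

Under this rule, a heartbeat only ~6 seconds old (as in the responses above) should never produce an "unhealthy" status.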
### How to reproduce
There are no concrete steps to reproduce it, but the warning appears in the UI after a few seconds of browsing around, or when refreshing the /health endpoint constantly.
### Operating System
Debian GNU/Linux 11
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.0.0
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==6.1.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-docker==3.6.0
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==10.0.0
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==6.0.0
apache-airflow-providers-microsoft-mssql==3.3.2
apache-airflow-providers-microsoft-psrp==2.2.0
apache-airflow-providers-microsoft-winrm==3.0.0
apache-airflow-providers-mysql==5.0.0
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-oracle==3.0.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-snowflake==4.0.5
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.6.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Deployed on AKS with helm
### Anything else
None more than in the description above.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31200 | https://github.com/apache/airflow/pull/31277 | 3193857376bc2c8cd2eb133017be1e8cbcaa8405 | f366d955cd3be551c96ad7f794e0b8525900d13d | "2023-05-11T07:51:57Z" | python | "2023-05-15T08:31:14Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,186 | ["airflow/www/static/js/dag/details/FilterTasks.tsx", "airflow/www/static/js/dag/details/dagRun/ClearRun.tsx", "airflow/www/static/js/dag/details/dagRun/MarkRunAs.tsx", "airflow/www/static/js/dag/details/index.tsx", "airflow/www/static/js/dag/details/taskInstance/taskActions/ClearInstance.tsx"] | Problems after redesign grid view | ### Apache Airflow version
2.6.0
### What happened
The changes in #30373 have had some unintended consequences.
- The clear task button can now go off screen if the dag / task name is long enough. This is rather unfortunate, since it is by far the most important button for fixing issues (hence the reason it takes up a lot of real estate).
- The above issue is exacerbated by the fact that the task name can also push the grid off screen. I now have dags where I can see the grid or the clear state button, but not both.
- Downstream and Recursive don't seem to be selected by default anymore for the clear task button. For some reason Recursive is only selected for the latest task (maybe this was already the case?).
The first two are an annoyance; the last one is preventing us from updating to 2.6.0.
### What you think should happen instead
- Downstream should be selected by default again. (and possibly Recursive)
- The clear task button should *always* be visible, no matter how implausibly small the viewport is.
- Ideally, long task names should no longer hide the grid.
### How to reproduce
To reproduce, just make a dag with a long name and some tasks with long names, and open the grid view on a small screen.
### Operating System
unix
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31186 | https://github.com/apache/airflow/pull/31232 | d1fe67184da26fb0bca2416e26f321747fa4aa5d | 03b04a3d54c0c2aff9873f88de116fad49f90600 | "2023-05-10T15:26:49Z" | python | "2023-05-12T14:27:03Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,183 | ["airflow/providers/cncf/kubernetes/operators/spark_kubernetes.py", "tests/providers/cncf/kubernetes/operators/test_spark_kubernetes.py"] | SparkKubernetesSensor: 'None' has no attribute 'metadata' | ### Apache Airflow version
2.6.0
### What happened
After upgrading to version 2.6.0, pipelines with SparkKubernetesOperator -> SparkKubernetesSensor stopped working correctly.
[this PR](https://github.com/apache/airflow/pull/29977) introduces some enhancements into the Spark Kubernetes logic: now SparkKubernetesOperator receives the log from spark pods (which is great), but it doesn't monitor the status of the pod, which means that if the spark application fails, the task in Airflow finishes successfully.
On the other hand, using the previous pipelines (Operator + Sensor) is impossible now, because SparkKubernetesSensor fails with `jinja2.exceptions.UndefinedError: 'None' has no attribute 'metadata'` as SparkKubernetesOperator is no longer pushing info to XCom.
### What you think should happen instead
Old pipelines should be compatible with Airflow 2.6.0, even though the log would be retrieved in two places - operator and sensor.
OR remove the sensor completely and implement all the functionality in the operator (log + status)
### How to reproduce
Create a DAG with two operators
```python
t1 = SparkKubernetesOperator(
    kubernetes_conn_id='common/kubernetes_default',
    task_id="task-submit",
    namespace="namespace",
    application_file="spark-applications/app.yaml",
    do_xcom_push=True,
    dag=dag,
)
t2 = SparkKubernetesSensor(
    task_id="task-sensor",
    namespace="namespace",
    application_name=f"{{{{ task_instance.xcom_pull(task_ids='task-submit')['metadata']['name'] }}}}",
    dag=dag,
    attach_log=True,
)
```
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.0.0
apache-airflow-providers-apache-spark==4.0.1
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==6.1.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-docker==3.6.0
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==10.0.0
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==6.0.0
apache-airflow-providers-microsoft-mssql==3.3.2
apache-airflow-providers-microsoft-psrp==2.2.0
apache-airflow-providers-microsoft-winrm==3.1.1
apache-airflow-providers-mysql==5.0.0
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-oracle==3.6.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-snowflake==4.0.5
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.6.0
apache-airflow-providers-telegram==4.0.0
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31183 | https://github.com/apache/airflow/pull/31798 | 771362af4784f3d913d6c3d3b44c78269280a96e | 6693bdd72d70989f4400b5807e2945d814a83b85 | "2023-05-10T11:42:40Z" | python | "2023-06-27T20:55:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,180 | ["docs/apache-airflow/administration-and-deployment/listeners.rst"] | Plugin for listeners - on_dag_run_running hook ignored | ### Apache Airflow version
2.6.0
### What happened
I created a plugin for custom listeners; the task-level listeners work fine, but the DAG-level listeners are not triggered.
The [docs](https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/listeners.html) state that listeners defined in `airflow/listeners/spec` should be supported.
```python
@hookimpl
def on_task_instance_failed(previous_state: TaskInstanceState, task_instance: TaskInstance, session):
    """
    This method is called when task state changes to FAILED.
    Through callback, parameters like previous_task_state, task_instance object can be accessed.
    This will give more information about current task_instance that has failed its dag_run,
    task and dag information.
    """
    print("This works fine")


@hookimpl
def on_dag_run_failed(dag_run: DagRun, msg: str):
    """
    This method is called when dag run state changes to FAILED.
    """
    print("This is not called!")
```
### What you think should happen instead
The DAG specs defined in `airflow/listeners/spec/dagrun.py` should be working.
### How to reproduce
Create a plugin and add the two hooks into a listener.
### Operating System
linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31180 | https://github.com/apache/airflow/pull/32269 | bc3b2d16d3563d5b9bccd283db3f9e290d1d823d | ab2c861dd8a96f22b0fda692368ce9b103175322 | "2023-05-10T09:41:08Z" | python | "2023-07-04T20:57:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,156 | ["setup.cfg", "setup.py"] | Searching task instances by state doesn't work | ### Apache Airflow version
2.6.0
### What happened
After specifying a state such as "Equal to" "failed", the search doesn't return anything; it just resets the whole page (the specified filter is gone)
https://github.com/apache/airflow/assets/14293802/5fb7f550-c09f-4040-963f-76dc0a2c1a53
### What you think should happen instead
_No response_
### How to reproduce
Go to "Browse" tab -> click "Task Instances" -> "Add Filter" -> "State" -> "Use anything (equal to, contains, etc)" -> Click "Search"
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31156 | https://github.com/apache/airflow/pull/31203 | d59b0533e18c7cf0ff17f8af50731d700a2e4b4d | 1133035f7912fb2d2612c7cee5017ebf01f8ec9d | "2023-05-09T14:40:44Z" | python | "2023-05-13T13:13:06Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,109 | ["airflow/providers/google/cloud/operators/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | Add support for standard SQL in `BigQueryGetDataOperator` | ### Description
Currently, the BigQueryGetDataOperator always utilizes legacy SQL when submitting jobs (set as the default by the BQ API). This approach may cause problems when using standard SQL features, such as names for projects, datasets, or tables that include hyphens (which is very common nowadays). We would like to make it configurable, so users can set a flag in the operator to enable the use of standard SQL instead.
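For illustration, a sketch of what a configurable flag would ultimately map to in the submitted job configuration (`build_query_job_config` is hypothetical; `useLegacySql` is the BigQuery job-configuration key):

```python
def build_query_job_config(query: str, use_legacy_sql: bool = False):
    # hypothetical helper: a `use_legacy_sql` flag on the operator would end
    # up toggling the "useLegacySql" key of the BigQuery query job config
    return {"configuration": {"query": {"query": query, "useLegacySql": use_legacy_sql}}}
```

Defaulting the flag to standard SQL (False) is what allows hyphenated project/dataset/table names in backticks.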
### Use case/motivation
When implementing #30887 to address #24460, I encountered some unusual errors, which were later determined to be related to the usage of hyphens in the GCP project ID name.
### Related issues
- #24460
- #28522 (PR) adds this parameter to `BigQueryCheckOperator`
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31109 | https://github.com/apache/airflow/pull/31190 | 24532312b694242ba74644fdd43a487e93122235 | d1fe67184da26fb0bca2416e26f321747fa4aa5d | "2023-05-06T14:04:34Z" | python | "2023-05-12T14:13:21Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,087 | ["Dockerfile.ci", "scripts/docker/entrypoint_ci.sh"] | Latest Botocore breaks SQS tests | ### Body
Our tests are broken in main due to latest botocore failing SQS tests.
Example here: https://github.com/apache/airflow/actions/runs/4887737387/jobs/8724954226
```
E botocore.exceptions.ClientError: An error occurred (400) when calling the SendMessage operation:
<ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"><Error><Type>Sender</Type>
<Code>AWS.SimpleQueueService.NonExistentQueue</Code><Message>The specified queue does not exist for this wsdl version.</Message><Detail/></Error>
<RequestId>ETDUP0OoJOXmn0WS6yWmB0dOhgYtpdVJCVwFWA28lYLKLmGJLAGu</RequestId></ErrorResponse>
```
The problem seems to come from botocore not recognizing the just-added queue:
```
QUEUE_NAME = "test-queue"
QUEUE_URL = f"https://{QUEUE_NAME}"
```
Even if we replace it with the full queue name that gets returned by the "create_queue" API call to `moto`, it still does not work with the latest botocore:
```
QUEUE_URL = f"https://sqs.us-east-1.amazonaws.com/123456789012/{QUEUE_NAME}"
```
This indicates it is likely a real botocore issue.
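For illustration, the shape a modern queue URL takes versus the test's placeholder — the regex below is an assumption for demonstration, not botocore's actual endpoint validation:

```python
import re

# region / 12-digit account id / queue name, as returned by create_queue
SQS_URL_RE = re.compile(r"^https://sqs\.[a-z0-9-]+\.amazonaws\.com/\d{12}/[\w-]+$")

def looks_like_real_queue_url(url: str) -> bool:
    return SQS_URL_RE.match(url) is not None
```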
## How to reproduce:
1. Get a working venv with the `[amazon]` extra of Airflow (or breeze). Should be (when constraints from main are used):
```
root@b0c430d9a328:/opt/airflow# pip list | grep botocore
aiobotocore 2.5.0
botocore 1.29.76
```
2. `pip uninstall aiobotocore`
3. `pip install --upgrade botocore`
```
root@b0c430d9a328:/opt/airflow# pip list | grep botocore
botocore 1.29.127
```
4. `pytest tests/providers/amazon/aws/sensors/test_sqs.py`
Result:
```
===== 4 failed, 7 passed in 2.43s ===
```
----------------------------------
Comparing it to "success case":
When you run it in breeze (with the current constrained botocore):
```
root@b0c430d9a328:/opt/airflow# pip list | grep botocore
aiobotocore 2.5.0
botocore 1.29.76
```
1. `pytest tests/providers/amazon/aws/sensors/test_sqs.py`
Result:
```
============ 11 passed in 4.57s =======
```
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/31087 | https://github.com/apache/airflow/pull/31103 | 41c87464428d8d31ba81444b3adf457bc968e11d | 49cc213919a7e2a5d4bdc9f952681fa4ef7bf923 | "2023-05-05T11:31:51Z" | python | "2023-05-05T20:32:51Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,084 | ["docs/docker-stack/build.rst", "docs/docker-stack/docker-examples/extending/add-airflow-configuration/Dockerfile"] | Changing configuration as part of the custom airflow docker image | ### What do you see as an issue?
https://airflow.apache.org/docs/docker-stack/build.html
These docs don't share information on how we can edit the airflow.cfg file for an Airflow installed via docker. Adding this to the docs would give a better idea about editing the configuration file.
### Solving the problem
Add more details to "build your docker image" on editing the airflow.cfg file.
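For illustration, one common way to change configuration in a customized image is through `AIRFLOW__{SECTION}__{KEY}` environment variables, which override `airflow.cfg` values — a sketch, not necessarily the wording the docs should use:

```dockerfile
FROM apache/airflow:2.6.0
# AIRFLOW__{SECTION}__{KEY} environment variables override airflow.cfg values
ENV AIRFLOW__CORE__LOAD_EXAMPLES="false"
ENV AIRFLOW__WEBSERVER__EXPOSE_CONFIG="true"
```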
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31084 | https://github.com/apache/airflow/pull/31842 | 7a786de96ed178ff99aef93761d82d100b29bdf3 | 9cc72bbaec0d7d6041ecd53541a524a2f1e523d0 | "2023-05-05T10:43:52Z" | python | "2023-06-11T18:12:28Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,080 | ["airflow/providers/common/sql/operators/sql.py", "airflow/providers/common/sql/provider.yaml", "airflow/providers/databricks/operators/databricks_sql.py", "airflow/providers/databricks/provider.yaml", "generated/provider_dependencies.json", "tests/providers/databricks/operators/test_databricks_sql.py"] | SQLExecuteQueryOperator AttributeError exception when returning result to XCom | ### Apache Airflow version
2.6.0
### What happened
I am using DatabricksSqlOperator which writes the result to a file. When the task finishes, it writes all the data correctly to the file, then throws the following exception:
> [2023-05-05, 07:56:22 UTC] {taskinstance.py:1847} ERROR - Task failed with exception
> Traceback (most recent call last):
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 73, in wrapper
> return func(*args, **kwargs)
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2377, in xcom_push
> XCom.set(
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 73, in wrapper
> return func(*args, **kwargs)
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/xcom.py", line 237, in set
> value = cls.serialize_value(
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/models/xcom.py", line 632, in serialize_value
> return json.dumps(value, cls=XComEncoder).encode("UTF-8")
> File "/usr/local/lib/python3.9/json/__init__.py", line 234, in dumps
> return cls(
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/json.py", line 102, in encode
> o = self.default(o)
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/json.py", line 91, in default
> return serialize(o)
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 144, in serialize
> return encode(classname, version, serialize(data, depth + 1))
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 123, in serialize
> return [serialize(d, depth + 1) for d in o]
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 123, in <listcomp>
> return [serialize(d, depth + 1) for d in o]
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 123, in serialize
> return [serialize(d, depth + 1) for d in o]
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 123, in <listcomp>
> return [serialize(d, depth + 1) for d in o]
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/serialization/serde.py", line 132, in serialize
> qn = qualname(o)
> File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/module_loading.py", line 47, in qualname
> return f"{o.__module__}.{o.__name__}"
> File "/home/airflow/.local/lib/python3.9/site-packages/databricks/sql/types.py", line 161, in __getattr__
> raise AttributeError(item)
> AttributeError: __name__
I found that **SQLExecuteQueryOperator** always returns the result (and thus pushes an XCom) from its execute() method, except when the parameter **do_xcom_push** is set to **False**. But if do_xcom_push is False, the method _process_output() is not executed, so DatabricksSqlOperator won't write the results to a file.
### What you think should happen instead
I am not sure if the problem should be fixed in DatabricksSqlOperator or in SQLExecuteQueryOperator. In any case, setting do_xcom_push shouldn't automatically prevent the execution of _process_output():
```
if not self.do_xcom_push:
return None
if return_single_query_results(self.sql, self.return_last, self.split_statements):
# For simplicity, we pass always list as input to _process_output, regardless if
# single query results are going to be returned, and we return the first element
# of the list in this case from the (always) list returned by _process_output
return self._process_output([output], hook.descriptions)[-1]
return self._process_output(output, hook.descriptions)
```
What happens now is that I end up with the big result both in a file AND in XCom.
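A minimal, framework-free sketch of the control flow this report argues for (the function and parameter names here are hypothetical stand-ins, not the actual operator API): post-processing always runs, and `do_xcom_push` only gates whether a result is returned for XCom.

```python
def run_query(output, process_output, do_xcom_push):
    """Always run post-processing (e.g. writing results to a file);
    only the XCom return value is gated by do_xcom_push."""
    processed = process_output(output)
    return processed if do_xcom_push else None


rows = [("a",), ("b",)]
written = []

result = run_query(rows, lambda o: written.append(o) or o, do_xcom_push=False)
# The file-writing side effect still happens, but nothing is returned for XCom.
print(written, result)  # prints [[('a',), ('b',)]] None
```

With this shape, disabling XCom no longer silently disables the operator-specific output handling.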
### How to reproduce
I suspect that the actual exception is related to writing the XCom to the meta database, and it might not fail in other scenarios.
### Operating System
Debian GNU/Linux 11 (bullseye) docker image
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==8.0.0
apache-airflow-providers-apache-spark==4.0.1
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-cncf-kubernetes==6.1.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-databricks==4.1.0
apache-airflow-providers-docker==3.6.0
apache-airflow-providers-elasticsearch==4.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==10.0.0
apache-airflow-providers-grpc==3.1.0
apache-airflow-providers-hashicorp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==6.0.0
apache-airflow-providers-microsoft-mssql==3.3.2
apache-airflow-providers-mysql==5.0.0
apache-airflow-providers-odbc==3.2.1
apache-airflow-providers-oracle==3.6.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-redis==3.1.0
apache-airflow-providers-samba==4.1.0
apache-airflow-providers-sendgrid==3.1.0
apache-airflow-providers-sftp==4.2.4
apache-airflow-providers-slack==7.2.0
apache-airflow-providers-snowflake==4.0.5
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.6.0
apache-airflow-providers-telegram==4.0.0
### Deployment
Docker-Compose
### Deployment details
Using extended Airflow image, LocalExecutor, Postgres 13 meta db as container in the same stack.
docker-compose version 1.29.2, build 5becea4c
Docker version 23.0.5, build bc4487a
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31080 | https://github.com/apache/airflow/pull/31136 | 521dae534dd0b906e4dd9a7446c6bec3f9022ac3 | edd7133a1336c9553d77ba13c83bc7f48d4c63f0 | "2023-05-05T08:16:58Z" | python | "2023-05-09T11:11:41Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,067 | ["setup.py"] | [BUG] apache.hive extra is referencing incorrect provider name | ### Apache Airflow version
2.6.0
### What happened
When creating docker image with airflow 2.6.0 I receive the error: `ERROR: No matching distribution found for apache-airflow-providers-hive>=5.1.0; extra == "apache.hive"`
After which, I see that the package name should be `apache-airflow-providers-apache-hive` and not `apache-airflow-providers-hive`.
### What you think should happen instead
We should change this line to say `apache-airflow-providers-apache-hive` and not `apache-airflow-providers-hive`; this will reference a provider that exists.
### How to reproduce
Build image for airflow 2.6.0 with the dependency `apache.hive`. Such as `pip3 install apache-airflow[apache.hive]==2.6.0 --constraint https://raw.githubusercontent.com/apache/airflow/constraints-2.6.0/constraints-3.8.txt`.
### Operating System
ubuntu:22.04
### Versions of Apache Airflow Providers
Image does not build.
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31067 | https://github.com/apache/airflow/pull/31068 | da61bc101eba0cdb17554f5b9ae44998bb0780d3 | 9e43d4aee3b86134b1b9a42f988fb9d3975dbaf7 | "2023-05-04T17:37:49Z" | python | "2023-05-05T15:39:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,059 | ["airflow/utils/log/file_task_handler.py", "tests/providers/amazon/aws/log/test_s3_task_handler.py", "tests/utils/test_log_handlers.py"] | Logs no longer shown after task completed CeleryExecutor | ### Apache Airflow version
2.6.0
### What happened
Stream logging works as long as the task is running. Once the task finishes, no logs are printed to the UI (only the hostname of the worker is printed)
<img width="1657" alt="image" src="https://user-images.githubusercontent.com/16529101/236212701-aecf6cdc-4d87-4817-a685-0778b94d182b.png">
### What you think should happen instead
Expected to see the complete log of a task
### How to reproduce
Start an airflow task. You should be able to see the logs coming in as a stream; once it finishes, the logs are gone
### Operating System
CentOS 7
### Versions of Apache Airflow Providers
airflow-provider-great-expectations==0.1.5
apache-airflow==2.6.0
apache-airflow-providers-airbyte==3.1.0
apache-airflow-providers-apache-hive==4.0.1
apache-airflow-providers-apache-spark==3.0.0
apache-airflow-providers-celery==3.1.0
apache-airflow-providers-common-sql==1.4.0
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-mysql==5.0.0
apache-airflow-providers-oracle==3.4.0
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-sqlite==3.3.2
### Deployment
Virtualenv installation
### Deployment details
Celery with 4 workers nodes/VMs. Scheduler and Webserver on a different VM
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31059 | https://github.com/apache/airflow/pull/31101 | 10dda55e8b0fed72e725b369c17cb5dfb0d77409 | 672ee7f0e175dd7edb041218850d0cd556d62106 | "2023-05-04T13:07:47Z" | python | "2023-05-08T21:51:35Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,027 | ["airflow/config_templates/default_celery.py"] | Airflow doesn't recognize `rediss:...` url to point to a Redis broker | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### What happened
Airflow 2.5.3
Redis is attached using a `rediss:...` URL. While deploying the instance, Airflow/Celery downgrades `rediss` to `redis` with the warning `[2023-05-02 18:38:30,377: WARNING/MainProcess] Secure redis scheme specified (rediss) with no ssl options, defaulting to insecure SSL behaviour.`
Adding `AIRFLOW__CELERY__SSL_ACTIVE=True` as an environmental variable (the same as `ssl_active = true` in `airflow.cfg` file `[celery]` section) fails with the error
`airflow.exceptions.AirflowException: The broker you configured does not support SSL_ACTIVE to be True. Please use RabbitMQ or Redis if you would like to use SSL for broker.`
<img width="1705" alt="Screenshot 2023-05-12 at 12 07 45 PM" src="https://github.com/apache/airflow/assets/94494788/b56cf054-d122-4baf-b6e9-75effe804731">
### What you think should happen instead
It seems that Airflow doesn't recognize a `rediss:...` URL as pointing to a Redis broker
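For context, the `[celery]` section is where Airflow reads its broker SSL options (which it translates into Celery's `broker_use_ssl` setting); a sketch of the intended configuration, with placeholder certificate paths, looks like:

```ini
[celery]
ssl_active = true
ssl_key = /path/to/client-key.pem
ssl_cert = /path/to/client-cert.pem
ssl_cacert = /path/to/ca-cert.pem
```

The report above is that with a `rediss://` broker URL, setting `ssl_active = true` is rejected before these options are ever applied, apparently because the broker-scheme check does not treat `rediss` as Redis.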
### How to reproduce
Airflow 2.5.3
Python 3.10.9
Redis 4.0.14 (url starts with `rediss:...`)

You need to add `AIRFLOW__CELERY__SSL_ACTIVE=True` as an environmental variable or `ssl_active = true` to `airflow.cfg` file `[celery]` section and deploy the instance

### Operating System
Ubuntu 22.04.2 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Heroku platform, heroku-22 stack, python 3.10.9
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31027 | https://github.com/apache/airflow/pull/31028 | d91861d3bdbde18c937978c878d137d6c758e2c6 | 471fdacd853a5bcb190e1ffc017a4e650097ed69 | "2023-05-02T20:10:11Z" | python | "2023-06-07T17:09:46Z" |
closed | apache/airflow | https://github.com/apache/airflow | 31,025 | ["airflow/www/static/js/dag/details/graph/Node.tsx", "airflow/www/static/js/dag/details/graph/utils.ts", "airflow/www/static/js/utils/graph.ts"] | New graph view renders incorrectly when prefix_group_id=false | ### Apache Airflow version
2.6.0
### What happened
If a task_group in a dag has `prefix_group_id=false` in its config, the new graph won't render correctly. When the group is collapsed, nothing is shown and there is an error in the console. When the group is expanded, the nodes will render but the edges become disconnected. As reported in https://github.com/apache/airflow/issues/29852#issuecomment-1531766479
This is because we use the prefix to determine where an edge is supposed to be rendered. We shouldn't make that assumption and actually iterate through the nodes to find where an edge belongs.
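A dependency-free sketch of the fix direction described above — locating the group that owns a task by walking the node tree rather than matching on an id prefix. The node shape here is a simplified stand-in for the graph data, not the actual UI types:

```python
def group_of(node, task_id, parent=None):
    """Return the id of the group that contains task_id, found by walking the
    node tree instead of assuming the group id is a prefix of the task id."""
    if node["id"] == task_id:
        return parent
    for child in node.get("children", []):
        found = group_of(child, task_id, parent=node["id"])
        if found is not None:
            return found
    return None


graph = {
    "id": "root",
    "children": [
        {"id": "start"},
        # With prefix_group_id=False, child ids do NOT start with "my_group."
        {"id": "my_group", "children": [{"id": "task_a"}, {"id": "task_b"}]},
    ],
}
print(group_of(graph, "task_a"))  # prints my_group
```

This approach keeps working whether or not task ids carry the group prefix.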
### What you think should happen instead
It renders like any other task group
### How to reproduce
Add `prefix_group_id=false` to a task group
### Operating System
any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/31025 | https://github.com/apache/airflow/pull/32764 | 53c6305bd0a914738074821d5f5f233e3ed5bee5 | 3e467ba510d29e912d89115769726111b8bce891 | "2023-05-02T18:15:05Z" | python | "2023-07-22T10:23:12Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,984 | ["airflow/models/dagrun.py", "airflow/models/taskinstance.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py"] | Unable to remove DagRun and TaskInstance with note | ### Apache Airflow version
2.6.0
### What happened
Hi, I'm unable to remove a DagRun or TaskInstance when it has a note attached.
### What you think should happen instead
Should be able to remove DagRuns or TaskInstances with or without notes.
Also, the note should be removed when the parent entity is removed.
### How to reproduce
1. Create a note on a DagRun or TaskInstance
2. Try to remove the row the note was added to by clicking the delete record icon. This displays the alert `General Error <class 'AssertionError'>` in the UI.
3. Select the checkbox of a DagRun containing a note, click the `Actions` dropdown and select `Delete`. This won't display anything in the UI.
### Operating System
OSX 12.6
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==5.2.2
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-http==4.2.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-postgres==5.4.0
apache-airflow-providers-sqlite==3.3.1
### Deployment
Virtualenv installation
### Deployment details
Deployed using Postgresql 13.9 and sqlite 3
### Anything else
DagRun deletion Log
```
[2023-05-01T13:06:42.125+0700] {interface.py:790} ERROR - Delete record error: Dependency rule tried to blank-out primary key column 'dag_run_note.dag_run_id' on instance '<DagRunNote at 0x1125afa00>'
Traceback (most recent call last):
File "/opt/airflow/.venv/lib/python3.10/site-packages/flask_appbuilder/models/sqla/interface.py", line 775, in delete
self.session.commit()
File "<string>", line 2, in commit
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1451, in commit
self._transaction.commit(_to_root=self.future)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 829, in commit
self._prepare_impl()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
self.session.flush()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3446, in flush
self._flush(objects)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3585, in _flush
with util.safe_reraise():
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3546, in _flush
flush_context.execute()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
rec.execute(self)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 577, in execute
self.dependency_processor.process_deletes(uow, states)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/dependency.py", line 552, in process_deletes
self._synchronize(
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/dependency.py", line 610, in _synchronize
sync.clear(dest, self.mapper, self.prop.synchronize_pairs)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/sync.py", line 86, in clear
raise AssertionError(
AssertionError: Dependency rule tried to blank-out primary key column 'dag_run_note.dag_run_id' on instance '<DagRunNote at 0x1125afa00>'
```
TaskInstance deletion Log
```
[2023-05-01T13:06:42.125+0700] {interface.py:790} ERROR - Delete record error: Dependency rule tried to blank-out primary key column 'task_instance_note.task_id' on instance '<TaskInstanceNote at 0x1126ba770>'
Traceback (most recent call last):
File "/opt/airflow/.venv/lib/python3.10/site-packages/flask_appbuilder/models/sqla/interface.py", line 775, in delete
self.session.commit()
File "<string>", line 2, in commit
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1451, in commit
self._transaction.commit(_to_root=self.future)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 829, in commit
self._prepare_impl()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
self.session.flush()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3446, in flush
self._flush(objects)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3585, in _flush
with util.safe_reraise():
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3546, in _flush
flush_context.execute()
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
rec.execute(self)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 577, in execute
self.dependency_processor.process_deletes(uow, states)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/dependency.py", line 552, in process_deletes
self._synchronize(
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/dependency.py", line 610, in _synchronize
sync.clear(dest, self.mapper, self.prop.synchronize_pairs)
File "/opt/airflow/.venv/lib/python3.10/site-packages/sqlalchemy/orm/sync.py", line 86, in clear
raise AssertionError(
AssertionError: Dependency rule tried to blank-out primary key column 'task_instance_note.task_id' on instance '<TaskInstanceNote at 0x1126ba770>'
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30984 | https://github.com/apache/airflow/pull/30987 | ec7674f111177c41c02e5269ad336253ed9c28b4 | 0212b7c14c4ce6866d5da1ba9f25d3ecc5c2188f | "2023-05-01T06:29:36Z" | python | "2023-05-01T21:14:04Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,932 | ["airflow/models/baseoperator.py", "tests/models/test_mappedoperator.py"] | Tasks created using "dynamic task mapping" ignore the Task Group passed as argument | ### Apache Airflow version
main (development)
### What happened
When creating a DAG with Task Groups and a Mapped Operator, if the Task Group is passed as an argument to the Mapped Operator's `partial` method, it is ignored and the operator is not added to the group.
### What you think should happen instead
The Mapped Operator should be added to the Task Group passed as an argument.
### How to reproduce
Create a DAG with source code like the following:
```python
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.empty import EmptyOperator
from airflow.utils import timezone
from airflow.utils.task_group import TaskGroup
with DAG("dag", start_date=timezone.datetime(2016, 1, 1)) as dag:
start = EmptyOperator(task_id="start")
finish = EmptyOperator(task_id="finish")
group = TaskGroup("test-group")
commands = ["echo a", "echo b", "echo c"]
mapped = BashOperator.partial(task_id="task_2", task_group=group).expand(bash_command=commands)
start >> group >> finish
# assert mapped.task_group == group
```
### Operating System
macOS 13.2.1 (22D68)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30932 | https://github.com/apache/airflow/pull/30933 | 1d4b1410b027c667d4e2f51f488f98b166facf71 | 4ee2de1e38a85abb89f9f313a3424c7368e12d1a | "2023-04-27T23:34:38Z" | python | "2023-04-29T21:27:22Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,900 | ["airflow/api_connexion/endpoints/dag_endpoint.py", "tests/api_connexion/endpoints/test_dag_endpoint.py"] | REST API, order_by parameter in dags list is not taken into account | ### Apache Airflow version
2.5.3
### What happened
It seems that the order_by parameter is not used when listing DAGs via the REST API.
The following two commands return the same results, which should not be possible because one is ascending and the other descending:
curl -X 'GET' 'http://<server_name>:<port>/api/v1/dags?limit=100&order_by=dag_id&only_active=true' -H 'accept: application/json'
curl -X 'GET' 'http://<server_name>:<port>/api/v1/dags?limit=100&order_by=-dag_id&only_active=true' -H 'accept: application/json'
By the way, giving an incorrect field name doesn't throw an error either.
### What you think should happen instead
_No response_
### How to reproduce
The following two commands return the same results:
curl -X 'GET' 'http://<server_name>:<port>/api/v1/dags?limit=100&order_by=-dag_id&only_active=true' -H 'accept: application/json'
curl -X 'GET' 'http://<server_name>:<port>/api/v1/dags?limit=100&order_by=dag_id&only_active=true' -H 'accept: application/json'
Same problem is visible with the swagger ui
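The expected semantics of `order_by` can be sketched with a small, framework-free helper (this mirrors the documented `-` prefix convention for descending order; it is not the actual endpoint code):

```python
def apply_order_by(rows, order_by):
    """Sort a list of dicts by the given field; a leading '-' means descending."""
    descending = order_by.startswith("-")
    field = order_by.lstrip("-")
    return sorted(rows, key=lambda row: row[field], reverse=descending)


dags = [{"dag_id": "b"}, {"dag_id": "a"}, {"dag_id": "c"}]
print([d["dag_id"] for d in apply_order_by(dags, "dag_id")])   # prints ['a', 'b', 'c']
print([d["dag_id"] for d in apply_order_by(dags, "-dag_id")])  # prints ['c', 'b', 'a']
```

The bug is that the two calls above effectively collapse into the same ordering, and an unknown field name is silently accepted instead of raising an error.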
### Operating System
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)" NAME="Debian GNU/Linux" VERSION_ID="11" VERSION="11 (bullseye)" VERSION_CODENAME=bullseye ID=debian HOME_URL="https://www.debian.org/" SUPPORT_URL="https://www.debian.org/support" BUG_REPORT_URL="https://bugs.debian.org/"
### Versions of Apache Airflow Providers
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-http==4.2.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-mssql==3.3.2
apache-airflow-providers-mysql==4.0.2
apache-airflow-providers-oracle==3.6.0
apache-airflow-providers-sqlite==3.3.1
apache-airflow-providers-vertica==3.3.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30900 | https://github.com/apache/airflow/pull/30926 | 36fe6d0377d37b5f6be8ea5659dcabb44b4fc233 | 1d4b1410b027c667d4e2f51f488f98b166facf71 | "2023-04-27T10:10:57Z" | python | "2023-04-29T16:07:01Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,884 | ["airflow/jobs/dag_processor_job_runner.py"] | DagProcessor Performance Regression | ### Apache Airflow version
2.5.3
### What happened
Upgrading from `2.4.3` to `2.5.3` caused a significant increase in dag processing time on standalone dag processor (~1-2s to 60s):
```
/opt/airflow/dags/ecco_airflow/dags/image_processing/product_image_load.py 0 -1 56.68s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/known_consumers/known_consumers.py 0 -1 56.64s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/monitoring/row_counts.py 0 -1 56.67s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/omnichannel/base.py 0 -1 56.66s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/omnichannel/oc_data.py 0 -1 56.67s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/omnichannel/oc_stream.py 0 -1 56.52s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/reporting/reporting_data_foundation.py 0 -1 56.63s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/retail_analysis/retail_analysis_dbt.py 0 -1 56.66s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/rfm_segments/rfm_segments.py 0 -1 56.02s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/utils/airflow.py 0 -1 56.65s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/aad_users_listing.py 1 0 55.51s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/funnel_io.py 1 0 56.13s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/iar_param.py 1 0 56.50s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/sfmc_copy.py 1 0 56.59s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/bronze/us_legacy_datawarehouse.py 1 0 55.15s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/cdp/ecco_cdp_auditing.py 1 0 56.54s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/cdp/ecco_cdp_budget_daily_phasing.py 1 0 56.63s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/cdp/ecco_cdp_gold_rm_tests.py 1 0 55.00s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/consumer_entity_matching/graph_entity_matching.py 1 0 56.67s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/data_backup/data_backup.py 1 0 56.69s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/hive/adhoc_entity_publish.py 1 0 55.33s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/image_regression/train.py 1 0 56.63s 2023-04-26T12:56:15
/opt/airflow/dags/ecco_airflow/dags/maintenance/db_maintenance.py 1 0 56.58s 2023-04-26T12:56:15
```
Also seeing messages like these
```
[2023-04-26T12:56:15.322+0000] {manager.py:979} DEBUG - Processor for /opt/airflow/dags/ecco_airflow/dags/bronze/us_legacy_datawarehouse.py finished
[2023-04-26T12:56:15.323+0000] {processor.py:296} DEBUG - Waiting for <ForkProcess name='DagFileProcessor68-Process' pid=116 parent=7 stopped exitcode=0>
[2023-04-26T12:56:15.323+0000] {manager.py:979} DEBUG - Processor for /opt/airflow/dags/ecco_airflow/dags/cdp/ecco_cdp_gold_rm_tests.py finished
[2023-04-26T12:56:15.323+0000] {processor.py:296} DEBUG - Waiting for <ForkProcess name='DagFileProcessor69-Process' pid=122 parent=7 stopped exitcode=0>
[2023-04-26T12:56:15.324+0000] {manager.py:979} DEBUG - Processor for /opt/airflow/dags/ecco_airflow/dags/bronze/streaming/sap_inventory_feed.py finished
[2023-04-26T12:56:15.324+0000] {processor.py:314} DEBUG - Waiting for <ForkProcess name='DagFileProcessor70-Process' pid=128 parent=7 stopped exitcode=-SIGKILL>
[2023-04-26T12:56:15.324+0000] {manager.py:986} ERROR - Processor for /opt/airflow/dags/ecco_airflow/dags/bronze/streaming/sap_inventory_feed.py exited with return code -9.
```
In `2.4.3`:
```
/opt/airflow/dags/ecco_airflow/dags/image_regression/train.py 1 0 1.34s 2023-04-26T14:19:08
/opt/airflow/dags/ecco_airflow/dags/known_consumers/known_consumers.py 1 0 1.12s 2023-04-26T14:19:00
/opt/airflow/dags/ecco_airflow/dags/maintenance/db_maintenance.py 1 0 0.63s 2023-04-26T14:18:27
/opt/airflow/dags/ecco_airflow/dags/monitoring/row_counts.py 1 0 3.74s 2023-04-26T14:18:45
/opt/airflow/dags/ecco_airflow/dags/omnichannel/oc_data.py 1 0 1.21s 2023-04-26T14:18:47
/opt/airflow/dags/ecco_airflow/dags/omnichannel/oc_stream.py 1 0 1.22s 2023-04-26T14:18:30
/opt/airflow/dags/ecco_airflow/dags/reporting/reporting_data_foundation.py 1 0 1.39s 2023-04-26T14:19:08
/opt/airflow/dags/ecco_airflow/dags/retail_analysis/retail_analysis_dbt.py 1 0 1.32s 2023-04-26T14:18:51
/opt/airflow/dags/ecco_airflow/dags/rfm_segments/rfm_segments.py 1 0 1.20s 2023-04-26T14:18:34
```
### What you think should happen instead
Dag processing time remains unchanged
### How to reproduce
Provision Airflow with the following settings:
## Airflow 2.5.3
- K8s 1.25.6
- Kubernetes executor
- Postgres backend (Postgres 11.0)
- Deploy using Airflow Helm **v1.9.0** with image **2.5.3-python3.9**
- pgbouncer enabled
- standalone dag processort with 3500m cpu / 4000Mi memory, single replica
- dags and logs mounted from RWM volume (Azure files)
## Airflow 2.4.3
- K8s 1.25.6
- Kubernetes executor
- Postgres backend (Postgres 11.0)
- Deploy using Airflow Helm **v1.7.0** with image **2.4.3-python3.9**
- pgbouncer enabled
- standalone dag processort with 2500m cpu / 2000Mi memory, single replica
- dags and logs mounted from RWM volume (Azure files)
## Image modifications
We use an image built from `apache/airflow:2.4.3-python3.9`, with some dependencies added or reinstalled at different versions.
### Poetry dependency spec:
For `2.5.3`:
```
[tool.poetry.dependencies]
python = ">=3.9,<3.11"
authlib = "~1.0.1"
adapta = { version = "==2.2.3", extras = ["azure", "storage"] }
numpy = "==1.23.3"
db-dtypes = "~1.0.4"
gevent = "^21.12.0"
sqlalchemy = ">=1.4,<2.0"
snowflake-sqlalchemy = ">=1.4,<2.0"
esd-services-api-client = "~0.6.0"
apache-airflow-providers-common-sql = "~1.3.1"
apache-airflow-providers-databricks = "~3.1.0"
apache-airflow-providers-google = "==8.4.0"
apache-airflow-providers-microsoft-azure = "~5.2.1"
apache-airflow-providers-datadog = "~3.0.0"
apache-airflow-providers-snowflake = "~3.3.0"
apache-airflow = "==2.5.3"
dataclasses-json = ">=0.5.7,<0.6"
```
For `2.4.3`:
```
[tool.poetry.dependencies]
python = ">=3.9,<3.11"
authlib = "~1.0.1"
adapta = { version = "==2.2.3", extras = ["azure", "storage"] }
numpy = "==1.23.3"
db-dtypes = "~1.0.4"
gevent = "^21.12.0"
sqlalchemy = ">=1.4,<2.0"
snowflake-sqlalchemy = ">=1.4,<2.0"
esd-services-api-client = "~0.6.0"
apache-airflow-providers-common-sql = "~1.3.1"
apache-airflow-providers-databricks = "~3.1.0"
apache-airflow-providers-google = "==8.4.0"
apache-airflow-providers-microsoft-azure = "~5.2.1"
apache-airflow-providers-datadog = "~3.0.0"
apache-airflow-providers-snowflake = "~3.3.0"
apache-airflow = "==2.4.3"
dataclasses-json = ">=0.5.7,<0.6"
```
### Operating System
Container OS: Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==6.0.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.4.0
apache-airflow-providers-common-sql==1.3.4
apache-airflow-providers-databricks==3.1.0
apache-airflow-providers-datadog==3.0.0
apache-airflow-providers-docker==3.2.0
apache-airflow-providers-elasticsearch==4.2.1
apache-airflow-providers-ftp==3.3.1
apache-airflow-providers-google==8.4.0
apache-airflow-providers-grpc==3.0.0
apache-airflow-providers-hashicorp==3.1.0
apache-airflow-providers-http==4.3.0
apache-airflow-providers-imap==3.1.1
apache-airflow-providers-microsoft-azure==5.2.1
apache-airflow-providers-mysql==3.2.1
apache-airflow-providers-odbc==3.1.2
apache-airflow-providers-postgres==5.2.2
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-sendgrid==3.0.0
apache-airflow-providers-sftp==4.1.0
apache-airflow-providers-slack==6.0.0
apache-airflow-providers-snowflake==3.3.0
apache-airflow-providers-sqlite==3.3.2
apache-airflow-providers-ssh==3.2.0
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
See How-to-reproduce section
### Anything else
Occurs when upgrading the helm chart installation from 1.7.0/2.4.3 to 1.9.0/2.5.3.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30884 | https://github.com/apache/airflow/pull/30899 | 7ddad1a24b1664cef3827b06d9c71adbc558e9ef | 00ab45ffb7dee92030782f0d1496d95b593fd4a7 | "2023-04-26T14:47:31Z" | python | "2023-04-27T11:27:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,838 | ["airflow/www/templates/airflow/dags.html", "airflow/www/views.py"] | Sort Dag List by Last Run Date | ### Description
It would be helpful to me if I could see the most recently run DAGs and their health in the Airflow UI. Right now many fields are sortable, but not the last run.
The solution here would likely build off the previous work from this issue: https://github.com/apache/airflow/issues/8459
### Use case/motivation
When my team updates a Docker image we want to confirm our DAGs are still running healthy. One way to do that would be to pop open the Airflow UI, look at our team's DAGs (using the label tag), and confirm the most recently run jobs are still healthy.
### Related issues
I think it would build off of https://github.com/apache/airflow/issues/8459
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30838 | https://github.com/apache/airflow/pull/31234 | 7ebda3898db2eee72d043a9565a674dea72cd8fa | 3363004450355582712272924fac551dc1f7bd56 | "2023-04-24T13:41:07Z" | python | "2023-05-17T15:11:15Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,796 | ["docs/apache-airflow/authoring-and-scheduling/plugins.rst"] | Tasks forked by the Local Executor are loading stale modules when the modules are also referenced by plugins | ### Apache Airflow version
2.5.3
### What happened
After upgrading from Airflow 2.4.3 to 2.5.3, tasks forked by the `Local Executor` can run with outdated module imports if those modules are also imported by plugins. It seems as though tasks will reuse imports that were first loaded when the scheduler boots, and any subsequent updates to those shared modules do not get reflected in new tasks.
I verified this issue occurs for all patch versions of 2.5.
### What you think should happen instead
Given that the plugin documentation states:
> if you make any changes to plugins and you want the webserver or scheduler to use that new code you will need to restart those processes.
this behavior may be intended. But it's not clear that this affects the code for forked tasks as well. So if this is not actually a bug then perhaps the documentation can be updated.
### How to reproduce
Given a plugin file like:
```python
from airflow.models.baseoperator import BaseOperatorLink
from src.custom_operator import CustomOperator


class CustomerOperatorLink(BaseOperatorLink):
    operators = [CustomOperator]
```
And a dag file like
```
from src.custom_operator import CustomOperator
...
```
Any updates to the `CustomOperator` will not be reflected in new running tasks after the scheduler boots.
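As an aside from the report itself, the fork-time caching behavior described above can be reproduced with plain `multiprocessing` (this is my own minimal sketch, not the reporter's code; `STATE` stands in for a module-level object imported at scheduler boot):

```python
import multiprocessing as mp

STATE = {"version": 1}  # stand-in for a module imported once at "scheduler boot"

def child(q):
    # The forked child reads the parent's in-memory copy of the module state.
    q.put(STATE["version"])

ctx = mp.get_context("fork")      # LocalExecutor-style forking (Unix only)
STATE["version"] = 2              # "edit" the module after boot, before the fork
q = ctx.Queue()
p = ctx.Process(target=child, args=(q,))
p.start()
seen = q.get()
p.join()
print(seen)  # 2: the fork inherits parent memory, so later disk edits are never re-imported
```

The child never re-reads anything from disk, which is why a restarted scheduler is needed before forked tasks see updated plugin imports.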
### Operating System
Debian bullseye
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
Workarounds
- Set `execute_tasks_new_python_interpreter` to `False`
- In my case of using Operator Links, I can alternatively set the Operator Link in my custom operator using `operator_extra_links`, which wouldn't require importing the operator from the plugin file.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30796 | https://github.com/apache/airflow/pull/31781 | ab8c9ec2545caefb232d8e979b18b4c8c8ad3563 | 18f2b35c8fe09aaa8d2b28065846d7cf1e85cae2 | "2023-04-21T15:35:10Z" | python | "2023-06-08T18:50:58Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,613 | ["airflow/providers/amazon/aws/hooks/base_aws.py", "tests/providers/amazon/aws/hooks/test_dynamodb.py"] | DynamoDBHook - not able to registering a custom waiter | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon=7.4.1
### Apache Airflow version
airflow=2.5.3
### Operating System
Mac
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
We can register a custom waiter by adding a JSON file to the path `airflow/airflow/providers/amazon/aws/waiters/`. The file should be named `<client_type>.json`, in this case `dynamodb.json`. Once registered, we can use the custom waiter.
Content of the file `airflow/airflow/providers/amazon/aws/waiters/dynamodb.json`:
```json
{
    "version": 2,
    "waiters": {
        "export_table": {
            "operation": "ExportTableToPointInTime",
            "delay": 30,
            "maxAttempts": 60,
            "acceptors": [
                {
                    "matcher": "path",
                    "expected": "COMPLETED",
                    "argument": "ExportDescription.ExportStatus",
                    "state": "success"
                },
                {
                    "matcher": "path",
                    "expected": "FAILED",
                    "argument": "ExportDescription.ExportStatus",
                    "state": "failure"
                },
                {
                    "matcher": "path",
                    "expected": "IN_PROGRESS",
                    "argument": "ExportDescription.ExportStatus",
                    "state": "retry"
                }
            ]
        }
    }
}
```
I get the following error when running the test case:
```
class TestCustomDynamoDBServiceWaiters:
    """Test waiters from ``amazon/aws/waiters/dynamodb.json``."""

    STATUS_COMPLETED = "COMPLETED"
    STATUS_FAILED = "FAILED"
    STATUS_IN_PROGRESS = "IN_PROGRESS"

    @pytest.fixture(autouse=True)
    def setup_test_cases(self, monkeypatch):
        self.client = boto3.client("dynamodb", region_name="eu-west-3")
        monkeypatch.setattr(DynamoDBHook, "conn", self.client)

    @pytest.fixture
    def mock_export_table_to_point_in_time(self):
        """Mock ``DynamoDBHook.Client.export_table_to_point_in_time`` method."""
        with mock.patch.object(self.client, "export_table_to_point_in_time") as m:
            yield m

    def test_service_waiters(self):
        assert os.path.exists('/Users/utkarsharma/sandbox/airflow-sandbox/airflow/airflow/providers/amazon/aws/waiters/dynamodb.json')
        hook_waiters = DynamoDBHook(aws_conn_id=None).list_waiters()
        assert "export_table" in hook_waiters
```
## Error
```
tests/providers/amazon/aws/waiters/test_custom_waiters.py:273 (TestCustomDynamoDBServiceWaiters.test_service_waiters)
'export_table' != ['table_exists', 'table_not_exists']

Expected :['table_exists', 'table_not_exists']
Actual   :'export_table'
<Click to see difference>

self = <tests.providers.amazon.aws.waiters.test_custom_waiters.TestCustomDynamoDBServiceWaiters object at 0x117f085e0>

    def test_service_waiters(self):
        assert os.path.exists('/Users/utkarsharma/sandbox/airflow-sandbox/airflow/airflow/providers/amazon/aws/waiters/dynamodb.json')
        hook_waiters = DynamoDBHook(aws_conn_id=None).list_waiters()
>       assert "export_table" in hook_waiters
E       AssertionError: assert 'export_table' in ['table_exists', 'table_not_exists']

test_custom_waiters.py:277: AssertionError
```
### What you think should happen instead
It should register the custom waiter, and the test case should pass.
### How to reproduce
Add the file mentioned above to Airflow's code base and try running the test case provided.
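For context, here is a hypothetical sketch of the discovery step the report expects (this is my own illustration, not the hook's actual implementation): listing custom waiters from a `<client_type>.json` file amounts to reading its `waiters` mapping.

```python
import json
import pathlib
import tempfile

# A stand-in waiters directory; real Airflow would look under its own package path.
waiter_config = {
    "version": 2,
    "waiters": {
        "export_table": {
            "operation": "ExportTableToPointInTime",
            "delay": 30,
            "maxAttempts": 60,
            "acceptors": [],
        }
    },
}
waiters_dir = pathlib.Path(tempfile.mkdtemp())
(waiters_dir / "dynamodb.json").write_text(json.dumps(waiter_config))

def list_custom_waiters(client_type: str) -> list[str]:
    """Hypothetical helper: return waiter names declared for a client type."""
    path = waiters_dir / f"{client_type}.json"
    if not path.exists():
        return []
    return sorted(json.loads(path.read_text())["waiters"])

print(list_custom_waiters("dynamodb"))  # ['export_table']
```

If the file exists but `list_waiters()` still returns only the built-in names, the lookup path or the client-type-to-filename mapping is the first place to check.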
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30613 | https://github.com/apache/airflow/pull/30595 | cb5a2c56b99685305eecdd3222b982a1ef668019 | 7c2d3617bf1be0781e828d3758ee6d9c6490d0f0 | "2023-04-13T04:27:21Z" | python | "2023-04-14T16:43:59Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,593 | ["airflow/jobs/dag_processor_job_runner.py"] | After upgrading to 2.5.3, Dag Processing time increased dramatically. | ### Apache Airflow version
2.5.3
### What happened
I upgraded my Airflow cluster from 2.5.2 to 2.5.3, after which strange things started happening.
I'm currently using a standalone dag processor, and parsing that used to take about 2 seconds suddenly takes about 10 seconds.
I'm thinking it's weird because I haven't made any changes other than a version up, but is there something I can look into? Thanks in advance! 🙇🏼

### What you think should happen instead
I believe that the time it takes to parse a Dag should be constant, or at least have some variability, but shouldn't take as long as it does now.
### How to reproduce
If you cherry-pick [this commit](https://github.com/apache/airflow/pull/30079) into the 2.5.2 stable code, the issue will recur.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
- Kubernetes 1.21 Cluster
- 1.7.0 helm chart
- standalone dag processor
- using kubernetes executor
- using mysql database
### Anything else
This issue still persists, and restarting the Dag Processor has not resolved the issue.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30593 | https://github.com/apache/airflow/pull/30899 | 7ddad1a24b1664cef3827b06d9c71adbc558e9ef | 00ab45ffb7dee92030782f0d1496d95b593fd4a7 | "2023-04-12T01:28:37Z" | python | "2023-04-27T11:27:33Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,562 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/utils/db.py", "tests/utils/test_db.py"] | alembic Logging | ### Apache Airflow version
2.5.3
### What happened
When I call the airflow `initdb` function, it outputs these lines to the log:
```
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
```
### What you think should happen instead
There should be a mechanism to disable these logs, or they should just be set to WARN by default
### How to reproduce
Set up a new Postgres connection and call:
```python
from airflow.utils.db import initdb

initdb()
```
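One possible workaround (my own suggestion, not something the report proposes) is to raise the level of the `alembic` logger before calling `initdb`, since alembic emits through the standard logging tree:

```python
import logging

# Silence alembic's INFO migration chatter; child loggers such as
# "alembic.runtime.migration" inherit the level from the "alembic" logger.
logging.getLogger("alembic").setLevel(logging.WARNING)

level = logging.getLogger("alembic.runtime.migration").getEffectiveLevel()
print(logging.getLevelName(level))  # WARNING
```

This only suppresses the output on the caller's side; the feature request here is for Airflow itself to expose a configurable default.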
### Operating System
MacOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30562 | https://github.com/apache/airflow/pull/31415 | c5597d1fabe5d8f3a170885f6640344d93bf64bf | e470d784627502f171819fab072e0bbab4a05492 | "2023-04-10T11:25:58Z" | python | "2023-05-23T01:33:31Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,414 | ["airflow/www/views.py", "tests/www/views/test_views_tasks.py"] | Cannot clear tasking instances on "List Task Instance" page with User role | ### Apache Airflow version
main (development)
### What happened
Only users with the `Admin` role are allowed to use the clear action on the TaskInstance list view.
### What you think should happen instead
Users with the `User` role should be able to clear task instances on the Task Instance page.
### How to reproduce
Try to clear a task instance while using a user with a `User` role.
### Operating System
Fedora 37
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30414 | https://github.com/apache/airflow/pull/30415 | 22bef613678e003dde9128ac05e6c45ce934a50c | b140c4473335e4e157ff2db85148dd120c0ed893 | "2023-04-01T11:20:33Z" | python | "2023-04-22T17:10:49Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,407 | [".github/workflows/ci.yml", "BREEZE.rst", "dev/breeze/src/airflow_breeze/commands/testing_commands.py", "dev/breeze/src/airflow_breeze/commands/testing_commands_config.py", "dev/breeze/src/airflow_breeze/utils/selective_checks.py", "images/breeze/output-commands-hash.txt", "images/breeze/output_testing_tests.svg"] | merge breeze's --test-type and --test-types options | ### Description
Using `breeze testing tests` recently, I noticed that the way to specify which tests to run is very confusing:
* `--test-type` supports specifying one type only (or `All`), allows specifying which provider tests to run in details, and is ignored if `--run-in-parallel` is provided (from what I saw)
* `--test-types` (note the `s` at the end) supports a list of types, does not allow to select specific provider tests, and is ignored if `--run-in-parallel` is NOT specified.
I _think_ that the two are mutually exclusive (i.e. there is no situation where one is taken into account and the other isn’t ignored), so it’d make sense to merge them.
Definition of Done:
- --test-type or --test-types can be used interchangeably, whether the tests are running in parallel or not (it'd be a bit like how `kubectl` allows using singular or plural for some actions, like `k get pod` == `k get pods`)
- When using the type `Providers`, specific provider tests can be selected between square brackets using the current syntax (e.g. `Providers[airbyte,http]`)
- several types can be specified, separated by a space (e.g. `"WWW CLI"`)
- the two bullet points above can be combined (e.g. `--test-type "Always Providers[airbyte,http] WWW"`)
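A rough sketch of the parsing the merged option would need (hypothetical on my part, not breeze's actual code; the helper name `parse_test_types` is made up):

```python
import re

def parse_test_types(value: str) -> list[tuple[str, list[str]]]:
    """Split 'Always Providers[airbyte,http] WWW' into (type, selection) pairs."""
    result = []
    for token in value.split():
        m = re.fullmatch(r"(\w+)(?:\[([\w,]+)\])?", token)
        if not m:
            raise ValueError(f"bad test type: {token!r}")
        name, inner = m.groups()
        result.append((name, inner.split(",") if inner else []))
    return result

print(parse_test_types("Always Providers[airbyte,http] WWW"))
# [('Always', []), ('Providers', ['airbyte', 'http']), ('WWW', [])]
```

Accepting this one grammar for both `--test-type` and `--test-types` would make the two flags simple aliases, regardless of `--run-in-parallel`.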
### Use case/motivation
Having a different behavior for a very similar option depending on whether we are running in parallel or not is confusing, and from a user perspective, there is no benefit to having those as separate options.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30407 | https://github.com/apache/airflow/pull/30424 | 90ba6fe070d903bca327b52b2f61468408d0d96a | 20606438c27337c20aa9aff8397dfa6f286f03d3 | "2023-03-31T22:12:56Z" | python | "2023-04-04T11:30:23Z" |
closed | apache/airflow | https://github.com/apache/airflow | 30,365 | ["airflow/cli/cli_config.py", "airflow/cli/commands/dag_command.py", "tests/cli/commands/test_dag_command.py"] | Need an REST API or/and Airflow CLI to fetch last parsed time of a given DAG | ### Description
We need to access the time at which a given DAG was parsed last.
Airflow Version : 2.2.2 and above.
### Use case/motivation
End users want to run a given DAG post applying the changes they have done on them. This would mean that the DAG should be parsed post the edits done to it. Right now the last parsed time is available by accessing the airflow database only. Querying the database directly is not the best solution to the problem. Ideally airflow should be exposing APIs that end users can consume that can help provide the last parsed time for a given DAG.
### Related issues
Not Aware.
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/30365 | https://github.com/apache/airflow/pull/30432 | c5b685e88dd6ecf56d96ef4fefa6c409f28e2b22 | 7074167d71c93b69361d24c1121adc7419367f2a | "2023-03-30T08:34:47Z" | python | "2023-04-14T17:14:48Z" |