Dataset columns (type and observed range):

repo_name           string, categorical (28 distinct values)
pr_number           int64 (1.86k to 122k)
pr_title            string (5 to 204 chars)
author              string (3 to 58 chars)
git_commit_prev     string (40 chars)
git_commit_curr     string (40 chars)
date_created        string (25 chars)
date_merged         string (25 chars)
query               string (12 to 65.6k chars)
context_file_path   string (6 to 233 chars)
label               int64 (-1 or 1)
language            string, categorical (5 values)
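For orientation, a minimal sketch of how these columns might be consumed. The file name "train.parquet" and the use of pandas are illustrative assumptions, not part of the dataset card; only the column names come from the schema above.

```python
import pandas as pd

# Minimal sketch, assuming a local Parquet export of this dataset named
# "train.parquet" (hypothetical). It only shows how the schema maps to columns.
df = pd.read_parquet("train.parquet")

# Keep the context files judged relevant (label == 1) for a single PR.
relevant = df[
    (df["repo_name"] == "keras-team/keras")
    & (df["pr_number"] == 18951)
    & (df["label"] == 1)
]
print(relevant["context_file_path"].tolist())
```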
Example rows, grouped by pull request. Within each group the PR-level fields are identical across rows, so they are listed once, followed by the per-row (label, context_file_path) pairs.

repo_name:        keras-team/keras
pr_number:        18951
pr_title:         Add gradient accumulation support for all backends, and enable optimizer EMA for JAX and torch
author:           gradient_accumulation
git_commit_prev:  e45c1382c85c8b53676fcb40fc270c839e1690d8
git_commit_curr:  c3d269b308b40314e8e128a7116a7f3fdf0ca212
date_created:     2023-12-17 00:05:51+00:00
date_merged:      2023-12-18 23:21:51+00:00
language:         python
query:            Add gradient accumulation support for all backends, and enable optimizer EMA for JAX and torch.

label  context_file_path
 1     ./keras/optimizers/optimizer.py
 1     ./keras/backend/tensorflow/optimizer.py
 1     ./keras/optimizers/optimizer_test.py
 1     ./keras/backend/torch/optimizers/torch_parallel_optimizer.py
 1     ./keras/optimizers/nadam.py
-1     ./keras/layers/reshaping/cropping1d.py
-1     ./keras/layers/regularization/alpha_dropout_test.py
-1     ./keras/layers/reshaping/repeat_vector_test.py
-1     ./keras/datasets/cifar10.py
-1     ./keras/layers/preprocessing/hashing_test.py
-1     ./keras/trainers/data_adapters/generator_data_adapter.py
-1     ./guides/distributed_training_with_jax.py
-1     ./examples/keras_io/tensorflow/vision/cutmix.py
-1     ./keras/random/__init__.py
-1     ./examples/keras_io/tensorflow/keras_recipes/tensorflow_numpy_models.py
-1     ./examples/demo_functional.py
-1     ./keras/datasets/__init__.py
-1     ./examples/keras_io/tensorflow/vision/bit.py
-1     ./keras/utils/naming_test.py
-1     ./keras/layers/merging/maximum.py
-1     ./keras/losses/losses_test.py
-1     ./keras/mixed_precision/dtype_policy.py
-1     ./keras/utils/dataset_utils_test.py
-1     ./keras/backend/torch/optimizers/torch_adadelta.py
-1     ./keras/datasets/imdb.py
-1     ./keras/backend/common/compute_output_spec_test.py
-1     ./keras/layers/preprocessing/random_contrast_test.py
-1     ./keras/trainers/data_adapters/data_adapter.py
-1     ./keras/layers/preprocessing/random_translation.py
-1     ./keras/backend/common/stateless_scope_test.py
-1     ./keras/backend/tensorflow/image.py
-1     ./guides/making_new_layers_and_models_via_subclassing.py
-1     ./examples/keras_io/timeseries/timeseries_anomaly_detection.py
-1     ./keras/callbacks/reduce_lr_on_plateau.py
-1     ./keras/optimizers/rmsprop_test.py
-1     ./keras/backend/tests/device_scope_test.py
-1     ./keras/metrics/reduction_metrics_test.py
-1     ./keras/initializers/random_initializers.py
-1     ./keras/ops/operation_test.py
-1     ./examples/keras_io/tensorflow/nlp/text_classification_from_scratch.py
-1     ./examples/keras_io/vision/attention_mil_classification.py
-1     ./keras/utils/progbar.py
-1     ./keras/layers/rnn/conv_lstm3d.py
-1     ./keras/layers/preprocessing/random_rotation.py
-1     ./keras/layers/pooling/average_pooling1d.py
-1     ./examples/keras_io/tensorflow/nlp/ner_transformers.py
-1     ./keras/trainers/compile_utils.py
-1     ./keras/trainers/data_adapters/py_dataset_adapter.py
-1     ./examples/keras_io/vision/video_transformers.py
-1     ./keras/legacy/preprocessing/sequence.py
-1     ./keras/layers/convolutional/__init__.py
-1     ./keras/utils/io_utils_test.py
-1     ./keras/utils/tracking.py
-1     ./keras/layers/pooling/global_max_pooling2d.py
-1     ./examples/keras_io/vision/learnable_resizer.py
-1     ./keras/layers/core/wrapper_test.py
-1     ./keras/legacy/backend.py
-1     ./examples/keras_io/tensorflow/vision/perceiver_image_classification.py
-1     ./keras/layers/reshaping/repeat_vector.py
-1     ./keras/callbacks/progbar_logger.py
-1     ./keras/backend/__init__.py
-1     ./keras/trainers/data_adapters/torch_data_loader_adapter.py
-1     ./keras/layers/rnn/dropout_rnn_cell_test.py
-1     ./keras/layers/reshaping/zero_padding1d_test.py
-1     ./keras/layers/core/identity.py
-1     ./examples/keras_io/nlp/neural_machine_translation_with_transformer.py
-1     ./keras/utils/naming.py
-1     ./keras/legacy/saving/saving_options.py
-1     ./examples/demo_torch_multi_gpu.py
-1     ./keras/optimizers/ftrl.py
-1     ./keras/utils/code_stats_test.py
-1     ./examples/keras_io/tensorflow/vision/integrated_gradients.py
-1     ./keras/backend/tensorflow/nn.py
-1     ./keras/layers/rnn/gru.py
-1     ./keras/backend/common/variables.py
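The rows above concern keras-team/keras PR 18951 (gradient accumulation and optimizer EMA). A minimal sketch of what that feature looks like from the user side, assuming the Keras 3 optimizer arguments `gradient_accumulation_steps` and `use_ema`; treat the exact argument names as assumptions if your Keras version differs.

```python
import numpy as np
import keras

# Minimal sketch, assuming the Keras 3 optimizer arguments
# `gradient_accumulation_steps` and `use_ema` (the feature area of this PR).
optimizer = keras.optimizers.Adam(
    learning_rate=1e-3,
    gradient_accumulation_steps=4,  # apply a weight update every 4 batches
    use_ema=True,                   # track an exponential moving average of weights
)

model = keras.Sequential([keras.layers.Dense(1)])
model.compile(optimizer=optimizer, loss="mse")
model.fit(
    np.random.rand(64, 8), np.random.rand(64, 1),
    batch_size=8, epochs=1, verbose=0,
)
```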
repo_name:        keras-team/keras
pr_number:        18928
pr_title:         Fix descrepancies in Conv Module docstrings regarding data_format
author:           khdlr
git_commit_prev:  19184e9a80f408e6812f0a21298892a25b14bf14
git_commit_curr:  7d431cea42d13f189473f6c0d2d33eaa5228c40f
date_created:     2023-12-12 02:07:05+00:00
date_merged:      2023-12-12 17:23:20+00:00
language:         python
query:            Fix descrepancies in Conv Module docstrings regarding data_format. Stumbled upon some inconsistencies in the docs for conv layers, e.g. Conv2d, which currently states > `"channels_last"` corresponds to inputs with shape `(batch_size, channels, height, width)` which seems wrong, especially since a bit further down, it mentions > Input Shape: If `data_format="channels_last"`: A 4D tensor with shape: `(batch_size, height, width, channels)` Similar inconsisties exist for other conv layers, namely conv3d, conv3d_transpose, depthwise_conv2d, separable_conv2d. So here's a simple Doc Fix to get the documentation for the `data_format` argument to conv layers in line with the expected input shape.

label  context_file_path
 1     ./keras/layers/convolutional/separable_conv2d.py
 1     ./keras/layers/convolutional/depthwise_conv2d.py
 1     ./keras/layers/convolutional/conv3d.py
 1     ./keras/layers/convolutional/conv2d.py
 1     ./keras/layers/convolutional/conv3d_transpose.py
-1     ./keras/layers/reshaping/zero_padding3d.py
-1     ./examples/keras_io/tensorflow/vision/cutmix.py
-1     ./keras/mixed_precision/dtype_policy_test.py
-1     ./keras/ops/operation_test.py
-1     ./keras/layers/pooling/global_max_pooling3d.py
-1     ./keras/backend/common/stateless_scope_test.py
-1     ./integration_tests/basic_full_flow.py
-1     ./keras/optimizers/adamw_test.py
-1     ./keras/layers/preprocessing/string_lookup_test.py
-1     ./examples/keras_io/tensorflow/generative/ddim.py
-1     ./keras/backend/torch/optimizers/torch_adamax.py
-1     ./keras/layers/normalization/layer_normalization.py
-1     ./keras/applications/efficientnet.py
-1     ./examples/keras_io/vision/convmixer.py
-1     ./keras/optimizers/lion.py
-1     ./keras/layers/preprocessing/discretization.py
-1     ./keras/applications/xception.py
-1     ./examples/keras_io/timeseries/timeseries_weather_forecasting.py
-1     ./keras/layers/rnn/conv_lstm_test.py
-1     ./keras/layers/pooling/average_pooling1d.py
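The rows above concern keras-team/keras PR 18928, which aligns the data_format docstrings of the conv layers with their actual input shapes. A small sketch of the convention the query refers to, using the standard Keras Conv2D API with channels_last inputs:

```python
import numpy as np
import keras

# With data_format="channels_last", Conv2D expects (batch, height, width, channels),
# which is what the corrected docstrings state.
images = np.random.rand(2, 32, 32, 3).astype("float32")  # 2 RGB images, 32x32
layer = keras.layers.Conv2D(filters=8, kernel_size=3, data_format="channels_last")
print(layer(images).shape)  # (2, 30, 30, 8) with the default "valid" padding
```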