|
--- |
|
tags: |
|
- mae |
|
- crossmae |
|
datasets: |
|
- imagenet-1k |
|
--- |
|
|
|
## CrossMAE: Rethinking Patch Dependence for Masked Autoencoders |
|
by <a href="https://max-fu.github.io">Letian Fu*</a>, <a href="https://tonylian.com">Long Lian*</a>, <a href="https://renwang435.github.io">Renhao Wang</a>, <a href="https://bfshi.github.io">Baifeng Shi</a>, <a href="https://people.eecs.berkeley.edu/~xdwang">Xudong Wang</a>, <a href="https://www.adamyala.org">Adam Yala†</a>, <a href="https://people.eecs.berkeley.edu/~trevor">Trevor Darrell†</a>, <a href="https://people.eecs.berkeley.edu/~efros">Alexei A. Efros†</a>, <a href="https://goldberg.berkeley.edu">Ken Goldberg†</a> at UC Berkeley and UCSF |
|
|
|
[[Paper](https://arxiv.org/abs/2401.14391)] | [[Project Page](https://crossmae.github.io/)] | [[Citation](#citation)] |
|
|
|
|
|
<p align="center"> |
|
<img src="https://crossmae.github.io/crossmae2.jpg" width="800"> |
|
</p> |
|
|
|
This repo hosts the model weights for [CrossMAE: Rethinking Patch Dependence for Masked Autoencoders](https://arxiv.org/abs/2401.14391). |
|
|
|
Please see the [GitHub repo](https://github.com/TonyLianLong/CrossMAE) for instructions on pretraining, fine-tuning, and evaluating these models. |
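
As a minimal sketch of how weights from this repo might be loaded, the snippet below downloads a checkpoint with `huggingface_hub` and loads it into a standard `timm` ViT. The repo ID, checkpoint filename, and checkpoint layout are placeholders; the actual model definitions and exact file names are in the GitHub repo above.

```python
# Sketch only: the repo_id, filename, and "model" key below are assumptions;
# check this repo's file listing and the CrossMAE GitHub code for exact names.
import torch
import timm
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="your-namespace/CrossMAE",        # placeholder repo ID
    filename="crossmae-vitb-finetuned.pth",   # placeholder filename
)
checkpoint = torch.load(ckpt_path, map_location="cpu")
# MAE-style checkpoints often nest the weights under a "model" key.
state_dict = checkpoint.get("model", checkpoint)

model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=1000)
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
model.eval()
```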
|
|