matlok's Collections
Papers - ResNet
Wide Residual Networks
Paper • 1605.07146 • Published • 2
Characterizing signal propagation to close the performance gap in unnormalized ResNets
Paper • 2101.08692 • Published • 2
Pareto-Optimal Quantized ResNet Is Mostly 4-bit
Paper • 2105.03536 • Published • 2
When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations
Paper • 2106.01548 • Published • 2
ResNet strikes back: An improved training procedure in timm
Paper • 2110.00476 • Published • 2
A ResNet is All You Need? Modeling A Strong Baseline for Detecting Referable Diabetic Retinopathy in Fundus Images
Paper • 2210.03180 • Published
Deep Residual Learning for Image Recognition
Paper • 1512.03385 • Published • 6
Revisiting ResNets: Improved Training and Scaling Strategies
Paper • 2103.07579 • Published • 2
Densely Connected Convolutional Networks
Paper • 1608.06993 • Published • 3
Aggregated Residual Transformations for Deep Neural Networks
Paper • 1611.05431 • Published • 2
RTSeg: Real-time Semantic Segmentation Comparative Study
Paper • 1803.02758 • Published • 2
Latent Diffusion Model for Medical Image Standardization and Enhancement
Paper • 2310.05237 • Published • 2
3D Medical Image Segmentation based on multi-scale MPU-Net
Paper • 2307.05799 • Published • 2
Joint Liver and Hepatic Lesion Segmentation in MRI using a Hybrid CNN with Transformer Layers
Paper • 2201.10981 • Published • 2
Bootstrap your own latent: A new approach to self-supervised Learning
Paper • 2006.07733 • Published • 2
From Modern CNNs to Vision Transformers: Assessing the Performance, Robustness, and Classification Strategies of Deep Learning Models in Histopathology
Paper • 2204.05044 • Published • 2
Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology
Paper • 2203.00585 • Published • 2
EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
Paper • 1905.11946 • Published • 3
DAS: A Deformable Attention to Capture Salient Information in CNNs
Paper • 2311.12091 • Published • 2
Semi-Supervised Semantic Segmentation using Redesigned Self-Training for White Blood Cells
Paper • 2401.07278 • Published • 2
Adding Conditional Control to Text-to-Image Diffusion Models
Paper • 2302.05543 • Published • 40
Data Distributional Properties Drive Emergent In-Context Learning in Transformers
Paper • 2205.05055 • Published • 2
CascadeTabNet: An approach for end to end table detection and structure recognition from image-based documents
Paper • 2004.12629 • Published • 2
Realism in Action: Anomaly-Aware Diagnosis of Brain Tumors from Medical Images Using YOLOv8 and DeiT
Paper • 2401.03302 • Published • 1
Detecting and recognizing characters in Greek papyri with YOLOv8, DeiT and SimCLR
Paper • 2401.12513 • Published • 1
DeiT-LT Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets
Paper • 2404.02900 • Published • 1
DeiT III: Revenge of the ViT
Paper • 2204.07118 • Published • 1
Transferable and Principled Efficiency for Open-Vocabulary Segmentation
Paper • 2404.07448 • Published • 11
ConsistencyDet: Robust Object Detector with Denoising Paradigm of Consistency Model
Paper • 2404.07773 • Published • 1
Long-form music generation with latent diffusion
Paper • 2404.10301 • Published • 24
GLIGEN: Open-Set Grounded Text-to-Image Generation
Paper • 2301.07093 • Published • 3
A Multimodal Automated Interpretability Agent
Paper • 2404.14394 • Published • 20
What needs to go right for an induction head? A mechanistic study of in-context learning circuits and their formation
Paper • 2404.07129 • Published • 3
Multiplication-Free Transformer Training via Piecewise Affine Operations
Paper • 2305.17190 • Published • 2
Large Scale GAN Training for High Fidelity Natural Image Synthesis
Paper • 1809.11096 • Published • 1
Revisiting Unreasonable Effectiveness of Data in Deep Learning Era
Paper • 1707.02968 • Published • 1
Emergence of Hidden Capabilities: Exploring Learning Dynamics in Concept Space
Paper • 2406.19370 • Published • 1
Paper • 2303.14027 • Published • 1
Equivariant Transformer Networks
Paper • 1901.11399 • Published • 1
Fixup Initialization: Residual Learning Without Normalization
Paper • 1901.09321 • Published • 1
RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness
Paper • 2206.14502 • Published • 1
Geodesic Multi-Modal Mixup for Robust Fine-Tuning
Paper • 2203.03897 • Published • 1