aaronb committed
Commit 9578f60 • 1 Parent(s): 62d9933

Update README.md

Files changed (1):
  README.md (+1, −3)
README.md CHANGED
@@ -9,11 +9,10 @@ pinned: false
 
 <p align="center">
     <a href="https://light.princeton.edu/publication/delta_prox/">
-    <img src="logo.svg" alt="Delta Prox" width="16.5%">
+    <img src="https://huggingface.co/spaces/delta-prox/README/raw/main/logo.svg" alt="Delta Prox" width="16.5%">
     </a> &ensp;
 </p>
 
-
 <p align="center">
 Differentiable Proximal Algorithm Modeling for Large-Scale Optimization
 </p>
@@ -25,5 +24,4 @@ Differentiable Proximal Algorithm Modeling for Large-Scale Optimization
 <a href="https://github.com/princeton-computational-imaging/Delta-Prox/tree/main/examples">Examples</a>
 </p>
 
-
 > ∇-Prox is a domain-specific language (DSL) and compiler that transforms optimization problems into differentiable proximal solvers. Departing from handwriting these solvers and differentiating via autograd, ∇-Prox requires only a few lines of code to define a solver that can be *specialized based on user requirements w.r.t. memory constraints or training budget* by optimized algorithm unrolling, deep equilibrium learning, and deep reinforcement learning. ∇-Prox makes it easier to prototype different learning-based bi-level optimization problems for a diverse range of applications. We compare our framework against existing methods with naive implementations. ∇-Prox is significantly more compact in terms of lines of code and compares favorably in memory consumption in applications across domains.
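The quoted abstract contrasts ∇-Prox's few-line solver definitions with handwritten proximal solvers. For readers unfamiliar with the latter, below is a minimal sketch of such a handwritten solver — proximal gradient descent (ISTA) for a LASSO problem in plain NumPy. This is *not* ∇-Prox's API; it illustrates the kind of per-problem boilerplate the DSL is meant to replace, and the problem sizes and regularization weight are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(K, b, lam, iters=500):
    # Handwritten proximal-gradient (ISTA) solver for
    #   min_x 0.5 * ||K x - b||^2 + lam * ||x||_1
    # Step size 1/L, where L = ||K||_2^2 is the gradient's Lipschitz constant.
    step = 1.0 / np.linalg.norm(K, 2) ** 2
    x = np.zeros(K.shape[1])
    for _ in range(iters):
        grad = K.T @ (K @ x - b)          # gradient of the smooth data term
        x = soft_threshold(x - step * grad, step * lam)  # prox of the L1 term
    return x

# Illustrative sparse-recovery instance: 40 measurements, 100 unknowns.
rng = np.random.default_rng(0)
K = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = K @ x_true
x_hat = ista(K, b, lam=0.1)
```

Every new application of this pattern means re-deriving the gradient, step size, and proximal operator by hand; the paper's point is that ∇-Prox generates and specializes such solvers (unrolled, equilibrium, or RL-tuned variants) from a declarative problem description instead.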