Update README.md
README.md CHANGED
@@ -9,11 +9,10 @@ pinned: false
 
 <p align="center">
     <a href="https://light.princeton.edu/publication/delta_prox/">
-        <img src="logo.svg" alt="Delta Prox" width="16.5%">
+        <img src="https://huggingface.co/spaces/delta-prox/README/raw/main/logo.svg" alt="Delta Prox" width="16.5%">
     </a> 
 </p>
 
-
 <p align="center">
     Differentiable Proximal Algorithm Modeling for Large-Scale Optimization
 </p>
@@ -25,5 +24,4 @@ Differentiable Proximal Algorithm Modeling for Large-Scale Optimization
     <a href="https://github.com/princeton-computational-imaging/Delta-Prox/tree/main/examples">Examples</a>
 </p>
 
-
 > ∇-Prox is a domain-specific language (DSL) and compiler that transforms optimization problems into differentiable proximal solvers. Departing from handwriting these solvers and differentiating via autograd, ∇-Prox requires only a few lines of code to define a solver that can be *specialized based on user requirements w.r.t. memory constraints or training budget* by optimized algorithm unrolling, deep equilibrium learning, and deep reinforcement learning. ∇-Prox makes it easier to prototype different learning-based bi-level optimization problems for a diverse range of applications. We compare our framework against existing methods with naive implementations. ∇-Prox is significantly more compact in terms of lines of code and compares favorably in memory consumption in applications across domains.
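The abstract above mentions specializing solvers by algorithm unrolling, i.e. running a fixed number of proximal iterations so that step sizes and thresholds can later be treated as learnable parameters. The sketch below is a minimal, framework-agnostic illustration of that idea using plain NumPy and the classic ISTA iteration for a LASSO problem; it is not ∇-Prox's actual API, and the function names (`soft_threshold`, `unrolled_ista`) are hypothetical.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1 (elementwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(A, b, lam, n_iters=100):
    # Unrolled proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # In a differentiable framework, n_iters is fixed ("unrolled") and the
    # step size / threshold would become learnable parameters.
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)          # gradient of the data term
        x = soft_threshold(x - step * grad, step * lam)  # prox step
    return x
```

Because the loop body is a fixed composition of differentiable (almost everywhere) operations, gradients with respect to `step`, `lam`, or even a learned proximal operator can flow through the whole unrolled computation, which is the premise of the learning-based bi-level optimization the abstract describes.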