Yuan (Cyrus) Chiang committed
Commit b5053f0 · unverified · Parent: da724dc

Fix MD kwargs (#52)

* add pypi badge

* move task page button

* fix md kwargs

* patch stability input module

* add one script linux installation

* add list of implemented tasks

* enforce release using workflow dispatch

.github/README.md CHANGED
@@ -1,6 +1,7 @@
 <div align="center">
 <h1>MLIP Arena</h1>
 <a href="https://github.com/atomind-ai/mlip-arena/actions"><img alt="GitHub Actions Workflow Status" src="https://img.shields.io/github/actions/workflow/status/atomind-ai/mlip-arena/test.yaml"></a>
+<a href="https://pypi.org/project/mlip-arena/"><img alt="PyPI - Version" src="https://img.shields.io/pypi/v/mlip-arena"></a>
 <a href="https://zenodo.org/doi/10.5281/zenodo.13704399"><img src="https://zenodo.org/badge/776930320.svg" alt="DOI"></a>
 <a href="https://huggingface.co/spaces/atomind/mlip-arena"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Space-blue" alt="Hugging Face"></a>
 <!-- <a href="https://discord.gg/W8WvdQtT8T"><img alt="Discord" src="https://img.shields.io/discord/1299613474820984832?logo=discord"> -->
@@ -30,6 +31,15 @@ pip install mlip-arena
 **Linux**
 
 ```bash
+# (Optional) Install uv
+curl -LsSf https://astral.sh/uv/install.sh | sh
+source $HOME/.local/bin/env
+# One-script uv pip installation
+bash scripts/install-linux.sh
+```
+
+```bash
+# Or install step by step from the command line
 git clone https://github.com/atomind-ai/mlip-arena.git
 cd mlip-arena
 pip install torch==2.2.0
@@ -87,6 +97,19 @@ for model in MLIPEnum:
     results.append(result)
 ```
 
+### List of implemented tasks
+
+The implemented tasks are available under `mlip_arena.tasks.<module>.run`, or via `from mlip_arena.tasks import *` for convenient imports (the latter currently does not work if [phonopy](https://phonopy.github.io/phonopy/install.html) is not installed).
+
+- [OPT](../mlip_arena/tasks/optimize.py#L56): Structure optimization
+- [EOS](../mlip_arena/tasks/eos.py#L42): Equation of state (energy-volume scan)
+- [MD](../mlip_arena/tasks/md.py#L200): Molecular dynamics with flexible dynamics (NVE, NVT, NPT) and temperature/pressure scheduling (annealing, shearing, *etc.*)
+- [PHONON](../mlip_arena/tasks/phonon.py#L110): Phonon calculation driven by [phonopy](https://phonopy.github.io/phonopy/install.html)
+- [NEB](../mlip_arena/tasks/neb.py#L96): Nudged elastic band
+- [NEB_FROM_ENDPOINTS](../mlip_arena/tasks/neb.py#L164): Nudged elastic band with convenient image interpolation (linear or IDPP)
+- [ELASTICITY](../mlip_arena/tasks/elasticity.py#L78): Elastic tensor calculation
+
+
 ## Contribute
 
 MLIP Arena is now in pre-alpha. If you're interested in joining the effort, please reach out to Yuan at [cyrusyc@berkeley.edu](mailto:cyrusyc@berkeley.edu).
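As a companion to the task list added above, here is a minimal sketch of how the convenience imports and the renamed MD arguments fit together. It assumes phonopy is installed (so `from mlip_arena.tasks import ...` works); the `atoms=` and `calculator_name=` argument names, and the direct call to the Prefect task, are assumptions for illustration only, since those parts of the signature are not shown in this diff.

```python
from ase.build import bulk

from mlip_arena.models import MLIPEnum
from mlip_arena.tasks import MD  # one of the entry points listed above

atoms = bulk("Cu", "fcc", a=3.6) * (2, 2, 2)

results = []
for model in MLIPEnum:
    result = MD(
        atoms=atoms,                 # assumed argument name
        calculator_name=model.name,  # assumed argument name
        ensemble="nvt",
        dynamics="langevin",
        total_time=10,               # fs, matching tests/test_md.py
        time_step=2,                 # fs
        dynamics_kwargs={},          # renamed from ase_md_kwargs in this commit
        velocity_seed=0,             # renamed from md_velocity_seed
    )
    results.append(result)
```

Depending on the Prefect version, the task may need to be invoked inside a flow or via `MD.fn(...)`; the direct call above is kept for brevity.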
.github/workflows/release.yaml CHANGED
@@ -1,74 +1,20 @@
 name: Publish Release
 
 on:
-  workflow_run:
-    workflows: [Python Test]
-    types: [completed]
+  # workflow_run:
+  #   workflows: [Python Test]
+  #   branches: [main]
+  #   types: [completed]
   workflow_dispatch:
 
 permissions:
   contents: write # Ensure write access to push tags
 
 jobs:
-  release:
-    name: Create GitHub Release
-    runs-on: ubuntu-latest
-
-    steps:
-      # Step 1: Checkout the code
-      - name: Checkout code
-        uses: actions/checkout@v3
-
-      # Step 2: Set up Python
-      - name: Set up Python
-        uses: actions/setup-python@v4
-        with:
-          python-version: '3.x'
-
-      # Step 3: Install dependencies
-      - name: Install dependencies
-        run: pip install toml
-
-      # Step 4: Extract version from pyproject.toml
-      - name: Extract version
-        id: get_version
-        run: |
-          VERSION=$(python -c "import toml; print(toml.load('pyproject.toml')['project']['version'])")
-          echo "VERSION=$VERSION" >> $GITHUB_ENV
-
-      # Step 5: Check if tag exists on remote
-      - name: Check if tag exists on remote
-        id: check_tag
-        run: |
-          if git ls-remote --tags origin | grep "refs/tags/v${{ env.VERSION }}"; then
-            echo "Tag v${{ env.VERSION }} already exists on remote."
-            echo "tag_exists=true" >> $GITHUB_ENV
-          else
-            echo "tag_exists=false" >> $GITHUB_ENV
-          fi
-
-      # Step 6: Create and push a new tag (if it doesn't exist)
-      - name: Create Git tag
-        if: env.tag_exists == 'false'
-        run: |
-          git config --global user.name "github-actions[bot]"
-          git config --global user.email "github-actions[bot]@users.noreply.github.com"
-          git tag -a "v${{ env.VERSION }}" -m "Release v${{ env.VERSION }}"
-          git push origin "v${{ env.VERSION }}"
-
-      # Step 7: Create GitHub release (if tag didn't exist)
-      - name: Create GitHub Release
-        if: env.tag_exists == 'false'
-        uses: softprops/action-gh-release@v1
-        with:
-          tag_name: "v${{ env.VERSION }}"
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-
   pypi:
     name: Publish to PyPI
     runs-on: ubuntu-latest
-    needs: release # This job runs after the release job
+    # needs: release # This job runs after the release job
 
     steps:
       # Step 1: Checkout the code
mlip_arena/tasks/md.py CHANGED
@@ -157,29 +157,29 @@ def _get_ensemble_defaults(
     dynamics: str | MolecularDynamics,
     t_schedule: np.ndarray,
     p_schedule: np.ndarray,
-    ase_md_kwargs: dict | None = None,
+    dynamics_kwargs: dict | None = None,
 ) -> dict:
     """Update ASE MD kwargs"""
-    ase_md_kwargs = ase_md_kwargs or {}
+    dynamics_kwargs = dynamics_kwargs or {}
 
     if ensemble == "nve":
-        ase_md_kwargs.pop("temperature", None)
-        ase_md_kwargs.pop("temperature_K", None)
-        ase_md_kwargs.pop("externalstress", None)
+        dynamics_kwargs.pop("temperature", None)
+        dynamics_kwargs.pop("temperature_K", None)
+        dynamics_kwargs.pop("externalstress", None)
     elif ensemble == "nvt":
-        ase_md_kwargs["temperature_K"] = t_schedule[0]
-        ase_md_kwargs.pop("externalstress", None)
+        dynamics_kwargs["temperature_K"] = t_schedule[0]
+        dynamics_kwargs.pop("externalstress", None)
     elif ensemble == "npt":
-        ase_md_kwargs["temperature_K"] = t_schedule[0]
-        ase_md_kwargs["externalstress"] = p_schedule[0]  # * 1e3 * units.bar
+        dynamics_kwargs["temperature_K"] = t_schedule[0]
+        dynamics_kwargs["externalstress"] = p_schedule[0]  # * 1e3 * units.bar
 
     if isinstance(dynamics, str) and dynamics.lower() == "langevin":
-        ase_md_kwargs["friction"] = ase_md_kwargs.get(
+        dynamics_kwargs["friction"] = dynamics_kwargs.get(
            "friction",
            10.0 * 1e-3 / units.fs,  # Same default as in VASP: 10 ps^-1
        )
 
-    return ase_md_kwargs
+    return dynamics_kwargs
 
 
 def _generate_task_run_name():
@@ -206,8 +206,8 @@ def run(
     total_time: float = 1000,  # fs
     temperature: float | Sequence | np.ndarray | None = 300.0,  # K
     pressure: float | Sequence | np.ndarray | None = None,  # eV/A^3
-    ase_md_kwargs: dict | None = None,
-    md_velocity_seed: int | None = None,
+    dynamics_kwargs: dict | None = None,
+    velocity_seed: int | None = None,
     zero_linear_momentum: bool = True,
     zero_angular_momentum: bool = True,
     traj_file: str | Path | None = None,
@@ -235,12 +235,12 @@
         pressure=pressure,
     )
 
-    ase_md_kwargs = _get_ensemble_defaults(
+    dynamics_kwargs = _get_ensemble_defaults(
         ensemble=ensemble,
         dynamics=dynamics,
         t_schedule=t_schedule,
        p_schedule=p_schedule,
-        ase_md_kwargs=ase_md_kwargs,
+        dynamics_kwargs=dynamics_kwargs,
     )
 
     if isinstance(dynamics, str):
@@ -289,7 +289,7 @@
         MaxwellBoltzmannDistribution(
             atoms=atoms,
             temperature_K=t_schedule[last_step],
-            rng=np.random.default_rng(seed=md_velocity_seed),
+            rng=np.random.default_rng(seed=velocity_seed),
         )
 
         if zero_linear_momentum:
@@ -303,7 +303,7 @@
         MaxwellBoltzmannDistribution(
             atoms=atoms,
             temperature_K=t_schedule[last_step],
-            rng=np.random.default_rng(seed=md_velocity_seed),
+            rng=np.random.default_rng(seed=velocity_seed),
         )
 
         if zero_linear_momentum:
@@ -314,7 +314,7 @@
     md_runner = md_class(
         atoms=atoms,
         timestep=time_step * units.fs,
-        **ase_md_kwargs,
+        **dynamics_kwargs,
     )
 
     if traj_file is not None:
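To make the renamed `dynamics_kwargs` behaviour easy to see outside the task, here is a small self-contained mirror of the merging rules in `_get_ensemble_defaults` above; the function name `merge_ensemble_defaults` is illustrative and not part of the package.

```python
import numpy as np
from ase import units
from ase.md.md import MolecularDynamics


def merge_ensemble_defaults(
    ensemble: str,
    dynamics: str | MolecularDynamics,
    t_schedule: np.ndarray,
    p_schedule: np.ndarray,
    dynamics_kwargs: dict | None = None,
) -> dict:
    """Mirror of the kwargs-merging rules shown in the hunk above."""
    dynamics_kwargs = dict(dynamics_kwargs or {})

    if ensemble == "nve":
        # NVE ignores any thermostat/barostat settings.
        for key in ("temperature", "temperature_K", "externalstress"):
            dynamics_kwargs.pop(key, None)
    elif ensemble == "nvt":
        dynamics_kwargs["temperature_K"] = t_schedule[0]
        dynamics_kwargs.pop("externalstress", None)
    elif ensemble == "npt":
        dynamics_kwargs["temperature_K"] = t_schedule[0]
        dynamics_kwargs["externalstress"] = p_schedule[0]

    if isinstance(dynamics, str) and dynamics.lower() == "langevin":
        # Same default as in VASP: 10 ps^-1, expressed in ASE internal units.
        dynamics_kwargs.setdefault("friction", 10.0 * 1e-3 / units.fs)

    return dynamics_kwargs


# NVT + Langevin: temperature_K is injected from the schedule, externalstress is
# dropped, and a default friction is filled in only if the user did not set one.
print(
    merge_ensemble_defaults(
        ensemble="nvt",
        dynamics="langevin",
        t_schedule=np.array([300.0]),
        p_schedule=np.array([0.0]),
        dynamics_kwargs={"externalstress": 1.0},
    )
)
```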
mlip_arena/tasks/stability/input.py ADDED
@@ -0,0 +1,74 @@
+import os
+from pathlib import Path
+from typing import Generator, Iterable
+
+from huggingface_hub import HfApi, hf_hub_download
+from prefect import task
+
+from ase import Atoms
+from ase.db import connect
+from mlip_arena.tasks.utils import logger
+
+
+def save_to_db(
+    atoms_list: list[Atoms] | Iterable[Atoms] | Atoms,
+    db_path: Path | str,
+    upload: bool = True,
+    hf_token: str | None = os.getenv("HF_TOKEN", None),
+    repo_id: str = "atomind/mlip-arena",
+    repo_type: str = "dataset",
+    subfolder: str = Path(__file__).parent.name,
+):
+    """Save ASE Atoms objects to an ASE database and optionally upload to Hugging Face Hub."""
+
+    if upload and hf_token is None:
+        raise ValueError("HF_TOKEN is required to upload the database.")
+
+    db_path = Path(db_path)
+
+    if isinstance(atoms_list, Atoms):
+        atoms_list = [atoms_list]
+
+    with connect(db_path) as db:
+        for atoms in atoms_list:
+            if not isinstance(atoms, Atoms):
+                raise ValueError("atoms_list must contain ASE Atoms objects.")
+            db.write(atoms)
+
+    if upload:
+        api = HfApi(token=hf_token)
+        api.upload_file(
+            path_or_fileobj=db_path,
+            path_in_repo=f"{subfolder}/{db_path.name}",
+            repo_id=repo_id,
+            repo_type=repo_type,
+        )
+        logger.info(f"{db_path.name} uploaded to {repo_id}/{subfolder}")
+
+    return db_path
+
+
+@task
+def get_atoms_from_db(
+    db_path: Path | str,
+    hf_token: str | None = os.getenv("HF_TOKEN", None),
+    repo_id: str = "atomind/mlip-arena",
+    repo_type: str = "dataset",
+    subfolder: str = Path(__file__).parent.name,
+    force_download: bool = False,
+) -> Generator[Atoms, None, None]:
+    """Retrieve ASE Atoms objects from an ASE database."""
+    db_path = Path(db_path)
+    if not db_path.exists():
+        db_path = hf_hub_download(
+            repo_id=repo_id,
+            repo_type=repo_type,
+            subfolder=subfolder,
+            # local_dir=db_path.parent,
+            filename=db_path.name,
+            token=hf_token,
+            force_download=force_download,
+        )
+    with connect(db_path) as db:
+        for row in db.select():
+            yield row.toatoms()
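A minimal usage sketch for the new stability input module, assuming the package layout in this commit. The file name `stability-inputs.db` and the example structures are illustrative, and `.fn()` is used to call the Prefect task's underlying function outside a flow.

```python
from ase.build import bulk

from mlip_arena.tasks.stability.input import get_atoms_from_db, save_to_db

# Write a couple of structures to a local ASE database; upload=False skips the
# Hugging Face Hub upload, so no HF_TOKEN is needed.
structures = [bulk("Al", "fcc", a=4.05), bulk("Fe", "bcc", a=2.87)]
db_path = save_to_db(structures, "stability-inputs.db", upload=False)

# Read them back. Since the file exists locally, no hf_hub_download is triggered.
for atoms in get_atoms_from_db.fn(db_path):
    print(atoms.get_chemical_formula())
```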
scripts/install-linux.sh ADDED
@@ -0,0 +1,8 @@
+TORCH=2.2
+CUDA=cu121
+uv pip install torch==${TORCH}.0
+uv pip install torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}.0+${CUDA}.html
+uv pip install torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}.0+${CUDA}.html
+uv pip install dgl -f https://data.dgl.ai/wheels/torch-${TORCH}/${CUDA}/repo.html
+uv pip install -e .[test]
+uv pip install -e .[mace]
serve/leaderboard.py CHANGED
@@ -117,13 +117,14 @@ for task in TASKS:
 
     task_module = importlib.import_module(f"ranks.{TASKS[task]['rank-page']}")
 
+    st.page_link(
+        f"tasks/{TASKS[task]['task-page']}.py",
+        label="Go to the associated task page",
+        icon=":material/link:",
+    )
+
     # Call the function from the imported module
     if hasattr(task_module, "render"):
-        st.page_link(
-            f"tasks/{TASKS[task]['task-page']}.py",
-            label="Go to the associated task page",
-            icon=":material/link:",
-        )
         task_module.render()
     # if st.button(f"Go to task page"):
     #     st.switch_page(f"tasks/{TASKS[task]['task-page']}.py")
tests/test_md.py CHANGED
@@ -23,6 +23,7 @@ def test_nve(model: MLIPEnum):
         dynamics="velocityverlet",
         total_time=10,
         time_step=2,
+        dynamics_kwargs={},
     )
 
     assert isinstance(result["atoms"].get_potential_energy(), float)