---

# AstroSage-Llama-3.1-8B

<INSERT PAPER LINK HERE>

AstroSage-Llama-3.1-8B is a domain-specialized natural-language AI assistant tailored for research in astronomy, astrophysics, and cosmology. Trained on the complete collection of astronomy-related arXiv papers from 2007 to 2024, along with millions of synthetically generated question-answer pairs and other astronomical literature, AstroSage-Llama-3.1-8B demonstrates remarkable proficiency on a wide range of astronomy questions. It scores 80.9% on the AstroMLab-1 benchmark, greatly outperforming all models, proprietary and open-weight alike, in the 8-billion-parameter class, and performing on par with GPT-4o. This result demonstrates the potential of domain specialization in AI, suggesting that focused training can yield capabilities exceeding those of much larger, general-purpose models. AstroSage-Llama-3.1-8B is freely available, enabling widespread access to advanced AI capabilities for astronomical education and research.

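As a quick start, the snippet below sketches basic inference with the `transformers` library. It assumes the model is pulled from the AstroMLab/AstroSage-8B repository listed under Technical Specifications, and that the model inherits the standard Llama 3.1 chat template from the instruct merge; adjust dtype and device settings to your hardware.

```python
# Minimal inference sketch; assumes the AstroMLab/AstroSage-8B repo id given
# under Technical Specifications and a standard Llama 3.1 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AstroMLab/AstroSage-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are an expert astronomy research assistant."},
    {"role": "user", "content": "Why are Type Ia supernovae useful as standard candles?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
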
## Model Details
- **Model Type**: Astronomy-specialized LLM
- **Base Model**: Meta-Llama-3.1-8B
- **Parameters**: 8 billion
- **Training Focus**: Astronomy, Astrophysics, Cosmology, and Astronomical Instrumentation
- **License**: Llama 3.1 Community License
- **Development Process**:
  1. Continued Pre-training (CPT) on astronomical literature
  2. Supervised Fine-tuning (SFT) on QA pairs and instruction sets
  3. Model merging with Meta-Llama-3.1-8B-Instruct (75% CPT+SFT / 25% Meta-Instruct); a sketch of this merge follows the list

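For illustration only, here is a minimal sketch of what the 75/25 linear weight merge in step 3 could look like. This is not the released merge recipe (dedicated tools such as mergekit are commonly used instead), and the checkpoint path and output directory are hypothetical placeholders.

```python
# Toy 75/25 linear weight merge; illustrative, not the actual merge recipe.
import torch
from transformers import AutoModelForCausalLM

cpt_sft = AutoModelForCausalLM.from_pretrained("path/to/cpt-sft-checkpoint")  # hypothetical
instruct = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

with torch.no_grad():
    # Both models share the Llama-3.1-8B architecture, so parameters pair up 1:1.
    for (_, p_merged), (_, p_inst) in zip(
        cpt_sft.named_parameters(), instruct.named_parameters()
    ):
        # Linear interpolation: 75% domain-tuned weights, 25% Meta-Instruct weights.
        p_merged.mul_(0.75).add_(p_inst, alpha=0.25)

cpt_sft.save_pretrained("astrosage-merged")  # hypothetical output directory
```
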
## Performance
- **AstroMLab-1 Benchmark**: 80.9% accuracy (a generic scoring sketch follows the list)
  - Outperforms all 8B-parameter models
  - Comparable to GPT-4o (80.4%)
  - ~1000x more cost-effective than proprietary models
  - 8-percentage-point improvement over the base model
- **General Capabilities**: Maintains strong performance on standard benchmarks
  - IF-EVAL: 41.4%
  - BBH: 52.9%
  - MATH: 8.4%
  - GPQA: 31.2%
  - MUSR: 38.9%
  - MMLU-PRO: 34.6%

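AstroMLab-1 is a multiple-choice benchmark (see Limitations), and accuracy figures like the one above are commonly computed by scoring each option letter by its next-token logit and taking the argmax. The snippet below is a generic sketch of that procedure, not the official evaluation harness; the prompt format and option letters are assumptions.

```python
# Generic multiple-choice scoring sketch; not the official AstroMLab-1 harness.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AstroMLab/AstroSage-8B")
model = AutoModelForCausalLM.from_pretrained(
    "AstroMLab/AstroSage-8B", torch_dtype=torch.bfloat16, device_map="auto"
)

def pick_answer(question_with_options: str, letters=("A", "B", "C", "D")) -> str:
    """Return the option letter with the highest next-token logit."""
    ids = tokenizer(question_with_options + "\nAnswer:", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids.to(model.device)).logits[0, -1]  # next-token logits
    scores = {
        c: logits[tokenizer.encode(" " + c, add_special_tokens=False)[0]].item()
        for c in letters
    }
    return max(scores, key=scores.get)
```
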
## Training Data
- **Continued Pre-training**:
  - ~250,000 arXiv preprints (2007-2024) from astro-ph and gr-qc (a category-filtering sketch follows the list)
  - Astronomy-related Wikipedia articles
  - Selected astronomy textbooks
  - Total: 3.3 billion tokens, 19.9 GB plaintext
- **Supervised Fine-tuning**:
  - 8.8 million curated QA pairs
  - Filtered Infinity-Instruct-7M dataset
  - Paper summaries and metadata
  - Total: 2.0 billion tokens, 9.8 GB plaintext

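As a purely hypothetical illustration of how the astro-ph and gr-qc subsets might be selected, the snippet below filters an arXiv metadata dump by category. This is not the authors' actual pipeline; the file name follows the public Kaggle arXiv snapshot and is a placeholder.

```python
# Hypothetical category filter over an arXiv metadata dump (JSON Lines).
import json

def is_astro(record: dict) -> bool:
    """True if any arXiv category falls under astro-ph or gr-qc."""
    categories = record.get("categories", "").split()
    return any(c.startswith(("astro-ph", "gr-qc")) for c in categories)

astro_ids = []
with open("arxiv-metadata-oai-snapshot.json") as f:  # placeholder file name
    for line in f:
        record = json.loads(line)
        if is_astro(record):
            astro_ids.append(record["id"])
```
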
## Intended Use
- Curiosity-driven question answering
- Brainstorming new ideas
- Astronomical research assistance
- Educational support in astronomy
- Literature review and summarization
- Domain-specific question answering
- Scientific explanation of concepts

## Limitations
- As with all LLMs, hallucinations are possible
- Complex reasoning is constrained by the 8-billion-parameter model size
- Paper metadata is not perfectly memorized
- Performance primarily validated on multiple-choice questions
- Training data cutoff: January 2024
- English-only capabilities

## Ethical Considerations
- Should not be used as the sole source for critical research decisions
- Output should be verified against primary sources
- May reflect biases present in the astronomical literature

## Technical Specifications
- **Architecture**: Based on Meta-Llama-3.1
- **Training Infrastructure**: ORNL OLCF Frontier
- **Hosting**: Hugging Face Hub (AstroMLab/AstroSage-8B)

## Citation and Contact
- Corresponding author: Tijmen de Haan <tijmen.dehaan at gmail dot com>
- Please cite the AstroMLab 3 paper when using this model.