---
library_name: transformers
license: other
datasets:
- Locutusque/hercules-v4.0
language:
- en
---
<style>
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
line-height: 1.6;
color: #f5f5f5;
background-color: #1e2a36;
margin: 0;
padding: 0;
}
.container {
max-width: 1200px;
margin: 20px auto;
padding: 20px;
background-color: #2a3f54;
border-radius: 8px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
display: flex;
flex-wrap: wrap;
justify-content: space-between;
}
h1 {
font-size: 2.5rem;
color: #51a3d3;
text-align: center;
margin-bottom: 30px;
width: 100%;
}
h2 {
font-size: 1.75rem;
margin: 20px 0;
color: #63b8ea;
padding-bottom: 10px;
}
h3 {
font-size: 1.25rem;
color: #80c8f4;
}
p, a {
font-size: 1rem;
}
p {
color: #b0c2ce;
margin-bottom: 20px;
}
ul {
list-style-type: none;
padding: 0;
display: flex;
flex-wrap: wrap;
justify-content: space-between;
width: 100%;
}
li {
background-color: #34495e;
padding: 20px;
margin-bottom: 10px;
border-radius: 4px;
cursor: pointer;
transition: background-color 0.3s ease, color 0.3s ease;
overflow: hidden;
color: #b0c2ce;
width: calc(50% - 10px);
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
li:hover {
background-color: #4e6a81;
color: #dfe8f1;
}
.section-content {
margin-top: 15px;
border-top: 1px solid #4e6a81;
padding-top: 10px;
}
a {
color: #a4c8e1;
text-decoration: none;
}
a:hover {
text-decoration: underline;
}
pre {
background-color: #2c3e50;
padding: 10px;
border-radius: 5px;
overflow-x: auto;
color: #b0c2ce;
}
</style>
<div class="container">
<h1>Hercules-Qwen1.5-14B</h1>
</div>
<ul>
<li>
<h2>Model Details</h2>
<div class="section-content">
<h3>Model Description</h3>
<p>This model has capabilities in math, coding, function calling, roleplay, and more. It was fine-tuned on 700,000 examples from the Hercules-v4.0 dataset.</p>
<p><strong>Developed by:</strong> M4-ai</p>
<p><strong>Language(s) (NLP):</strong> English; the Qwen1.5 base model may also retain some Chinese capability</p>
<p><strong>License:</strong> Tongyi Qianwen license</p>
<p><strong>Finetuned from model:</strong> <a href="https://huggingface.co/Qwen/Qwen1.5-14B">Qwen1.5-14B</a></p>
</div>
</li>
<li>
<h2>Uses</h2>
<div class="section-content">
<p>General-purpose assistant, question answering, chain-of-thought reasoning, etc.</p>
<h3>Recommendations</h3>
<p>Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.</p>
</div>
</li>
<li>
<h2>Evaluation</h2>
<div class="section-content">
<p>Coming soon.</p>
</div>
</li>
<li>
<h2>Training Details</h2>
<div class="section-content">
<h3>Training Data</h3>
<p><a href="https://huggingface.co/datasets/Locutusque/hercules-v4.0">https://huggingface.co/datasets/Locutusque/hercules-v4.0</a></p>
<h4>Training Hyperparameters</h4>
<p><strong>Training regime:</strong> bf16 non-mixed precision</p>
</div>
</li>
<li>
<h2>Technical Specifications</h2>
<div class="section-content">
<h4>Hardware</h4>
<p>We trained on 8 Kaggle TPU cores with a global batch size of 128 and a sequence length of 1024.</p>
</div>
</li>
<li>
<h2>Contributions</h2>
<div class="section-content">
<p>Thanks to @Tonic, @aloobun, @fhai50032, and @Locutusque for their contributions to this model.</p>
</div>
</li>
</ul>
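<h2>Usage</h2>
<p>A minimal inference sketch. Hedged: the Hugging Face repo id <code>M4-ai/Hercules-Qwen1.5-14B</code> and the helper names below are assumptions inferred from this card, not confirmed by it; Qwen1.5 chat models use the ChatML prompt format.</p>

```python
# Minimal inference sketch. The repo id below is an assumption based on the
# card (developer "M4-ai", model name "Hercules-Qwen1.5-14B"); verify it
# against the actual Hugging Face repo before use.
MODEL_ID = "M4-ai/Hercules-Qwen1.5-14B"


def build_chatml_prompt(messages):
    """Render a list of {role, content} messages into the ChatML format
    used by Qwen1.5 chat models, ending with an assistant header."""
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    return prompt + "<|im_start|>assistant\n"


def generate(messages, max_new_tokens=256):
    """Load the model and generate a reply. The heavy imports are kept
    inside the function because loading a 14B model downloads many GB
    of weights."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(build_chatml_prompt(messages), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True
    )


# Example call (downloads the full model; uncomment to run):
# print(generate([{"role": "user", "content": "What is 7 * 8?"}]))
```

<p>With a recent <code>transformers</code>, <code>tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)</code> produces the same ChatML string as the manual helper above.</p>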