Update README.md
README.md
CHANGED
@@ -10,12 +10,24 @@ pinned: false
 Welcome to the llmware HuggingFace page. We believe that the ascendence of LLMs creates a major new application pattern and data
 pipelines that will be transformative in the enterprise, especially in knowledge-intensive industries. Our open source research efforts
 are focused both on the new "ware" ("middleware" and "software" that will wrap and integrate LLMs), as well as building high-quality
-automation-focused enterprise RAG models.
+automation-focused enterprise Agent, RAG and embedding models.

-Our
+Our model training initiatives fall into four major categories:
+
+--SLIMs (Structured Language Instruction Models) - small, specialized function calling models for stacking in multi-model, Agent-based workflows
+
+--BLING/DRAGON - highly-accurate fact-based question-answering models
+
+--Industry-BERT - industry fine-tuned embedding models
+
+--Private Inference Self-Hosting, Packaging and Quantization - GGUF, ONNX, OpenVino


 Please check out a few of our recent blog postings related to these initiatives:
+[SMALL MODEL ACCURACY BENCHMARK](https://medium.com/@darrenoberst/best-small-language-models-for-accuracy-and-enterprise-use-cases-benchmark-results-cf71964759c8) |
+[OUR JOURNEY BUILDING ACCURATE ENTERPRISE SMALL MODELS](https://medium.com/@darrenoberst/building-the-most-accurate-small-language-models-our-journey-781474f64d88) |
+[THINKING DOES NOT HAPPEN ONE TOKEN AT A TIME](https://medium.com/@darrenoberst/thinking-does-not-happen-one-token-at-a-time-0dd0c6a528ec) |
+[SLIMs](https://medium.com/@darrenoberst/slims-small-specialized-models-function-calling-and-multi-model-agents-8c935b341398) |
 [BLING](https://medium.com/@darrenoberst/small-instruct-following-llms-for-rag-use-case-54c55e4b41a8) |
 [RAG-INSTRUCT-TEST-DATASET](https://medium.com/@darrenoberst/how-accurate-is-rag-8f0706281fd9) |
 [LLMWARE EMERGING STACK](https://medium.com/@darrenoberst/the-emerging-llm-stack-for-rag-deee093af5fa) |
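For anyone landing on this commit and wondering how to try the model families listed above, here is a minimal sketch of asking a BLING model a fact-based question over a passage using plain Hugging Face transformers. The model id and the `<human>:` / `<bot>:` prompt wrapper are assumptions taken from the BLING model cards as I recall them, not part of this commit; check the card of the checkpoint you actually pull.

```python
# Minimal sketch (not from this commit): fact-based QA over a short passage with a BLING model.
# Assumptions: the model id below is one illustrative BLING checkpoint, and the
# "<human>: ... <bot>:" wrapper follows the prompt format described on the BLING model cards.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llmware/bling-1b-0.1"  # illustrative; other BLING/DRAGON checkpoints follow the same pattern

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

context = "The invoice total is $12,450 and payment is due on March 15, 2024."
question = "What is the payment due date?"

# BLING/DRAGON models are trained to answer strictly from the supplied context
prompt = f"<human>: {context}\n{question}\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=50, do_sample=False)

# Strip the prompt tokens and keep only the generated answer
answer = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer.strip())
```

The SLIM function-calling models and the Industry-BERT embedding models have their own loading and prompt conventions (documented on their model cards and wrapped by the llmware library), so treat this only as a starting point for the question-answering family.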