abhi-db committed
Commit 588f833 · verified · 1 Parent(s): 566313b

Delete README.md

Files changed (1)
  1. README.md +0 -58
README.md DELETED
@@ -1,58 +0,0 @@
---
title: Databricks
emoji: 🏢
colorFrom: yellow
colorTo: pink
sdk: static
pinned: false
---

# About Us

With origins in academia and the open source community, Databricks was founded in 2013 by the original creators of Apache Spark™, Delta Lake and MLflow.
Our Data Intelligence Platform unifies data, AI and governance to make it easy for enterprises to create AI applications that understand their data.

Bolstered by the 2023 acquisition of MosaicML, the combined AI R&D teams (creators of the Dolly models, the [IFT dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k),
and the [MPT family of models](https://huggingface.co/collections/mosaicml/mpt-6564f3d9e5aac326bfa22def))
are now known as Databricks Mosaic AI Research. We continue to use rigorous science and engineering to deliver state-of-the-art generative AI training
and inference capabilities to organizations, while enabling them to retain control, security, and ownership over their valuable data.

To get started using models hosted on Hugging Face for training and inference on the Databricks platform,
[sign up for a free trial](https://www.databricks.com/try-databricks)!

# Resources

## [LLM Foundry](https://github.com/mosaicml/llm-foundry/tree/main)

This repo contains code for training, finetuning, evaluating, and deploying LLMs for inference with [Composer](https://github.com/mosaicml/composer)
on the Databricks Data Intelligence Platform.

## [Composer Library](https://github.com/mosaicml/composer)

The open source Composer library makes it easy to train models faster at the algorithmic level. It is built on top of PyTorch.
Use our collection of speedup methods in your own training loop or, for the best experience, with our Composer trainer.
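
As a rough sketch of the trainer path described above (not taken from this README or the Composer docs; the toy model and random stand-in data are placeholder assumptions), wrapping a plain PyTorch module in Composer's `ComposerModel` interface and handing it to the `Trainer` looks roughly like this:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from composer import Trainer
from composer.algorithms import LabelSmoothing
from composer.models import ComposerModel


class TinyClassifier(ComposerModel):
    """A toy PyTorch model wrapped in Composer's model interface (illustrative only)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Flatten(),
            torch.nn.Linear(28 * 28, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, num_classes),
        )

    def forward(self, batch):
        inputs, _ = batch
        return self.net(inputs)

    def loss(self, outputs, batch, *args, **kwargs):
        _, targets = batch
        return torch.nn.functional.cross_entropy(outputs, targets)


# Random stand-in data so the sketch is self-contained.
dataset = TensorDataset(torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,)))
train_dataloader = DataLoader(dataset, batch_size=32)

trainer = Trainer(
    model=TinyClassifier(),
    train_dataloader=train_dataloader,
    max_duration="1ep",                          # train for one epoch
    algorithms=[LabelSmoothing(smoothing=0.1)],  # one of Composer's speedup/quality methods
)
trainer.fit()
```

The other mode mentioned above, applying the speedup methods inside an existing training loop, is exposed through Composer's functional API (`composer.functional`).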

## [StreamingDataset](https://github.com/mosaicml/streaming)

Fast, accurate streaming of training data from cloud storage. We built `StreamingDataset` to make training on large datasets from cloud storage as fast,
cheap, and scalable as possible.

It's specially designed for multi-node, distributed training of large models, maximizing correctness guarantees, performance, and ease of use.
Now, you can efficiently train anywhere, independent of your training data location. Just stream in the data you need, when you need it.
To learn more about why we built `StreamingDataset`, read our [announcement blog](https://www.databricks.com/blog/mosaicml-streamingdataset).

`StreamingDataset` is compatible with any data type, including images, text, video, and multimodal data.

With support for major cloud storage providers, and designed as a drop-in replacement for your PyTorch
[`IterableDataset`](https://pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset) class,
`StreamingDataset` seamlessly integrates into your existing training workflows.
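
As a minimal sketch of that drop-in pattern (the bucket path and cache directory below are placeholder assumptions, not values from this README), pointing `StreamingDataset` at a remote copy of a dataset and iterating over it with a standard PyTorch `DataLoader` looks roughly like this:

```python
from torch.utils.data import DataLoader

from streaming import StreamingDataset

# 'remote' points at the shard files in cloud storage; 'local' is a cache
# directory on the training node. Shards are downloaded on demand as the
# dataset is iterated.
dataset = StreamingDataset(
    remote="s3://my-bucket/my-dataset",   # hypothetical bucket path
    local="/tmp/streaming-cache",         # hypothetical local cache
    shuffle=True,
    batch_size=32,
)

# StreamingDataset subclasses torch.utils.data.IterableDataset, so it plugs
# into the usual DataLoader without changes to the rest of the training loop.
dataloader = DataLoader(dataset, batch_size=32)

for batch in dataloader:
    ...  # existing training step goes here
```

Datasets are first written to the library's shard format (for example with `streaming.MDSWriter`) before they can be streamed this way.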

## [Examples Repo](https://github.com/mosaicml/examples)

This repo contains reference examples for training ML models quickly and to high accuracy. It's designed to be easily forked and modified.
It currently features the following examples:

- [ResNet-50 + ImageNet](https://github.com/mosaicml/examples#resnet-50--imagenet)
- [DeeplabV3 + ADE20k](https://github.com/mosaicml/examples#deeplabv3--ade20k)
- [GPT / Large Language Models](https://github.com/mosaicml/examples#large-language-models-llms)
- [BERT](https://github.com/mosaicml/examples#bert)