ethzanalytics/gpt-j-8bit-daily_dialogues
Maintained by the Analytics Club at ETH Zürich (ethzanalytics).
Tags: Text Generation · Transformers · PyTorch · daily_dialog · gptj · 8bit · 8-bit precision · quantization · compression · chatbot · dialogue · conversation
License: apache-2.0
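The tags indicate an 8-bit quantized GPT-J checkpoint fine-tuned on daily_dialog for conversational text generation. Below is a minimal usage sketch, assuming the sharded weights load through the standard `transformers` `from_pretrained` API; 8-bit GPT-J checkpoints of this vintage were typically produced with bitsandbytes-based quantization, so extra dependencies or custom loading code may be needed. The prompt format shown is an assumption, not taken from this repository.

```python
# Minimal sketch (not the repository's documented usage): load the checkpoint
# and generate a dialogue continuation with the standard transformers API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ethzanalytics/gpt-j-8bit-daily_dialogues"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumption: the sharded 8-bit weights load directly; depending on how they
# were packed, bitsandbytes and/or custom dequantization code may be required.
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt format is a placeholder; check the README for the speaker tokens
# actually used during fine-tuning on daily_dialog.
prompt = "Hi, how was your day?\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```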
Files and versions: 1 contributor · 7 commits
Latest commit 483faff: "add tokenizer" (pszemraj, almost 2 years ago)
File                                Size       LFS   Last commit          Last updated
.gitattributes                      1.53 kB    -     add sharded model    almost 2 years ago
README.md                           600 Bytes  -     Update README.md     almost 2 years ago
added_tokens.json                   4.33 kB    -     add tokenizer        almost 2 years ago
config.json                         1.04 kB    -     add sharded model    almost 2 years ago
merges.txt                          456 kB     -     add tokenizer        almost 2 years ago
pytorch_model-00001-of-00004.bin    1.94 GB    LFS   add sharded model    almost 2 years ago
pytorch_model-00002-of-00004.bin    2 GB       LFS   add sharded model    almost 2 years ago
pytorch_model-00003-of-00004.bin    1.94 GB    LFS   add sharded model    almost 2 years ago
pytorch_model-00004-of-00004.bin    343 MB     LFS   add sharded model    almost 2 years ago
pytorch_model.bin.index.json        82.1 kB    LFS   add sharded model    almost 2 years ago
special_tokens_map.json             438 Bytes  -     add tokenizer        almost 2 years ago
tokenizer.json                      2.14 MB    -     add tokenizer        almost 2 years ago
tokenizer_config.json               763 Bytes  -     add tokenizer        almost 2 years ago
vocab.json                          798 kB     -     add tokenizer        almost 2 years ago

Each pytorch_model-*.bin shard is a pickle-serialized checkpoint. Detected pickle imports (4, identical across all shards): torch.FloatStorage, torch.ByteStorage, torch._utils._rebuild_tensor_v2, collections.OrderedDict.
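The "detected pickle imports" above come from the Hub's automated pickle scan. A similar check can be reproduced locally with the standard library; the sketch below assumes a shard has already been downloaded, that it is a zip-format torch checkpoint containing a data.pkl member (the torch.save default since PyTorch 1.6), and that imports appear as protocol-2 GLOBAL opcodes.

```python
# Sketch: list the imported globals referenced by a checkpoint shard's pickle,
# similar to what the Hub's pickle scanner reports in the file listing above.
import pickletools
import zipfile

shard = "pytorch_model-00001-of-00004.bin"  # local path to a downloaded shard

with zipfile.ZipFile(shard) as zf:
    # torch's zip checkpoint format stores the pickled object graph in data.pkl
    pkl_member = next(name for name in zf.namelist() if name.endswith("data.pkl"))
    data = zf.read(pkl_member)

imports = set()
for opcode, arg, _pos in pickletools.genops(data):
    if opcode.name == "GLOBAL":  # arg is "module name", space-separated
        module, name = arg.split(" ", 1)
        imports.add(f"{module}.{name}")

print(sorted(imports))
# Per the Hub scan, this should include: collections.OrderedDict,
# torch.ByteStorage, torch.FloatStorage, torch._utils._rebuild_tensor_v2
```

Scanning the imports shows which callables the pickle would load on deserialization; anything outside the expected torch/collections set would be a reason not to load the file.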