# Vector store of embeddings for CFA Level 1 Curriculum
This is a FAISS vector store created with Sentence Transformers embeddings using LangChain. Use it for similarity search, question answering, or anything else that leverages embeddings! 🤗

Creating these embeddings can take a while, so here's a convenient, downloadable one 🤗
## How to use
### Download data

```bash
pip install -qqq langchain sentence_transformers faiss-cpu huggingface_hub
```

```python
import os
from huggingface_hub import snapshot_download

# download the vector store for the book you want
cache_dir = "cfa_level_1_cache"
book = "cfa_level_1"  # assumed sub-folder name; adjust to match the repo's file layout
vectorstore = snapshot_download(repo_id="nickmuchi/CFA_Level_1_Text_Embeddings",
                                repo_type="dataset",
                                revision="main",
                                allow_patterns=f"books/{book}/*",  # to download only the one book
                                cache_dir=cache_dir,
                                )

# get the path to the vector store folder that you just downloaded:
# we'll look inside the cache_dir for the folder we want
target_dir = "cfa/cfa_level_1"

# walk through the directory tree recursively
for root, dirs, files in os.walk(cache_dir):
    # check if the target folder name is in the list of directories
    if os.path.basename(target_dir) in dirs:
        # get the full path of the target directory
        target_path = os.path.join(root, os.path.basename(target_dir))
```
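Before loading, you can sanity-check that the folder was actually found and downloaded. A LangChain FAISS store saved with `save_local` normally consists of an `index.faiss` and an `index.pkl` file (that is the library default, not something verified against this particular repo):

```python
# optional sanity check: the resolved folder should contain index.faiss and index.pkl
print(target_path)
print(os.listdir(target_path))
```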
### Load to use with LangChain

```python
from langchain.embeddings import HuggingFaceEmbeddings, HuggingFaceInstructEmbeddings
from langchain.vectorstores.faiss import FAISS

# load embeddings
# these are the instructions that were used when the embeddings for the text were created
embed_instruction = "Represent the financial paragraph for document retrieval: "
query_instruction = "Represent the question for retrieving supporting documents: "

model_sbert = "sentence-transformers/all-mpnet-base-v2"
sbert_emb = HuggingFaceEmbeddings(model_name=model_sbert)

# HuggingFaceInstructEmbeddings also requires the InstructorEmbedding package
model_instr = "hkunlp/instructor-large"
instruct_emb = HuggingFaceInstructEmbeddings(model_name=model_instr,
                                             embed_instruction=embed_instruction,
                                             query_instruction=query_instruction)

# load the vector store to use with LangChain
docsearch = FAISS.load_local(folder_path=target_path, embeddings=sbert_emb)

# similarity search
question = "How do you hedge the interest rate risk of an MBS?"
search = docsearch.similarity_search(question, k=4)

for item in search:
    print(item.page_content)
    print(f"From page: {item.metadata['page']}")
    print("---")
```