pydantic model langchain.llms.Replicate[source]
Wrapper around Replicate models.

To use, you should have the replicate python package installed, and the environment variable REPLICATE_API_TOKEN set with your API token. You can find your token here: https://replicate.com/account

The model param is required, but any other model parameters can also be passed in with the format input={model_param: value, ...}

Validators
    build_extra » all fields
    set_callback_manager » callback_manager
    set_verbose » verbose
    validate_environment » all fields
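A minimal usage sketch of the pattern described above; the model/version string and the input parameters below are placeholders, not values documented here:

    import os
    from langchain.llms import Replicate

    # Assumes REPLICATE_API_TOKEN is already set; shown here only for clarity.
    os.environ["REPLICATE_API_TOKEN"] = "<your-api-token>"

    # Hypothetical model/version identifier; any model-specific parameters
    # are passed via the `input` dict as described above.
    llm = Replicate(
        model="<model-owner>/<model-name>:<version-hash>",
        input={"temperature": 0.75, "max_length": 200},
    )
    print(llm("Briefly explain what Replicate is."))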
__call__(prompt: str, stop: Optional[List[str]] = None) → str
    Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters
        include – fields to include in new model
        exclude – fields to exclude from new model, as with values this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance

dict(**kwargs: Any) → Dict
    Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) → int
    Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int
    Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

save(file_path: Union[pathlib.Path, str]) → None
    Save the LLM.
    Parameters
        file_path – Path to file to save the LLM to.
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
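As a rough illustration of the shared LLM interface listed above (it is the same for every wrapper on this page, not specific to Replicate), generate accepts a batch of prompts and returns an LLMResult; a minimal sketch, assuming an already-constructed llm instance such as the Replicate example earlier:

    # `llm` is any LLM wrapper from langchain.llms.
    result = llm.generate(["Tell me a joke.", "Tell me a fact."])

    # LLMResult.generations holds one list of Generation objects per input prompt.
    for generations in result.generations:
        print(generations[0].text)

    # Token-counting helper from the shared base class.
    print(llm.get_num_tokens("How many tokens is this sentence?"))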
pydantic model langchain.llms.SagemakerEndpoint[source]
Wrapper around custom Sagemaker Inference Endpoints.

To use, you must supply the endpoint name from your deployed Sagemaker model and the region where it is deployed.

To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html

If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html

Validators
    set_callback_manager » callback_manager
    set_verbose » verbose
    validate_environment » all fields

field content_handler: langchain.llms.sagemaker_endpoint.ContentHandlerBase [Required]
    The content handler class that provides the input and output transform functions to handle formats between the LLM and the endpoint.

field credentials_profile_name: Optional[str] = None
    The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html

field endpoint_kwargs: Optional[Dict] = None
    Optional attributes passed to the invoke_endpoint function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html

field endpoint_name: str = ''
    The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region.

field model_kwargs: Optional[Dict] = None
    Key word arguments to pass to the model.

field region_name: str = ''
    The aws region where the Sagemaker model is deployed, e.g. us-west-2.
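A minimal sketch of wiring these fields together, assuming a JSON-in/JSON-out endpoint; the handler subclass, the endpoint name, and the JSON keys used here are illustrative placeholders, since the request and response format depends entirely on your deployed model:

    import json
    from langchain.llms import SagemakerEndpoint
    from langchain.llms.sagemaker_endpoint import ContentHandlerBase

    class ContentHandler(ContentHandlerBase):
        # Hypothetical handler for an endpoint that accepts and returns JSON.
        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
            # Request body shape is model-specific; this is only an example.
            return json.dumps({"inputs": prompt, **model_kwargs}).encode("utf-8")

        def transform_output(self, output) -> str:
            # `output` is the raw response body returned by invoke_endpoint.
            response = json.loads(output.read().decode("utf-8"))
            return response[0]["generated_text"]

    llm = SagemakerEndpoint(
        endpoint_name="my-endpoint-name",        # placeholder endpoint name
        region_name="us-west-2",
        credentials_profile_name="default",      # optional, see field above
        model_kwargs={"temperature": 1e-10},
        content_handler=ContentHandler(),
    )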
__call__(prompt: str, stop: Optional[List[str]] = None) → str
    Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters
        include – fields to include in new model
        exclude – fields to exclude from new model, as with values this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance

dict(**kwargs: Any) → Dict
    Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int
    Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int
    Get the number of tokens in the message.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

save(file_path: Union[pathlib.Path, str]) → None
    Save the LLM.
    Parameters
        file_path – Path to file to save the LLM to.
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.SelfHostedHuggingFaceLLM[source]
Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.

Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed. Only supports text-generation and text2text-generation for now.

Example using from_model_id:

    from langchain.llms import SelfHostedHuggingFaceLLM
    import runhouse as rh

    gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
    hf = SelfHostedHuggingFaceLLM(
        model_id="google/flan-t5-large",
        task="text2text-generation",
        hardware=gpu
    )

Example passing a fn that generates a pipeline (because the pipeline is not serializable):

    from langchain.llms import SelfHostedHuggingFaceLLM
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
    import runhouse as rh

    def get_pipeline():
        model_id = "gpt2"
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id)
        pipe = pipeline(
            "text-generation", model=model, tokenizer=tokenizer
        )
        return pipe

    hf = SelfHostedHuggingFaceLLM(
        model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu)

Validators
    set_callback_manager » callback_manager
    set_verbose » verbose

field device: int = 0
    Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc.

field hardware: Any = None
    Remote hardware to send the inference function to.

field inference_fn: Callable = <function _generate_text>
    Inference function to send to the remote hardware.

field load_fn_kwargs: Optional[dict] = None
    Key word arguments to pass to the model load function.
field model_id: str = 'gpt2'
    Hugging Face model_id to load the model.

field model_kwargs: Optional[dict] = None
    Key word arguments to pass to the model.

field model_load_fn: Callable = <function _load_transformer>
    Function to load the model remotely on the server.

field model_reqs: List[str] = ['./', 'transformers', 'torch']
    Requirements to install on hardware to inference the model.

field task: str = 'text-generation'
    Hugging Face task (either "text-generation" or "text2text-generation").

__call__(prompt: str, stop: Optional[List[str]] = None) → str
    Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters
        include – fields to include in new model
        exclude – fields to exclude from new model, as with values this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance

dict(**kwargs: Any) → Dict
    Return a dictionary of the LLM.

classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain.llms.base.LLM
    Init the SelfHostedPipeline from a pipeline object or string.

generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) → int
    Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int
    Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

save(file_path: Union[pathlib.Path, str]) → None
    Save the LLM.
    Parameters
        file_path – Path to file to save the LLM to.
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
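Once constructed as in the examples above, the wrapper is invoked like any other LLM; a brief sketch with an arbitrary prompt (hf refers to the instance built in the from_model_id example):

    # __call__ ships the prompt to the remote hardware and returns the text.
    print(hf("What is the translation of 'hello' into German?"))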
pydantic model langchain.llms.SelfHostedPipeline[source]
Run model inference on self-hosted remote hardware.

Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.).

To use, you should have the runhouse python package installed.

Example for custom pipeline and inference functions:

    from langchain.llms import SelfHostedPipeline
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
    import runhouse as rh

    def load_pipeline():
        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        return pipeline(
            "text-generation", model=model, tokenizer=tokenizer,
            max_new_tokens=10
        )

    def inference_fn(pipeline, prompt, stop=None):
        return pipeline(prompt)[0]["generated_text"]

    gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
    llm = SelfHostedPipeline(
        model_load_fn=load_pipeline,
        hardware=gpu,
        model_reqs=model_reqs,
        inference_fn=inference_fn
    )

Example for <2GB model (can be serialized and sent directly to the server):

    from langchain.llms import SelfHostedPipeline
    import runhouse as rh

    gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
    my_model = ...
    llm = SelfHostedPipeline.from_pipeline(
        pipeline=my_model,
        hardware=gpu,
        model_reqs=["./", "torch", "transformers"],
    )

Example passing model path for larger models:

    from langchain.llms import SelfHostedPipeline
    import runhouse as rh
    import pickle
    from transformers import pipeline

    generator = pipeline(model="gpt2")
    rh.blob(pickle.dumps(generator), path="models/pipeline.pkl"
            ).save().to(gpu, path="models")
    llm = SelfHostedPipeline.from_pipeline(
        pipeline="models/pipeline.pkl",
        hardware=gpu,
        model_reqs=["./", "torch", "transformers"],
    )

Validators
    set_callback_manager » callback_manager
    set_verbose » verbose
field hardware: Any = None
    Remote hardware to send the inference function to.

field inference_fn: Callable = <function _generate_text>
    Inference function to send to the remote hardware.

field load_fn_kwargs: Optional[dict] = None
    Key word arguments to pass to the model load function.

field model_load_fn: Callable [Required]
    Function to load the model remotely on the server.

field model_reqs: List[str] = ['./', 'torch']
    Requirements to install on hardware to inference the model.

__call__(prompt: str, stop: Optional[List[str]] = None) → str
    Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters
        include – fields to include in new model
        exclude – fields to exclude from new model, as with values this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance

dict(**kwargs: Any) → Dict
    Return a dictionary of the LLM.

classmethod from_pipeline(pipeline: Any, hardware: Any, model_reqs: Optional[List[str]] = None, device: int = 0, **kwargs: Any) → langchain.llms.base.LLM[source]
    Init the SelfHostedPipeline from a pipeline object or string.

generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) → int
    Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int
    Get the number of tokens in the message.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

save(file_path: Union[pathlib.Path, str]) → None
    Save the LLM.
    Parameters
        file_path – Path to file to save the LLM to.
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.StochasticAI[source]
Wrapper around StochasticAI large language models.

To use, you should have the environment variable STOCHASTICAI_API_KEY set with your API key.

Example:

    from langchain.llms import StochasticAI
    stochasticai = StochasticAI(api_url="")

Validators
    build_extra » all fields
    set_callback_manager » callback_manager
    set_verbose » verbose
    validate_environment » all fields

field api_url: str = ''
    The API URL to use.

field model_kwargs: Dict[str, Any] [Optional]
    Holds any model parameters valid for create call not explicitly specified.
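A slightly fuller sketch than the Example above, assuming STOCHASTICAI_API_KEY is set; the api_url value and the model_kwargs entries are illustrative placeholders:

    import os
    from langchain.llms import StochasticAI

    # Assumes STOCHASTICAI_API_KEY is already set; shown here only for clarity.
    os.environ["STOCHASTICAI_API_KEY"] = "<your-api-key>"

    llm = StochasticAI(
        api_url="<your-deployment-api-url>",   # placeholder deployment URL
        model_kwargs={"max_tokens": 64},       # passed through on the create call
    )
    print(llm("Write a haiku about inference latency."))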
__call__(prompt: str, stop: Optional[List[str]] = None) → str
    Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters
        include – fields to include in new model
        exclude – fields to exclude from new model, as with values this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance

dict(**kwargs: Any) → Dict
    Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) → int
    Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int
    Get the number of tokens in the message.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

save(file_path: Union[pathlib.Path, str]) → None
    Save the LLM.
    Parameters
        file_path – Path to file to save the LLM to.
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
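The save method above writes the wrapper's configured parameters to a YAML or JSON file (chosen by the file suffix); a minimal round-trip sketch, assuming the companion load_llm helper from langchain.llms.loading and an already-constructed llm instance:

    from langchain.llms.loading import load_llm

    # Persist the wrapper's parameters to disk.
    llm.save(file_path="my_llm.yaml")

    # Restore it later from the same file.
    llm = load_llm("my_llm.yaml")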
pydantic model langchain.llms.Writer[source]
Wrapper around Writer large language models.

To use, you should have the environment variable WRITER_API_KEY set with your API key.

Example:

    from langchain import Writer
    writer = Writer(model_id="palmyra-base")

Validators
    set_callback_manager » callback_manager
    set_verbose » verbose
    validate_environment » all fields

field base_url: Optional[str] = None
    Base url to use, if None decides based on model name.

field beam_search_diversity_rate: float = 1.0
    Only applies to beam search, i.e. when the beam width is >1. A higher value encourages beam search to return a more diverse set of candidates.

field beam_width: Optional[int] = None
    The number of concurrent candidates to keep track of during beam search.

field length: int = 256
    The maximum number of tokens to generate in the completion.

field length_pentaly: float = 1.0
    Only applies to beam search, i.e. when the beam width is >1. Larger values penalize long candidates more heavily, thus preferring shorter candidates.

field logprobs: bool = False
    Whether to return log probabilities.

field model_id: str = 'palmyra-base'
    Model name to use.

field random_seed: int = 0
    The model generates random results. Changing the random seed alone will produce a different response with similar characteristics. It is possible to reproduce results by fixing the random seed (assuming all other hyperparameters are also fixed).

field repetition_penalty: float = 1.0
    Penalizes repeated tokens according to frequency.
field stop: Optional[List[str]] = None
    Sequences when completion generation will stop.

field temperature: float = 1.0
    What sampling temperature to use.

field tokens_to_generate: int = 24
    Max number of tokens to generate.

field top_k: int = 1
    The number of highest probability vocabulary tokens to keep for top-k-filtering.

field top_p: float = 1.0
    Total probability mass of tokens to consider at each step.

__call__(prompt: str, stop: Optional[List[str]] = None) → str
    Check Cache and run the LLM on the given prompt and input.

async agenerate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters
        include – fields to include in new model
        exclude – fields to exclude from new model, as with values this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data
        deep – set to True to make a deep copy of the model
    Returns
        new model instance

dict(**kwargs: Any) → Dict
    Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) → int
    Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int
    Get the number of tokens in the message.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
save(file_path: Union[pathlib.Path, str]) → None
    Save the LLM.
    Parameters
        file_path – Path to file to save the LLM to.
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
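A brief usage sketch combining several of the Writer fields documented above, assuming WRITER_API_KEY is set; the parameter values and the prompt are illustrative only:

    import os
    from langchain.llms import Writer

    # Assumes WRITER_API_KEY is already set; shown here only for clarity.
    os.environ["WRITER_API_KEY"] = "<your-api-key>"

    writer = Writer(
        model_id="palmyra-base",
        temperature=0.7,           # sampling temperature
        tokens_to_generate=48,     # max number of tokens to generate
        stop=["\n\n"],             # stop sequences for completion generation
    )
    print(writer("Suggest a one-line tagline for a note-taking app."))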