yipao.LLM.vertex.VertexAiLLM

class yipao.LLM.vertex.VertexAiLLM(model_name: str, credential_path: str, project: str, config: Dict[str, Any], safety_settings: Dict[str, Any], **kwargs)

A specialized class derived from CustomLLM for interfacing with Google’s Vertex AI generative models. It manages authentication, model initialization, and interaction with the Vertex AI platform.

model

An instance of the generative model from Vertex AI.

Type:

GenerativeModel

config

Configuration settings for model generation.

Type:

Dict[str, Any]

safety_settings

Settings to ensure safe content generation.

Type:

Dict[str, Any]

total

Tracks the total input and output tokens processed.

Type:

Dict[str, int]

__init__(model_name: str, credential_path: str, project: str, config: Dict[str, Any], safety_settings: Dict[str, Any], **kwargs) → None

Initializes the Vertex AI LLM model with necessary configurations and credentials.

Parameters:
  • model_name (str) – The name of the model to be used.

  • credential_path (str) – Path to the service account JSON file.

  • project (str) – Google Cloud project identifier.

  • config (Dict[str, Any]) – Additional configuration parameters for the model.

  • safety_settings (Dict[str, Any]) – Safety settings to control content generation.

  • kwargs – Additional keyword arguments passed to vertexai.init.
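
A minimal instantiation sketch. The config keys (temperature, max_output_tokens) and the empty safety_settings dict are assumptions about typical Vertex AI generation settings, and the model name, credential path, and project ID are placeholders rather than values documented here:

    from yipao.LLM.vertex import VertexAiLLM

    llm = VertexAiLLM(
        model_name="gemini-1.0-pro",             # placeholder model name
        credential_path="service_account.json",  # path to a service account JSON file
        project="my-gcp-project",                # placeholder Google Cloud project ID
        config={
            "temperature": 0.2,                  # assumed generation parameters
            "max_output_tokens": 1024,
        },
        safety_settings={},                      # empty: rely on platform defaults
    )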

Methods

__init__(model_name, credential_path, ...)

Initializes the Vertex AI LLM model with necessary configurations and credentials.

count_tokens(prompt)

Counts the number of tokens in a prompt.

invoke(prompt)

Generates content based on the input prompt while tracking token usage.

monitor()

Provides the total count of input and output tokens processed.

validate_prompt(prompt)

Validates the prompt to ensure it is not empty.
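
A hedged usage sketch for the methods above, reusing the llm instance from the instantiation example. The prompt string is illustrative, validate_prompt’s exact failure mode (return value vs. exception) is not specified here, and monitor() is assumed to return the input/output token totals described by the total attribute:

    prompt = "List the tables in the sales schema."  # illustrative prompt

    llm.validate_prompt(prompt)           # ensures the prompt is not empty
    n_tokens = llm.count_tokens(prompt)   # number of tokens in the prompt
    answer = llm.invoke(prompt)           # generate content; token usage is tracked internally
    usage = llm.monitor()                 # total input and output tokens processed so far

    print(n_tokens, usage)
    print(answer)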