πŸ“„ Llama 3 70B Instruct

engines.cross_provider.llama_3_70b_instruct

CrossProviderInferenceEngine(
    model="meta-llama/llama-3-70b-instruct",
    provider="watsonx",
    max_tokens=2048,
    seed=42,
)
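
A minimal usage sketch of this catalog entry, assuming unitxt's fetch_artifact loader and the engine's infer() method; the single-record dataset shape is illustrative:

from unitxt.artifact import fetch_artifact

# Resolve the engine from the catalog; fetch_artifact returns the artifact
# together with the catalog it was found in.
engine, _ = fetch_artifact("engines.cross_provider.llama_3_70b_instruct")

# infer() takes a list of instances; "source" carries the prompt text.
predictions = engine.infer([{"source": "What is the capital of France?"}])
print(predictions[0])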

Explanation about CrossProviderInferenceEngine

Inference engine capable of dynamically switching between multiple provider APIs.

This class extends InferenceEngine and OpenAiInferenceEngineParamsMixin to enable seamless integration with various API providers. The supported APIs are listed in _supported_apis, allowing users to interact with models from different sources through a single interface. The provider_model_map dictionary maps each API to specific model identifiers, enabling automatic configuration based on the requested provider.
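
To make that mapping concrete, here is a small conceptual sketch; the map contents and the resolve helper are hypothetical illustrations, not the library's actual internals:

# Hypothetical provider/model map: each provider maps a generic model id
# to its backend-specific identifier.
provider_model_map = {
    "watsonx": {"llama-3-70b-instruct": "watsonx/meta-llama/llama-3-70b-instruct"},
    "together-ai": {"llama-3-70b-instruct": "together_ai/meta-llama/Llama-3-70b-chat-hf"},
}

def resolve(provider: str, model: str) -> str:
    # Fall back to the generic id when the provider has no explicit entry.
    return provider_model_map.get(provider, {}).get(model, model)

print(resolve("watsonx", "llama-3-70b-instruct"))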

Current _supported_apis = ["watsonx", "together-ai", "open-ai", "aws", "ollama", "bam", "watsonx-sdk", "rits", "vertex-ai"]

Args:

provider (Optional[str]):

Specifies the current API in use. Must be one of the literals in _supported_apis.

provider_model_map (Dict[_supported_apis, Dict[str, str]]):

Maps each supported API to a corresponding model identifier string. This mapping allows consistent access to models across different API backends.

provider_specific_args (Optional[Dict[str, Dict[str, str]]]):

Arguments specific to a single provider, for example provider_specific_args={"watsonx": {"max_requests_per_second": 4}}.
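
Putting the arguments together, a sketch of direct instantiation (assuming the unitxt.inference import path): the same model id can be routed to a different backend by changing provider, and provider_specific_args passes backend-only options through:

from unitxt.inference import CrossProviderInferenceEngine

engine = CrossProviderInferenceEngine(
    model="meta-llama/llama-3-70b-instruct",
    provider="watsonx",  # switch backends by changing this literal
    max_tokens=2048,
    provider_specific_args={"watsonx": {"max_requests_per_second": 4}},
)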

Read more about catalog usage here.