Llama 3 8B Instruct
engines.cross_provider.llama_3_8b_instruct

CrossProviderInferenceEngine(
    model="meta-llama/llama-3-8b-instruct",
    provider="watsonx",
    max_tokens=2048,
    seed=42,
)
Explanation about CrossProviderInferenceEngine
Inference engine capable of dynamically switching between multiple provider APIs.
This class extends InferenceEngine and OpenAiInferenceEngineParamsMixin to enable seamless integration with various API providers. The supported APIs are specified in _supported_apis, allowing users to interact with multiple models from different sources. The provider_model_map dictionary maps each API to specific model identifiers, enabling automatic configuration based on user requests.

Current _supported_apis = ["watsonx", "together-ai", "open-ai", "aws", "ollama", "bam", "watsonx-sdk", "rits", "vertex-ai"]
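To illustrate the idea, here is a minimal, self-contained sketch of how a provider_model_map lookup can translate a generic model name into a provider-specific identifier. This is plain Python mimicking the described behavior, not Unitxt's actual implementation; the mapped identifier strings are illustrative assumptions.

```python
# Sketch of dynamic provider switching via a provider_model_map.
# The mapping values below are illustrative, not Unitxt's real catalog data.
from typing import Dict

_supported_apis = ["watsonx", "together-ai", "open-ai", "aws", "ollama",
                   "bam", "watsonx-sdk", "rits", "vertex-ai"]

# Hypothetical mapping: generic model name -> provider-specific identifier.
provider_model_map: Dict[str, Dict[str, str]] = {
    "watsonx": {"llama-3-8b-instruct": "watsonx/meta-llama/llama-3-8b-instruct"},
    "together-ai": {"llama-3-8b-instruct": "together_ai/meta-llama/Llama-3-8b-chat-hf"},
}

def resolve_model(provider: str, model: str) -> str:
    """Return the provider-specific identifier for a generic model name."""
    if provider not in _supported_apis:
        raise ValueError(f"Unsupported provider: {provider}")
    # Fall back to the generic name if the provider has no explicit mapping.
    return provider_model_map.get(provider, {}).get(model, model)

print(resolve_model("watsonx", "llama-3-8b-instruct"))
```

Switching providers then only requires changing the provider argument; the model lookup adapts automatically.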
- Args:
- provider (Optional):
Specifies the current API in use. Must be one of the literals in _supported_apis.
- provider_model_map (Dict[_supported_apis, Dict[str, str]]):
Mapping of each supported API to a corresponding model identifier string. This mapping allows consistent access to models across different API backends.
- provider_specific_args (Optional[Dict[str, Dict[str, str]]]):
Args specific to a provider, for example provider_specific_args={"watsonx": {"max_requests_per_second": 4}}
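As a sketch of how per-provider settings like those above could be applied, the following self-contained helper overlays the entry for the active provider onto base request kwargs. The build_request_kwargs helper is hypothetical, written only to illustrate the merge semantics; it is not the library's API.

```python
# Sketch: merge provider-specific settings into base request kwargs.
# build_request_kwargs is a hypothetical helper for illustration only.
from typing import Any, Dict, Optional

def build_request_kwargs(
    provider: str,
    base_kwargs: Dict[str, Any],
    provider_specific_args: Optional[Dict[str, Dict[str, Any]]] = None,
) -> Dict[str, Any]:
    """Overlay the active provider's args onto the base kwargs."""
    merged = dict(base_kwargs)
    if provider_specific_args:
        merged.update(provider_specific_args.get(provider, {}))
    return merged

kwargs = build_request_kwargs(
    "watsonx",
    {"max_tokens": 2048, "seed": 42},
    provider_specific_args={"watsonx": {"max_requests_per_second": 4}},
)
print(kwargs)
```

Settings under a different provider's key are simply ignored for the active provider, so one dict can carry tuning for every backend at once.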
Read more about catalog usage here.