Omni-OpenAI
OpenAI provider adapters for the omni-* ecosystem, wrapping the official openai-go SDK.
Overview
Omni-OpenAI provides unified adapters for integrating OpenAI's APIs with the omni-* ecosystem:
- OmniLLM - Chat completions provider for omnillm-core
- OmniVoice - STT/TTS providers for omnivoice-core
Features
OmniLLM Provider
- Chat completions (GPT-4, GPT-4o, GPT-3.5 Turbo, etc.)
- Streaming responses
- Tool/function calling
- Vision (image inputs)
- JSON mode
- Auto-registration with omnillm-core registry
OmniVoice Providers
- STT (Speech-to-Text): Whisper transcription with word and segment timestamps
- TTS (Text-to-Speech): Audio synthesis with 13 voices
Installation
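Assuming the module path that appears in the Quick Start imports (github.com/plexusone/omni-openai), installation should be a standard go get; verify the path against the repository:

```shell
# Fetch the module and add it to go.mod (path inferred from the import below)
go get github.com/plexusone/omni-openai
```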
Quick Start
import (
	"context"
	"log"
	"os"

	core "github.com/plexusone/omnillm-core"
	_ "github.com/plexusone/omni-openai/omnillm" // blank import registers the "openai" provider
)

provider, err := core.NewProvider("openai", core.ProviderConfig{
	APIKey: os.Getenv("OPENAI_API_KEY"),
})
if err != nil {
	log.Fatal(err)
}
resp, err := provider.CreateChatCompletion(context.Background(), &core.ChatCompletionRequest{
	Model:    "gpt-4o",
	Messages: []core.Message{{Role: core.RoleUser, Content: "Hello!"}},
})
if err != nil {
	log.Fatal(err)
}
Configuration
Set the OPENAI_API_KEY environment variable:
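For example, in a POSIX shell (the key shown is a placeholder):

```shell
# Export the API key for the current shell session
export OPENAI_API_KEY="sk-..."
```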
Package Structure
omni-openai/
├── openai.go # Direct OpenAI client (STT/TTS)
├── omnillm/ # OmniLLM provider adapter
│ ├── adapter.go
│ └── doc.go
└── omnivoice/ # OmniVoice provider adapters
├── stt.go
└── tts.go
License
MIT License