Getting Started
Agnitra delivers an end-to-end optimization platform that pairs model tuning, telemetry, and usage-based billing in a single developer flow. Use the SDK or CLI to profile workloads, generate optimized TorchScript artifacts, and push structured usage records into your control plane or marketplace integrations. This quickstart walks through installing the SDK, running the CLI, and inspecting optimization telemetry.

1. Install the SDK
Python (PyPI)
Install from PyPI (recommended). Optional extras:

- `agnitra[openai]` – OpenAI Responses API client bindings.
- `agnitra[rl]` – Stable Baselines3 + Gymnasium reinforcement learning add-ons.
- `agnitra[nvml]` – GPU telemetry via NVIDIA NVML.
- `agnitra[marketplace]` – Cloud marketplace adapters (`boto3`, `httpx`, `google-auth`).
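The base package name `agnitra` follows from the extras syntax above; a typical install, assuming standard pip extras handling, looks like:

```shell
# Core SDK
pip install agnitra

# With optional extras (quoted so the shell does not expand the brackets)
pip install "agnitra[openai,nvml]"
```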
JavaScript / TypeScript (npm)
Install the JavaScript SDK from npm.

2. Optimize a Model
Run the optimizer from the CLI and pass `--output` to control the destination path.
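A sketch of the CLI step; the `agnitra optimize` subcommand and positional argument shown here are assumptions, while `--output` comes from the text above:

```shell
# Hypothetical invocation: profile and optimize a TorchScript model
agnitra optimize model.pt --output model.optimized.pt
```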
From Python:
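A minimal sketch of the equivalent Python call; `agnitra.optimize` is an assumed entry point, not a confirmed API:

```python
import torch
import agnitra  # assumed top-level module name

model = torch.jit.load("model.pt")               # existing TorchScript artifact
optimized = agnitra.optimize(model)              # hypothetical optimization entry point
torch.jit.save(optimized, "model.optimized.pt")  # write the optimized artifact
```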
3. Explore the API Surface
- Launch the Starlette service: `agnitra-api --host 127.0.0.1 --port 8080`
- POST graph and telemetry payloads to `/optimize` for automatic kernel suggestions.
- Forward usage events to `/usage` to dispatch marketplace billing records.
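Both endpoints accept JSON over HTTP once the service is running. A minimal stdlib sketch of a client, assuming a JSON content type; the payload field names (`graph`, `telemetry`, `event`, `units`) are illustrative assumptions, not a documented schema:

```python
import json
from urllib import request

API = "http://127.0.0.1:8080"  # host/port from the launch command above

def post(path: str, payload: dict) -> dict:
    """POST a JSON payload to the local Agnitra API and return the decoded reply."""
    req = request.Request(
        API + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Field names below are illustrative; consult your deployment's schema.
optimize_payload = {
    "graph": {"nodes": [], "edges": []},              # exported model graph
    "telemetry": {"latency_ms": 12.4, "gpu_util": 0.61},
}
usage_payload = {
    "event": "inference",
    "units": 1000,                                    # billable unit count
    "timestamp": "2024-01-01T00:00:00Z",
}

# post("/optimize", optimize_payload)  # requires a running agnitra-api instance
# post("/usage", usage_payload)
```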
4. Next Steps
- Review the CLI and SDK guide for advanced flags, licensing, and offline mode.
- Plug telemetry into finance tooling via the Marketplace & Billing guide.
- Browse the Runtime Configuration reference to tailor Agnitra for your environment.