Models

The following models are available through the Oraicle API:

meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8

🖼️ Images 🌐 Multilingual 🔥 Hot
512,000 tokens

A 17B-parameter Llama 4 MoE model, natively multimodal, 512K context, instruction-tuned, strong coding, multilingual, and image+text reasoning. 🦙

deepseek-ai/DeepSeek-R1-0528

🧠 Reasoning 🔥 Hot 🌎 Global
64,000 tokens

Advanced open-source reasoning-optimized LLM (R1-0528) with deep long chain-of-thought reasoning and high accuracy on complex tasks. 🧠

meta-llama/Llama-3.3-70B-Instruct

🔥 Hot 🌎 Global 📊 Premium
128,000 tokens

70B-parameter dense transformer, 128K context, instruction-tuned, strong reasoning, math, coding, and multilingual. 🌐

mistralai/Magistral-Small-2506

🧠 Reasoning 🌐 Multilingual ⚡ Efficient
128,000 tokens

24B-parameter instruction-tuned model, 128K context, step-by-step reasoning, high multilingual capability. 🧑‍⚖️

mistralai/Devstral-Small-2505

👩‍💻 Code 🛠️ Tools ⚡ Efficient
131,000 tokens

24B-parameter coding/agentic LLM, 131K context, optimized for software engineering and tool use. 🛠️

mistralai/Mistral-Large-Instruct-2411

🧠 Reasoning 🔥 Hot 🌎 Global
128,000 tokens

123B-parameter dense transformer, 128K context, SOTA reasoning, coding, and knowledge. 💎

Qwen/Qwen2.5-VL-32B-Instruct

🖼️ Images 🧮 Math 🌏 Asia
8,000 tokens

32B-parameter vision-language model, 8K context, RL-tuned, math/structured output, bilingual. 🖼️

meta-llama/Llama-3.2-90B-Vision-Instruct

🖼️ Images 🌐 Multilingual 🔥 Hot
128,000 tokens

90B-parameter multimodal, 128K context, vision+text, SOTA image reasoning, multilingual. 🦙

openai/gpt-oss-120b

👩‍💻 Code ⚡ Efficient 🔥 Hot
128,000 tokens

OpenAI's open-weight 117B-parameter Mixture-of-Experts model. 💥

openai/gpt-oss-20b

💰 Affordable ⚡ Efficient 🔥 Hot
128,000 tokens

OpenAI's compact 20B-parameter open-weight model, offering strong language capability with a lighter footprint. ⚡️
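Any of the model IDs above can be passed as the `model` field of a request. As a minimal sketch, assuming the Oraicle API accepts OpenAI-style chat-completion payloads (the base URL, endpoint path, and auth header here are placeholders, not documented values):

```python
import json

# Assumptions: placeholder base URL and key; consult the Oraicle docs for real values.
ORAICLE_BASE_URL = "https://api.oraicle.example/v1"  # hypothetical
API_KEY = "YOUR_API_KEY"


def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completion payload (assumed request format)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


payload = build_chat_request(
    "meta-llama/Llama-3.3-70B-Instruct",
    "Summarize the difference between dense and MoE transformers.",
)
print(json.dumps(payload, indent=2))

# Sending it would look roughly like this (requires the `requests` package):
# import requests
# resp = requests.post(
#     f"{ORAICLE_BASE_URL}/chat/completions",
#     headers={"Authorization": f"Bearer {API_KEY}"},
#     json=payload,
#     timeout=30,
# )
# print(resp.json()["choices"][0]["message"]["content"])
```

Note that the usable prompt length is bounded by each model's context window listed above (e.g. 128,000 tokens for Llama-3.3-70B-Instruct, but only 8,000 for Qwen2.5-VL-32B-Instruct).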