xAI / Grok
Chat with Grok models via the xAI API. Supports Grok-3, Grok-3-mini, vision, and more.
Category: development Source: xai-org/grok
What Is This?
Overview
xAI Grok provides access to the Grok language models developed by xAI through a simple API integration. It covers chat completions using the Grok-3 and Grok-3-mini variants for text-based queries; vision capabilities that process images alongside text prompts for multimodal understanding; conversational context management that maintains message history across turns; streaming responses that deliver tokens incrementally for real-time feedback; and custom system prompts that configure model behavior and personality for specific use cases. The skill helps developers integrate Grok's distinctive personality and knowledge into applications, making it straightforward to swap in or supplement existing model integrations without restructuring core application logic.
Who Should Use This
This skill serves developers building AI applications needing Grok's unique capabilities, teams evaluating multiple language model providers for feature comparison, and researchers experimenting with diverse model architectures and training approaches. It is also well-suited for engineers building resilient systems that require multiple model provider options to ensure continuity and flexibility.
Why Use It?
Problems It Solves
Applications locked into a single model provider face vendor dependency and cannot adapt when better models emerge. Building custom integrations for each language model API means duplicating authentication, error handling, and response parsing logic across projects. Grok models, with their distinctive training data and architecture, may also perform better in specific domains. Finally, applications need fallback options when primary model providers experience downtime or rate limiting.
Core Highlights
Chat completion engine generates responses using Grok-3 and Grok-3-mini models. Vision processor handles image inputs alongside text for multimodal queries. Context manager maintains conversation history across multiple turns. Streaming handler delivers tokens incrementally for responsive user experiences.
How to Use It?
Basic Usage
import os
import requests

api_key = os.environ['XAI_API_KEY']
response = requests.post(
    'https://api.x.ai/v1/chat/completions',
    headers={
        'Authorization': f'Bearer {api_key}',
        'Content-Type': 'application/json'
    },
    json={
        'model': 'grok-3',
        'messages': [
            {'role': 'user', 'content': 'Explain quantum computing'}
        ]
    }
)
result = response.json()
print(result['choices'][0]['message']['content'])
Real-World Examples
import base64

# Encode a local image as a base64 data URL for the vision request
with open('diagram.png', 'rb') as f:
    img_b64 = base64.b64encode(f.read()).decode()

vision_response = requests.post(
    'https://api.x.ai/v1/chat/completions',
    headers={'Authorization': f'Bearer {api_key}'},
    json={
        'model': 'grok-3',
        'messages': [
            {
                'role': 'user',
                'content': [
                    {'type': 'text', 'text': 'Describe this diagram'},
                    {'type': 'image_url', 'image_url': {'url': f'data:image/png;base64,{img_b64}'}}
                ]
            }
        ]
    }
)
print(vision_response.json()['choices'][0]['message']['content'])
# Stream a completion token-by-token instead of waiting for the full response
stream_resp = requests.post(
    'https://api.x.ai/v1/chat/completions',
    headers={'Authorization': f'Bearer {api_key}'},
    json={'model': 'grok-3-mini', 'messages': [{'role': 'user', 'content': 'Write a poem'}], 'stream': True},
    stream=True
)
for line in stream_resp.iter_lines():
    if line:
        print(line.decode())
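The loop above prints raw server-sent-event lines. Assuming xAI uses the OpenAI-compatible streaming format (each event line is `data: <json>` and the stream closes with `data: [DONE]`), a small helper can pull out just the text deltas. The `extract_token` name is illustrative, not part of any SDK:

```python
import json

def extract_token(raw_line):
    """Parse one decoded SSE line from a streamed chat completion.

    Returns the text delta for content chunks, or None for
    blank lines, the '[DONE]' sentinel, and role-only deltas.
    Assumes the OpenAI-style 'data: <json>' event format.
    """
    if not raw_line.startswith('data: '):
        return None
    payload = raw_line[len('data: '):]
    if payload == '[DONE]':
        return None
    delta = json.loads(payload)['choices'][0]['delta']
    return delta.get('content')
```

In the streaming loop, `token = extract_token(line.decode())` followed by `print(token, end='')` when `token` is not None yields incremental output suitable for a live UI.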
Advanced Tips
Use Grok-3-mini for faster responses and lower costs when complex reasoning is not required, such as simple classification tasks or short-form content generation. Set custom system prompts to align model behavior with your application's tone and domain requirements. Implement model fallback logic that switches to Grok when primary providers are unavailable or rate limited, helping maintain uninterrupted service for end users.
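The system-prompt and fallback tips can be sketched as follows. A system prompt is simply the first message in the conversation, and fallback routing treats each provider as a callable; the `ask_with_fallback` helper and the AcmeCo prompt are illustrative assumptions, not part of any SDK:

```python
# A custom system prompt is just the first message in the list
# passed as 'messages' in the request body:
messages = [
    {'role': 'system', 'content': 'You are a terse support agent for AcmeCo.'},
    {'role': 'user', 'content': 'How do I reset my password?'},
]

def ask_with_fallback(primary, fallback, prompt):
    """Route a prompt to the primary provider; on any failure
    (rate limit, timeout, outage) retry once against the fallback.

    Both arguments are callables that take a prompt string and
    return completion text -- e.g. thin wrappers around the
    requests.post calls shown in Basic Usage.
    """
    try:
        return primary(prompt)
    except Exception:
        return fallback(prompt)
```

In practice the fallback callable would wrap the Grok endpoint, so traffic shifts to Grok whenever the primary provider raises.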
When to Use It?
Use Cases
Build a chatbot that uses Grok for queries requiring up-to-date information and distinctive personality. Create a multi-model application that routes requests to the best model based on query type and complexity. Add image understanding to document processing workflows by sending diagrams and charts to Grok vision endpoints.
Related Topics
Language model APIs, AI chat completions, multimodal AI, vision language models, model orchestration, and API integration.
Important Notes
Requirements
A valid xAI API key configured in the XAI_API_KEY environment variable for authenticating requests. Python with the requests library or equivalent HTTP client for making API calls. Network access to xAI API endpoints for sending prompts and receiving completions.
Usage Recommendations
Do: use Grok-3-mini for latency-sensitive applications where response speed matters more than maximum capability. Implement proper error handling for API rate limits and transient network failures, including retry logic with exponential backoff for production reliability. Log model responses for debugging and quality monitoring in production deployments.
Don't: send sensitive personal information in prompts without reviewing xAI's data handling policies. Assume Grok models will always outperform other providers since optimal model choice varies by task and context. Skip input validation on user-provided content before forwarding to the API.
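The retry-with-exponential-backoff recommendation can be sketched as a generic wrapper; `with_backoff` is an illustrative helper, and the delay constants are assumptions to tune for your rate limits:

```python
import random
import time

def with_backoff(call, max_retries=4, base_delay=1.0):
    """Retry a zero-argument callable on failure, doubling the
    delay each attempt and adding jitter to avoid thundering herds.

    `call` should raise on rate limits or transient network errors,
    e.g. a lambda that issues the request and calls
    response.raise_for_status(). The last failure is re-raised.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

For example, `with_backoff(lambda: send_chat(prompt))` retries a flaky request up to three times before surfacing the error to the caller.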
Limitations
API rate limits restrict request volume on free and lower tier plans. Model availability and capabilities may change as xAI updates its offerings over time. Vision capabilities depend on image quality and may not work reliably on low-resolution or heavily compressed inputs.