The pipe-able DevOps assistant.
Que is a CLI utility designed to act as a filter in a Unix pipeline. It ingests stdin (logs, error tracebacks, config files), sanitizes the data for security, enriches it with local system context, and queries an LLM (ChatGPT or Claude) to determine the root cause and suggest a fix. It's ideal for analyzing errors on servers and in CI/CD environments where you don't have easy access to AI-powered editors.
Que itself runs entirely on your machine: it scrubs secrets (API keys, PII) using Gitleaks rules before the request ever leaves your host. It is stateless and does not store your logs.
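Because redaction happens before any network call, you can audit it yourself: the --dry-run and --verbose flags documented below print what would be sent without contacting a provider. For example:

# Show the redacted payload that would be sent, without making any API call
printf 'db_password=hunter2\n' | que --dry-run --verbose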
Universal installer (auto-detects platform):
curl -sSL https://raw.githubusercontent.com/njenia/que/main/install.sh | bash

Or download and install the latest release for your platform:
Linux (amd64):
curl -L https://github.com/njenia/que/releases/latest/download/que-linux-amd64.tar.gz | tar -xz && sudo mv que /usr/local/bin/

Linux (arm64):
curl -L https://github.com/njenia/que/releases/latest/download/que-linux-arm64.tar.gz | tar -xz && sudo mv que /usr/local/bin/

macOS (amd64):
curl -L https://github.com/njenia/que/releases/latest/download/que-darwin-amd64.tar.gz | tar -xz && sudo mv que /usr/local/bin/

macOS (arm64 / Apple Silicon):
curl -L https://github.com/njenia/que/releases/latest/download/que-darwin-arm64.tar.gz | tar -xz && sudo mv que /usr/local/bin/
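After any of the Unix installs above, a quick sanity check confirms the binary is on your PATH:

# Verify que is installed and reachable
command -v que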
Windows:

# Download and extract
curl -L https://github.com/njenia/que/releases/latest/download/que-windows-amd64.zip -o que.zip
Expand-Archive que.zip
# Move que.exe to a directory in your PATH

From source:

# Clone the repository
git clone https://github.com/njenia/que.git
cd que
# Build with Make (automatically detects version from git tags)
make build
# Or install directly
make install

Or install with go install:

go install github.com/njenia/que/cmd/que@latest

Note: When building locally, use make build to automatically set the version from git tags. Building with go build directly will show the version as "dev".
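make build presumably injects the tag via Go linker flags; if you want a plain go build with a real version string, the usual pattern is an -ldflags override. The main.version variable path below is an assumption about this repo's layout, so check the Makefile for the exact name:

go build -ldflags "-X main.version=$(git describe --tags --always)" -o que ./cmd/que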
First, configure your API keys:
export QUE_CHATGPT_API_KEY="your-openai-api-key"
export QUE_CLAUDE_API_KEY="your-anthropic-api-key"
export QUE_DEFAULT_PROVIDER="openai" # Optional, defaults to openai
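The keys are ordinary environment variables, so they can live in your shell profile or in your CI system's secret store instead of being exported by hand each session:

# Persist a key across sessions (bash shown; adjust for your shell)
echo 'export QUE_CHATGPT_API_KEY="your-openai-api-key"' >> ~/.bashrc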
Then use que to analyze logs:

# Defaults to ChatGPT
cat server.log | que
# With specific provider
tail -n 50 error.log | que --provider claude
# Verbose mode (show what data is being sent)
tail -n 50 error.log | que --provider claude --verbose

Flags:
- -p, --provider string: LLM provider to use (openai, claude)
- -m, --model string: Specific model override (e.g., gpt-4-turbo)
- -v, --verbose: Show what data is being sent (including redaction)
- -i, --interactive: Enter interactive mode for follow-up questions
- --no-context: Skip environment context gathering
- --dry-run: Perform redaction and context gathering but do not call the API
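Of these, only --model lacks an example below; a typical override (whether a given model name is available depends on your provider account) looks like:

# Pin a specific model for a single run
tail -n 100 error.log | que --provider openai --model gpt-4-turbo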
# Analyze error logs
cat error.log | que
# Use Claude with verbose output
tail -f app.log | que --provider claude --verbose
# Interactive mode - ask follow-up questions
cat error.log | que -i
# Dry run to see what would be sent
cat config.yaml | que --dry-run --verbose
# Skip context gathering
cat log.txt | que --no-context
# Interactive mode with specific provider
cat server.log | que --provider claude -i

Que is perfect for automated environments where you need AI-powered log analysis without interactive editors:
GitHub Actions:
- name: Analyze deployment errors
  if: failure()
  run: |
    cat deployment.log | que --no-context > analysis.txt
    cat analysis.txt

Note that the step needs your provider key in its environment (e.g., QUE_CHATGPT_API_KEY supplied from a repository secret).

Docker/Kubernetes:
# Analyze container logs
kubectl logs pod-name | que --no-context
# Analyze Docker logs
docker logs container-name 2>&1 | que

Server Monitoring:
# Analyze systemd service failures
journalctl -u my-service --since "1 hour ago" | que
# Analyze application errors from log files
tail -n 1000 /var/log/app/error.log | que --provider claude

Automated Error Reporting:
# Send analysis to Slack/email
cat error.log | que --no-context | mail -s "Error Analysis" [email protected]
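For Slack specifically, an incoming webhook expects a JSON body, so wrap the analysis before posting (the SLACK_WEBHOOK_URL variable is a placeholder you would set yourself):

# Post the analysis to a Slack incoming webhook
analysis=$(cat error.log | que --no-context)
jq -n --arg text "$analysis" '{text: $text}' \
  | curl -X POST -H 'Content-Type: application/json' --data @- "$SLACK_WEBHOOK_URL"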
Que follows a linear pipeline architecture:

- Ingestor: Reads from stdin (with buffer limits to prevent memory overflow)
- Enricher: Gathers non-sensitive metadata from the host environment
- Sanitizer: Redacts PII and secrets using Gitleaks detection
- Advisor: Formats the payload, selects the provider, sends the request, and renders the response
When using the -i or --interactive flag, Que enters an interactive session after displaying the initial analysis. This allows you to:
- Ask follow-up questions about the log analysis
- Get clarification on the root cause or fix
- Request additional details or alternative solutions
- Have a conversation with the AI while maintaining full context of the original log
To exit interactive mode, type exit, quit, or q.
License: MIT