Choose optimal external AI models for code analysis, bug investigation, and architectural decisions. Use when consulting multiple LLMs via claudish, comparing model perspectives, or investigating complex Go/LSP/transpiler issues. Provides empirically validated model rankings (91/100 for MiniMax M2, 83/100 for Grok Code Fast) and proven consultation strategies based on real-world testing.
Select LLMs for code analysis based on performance, cost, and use case, and run consultations in parallel for faster, more reliable results.
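The parallel-consultation pattern above can be sketched in Python. This is a minimal, hypothetical illustration: `consult` is a stand-in stub for a real call out to a model via claudish (whose actual CLI or API is not shown here), and the model names are taken from the rankings mentioned above.

```python
from concurrent.futures import ThreadPoolExecutor

def consult(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real claudish invocation; in practice this
    # would shell out or call an API. Here it just returns a canned answer.
    return f"{model}: perspective on {prompt!r}"

# Query multiple models concurrently so total latency is bounded by the
# slowest single model rather than the sum of all of them.
models = ["minimax-m2", "grok-code-fast"]
prompt = "Why does this Go test deadlock under -race?"

with ThreadPoolExecutor(max_workers=len(models)) as pool:
    answers = list(pool.map(lambda m: consult(m, prompt), models))

for answer in answers:
    print(answer)
```

Collecting all answers before comparing them lets you weigh the models' perspectives against each other instead of anchoring on whichever replies first.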