We benchmarked text-to-SQL performance on real schemas to measure how faithfully models translate natural language into SQL and reason about schema structure. The task targets analytics assistants and simplified database interfaces, where the model must parse both user intent and the underlying database layout.

Takeaways

GLM-4.5 scores 95 in our runs, making it a strong alternative if you want competitive Text-to-SQL without defaulting to the usual suspects.

Most models perform strongly on Text-to-SQL, with a tight cluster of high scores. Many open-weight options sit near the top, so you can choose based on latency, cost, or deployment constraints. Examples include GPT-OSS-120B and GPT-OSS-20B at 94, plus Mistral Large EU also at 94.

Full details and the task page here: https://opper.ai/tasks/sql/

If you’re running local or hybrid, which model gives you the most reliable SQL on your schemas, and how are you validating it?
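On the validation question, one lightweight approach is to load your DDL into an in-memory SQLite database and ask for a query plan: SQLite parses and binds the statement, which catches syntax errors and hallucinated tables or columns without executing anything. A minimal sketch, with a hypothetical schema and function name (adapt the DDL and the dialect to your own setup):

```python
import sqlite3

# Illustrative schema only; substitute your own DDL.
SCHEMA = """
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, country TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     user_id INTEGER REFERENCES users(id),
                     total REAL);
"""

def validate_sql(sql: str, schema: str = SCHEMA) -> tuple[bool, str]:
    """Check model-generated SQL against a schema without running it.

    EXPLAIN QUERY PLAN forces SQLite to parse the statement and resolve
    every table and column name, so syntax errors and references to
    nonexistent columns raise an error before any rows are touched.
    """
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema)
        conn.execute(f"EXPLAIN QUERY PLAN {sql}")
        return True, "ok"
    except sqlite3.Error as exc:
        return False, str(exc)
    finally:
        conn.close()

# A valid query passes; a hallucinated column is rejected.
print(validate_sql("SELECT name FROM users WHERE country = 'SE'"))
print(validate_sql("SELECT email FROM users"))
```

This only proves the query is well-formed against the schema in SQLite's dialect, not that it answers the question; for semantic checks you still need execution against test data or golden queries.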


💬 Discussion r/LocalLLM (21 points, 3 comments) 🔗 Source