Qwen-3 (235B, 30B, 32B) + Free APIs + Cline & RooCode: This AI Coding Model is PRETTY GOOD!



AI Summary

Summary of the Qwen 3 Series Models

  • Overview: The Qwen team has launched the Qwen 3 series, consisting of eight models: two mixture-of-experts (MoE) models and six dense models.

  • Key Models:

    • 235 Billion Parameter Model (MoE):
      • Active Parameters: 22 Billion
      • The strongest of the series, achieving competitive results on coding and math benchmarks.
    • 30 Billion Parameter Model (MoE):
      • Active parameters: 3 billion; small enough to run locally.
    • 32 Billion Parameter Dense Model: Performance is comparable to the 235B model but falls slightly short.
  • Performance:

    • The 235B model scores approximately 61.8 on the Aider benchmark, roughly on par with Claude Sonnet in non-thinking mode.
    • The 30B mixture-of-experts model performs comparably to the QwQ model.
    • The smaller models (0.6B to 14B) generally fall short of the higher-tier models.
  • API Availability: Free APIs for the models are available on OpenRouter and can be configured as providers in Cline and RooCode (see the configuration sketch after this list).

  • User Experience: The models exhibit some operational issues and are not yet competitive with top models like DeepSeek. The smaller models show improvement but still lag behind.

  • Recommendations: For better results, consider running the 30B model locally or using the 32B model for basic coding tasks (a local-setup sketch follows this list).

  • Final Thoughts: While the Qwen 3 models have their strengths, they are still not at the level of leading models like DeepSeek. Users are encouraged to share feedback and experiences with these new models.
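
For the OpenRouter setup mentioned above, here is a minimal Python sketch using the "openai" client package against OpenRouter's OpenAI-compatible endpoint. The model slug "qwen/qwen3-235b-a22b:free" and the OPENROUTER_API_KEY environment variable are assumptions; check OpenRouter's model listing for the exact free-tier IDs.

```python
import os

from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible API at the base URL below.
# The model slug and environment variable name are assumptions.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="qwen/qwen3-235b-a22b:free",  # assumed slug for the free 235B MoE model
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)

print(response.choices[0].message.content)
```

Cline and RooCode accept the same base URL, API key, and model ID in their OpenRouter provider settings, so no extra code is needed on that side.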
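
For the local route in the recommendations, the sketch below assumes a local OpenAI-compatible server such as Ollama, which serves one at http://localhost:11434/v1. The model tag "qwen3:30b-a3b" is an assumption; substitute whatever tag your local runtime uses for the 30B mixture-of-experts model.

```python
from openai import OpenAI

# A local runtime such as Ollama exposes an OpenAI-compatible API at /v1;
# the model tag "qwen3:30b-a3b" is assumed and may differ on your setup.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's default OpenAI-compatible endpoint
    api_key="ollama",                      # local servers ignore the key, but the client requires one
)

response = client.chat.completions.create(
    model="qwen3:30b-a3b",  # assumed local tag for the 30B mixture-of-experts model
    messages=[{"role": "user", "content": "Explain list comprehensions in Python with one example."}],
)

print(response.choices[0].message.content)
```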

For further testing details and optimal setups, refer to the video or visit the associated links.