ThirdBrAIn.tech

Tag: model-quantization

5 items with this tag.

  • Apr 08, 2025

    https://i.ytimg.com/vi/wkXT9SjcS4Y/hqdefault.jpg

    The Great AI Migration (smart entrepreneurs are ditching cloud AI and going local)

    • AI-migration
    • local-AI
    • open-source-models
    • cloud-AI
    • AI-hardware
    • model-quantization
    • AI-efficiency
    • open-source-frameworks
    • AI-entrepreneurs
    • machine-learning
    • YT/2025/M04
    • YT/2025/W15
  • Oct 23, 2024

    https://i.ytimg.com/vi/KSltC4TXxZg/hqdefault.jpg

Run LLaMA 3.1 405B on 8GB VRAM

    • large-language-models
    • AI-optimization
    • GPU-memory
    • model-quantization
    • LLaMa-3-1
    • AI-hardware
    • inference-speed
    • model-compression
    • limited-hardware
    • AI-tools
    • YT/2024/M10
    • YT/2024/W43
  • Feb 29, 2024

    https://i.ytimg.com/vi/nP5pztB6wPU/hqdefault.jpg

1-Bit LLM SHOCKS the Entire LLM Industry!

    • language-models
    • large-language-models
    • AI-efficiency
    • model-quantization
    • energy-efficient-AI
    • neural-networks
    • AI-research
    • model-scaling
    • AI-innovations
    • cost-effective-AI
    • YT/2024/M02
    • YT/2024/W09
  • Feb 17, 2024

    https://i.ytimg.com/vi/tIRmTsns4pw/hqdefault.jpg

    How to Run 70B and 120B LLMs Locally - 2 bit LLMs

    • large-language-models
    • LLMs
    • model-quantization
    • 2-bit-models
    • local-AI-deployment
    • model-optimization
    • Hugging-Face
    • AI-efficiency
    • GPU-CPU-AI
    • model-loading
    • YT/2024/M02
    • YT/2024/W07
  • Jan 03, 2024

    https://i.ytimg.com/vi/yxWUHDfix_c/hqdefault.jpg

    The Best Tiny LLMs

    • tiny-language-models
    • small-LLMs
    • fine-tuning
    • function-calling
    • model-quantization
    • local-AI-inference
    • high-throughput-API
    • performance-comparison
    • deep-seek-coder
    • model-size
    • YT/2024/M01
    • YT/2024/W01

Created with Quartz v4.5.0 © 2025