ThirdBrAIn.tech

Tag: model-optimization

6 items with this tag.

  • Apr 05, 2025

    Thumbnail: https://i.ytimg.com/vi/VVgZCquMKXE/hqdefault.jpg

    AI in Action From the Vault, 4.25 workflows

    • artificial-intelligence
    • prompt-engineering
    • workflows
    • machine-learning
    • AI-techniques
    • language-models
    • system-design
    • AI-productivity
    • model-optimization
    • AI-development
    • YT/2025/M04
    • YT/2025/W14
  • Apr 01, 2025

    Thumbnail: https://i.ytimg.com/vi/JV3pL1_mn2M/hqdefault.jpg

    AI Engineering in 76 Minutes (Complete Course)

    • AI-engineering
    • foundation-models
    • neural-networks
    • transformer-architecture
    • machine-learning
    • prompt-engineering
    • model-evaluation
    • data-quality
    • AI-training
    • model-optimization
    • YT/2025/M04
    • YT/2025/W14
  • Dec 28, 2024

    Thumbnail: https://i.ytimg.com/vi/K75j8MkwgJ0/hqdefault.jpg

    Optimize Your AI - Quantization Explained

    • AI
    • quantization
    • model-optimization
    • deep-learning
    • memory-reduction
    • neural-networks
    • AI-models
    • context-quantization
    • hardware-efficiency
    • model-compression
    • YT/2024/M12
    • YT/2024/W52
  • Aug 30, 2024

    Thumbnail: https://i.ytimg.com/vi/3UQ7GY9hNwk/hqdefault.jpg

    Fine Tune a model with MLX for Ollama

    • fine-tuning
    • machine-learning
    • AI-models
    • MLX
    • Ollama
    • dataset-creation
    • model-optimization
    • Hugging-Face
    • natural-language-processing
    • AI-training
    • YT/2024/M08
    • YT/2024/W35
  • Aug 05, 2024

    Thumbnail: https://i.ytimg.com/vi/Zv-eadNi1Uk/hqdefault.jpg

    Calculate Required VRAM and Best LLM Quant for a GPU

    • GPU
    • VRAM
    • quantization
    • machine-learning
    • AI
    • model-optimization
    • NVIDIA
    • deep-learning
    • hardware-requirements
    • script
    • YT/2024/M08
    • YT/2024/W32
  • Feb 17, 2024

    Thumbnail: https://i.ytimg.com/vi/tIRmTsns4pw/hqdefault.jpg

    How to Run 70B and 120B LLMs Locally - 2 bit LLMs

    • large-language-models
    • LLMs
    • model-quantization
    • 2-bit-models
    • local-AI-deployment
    • model-optimization
    • Hugging-Face
    • AI-efficiency
    • GPU-CPU-AI
    • model-loading
    • YT/2024/M02
    • YT/2024/W07
