• Dran@lemmy.world
    2 days ago

    Inference is dirt cheap by comparison: hundreds to thousands of concurrent users can be served by hardware costing in the high thousands to low tens of thousands of dollars.

    Training those same foundation models takes weeks to months of time on tens to hundreds of millions of dollars' worth of hardware.
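
    The gap is easy to see with a back-of-envelope calculation. The figures below are illustrative assumptions picked from the ranges in the comment (a $20k inference server, 1,000 concurrent users, a $50M training cluster), not measured numbers:

    ```python
    # Back-of-envelope: inference vs. training hardware cost.
    # All figures are illustrative assumptions from the ranges above.

    inference_hw_cost = 20_000       # "low tens of thousands" of dollars
    concurrent_users = 1_000         # "hundreds to thousands" of users served

    training_hw_cost = 50_000_000    # "tens to hundreds of millions" of dollars

    # Hardware cost per concurrent inference user.
    cost_per_user = inference_hw_cost / concurrent_users

    # How many such inference servers the training budget would buy.
    ratio = training_hw_cost / inference_hw_cost

    print(f"~${cost_per_user:.0f} of hardware per concurrent user")
    print(f"training hardware budget = {ratio:,.0f}x one inference server")
    ```

    Under these assumptions the training cluster costs as much as a few thousand inference servers, which is why serving a trained model is the cheap part.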