DeepSeek released a free, open-weights large language model in late December, claiming it was trained in just two months for under $6 million.
But I feel like that will just lead to more training on the same (or more) hardware, now with a more efficient model. Bitcoin mining didn't slow down just because it got harder. I don't know enough about the training process, though: would more efficient use of the hardware simply allow larger models to be trained on the same hardware and training data?