Proposed by: Aryan Garg

Forget Ford, Your LLM Is the World's Biggest Contributor to the Depleting Ozone Layer

Since all the rage has been surrounding AGI, RAG, and other buzzwords, I plan to take a deep dive into their repercussions. LLMs are notoriously expensive to train and to run inference on, and while this may not be a problem for Earth 2024, it will be a problem for Earth 2050. Companies are pouring millions, if not billions, into training the AI that will power this generation. With a new model seemingly every day from Mistral, DeepMind, OpenAI, or Anthropic, someone has to care about the compute needed to power these multi-billion-parameter models. I plan to talk about the environmental impact of training these models on the cloud, and about how businesses don't need to spend millions building their own AI bot god when they can harness the power of finetuning instead. Nine times out of ten, finetuning is less computationally expensive, it cuts electricity costs for both training and inference, and it will meet almost all of your business needs. I plan to introduce open-source models from Hugging Face and showcase Hugging Face's AutoTrain for finetuning (if a live demo is permitted).
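To make the pitch concrete, here is a minimal sketch of the kind of finetune the talk would build up to. It uses the open-source `transformers`, `peft`, and `datasets` libraries directly rather than AutoTrain (which wraps a similar workflow behind a UI/CLI); the model name and data file below are placeholder assumptions, not recommendations:

```python
# Minimal sketch: parameter-efficient (LoRA) finetuning of an open-source
# model from the Hugging Face Hub. Assumes `transformers`, `peft`, and
# `datasets` are installed; model and data file are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # any open causal LM; smaller ones work for a cheap demo
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a few million adapter weights instead of all ~7B parameters;
# that is where the compute (and electricity) savings come from.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))
model.print_trainable_parameters()

# Placeholder dataset: swap in your own business data.
data = load_dataset("text", data_files={"train": "my_business_docs.txt"})
tokenized = data["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

AutoTrain automates essentially these steps (model selection, data mapping, training configuration) so none of this code has to be written by hand, which is what the live demo would show.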


TL;DR:

-> AI is expensive

-> Training costs the world millions in electricity and compute

-> Google has spent millions training Bard (now Gemini), and millions more on inference

-> Finetuning is the Key

-> You don't need to spend millions when a few hundred dollars* can do the trick

-> Hugging Face is kinda awesome for this

-> AutoTrain is the GOAT

-> Training preexisting models on new data will meet 95% of your business LLM requirements


* This will depend on how much data you need to train on and how long you train the model. I'll showcase a basic example that was finetuned for less than 10 USD.
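For a sense of scale, here is the back-of-envelope arithmetic behind that claim. Every number below is an illustrative assumption (cloud GPU prices and run times vary widely), not a quote from any provider:

```python
# Illustrative cost comparison: a short LoRA finetune on one rented GPU
# versus pretraining a model from scratch on a large cluster.
# All figures are assumptions, included only for the arithmetic.
gpu_hourly_rate = 2.00            # USD/hour, assumed budget-cloud GPU price
finetune_hours = 3                # assumed: small dataset, parameter-efficient run
finetune_cost = gpu_hourly_rate * finetune_hours
print(f"Finetune: ~${finetune_cost:.2f}")       # ~$6, in line with the <10 USD demo

pretrain_gpus = 512               # assumed cluster size for from-scratch training
pretrain_hours = 24 * 21          # assumed: three weeks around the clock
pretrain_cost = gpu_hourly_rate * pretrain_gpus * pretrain_hours
print(f"From scratch: ~${pretrain_cost:,.0f}")  # ~$516,096, before inference
```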

Source code/Reference: https://blog.aryann.tech

Talk duration: