Running an LLM on your laptop

When you use a chatbot from companies like Google or OpenAI, your chats and data can be used to train models that may later spit those bits back out to other people. As always, there is value in owning your data and controlling who gets to see and use it. For MIT Technology Review, Grace Huckins outlines the reasons to run LLMs locally and how to get started.

Training may present particular privacy risks because of the ways that models internalize, and often recapitulate, their training data. Many people trust LLMs with deeply personal conversations—but if models are trained on that data, those conversations might not be nearly as private as users think, according to some experts.

“Some of your personal stories may be cooked into some of the models, and eventually be spit out in bits and bytes somewhere to other people,” says Giada Pistilli, principal ethicist at the company Hugging Face, which runs a huge library of freely downloadable LLMs and other AI resources.

For Pistilli, opting for local models as opposed to online chatbots has implications beyond privacy. “Technology means power,” she says. “And so who[ever] owns the technology also owns the power.” States, organizations, and even individuals might be motivated to disrupt the concentration of AI power in the hands of just a few companies by running their own local models.
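Hugging Face's hub of freely downloadable models is one common starting point. As a rough illustration (not the article's specific setup), here is a minimal Python sketch that pulls a small open chat model from the hub with the `transformers` library and runs it entirely on your own machine; the model name is just an example of something small enough for a typical laptop, and you could swap in any other downloadable model that fits your memory.

```python
# Minimal sketch: run an open chat model locally with Hugging Face transformers.
# Requires: pip install transformers torch
# The model below is an example choice (~1.1B parameters, laptop-friendly).
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
)

messages = [
    {"role": "user", "content": "Why might someone run an LLM locally?"},
]

# After the one-time weight download, the prompt and response stay on your machine.
result = chat(messages, max_new_tokens=200)
print(result[0]["generated_text"][-1]["content"])
```

Tools like this are one way to keep conversations off third-party servers: the model weights live on your disk, and inference happens on your own hardware rather than in someone else's data center.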
