I like the idea of offline LLMs but in practice there's no way I'm wasting battery life on running a Language Model.
On a desktop, too, I wonder whether it's worth the extra stress and heat on my GPU compared to one in a datacenter, which costs me a few dollars per month, or a few cents per hour if I spin up the infra myself on demand.
Super useful for confidential or secret work, though.
In my experience, a lot of companies still have rules against using tools like Copilot due to security and copyright concerns, even though many software engineers just ignore them.
This could be a way to satisfy both sides. It only solves the issue of sending internal data to companies like OpenAI, though; it doesn't solve the "we might accidentally end up with somebody else's copyrighted code in our code base" issue.