The Cost of AI and How to Solve It Locally
A new tool comes out daily offering magical answers to every question through AI. But at what cost? Samsung learned the hard way that anything you share with ChatGPT may resurface for the next person to find. Midjourney and other tools make it obvious that if you are not paying, you don't own the results. Companies are now restricting employees from using these services, sometimes under threat of dismissal. So what are people to do? One option is to run the tools locally. But do you know how? In this session we will look at how Large Language Models work at a high level and the dangers they pose. Then we will review a number of the great tools out there for running models locally. We will end the session by looking at Ollama, a new LLM runner that is changing the game by applying Docker-style technologies to this new world.
Prerequisites
This topic is new for almost everyone, so no experience is necessary.
Take Aways
- How LLMs work
- How to run LLMs locally
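As a taste of the second take-away, here is a minimal sketch of running a model locally with Ollama. This assumes Ollama is already installed, and `llama3` is just one example model tag; substitute any model from the Ollama library.

```shell
# Download a model's weights to the local machine (model name is an example)
ollama pull llama3

# Chat with the model entirely on local hardware -- no prompts leave the machine
ollama run llama3 "Explain briefly how a language model predicts the next token."

# Ollama also serves a local REST API (port 11434 by default),
# so other local tools can use the model programmatically
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'
```

Because both the weights and the inference run on your own machine, none of the concerns about shared prompts in the abstract above apply.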