Free Agency Vol. 3 - Who Is Going to Feed the Big Dogs?! You Are.
One of my favorite pastimes is curating my personal YouTube feed. I have spent YEARS doing this, and then seconds messing it up because I wanted to watch something else.
Where else on earth can you see a grown man dressed like Frozone, competing as a wide receiver in Columbus, Ohio!?
Nowhere but the ‘Tube. And that's the point.
LLMs Gotta Eat
A Large Language Model (LLM) is an artificial intelligence system designed to understand and generate human-like text. LLMs are trained on enormous amounts of textual data to learn the patterns, context, and nuances of language.
There are usually three ways to think about how much data an LLM needs:
- Model size: Larger models generally require more data.
- Task complexity: More complex tasks may need more diverse data.
- Desired performance: Higher accuracy often requires more training data.
Whenever you use ChatGPT, Claude, or Gemini, you are getting answers from an LLM. When you ask it to “edit this cover letter in a clear, concise voice, and check for weak language and missing commas,” it responds based on its training and your specifications.
This is also why it sometimes tells you that it cannot answer questions past a certain point in time: its training data has a cutoff. The model is confined to how it was trained, and what it was trained on.
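For the curious, that cover-letter request can be sketched in code. This is a minimal illustration, not any specific vendor's API: the model name and message shape are assumptions loosely modeled on common chat-style APIs, and nothing is actually sent anywhere. It just shows how your specifications and your text get bundled together before the model ever sees them.

```python
# Rough sketch of how a chat-style LLM request is shaped.
# The model name and message format are illustrative assumptions;
# no network call is made here -- we only build the payload.

def build_request(instruction: str, document: str) -> dict:
    """Bundle the user's specifications and text into one request payload."""
    return {
        "model": "example-llm",  # hypothetical model name
        "messages": [
            # The system message carries the standing specifications...
            {
                "role": "system",
                "content": ("You are an editor. Use a clear, concise voice; "
                            "check for weak language and missing commas."),
            },
            # ...and the user message carries the text to work on.
            {"role": "user", "content": f"{instruction}\n\n{document}"},
        ],
    }

request = build_request("Edit this cover letter:", "Dear Hiring Manager, ...")
```

Everything the model knows beyond that payload comes from training, which is exactly why the cutoff exists.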
To function optimally, LLMs need to be fed data consistently and constantly. For any tool built on an LLM, the kind of data it needs can change all the time. That is creating a challenge that is both legal and ethical: what does AI get trained on, and who gets compensated when their work is used for training?
I’ll leave the legal conversation to Carl, but my guess is we will see a lot of legislation coming out in the next three to five years to manage this kind of gray area. It’s getting spooky.
What it does mean, though, is that a different kind of market is emerging.