Artificial Intelligence: AI, LLMs, MCP, ABC, DEF, and other acronyms
This week, I reflect on Boise Code Camp and kick off a busy month diving into AI!
I'm a bit late to finishing my newsletter this week, after spending part of the weekend in Boise for Boise Code Camp.
I love Boise. The tech community is small, but gosh it's friendly. Some of my best friends I've made in my career are there and I love getting the chance to see them!
I'm kicking off a new and very busy month, and it may be the first one where I already have every newsletter outlined without struggling for material. It turns out I have a lot of thoughts about this topic!
Highlights From the Week
Things I've been doing on the internet
Here are some things I've made on the internet this week!
Podcast: Overcommitted Ep. 5 | The ethics of AI for software engineers - we've decided to start incorporating interviews into our podcast, so if that's something you're interested in doing, please shoot me a message!
Illustration:
Things I've enjoyed on the internet this week
Got something you want me to read and feature in this newsletter? Send it to me at brittany@balancedengineer.com!
It has been a very light week for learning, as I was focused on cramming in a bunch of work before heading out of town and prepping for my talk at Boise Code Camp. But I did get through one article!
Article: Things You Should Never Do, Part I by Joel on Software
This article was shared with me by Greg Trent after a lightning talk I gave called "Refactor, don't rewrite", which may be my favorite thing to talk about of all time. The article shares the same sentiment!
Onto the content!
I have spent a ton of time recently perfecting the process of integrating AI tools into my workflows. That made it seem like the perfect time to do more learning on what AI actually is as well as to share some of the ways I have made myself more productive with AI!
And I truly have made myself more productive. I recently completed my semi-annual reflection and comparing the amount of stuff I have gotten done in the past 6 months versus the 6 months before that is truly mind-boggling. I can’t wait to share what I’ve learned with you!
Also as a caveat, I’m not trying to sell any AI stuff to anyone. Yes, I work for a company that is making AI tooling, and that is a tool that I use on the regular. But you can take any time I mention GitHub Copilot and just insert your coding tool of choice. Or buy a GitHub Copilot subscription and tell them I sent you. I don't get commission or anything but it may just get me an extra sparkle point 😎✨
May Theme: Artificial Intelligence
This month, we're diving into AI! And I want to start by demystifying how the current set of AI tools actually works, because it's not as magical or mysterious as you might think.
Artificial Intelligence (AI)
The field of AI is actually quite a bit larger than the set of AI tools available now. Most of the recent advances come from LLMs, which I'll describe more below, but AI encompasses more than just those.
AI refers to a computer science field that is researching and designing systems that can approximate human intelligence.
The vast majority of AI products available still aren't capable of thinking like an actual human brain. They're usually really good at approximating human-level intelligence in one specific area.
Some folks in the field are working towards Artificial General Intelligence, or AGI, which is a benchmark by which a system could think at the level of an actual human. Maybe we will get there eventually, but I can't read the future. 🔮
When I get nervous about some of the crazy new capabilities of AI models, I remind myself that it's really just a lot of math, which is what most things with computers boil down to.
Large Language Models (LLMs)
At their core, LLMs are like a really enthusiastic student who learns by studying lots of examples. Imagine teaching someone to recognize cats: you'd show them thousands of pictures of cats, pointing out the common features - whiskers, pointy ears, tails. After seeing enough examples, they'd start recognizing patterns and could identify new cats they'd never seen before.
That's essentially what we do when we "train" an LLM. We feed it massive amounts of data - text, images, code, whatever we want it to learn about - and the model finds patterns in that data. When you ask it to generate something new, it uses those learned patterns to create something that "fits" with what it learned.
The key insight is this: LLMs don't understand things the way humans do. They're incredibly good at pattern matching and statistical prediction, but they don't have true comprehension. When ChatGPT writes you a poem about cats, it's not thinking about fuzzy feelings and purring; it's calculating the statistically most likely words that should appear in a cat poem based on all the text it has seen.
Think of it like the word suggestions you see on your phone keyboard when typing up a message. It's just guessing what the next word is most likely to be, based on a TON of examples.
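To make that keyboard analogy concrete, here's a toy sketch of the same idea: count which word tends to follow which, then always guess the most frequent follower. (The tiny corpus and function names here are made up for illustration; real LLMs learn far richer patterns from billions of words, not simple word-pair counts.)

```python
from collections import defaultdict, Counter

# A tiny toy corpus; real models train on billions of words.
corpus = (
    "the cat sat on the mat . "
    "the cat chased the mouse . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word (a "bigram" model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the statistically most likely next word, like a phone keyboard."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it follows "the" most often in this corpus
```

No understanding of cats anywhere in there - just counting and picking the most likely next word, which is the same basic trick scaled up enormously.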
This matters because it helps us understand both the strengths and limitations of LLMs:
Strengths: Phenomenal at finding patterns in massive datasets, tireless, consistent
Limitations: No true understanding, can be confidently wrong, needs careful guidance
Understanding these basics helps us use LLMs more effectively and responsibly in our work.
Model Context Protocol (MCP)
This is another acronym that has been quite popular recently, so I thought it might be worth including in this quick and high-level primer.
MCP stands for Model Context Protocol. I break it down like this:
Model: It's a thing for LLMs or other AI models
Context: It provides more context to the model
Protocol: It's a standardized way for computers to communicate. Think of HTTP, or the conventions of a REST API.
In this case, MCP servers expose tools (essentially API methods) that allow a model to fetch more context in a standardized way when it needs it.
One example of how it could be used is in an online store. Say you're searching that store for a gift and need ideas for a shirt. You could ask the AI model of your choice about shirt options, and the store could have an MCP server set up to query an API for its current shirt inventory, making it more likely that the model provides you a better set of options.
The idea is that it allows providing more context to the model, which makes it more prepared to answer questions. One common theme we will be chatting about this month is how important context is. More context usually means better results from LLMs!
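To show the shape of that shirt-store idea, here's a minimal sketch of a server that advertises a "tool" a model can call to get fresh inventory as context. This is not the real MCP SDK or wire format (MCP has its own spec for describing and calling tools); the inventory, tool name, and request shape below are all made up for illustration.

```python
import json

# Hypothetical in-memory inventory standing in for the store's real API.
SHIRT_INVENTORY = [
    {"name": "Graphic Tee", "size": "M", "price": 25},
    {"name": "Flannel Shirt", "size": "L", "price": 40},
]

# The "tools" this server offers to a model. Real MCP servers describe their
# tools with schemas; this dict just captures the gist of the idea.
TOOLS = {
    "list_shirts": lambda params: [
        s for s in SHIRT_INVENTORY if s["size"] == params.get("size", s["size"])
    ],
}

def handle_request(request_json):
    """Dispatch a model's tool-call request and return the context as JSON."""
    request = json.loads(request_json)
    tool = TOOLS[request["tool"]]
    return json.dumps({"result": tool(request.get("params", {}))})

# The model asks for medium shirts; the server answers with current inventory,
# which the model can then fold into its gift suggestions.
response = handle_request('{"tool": "list_shirts", "params": {"size": "M"}}')
print(response)
```

The point is the standardization: because the request and response follow an agreed shape, any model that speaks the protocol can pull context from any server that implements it.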
Have comments or questions about this newsletter? Or just want to be internet friends? Send me an email at brittany@balancedengineer.com or reach out on Bluesky or LinkedIn.
Thank you for subscribing. If you like this newsletter, it would be incredibly helpful if you told your friends about it! :)