IS ARTIFICIAL INTELLIGENCE READY TO BECOME THE NEXT “KING” YET? - PART 1

Robert Massey
11 min read
July 15, 2024

When I first flew a Black Hawk helicopter, I barely imagined I’d one day be talking about artificial intelligence. In my twenty years serving the Department of Defense, I’ve seen technology evolve dramatically. I started as a Black Hawk pilot and later served in roles like contracting officer and product manager. From managing billion-dollar contingency-environment contracts to integrating intelligence mission data into systems at the National Geospatial-Intelligence Agency, I’ve observed firsthand the shift from hardware being “king” to software being “king.” Today, AI is emerging as a new frontier, surpassing the former kings. For it to truly succeed, businesses must follow certain key principles and concepts. It’s all about building capable AI solutions that customers can trust now and improve over time. The current reality for the majority of AI solutions, however, is incredibly disappointing.

Why Is It Challenging to Deliver Trusted Solutions That Achieve Expected Outcomes?

AI solutions have been hyped so much that most customers think they will work off the shelf. Take a look at the advertising from Microsoft, Google, or Adobe. You see thirty-second clips of scenarios that work perfectly, generating incredible images, products, and documents for users. Much like the McDonald’s commercials, though, my Big Mac never looks like it does in the commercial. (And let’s be honest, my AI-generated images often look like they’ve been through a Picasso phase.)

Consequently, businesses quickly realize that these solutions are either trained on data that doesn’t make sense for their organization or require a great deal of customization – including Reinforcement Learning with Expert Feedback (RLEF) and real-time co-pilot training – to function effectively. Every time this happens, it sets the progress of AI back. Sadly, there are so many “grifters” in the AI space looking to make a quick buck that they don’t care whether their solutions end up working. These solutions are like buying a ‘self-cleaning’ vacuum that still leaves a trail of crumbs behind – utterly disappointing.

Fortunately, I deal with incredibly sophisticated buyers in the public sector. Almost all of them smartly work on policies and doctrine that help them adopt AI responsibly. Some of these policies include the concepts that guide the most effective AI solutions – AI integrity, RLEF, and AI-human partnership.

It’s All About Human-AI Partnership

Having trusted AI ensures better products and shows how combining human expertise with machine intelligence achieves superior results at the speed of relevance. So the question is: can we train our AI models in the public sector to be more intelligent and effective? Absolutely. But will they make mistakes? You bet. I too occasionally forget where I parked my car, and I’m not even dealing with terabytes of data!


If your approach is buying a product off the shelf, plugging it in, and having it do someone’s job, you will be incredibly disappointed – though there are plenty of “grifters” out there who will happily sell it to you. Instead, what works is implementing procedures that help you teach your AI model to learn at a rapid pace. This strategy enables your AI solution not only to assist in the decision-making process, but also to show how confident it is in its decisions. That is AI integrity.
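To make the idea concrete, here is a minimal, hypothetical sketch of what “AI integrity” can look like in practice: every model answer is paired with a confidence score, and anything below a threshold is routed to a human partner instead of acted on automatically. The decision function, threshold, and example answers are illustrative assumptions, not any vendor’s API.

```python
# Hypothetical sketch: pair each AI answer with its confidence, and flag
# low-confidence answers for human review (AI-human partnership).
from dataclasses import dataclass


@dataclass
class Decision:
    answer: str
    confidence: float        # 0.0-1.0, as reported by the model (assumed)
    needs_human_review: bool


def decide(answer: str, confidence: float, threshold: float = 0.8) -> Decision:
    """Surface the model's confidence and escalate to a human expert
    whenever the model is not sure enough to act alone."""
    return Decision(answer, confidence, needs_human_review=confidence < threshold)


high = decide("Approve the contract modification", 0.93)
low = decide("Reject the invoice", 0.41)
print(high.needs_human_review)  # False: AI acts, human spot-checks
print(low.needs_human_review)   # True: escalate to the human expert
```

The threshold of 0.8 is arbitrary here; in a real deployment it would be tuned to the cost of an error in your mission context.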

Teach Your AI

Incorporating AI integrity into your AI solutions is the best way for your organization to approach partnership with AI. Teach it how to do the job better, just like you would teach a new employee.

For example, unless you’re trying to summarize a document, craft an email, or make a funny picture of your cat, enterprise-level GenAI won’t work out of the box. Out of the box, it’s essentially a new hire who has no idea where the coffee machine is – and it’s meant to be much more than that. Your AI application needs to acquire and apply the knowledge that matters most to your organization.

I don’t care if it’s Gemini or ChatGPT; your AI solution won’t do anything of real value until you treat it like a human and train it to know your business. Provide your GenAI model with enough “training material” – your data and the input of your domain experts. That’s how humans and machines interact to improve the services companies provide to their customers.
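One lightweight way to capture that “training material” is a simple expert-feedback loop, in the spirit of the RLEF concept the article describes: each time a domain expert corrects the model’s output, the correction is logged and later exported for a fine-tuning run. The record format, rating scale, and export shape below are assumptions for illustration only.

```python
# Hypothetical expert-feedback loop: log corrections from domain experts
# and export them as JSONL training examples for a future fine-tuning job.
import json

feedback_log: list[dict] = []


def record_expert_feedback(prompt: str, model_output: str,
                           expert_correction: str, rating: int) -> None:
    """Capture an expert's correction so it can be folded into the next
    training cycle -- the 'training material' your model needs."""
    feedback_log.append({
        "prompt": prompt,
        "model_output": model_output,
        "expert_correction": expert_correction,
        "rating": rating,  # assumed scale: 1 (wrong) to 5 (perfect)
    })


def export_training_examples(min_rating: int = 1) -> str:
    """Emit logged corrections as JSONL prompt/completion pairs."""
    lines = [
        json.dumps({"prompt": r["prompt"], "completion": r["expert_correction"]})
        for r in feedback_log if r["rating"] >= min_rating
    ]
    return "\n".join(lines)


record_expert_feedback(
    prompt="Summarize clause 7 of the contract",
    model_output="Clause 7 covers payment terms.",
    expert_correction="Clause 7 covers termination for convenience, not payment.",
    rating=2,
)
print(len(export_training_examples().splitlines()))  # 1 example ready to train on
```

The point is not the storage format but the habit: every expert interaction becomes data your AI can learn from, just as a new employee learns from their mentor.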

Combining AI technology with human expertise is like hiring AI colleagues to help you deliver superior results. Not only that, AI infused with the nuanced knowledge of human experts can make us safer and more capable – capable enough to develop a fully autonomous utility military helicopter. That’s exactly what happened in 2015.

Read on in Part 2.