Brian Watkins

AI: An answer, not the answer


Caveat: I am not an expert in AI. My knowledge of it is limited, and what you are about to read is my opinion. If you want deeper knowledge on the topic, learn from the experts.

I think AI has the potential to be an amazing tool. I see it being used in great ways now - I use my Google Home to ask questions, get information, hear the latest news, and more. From a business perspective, the ability to apply AI to analytics, workflow learning, and similar tasks is exciting.

However, it concerns me in two areas:

  1. How we use AI in conjunction with people - not in place of people.

  2. How AI is learning.

The power of AI will be in how we harness it to work WITH people, not instead of people. With all due respect to those who think the robots will take over someday, it won't happen in my lifetime. Computers are great at some things, humans are great at others. The power will come in how those two things come together. Think of a current and rather pedestrian example. CRM systems are great at holding information about clients. People have learned to use CRMs to make better, more efficient connections with people. CRMs didn't revolutionize the industry, but they helped when used well by humans.

Consider AI from a great manager's perspective. The manager needs to be even less of an expert, because AI can guide the employee through day-to-day tasks and even decision making. AI will probably even be able to help with coaching and development. Where the manager will come in is focusing more on communications, expectations, development, coaching, and feedback, using the tools AI makes available. In other words, managers will become more people-focused, because that is where they provide their real value.

Note: that has always been the case, but it will become more apparent when AI really takes hold.

My second concern is the more dangerous one in my mind. Machine learning depends on code and on the past. Just as we all learn, it requires information to be analyzed and conclusions to be drawn. AI will be able to do that faster, better, and more accurately. But it still relies on the data it gets - which means garbage in, garbage out.

What is the bad data I am worried about? Bias. As humans, we have tons of biases that create bad information for us all the time. How do we ensure that bias doesn't creep into AI and get reinforced by it?

A simple example: how do you decide which colleges provide the best education and whose graduates you should hire? After all, you don't have any real data on a candidate's results, so you need to use something. We tend to think Ivy League schools are great, and if we looked at things like current success rates (which schools successful leaders come from), it might reinforce that idea. But how many of those people are getting opportunities and advantages because of the college's name, versus because the college actually prepared them for success? The bias may lead to further bias.

I believe that people working in AI know this and are figuring out ways to keep it from happening, but I haven't been convinced yet that we are there. We've spent centuries building up biases, and we still don't recognize many of them. It would be foolish to think they aren't going to show up in the AI.
