Blinded by the Light

    Sign up to get this delivered to your inbox every week

    Welcome to Edition 30 of the Humanity Working Newsletter! In this edition: what I think we are missing about AI, and how great leaders think differently.

    Do We Have AI all Wrong?

    In the last two weeks I have been talking to a lot of people about our AI Readiness program and specifically what’s different about it. I don’t want this newsletter to descend into a sales pitch (though of course you are welcome to sign up for the program here). Simply put, the main difference is that we built it the way we build all our programs, looking at what real people from different backgrounds do with the technology, and how those behaviors lead them to thrive (or struggle) over time.

    All this talk about the program got me thinking a bit deeper about what I think people are getting wrong about AI. So, at the risk of looking really stupid in a few months, here are what I consider to be the top five misconceptions about AI. They are in approximate order and, like any good Billboard chart, count down to No. 1.

    Misconception No. 5 – It’s Not THAT Big a Deal

    AI won’t just be a big deal, but a gigantic one. How do we know this? Well, the first part is obvious – in many ways it already is. And it got here HUGELY fast. Let’s compare it to something else I think almost everyone would consider a big deal – the Web. Browsing the web first became a thing for the masses in 1994 with the launch of Netscape Navigator. By 2000, over 50% of office workers used it regularly. So that’s around six years. None too shabby. Now let’s compare it with Generative AI. ChatGPT, built on GPT-3.5, launched on November 30th, 2022, and it was the first fully available public release of Generative AI. The 50% mark? We got there by August 2023 – less than nine months.

    That is spectacular – probably the fastest adoption of a technology in history – but there are other things too. New AI-native companies are being built in every part of the economy, not just tech-centric sectors. AI is transforming agriculture and traditional manufacturing, for example. Wherever there is a human doing something, there is now a company exploring how AI might do it more efficiently or better.

    And it will be an even bigger deal once we start to realize the impact of AI, robotics and automation all working together. This means that technology-based systems are going to be able to do lots of things (including manual labor) cheaper, and in some cases better, than humans, who get tired, ill and intoxicated. This will fundamentally change how companies are structured, and what the role of humans is. This is societal-level stuff, and it’s here. Now.

    Misconception No. 4 – It’s Just a Tool

    Most people think of tools as things that help us accomplish tasks more efficiently. Generative AI (which is the form of AI most of us are thinking about these days) does that. But it doesn’t behave the way tools normally do. Tools as we commonly understand them behave in predictable ways. They are deterministic. And if they aren’t working properly, it’s because we haven’t figured out how to use them properly.

    AI is different – certainly the “tools” we use, like ChatGPT. The current incarnation of Generative AI is probabilistic in nature – in other words, it guesses what we mean when we ask it a question, and it guesses again as to the answer. That means it makes mistakes in its own right.

    Is that important? In our research we’ve found it to be very important, because a lot of current training (think prompt engineering courses) comes with the premise that everything can be solved by just using the tool more effectively. Figuring out how to ask the right questions will get you somewhere, but knowing that AI is different from traditional tools will do much more. It will help you figure out how to structure your work, how to check AI’s work, how to structure your teams, and how to adapt as AI improves.

    Misconception No. 3 – AI Only Has Upside (or Downside)

    I think people are getting this wrong in both directions, and it’s not surprising. There are basically two stories being told about AI – utopian and dystopian.

    On the utopian side is the aggressive marketing (at least tens of billions of dollars annually) coming from the tech sector. This world is one where AI eliminates entire classes of disease, makes our roads and skies massively safer, finds innovative solutions to fight climate change, and helps us become the smartest generation of humans ever.

    On the dystopian side are science fiction authors, some Hollywood movies, concerned activists and some articles, like my recent one (I’m Sorry Dave, I’m Afraid I Can’t Do That). Here, AI is eliminating hundreds of millions of jobs, making humans largely irrelevant, sparking new categories of warfare, and creating a post-truth world of artificially generated content riddled with bias.

    When it comes to AI, your vision of the future will probably depend on whether you are a glass half-empty or glass half-full kind of person. But I’m increasingly of the opinion that it will be a complex concoction of positives and negatives.

    Why? Well, it’s basically embedded in the nature of the most recent technology revolutions. We often hear that skeptics missed the automobile revolution because they could only envisage “a faster horse”. But in many ways, a car IS just a faster horse. It’s a way of getting you from A to B more efficiently than the previous alternatives.

    The fundamentally different thing about recent technology advancements is a concept technologists call “platform” – the idea that the technology itself has many different uses. Some of them are good, some of them are bad, and it’s unpredictable (and often uncontrollable) whether good or bad will dominate.

    Misconception No. 2 – This Is Just a Technology Issue

    When it comes to AI, “I’ve got the CIO on this” is a common refrain from the rest of the C-Suite.

    But is your CIO really qualified to deal with the AI revolution?

    Only if you believe that this is primarily a technology shift. And technology companies don’t believe that, because they know that AI is rapidly changing how work (and life) gets done. CIOs aren’t the experts on that stuff.

    There are so many ways that this is more than just a technology change, but here are two that are huge – related to top-line revenue and operational costs.

    First, top-line revenue. In this new world, every company is going to fit into one of three categories:

    • AI Native – where your company makes money directly from AI based offerings
    • AI Enabled – where you use AI to make or keep you competitive
    • AI Resistant – where you differentiate yourself by being outside of the AI realm – the equivalent of a vinyl record store in a digital music world

    As a company, you have to know which of those you are, and that is a business decision. Are you going to delegate that to your CIO?

    Now to operational costs. For a moment – forget that AI is a technology, and think of it in terms of what it actually does. The easiest way to do that is to imagine it as a team of humans, but a special kind of humans. They are cheap, indefatigable team members that never complain, never get tired, and never threaten to leave. And every week they get a little bit more efficient, and a little bit more accurate.

    Would you put the CIO solely in charge of those people? I hope the answer is obvious.

    So who SHOULD be thinking deeply about AI? In short, everyone, but certainly everyone with a C in front of their name. Here are a few examples:

    • CEOs should be figuring out what kind of a company they will run (AI native, AI enabled, or AI resistant) and preparing the company for it.
    • COOs should be figuring out how to restructure the company to support the vision of the organization – ensuring that the company operates smoothly as it transitions to AI-native, AI enabled or AI resistant.
    • CFOs should be figuring out how to redirect company financial resources, focusing on investing in technology, but also investing in preparing the company to maximize the business benefit and minimize the risks of AI.
    • CIOs should be figuring out the right technology to buy and build, and building the right level of technical expertise to support the CEO’s vision.
    • CHROs should be focused on human development – ensuring that they have a workforce of highly engaged employees who are continuously learning and growing – adding value to their AI colleagues.

    Everyone matters in that picture, but perhaps the most important player is the CHRO, because when we all have cheap access to AI, the key differentiator will be how the humans use it and add value to it.

    This means, if you are in HR and are not paying attention to AI, you might be busy making yourself irrelevant rather than strategic. It’s time to start paying attention. Yesterday.

    Misconception No. 1 – We Know Where This Is Going

    Hopefully I’ve convinced you that AI is a big deal, but knowing that is different from knowing where it is going.

    Do the tech companies know? Well, if past is prologue, almost certainly not. In the early days of the web, there was a good understanding that people would want content, and that they would want to buy things more easily. But a strong consensus built around the idea of television evolving into a form of high-definition, semi-interactive TV focused on giving us ever richer content, plus the opportunity to easily purchase the things it sparked.

    That vision of the world made a ton of sense, because we all knew that a small number of us (experts) made content that had artistic or mass appeal, and the rest of us consumed it. That had happened at least since Shakespeare. Innovation would come from improving HOW we consumed content and what commercial decisions we made as a result.

    But it didn’t really work like that. Facebook persuaded billions of us to go from passive consumers to creators, even presenting ourselves under our own names. And today, rich social networks worth trillions of dollars depend on a marketplace between creators and consumers, where many of us switch from one to the other at a moment’s notice. One minute after I’ve finished writing this, I’ll click send, and it will have cost me nothing other than my time.

    Predicting the full impact of technology has now got even harder, because of the points I made in misconception 3 about platforms. Platforms can be used for almost anything, and they are too complicated to allow any of us to predict with certainty what will happen.

    However, there is one thing I think we can know. The less educated we are as a population on this stuff, the more likely bad things are going to happen. This is not because tech companies want a dystopian future – of course they don’t. Teen suicides do not help Meta’s business model. It’s because tech companies are not going to be focused on the side effects of technology. So we need to be. Their job is to make the most amazing possible, possible. Our job is to make the most terrifying possible, impossible. And we do that by getting smart on the technology and acting as advocates for humanity.

    Recommendation

    A People-First Culture in Action

    “We exist to create positive lasting memories in everything we do. We solve problems. We make things work smoothly. We create opportunities.”

    What brand do you think I’m describing? The answer is WD-40.

    On the surface (even if that surface is covered in rust) the answer might surprise you. But my conversation with Garry Ridge, Chairman Emeritus of WD-40, helped me see not only why WD-40 employees see their company that way, but also how they live those values. Garry has won a huge number of awards for his leadership, and it is because he has a deep understanding of what it takes to build a people-first culture, combined with a commitment to achieving it and a straightforward but compelling way of articulating it. I hope I won’t make Garry blush by saying that listening to him reminded me of when I first heard Richard Feynman speak.

    As ever, you can listen to the episode by searching for Humanity Working on your favourite podcasting platform, or by viewing the video below.

    About Us

    If you are worried about how prepared your employees are for change – change in work environments (like hybrid and remote), business strategy, or even technology – you should talk to us. Just reach out to us here and we can get a call scheduled.

    If you liked this newsletter, chances are someone else will too, so be sure to share it with them! Oh, and don’t forget to subscribe!

    Our CEO Paul also posts regularly on LinkedIn outside of this newsletter – you can make sure you miss nothing by following him on LinkedIn or X.

    You can also subscribe to our YouTube channel.