Generative AI and the Risk of Careful Consulting

Last year, I wrote a blog post about Kinetic West's intentions for 2023, describing our efforts to build a culture of trust and to break through the fear of failure, both within our organization and through our client work. In that post, I introduced the idea of "Careful Consulting" – the notion that many of the tools we use in client service to keep from "messing up" our projects (e.g., being attentive to clients, socializing and refining recommendations, leaning on best-practice case studies) have the potential to inhibit truly creative and innovative consulting work.

Kinetic West at our 2023 Spring retreat on Vashon Island, WA.

Around that same time, Generative AI started to seriously enter the public discourse, as tools like Midjourney, ChatGPT, and DALL-E became widely available. In February of this year, the Kinetic West team came together to talk about the implications of these Generative AI tools for our field – social impact consulting. The conversation was open and exploratory, but I think the team left with three takeaways:

  • A healthy skepticism about whether these tools would truly increase consultants’ ability to do the type of bold and innovative work our clients need;

  • An understanding of how these tools could make life easier for consultants and help them break through some of the challenges that we all face when staring at the blank page;

  • The need for a critical eye on equity and the ethical use of Generative AI, such as validating responses, mitigating bias, and avoiding plagiarism and copyright infringement.

We also concluded that we likely had at least a year or two to keep grappling with how and when to use Generative AI tools in our consulting work. Wow, were we naïve.

Since that time, I’ve been confronted with Generative AI's use in the world of management consulting nearly every day. A few examples:

  • Teammates or partners seeding brainstorming sessions with ChatGPT “to get the ball rolling”;

  • Clients sending us ChatGPT's recommendations on questions that we've been working on for months (or even years) as "food for thought";

  • Reading blogs from very reputable consulting firms on the buzzy topics of the moment and being paranoid that they might have actually been written by ChatGPT;

  • Getting suggestions from friends about using AI tools to get over writer's block (for this very blog post, incidentally, which has been many, many months in the making!);

  • A partner of ours using heavily human-refined and edited Midjourney images on one of our big projects this summer, images that were received with both praise and skepticism.

So as you read what follows, know that I'm not coming to you from an ivory tower or fronting as holier-than-thou. I'm speaking as a fellow traveler who has used these tools in my work and is confronted daily with the temptation to use them more. Furthermore, I'm 100% prepared for this blog post to be proven totally wrong and to make myself and our firm look hypocritical, given the real possibility that both the entire management consulting industry and KW will be leveraging Generative AI tools in the near future. Chalk this up as my commitment to break through the fear of failure and take more risks. Sigh.

I believe that Generative AI tools are a dangerous force for the management consulting industry and our clients. There are several reasons I feel this way, including:

  • Ethical concerns around reusing the work of other consultants, artists, and designers without the ability to properly attribute or compensate them;

  • Equity concerns about the fact that AI recycles and perpetuates the thinking of white-dominant institutions and consultants, creating an echo chamber that makes it harder for voices of color and other marginalized people to break through and influence policy and practice;

  • Labor market concerns related to AI eliminating jobs within our industries, especially entry-level jobs where new professionals build their networks, gain experience, and learn.

We could write a blog post about each of these topics, but I'm going to focus on how I think Generative AI tools can perpetuate an already prevalent challenge in our industry, which we call "Careful Consulting."

As I shared above, "Careful Consulting" is the idea that many of the tools we use in client service to keep us from "messing up" our projects can, in practice, inhibit truly creative and innovative consulting work.

The tools and techniques that I most associate with "Careful Consulting" are best-practice reviews, case studies, landscape analyses, trend research, interviews with thought leaders, literature reviews, and even excessively "socializing" recommendations with clients.

If you've worked with me or KW over the years, you might be thinking, "Wait! Doesn't Marc put something like this in every scope?" and you'd be right! I'm not saying that we're above using these techniques or even that they don't have a place within great project work.

What I am saying is that these tools, by definition, ground our work in the work of others and in what has been done before, and that over-relying on "Careful Consulting" keeps us from innovating, pushing our clients into difficult places, or finding solutions that depart from conventional thinking.

Today, it seems our communities and our clients are faced with intractable challenges like structural racial inequity, climate change, housing affordability and growing homelessness, and rising costs and declining outcomes in healthcare and education, to name a few. Working on these challenges year-over-year, our clients can feel stuck, like they’re running in circles.

And while there's much to learn from conventional thinking and from how other cities, states, and countries do this work, for many of these problems we're going to need to start with a blank page and test ideas and practices that haven't been tried before. For those problems, Careful Consulting is just not going to cut it.

So what does this all have to do with Generative AI? AI tools learn by being trained on vast quantities of existing published and publicly available information, ideas, and solutions, and by processing that data to identify patterns and summarize themes.

So if you ask ChatGPT a question like "How do you improve affordable housing and rising homelessness within a major American urban city?" or "What are best practices for improving Higher Education graduation rates within a rural community?", it will give you an answer, and that answer will sound very convincing and professional, because what it's doing is drawing on the studies and previously published ideas it was trained on and summarizing an eloquent response from them. It's a Careful Consulting Engine.
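If you want to try this for yourself, here is a minimal sketch of running the same experiment programmatically rather than through the chat interface. It assumes the OpenAI Python SDK (v1 or later), an OPENAI_API_KEY set in your environment, and an illustrative model name – none of which come from the original post.

```python
# Minimal sketch: send one of the broad policy questions above to a chat model.
# Assumptions (not from the post): OpenAI Python SDK v1+, OPENAI_API_KEY env var,
# and "gpt-4o-mini" as a stand-in model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "What are best practices for improving Higher Education "
    "graduation rates within a rural community?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; substitute whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

# The answer reads as polished and plausible because it summarizes patterns in
# previously published work -- the "Careful Consulting Engine" in action.
print(response.choices[0].message.content)
```

Whichever model you point it at, the output has the same character as the screenshots below: fluent, professional-sounding, and assembled from what has already been written.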


Asking ChatGPT “What are best practices for improving Higher Education graduation rates within a rural community?”

Asking ChatGPT “How do you improve affordable housing and rising homelessness within a major American urban city?”


For me, these recommendations are still nowhere near the level of depth, specificity, nuance, and practicality that KW and our peers hold ourselves to. But I'm not going to pretend that it isn't extremely humbling to look at recommendations that an AI generated in 15 seconds and know that it could take a team of humans weeks, and sometimes months, of hard and thoughtful work to get to something that, upon quick review, will sound pretty similar.

What our clients need from us now isn't something that "sounds right"; they need new thinking, new ideas, and things that are better than what's been done before. Sometimes those ideas take longer, don't sound as polished, or are harder to understand and accept, but I'm confident that we're not going to make any progress on the big problems facing our communities if we keep replicating and producing iterations of what's been done before.

DALL-E generated image based on the prompt "a McKinsey consulting partner battling AI to make a slide deck, in the style of Garry Kasparov and Deep Blue"

Further, I fear that over-leveraging Generative AI could cause a great flattening within management consulting. We will increasingly think and sound alike, which is exactly what we don't need, given that our industry is already plagued by conformity and hasn't moved fast enough to diversify and broaden the voices within it. If we go down this path, we as consultants will be doing our clients a tremendous disservice.

But I'm hoping that these tools will force us mere human consultants to up our game and leverage the tools that we uniquely have – like creativity, empathy, and curiosity. We should be asking ourselves, "Could ChatGPT write this?", and if the answer is "yes," we should keep working. If Generative AI can help us answer "what is," then we must be called to imagine "what could be."

If all we read is to be believed, ChatGPT will get better and better and may, in short order, "out-human" all of us. (I admit, I'd love to see an AI battling a McKinsey partner in a contest of who could "add more value" by editing their team's slide deck.) And look, if that day comes, maybe our clients will stop needing consultants for white papers and best-practice analyses.

But even then, AI won't do the hard, people-driven work – community engagement, building political consensus, change management, and improving organizational culture – all of which is as essential to success as developing a sound recommendation. Knowing the "right" answer is just part of addressing the problem. Making decisions that account for the voices both represented and not represented in the data that feeds AI, and implementing recommendations through authentic partnerships, are what will really matter when we're taking on the tough issues.

People have compared AI to the creation of atomic weapons – a new, world-changing technology released without a full understanding of the risks it would create – and in a way, this feels like the beginning of an arms race within the consulting industry, where firms will feel increasing pressure to adopt these AI tools to keep pace with or stay ahead of competitors. As in weapons proliferation, we need collective action as an industry to decide what rules will govern our work before it's too late.

I'd love to hear what fellow practitioners have to say, and I look forward to working together with you on this issue.

Marc