Task Failed Successfully
I have a friend who is a futurist. He helps businesses analyze possible outcomes and adjust decision-making accordingly. They call it “strategic foresight.”
The other night, at an event he hosted, I participated in a group activity in which we considered four possible futures, then divided into small groups for 45 minutes of brainstorming and drafting presentations on our strategic approach to navigating the future we'd been randomly assigned.
It being 2025, someone in my group suggested we run our challenge through ChatGPT. They snapped a picture of the handout — which outlined specific steps our mini think tank should undertake — and then uploaded it for the AI to take a crack at.
A minute later the guy said, "Look what ChatGPT came up with," and passed around his phone. The app had hit all the key points and, by all accounts, did a tremendous job with the challenge. Forty-two minutes left and our task was complete. One of my teammates wrote out the LLM's key takeaways on poster board while we chatted loosely around the edges of the topic. We had fundamentally abdicated our responsibility, and the machine did a fine job in our stead. Good, even, viewed from just the right angle.
One by one, the other tables rose to give their presentations. Each passed with flying colors. Their explanations were sensible, creative, interesting. They’d clearly put their heads together and dived deep. They developed nuanced solutions to the challenges at hand. I learned from each of them.
Then it was our turn.
Our AI guy was the presenter. (I want to be clear that he was a lovely human being who did a very nice job.) Our presentation sounded good and checked off all the required boxes. We did the thing we were instructed to do.
But when it was over, all I could think was, “Task failed successfully.”

Everything about what the AI spit out was totally appropriate. It had done the work, and if we'd been graded on the presentation, we certainly would have passed. So, by most metrics, it was a rousing success.
But by the most important metrics, in my opinion, it was an abject failure.
Yes, the presentation seemed to make sense, but ultimately there was no there there. It was essentially meaningless. Just surface, nothing deep. Certainly nothing half as interesting as what the humans in the other groups had come up with. It was complete, and we had passed, but had we really done anything?
My gut insists no.
I honestly can't remember what our takeaways were. This, I think, is a side effect of the fact that the AI's key insights weren't especially insightful, and that we didn't meaningfully discuss the challenge at hand. We didn't participate, so we didn't learn shit.
I'm guessing, sincerely, that the people who listened to our presentation didn't learn much either. They probably felt like, oh, that was nice enough, it sounded right, but in the grand scheme of things there wasn't much to hold onto. Our presentation looked good from a distance, if you squint a little. It would've been perfect to watch while lying on the couch, thumbing through your phone.
That’s when it struck me: in no way whatsoever is this better. In fact I think it’s notably worse.
As the evening wrapped, I wrote in my notebook:

“Used AI. Done first. Sounded good. Least meaningful.”
I’m fairly certain everyone in the room would agree with my assessment. Acceptable, but in no way notable.
It simply didn’t work. And I have a theory as to why.
AI is not better. It’s just faster.