Much of today’s consumer AI is missing the mark

Here’s why, explained with design thinking

Teo Yu Siang (he/him)
11 min read · Jul 16, 2024

When Figma CEO Dylan Field introduced the company's new AI feature "Make Designs" on 26 Jun, a hush fell over the conference hall packed with tens of thousands of designers.

A few seconds after Dylan entered a prompt, Figma AI started creating (in real time) a high-fidelity design of a pizza finder app, complete with an AI-generated image of a storefront.

Figma AI can create convincingly high-fidelity designs in seconds | Source

It wasn't until Dylan went on to demonstrate how you could customise the design's colour scheme that the audience warmed up to mild applause.

This stuff is existential, of course. AI-generated content is old news, but when the UX community’s most popular tool does it in this fashion, it signals a tipping point. Figma AI’s “Make Designs” feature compelled me to gather my thoughts around the consumer AI hype and really think about how AI is designed today — and what a well-designed AI should look like.

The bad news is that AI, as designed today, is pretty terrible. The key here is that much of today's consumer AI is arguably undesirable and wildly unviable. This makes AI disruptive but not innovative: unable to truly solve pain points or build a sustainable business.

The good news? When the consumer AI hype bubble bursts, things might become better (if it’s not too late).

Let me explain.

Sidebar 1: AI, LLMs and GenAI

Here’s some context on today’s AI landscape:

  • AI has been around since the 1950s. Today’s hype is around new flavours of AI: Large Language Models (LLMs) and Generative AI (GenAI).
  • Most AI products (e.g. ChatGPT and DALL-E) use both LLMs and GenAI, on top of other forms of AI.
  • These AI models are probabilistic black boxes. This means you can give them the same question and they will generate different outputs each time, based on chance.

For a deeper dive, check out this explainer by Algolia.
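
To make the "probabilistic" point concrete, here's a minimal sketch using OpenAI's Python SDK (the model name is just an example; any LLM API behaves similarly). It sends the exact same prompt twice and will usually print two different answers:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

prompt = "Suggest a name for a pizza finder app."

# Ask the exact same question twice. With a non-zero temperature, the model
# samples its next tokens from a probability distribution, so the two
# answers will usually differ.
for attempt in (1, 2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    print(f"Attempt {attempt}: {response.choices[0].message.content}")
```

This randomness is tunable (a temperature of 0 makes outputs more consistent), but it never fully goes away, which matters when we talk about reliability below.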

Sidebar 2: Today’s AI hype is a result of Wall Street’s obsession with short-term earnings

If you’re wondering why AI is in every product all of a sudden, the answer is Wall Street.

In short, shareholders are mainly concerned with ever-growing profits. This short-sightedness causes companies to prioritise hype-inducing projects like AI and rush them out before they are commercially ready.

Edward Zitron’s in-depth article about Shareholder Supremacy explains this amazingly, and is a must-read.

Shareholder Supremacy is what truly dominates modern capitalism — the sense that what matters is growth and shareholder value, even if “shareholder value” really means “making a very specific group of people richer” and “showing perpetual growth to match the expectations of Wall Street.”

— The Shareholder Supremacy, Edward Zitron

Ok, back to the main topic of discussion.

AI today is disruptive, but not of the innovative kind

AI is a solution looking for a problem—at least that’s how it’s designed today. This tech, rushed to the mainstream by Shareholder Supremacy, is a hammer trying to make everything a nail.

Since I’m a designer, I’m going to unpack this through the lens of design thinking. According to IDEO, a truly innovative solution should be:

  • Desirable (solve a pain point and bring pleasure)
  • Feasible (technically possible to achieve)
  • And viable (profitable in a sustainable way)

But because the current flavour of AI is arguably undesirable and wildly unviable, we end up with a solution that isn’t as innovative as it is disruptive.

AI today is arguably undesirable

A truly innovative design solution should be desirable—it should solve a pain point and bring pleasure. AI, as designed today, is unfortunately not so desirable.

It’s undesirable when it’s barely reliable today

It’s easy (though short-sighted) to point to recent examples of AI fails to show how AI is undesirable. After all, when Google’s Search AI tells you to use glue to stick cheese onto pizza and Microsoft’s Bing AI tells you it yearns to be human, anyone can see how AI is hardly providing great value to users.

These problems may well be solved as models improve, although the probabilistic nature of AI means it'll likely take more time than it seems. Whether they get solved in time is another question: Meta has already warned shareholders that its AI could contribute to election misinformation.

But there’s more to why AI is not desirable.

It’s undesirable when it can only regurgitate rather than create

GenAI can only regurgitate and remix rather than create, and this limits its use cases.

When Apple created the iPhone, designers needed to invent a new paradigm of interfaces, one operated with the swipes and pinches of a finger rather than the scrolls and clicks of a mouse. Later, when it launched the Vision Pro, they needed to create a new paradigm of interaction based on gestures and eye tracking. With GenAI, such endeavours are not possible. GenAI does a good job of remixing existing designs into new configurations, but it cannot even begin to understand things that have not yet been invented.

AI could never

Sometimes, AI can regurgitate too well and run into problems. Figma AI ran into teething problems when designer Andy Allen pointed out that its Make Designs feature spat out replicas of Apple's Weather app when asked to design a "not boring weather app". This quickly prompted Dylan to respond and temporarily disable the feature.

GenAI is great at regurgitating and not so great at creating

It’s undesirable when it’s ethically grey (at best)

There are serious ethical controversies around how AI is trained, and how it affects the creative community it depends on for training data.

In Jan 2023, illustrators sued MidJourney, DeviantArt and Stability AI for scraping their work without consent or compensation. From this lawsuit, we learn that MidJourney kept a list of over 20,000 artists, catalogued by style, whose work it scraped for its AI. And in Jun 2023, Getty Images filed for an injunction against Stability AI for a similar reason.

Even Adobe, which mostly trains its Firefly AI on licensed content from Adobe Stock, is getting heat from creatives who uploaded their work onto the platform.

Eric Urquhart, longtime stock image contributor, insists that “there was nothing ethical about how Adobe trained the AI for Firefly,” pointing out that Adobe does not own the rights to any images from individual contributors.

— Adobe Says It Won’t Train AI Using Artists’ Work. Creatives Aren’t Convinced, Tiffany Ng

But isn’t AI still desirable to some folks?

AI is still helpful—desirable—to some people, right? After all, not all designers are working on game-changing new paradigms. Figma, in particular, has also been very careful to explain that its Make Designs feature is only here to help designers create their first drafts. Isn’t it good enough that AI today is useful to some people, even if it might be undesirable to others?

Well, actually, no. Let’s talk about the dire long-term viability of AI as it exists today.

AI today is wildly unviable and unsustainable

You've probably heard something along the lines of how "a designer today will be replaced by a designer who does prompt engineering." The thing is, the picture is larger than that. You see, AI (fuelled by Shareholder Supremacy) rapidly disrupts structures of incentives, with long-term implications for the wider ecosystem.

It’s unviable when it first kills entry-level jobs, then entire industries

Ask a mid-level designer and they'll tell you why Figma AI's designs are not production-ready: you'll want to create an experience that is differentiated from and better than existing ones, make sure the designs don't violate copyright laws, and so on.

The thing is, and this is key: junior designers, and more importantly employers and shareholders, might not be able to tell the difference. To them, an AI-generated output can look convincingly similar to a production-ready one.

Hypothetical, but plausible | App design by Figma (via Apple?)

And so AI disrupts the craft by changing incentive structures. Firms and clients are less incentivised to hire junior craftspeople. When entry-level jobs are harder to find, people have less incentive to train in that craft. Over time, this kills the craft.

This is not hypothetical. Companies have already started laying people off because AI is cheaper. And those who got laid off tend to be in lower-paid, entry-level positions, which means fewer people then get to gain the experience needed to advance to higher-level jobs or functions.

Eventually, there’ll be no designer who does prompt engineering because there’ll just be a cheaper, non-craftsperson who does prompt engineering.

It’s unviable when it’s an ouroboros that eats itself

The funny thing is that when AI disrupts these structures and industries, what happens next will very likely come back to bite the companies enabling the disruption.

Let’s take Google’s search AI as an example.

Google's AI summarises content from a few pages, giving users answers without them having to visit any website. Great. But this changes incentive structures. Over time, these websites get fewer views since users no longer need to browse them, which means lower ad revenue and less incentive to create articles. When there are fewer new articles about new things, Google's AI will have less content to use to answer users' ever-changing queries.

This means a declining quality of Google AI results since, as illustrated above, AI cannot actually create knowledge but can only regurgitate it. When Google AI becomes less useful, people will abandon Google search.

Or take Adobe.

Adobe trains its AI on human-created content submitted to Adobe Stock, allowing users to create custom AI-generated content. Neat. But over time, this creates lower demand not only for human-created stock content, but also for the designers who help customise stock content. This means lower demand for Adobe's suite of products.

Tellingly, this is exactly what Adobe warned investors about in its May 2024 quarterly report to the SEC.

For a final example, let’s go back to Figma AI.

With Figma AI's Make Designs feature, anyone can create convincingly high-fidelity designs. Over time, this reduces demand for designers, especially entry-level ones. This shrinks Figma's user base, which means a gradually declining revenue stream.

So AI is like an ouroboros, a snake with a bottomless pit of hunger that ends up eating itself when there is nothing left to consume. It is wildly unviable in the long term because it is fundamentally unsustainable.

It’s unviable when it’s ecologically unsustainable

AI today is unsustainable not just economically, but also ecologically.

Google has emitted almost 50% more greenhouse gases in the past 5 years due to the amount of energy it takes to train and run its AI. ChatGPT consumes the energy equivalent of 33,000 US households per day, and a single conversation of about 20–50 questions consumes 500ml of water.

If this is just the beginning of the AI revolution, we can expect the environmental cost to skyrocket as companies race to build and maintain newer models of AI.

AI today disrupts without innovating

So AI is hardly desirable and incredibly unviable. This means it cannot be innovative in the way it’s addressing our current pain points, even though it will certainly be disruptive (thanks to Shareholder Supremacy).

This raises the question…

What does innovative AI look like?

How might a desirable and viable AI work? Well, for starters:

Yes, of course, this tweet has been bashed by many bad-faith-reading Reddit threads

I think AI should perform menial tasks in a way that retains a sustainable ecosystem, where:

  • Menial tasks refer to jobs that are repetitive, onerous and unpleasant. If AI can do these tasks, it can solve real pain points and bring true pleasure. (You'll note that expressive tasks like designing, painting and songwriting generally fall outside this scope.)
  • A sustainable ecosystem means that the AI should provide a sustainable long-term business model. And given the high environmental costs, this means we shouldn’t apply AI to every problem, but rather only to problems that deserve such an expensive solution.

Designing a truly innovative AI is literally a trillion-dollar question. If I had the perfect answer, I'd probably be doing that instead of typing this on my laptop. Having said that, I can think of a few imperfect examples of good, innovative AI.

AI that files your taxes

No one likes filing taxes. And depending on where you live, tax regimes can be very tedious.

AI can help. Imagine an AI that analyses your documents and emails, understands the context behind them and organises them for tax purposes. It could file your taxes, or if that's too complex, it could compile everything relevant for you (or a tax agent) to file.
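
As a very rough sketch of what the first step could look like (everything here is hypothetical: the categories, the prompt and the idea that a single LLM call is the right classifier), such an AI might start by sorting each document into a tax-relevant bucket:

```python
# A hypothetical document-sorting step for an AI tax helper.
# pip install openai
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical buckets; a real tax regime would need far more nuance
CATEGORIES = ["income", "deductible expense", "charitable donation", "not tax-relevant"]

def categorise_document(text: str) -> str:
    """Ask an LLM which tax bucket a document belongs to (hypothetical prompt)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{
            "role": "user",
            "content": (
                "Classify this document for tax filing. "
                f"Answer with exactly one of: {', '.join(CATEGORIES)}.\n\n{text}"
            ),
        }],
        temperature=0,  # favour consistent labels over creative ones
    )
    return response.choices[0].message.content.strip().lower()

# e.g. categorise_document("Receipt: $120 donation to the Red Cross, 3 Mar 2024")
# would ideally return "charitable donation"
```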

AI that takes notes and summarises your UX interviews

The most tedious parts of conducting UX research include creating transcripts, writing notes and summarising interviews. This work needs to be done before you can start connecting the dots and distilling learnings.

AI can help. It could transcribe your interviews, summarise takeaways and identify early patterns. This frees you up to focus on the work that matters more: crafting a good study plan and finding insights that validate your hypotheses.
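
For a sense of how little plumbing this takes, here's a minimal sketch using OpenAI's open-source Whisper model for transcription and an LLM for the summary (the file name, model names and prompt are just placeholders, and as noted below, the summarisation quality is the hard part):

```python
# pip install openai-whisper openai
import whisper
from openai import OpenAI

# 1. Transcribe the interview recording (placeholder file name)
stt_model = whisper.load_model("base")
transcript = stt_model.transcribe("ux_interview_01.mp3")["text"]

# 2. Summarise takeaways and surface early patterns
client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{
        "role": "user",
        "content": "Summarise the key takeaways from this UX interview "
                   "transcript, then list any recurring pain points:\n\n" + transcript,
    }],
)
print(response.choices[0].message.content)
```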

AI that… renames layers and fills in placeholder text

Honestly, many of Figma AI's other features are great: AI that renames layers for you (which makes prototyping with Smart Animate way easier!), fills in placeholder copy, and performs visual search so you can find similar design patterns in your library.

It’s no wonder, then, that these features (more than Make Designs) got the loudest applause from the attendees at Config 2024.

This is the best AI feature I’ve seen yet and you can’t persuade me otherwise

You’ll notice a few things in common about these examples of AI

First, they are not sexy. You’ll find it hard to paint a tantalising picture to shareholders based on these applications of AI.

Second, they might already exist today, but they don't do their job very well yet (read: the tech is not yet feasible). For instance, we can find AI-summarised UX findings in platforms like Dovetail. But they're typically bad enough that you'd rather write your notes and insights manually.

Good AI is actually not yet feasible

Last, they’re all solving a user problem while maintaining a sustainable ecosystem. The idea is not to replace human expression, ingenuity or outputs. Rather, they aim to make our lives easier and better while we create these outputs.

Concluding thoughts

Ok, I might have painted a pretty bleak picture of the future of AI. But while I can't predict how things will pan out, I do hold on to a small measure of hope and optimism.

Like the NFT bubble that came and went a few years ago, the consumer AI bubble is likely to burst eventually. You can only hype your shareholders up for so long before you need to show an actual, sustainable business model.

Worldwide search interest in “NFT” and “AI” from 2004 to today | Source

Today, NFTs continue to exist, albeit in much more thoughtful and much less sexy use cases, such as digital certificates. In the same way, when the consumer AI bubble bursts, we'll start seeing thoughtful, truly viable and less sexy use cases of AI. And they'll likely be applications that really help humans work less, or at least in less menial ways.

AI that does less art and writing, more laundry and dishes.
