Apple Liquid Glass - I get it but I don’t think it will work - yet
15 Jun 2025

Table of Contents
- What is Apple Liquid Glass?
- No but, what is it really?
- Sure, but will it work?
- But I think it might work later
- References and Asides
Yet. At least in the short term. The title is slightly clickbait, but my thoughts here are more scattered and boring, so I need to at least make it seem controversial to hold you through this article. Although, calling it out probably defeated the purpose.
Herewith my thoughts.
What is Apple Liquid Glass?
It’s really a new design system/CI. Apple calls it a new “material”, which is their language for a fundamental aspect of it, but in reality, it’s a new design system.
In their words it allows engineers to “embrace design principles of Apple platforms to create beautiful interfaces that establish hierarchy, create harmony, and maintain consistency across devices and platforms”.
Apple could have called it “plastic” but I suppose it doesn’t fit the pretension they’re going for. Apple be apple-ing I guess.
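To make that concrete, here’s roughly what adopting the material looks like for a developer. This is a minimal sketch assuming the glassEffect SwiftUI modifier Apple previewed alongside Liquid Glass at WWDC25; treat the exact API shape as approximate.

```swift
import SwiftUI

// A minimal sketch of the Liquid Glass idea: the content runs edge to
// edge, and the controls float over it on a translucent "glass" layer
// instead of claiming their own strip of screen real estate.
struct GlassControlsDemo: View {
    var body: some View {
        ZStack(alignment: .bottom) {
            // The content is the star of the show.
            Image(systemName: "photo")
                .resizable()
                .scaledToFill()
                .ignoresSafeArea()

            // Controls sit on the new material, so the content stays
            // visible "through" them.
            HStack(spacing: 24) {
                Button("Share", systemImage: "square.and.arrow.up") {}
                Button("Edit", systemImage: "pencil") {}
            }
            .padding()
            .glassEffect() // the new "material", per Apple's 2025 SDKs
            .padding(.bottom, 32)
        }
    }
}
```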
No but, what is it really?
Apple (like a lot of other companies) is seeing a revolution in how we interact with content.
In the good old days, we all had buttons. Physical ones. Buttons everywhere. BlackBerry was the epitome of buttons on buttons, sporting a 60% keyboard whose keys - especially for someone with stubby fingers - were a nightmare to press until you got weirdly good at hitting the right tiny button on instinct (#UselessSkillAlert).
In the previous world, where physical buttons were the only method of interaction, the content was always front and centre. In a world where bigger, better and more powerful screens have killed physical buttons, screen space has to be sacrificed to keep the device usable.
Essentially, Apple’s designers are trying to get rid of the need to sacrifice expensive screen real estate by borrowing the trick we use to avoid turning the lights on during the day: adding a clear material that lets the view through. This is as opposed to increasing the overall size of the screen; I believe they’ve concluded that making bigger screens is optimising in the wrong direction, or over-optimising without much return.
Sure, but will it work?
I don’t think it will be well received in the short term. My empirical evidence: the release of AirPods and the removal of the headphone jack were super controversial, and AirPods in particular were outright mocked - in some cases people were ashamed, even embarrassed, to admit they liked them. This obviously takes for granted that the execution of “Liquid Glass” is done well and that the ergonomics are not compromised like they were with the Magic Mouse. I distinctly remember getting that “leek meme” from colleagues when I “rocked” my first AirPods, and I daily drive a Magic Mouse despite a deep disdain for it (I can’t find a better left-handed mouse, so the trackpad-like features make up for its shitty ergonomics imo - if you know of a good left-handed mouse, please contact me, you will change my life forever).
My experience using Apple products w.r.t. their design is that - at its best - the product is such a pleasure to use that all flaws are forgiven, and - at its worst - the utility of using it outweighs the terrible UX decisions. “Liquid Glass”, from that perspective, doesn’t feel far off from their philosophies.
But I think it might work later
Because we’ll supposedly have AI.
Yes, seriously.
I’m not just tenuously trying to link something to AI for the sake of it, trust me.
The age of (let’s call it) Natural Interaction with digital content requires content being front and center. I foresee “review” being a more common interaction than “operate”. Let’s imagine a world where the kinks in AI are ironed out - what then will be the main thing on the screen? The content. What will be the controls? Natural Interaction mechanisms like voice, gestures and touch (but not in a way that recreates a physical button).

We’re already seeing this with Agentic AI and Prompting, right? The artifact you’re working with gains “intelligence”[2] in the sense that it can tell you things about itself and can do things you command it to. Asking a “document” questions rather than manually highlighting or copying and pasting text into ChatGPT and asking it to summarize, for example. Or telling your image to animate itself. The goal of Apple (and many other companies) is to seamlessly integrate AI into the process. As someone now helping to build AI into our business processes, I personally agree with this strategy and feel that it’s a superior way to do it from a “business value” perspective.
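If that sounds abstract, here’s a toy sketch of the shape of that interaction. Every type and method name below is hypothetical - it just illustrates content that can answer questions and act on commands, not any real API.

```swift
import Foundation

// Hypothetical sketch: the artifact itself exposes natural-language
// entry points, so the UI can shrink down to the content plus a prompt
// instead of rows of buttons. None of these names are real APIs.
protocol IntelligentArtifact {
    /// "Tell me about yourself" - e.g. "summarize this document".
    func ask(_ question: String) async throws -> String
    /// "Do something to yourself" - e.g. "animate this image".
    func perform(_ command: String) async throws
}

struct Document: IntelligentArtifact {
    let text: String

    func ask(_ question: String) async throws -> String {
        // A real implementation would hand the question and the content
        // to whatever model/runtime you use; stubbed out here because
        // the plumbing isn't the point.
        "(an answer derived from \(text.count) characters of content)"
    }

    func perform(_ command: String) async throws {
        // e.g. rewrite, translate or reformat the content in place.
    }
}
```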
Sounds a bit sci-fi, but that seems to be the direction of travel, and we’re already in a world where machines can be given the autonomy to execute operations - they’re just not that good at the big, long-running stuff yet, in my opinion and experience. In this new world, the company that figures out how to let the content “live” on the screen, with the controls secondary and unobtrusive, will definitely have an easier time with where the industry is going.
Maximizing value is tending towards “enriching the interaction with content while simultaneously reducing the number of operations a human has to do to achieve the same result” - which was previously a hard tradeoff. Now, AI is making this possible seemingly without compromising either (with varying results imo).
As someone that loves the aesthetics of Apple and has bought into the design language and philosophy - I can only hope they don’t fuck it up.
- FIN -
References and Asides
[1] - Images generated by Gemini by giving it a paragraph and asking it to generate an image from that with “Use a manga style and make it dramatic. Avoid creating faces and avoid text generation.”
[2] - “Intelligence” is a strong word, but it’s the word Apple is using so I’m gonna go with it for now. I don’t believe this is the right word personally, but I’m not gonna get into it. I choose peace.