Thoughts
Design started as an aesthetic activity. Industrial designers were frequently called upon to "do the plastics" and wrap components in a pretty shell. But even before the design thinking madness took over, designers had been expanding their purview and perspective, exploring more and more design problems and opportunities.
The focus on digital experiences became a way for designers to consider larger and larger parts of a problem space, like interactions that occur over time. And as problems themselves became more and more tangled, some designers learned to adopt a "systems stance" toward design. But what does that mean? And can AI do it too?
Scientist and author Donella Meadows dedicated her career to advancing systems thinking in environmental science. Her consideration of systems found in nature has strong parallels to the systems we find in technological culture. She explains that the components of systems are interconnected in such a way as to produce their own behavior patterns over time. That's pretty compelling, and somewhat controversial: it means that there's no direct causality in a system. We aren't hapless puppets, but we can't take all of the credit or the blame for things like successful product innovations or failed startups. As she describes, "Something about statements like these is deeply unsettling... we have been taught to analyze, to use our rational ability, to trace direct paths from cause to effect, to look at things in small and understandable pieces, to solve problems by acting on or controlling the world around us." But these systems have behavior of their own, outside of any of our individual actions or reactions.
The "system-ness," at least in the human-made world, emerges and hides as we zoom in and out.
Consider a coffee maker—a simple $17.99 Mr. Coffee from Amazon. It boasts an indicator light, a filter basket, a "pause n' serve" sensor, a window to see the water level, a clock, an alarm, a delay-brew feature, and cord storage. Almost incidentally, it also makes coffee.
A product perspective views the coffee maker as a discrete, finite, understandable object. We can see it and touch it, and there it is. Those parts—the light, the basket, the clock—are features of the product, and through a product lens, that's all there is to the thing.
A systems perspective allows us to zoom around the concept of coffee-ness. We can:

- Zoom in, below the plastic, to the components and the relationships between them.
- Zoom out, to the manufacturing deals, supply chains, and markets that produce a $17.99 appliance.
- Zoom further, to the politics, policies, and informal relationships that shape those markets.
A system is hard to think about. It's hard to hold the whole thing in your mind at once, because there is no single whole thing. Systems thinking means thinking around the problem, and in many modern-day design problems, thinking around the problem is a core skill.
This is a skill that can be taught and learned. I learned it during my undergraduate education, training to be an industrial designer, in a class called How Things Work.
One of our first tasks was to buy a coffee maker and take it apart. We looked at the components, drew them, and diagrammed the relationships between parts. For many students, it was the first time they had ever peered below the plastic to really understand how things work, and why. This simple exercise acts as a basis for systems thinking, because it reveals that a product is more than what it appears to be. When the machine is plugged in, coffee doesn't flow out immediately, and there's a practical reason for that. A diagram becomes a way of communicating the behavior of this simple system.
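That kind of diagram can even be expressed in code. Here's a toy sketch (not from the class) that models the coffee maker as a directed graph of components; the component names and connections are hypothetical, chosen only to illustrate the idea that behavior lives in the relationships, not the parts:

```python
# A hypothetical component diagram for a drip coffee maker, expressed as a
# directed graph: each key points to the components it feeds into.
system = {
    "power cord": ["heating element", "clock"],
    "reservoir": ["heating element"],
    "heating element": ["riser tube"],  # heated water rises under pressure
    "riser tube": ["shower head"],
    "shower head": ["filter basket"],
    "filter basket": ["carafe"],
    "pause n' serve sensor": ["filter basket"],
}

def path(graph, start, end, seen=None):
    """Trace the chain of components between two parts. This is why coffee
    doesn't flow the instant the machine is plugged in: water has to pass
    through every intermediate component first."""
    seen = seen or [start]
    if start == end:
        return seen
    for nxt in graph.get(start, []):
        if nxt not in seen:
            result = path(graph, nxt, end, seen + [nxt])
            if result:
                return result
    return None

print(path(system, "reservoir", "carafe"))
```

Tracing a path through the graph answers the same question the hand-drawn diagram answers: what has to happen, in what order, before coffee appears.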
It's a local system, and it's fairly well contained. Variability in the system comes from repeated use, from people doing weird things like forgetting to clean the carafe, and from anomalies in the electricity supply to the house.
Next, students are asked to place the coffee maker in a broader system. Just as we can take apart the coffee maker, we can take apart the market around it. Zoom around the political qualities of the coffee maker. Why is it only $17.99? It's all about volume and cheap materials. Drawing on previous knowledge and making some inferential leaps, it's fairly likely that the manufacturing plant cut a deal with someone at Mr. Coffee based on quantity produced, and also on personal relationships. Maybe the buyer and seller met to discuss the deal, went out for dinner, and made some informal agreements. Why didn't the company put the deal up for competitive bid? Maybe it's a matter of trust: a previous positive experience implies that future experiences will be positive, too.
Consider the production and distribution part of coffee-maker-making. A coffee maker has a metal base. The production facility has sheets of metal. They buy them from a sheet metal distributor, who buys raw bauxite from a production company, who mines it in Africa. Everything is going great, and then the President of the United States—under pressure from the American coffee-maker-making industry—imposes tariffs on Chinese imports of Mr. Coffee coffee makers. How did we get into a situation where a coffee-maker-making group can lobby the President? Maybe the group actually represents a broader set of consumer appliances, giving it more leverage. Or maybe, just like the manufacturing deal, someone golfed with the President years ago and an informal deal was made.
Prices go up; sales go down. Mr. Coffee stops producing so many units, requiring less sheet metal, resulting in less need for raw materials, and so on. And these situations aren't static. The tariff appears, then goes away; one country creates sanctions while another removes them. Businesses discontinue product lines and add new ones. Consumers change their coffee-drinking habits. The system ebbs and flows.
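That ripple effect can be sketched as a tiny simulation. Nothing here comes from the essay—the prices, the tariff rate, the linear demand response, and the metal-per-unit figure are all invented—but it shows how a single shock propagates down a supply chain over time:

```python
def simulate(periods=10, tariff_at=3, tariff_rate=0.25):
    """Toy model: a tariff lands in period `tariff_at`, raising the retail
    price; demand falls, production shrinks, and raw-material needs shrink
    with it. All numbers are invented for illustration."""
    base_price, base_demand = 17.99, 1000
    history = []
    for t in range(periods):
        tariff = tariff_rate if t >= tariff_at else 0.0
        price = base_price * (1 + tariff)
        # Demand falls as price rises (a crude linear response).
        units_sold = max(0, int(base_demand - 30 * (price - base_price)))
        sheet_metal_kg = units_sold * 0.4  # invented: 0.4 kg of metal per unit
        history.append((t, round(price, 2), units_sold, sheet_metal_kg))
    return history

for period, price, units, metal in simulate():
    print(period, price, units, metal)
```

The point isn't the arithmetic; it's that the behavior—falling production, falling material demand—emerges from the relationships between the parts, not from any one part alone.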
Systems thinking is provoked by asking open-ended questions. How did it get there? How does it work? Why does it work? This seems like a search for root cause, but it's not: it's an exploration around the concept of the system. At every single question-node, there are infinite directions to follow the thread, into topics of engineering, culture, economics, and on and on. The threads are interconnected, and the system exhibits its own behavior. And so the other part of systems thinking is answering the open-ended questions, with confidence, and with large "abductive" leaps. Abductive reasoning is the ability to combine many things that are related, but not the same, and to propose somewhat-causal links between them.
These leaps aren't logical in a traditional sense: they aren't inductive or deductive. But they're real, and they're how people work through ill-structured problems.
So how does AI do with abductive reasoning and systems thinking? I asked GPT-3.5 why a Mr. Coffee machine is so inexpensive. It answered:
As of my last update in September 2021, Mr. Coffee is a well-known brand that produces affordable and basic drip coffee makers. There are several reasons why Mr. Coffee machines tend to be relatively inexpensive compared to some other coffee maker brands:
It's important to note that pricing can vary depending on the specific model and features. While Mr. Coffee machines are generally more budget-friendly, there are higher-end models available from the brand and other manufacturers that come with additional features and a higher price tag.
That's all accurate; those are all reasons the product is inexpensive. But two things are missing, and they're missing from nearly every AI-driven response, on any topic.
The first thing missing is the illogical or fuzzy reasons. It's quite likely that unfair, irrational, unexpected, contradictory, and multi-dimensional decisions are being made somewhere in the pricing process that lead to the cost. But AI isn't (yet?) well-trained on these fuzzy items, and an AI-driven response will have a low sense of confidence in telling us that "The price is a loss-leader based not on a brand strategy but on a vendetta between two CEOs of major corporations." We expect the response to be accurate, and so even if the model is trained on the fuzzy stuff, it hedges toward the safe, defensible stuff. And that stuff, above, is pretty thin; it's what a 9th grader might answer after some casual, quick secondary research.
The second thing missing is the path from question to answer. It isn't the answer that's important: it's the pathway there. The answer above is a one-and-done response. It's efficient, minimizing the winding and subjective relationships between ideas. In its efficiency, it produces no new knowledge. But systems thinking is knowledge-generative. As a result of this thinking, new ideas are formed and produced. And those new ideas, and new knowledge, then feed a creative process, not just an informative process. Design is about making new things. The content coming back from GPT-3.5 is not new.
Interestingly, visual relationships are brought to life quite effectively with MidJourney. There's just something fundamentally different about casual, informal semantic relationships that trips up the computer.
Design strategy is steeped in systems. A new product or service naturally exists in the context of a business, a market, a brand, customers, and employees. Design is uniquely positioned to describe the system, because systems are best described in relationships. It's about sketching the system components, analyzing the variability in the system, and describing the flow of information, or knowledge, or ideas. Right now, AI doesn't offer us systems thinking: it simply offers us facts. It's perhaps short-sighted to think it won't get there, but it will take training data that's driven by the process of living, not the output of living, and that's really, really hard data to find. It's private, unique, unrepeatable, and emotional. It's human.