Thoughts
Most designers I know hate sites like 99designs and Fiverr. These are marketplaces for design work: logos, clothing, websites, and more. Customers in search of a design can start contests, designers submit their work, and the customer selects a winner. 99designs boasts 444,000 happy customers with an average rating of 4.7 out of 5 stars, $200M paid to designers, and a 60-day money-back guarantee.
Some contests are open to the public; others are by invitation only. As a customer, you can review the bids from designers, or you can buy a fixed-size package: 30 concepts for a tattoo design cost $299. You can start contests for car wraps, PowerPoint templates, mascots, and infographics.
And, most surprising, most interesting, and a little mind-bending: you can get 30 designers to give you designs for a mobile phone app for $599.
Since the inception of 99designs, designers have railed against the site. They've argued that the design work isn't good, that the site breeds cookie-cutter or lazy design, that it lowers the wage of designers to an abysmal level, that it demands the designer create "spec work" (speculative work that won't necessarily be paid for), and, most fundamentally, that it undercuts the value of design as a profession entirely.
The value that's lost for the designer appears to be gained by the customer: 99designs is generating $60MM in annual revenue, probably because customers feel that the price is right and the product is good. There's a need for this service, apparently a big one, and this style of creative work isn't going away.
But the truly fascinating part of 99designs is that it's generating a massive dataset that can fuel the machine learning necessary to put even the 99designs designers out of a job.
AI is the stuff of science fiction: machines doing what humans do. A big part of AI research has been understanding how people solve problems, so that computers can solve them, too. There are well-defined problems and activities, like a game of chess, that have specific and finite rulesets; there are ill-defined problems and activities, like software development; and there are lots of things in between. Some researchers hold that we can claim "true AI" when computers can solve those ill-defined problems for us.
Machine learning is a way for computers to evolve their abilities: to get better and better at playing chess by playing a lot of chess, until they can beat the grandmaster. Simplistically, a way to jumpstart machine learning is to "train" the computer. Get an email about Viagra and flag it as spam; the computer has a data point. Twenty or thirty people flag it as spam, and it knows that specific email is spam. A hundred or a thousand people flag a hundred or a thousand different Viagra emails as spam, and now it knows the characteristics of a Viagra email well enough to predict how to handle a new one. At a Google-like scale, it can get pretty good: in 2015, Google said it was seeing only 0.05% false positives on spam, and it has only gotten better since.
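To make that training loop concrete, here's a minimal sketch in Python using scikit-learn's naive Bayes classifier. The emails and labels are invented stand-ins for user flags; a real spam filter trains on billions of them, but the mechanics are the same.

```python
# A toy version of the spam-training loop described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Cheap Viagra, buy now!!!",          # flagged as spam
    "Viagra special offer, click here",  # flagged as spam
    "Lunch on Tuesday?",                 # left in the inbox
    "Quarterly report attached",         # left in the inbox
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam; each flag is a data point

# Turn each email into word counts, then fit the classifier.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(features, labels)

# Given a brand-new email, the model predicts from learned characteristics.
new_email = vectorizer.transform(["Discount Viagra today only"])
print(model.predict(new_email))  # -> [1]: looks like spam
```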
In 2004, Luis von Ahn, a researcher at Carnegie Mellon, created the idea of "games with a purpose." In one game, two participants are randomly connected via the internet and shown the same image: a bird, for example. They can't communicate with each other, but they can enter words that describe the image. One types "bird", the other "blue bird", and eventually they both type "blue jay." Because the words match, they "win", and that word is considered a descriptive label of the image. Game after game, those labels are reinforced or rejected, and when they reach a defined "good label threshold", we can consider the label accurate. Now we know, with a high degree of confidence, that the bird is a blue jay. In von Ahn's seminal paper on this work, he writes that "Rather than developing a complicated algorithm, we have shown that it's conceivable that a large-scale problem can be solved with a method that uses people playing on the Web."
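The core mechanic reduces to a simple agreement counter. Here's a sketch; the threshold value is made up, and von Ahn's actual system was more sophisticated about things like player matching.

```python
from collections import Counter

# Hypothetical: how many independent player-pair agreements we require
# before trusting a label. The real "good label threshold" is a tuned value.
GOOD_LABEL_THRESHOLD = 5

label_votes = Counter()

def play_round(player_a: set, player_b: set) -> None:
    """Two players describe the same image; any word they both
    typed counts as one vote for that label."""
    for agreed_label in player_a & player_b:
        label_votes[agreed_label] += 1

# Game after game, agreements accumulate...
play_round({"bird", "blue bird", "blue jay"}, {"sky", "blue jay"})
play_round({"blue jay", "tree"}, {"branch", "blue jay"})

# ...and a label is accepted once enough pairs converge on it.
accepted = [label for label, votes in label_votes.items()
            if votes >= GOOD_LABEL_THRESHOLD]
```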
Google experimented with another version of the game that took the labeling further. Instead of labeling the whole image, players labeled particular parts of it, indicated by a square area drawn on top of the picture. Now we can start to understand just where that blue jay is in the picture, and learn that the jay is in a tree, which is in a field of yellow flowers. It's a game, with a purpose: helping a computer "know" about a picture.
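The output of that version is richer: each data point ties a label to a region, not just to an image. A hypothetical record might look like this (the field names and coordinates are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class RegionLabel:
    label: str  # what the players agreed the region contains
    x: int      # top-left corner of the square, in pixels
    y: int
    size: int   # side length of the square

# One image now carries structured scene knowledge, not just tags:
image_labels = [
    RegionLabel("blue jay", x=310, y=120, size=80),
    RegionLabel("tree", x=250, y=40, size=400),
    RegionLabel("yellow flowers", x=0, y=380, size=640),
]
```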
99designs is a game with a purpose. Over the last ten years, designers and customers have been training the system, and if 99designs has good telemetry and behavioral logging, they have a really, really well-trained machine. I'm not making a new or prescient observation: Patrick Llewellyn, the CEO of 99designs, told TechCrunch that the company is sitting on "all of the data accumulated from nearly a million customer interactions." To point out the most obvious signal: 99designs knows which design solutions sell, and while that's a pretty poor judge of "good design" for most designers, it's an excellent one for most customers. The customer has purchased a design artifact, whether a logo, a tattoo design, or an entire mobile phone application design, and if they didn't invoke the 60-day money-back guarantee, it's safe to say that they consider it a good design and a good value for their money.
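Under plausible assumptions about what 99designs logs, every finished contest could become a labeled training example, much like the flagged emails above. A purely hypothetical record (every field name here is invented) might look like:

```python
from dataclasses import dataclass

# Hypothetical shape of one labeled example from a design contest.
# "kept" plays the role the spam flag played earlier: it's the label
# that purchase-plus-no-refund behavior supplies for free.
@dataclass
class ContestExample:
    category: str          # e.g. "logo", "tattoo", "mobile app"
    brief_keywords: list   # what the customer asked for
    design_features: dict  # colors, typography, layout, etc.
    was_selected: bool     # did the customer pick this submission?
    kept: bool             # True if the 60-day guarantee was never invoked

example = ContestExample(
    category="logo",
    brief_keywords=["coffee", "modern", "minimal"],
    design_features={"palette": "earth tones", "mark": "cup icon"},
    was_selected=True,
    kept=True,  # the positive training signal
)
```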
It's no secret that Google (and Alphabet) is all-in on machine learning. It's a fundamental part of self-driving cars, smart home automation, mapping, and targeted advertising. The more data Google has, the more it can do. It only makes sense for the company to extend its massive and growing dataset to include data about applied creativity like design.
I've always been skeptical of and concerned by the proliferation of technology. I see how it has historically undercut valuable human qualities and behavior: assembly-line production in the industrial revolution reduced the self-worth of employees as they became human robots performing repetitive tasks; the Moore's-law curve of computing power has created an unthinkable amount of useless technology trash that ends up in landfills; and mobile phones have led to all sorts of "alone together" behavior, with indications of correlation (and in some cases, causation) with depression and anxiety. I don't want design to become a victim of the constant march of technology, but it's probably inevitable, at least when we think about design as a noun. I'd bet a cup of coffee that machines will soon be able to conceive of chairs, toasters, websites, and buildings that won't win any awards from designers, but will sell really, really well. This won't necessarily be because the computers are "smart." It will be because we trained them well to give consumers what they want.
But there's a positive take on this inevitability. If we abandon design-as-a-noun, where designers make things that are sold, and leave it to the machines, it starts to matter where we aim our design-as-a-verb. Computers are getting better and better at solving ill-defined problems, but I don't think we're close to training them to work on wicked problems: the large, hairball problems of poverty, access to education, nutrition, and behavior. "Designing" in these contexts is a different interpretation of the word than most of us are used to. This is part of the design thinking panacea: the application of a way of thinking about problems, often in the context of government.
We have a lot of experimental and observational research data explaining why people turn to drugs, or what happens when we cut funding for public schools, or why people eat food that's bad for them. But we don't have a lot of solutions to these problems, and when we do have evidence that a solution works, it often runs counter to an ideology whose support would be needed to implement it. In the US, political lobbying pushes the FDA to set guidelines for food that may not be the objectively "best solution" for improving our health. Our different interpretations of the Constitution lead to conflict over everything from gun laws to abortion to zoning. We have lots and lots and lots of data about what to do to help society, and we often do the opposite, even in the face of data. And that's going to really confuse the machines.
This is another of many reasons that designers need to evolve their "making" skillset. We have plenty of discourse in the context of these problems. Those of us who can make things can apply those skills in a context that doesn't necessarily benefit from machine learning. The thing we make in order to positively impact nutrition is based on inference, and there won't be lots and lots of examples for us to draw from. A potential solution in this context needs to evolve from originality and an emotional understanding of people and culture. It needs to toe a political line. It needs to be convincing. It needs to compromise. Maybe the machines will get there eventually. I think it's going to take a very long time.
I suppose there's an irony to the predicted future of sites like 99designs or Fiverr: the people training the machine will slowly be put out of a job, just as Google-funded Lyft's drivers, who are training self-driving cars, will be out of a job, too. And as those designers look to recast their ability to solve problems in unique ways, perhaps they will refocus their efforts on problems that are more worth their time.