Two designers who work in artificial intelligence discuss the tension between creativity and AI – when is AI enabling and when is it disabling? AI is often thought of as a way to improve productivity but for creatives, productivity also includes time for reflection and serendipity. How do you integrate those intangible human qualities into digital tools?

The following transcript has been lightly edited and condensed for ease of reading.


David Kirkpatrick: —please join me onstage. Okay, so this is a—we haven’t, until now, really had a session that was really right down the center about AI, and I think considering a lot of the dialogue at this conference about the problems that we face in tech and the need to bring humanity into the equation more centrally, this is a great way to have our one AI session, because these two people are deeply committed to that in their own ways—different ways, but quite parallel, as I think we’ll learn.

So Joshua is a fashion designer who’s also a coder and programmer who’s really gotten into interesting ways of using AI in design and fashion, and he’s going to show something in a second—they’ve each got one slide, I told them they could have one slide. And for Philippa, or Pip, as she’s known, I have to just read her bio. I suppose you must have provided this, because you don’t see these kinds of things that often. “A human-centered product designer, philosophically-minded technology researcher, and unconventional computational toolmaker, striving to enhance our creativity through embracing unexpected provocations and seemingly irrelevant questions, who likes multisyllabic words.” But she’s at the Media Lab at MIT, and she recently won a prize—an art prize.

Philippa Mothersill: Yes.

Kirkpatrick: So quickly, what was the art prize that you just won?

Mothersill: It was the Laya and Jerome B. Wiesner Art Award, which is—yeah, it was really significant to me to win this, because Jerome Wiesner was one of the presidents of MIT who really integrated—

Kirkpatrick: What did you win it for?

Mothersill: Oh, what did I win it for? Oh, I won it for sort of my role in bringing arts to the community at MIT and trying to involve that discussion with the technology that we’re developing as well, which is sort of what the legacy of Jerome Wiesner was about.

Kirkpatrick: So just before we get into the details of AI, what have you done to bring arts into the community at MIT?

Mothersill: Through a range of different events we’ve held over the years, different reading groups and salons. I’m a Fellow of the Royal Society of the Arts, which I’ve sort of tried to integrate into that community there, bridging between different universities. At Harvard, there’s a great group called metaLAB. So trying to bridge and connect the communities and start these conversations, which is really important, because art and the way we think about art allows us to integrate really interesting, diverse points of view about history and philosophy and interactions and perception into the way that we critique and discuss technology.

Kirkpatrick: Good. I really—I’m glad you said that, and I didn’t know all that stuff you just said you did. But one of the things that I—final thing, I mean, I want to move to Joshua in a minute. But one of the cool things about what Pip does is she’s trying to integrate ambiguity and serendipity into the design of digital systems, partly—and both of these people are deeply committed to figuring out ways that technology can enhance, not suppress, creativity. So Joshua, you are a computer scientist, which—you are not really a computer scientist per se, right?

Mothersill: Not by training, but by osmosis by being at MIT.

Kirkpatrick: By osmosis, when you’re at the Media Lab. So—but explain what you do with AI, how you think about AI, and how you’re using it in fashion, and if you want to show your slide any time, you decide when to put it up.

Joshua Mudgett: Yeah, I mean, you can put it up now. But the question that I started my work with is how can we take this tool, which is at its heart a tool of automation, and use it to enable people to be their most human and genuine creative selves, right? And that’s kind of what my work in fashion is: to say what is it about the creative process that this can help, and what do we want to stay away from? And the answer, I think, is that, as an artist I’m sure you know, there’s this part of the process where you remove yourself from everything sensical and you dive into what you know and what you find beautiful, and these images that you’ve lived through. We don’t touch that part. What I do with my work is I take that part of the creative process and I allow people to tell it to me. And then the AI works on the next part, which is the iterative process, which every human goes through when they create something. You know, anybody on a daily basis goes through iterations of things that they’re trying to do, and that’s where AI finds its footing: really bringing agency to an individual within fashion or art or design. And yeah, so what I do is I use AI to give custom garment designs to every individual based on what they find beautiful and what they love.

Kirkpatrick: So the basic idea—well, first of all, tell us what we’re seeing in the slide.

Mudgett: Oh, I haven’t actually seen it, really. [LAUGHS] So the garment in the middle is a garment that was output from Lara and I—it’s an AI-output garment that’s designed in 3D, which is how our program works. So we design AI fashion through 3D technology. And the background is an MNIST network, so it’s basically the brain of an AI, so to speak, mapped out.

Kirkpatrick: So it’s really sort of background music in a way, yeah.

Mudgett: Yeah.

Kirkpatrick: So the thing that I like about the way you just said that is, one of the things that many, many people—especially people who are not, like those of us at Techonomy, talking about this stuff every day—don’t realize is that AI is really just iterative technology. That’s really all it is. It’s just technology that can iterate really fast and learn—I mean, the best AI is machine learning, which really just learns by doing things over and over and over and getting better because of the sheer number of times, the amount of data it’s applied itself to. And it is true that people also have to iterate a lot. So what you’re saying is—and give an example really fast before we go back to Pip, because I know you have a specific thing where you said you had given students at the school you went to a tool that allowed them to find outcomes faster using AI.

Mudgett: Yeah. I mean, one of the first things I did when I developed this AI was give it—I actually let a bunch of people who work in fashion and design use it, and I said, “What’s beautiful to you? Put it in here, train this, teach it, interact with it.” In its very early stages it was basically allowing designers to skip ahead—you know, typically, when you’re making fashion, a designer would draw 50 sketches, and you really put your heart and soul into every one, and you have to dig into what’s beautiful to you, and then at the end you get one, and you pick it. And I think that process is where you find writer’s block, creative block, any of that—it’s the part where people tend to get stuck and not allow themselves to be as creative as they could be. And this tool helps—I mean, it seemed to really help that process when I gave it to them.

Kirkpatrick: Good, okay. So Pip, what are you doing, what are you working on, and how do you introduce ambiguity and serendipity into what exactly, first of all, and if you’ve got a slide, put it up.

Mothersill: Sure. So this is one of the tools that I have made, and I’ve actually customized it using some of the Techonomy words. So, while iteration is great for giving us tons and tons of fast results and output that we can optimize toward a certain end point, the underlying technology behind that can also lead to a real homogenization of ideas and design. Designers that I work with at IDEO say that we’ve reached the Pinterest singularity, because if we were all to go and do a project together and look on Pinterest for inspiration, we’d all come back with things that looked relatively similar. And then we’d put it all back into Pinterest, it would all learn from it again, and everything just ends up looking very similar. And so this is where you’ve got to think about getting rid of this creative block by critiquing whether a tool that optimizes toward a certain end point is really valuable in finding those first creative questions that drive you into new territory.

And so I have developed a couple of digital tools that use what I call structured serendipity. They use randomness, random algorithms, but they also use natural language processing to select the content that they randomly juxtapose. So you see, this tool here creates a very simple, inspiring prompt from a random selection of words, but those words are chosen from text that you give it. So I gave it all of the titles from the talks at Techonomy today. It’s using words that are floating around our heads anyway, but juxtaposing them in new ways to help us reframe the concepts themselves.
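[Mothersill’s actual tool isn’t described in implementation detail here, but the “structured serendipity” idea she outlines—randomly juxtaposing content words drawn from text you supply—can be sketched in a few lines. Everything below is an illustration, not her code: the stopword filter stands in for whatever natural language processing her tools actually use, and the prompt template is invented.]

```python
import random
import re

# A small stopword list stands in for real NLP content selection;
# the actual tool's language processing is not described in the talk.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "on",
             "with", "is", "are", "how", "what", "our", "we", "you"}

def extract_content_words(text):
    """Pull the 'meaningful' words out of source text such as talk titles."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    return sorted({w for w in words if w not in STOPWORDS and len(w) > 3})

def serendipity_prompt(text, n_words=3, rng=random):
    """Juxtapose a few randomly chosen content words into a provocation."""
    pool = extract_content_words(text)
    chosen = rng.sample(pool, min(n_words, len(pool)))
    return "What if " + " / ".join(chosen) + " belonged together?"

# Feed it, say, the day's talk titles, as Mothersill describes doing.
titles = "The Future of Trust. Reinventing Work. AI and the Creative Mind."
print(serendipity_prompt(titles))
```

The structure is the point: the randomness is constrained to words already “floating around our heads,” so the juxtapositions feel surprising but relevant rather than purely arbitrary.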

Kirkpatrick: Talk about some ways in which you feel that technology as it’s being designed today lacks sufficient serendipity.

Mothersill: Yeah, so I mean, that Pinterest example—I think a lot of, you know, if you go into Google and you Google anything, you’re never surprised. It’s always trying to give us what it thinks we want. And these systems are really user-friendly—potentially too user-friendly. We need a little bit of conflict and crisis to help us explore new ideas in new ways and be surprised, bringing that unexpected dimension. And so, you know, we have so much incredible data out there, and there are so many potential answers within that data—how can we ask really interesting questions to get to new ideas through it?

Kirkpatrick: So Joshua, I think we talked on the call about, you know, AI is just a tool like Photoshop, right? Talk about that, because there is such mythologizing around AI, and it’s really—even this cover of the “New Yorker” I mentioned this morning at the opening, where the robot is walking a dog and the person is walking a robot dog. It’s like, it sort of suggests we’re moving to a world where AI can do everything, which is just total—a crock, but how do you think of AI, and how is it like Photoshop for you?

Mudgett: Well, I think Photoshop is an iterative tool, right? It made the process of creating art so much faster, and creating beautiful renditions of what you imagine faster. You know, I always say that I think of AI as like a baby Superman. It’s very strong, but it needs somebody to take care of it. We don’t live in an age of “I, Robot”—not yet. And it is a tool that lends itself to some bad things quite easily. It’s similar, for us nerds, to the Force in “Star Wars,” right? It can be used for good or bad. And I think that the fear around it is justified, but as long as we remember that this is a tool which we created and which needs us [LAUGHS]—we’ll be just fine.

Kirkpatrick: Okay. And I want to go to the audience, but one of the really interesting things you and I discussed, Pip, was how different your thinking about technology/AI is from what surrounds you at MIT. So characterize the alternate view that you are surrounded by.

Mothersill: I mean, that’s—I think—you were also at MIT as well. So there are probably many, many views—I wouldn’t want to generalize—but I think that one of the things that I see as an issue is that we now know that we don’t know everything, but we think that we can figure everything out. And I am not sure that truth works like that. I feel like it’s going to be a continual exploration, and—yeah, leaving room for unexpected interjections within that search is really important. And so I think that’s the difference in approach that I see: leaving space for the unexpected.

Kirkpatrick: Good. And when you say “we” think we can figure everything out, you really mean the engineering mindset that believes anything can be engineered away, with Ray Kurzweil the ultimate example—the convergence of man and machine, which he presents as probably a good thing. You know, it’s so facile, but it is really an overwhelming meme within a lot of the engineering community. And you would agree, as an engineer?

Mudgett: I think that—there’s this old assumption in machine learning that the training sets are fine. The training sets are fine. I mean, you’re right: once a data set is put out in the world, we keep it for a very long time, there’s never something new, and there’s not a lot of backtracking in the way that we’ve constructed these training models. And what if we went into these old training models and put something surprising in there? Yeah, I think that’s very interesting—and it does not happen in the MIT community very much.

Kirkpatrick: Okay, let’s quickly get some quick audience feedback or interaction, if there is an impulse for that. Anybody have a comment or a question, something they didn’t agree with? Yeah.

Zamchick: Hi, I’m Gary Zamchick.

Kirkpatrick: An artist, a designer.

Zamchick: I’m an artist; I work at Cornell Tech as strategic designer in residence. My question is, IBM has technology that lets you create hit songs, and they pitch it as a tool for helping people write hits. It analyzes tons of hit songs, figures out what makes them resonate, and can play that back. So my question is, how does the artist take ownership of a hit song generated in that manner, and also, how is that song going to be appreciated by audiences without an effective backstory for the artist’s inspiration? Because I think that type of analysis just takes away the backstory and the excitement of the song.

Mudgett: I mean, I have an answer I’d like to give. I think that—and this may be a bit controversial—but if you think of how a musician makes a song, the way that general intelligence works for a human is, they think back to all the music they’ve heard in their lives. There’s no such thing as a really original type of music, because we’re all referencing things that we’ve experienced throughout our lives as unique, individual people. A tool like this one from IBM does part of that process for you, and maybe there’s slightly less agency, slightly less ownership, but I’d argue that there’s still ownership even when that process is done for you.

Mothersill: Yeah, and I think this is where I’m absolutely not against technology and AI and those sorts of things, because we do learn from the information that we have experienced, and so what these tools and technologies can do is widen that amount of information that we can consider and be inspired by. But then I think it’s just managing the sort of—the space at which we can intervene in the output of that process, and allowing for not just, “Give me the answer,” but “Give me some information that I can think about and maybe an interesting, provocative structure that can guide me to new ways to consider this information.”

Mudgett: Like, if you’ve ever had to write a paper, and you look down and think, “My god, if somebody could just write the first sentence for me, I could get this started,” that’s—

Kirkpatrick: All right, we actually don’t have time, because I’m going to get in trouble now if I don’t wrap this, but that was really interesting. We need more discussions like this, and to carry it further and to really give more detailed examples, but you guys are both on the right track, and I’m so pleased that you were able to join us. Thank you for coming down from Boston for this, and we’ll continue in further Techonomies.