Between Architecture and Craft: Exploring AI and Creativity with Joshua Vermillion

AI is reshaping how artists and designers approach their work, and Joshua Vermillion stands at the forefront of this transformation, blending architecture, digital craft, and AI tools to explore new possibilities in design. In this interview, we dive into Joshua’s journey, from his early influences to his innovative use of AI. Whether you’re curious about the role of AI in architecture or simply looking for inspiration, Joshua’s insights offer a fresh perspective on the future of creative expression.

Artistic Journey & Background

Where are you from? Where did you grow up?

I grew up in the midwestern United States, not too far away from Chicago.

How did you first get interested in architecture? Was this your original background?

It sounds cliché, but I’ve always liked to create things. When I went off to university, I picked architecture as my major, and ever since, I’ve been trying to contribute to the “project of architecture.”

What inspired you to explore the relationship between technology and architecture?

This definitely came about when I was in grad school. I had a lot of questions about digital technologies in design, and I was fortunate to find faculty mentors who were right at the forefront of this discourse. I started going to ACADIA (Association for Computer Aided Design in Architecture) conferences and seeing where architecture was headed: digital fabrication, parametric design, building information modeling. From there I was hooked.

Visual Language & Process

How do you see AI changing the creative process compared to traditional methods?

I see several changes currently. First, generative AI speeds up the beginning of the design process, which relies on rapid ideation and iteration. This is further hastened by the ability to feed in sketches, photos, screen captures of CAD models, and so on, and let the AI models riff, render, add textures, and add atmospherics and lighting.

As these tools mature and become capable of augmenting more of our tasks, you will gradually see human labor shift away from directly executing specific tasks. This will allow us to do what humans still do best: using our judgment and situational awareness to frame problems, apply ethical frameworks, make decisions, set strategies, adapt to new circumstances, and navigate trade-offs. I don’t see AI replacing human intelligence at all; rather, our roles will probably gradually shift.

Can you walk us through how you typically start an AI-driven project?

It depends largely on the project, of course, but sometimes I’ll start by trying to articulate my ideas into a statement and then into a text prompt. From there I can let the diffusion model generate an enormous number of visual interpretations in a very short amount of time. Are the results what I expected? If not, I might revise the language of the prompt to be clearer. Oftentimes, though, I value the unexpected results more: the diffusion model becomes a provocateur and subverts my preconceptions about the design project.

Other times, I’ll start with a sketch or collage, and then feed the drawing into the diffusion model (like Midjourney, for example). I can have Midjourney describe the image, or use it to generate derivative imagery.
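A minimal sketch of this loop in Python may help make it concrete. Midjourney doesn’t expose a public API, so the open-source Hugging Face diffusers library stands in here; the model ID, prompt, and file names are illustrative assumptions rather than the actual workflow settings.

```python
# A sketch of the text-to-image / image-to-image loop described above.
# Midjourney has no public API, so Hugging Face `diffusers` stands in;
# the model ID, prompt, and file names are illustrative assumptions.
import torch
from PIL import Image
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Text prompt -> many quick visual interpretations.
t2i = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-2-1"
).to(device)
prompt = "a timber pavilion with a porous, light-filtering canopy at dusk"
for i, img in enumerate(t2i(prompt, num_images_per_prompt=4).images):
    img.save(f"idea_{i}.png")

# 2) Hand sketch -> derivative imagery. Reuse the loaded weights;
#    `strength` controls how far the output drifts from the sketch.
i2i = AutoPipelineForImage2Image.from_pipe(t2i)
sketch = Image.open("sketch.png").convert("RGB").resize((768, 768))
rendered = i2i(prompt, image=sketch, strength=0.6).images[0]
rendered.save("sketch_rendered.png")
```

Revising the prompt string and regenerating is the cheap, fast iteration described above; raising or lowering strength is the dial between faithfully rendering the sketch and letting the model act as provocateur.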

What’s one challenge you’ve faced when working with AI, and how did you overcome it?

Challenges pop up when you try to use AI without understanding its limitations. The biggest of these is when the model gives you exactly what you asked for, but not really what you wanted. Communicating with a computer through natural language can be frustrating at times (just as it can be with a human). To overcome such challenges, you have to understand, and then learn to value, the diffusion model’s generative capacity.

I really enjoyed my collaboration with Harper’s Bazaar Magazine in Serbia, but we faced constant challenges throughout the collaborative process. We made Harper’s first gen-AI cover and editorial. As a team (photographer, model, editor, and me), we quickly decided to combine photography with gen-AI to create a series of hybrid visuals. The images blended traditional photography of a model wearing real wardrobe into a series of fictional settings—landscapes, architectures, and props—that I generated in Midjourney. We were careful not to displace any human labor in this project; in fact, we kept having to add human talent to overcome additional hurdles in the process. Because this was the first time I had tried this, we encountered a number of problems I hadn’t anticipated, which required more team members. This project demonstrated to me, in a tangible way, that my future workflows are hybrid workflows. Generative AI doesn’t replace my other tools but rather complements them, and as a result, they augment me and my work.

Are there any unexpected discoveries you’ve made while experimenting with AI?

Many, but the first was probably when I asked ChatGPT whether it could write a script to automate some procedural modeling for me in CAD software. It actually worked, which was novel and surprising to me back in 2022. Now we’ve reached the point where there’s even a term for it: “vibe coding.” But I think this gets at an important point about gen-AI. For those who are great at writing code, LLMs are no replacement. For those who like writing code but aren’t as good at it, LLMs can help troubleshoot and debug their scripts. And those who know nothing about scripting can now outline a script in natural language and have an LLM write the JavaScript or Python code for them.
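As an illustration of the kind of small procedural-modeling script an LLM can draft from a natural-language outline, here is a self-contained Python sketch that writes a twisting stack of floor plates to an OBJ file. The geometry and output format are assumptions chosen to keep the example standalone; a real script would typically target the CAD package’s own scripting API.

```python
# A self-contained example of the kind of procedural-modeling script an
# LLM can draft from a plain-language outline. The twisting-tower
# geometry and OBJ output are illustrative assumptions; in practice the
# script would call the CAD package's own scripting API.
import math

def twisted_tower_obj(floors=20, width=10.0, depth=10.0,
                      floor_height=3.5, twist_per_floor_deg=4.0):
    """Return OBJ-format text for a stack of rotated floor plates."""
    lines = []
    for n in range(floors):
        angle = math.radians(n * twist_per_floor_deg)
        z = n * floor_height
        c, s = math.cos(angle), math.sin(angle)
        # Four corners of the plate, rotated about the vertical axis.
        for dx, dy in [(-width / 2, -depth / 2), (width / 2, -depth / 2),
                       (width / 2, depth / 2), (-width / 2, depth / 2)]:
            x, y = dx * c - dy * s, dx * s + dy * c
            lines.append(f"v {x:.4f} {y:.4f} {z:.4f}")
        base = 4 * n + 1  # OBJ vertex indices are 1-based
        lines.append(f"f {base} {base + 1} {base + 2} {base + 3}")
    return "\n".join(lines)

with open("tower.obj", "w") as f:
    f.write(twisted_tower_obj())
```

Opening tower.obj in any mesh viewer shows the twist, and changing a single parameter like twist_per_floor_deg regenerates the whole stack, which is exactly the kind of parametric iteration being described.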

How has introducing AI into your teaching reshaped your students’ creative process?

There are several ways that these tools are shifting my students’ creative work. First, I would say that it is important for us in schools and universities to come to creative terms with gen-AI. We need to be upfront and talk about these tools; treating them as taboo is counterproductive to learning how they can be used responsibly. We should also be upfront about their usage. Transparency is key, especially as machine learning is integrated into more of our current software tools and apps.

When my students use gen-AI in my design studios, I encourage them to experiment with the various models out there and to find ways to connect them with their existing workflows. For instance, my students still have to quickly sketch ideas and build models. Now they can feed their sketches and model photos into a diffusion model and render them to explore material strategies, atmospherics, site, scale, time of year, and time of day. Gen-AI doesn’t replace our traditional tools and media, but rather can connect with them and augment them. I also want them to see diffusion models as a partner in their brainstorming and design thinking, one that can play devil’s advocate when interpretations of their prompts subvert a student’s preconceptions of the design problem or lead them to think about a design response in different ways. If AI doesn’t think like we do, then I say let’s make that a feature rather than a bug. Already I see a change in my design students: they tend to write more and speak the language of design better now that they have to describe their goals, desires, and concepts in natural language to the machine.

Beyond just integrating these tools into their traditional workflows, I’m also interested in having students push further and ask how AI could fundamentally change how we design or what we can design, interrogating gen-AI as a new medium with its own affordances and biases. Every spring, I teach a seminar course about generative AI, and we ask this question while reading and discussing essays about AI. We have very interesting and fruitful conversations and debates about intelligence, labor, creativity, and other key topics.

Vision & The Future

How do you see your role evolving as AI continues to develop?

A central theme in how I’ve always worked has been the investigation of material and lighting effects. This is something that has stuck with me ever since working with a carpenter, followed by making furniture pieces, and then room-scale spatial installations with students and colleagues. I continue this line of thought with AI. The latent diffusion models, like Midjourney, are so interesting to experiment with—before ever setting foot in the workshop, or spending money on materials. I find that these AI-generated images are good “sketches” in the sense that they tease my imagination as they quickly simulate and visualize my ideas.

How do you think AI will influence the future of architecture and design education?

I would say “buckle up” for a fast and potentially turbulent ride. The changes (already quick) seem to be accelerating. I would emphasize life-long learning (or learning how to learn) in order to stay on top of novel technologies and stay relevant as the technological landscape shifts underfoot. Some of these technologies have significant and steep learning curves; others are simply evolving so fast that it is hard to keep up, so digital literacy is important for navigating this future.

The bigger shift that I anticipate from a pedagogical perspective is the slow devaluing of education as skills-based workforce training (those jobs are the first to become obsolete), and a shift toward emphasizing the application of innately human characteristics such as empathy, ethics, critical thinking, situational awareness, aesthetics / taste, and framing / analyzing problems. As specific tasks become obsolete or automated, we need to lean into the important parts of design that can’t be automated.

Are there any common misconceptions about AI in art that you’d like to clear up?

A big misconception I’ve noticed is that the general public thinks of art and creativity as (1) being solely the domain of a human author, and (2) reliant only on analog tools such as a pencil or paintbrush. Of course, anyone who actually studies art’s history and theoretical underpinnings knows that there are many arguments refuting both of those notions. The public needs a deeper understanding and education about art, and as an architect, I find this very puzzling.

Architects had these conversations thirty years ago during the digital revolution, and now there’s no debate that digital technologies and creativity go hand-in-hand. Architecture and other design fields use advanced technologies all the time. And perhaps in the not-too-distant future, we might see that intelligence and creativity aren’t exclusive to humans—that there might be other, non-biological forms of intelligence and creativity. I try to steer away from such anthropocentric / pre-Copernican / narrow-minded stances, such as humans being the only beings in the universe who can create “art.” At one point in time, humans thought the entire universe revolved around us. Now we call that era the Dark Ages.

What advice would you give to artists or designers just starting to explore AI?

Engage. Be critical, but engage with these tools while you critique.

Looking ahead, what excites you the most about the future of AI in creative fields?

Let me start by acknowledging that the majority of traditional media and tools in our creative fields won’t go away. AI doesn’t make tangible sculptures or paintings. It can’t perfectly or accurately replicate a moment in time as well as a photograph can. So from this basic premise, I think technologists need a bit of humility when talking about these new tools—generative AI in particular.

For those of us pushing at the boundaries of creativity and technology, I think we have to ask ourselves what role we see, or want, AI to play in augmenting our work. Is it simply a tool? Or a partner? These are certainly provocative questions for the future. Authorship is a problem insofar as designers and companies need to be able to claim ownership of their intellectual property, but that problem lies ahead of us. No one right now would design an entire project using only images from a generative-AI diffusion model—designing something as complex as architecture still requires substantial human input. Design is much more than a few flashy quick images. Even if an architectural design is very derivative of an AI-generated image, the design process is a highly transformative process conducted by humans and still protected by copyright.

However, in the future, these systems will probably become much more robust and sophisticated, and we will really need to ask some tough questions about authorship and ownership. Already, there is a blurring taking place between completely new generative-AI tools and workflows and the integration of more AI tools into our traditional workflows and software. It’s actually really hard to use Photoshop (for instance) without relying on tools that were trained through machine learning (masking, selecting, background removal, neural filters, upscaling, and now even generative fill). As these technologies mature, AI assistance with many of our tasks will probably become much more pervasive and ubiquitous. It won’t be black and white; it will be much more nuanced. Perhaps the future of design will involve humans and machines collaborating and co-creating our future.

What services do you provide? How can people find you?

My day job as a professor is teaching and scholarship, so I’ve always spent a significant part of my time explaining and demystifying big ideas and concepts for audiences. Lately, that has translated into speaking about AI around the globe. I also exhibit creative work internationally and sometimes collaborate with other individuals or companies. Besides speaking, I consult and run workshops for companies.
