AI for conceptual architecture: Reflections on designing with text-to-text, text-to-image, and image-to-image generators
Anca-Simona Horvath, Panagiota Pouliou
In this paper we present a research-through-design study in which we employed text-to-text, text-to-image, and image-to-image generative tools for a conceptual architecture project submitted to the eVolo skyscraper competition. We trained these algorithms on a dataset that we collected and curated, consisting of texts about and images of architecture. We describe our design process, present the final proposal, reflect on the usefulness of such tools for early-stage design, and discuss implications for future research and practice. By analysing the results from training the text-to-text generators, we were able to establish a specific design brief that informed the final concept. The results from the image-to-image generator gave an overview of the shape grammars of previous submissions. The results were intriguing and supported creativity: the tools helped us gain insight into historical architectural data, shape a specific design brief, and provoke new ideas. Reflecting on our design process, we argue that language takes on a new role when employing such tools and that three layers of language intertwined in our work: architectural discourse, programming languages, and annotations. We present a map that unfolds how these layers came together, as a contribution to making machine learning more explainable for creatives.
Machine learning / StyleGAN2-ADA / RNN (TensorFlow) / VQGAN+CLIP / AD journal / eVolo / Conceptual design / Architectural design