Applicability Evaluation and Reflection on Artificial Intelligence-based "Image to Image" Generation of Landscape Architecture Masterplans

Huaiyu ZHOU, Shuangbin XIANG

Landsc. Archit. Front. ›› 2024, Vol. 12 ›› Issue (2) : 58-67. DOI: 10.15302/J-LAF-1-020094
Abstract

Artificial intelligence (AI) image generation is revolutionizing traditional workflows in the landscape architecture industry, among which the "image-to-image" generative adversarial network (GAN) exhibits particular potential to facilitate concept design, underscoring the importance of applicability evaluation from the users' perspective. This research aims to evaluate the quality of GAN-generated results, their effectiveness in integrating with design workflows, and landscape architects' acceptance of the results through image analysis and user surveys. The evaluation focuses on layout generation and masterplan rendering within the Pix2Pix–BicycleGAN workflow. The image analysis employed metrics including absolute and Euclidean block number distance, histogram distance, and the structural similarity index measure. Additionally, an online survey with two questionnaires was conducted to evaluate the visual realism of the GAN-generated results and user preferences for their color and texture. The findings indicate that GAN-generated layouts exhibit a high similarity to human-designed layouts, and that GAN-rendered masterplans fulfill the criteria for concept design and garner positive user acceptance. Finally, this study examines the intrinsic rationality of GAN generation methods and their limitations in professional ethics and data bias, reflecting on the gaps between current AI-assisted design methods and evidence-based design.

● Quantitative applicability evaluation of "image to image" landscape masterplan generation method

● Image analysis reveals a high similarity between GAN-generated and human-designed layouts

● User survey reveals a high visual realism and practitioners' high acceptance of GAN-rendered masterplans

● Identifies the intrinsic rationality of current GAN generation methods and the technical gaps between these methods and evidence-based design

Graphical abstract

Keywords

Landscape Architecture / Image Generation / Generative Adversarial Network / Artificial Intelligence-Assisted Design / Applicability Evaluation / Landscape Masterplan

Cite this article

Huaiyu ZHOU, Shuangbin XIANG. Applicability Evaluation and Reflection on Artificial Intelligence-based "Image to Image" Generation of Landscape Architecture Masterplans. Landsc. Archit. Front., 2024, 12(2): 58‒67 https://doi.org/10.15302/J-LAF-1-020094

1 Introduction

In recent years, the rapid development of image generation technologies and mapping tools driven by generative artificial intelligence (AI) has significantly impacted the traditional landscape design industry[1]~[3]. It is therefore pressing for landscape architects to delineate the relationship between image generation and landscape design and to explore potential opportunities for practice and research. Applicability evaluation by landscape architects, the users of image generation technologies, can help analyze their potential impact, optimize tool selection, and ultimately enhance design efficiency. Presently, image generation technologies are primarily applied to masterplan generation and perspective rendering in the landscape design workflow.
Research on masterplan generation primarily focuses on the "image-to-image" generative adversarial network (GAN). The application of these tools has developed from the generation of architectural floor plans[4]~[6] to the generation of building arrangements and massing relationships[7]~[10]. In recent years, relevant research on masterplan generation has also been initiated in the field of landscape architecture: Huaiyu Zhou et al. established a labeled masterplan dataset and adopted CycleGAN for landscape masterplan recognition and rendering[11]; Guangbin Qu et al. used CGAN to generate functional layouts of residential-area landscapes that meet design specifications[12]; Ran Chen et al. utilized StyleGAN2 to generate diverse design schemes, discovering that GAN models can recognize and extract high-dimensional abstract features of vegetation, water bodies, pavements, and road networks[13]; Guanjie Zhao explored automated design processes for small-scale landscapes by coupling Pix2Pix and Stable Diffusion models[14]; and Weishi Zhou used Pix2Pix to generate masterplans for urban pocket parks[15]. Despite in-depth discussions of training principles, datasets, and generation methods in existing studies, several issues remain: there is a lack of publicly accessible landscape masterplan datasets, which limits the diversity of training data; the scale of the generable masterplans is constrained, mainly suiting small- and medium-sized green spaces; targeted, systematic quantitative evaluation of GAN-generated masterplans is insufficient, lacking user-friendly evaluation metrics; and user-side surveys are limited, making it difficult to obtain usage evaluations.
① CycleGAN (Cycle Generative Adversarial Network) enables unsupervised image-to-image translation between two different image domains without requiring paired training data, making it suitable for style transfer.
② CGAN (Conditional Generative Adversarial Network) introduces additional conditions or labels in the image generation process, ensuring that the generated results are influenced not only by random noise but also by specific conditions.
③ StyleGAN (Style Generative Adversarial Network) enhances the diversity of generated images through the training of style vectors, excelling in image detail and large-scale dataset processing.
④ Pix2Pix ("Image-to-Image" Generative Adversarial Network) generates corresponding output images based on input ones, such as converting black-and-white images to color or converting sketches to photos, excelling in image synthesis tasks.
Relevant research and applications of rendering focus on two major "text-to-image" tools: Midjourney and Stable Diffusion. The Midjourney model can generate highly refined and realistic human perspective views or bird's-eye views on web platforms using various prompts, making it user-friendly. In contrast, the open-source Stable Diffusion model can not only generate images through keywords but also offer "image-to-image" and "model-to-image" training functions, allowing designers to add constraints, thereby gaining more popularity. Currently, a Stable Diffusion-based workflow for architectural form conception and modeling has been established[16] [17].
As technology iterates and advances, design workflows have evolved from hand-drawing to CAD drafting and then to parametric design with Grasshopper. Designers have facilitated new workflows by integrating emerging technologies in practice and actively conducting evaluations[18]. This study focuses on GAN-based landscape masterplan generation methods ("GAN generation methods" hereafter), comprehensively assessing their technical applicability from the perspective of landscape architects to provide references for tool selection. The core advantage of GAN generation methods in "image-to-image" tasks is their mapping efficiency, which reduces the time consumption on concept comparisons and repeated rendering[11]. Therefore, this study aims to evaluate the quality of the GAN-generated results, their effectiveness in integrating with design workflows, and the landscape architects' acceptance of the results through image analysis and user survey.

2 Evaluation Object: Pix2Pix–BicycleGAN Workflow

This study focuses on the applicability evaluation of two key tasks in the Pix2Pix–BicycleGAN landscape masterplan generation workflow—layout generation and masterplan rendering. GAN-generated layouts are similar to functional bubbles and schematic sketches in design education, representing an intuitive and straightforward way of thinking that forms the basis for design iteration and adjustment. GAN-rendered images add details of color and texture to the abstract layout, enhancing its readability. Pix2Pix[19] is a widely used model for such tasks in the GAN field, while BicycleGAN[20], an improvement on CycleGAN[21], introduces additional variables and constraints, improving the model's performance on multimodal data and high-resolution images and supporting the output of various rendering results. Due to the limited types of masterplans in the collected and labeled dataset, this workflow is presently applicable mainly to small- and medium-scale landscapes[11][13]~[15] (Fig.1).
Fig.1 Example of layout generation and masterplan rendering in the Pix2Pix–BicycleGAN workflow.

2.1 Generating Layouts in Various Styles

By inputting the site boundary into the Pix2Pix model, layouts in various styles (mixed, curvilinear, polyline, and organic styles) containing different land-use types can be generated, including green spaces, paths, activity nodes, small architectural features, and waterscapes (Fig.1). The evaluation of GAN-generated layouts focuses on their morphological similarity and visual realism compared to the human-designed layouts.
In this study, a total of 2,725 human-designed landscape masterplans were collected, with training sets of 2,670, 916, 770, and 954 images for mixed, curvilinear, polyline, and organic styles, respectively. An additional validation set of 85 masterplans was reserved for evaluating the generation results. The converted site layouts in PNG or JPEG format display the site boundary filled in black with entrance locations marked with blue circles (their size indicating entrance level). Based on the four styles, 340 GAN-generated layouts (85×4) were gathered for subsequent evaluation. Landscape architects, after comparing multiple GAN-generated layouts, can continue to adjust the formal design, supplement planting, refine land use division, and form more refined site layouts based on project requirements and personal experience, using these as inputs for the masterplan rendering.
⑤ The Mixed training set is a collection of the curvilinear, polyline, and organic training sets.
⑥ For specific model training methods, see Ref. [11].
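The input format described above (site boundary filled in black, entrances as blue circles whose size indicates entrance level) can be sketched with Pillow. The coordinates, entrance levels, and radius encoding below are illustrative assumptions, not the study's actual preprocessing code.

```python
from PIL import Image, ImageDraw

def make_layout_input(boundary, entrances, size=256):
    """Rasterize a site boundary and its entrances into the model's input
    format: white background, boundary polygon filled black, entrances drawn
    as blue circles whose radius encodes the entrance level (an assumed
    encoding). Coordinates are in pixel space."""
    img = Image.new("RGB", (size, size), (255, 255, 255))
    draw = ImageDraw.Draw(img)
    draw.polygon(boundary, fill=(0, 0, 0))  # site boundary filled in black
    for (x, y), level in entrances:
        r = 4 + 3 * level  # hypothetical mapping from entrance level to radius
        draw.ellipse([x - r, y - r, x + r, y + r], fill=(0, 0, 255))
    return img

# Hypothetical square site with one primary and one secondary entrance
site = [(40, 40), (216, 40), (216, 216), (40, 216)]
img = make_layout_input(site, [((128, 40), 2), ((216, 128), 1)])
```

The resulting image could then be saved as PNG or JPEG and fed to the Pix2Pix generator.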

2.2 Rendering Masterplans with Various Colors and Textures

Landscape architects can input the adjusted layouts into BicycleGAN to generate rendered masterplans with different colors and textures[20], facilitating efficient communication of design concepts with clients. The evaluation of this task primarily focuses on the similarity between GAN-rendered masterplans and manually rendered masterplans (such as those colored by hand or using tools like Adobe Photoshop) and user preferences for colors and textures. The dataset included 325 landscape layouts, with a training set of 300 layouts and a validation set of 25 layouts[11]. Since BicycleGAN generates multiple rendering results, one warm-toned rendering and one cold-toned rendering were selected for each layout, totaling 50 outputs for evaluation. The rendered masterplans generated using the same layouts with Pix2Pix and CycleGAN models from the author's previous research[11] were also included in further image analysis and user survey.

3 Evaluation Methods

The evaluation of the similarity between GAN-generated and human-designed layouts, as well as between GAN-rendered and manually rendered masterplans, focuses on the features of the images themselves and is therefore suitable for image analysis. Conversely, the evaluation of the visual realism of the generated layouts and preferences for the rendered masterplans is better suited to user surveys. Hence, this study integrates image analysis and user surveys to establish an evaluation metric system.

3.1 Image Analysis Metrics

3.1.1 Evaluation Metrics for Layout Generation

(1) Block number distance
The block number (BN) of the five generated land-use types most directly reflects the morphological diversity of GAN-generated layouts. The corresponding block number distance (BND) can be used to assess the differences between the 340 validation-set layouts generated by Pix2Pix and the human-designed layouts. BND evaluation includes the calculation of absolute BND and Euclidean BND. In this study, the absolute BND was used to compare the difference in the BN of each land-use type (manually counting complete color blocks with the same RGB value) between the generated layouts and the human-designed layouts for each single style. Additionally, to further analyze the impact of the different numbers of layouts in each style's training set, this study conducted a cluster analysis of absolute BND and Euclidean BND to compare the differences in the clustering degree of land use division among the four styles. The midpoint clustering regions of the two sets of data are presented in a cluster analysis graph.
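Although the study counted blocks manually, the counting and both BND calculations can be sketched in Python. The connected-component counting via SciPy, the example land-use color, and the block-count vectors below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy import ndimage

def count_blocks(img, color):
    """Count connected blocks of one land-use color (4-connectivity),
    mimicking the manual count of complete same-RGB color blocks."""
    mask = np.all(img == color, axis=-1)
    _, num = ndimage.label(mask)
    return num

def block_number_distances(bn_gen, bn_ref):
    """Per-type absolute BND and the Euclidean BND over all five types."""
    diff = np.abs(np.array(bn_gen) - np.array(bn_ref))
    return diff, float(np.sqrt(np.sum(diff.astype(float) ** 2)))

# Hypothetical 8x8 layout with two separate green-space blocks (RGB 0,255,0)
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[0:2, 0:2] = (0, 255, 0)
img[5:7, 5:7] = (0, 255, 0)
n_green = count_blocks(img, (0, 255, 0))

# Hypothetical BN vectors (green space, activity node, feature, path, water)
abs_bnd, euc_bnd = block_number_distances([16, 7, 12, 2, 2], [15, 6, 12, 2, 1])
```

Here `abs_bnd` gives the per-type absolute distances and `euc_bnd` the single Euclidean distance used in the cluster analysis.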
(2) Histogram distance
An image histogram shows the frequency distribution of different RGB pixels in an image. Histogram distance (HistD) is a key metric for measuring the pixel distribution differences between two images[22]. Based on the one-to-one correspondence between the RGB values in the GAN-generated layouts and the land-use types, HistD can effectively assess the variation in land use division and area proportion between GAN-generated layouts and human-designed layouts. In this study, Bhattacharyya distance was used to quantify the distance between two normalized histograms:
HistD = 1 − Σ_i √(h1(i) · h2(i)),
where h1(i) and h2(i) represent the frequency of the RGB value i in the histograms of GAN-generated layouts and human-designed layouts, respectively. The range of HistD is [0, 1], with 0 indicating identical histograms, and values less than 0.5 indicating a similar overall trend (Fig.2).
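This histogram comparison can be sketched with NumPy. The 32-bin RGB histogram below is an illustrative choice; the study's exact binning is not specified.

```python
import numpy as np

def hist_distance(img1, img2, bins=32):
    """Bhattacharyya-style histogram distance between two RGB images:
    HistD = 1 - sum_i sqrt(h1(i) * h2(i)) over normalized histograms.
    0 means identical histograms; values below 0.5 indicate a similar
    overall trend."""
    h1, _ = np.histogramdd(img1.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    h2, _ = np.histogramdd(img2.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    h1 /= h1.sum()  # normalize so each histogram sums to 1
    h2 /= h2.sum()
    return 1.0 - float(np.sum(np.sqrt(h1 * h2)))

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
d_same = hist_distance(a, a)                  # ~0 for identical images
d_diff = hist_distance(a, np.zeros_like(a))   # near 1 for very different ones
```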
Fig.2 Diagram of block number distance and histogram distance methods.

3.1.2 Evaluation Metrics for Masterplan Rendering

The structural similarity index measure (SSIM) is a widely used tool for image similarity assessment, measuring the perceptual differences between two homogeneous images (x, y) that have undergone different processes[23]. SSIM primarily evaluates the impact of luminance, contrast, and structural features on visual perception based on two-dimensional grayscale images. In this study, SSIM is calculated to assess the differences between GAN-rendered masterplans and those manually rendered by landscape architects:
SSIM(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ,
where l(x, y) represents luminance, c(x, y) represents contrast, and s(x, y) represents structure. The parameters α, β, γ are greater than 0 and typically take the value of 1. The range of SSIM is [0, 1], with 1 indicating identical structures and 0 indicating completely different structures.
Additionally, the previously mentioned HistD can also be applied to evaluate the color distribution differences between two homogenous rendered masterplans, which was thus included for masterplan rendering evaluation.
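The SSIM formula with α = β = γ = 1 can be sketched as follows. Note that practical implementations average SSIM over local sliding windows; this single-window global version, with the standard stabilizing constants from the SSIM literature, only illustrates the formula and is not the study's exact computation.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM on grayscale images with alpha = beta = gamma = 1,
    so SSIM = l(x,y) * c(x,y) * s(x,y). Stabilizers C1, C2 follow the
    conventional (0.01*L)^2 and (0.03*L)^2 choices."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    luminance = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)
    # With beta = gamma = 1, c(x,y)*s(x,y) collapses to this single ratio
    contrast_structure = (2 * cov + c2) / (vx + vy + c2)
    return luminance * contrast_structure

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64)).astype(np.float64)
s_same = ssim_global(img, img)        # identical images score 1
s_diff = ssim_global(img, 255 - img)  # inverted image scores far lower
```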

3.2 Metrics for User Surveys

To evaluate whether GAN-generated layouts can visually mimic the human-designed ones and to understand professionals' color and texture preferences for renderings created by mainstream GAN models including BicycleGAN, it is necessary to include the practitioners in the surveys[24]. From September 1 to October 31, 2023, the research team conducted two online surveys using Sojump, targeting teachers, students, and professional designers in landscape architecture and related fields. The surveys were distributed to the School of Architecture and Planning at Hunan University, the School of Architecture at Tsinghua University, and the Beijing General Municipal Engineering Design & Research Institute Co., Ltd. Respondents were required to indicate their years of study or professional experience to ensure the representativeness and reliability of the results.

3.2.1 Questionnaire One

Questionnaire One aimed to conduct a Turing test on GAN-generated layouts and evaluate practitioners' acceptance of them. The questionnaire included 30 layouts, of which 16 were randomly selected from the Pix2Pix-generated layouts of the validation set, and 14 were redrawn layouts of designs by renowned firms or designers. Respondents were asked to identify the images they believed were generated by AI, with no limit on the number of selections (Fig.3). The layouts were displayed in full screen on mobile devices at a resolution of 256 × 256 pixels, which is sufficient for this judgment.
Fig.3 Online survey Questionnaire One (orange numbers represent GAN-generated layouts).

3.2.2 Questionnaire Two

The objective of Questionnaire Two was to evaluate the acceptance of renderings generated by mainstream GAN models. The questionnaire provided 30 rendered masterplans (10 groups, each containing 3 images from Pix2Pix, CycleGAN, and BicycleGAN). Respondents were asked to determine whether the renderings met the standards for concept communication and to choose the best rendering in each group according to color and texture (Fig.4). To facilitate detailed comparison, the masterplans were enlarged to 1,024 × 1,024 pixels and displayed in full screen on mobile phones.
Fig.4 Online survey Questionnaire Two.

4 Evaluation Results

4.1 Image Analysis

4.1.1 Layout Generation Evaluation Results

A comparison between GAN-generated layouts and human-designed layouts reveals that the two exhibit statistically similar levels of diversity in land use BN, with significant similarity in land area proportions.
1) According to the Quantile-Quantile plots and Shapiro–Wilk test results, the BN of five land use types in 340 GAN-generated layouts and human-designed layouts all follow a normal distribution. The average absolute BND calculation results (Tab.1) show that for individual layouts, the differences in the number of the five land use types between GAN-generated and human-designed layouts are all less than 5. The main differences lie in the number of small architectural features, suggesting that GANs and designers exhibit similar diversity in land use division.
Tab.1 Average absolute BND between GAN-generated layouts and human-designed layouts (style columns give the average absolute BND of GAN-generated layouts)

Land use type | Average BN of human-designed layouts | Mixed | Curvilinear | Polyline | Organic
Green space | 15.6 | 1.4 | 2.5 | 3.2 | 1.7
Activity node | 6.5 | 2.1 | 2.5 | 3.2 | 2.1
Small architectural feature | 12.0 | 4.3 | 4.3 | 4.8 | 3.0
Path | 2.2 | 1.6 | 2.2 | 1.8 | 2.3
Waterscape | 1.6 | 2.8 | 2.7 | 1.9 | 2.0
2) To determine whether the differences in the number of layouts in the four styles' training sets would lead to significant differences in the BND results, this research further conducted a cluster analysis of the absolute BND and Euclidean BND for the four styles and five land-use types (Fig.5). The results show that, for the same type of land-use block, the distributions of the four styles generally exhibit a clustering trend. This indicates that the four styles have strong similarities in land use division, and the different quantities in the training sets did not significantly affect the training results.
Fig.5 Cluster analysis of absolute BND and Euclidean BND between GAN-generated layouts and human-designed layouts.

3) The average HistD values for the four styles are all less than 0.5: 0.41 (mixed style), 0.45 (curvilinear style), 0.41 (polyline style), and 0.43 (organic style), indicating a similar trend of the overall area proportion of different land use types in GAN-generated layouts to the human-designed layouts.
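The normality screening described in point 1) can be sketched with SciPy's Shapiro–Wilk test. The simulated per-layout green-space block counts below are a hypothetical stand-in for the study's actual BN data.

```python
import numpy as np
from scipy import stats

# Hypothetical green-space BN values across 340 layouts; the study ran this
# check (alongside Q-Q plots) for each of the five land-use types.
rng = np.random.default_rng(42)
bn_green = rng.normal(loc=15.6, scale=3.0, size=340)

stat, p = stats.shapiro(bn_green)
# Failing to reject at the 5% level is consistent with a normal distribution
is_normal = bool(p > 0.05)
```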

4.1.2 Masterplan Rendering Evaluation Results

The average SSIM and HistD values of the 50 rendered images were calculated, and the results are shown in Tab.2: the average SSIM values for the warm-toned and cold-toned renderings by BicycleGAN are 0.786 and 0.790, respectively, which are close to 1; the average HistD values for the two tones are 0.391 and 0.406, respectively, which are less than 0.5. Additionally, when comparing the rendering results of Pix2Pix and CycleGAN, it was found that their average SSIM and HistD values are slightly different from those of BicycleGAN, indicating a need to further investigate user preferences through surveys. Overall, the analysis results suggest that GAN-rendered masterplans are highly similar to those rendered by professional designers in terms of pixel distribution, structure, contrast, and luminance.
Tab.2 Average SSIM and HistD of GAN-rendered masterplans and human-designed masterplans

GAN model | Average SSIM | Average HistD
BicycleGAN (warm-toned) | 0.786 | 0.391
BicycleGAN (cold-toned) | 0.790 | 0.406
Pix2Pix | 0.783 | 0.359
CycleGAN | 0.795 | 0.436

4.2 User Survey

4.2.1 Results of Questionnaire One

A total of 192 valid responses were received for Questionnaire One, of which 105 respondents had a background in landscape architecture, with the remainder from related fields covering architecture, planning, and graphic design. Notably, 55% of respondents had over five years of professional experience, supporting the reliability of the results. The findings indicate an average probability of 54.7% that the 16 GAN-generated layouts were identified as AI-generated (Fig.6), only slightly higher than random guessing. Correspondingly, GAN-generated layouts had about a 45% probability of being mistaken for designer-created layouts; Layout No. 27 (GAN-generated) deceived more than 55% of respondents. Furthermore, human-designed layouts had an approximately 25% probability of being considered GAN-generated. Overall, GAN-generated layouts can confuse some respondents, and about 70% of respondents believe that GAN technologies have the potential to assist concept design.
Fig.6 Average probability of GAN-generated layouts and human-designed layouts being identified as AI generation output.

The study further communicated with respondents through phone calls, WeChat, and emails to understand how they distinguished whether a layout was AI-generated or human-designed. It was found that unreasonable details in functional design severely compromise the visual realism of GAN-generated layouts. The study categorized the defects in GAN-generated layouts into three aspects (Fig.7): 1) incomplete entrance, which is too small or lacks connections to internal roads, preventing access to the site; 2) discontinuous path, which may be interrupted by green spaces or outdoor facilities, impeding movement; 3) inaccessible node, an isolated space that cannot be reached via paths. The study further quantified the average occurrence of these defects in the 340 GAN-generated layouts. As shown in Tab.3, discontinuous path is the most prominent issue.
Fig.7 Examples of three kinds of defects in GAN-generated layouts and adjustment solutions from landscape architects.

Tab.3 Summary of defects in GAN-generated layouts (average number per layout)

Defect | Mixed | Curvilinear | Polyline | Organic
Incomplete entrance | 2.1 | 2.5 | 2.2 | 4.0
Discontinuous path | 3.3 | 4.8 | 5.8 | 5.0
Inaccessible node | 1.6 | 2.2 | 1.8 | 2.0

4.2.2 Results of Questionnaire Two

Questionnaire Two received a total of 422 valid responses, among which 233 respondents (55%) had a background in landscape architecture, and 155 respondents (37%) had more than five years of professional experience. The results indicate that 91% of respondents believe the quality of GAN-rendered masterplans meets the requirements for conceptual design refinement and communication. Regarding the renderings generated by BicycleGAN, Pix2Pix, and CycleGAN, 47% of respondents considered BicycleGAN to perform best in color and texture, suggesting that the model upgrade from CycleGAN to BicycleGAN was well received by users.

5 Conclusions and Discussion

This study introduced image analysis and user survey metrics to evaluate the applicability of GAN generation methods, aiming to fill the gap left by existing research, which primarily focuses on model training methods and lacks post-hoc evaluation. This research provides an accessible evaluation framework for "image-to-image" generative design research. The image analysis results show that both the similarity in land use diversity between GAN-generated and human-designed layouts and the similarity between GAN-rendered masterplans and designer-rendered ones reached a high level. The user survey results indicate that GAN-generated layouts are difficult to distinguish from human-designed ones, and that their rendered colors and textures are accepted by landscape architects. Referring to Yufan Zhu's explanation of the paradigms of landscape design thinking[25], the inherent rationality of the "image-to-image" model can be demonstrated from a historical perspective. Having moved beyond the limited styles of classical gardens, late 19th- and early 20th-century artists such as Piet Cornelis Mondrian, Jean Arp, and Roberto Burle Marx vividly illustrated the empathy between painting and landscape design: abstract forms guide the spatial structure of landscape[25]. As an interdisciplinary field of engineering and art, modern landscape architecture relies largely on intuition, experience, and emotion, so its forms emerge from nonlinear processes and are challenging to quantify. Artificial neural networks share similarities with human neurons, and efforts are being made to train them to learn how pioneer designers applied limitless styles to diverse sites. Although the GAN generation model involves black-box processes, this study provides quantitative support for the rationality of its internal logic.
This research has several limitations. First, it did not include an ethical evaluation of GAN generation methods. Currently, GAN generation methods have raised discussions in professional ethics about how to maintain designers' creativity, and in design education about how to reasonably integrate AI technology into coursework. Generally, design must address functional requirements according to regional and environmental contexts; however, GAN generation methods often lack an understanding of formal symbols shaped by complex historical and cultural influences. The questionnaires did not address the originality of GAN generation methods, and future research needs to collect user opinions on ethical issues. Second, the established evaluation framework does not consider the diversity of GAN-generated layouts and data bias. AI outputs are significantly influenced by their training data. Currently, the diversity of landscape masterplan datasets is severely lacking; in particular, there are few datasets for classical Chinese gardens compared with modern landscape projects. Applying these tools can therefore lead to homogenized design results, and future research needs to explore how to maintain design diversity. Taking design courses as an example, students who have not yet established a comprehensive knowledge system may lack the ability to judge dataset quality when using AI tools; simply using GAN-generated layouts to complete coursework could limit knowledge acquisition and design skill development. Furthermore, while the Pix2Pix–BicycleGAN workflow evaluated in this study is representative, it does not reflect the latest technological iterations. Future research could explore customized GAN models for specific regions or types of landscape design (e.g., classical Chinese gardens, Western modern landscapes), incorporating more data with regional features for model training and developing algorithms that can identify and emphasize these features[26].
In addition, evidence-based design challenges GAN generation methods due to their relatively low interpretability[27] [28]. Apart from morphology, the scientific thinking of "design with nature" requires integrating various factors (e.g., topography, soil, runoff, and vegetation) to justify design decisions. The application of physical models and monitoring technologies, including hydrodynamic modeling and Internet of Things (IoT) technology[29]~[31], helps increase the interpretability of design. Therefore, connecting the morphological expressions generated by GAN models with quantitative analyses (e.g., physical models) is a challenge that must be overcome to deeply integrate AI into design disciplines. The research team from South China Agricultural University has already attempted to couple GAN generation methods with evidence-based health design strategies to design age-friendly gardens[32] [33]. As the diversity of GAN-generated layouts increases, future use of multi-objective optimization algorithms to screen and improve layouts will help enhance the scientific rigor of design decisions[34]. With the ongoing updates of generative algorithms, there are opportunities to gradually integrate physical models and optimization algorithms with AI models, significantly improving the interpretability and applicability of GAN generation methods.

References

[1]
Bao, R. (2019) Research on intellectual analysis and application of landscape architecture based on machine learning. Landscape Architecture, 26 (5), 29– 34.
[2]
Zhao, J. , & Cao, Y. (2020) Review of artificial intelligence methods in landscape architecture. Chinese Landscape Architecture, 36 (5), 82– 87.
[3]
Zhao, J. , Chen, R. , Hao, H. , & Shao, Z. (2021) Application progress and prospect of machine learning technology in landscape architecture. Journal of Beijing Forestry University, 43 (11), 137– 156.
[4]
Huang, W., & Zheng, H. (2018). Architectural Drawings Recognition and Generation Through Machine Learning. In: P. Anzalone, M. D. Signore, & A. J. Wit (Eds.), Proceedings of the 38th Annual Conference of the Association for Computer Aided Design in Architecture (pp. 18–20). ARCADIA.
[5]
Nauata, N., Chang, K. H., Cheng, C. Y., Mori, G., & Furukawa, Y. (2020). House-GAN: Relational generative adversarial networks for graph-constrained house layout generation. In: A. Vedaldi, H. Bischof, T. Brox, & J.-M. Frahm (Eds.), Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I (pp. 162–177). Springer.
[6]
Newton, D. (2019) Deep generative learning for the generation and analysis of architectural plans with small datasets. Proceedings of 37th eCAADe and 23rd SIGraDi Conference, (2), 21– 28.
[7]
Chen, M. , Zheng, H. , & Wu, J. (2022) Computational design of multi-functional system based on generative adversarial networks: Taking the layout generation of Vocational and Technical College as an example. Architectural Journal, (S1), 103– 108.
[8]
Lin, W. (2020). Research on automatic generation of primary school schoolyard layout based on deep learning [Master's thesis]. South China University of Technology.
[9]
Sun, C. , Cong, X. , & Han, Y. (2021) Generative design method of forced layout in residential area based on CGAN. Journal of Harbin Institute of Technology, 53 (2), 111– 121.
[10]
Zhang, T. (2020). Experiments on generation of the arrangement of residential groups based on deep learning [Master's thesis]. Nanjing University.
[11]
Zhou, H. , & Liu, H. (2021) Artificial intelligence aided design: Landscape plan recognition and rendering based on deep learning. Chinese Landscape Architecture, 37 (1), 56– 61.
[12]
Qu, G. , & Xue, B. (2022) Generative design method of landscape functional layout in residential areas based on Conditional Generative Adversarial Nets. Low Temperature Architecture Technology, 44 (12), 5– 9.
[13]
Chen, R. , & Zhao, J. (2023) Generation and design feature recognition of landscape architecture scheme based on style-based generative adversarial network. Landscape Architecture, 30 (7), 12– 21.
[14]
Zhao, G. (2023). Research on application of generative model in landscape design [Master's thesis]. Shanxi University.
[15]
Zhou, W. (2023). Design research on pocket park plan layout generation based on deep learning [Master's thesis]. Chongqing Jiaotong University.
[16]
Huang, Y., & Zhou, Y. (2023). Exploration on the generative architecture design method with AIGC technology: A case of the overall design process of generating architectural image with prompt as a key word. Urbanism and Architecture, 20(15), 202–206.
[17]
Chen, J., Shao, Z., & Hu, B. (2023). Generating interior design from text: A new diffusion model-based method for efficient creative design. Buildings, 13(7), 1861.
[18]
Turrin, M., von Buelow, P., & Stouffs, R. (2011). Design explorations of performance driven geometry in architectural design using parametric modeling and genetic algorithms. Advanced Engineering Informatics, 25(4), 656–675.
[19]
Isola, P., Zhu, J.-Y., Zhou, T., & Efros, A. A. (2017). Image-to-image Translation With Conditional Adversarial Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1125–1134). IEEE.
[20]
Zhu, J.-Y., Zhang, R., Pathak, D., Darrell, T., Efros, A. A., Wang, O., & Shechtman, E. (2017). Toward Multimodal Image-to-image Translation. Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 465–476). Curran Associates Inc.
[21]
Zhu, J.-Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired Image-to-image Translation Using Cycle-Consistent Adversarial Networks. Proceedings of the IEEE International Conference on Computer Vision (pp. 2223–2232). IEEE.
[22]
Cha, S.-H., & Srihari, S. N. (2002). On measuring the distance between histograms. Pattern Recognition, 35(6), 1355–1370.
[23]
Hore, A., & Ziou, D. (2010). Image Quality Metrics: PSNR vs. SSIM. 2010 20th International Conference on Pattern Recognition (pp. 2366–2369). IEEE.
[24]
Geman, D., Geman, S., Hallonquist, N., & Younes, L. (2015). Visual Turing test for computer vision systems. Proceedings of the National Academy of Sciences, 112(12), 3618–3623.
[25]
Zhu, Y. (2022). Disordering and redirecting: Paradigm of design thinking in contemporary landscape architecture. World Architecture, (11), 36–37.
[26]
Jiang, F., Ma, J., Webster, C. J., Li, X., & Gan, V. J. (2023). Building layout generation using site-embedded GAN model. Automation in Construction, 151, 104888.
[27]
Li, P., Liu, B., & Gao, Y. (2018). An evidence-based methodology for landscape design. Landscape Architecture Frontiers, 6(5), 92–101.
[28]
Yang, Y., & Lin, G. (2020). The development, connotations, and interests of research on landscape performance evaluation for evidence-based design. Landscape Architecture Frontiers, 8(2), 74–83.
[29]
Zhou, H., Jiang, H., & Liu, H. (2021). Process visualization and performance evaluation of stormwater management in landscape projects based on IoT online monitoring. Chinese Landscape Architecture, 35(10), 29–34.
[30]
Zhou, H., & Liu, H. (2021). IoT-based operational information management for built landscape projects: From vacancy to approaches. Landscape Architecture Frontiers, 9(2), 83–95.
[31]
Zhou, H., Li, R., Liu, H., & Ni, G. (2023). Real-time control enhanced blue-green infrastructure towards torrential events: A smart predictive solution. Urban Climate, 49, 101439.
[32]
Li, H., Zhang, Z., Liu, K., Chen, W., Wei, W., Liu, X., Xie, J., Zhang, M., Huang, Z., Zhong, M., Cai, C., Huang, X., Hou, Y., Lin, X., Yu, S., Fang, Y., & Feng, X. (2023, November 25). Toward dynamic optimization: Combining AI and EBHDL for the elderly. American Society of Landscape Architects.
[33]
Chen, C., Li, H., Hou, Y., & Liu, J. (2023). Application progress of computer vision in the research on relationship between landscape and health. Landscape Architecture, 30(1), 30–37.
[34]
Liu, H., Jin, C., & Yang, Y. (2023). Study on the programming language and its organicity of architectural generative design. Urbanism and Architecture, 20(5), 182–186.

RIGHTS & PERMISSIONS

© Higher Education Press 2024