Cerebral organoids and organoid intelligence: ethical challenges and governance pathway

Huiyu Luo, Xiangdong Xie

Front. Med. 2025, Vol. 19, Issue 6: 1311–1319. DOI: 10.1007/s11684-025-1193-8
PERSPECTIVE


1 Introduction

Cerebral organoids, three-dimensional (3D) brain models derived from human pluripotent stem cells, have become indispensable tools in neuroscience. By recapitulating aspects of embryonic brain development in vitro, they offer greater physiological relevance than traditional two-dimensional cultures, providing critical insights into neurodevelopment and disease [1–3]. Researchers see them as a bridge between in vitro studies and human-like brain function [4,5]. This has led to the emergence of organoid intelligence (OI) [6], a multidisciplinary field defined by Smirnova et al. as the development of biological computing using 3D brain organoids and brain-machine interface technologies. OI is part of a broader shift toward biological computing [7], which uses biological molecules or systems to perform computational tasks. Specific implementations, such as Brainoware, embed brain organoids into computing circuits to perform tasks like speech recognition [8]. This integration is usually mediated by a brain-computer interface (BCI), a system that detects brain activity and translates it into instructions for external devices [9]. This evolution moves from “Human/Machine” to “Bio-Algorithmic Hybrids,” encompassing OI, BCIs, and Brainoware.

However, increasing organoid complexity escalates ethical controversies, particularly regarding the possibility of “consciousness” and the associated question of their “moral status” [10,11]. Although empirical evidence is lacking, this potential alone introduces a core philosophical and ethical challenge: what are the criteria for moral status? If neural tissue can experience pain, pleasure, or will, it becomes a “morally relevant entity,” which fundamentally challenges current experimental design, sourcing, usage, and disposal practices [12,13].

Critically, this rapid scientific progress has created a “governance gap”: a chasm between the capabilities of the technology and the adequacy of existing oversight mechanisms. In light of these challenges, this paper systematically reviews the ethical issues and governance needs arising from current cerebral organoid and OI research. We propose a conceptual framework comprising four evolutionary layers of ethical concern, corresponding to increasing levels of organoid complexity and integration, to support systematic understanding, prudent judgment, and governance construction around the ethical boundaries of cerebral organoids and OI. Throughout, we advocate shifting from passive, ad hoc regulation to active, forward-looking governance that keeps pace with technological advancements. While reviewing current scientific and ethical consensus, this paper also focuses on key controversies and unresolved issues, aiming to provide a constructive reference for the growing interdisciplinary dialog and institutional responses in this field.

2 Ethical challenges and governance pathways

2.1 Redefining foundational ethical issues

As cerebral organoid technology becomes more refined in structure and function, organoids increasingly converge on the morphology and physiological activity of the natural human brain. Even before organoids enter discussions related to consciousness, their construction and application have already exposed ethical ambiguities. For example, does the legal acquisition of donor cells truly reflect the principle of informed, voluntary consent? How should researchers’ rights to use cell-derived materials and data be defined? After experiments, can biologically active organoid materials be disposed of arbitrarily? These questions constitute the ethical baseline for the initial stages of research practice.

The legitimacy of donor cell donation systems is the primary checkpoint for ethical review and research design. Although cerebral organoids are mainly derived from induced pluripotent stem cells (iPSCs) or embryonic stem cells (ESCs), this does not mean donor wishes can be ignored or generalized. Existing studies indicate that the traditional “broad consent” model is increasingly inadequate in cerebral organoid research, as donors may not anticipate their cells being used to simulate human brain development, construct 3D neural networks, or support frontier applications such as biological computing [14]. Particularly in studies simulating neuropsychiatric diseases or performing brain-like functional tasks, donors’ ethical cognition thresholds are significantly higher than those for other biological sample donations [15]. Therefore, informed consent should evolve from “broad” to “specific-use and dynamic informed consent” to more accurately respond to donors’ rights to perceive and choose usage contexts [11,16]. This shift from “broad” to “dynamic” consent is not merely theoretical [17]. In fields like genomics and biobanking, platforms such as the UK’s EnCoRe Project [18] and blockchain-based systems such as Dwarna have been developed to provide donors with ongoing, granular control over the use of their data and samples [19]. However, these models also introduce challenges, such as the risk of “consent fatigue” and the digital divide [20,21], which must be considered when designing similar systems for organoid research.
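
To make the notion of “specific-use and dynamic informed consent” concrete, the sketch below models a donor consent record in which each research purpose is a separate, revocable grant rather than a single one-time authorization. This is only a minimal illustration; the class names, purposes, and fields are hypothetical and do not correspond to EnCoRe, Dwarna, or any deployed platform.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class UseGrant:
    purpose: str                       # e.g., "neurodevelopmental disease modeling"
    granted_on: datetime
    revoked_on: Optional[datetime] = None

    def active(self) -> bool:
        return self.revoked_on is None

@dataclass
class DonorConsentRecord:
    donor_id: str
    grants: List[UseGrant] = field(default_factory=list)

    def grant(self, purpose: str) -> None:
        self.grants.append(UseGrant(purpose, datetime.now(timezone.utc)))

    def revoke(self, purpose: str) -> None:
        for g in self.grants:
            if g.purpose == purpose and g.active():
                g.revoked_on = datetime.now(timezone.utc)

    def permits(self, purpose: str) -> bool:
        # A proposed use is allowed only under an active, purpose-specific grant;
        # anything outside the granted purposes requires re-contacting the donor.
        return any(g.purpose == purpose and g.active() for g in self.grants)

record = DonorConsentRecord("donor-001")
record.grant("neurodevelopmental disease modeling")
print(record.permits("organoid intelligence / biocomputing"))  # False: re-consent needed
```

Under such a scheme, reusing stored cells for a new purpose (for instance, biocomputing) fails the permission check and forces re-contact with the donor, which is the operational core of the “dynamic” element of consent.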

Another critical issue is the continuity of donor rights. Current research often treats cell donation as a one-time act, neglecting donors’ ongoing rights to information, data, and future commercialization [12]. This issue is particularly sensitive in data-driven organoid modeling and artificial intelligence (AI) training. If donor cells contribute to drug screening, neural networks, or AI training, do donors have rights to information, dissent, or compensation? Some suggest donors are “co-creators of information generation” [13], advocating for “responsible associated data generation governance” to prevent institutional monopolies in tissue reuse [22].

Defining tissue usage rights also requires re-evaluation. Traditionally, cells separated from donors have been treated as belonging to the receiving institution. However, when 3D organoids grow in vitro and display brain-like features and electrophysiological activity, their legal and ethical status differs from that of ordinary experimental materials [23]. As 3D culture evolves and complexity increases, organoid “ownership” and “disposal rights” become contested [24]. This ambiguity demands re-evaluating the roles of donors, researchers, and funders. Post-experiment disposal of organoids is equally challenging. If organoids show higher structural complexity or primitive neural activity, is their “destruction” ethically equivalent to discarding ordinary materials? Some suggest a “brain-like material usage endpoint assessment mechanism,” under which crossing functional thresholds triggers special review or an ethical transition plan before further experiments [25]. Others propose mandatory graded disposal based on “functional residual assessment” to prevent the circulation of sensitive materials within a regulatory vacuum [26].

To translate these concerns into practice, we propose that research institutions and ethics committees adopt a “Graduated Framework for Material Disposition.” This framework would require: (1) classification of organoid research based on potential for complex neural activity; (2) establishment of clear, pre-defined functional markers (e.g., specific oscillatory patterns, long-range synchronized firing) that trigger mandatory ethical review before continuation or disposal; and (3) specific protocols for the disposal of functionally complex organoids that treat them as sensitive biological materials rather than standard laboratory waste. Such a mechanism is essential for responsible governance in the absence of international consensus. These issues are intertwined: consent shapes use rights, and functional evolution may require re-engaging donors, which demands dynamic consent. The ethics of material disposal are likewise linked to early donor expectations: informing donors about neural activity research increases institutional obligations and accountability during organoid destruction [12,14].
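
As a thought experiment, the sketch below encodes the decision logic of the proposed “Graduated Framework for Material Disposition.” The marker names and the two-tier trigger rule are hypothetical placeholders; actual markers and thresholds would have to be defined jointly by ethics committees and neuroscientists.

```python
from enum import Enum
from typing import Dict

class DispositionTier(Enum):
    STANDARD_WASTE = 1      # routine laboratory disposal
    ETHICS_REVIEW = 2       # mandatory review before continuation or disposal
    SENSITIVE_MATERIAL = 3  # special disposal protocol, not standard waste

# Hypothetical functional markers; real triggers would be set by ethics committees.
COMPLEX_MARKERS = ("long_range_synchrony", "stimulus_modulated_oscillations")

def classify_disposition(markers: Dict[str, bool]) -> DispositionTier:
    """Map observed functional markers to a disposition tier."""
    hits = sum(markers.get(m, False) for m in COMPLEX_MARKERS)
    if hits == len(COMPLEX_MARKERS):
        return DispositionTier.SENSITIVE_MATERIAL
    if hits > 0:
        return DispositionTier.ETHICS_REVIEW
    return DispositionTier.STANDARD_WASTE

# An organoid showing long-range synchronized firing (but no stimulus-modulated
# oscillations) would trigger mandatory ethical review before disposal.
print(classify_disposition({"long_range_synchrony": True}))
```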

2.2 Proactive governance of core ethical controversies

The debate over consciousness in cerebral organoids is profoundly hampered by the lack of a consensus operational definition of consciousness in both science and philosophy. Ethical governance therefore cannot wait for a definitive test. Instead, it must focus on identifiable markers of neural complexity and function that may serve as proxies for potential consciousness. Our discussion accordingly centers on these measurable indicators and the ethical stances they necessitate. Early studies have observed synchronized electrical activity [27], stimulus-modulated neural oscillations [28–30], structural developments mimicking complex circuits [31–33], and improved neural maturity [34,35]. Information processing shows recursive dynamics [36,37]. While consciousness has not been demonstrated, these advances bring research closer to assessing “potential conscious capabilities,” necessitating preventive governance for consciousness potential and moral status.

The core ethical debate centers on organoid consciousness potential, which challenges traditional theories of moral status and consciousness. Defining consciousness without subjective reports relies on inferences from neural activity and structural-functional integration. Electrophysiological activity, such as synchronized rhythms resembling neonatal electroencephalograms (EEGs) [29,30], is used to infer conscious states, but this approach assumes that “activity similarity equals consciousness potential,” which is fallacious without an understanding of brain region functions [31]. Structural and network complexity also inform these inferences; some argue organoids need cross-regional connectivity to count as “potentially conscious entities” [32,34], while others note fundamental differences from human brains [33]. The debate also questions whether consciousness is modular [35,38], with critics warning against overestimating organoid cognitive abilities [39]. Consciousness is a multi-dimensional emergent property, not merely electrical activity or structural complexity, which makes ethical assessment uncertain and supports a “preventive hypothesis” [10,40].

The core question that follows is: to what extent do these measurable functional markers constitute sufficient or necessary conditions for granting moral status to cerebral organoids? A mainstream position adheres to the consciousness-centered tradition, arguing that cerebral organoids can obtain direct moral consideration only when they exhibit recognizable conscious phenomena. Some argue that consciousness potential is a threshold, not a guarantee of rights [10,30,32,38]. Bayne et al.’s “islands of awareness” concept suggests that localized conscious fragments have ethical significance [36]. Critics question consciousness-centrism, advocating for “sentience potential” or “relational status” [33,35], or for symbolic moral status arising from human embedding [34,40]. The key difficulty in the interaction between moral status and behavioral rights lies in distinguishing the scope of “possessing moral status” from that of “enjoying behavioral rights.” Lavazza (2021) notes that the establishment of moral status does not automatically lead to a complete system of behavioral rights, especially when rights-holders lack the capabilities for self-expression; this connection is easily broken [38]. A gradualist approach suggests that consciousness justifies “minimum moral treatment obligations,” including harm control [29–31,33].

In light of this profound scientific uncertainty, we advocate for the adoption of a precautionary approach. This principle holds that where there is a plausible, though unproven, risk of a morally significant outcome—such as the emergence of sentience—the burden of proof lies with demonstrating its absence. Until such evidence is available, research should proceed with caution, incorporating enhanced ethical oversight and measures to minimize potential harm. This stance, supported by bodies such as the Nuffield Council on Bioethics, prioritizes ethical foresight over waiting for empirical certainty, which may arrive too late [41].

2.3 Cross-boundary ethical risks and governance strategies

As cerebral organoid technology advances, its integration with interspecies chimeras, AI, and BCIs reshapes its scientific and ethical implications. This fusion creates new embodied systems and challenges norms of “neural identity,” “moral status,” and “subject boundaries.”

In interspecies chimera research, human organoids transplanted into mouse brains have been shown to survive and functionally integrate, with their electrophysiological patterns assimilating into the host systems [40,42]. This biological integration not only provides disease models but also prompts reflection on the extension of human nervous system boundaries, cross-species ethics, and the “otherness” of chimerism.

Regarding AI integration, current research has made progress in building adaptive biological computing systems. Brainoware, for example, is a demonstrated implementation that embeds brain-like tissues into neurointerface circuits, showing unique responses and evolutionary capabilities during training [8]. Organoid neurons adapt to electrical stimulation, suggesting potential for embodied cognitive modeling and neuro-inspired intelligence [7].
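
Reference [8] describes Brainoware as brain organoid reservoir computing. To clarify the computing paradigm (without modeling the biological system itself), the sketch below uses a conventional simulated echo-state reservoir: a fixed random recurrent network stands in for the organoid, and only a linear readout is trained. All parameters and the toy task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(0.0, 1.0, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo-state property)

def run_reservoir(inputs):
    """Drive the fixed reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in inputs:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict a slightly time-shifted sine wave from the current value.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)[:, None]
target = np.sin(t + 0.1)
X = run_reservoir(u)

# Train only the linear readout (ridge regression); the "substrate" stays fixed.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target)
prediction = X @ W_out
print("readout MSE:", float(np.mean((prediction - target) ** 2)))
```

The design point carried over to organoid-based systems is that the complex, self-organizing substrate is left untrained; learning is confined to a simple readout, which is why such systems are described as adapting their responses rather than being programmed.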

BCI intervention further evolves organoids into interactive systems. Researchers have connected organoids to external sensors, creating basic interactive closed-loop systems via neural activity feedback to explore their dynamic responses and behavioral shaping [28]. This blurs the line between “passive recording” and “active regulation,” laying the groundwork for “cognition-like interface” systems with memory plasticity and environmental adaptability [43].
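
The phrase “closed loop” can be made concrete with a purely simulated toy loop: recorded activity feeds back into the next stimulation decision, in contrast to open-loop recording. The activity model and controller gain below are arbitrary illustrative assumptions, not parameters from the cited organoid studies.

```python
import random

def measure_activity(stim: float) -> float:
    """Toy stand-in for a recorded firing rate: rises with stimulation, plus noise."""
    return 5.0 + 2.0 * stim + random.gauss(0, 0.5)

target_rate = 12.0   # desired firing rate (arbitrary units)
gain = 0.2           # proportional feedback gain
stim = 0.0

for step in range(20):
    rate = measure_activity(stim)          # record
    error = target_rate - rate             # compare with setpoint
    stim = max(0.0, stim + gain * error)   # adjust the next stimulus (feedback)
    print(f"step {step:2d}  rate={rate:5.2f}  stim={stim:5.2f}")
```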

Forward-looking scenarios envision that multi-path fusion will blur the boundaries between the natural and the artificial, and between life and algorithm. Currently, research teams aim for high synergy between organoids and artificial systems via neural simulators, feedback loops, and modular designs [8]. The speculative potential of such integration includes a scenario where, as self-organizing neural networks align with AI algorithms, ethical risks from potential consciousness grow, demanding a shift from single-technology assessment to systemic, multidisciplinary governance [31,38].

Organoid research merged with chimera technology creates a cross-species ethical crossroads. The core issue is not merely a technological breakthrough but the reconstruction of ethical boundaries and moral identities. Koplin and Massie argue that if brain-like structures achieve human-like functional integration, they become “human-like agents” in non-human hosts [44]. This embedding is not just physical; it impacts animal subjectivity and human dignity [45]. The ethical evaluation of chimeric animals faces institutional tension: science needs advanced models, yet this need forces a re-evaluation of the moral status of interspecies entities [46].

Debates on “moral status” have led to two paths: one that takes cognitive ability as a functional threshold [47,48], and the other that adopts contextualism—embedding ethics in specific experiments to avoid overextrapolating abstract personality standards [49]. The former seeks objective moral evaluation, while the latter emphasizes social acceptance and psychological reactions [50].

A deeper issue is differing views on species boundaries. Cognitive functionalists weaken species boundaries, reconstructing ethics on the basis of “capabilities.” Contextual empiricists focus on the “non-self is other” separation in human emotional structures [51]. These paths place ethical judgments in opposition and expose potential ruptures in how modern life science constitutes ethical subjects. Normatively, “cross-species ethical deliberation frameworks” have been proposed to mitigate the technology-ethics tension through institutionalized chimera ethics committees, contextual standards, and early warning systems [45,47]. This institutional exploration shows that organoid-chimera research needs a new ethical logic along cross-species and cross-system dimensions.

OI, integrating human brain organoids and AI, moves beyond traditional “bio-simulated AI” or “AI-assisted bio-experiments” toward a deeply nested bioartificial structure [52]. Ethical issues shift from single-entity norms to responsibility and value judgment in system coupling. Moral agency asymmetry is evident: OI systems, built from human iPSCs, gain cognitive and learning abilities through external AI manipulation [53]. OI is a “weak agent,” but this does not fully resolve the ethical tension between its autonomy and its manipulative dependence [54]. When feedback from the neural organism couples with algorithmic training, human and machine control become intertwined; this not only obscures where ethical authority lies [55] but also creates a novel ambiguity in moral status. In turn, this positions OI in a liminal space between a mere tool and a potential subject. As a system with neural plasticity and algorithmic adaptability, OI is neither traditional AI nor fully human. Its human-like perception, memory, and learning intent place it in an ambiguous “human-like but not human” zone, beyond the “tool-subject” dichotomy [44]. When OI integrates with AI training or BCI platforms, ethical impacts transcend single structures. Neural feedback that optimizes algorithms may improve efficiency but also instrumentalizes neural tissue within the information flow, creating “functional demands” or “manipulative implantation” for brain-like structures [56]. This bio-signal interaction reshapes cognitive pathways and ethically redefines behavioral responsibility and value attribution in technological systems.

Overall, OI-AI fusion reveals an emerging ethical co-construction mechanism, shifting from single-subject norms to cross-system synergy as the basis for moral reflection. Concepts like consciousness, autonomy, control, and moral rights need interdisciplinary reconstruction to address the complexities of bio-algorithmic hybrid intelligence. The convergence of cerebral organoids and BCIs pushes the reproduction, expansion, and externalization of human neural activity to a new stage. The “human-machine boundary” becomes dynamic, plastic, and programmable. Ethical issues arise from deep entanglement in cognitive control, subject representation, and information attribution [44]. Organoid technology, which simulates human cortical structures with perception and learning potential, acts as a “biological intermediary” in BCI systems and blurs human intent and organoid computation [46]. “Neural intermediaries” in chimeras are neither pure brain extensions nor traditional algorithms [50], with consequences for responsibility, will, and ethical agency [45]. From an information ethics perspective, BCI-connected brain-like structures form a “humanized processing unit” encompassing neural data sourcing, processing, and feedback. Privacy protection and exemption from intervention become matters of ethical and legal dispute [47,51]. Cross-domain integration may also challenge human self-consistency and irreplaceability, as hybrid cognitive output blurs human boundaries and ontology.

Following the integration trajectory of organoids and BCIs, ethical debates have evolved from narrow behavioral rules to the fundamental reshaping of system architecture and the boundaries of moral agency. As cerebral organoids evolve into increasingly interactive hybrid intelligent systems, their accompanying ethical challenges have become layered and multidimensional. Most existing research still examines isolated technical scenarios, focusing on discrete dilemmas. However, increasingly close technical cross-integration now demands a structured, layered framework capable of depicting how different issues interrelate and evolve over time. To meet this need, we synthesize recent academic research on organoids and related neurotechnologies and propose a four-tier concentric circle framework for ethical concerns (Fig. 1). This framework distinguishes foundational, core, risk, and governance layers according to their typical order of emergence and increasing governance complexity. Foundational layer: preconditional obligations such as informed consent, donor rights, and material management that enable any organoid research. Core layer: central controversies surrounding neural function, emergent consciousness, and cognitive capabilities. Risk layer: new challenges from xenogeneic chimeras, OI, and BCI coupling, marking the frontier of ethical uncertainty under technological convergence. Governance layer: social, legal, and policy responses, including guideline development, public communication, regulatory tools, and policy design. This framework integrates the technology-ethics issues discussed earlier, provides a coherent scaffold for the subsequent analysis of governance mechanisms, and offers vertical (layered) and horizontal (cross-domain) perspectives for future ethical assessment.

2.4 Current governance frameworks and future trends

As cerebral organoid and OI technologies advance beyond the cellular level to the simulation of complex neural and cognitive function, existing ethical regulatory frameworks face applicability challenges. Technically, the self-organization and enhanced synaptic activity of brain-like structures push them beyond “passive tissues” into a gray area of potential perception, autonomy, or consciousness [57,58]. Cognitively, public sensitivity to “human-likeness” has increased, extending ethical concerns to transparency and the controllability of terminal functions [59,60]. Institutionally, current ethical frameworks, based on ESCs, animal experiments, or data governance, lack dynamic response mechanisms for evolving brain-like structures and cross-domain integration (e.g., AI interaction, interspecies chimeras, neural simulation) [61,62].

This paper compares ethical guidelines from key international and national bodies, including the Organisation for Economic Co-operation and Development (OECD), the International Society for Stem Cell Research (ISSCR), the US National Institutes of Health (NIH), China’s National Science and Technology Ethics Committee, and the UK’s Nuffield Council on Bioethics. OECD’s “Recommendation on Responsible Innovation in Neurotechnology” emphasizes proactive responsibility and cross-sectoral coordination, embedding technology governance into social governance, focusing on identity, privacy, and human rights [57,63]. ISSCR’s “Guidelines for Stem Cell Research and Clinical Translation” (2021) sets ethical boundaries for organoid research, requiring stricter review for enhanced electrophysiological activity or potential consciousness [64]. Its implementation relies on research institutions’ self-discipline, raising questions about “soft law ethical constraints” [58]. NIH’s framework is more conservative, anchored in traditional human material use norms. Its “Guidelines for Human Stem Cell Research” cover organoids but are cautious on neural complexity and moral status, focusing on informed consent compliance rather than sophisticated assessment tools for potential consciousness [12,65]. This “preventive ethical compliance” model may struggle with emerging brain-like models [61,62].

Differences in institutional design reflect varying responses to ethical pressures: the OECD emphasizes coordination and public responsibility, the ISSCR focuses on self-discipline and graded management, and the NIH prioritizes legal procedures and individual rights. This divergence shows that global ethical governance lacks technological consensus and normative synergy [66]. China’s “Ethical Guidelines for Human-Derived Organoid Research” were released by the National Science and Technology Ethics Committee [67]. Unlike the ISSCR’s self-discipline model, China’s guidelines mandate ethics committee review, classify organoid research risks, and propose early warning for consciousness features. They aim to prevent ethical conflicts by pre-setting boundaries, emphasizing traceability and cross-border cooperation, and forming a closed-loop regulatory model [68]. This reflects an administrative compliance-centered state governance logic and strengthens national discourse in ambiguous ethical areas. Distinct from these regulatory or self-regulatory bodies, the UK’s Nuffield Council on Bioethics (NCB) acts as an independent advisory institution. Its policy briefing on neural organoids does not set rules but instead identifies emerging ethical challenges and urges the development of functional “markers” for assessment, thereby shaping the ethical discourse and providing guidance for future policy-making [69]. While the ethical guidelines of the ISSCR, OECD, NIH, China, and the UK’s Nuffield Council on Bioethics have developed relatively independent governance logics within their respective institutional contexts, they all face dilemmas concerning the implementation of ethical practices, the definition of functional boundaries, and the clarity of implementation mechanisms. To clarify the distinctions among these governance approaches, Table 1 provides a detailed comparison. In essence, these frameworks represent a spectrum of governance philosophies: the ISSCR guidelines embody a model of scientific self-regulation driven by the research community; the OECD recommendations promote a transnational policy logic focused on responsible innovation and societal values; the NIH framework reflects a more conservative, compliance-driven approach tied to federal funding in the US; and China’s guidelines signal a state-led, top-down regulatory model emphasizing clear boundaries and national oversight. Adding a distinct perspective, the Nuffield Council on Bioethics provides forward-looking ethical analysis intended to inform and guide future policy.

Future governance needs stronger interdisciplinary cooperation, proactive ethical review, and public participation. The core tension in organoid and OI ethics is not a lack of principles but a structural disconnect between judgment criteria and governance logic. Rapid organoid evolution creates an urgent need to redefine moral status based on “sentience-like” or “consciousness-like” capabilities. Some ethicists propose “gradual empowerment” based on organizational level and functional indicators [13], while others strictly limit organoids to a “non-subject” status, emphasizing human-centered ethics [70].

Governance logic diverges from “soft law ethics” to “mandatory regulation.” The OECD and ISSCR advocate transnational cooperation and self-discipline [63,64]. Some countries have implemented legal regulation, incorporating “brain-like activity thresholds” into ethical review [67]. These differences reflect cultural understandings of “life” and “human dignity,” national interests, technology strategies, and public expectations [51,71]. Ethical judgment tends to be proactive and risk-averse, while policy governance focuses on measurable outcomes and implementability. This inconsistency of objectives creates structural tension in risk boundaries, technology tolerance, and public participation [72]. Reconciling these logics under uncertainty is a key challenge for organoid ethical governance.

3 Conclusion and future outlook

Cerebral organoids and the burgeoning field of OI represent a paradigm shift in neuroscience and computing, yet they also push existing ethical and regulatory systems to their limits. This paper has systematically analyzed the multi-layered ethical challenges, ranging from foundational issues of consent and disposal to profound controversies surrounding potential consciousness and the risks of technological convergence. We have highlighted a critical “governance gap” between the pace of scientific innovation and the development of adequate oversight. The central argument of this paper is that a fundamental shift is required: from reactive, compliance-based regulation to a model of proactive, adaptive governance.

To bridge this gap and foster responsible innovation, we propose the following concrete, actionable recommendations. (1) Establishment of specialized ethics committees: research institutions should form specialized Neuro-Organoid Ethics Committees (NOECs), composed of interdisciplinary experts in neuroscience, ethics, law, and computer science, to provide tailored oversight for this unique research area. (2) Definition of functional thresholds: international bodies, such as the ISSCR, must collaborate with neuroscientists and ethicists to define minimal functional thresholds (e.g., specific EEG-like patterns, evidence of long-range network synchrony) that would automatically trigger a higher level of ethical scrutiny and review. (3) Fostering international collaborative governance: a dedicated international collaborative platform, perhaps under the aegis of the OECD or the World Health Organization (WHO), should be established to harmonize ethical baselines, share best practices in regulation, and prevent a “race to the bottom” in ethical standards.

Looking ahead, the trajectory of this technology points toward increasingly complex scenarios that will further challenge our ethical and legal frameworks. Future technological developments may include the creation of more sophisticated assembloids [74] with sensory inputs, or even the deployment of organoids on platforms like the International Space Station for microgravity research, which will raise novel jurisdictional questions for regulatory oversight [75]. Who, for instance, would regulate an organoid with advanced functions in orbit?

Consequently, future governance must be dynamic. We anticipate a move toward “adaptive governance,” where policies are not static but are designed to be reviewed and updated on a fixed schedule (e.g., every two to three years) in response to key technological milestones. This approach embraces uncertainty and builds institutional resilience. Ultimately, the ethical journey with cerebral organoids is not about finding final answers but about building robust processes for continuous dialog, reflection, and negotiation. As we explore the frontiers of biological intelligence, we are simultaneously compelled to redefine the ethical boundaries of what it means to be both “human” and “intelligent.” Ensuring this exploration is guided by wisdom and foresight is the paramount challenge for scientists, ethicists, and society as a whole.

References

[1]

Bitar M , Barry G . Building a Human Brain for Research. Front Mol Neurosci 2020; 13(22): 22

[2]

Bi FC , Yang XH , Cheng XY , Deng WB , Guo XL , Yang H , Wang Y , Li J , Yao Y . Optimization of cerebral organoids: a more qualified model for Alzheimer’s disease research. Transl Neurodegener 2021; 10(1): 27

[3]

Lancaster MA , Knoblich JA . Generation of cerebral organoids from human pluripotent stem cells. Nat Protoc 2014; 9(10): 2329–2340

[4]

Blue R , Miranda SP , Gu BJ , Chen HI . A primer on human brain organoids for the neurosurgeon. Neurosurgery 2020; 87(4): 620–629

[5]

Giandomenico SL , Lancaster MA . Probing human brain evolution and development in organoids. Curr Opin Cell Biol 2017; 44: 36–43

[6]

Smirnova L , Morales Pantoja IE , Hartung T . Organoid intelligence (OI)—the ultimate functionality of a brain microphysiological system. ALTEX 2023; 40(2): 191–203

[7]

Smirnova L , Caffo BS , Gracias DH , Huang Q , Morales Pantoja IE , Tang B , Zack DJ , Berlinicke CA , Boyd JL , Harris TD , Johnson EC , Kagan BJ , Kahn J , Muotri AR , Paulhamus BL , Schwamborn JC , Plotkin J , Szalay AS , Vogelstein JT , Worley PF , Hartung T . Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish. Front Sci 2023; 1: 1017235

[8]

Cai H , Ao Z , Tian C , Wu Z , Liu H , Tchieu J , Gu M , Mackie K , Guo F . Brain organoid reservoir computing for artificial intelligence. Nat Electron 2023; 6(12): 1032–1039

[9]

Gao X , Wang Y , Chen X , Gao S . Interface, interaction, and intelligence in generalized brain-computer interfaces. Trends Cogn Sci 2021; 25(8): 671–684

[10]

Lavazza A , Reichlin M . Human brain organoids: why there can be moral concerns if they grow up in the lab and are transplanted or destroyed. Camb Q Healthc Ethics 2023; 32(4): 582–596

[11]

McKeown A . Cerebral organoid research ethics and pinning the tail on the donkey. Camb Q Healthc Ethics 2023; 32(4): 1–13

[12]

Kataoka M , Gyngell C , Savulescu J , Sawai T . The donation of human biological material for brain organoid research: The problems of consciousness and consent. Sci Eng Ethics 2024; 30(1): 3

[13]

Lavazza A , Chinaia AA . Human cerebral organoids: the ethical stance of scientists. Stem Cell Res Ther 2023; 14(1): 59

[14]

Lavazza A . What (or sometimes who) are organoids? And whose are they. J Med Ethics 2019; 45(2): 144–145

[15]

Gulimiheranmu M , Li S , Zhou J . In vitro recapitulation of neuropsychiatric disorders with pluripotent stem cells-derived brain organoids. Int J Environ Res Public Health 2021; 18(23): 12431

[16]

Wadan AS . Organoid intelligence and biocomputing advances: current steps and future directions. Brain Organoid Syst Neurosci J. 2025; 3: 8–14

[17]

Kaye J , Whitley EA , Lund D , Morrison M , Teare H , Melham K . Dynamic consent: a patient interface for twenty-first century research networks. Eur J Hum Genet 2015; 23(2): 141–146

[18]

Agrafiotis I, Creese S, Goldsmith M. Formalising requirements for a biobank case study using a logic for consent and revocation. In: Camenisch J, Crispo B, Fischer-Hübner S, Leenes R, Russello G, eds. Privacy and Identity Management for Life. Privacy and Identity 2011. IFIP Advances in Information and Communication Technology, vol 375. Springer, Berlin, Heidelberg: 232–244

[19]

Mamo N , Martin GM , Desira M , Ellul B , Ebejer JP . Dwarna: a blockchain solution for dynamic consent in biobanking. Eur J Hum Genet 2020; 28(5): 609–626

[20]

Budin-Ljøsne I , Teare HJ , Kaye J , Beck S , Bentzen HB , Caenazzo L , Collett C , D’Abramo F , Felzmann H , Finlay T , Javaid MK , Jones E , Katić V , Simpson A , Mascalzoni D . Dynamic consent: a potential solution to some of the challenges of modern biomedical research. BMC Med Ethics 2017; 18(1): 4

[21]

Prictor M , Teare HJA , Kaye J . Equitable participation in biobanks: the risks and benefits of a “dynamic consent” approach. Front Public Health 2018; 6: 253

[22]

Li S , Wang M , Zhou J . Brain organoids: a promising living biobank resource for neuroscience research. Biopreserv Biobank 2020; 18(2): 136–143

[23]

Munsie M , Hyun I , Sugarman J . Ethical issues in human organoid and gastruloid research. Development 2017; 144(6): 942–945

[24]

Urzì O , Gasparro R , Costanzo E , De Luca A , Giavaresi G , Fontana S , Alessandro R . Three-dimensional cell cultures: the bridge between in vitro and in vivo models. Int J Mol Sci 2023; 24(15): 12046

[25]

Kataoka MTS . The ethical and legal challenges of human foetal brain tissue-derived organoids: at the intersection of science, ethics, and regulation. EMBO Rep 2024; 25(4): 1700–1703

[26]

Prasad M , Kumar R , Buragohain L , Kumari A , Ghosh M . Organoid technology: a reliable developmental biology tool for organ-specific nanotoxicity evaluation. Front Cell Dev Biol 2021; 9: 696668

[27]

Lancaster MA , Renner M , Martin CA , Wenzel D , Bicknell LS , Hurles ME , Homfray T , Penninger JM , Jackson AP , Knoblich JA . Cerebral organoids model human brain development and microcephaly. Nature 2013; 501(7467): 373–379

[28]

Fair SR , Julian D , Hartlaub AM , Pusuluri ST , Malik G , Summerfied TL , Zhao G , Hester AB , Ackerman WE IV , Hollingsworth EW , Ali M , McElroy CA , Buhimschi IA , Imitola J , Maitre NL , Bedrosian TA , Hester ME . Electrophysiological maturation of cerebral organoids correlates with dynamic morphological and cellular development. Stem Cell Reports 2020; 15(4): 855–868

[29]

Diner S . Potential consciousness of human cerebral organoids: on similarity-based views in precautionary discourse. Neuroethics 2023; 16: 23

[30]

Niikawa T , Hayashi Y , Shepherd J , Sawai T . Human brain organoids and consciousness. Neuroethics 2022; 15: 5

[31]

Jeziorski J, et al. Brain organoids, consciousness, ethics and moral status. Semin Cell Dev Biol 2023; 144: 97–102

[32]

Montoya I , Montoya D . What is it like to be a brain organoid? phenomenal consciousness in a biological neural network. Entropy (Basel) 2023; 25(9): 1328

[33]

Koplin JJ . Weighing the moral status of brain organoids and research animals. Bioethics 2024; 38(5): 410–418

[34]

Hayashi Y , Sato R . The unity of consciousness and the practical ethics of neural organoid research. Neuroethics 2025; 18: 3

[35]

Boyd JL , Lipshitz N . Dimensions of consciousness and the moral status of brain organoids. Neuroethics 2024; 17: 5

[36]

Bayne T , Seth AK , Massimini M . Are there islands of awareness. Trends Neurosci 2020; 43(1): 6–16

[37]

Arnason G , Pichl A , Ranisch R . Ethical issues in cerebral organoid research. Camb Q Healthc Ethics 2023; 32(4): 515–517

[38]

Lavazza A. Potential ethical problems with human cerebral organoids: consciousness and moral status of future brains in a dish. Brain Res 2021; 1750: 147146

[39]

Owen M , Hight D , Hudetz AG . Human brain organoids and the mereological fallacy. Neuroethics 2025; 18(1): 8

[40]

Sawai T , Sakaguchi H , Thomas E , Takahashi J , Fujita M . The ethics of cerebral organoid research: being conscious of consciousness. Stem Cell Reports 2019; 13(3): 440–447

[41]

Nuffield Council on Bioethics. Neural organoids in research: ethical considerations [Policy briefing]. 2024

[42]

Revah O , Gore F , Kelley KW , Andersen J , Sakai N , Chen X , Li MY , Birey F , Yang X , Saw NL , Baker SW , Amin ND , Kulkarni S , Mudipalli R , Cui B , Nishino S , Grant GA , Knowles JK , Shamloo M , Huguenard JR , Deisseroth K , Pașca SP . Maturation and circuit integration of transplanted human cortical organoids. Nature 2022; 610(7930): 319–326

[43]

Trujillo CA, Gao R, Negraes PD, Gu J, Buchanan J, Preissl S, Wang A, Wu W, Haddad GG, Chaim IA, Domissy A, Vandenberghe M, Devor A, Yeo GW, Voytek B, Muotri AR. Complex oscillatory waves emerging from cortical organoids model early human brain network development. Cell Stem Cell 2019; 25(4): 558–569.e7

[44]

Koplin J , Massie J . Lessons from Frankenstein 200 years on: brain organoids, chimaeras and other ‘monsters’. J Med Ethics 2021; 47(8): 567–571

[45]

Barnhart AJ , Dierickx K . A tale of two chimeras: applying the six principles to human brain organoid xenotransplantation. Camb Q Healthc Ethics 2023; 32(4): 555–571

[46]

Erler A . Human brain organoid transplantation: testing the foundations of animal research ethics. Neuroethics 2024; 17: 20

[47]

Kataoka M , Gyngell C , Savulescu J , Sawai T . The ethics of human brain organoid transplantation in animals. Neuroethics 2023; 16: 27

[48]

Tanibe T , Watanabe T , Oguchi M , Iijima K , Ota K . The psychological process underlying attitudes toward human-animal chimeric brain research: an empirical investigation. Neuroethics 2024; 17: 15

[49]

Shriver AJ , John TM . Neuroethics and animals: report and recommendations from the university of pennsylvania animal research neuroethics workshop. ILAR J 2019; 60(3): 424–433

[50]

Savulescu J , Sawai T . Animus: human-embodied animals. J Med Ethics 2024; 50(11): 725–728

[51]

Chen HI , Wolf JA , Blue R , Song MM , Moreno JD , Ming G , Song H . Transplantation of human brain organoids: Revisiting the science and ethics of brain chimeras. Cell Stem Cell 2019; 25(4): 462–472

[52]

Kagan BJ , Kitchen AC , Tran NT , Habibollahi F , Khajehnejad M , Parker BJ , Bhat A , Rollo B , Razi A , Friston KJ . In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron 2022; 110(23): 3952–3969.e8

[53]

Haselager DR , Boers SN , Jongsma KR , Vinkers CH , Broekman ML , Bredenoord AL . Breeding brains? Patients’ and laymen’s perspectives on cerebral organoids. Regen Med 2020; 15(12): 2351–2360

[54]

Milford SR , Shaw D , Starke G . Playing brains: the ethical challenges posed by silicon sentience and hybrid intelligence in dishbrain. Sci Eng Ethics 2023; 29(6): 38

[55]

Wu X , Chen Y , Kreutz A , Silver B , Tokar EJ . Pluripotent stem cells for target organ developmental toxicity testing. Toxicol Sci 2024; 199(2): 163–171

[56]

Ballav S, Ranjan A, Sur S, Basu S. Organoid intelligence: bridging artificial intelligence for biological computing and neurological insights. In: Basu S, Ranjan A, Sur S, eds. Technologies in Cell Culture—A Journey from Basics to Advanced Applications. IntechOpen, 2024

[57]

Presley A , Samsa LA , Dubljević V . Media portrayal of ethical and social issues in brain organoid research. Philos Ethics Humanit Med 2022; 17: 8

[58]

Kataoka M , Lee TL , Sawai T . The legal personhood of human brain organoids. J Law Biosci 2023; 10(1): lsad007

[59]

Ide K , Matsuoka N , Fujita M . Ethical aspects of brain organoid research in news reports: an exploratory descriptive analysis. Medicina (Kaunas) 2021; 57(6): 532

[60]

Villanueva II , Eom D , Cate AR , Krause NM , Scheufele DA , Brossard D . Emerging debates about breakthrough science: Understanding the interplay of values and cognition in shaping attitudes on human brain organoids. Sci Commun 2025; 47(5): 599–632

[61]

Julian K , Yuhasz N , Hollingsworth E , Imitola J . The “growing” reality of the neurological complications of global “stem cell tourism”. Semin Neurol 2018; 38(2): 176–181

[62]

Wolff H . Patentability of brain organoids derived from iPSC—a legal evaluation with interdisciplinary aspects. Neuroethics 2024; 17: 7

[63]

Organisation for Economic Co-operation and Development. Recommendation of the Council on Responsible Innovation in Neurotechnology. OECD Publishing, 2019

[64]

ISSCR . ISSCR Guidelines for Stem Cell Research and Clinical Translation. 2021

[65]

Jongsma KR , Bredenoord AL . Ethics parallel research: an approach for (early) ethical guidance of biomedical innovation. BMC Med Ethics 2020; 21(1): 81

[66]

Blumenthal-Barby J . Rethinking theory in bioethics. Hastings Cent Rep 2022; 52(4): 44–45

[67]

National Science and Technology Ethics Committee of China. Ethical Guidelines for Human-Derived Organoid Research. 2025

[68]

Han YP , Fan LL , Xue Y . A sustainable balance between innovation and risk: how the “right to science” affects China’s medical biotechnology regulatory policy. Comput Struct Biotechnol J 2024; 24: 306–313

[69]

Nuffield Council on Bioethics. Research using neural organoids: ethical considerations [Policy briefing note]. 2024

[70]

Farahany NA , Greely HT , Hyman S , Koch C , Grady C , Pașca SP , Sestan N , Arlotta P , Bernat JL , Ting J , Lunshof JE , Iyer EPR , Hyun I , Capestany BH , Church GM , Huang H , Song H . The ethics of experimenting with human brain tissue. Nature 2018; 556(7702): 429–432

[71]

Oh SA , Kim EY . Organoid global regulatory policy: a cross-sectional study. Drug Targets and Therapeutics. 2024; 3(2): 169–176

[72]

Ersdal G , Aven T . Risk informed decision-making and its ethical basis. Reliab Eng Syst Saf 2008; 93(2): 197–205

[73]

National Institutes of Health. NIH guidelines on human stem cell research. U. S. Department of Health and Human Services. 2009

[74]

Miura Y , Li MY , Revah O , Yoon SJ , Narazaki G , Pașca SP . Engineering brain assembloids to interrogate human neural circuits. Nat Protoc 2022; 17(1): 15–35

[75]

Qian X , Song H , Ming GL . Brain organoids: advances, applications and challenges. Development 2019; 146(8): dev166074
