1 Introduction
Cerebral organoids, three-dimensional (3D) brain models derived from human pluripotent stem cells, have become indispensable tools in neuroscience. By recapitulating aspects of embryonic brain development in vitro, they offer greater physiological relevance than traditional two-dimensional cultures, providing critical insights into neurodevelopment and disease [1–3]. Researchers regard them as a bridge between in vitro studies and human-like brain function [4, 5]. This has led to the emergence of organoid intelligence (OI) [6], a multidisciplinary field defined by Smirnova et al. as the development of biological computing using 3D brain organoids and brain-machine interface technologies. OI is part of a broader shift toward biological computing [7], which uses biological molecules or systems for computational tasks. Specific implementations, such as Brainoware, embed brain organoids into computing circuits to perform tasks like speech recognition [8]. This integration is usually mediated by a brain-computer interface (BCI), a system that detects brain activity and translates it into instructions for external devices [9]. The result is an evolution from “Human/Machine” toward “Bio-Algorithmic Hybrids,” encompassing OI, BCIs, and Brainoware.
However, increasing organoid complexity escalates ethical controversy, particularly regarding the possibility of “consciousness” and the associated question of “moral status” [10, 11]. Although there is currently no empirical evidence of organoid consciousness, the mere potential introduces a core philosophical and ethical challenge: what are the criteria for moral status? If neural tissue can experience pain, pleasure, or volition, it becomes a “morally relevant entity,” which fundamentally challenges current practices of experimental design, sourcing, usage, and disposal [12, 13].
Crucially, this rapid scientific progress has created a “governance gap”: a chasm between the capabilities of the technology and the adequacy of existing oversight mechanisms. In light of these challenges, this paper systematically reviews the ethical issues and governance needs arising from current cerebral organoid and OI research. We propose a conceptual framework of four evolutionary layers of ethical concern, corresponding to increasing levels of organoid complexity and integration, to support systematic understanding, prudent judgment, and governance construction around the ethical boundaries of cerebral organoids and OI. Throughout, we advocate shifting from passive, ad hoc regulation to active, forward-looking governance that keeps pace with technological advances. While reviewing current scientific and ethical consensus, this paper also focuses on key controversies and unresolved issues, aiming to provide a constructive reference for the growing interdisciplinary dialog and institutional responses in this field.
2 Ethical challenges and governance pathways
2.1 Redefining foundational ethical issues
As cerebral organoid technology becomes structurally and functionally more refined, organoids increasingly converge on the morphology and physiological activity of the natural human brain. Even before organoids enter discussions of consciousness, their construction and application already expose ethical ambiguities. For example, does the legal acquisition of donor cells truly reflect the principle of informed, voluntary consent? How should researchers’ rights to use cell-derived materials and data be defined? After experiments, can biologically active organoid materials be disposed of arbitrarily? These questions constitute the ethical baseline for the initial stages of research practice.
The legitimacy of the donor cell donation system is the primary checkpoint for ethical review and research design. Although cerebral organoids are mainly derived from induced pluripotent stem cells (iPSCs) or embryonic stem cells (ESCs), this does not mean donor wishes can be ignored or generalized. Existing studies indicate that the traditional “broad consent” model is increasingly inadequate for cerebral organoid research, as donors may not anticipate their cells being used to simulate human brain development, construct 3D neural networks, or support frontier applications like biological computing [14]. Particularly in studies simulating neuropsychiatric diseases or performing brain-like functional tasks, donors’ thresholds of ethical concern are significantly higher than those for other biological sample donations [15]. Informed consent should therefore evolve from “broad” to “specific-use and dynamic informed consent,” so as to respond more accurately to donors’ rights to perceive and choose usage contexts [11, 16]. This shift from “broad” to “dynamic” consent is not merely theoretical [17]. In fields like genomics and biobanking, platforms such as the UK’s EnCoRe project [18] and blockchain-based systems like ConsentChain have been developed to give donors ongoing, granular control over the use of their data and samples [19]. However, these models also introduce challenges, such as the risk of “consent fatigue” and the digital divide [20, 21], which must be considered when designing similar systems for organoid research.
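To make the contrast with one-time broad consent concrete, the following minimal sketch models dynamic consent as a revocable, per-use-category permission record with an audit trail. The class, field, and category names are illustrative assumptions; the sketch does not describe the EnCoRe or ConsentChain implementations.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative use categories a donor can grant or revoke individually.
USE_CATEGORIES = {"disease_modeling", "neural_network_3d", "ai_training", "commercial_reuse"}

@dataclass
class ConsentRecord:
    """Hypothetical dynamic-consent record: one entry per donor, with
    per-use-category permissions that can be updated or revoked over time."""
    donor_id: str
    grants: dict = field(default_factory=dict)   # category -> (granted, timestamp)
    audit_log: list = field(default_factory=list)

    def update(self, category: str, granted: bool) -> None:
        if category not in USE_CATEGORIES:
            raise ValueError(f"Unknown use category: {category}")
        stamp = datetime.now(timezone.utc)
        self.grants[category] = (granted, stamp)
        self.audit_log.append((stamp, category, granted))  # traceability for later review

    def permits(self, category: str) -> bool:
        """A use is permitted only if it was explicitly and currently granted."""
        granted, _ = self.grants.get(category, (False, None))
        return granted

# Example: a donor consents to disease modeling but later declines AI training.
record = ConsentRecord(donor_id="D-001")
record.update("disease_modeling", True)
record.update("ai_training", True)
record.update("ai_training", False)  # dynamic revocation
assert record.permits("disease_modeling") and not record.permits("ai_training")
```

The per-category granularity mirrors “specific-use” consent, and the audit log supports the traceability a dynamic model requires; a production system would also need to manage notification burden to avoid the consent fatigue noted above.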
Another critical issue is the continuity of donor rights. Current research often treats cell donation as a one-time act, neglecting donors’ rights regarding information, data, and future commercialization [12]. This is especially sensitive in data-driven organoid modeling and artificial intelligence (AI) training. If donor cells contribute to drug screening, neural networks, or AI training, do donors have rights to information, dissent, or compensation? Some suggest that donors are “co-creators of information generation” [13], advocating “responsible governance of associated data generation” to prevent institutional monopolies in tissue reuse [22].
Defining tissue usage rights also requires re-evaluation. Traditionally, cellular material separated from the body belongs to the institution. However, when 3D organoids grown in vitro exhibit brain-like features and electrophysiological activity, their legal and ethical status differs from that of ordinary experimental materials [23]. As 3D culture evolves and complexity increases, organoid “ownership” and “disposal rights” become contested [24]. This ambiguity demands re-evaluating the roles of donors, researchers, and funders.

Post-experiment disposal of organoids is equally challenging. If organoids show higher structural complexity or primitive neural activity, is “destruction” ethically equivalent to discarding ordinary materials? Some suggest a “brain-like material usage endpoint assessment mechanism” requiring further experiments, special review, or ethical transition once functional thresholds are crossed [25]. Others propose mandatory graded disposal based on “functional residual assessment” to prevent the circulation of sensitive materials within a regulatory vacuum [26]. To translate these concerns into practice, we propose that research institutions and ethics committees adopt a “Graduated Framework for Material Disposition.” This framework would require: (1) classification of organoid research based on potential for complex neural activity; (2) establishment of clear, pre-defined functional markers (e.g., specific oscillatory patterns, long-range synchronized firing) that trigger mandatory ethical review before continuation or disposal; and (3) specific protocols for the disposal of functionally complex organoids that treat them as sensitive biological materials rather than standard laboratory waste. Such a mechanism is essential for responsible governance in the absence of international consensus. These issues are intertwined: consent affects use rights, and functional evolution triggers donor re-evaluation, requiring dynamic consent. The ethics of material disposal are likewise linked to early donor expectations: informing donors about neural activity research increases institutional obligations and accountability during organoid destruction [12, 14].
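As a purely illustrative sketch, the triage logic below shows how the graduated framework’s three requirements might be operationalized in an institutional review workflow. The marker names and tier cutoffs are hypothetical placeholders for demonstration, not validated indicators of neural complexity.

```python
# Illustrative triage logic for the proposed "Graduated Framework for
# Material Disposition." Marker names and cutoffs are hypothetical
# placeholders, not validated indicators of consciousness potential.

def disposition_tier(markers: dict) -> str:
    """Map observed functional markers to a disposal/review tier.

    markers: e.g. {"oscillatory_patterns": bool,
                   "long_range_synchrony": bool,
                   "stimulus_modulated_response": bool}
    """
    flags = sum(bool(markers.get(k)) for k in
                ("oscillatory_patterns",
                 "long_range_synchrony",
                 "stimulus_modulated_response"))
    if flags == 0:
        return "standard"         # handle as ordinary laboratory material
    if flags == 1:
        return "enhanced-review"  # ethics committee sign-off before disposal
    return "sensitive"            # treat as sensitive biological material;
                                  # mandatory review before continuation or disposal

# Example: an organoid showing synchrony and stimulus-modulated activity.
print(disposition_tier({"long_range_synchrony": True,
                        "stimulus_modulated_response": True}))  # -> "sensitive"
```

In practice, the tier boundaries would be set and periodically revisited by the ethics committee as empirical understanding of these markers matures.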
2.2 Proactive governance of core ethical controversies
The debate over consciousness in cerebral organoids is profoundly hampered by the lack of a consensus operational definition of consciousness in either science or philosophy. Ethical governance therefore cannot wait for a definitive test. Instead, it must focus on identifiable markers of neural complexity and function that may serve as proxies for potential consciousness. Our discussion accordingly centers on these measurable indicators and the ethical stances they necessitate. Early studies have observed synchronized electrical activity [27], stimulus-modulated neural oscillations [28–30], structural developments mimicking complex circuits [31–33], and increasing neural maturity [34, 35]. Information processing shows recursive dynamics [36, 37]. While consciousness has not been demonstrated, these advances bring research closer to assessing “potential conscious capabilities,” necessitating preventive governance of consciousness potential and moral status.
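Because these markers are measurable, crude quantitative proxies can be computed directly from multichannel recordings. The sketch below, assuming only NumPy and synthetic data, estimates network synchrony as the mean pairwise correlation across channels; it is a minimal illustration of the kind of indicator at issue, not a published organoid metric.

```python
import numpy as np

def mean_pairwise_synchrony(signals: np.ndarray) -> float:
    """Crude synchrony index: mean off-diagonal Pearson correlation
    across channels. `signals` has shape (n_channels, n_samples)."""
    corr = np.corrcoef(signals)              # channel-by-channel correlations
    n = corr.shape[0]
    off_diag = corr[~np.eye(n, dtype=bool)]  # drop the trivial self-correlations
    return float(off_diag.mean())

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2500)                 # 10 s at 250 Hz

# Desynchronized case: independent noise on every channel.
noise = rng.standard_normal((8, t.size))

# Partially synchronized case: a shared slow oscillation plus channel noise.
shared = np.sin(2 * np.pi * 1.5 * t)         # common 1.5 Hz rhythm
synced = 0.8 * shared + 0.6 * rng.standard_normal((8, t.size))

print(f"independent channels: {mean_pairwise_synchrony(noise):.2f}")   # near 0
print(f"shared rhythm:        {mean_pairwise_synchrony(synced):.2f}")  # substantially higher
```

Against an eight-channel baseline of independent noise, a shared 1.5 Hz rhythm raises the index markedly, illustrating why synchronized rhythms are treated as a salient, if ambiguous, marker.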
The core ethical debate centers on organoid consciousness potential, which challenges traditional theories of moral status and consciousness. Defining consciousness without subjective reports relies on inference from neural activity and structural-functional integration. Electrophysiological activity, such as synchronized rhythms resembling neonatal electroencephalograms (EEGs) [29, 30], is used to infer conscious states, but this approach assumes that “activity similarity equals consciousness potential,” which is fallacious absent an understanding of brain-region functions [31]. Structural and network complexity also inform models; some argue organoids need cross-regional connectivity to count as “potentially conscious entities” [32, 34], while others note fundamental differences from human brains [33]. The debate also questions whether consciousness is modular [35, 38], with critics warning against overestimating organoid cognitive abilities [39]. Consciousness is a multi-dimensional emergent property, not merely electrical activity or structural complexity, making ethical assessment uncertain and supporting a “preventive hypothesis” [10, 40].
The question that follows is: to what extent do these measurable functional markers constitute sufficient or necessary conditions for granting moral status to cerebral organoids? A mainstream position adheres to the consciousness-centered tradition, arguing that cerebral organoids can obtain direct moral consideration only when they exhibit recognizable conscious phenomena. Some argue consciousness potential is a threshold, not a guarantee of rights [10, 30, 32, 38]. Bayne et al.’s “island of consciousness” theory suggests that even local conscious fragments carry ethical significance [36]. Critics question consciousness-centrism, advocating “sentience potential” or “relational status” instead [33, 35], or a symbolic moral status derived from human embedding [34, 40]. The key difficulty in the interaction between moral status and behavioral rights lies in distinguishing the scope of “possessing moral status” from that of “enjoying behavioral rights.” Lavazza (2021) notes that establishing moral status does not automatically yield a complete system of behavioral rights; especially when rights-holders lack capacities for self-expression, this connection is easily broken [38]. A gradualist approach suggests that consciousness justifies “minimum moral treatment obligations,” including harm control [29–31, 33].
In light of this profound scientific uncertainty, we advocate the adoption of a precautionary approach. This principle holds that where there is a plausible, though unproven, risk of a morally significant outcome, such as the emergence of sentience, the burden of proof lies with demonstrating its absence. Until such evidence is available, research should proceed with caution, incorporating enhanced ethical oversight and measures to minimize potential harm. This stance, supported by bodies such as the Nuffield Council on Bioethics, prioritizes ethical foresight over waiting for empirical certainty, which may arrive too late [41].
2.3 Cross-boundary ethical risks and governance strategies
As cerebral organoid technology advances, its integration with interspecies chimeras, AI, and BCIs reshapes its scientific and ethical implications. This fusion creates new embodied systems and challenges norms of “neural identity,” “moral status,” and “subject boundaries.”
In interspecies chimera research, human organoids transplanted into mouse brains have been shown to survive and functionally integrate, with their electrophysiological patterns assimilating into host systems [40, 42]. This biological integration not only provides disease models but also prompts reflection on the extension of human nervous system boundaries, cross-species ethics, and “other chimerism.”
Regarding AI integration, current research has made progress in building adaptive biological computing systems. Brainoware, for example, is a demonstrated implementation that embeds brain-like tissue into neurointerface circuits, showing distinctive responses and evolving capabilities during training [8]. Organoid neurons adapt to electrical stimulation, suggesting potential for embodied cognitive modeling and neuro-inspired intelligence [7].
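Brainoware-style systems are often described in reservoir-computing terms: the organoid serves as a fixed, high-dimensional nonlinear dynamical system, and only a lightweight readout is trained on its responses. The in-silico sketch below illustrates that division of labor with a random echo-state reservoir standing in for the tissue; it is an analogy under that assumption, not a model of the actual device in [8].

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed random recurrent "reservoir" stands in for the organoid:
# it is observed and read out, but never itself trained.
N_RES, N_IN = 200, 1
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.standard_normal((N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stable dynamics

def run_reservoir(inputs):
    """Drive the reservoir with a 1-D input sequence; return the state trajectory."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.array([u]))
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input delayed by 5 steps (a short-term memory task).
u = rng.uniform(-1, 1, 1200)
target = np.roll(u, 5)
X = run_reservoir(u)[200:]   # discard the initial transient
y = target[200:]

# Only the linear readout is trained (ridge regression), mirroring how
# Brainoware-style setups train decoders on recorded organoid activity.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)
pred = X @ W_out
print(f"readout correlation with delayed input: {np.corrcoef(pred, y)[0, 1]:.2f}")
```

The point of the analogy is architectural: training never modifies the “reservoir” itself, which is why such systems can exploit a substrate, biological or artificial, whose internal dynamics are only partially understood.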
BCI intervention further turns organoids into interactive systems. Researchers have connected organoids to external sensors, creating basic closed-loop interactive systems via neural activity feedback in order to explore their dynamic responses and behavioral shaping [28]. This blurs the line between “passive recording” and “active regulation,” laying the groundwork for “cognition-like interface” systems with memory plasticity and environmental adaptability [43].
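The closed loop described above follows a generic sense-decide-stimulate cycle. The sketch below simulates that cycle with a toy activity variable and a proportional feedback controller; the dynamics, setpoint, and gain are illustrative assumptions rather than parameters of any published organoid interface.

```python
import numpy as np

rng = np.random.default_rng(2)

TARGET_RATE = 5.0   # desired mean firing rate (Hz), illustrative
GAIN = 0.3          # proportional feedback gain, illustrative

def read_activity(state: float) -> float:
    """Stand-in for recording: a noisy observation of the current rate."""
    return state + rng.normal(0.0, 0.2)

def apply_stimulation(state: float, stim: float) -> float:
    """Stand-in for tissue response: stimulation nudges the rate,
    which otherwise drifts back toward a 2 Hz baseline."""
    drift = 0.1 * (2.0 - state)
    return state + drift + stim

state = 2.0  # start at baseline
for step in range(50):
    observed = read_activity(state)          # 1. sense neural activity
    error = TARGET_RATE - observed           # 2. decide: compare to setpoint
    stim = GAIN * error                      # 3. compute feedback stimulation
    state = apply_stimulation(state, stim)   # 4. actuate and close the loop
    if step % 10 == 0:
        print(f"step {step:2d}: observed {observed:4.2f} Hz, stim {stim:+.2f}")
```

Even this toy loop exhibits the ethically salient property noted above: the stimulation policy actively shapes the recorded activity, so “recording” and “regulation” are no longer separable stages.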
Forward-looking scenarios envision multi-path fusion blurring the boundaries between the natural and the artificial, and between life and algorithm. Research teams currently aim for high synergy between organoids and artificial systems via neural simulators, feedback loops, and modular designs [8]. Speculatively, as self-organizing neural networks align with AI algorithms, ethical risks stemming from potential consciousness will grow, demanding a shift from single-technology assessment to systemic, multidisciplinary governance [31, 38].
Organoid research merged with chimera technology creates a cross-species ethical crossroads. The core issue is not merely a technological breakthrough but the reconstruction of ethical boundaries and moral identities. Koplin and Massie argue that if brain-like structures achieve human-like functional integration, they become “human-like agents” in non-human hosts [44]. This embedding is not only physical; it bears on animal subjectivity and human dignity [45]. The ethical evaluation of chimeric animals thus faces an institutional tension: science needs advanced models, yet those very models force a re-evaluation of the moral status of interspecies entities [46].
Debates on “moral status” have led to two paths: one takes cognitive ability as a functional threshold [47, 48], while the other adopts contextualism, embedding ethics in specific experiments to avoid over-extrapolating abstract personhood standards [49]. The former seeks objective moral evaluation, while the latter emphasizes social acceptance and psychological reactions [50].
A deeper issue is divergent views on species boundaries. Cognitive functionalists weaken species boundaries, reconstructing ethics on the basis of “capabilities.” Contextual empiricists focus on the “non-self is other” separation embedded in human emotional structures [51]. These two paths yield opposing ethical judgments and reveal potential ruptures in how modern life science constitutes ethical subjects. Normatively, “cross-species ethical deliberation frameworks” have been proposed to mitigate the technology-ethics tension via institutionalized chimera ethics committees, contextual standards, and early warning systems [45, 47]. This institutional exploration shows that organoid-chimera research needs a new ethical logic along cross-species and cross-system dimensions.
OI, integrating human brain organoids and AI, moves beyond traditional “bio-simulated AI” or “AI-assisted bio-experiments” to a deeply nested bio-artificial structure [52]. Ethical issues shift from single-entity norms to responsibility and value judgment in system coupling. Moral agency asymmetry is evident: OI systems, built from human iPSCs, gain cognitive and learning abilities through external AI manipulation [53]. OI may be characterized as a “weak agent,” but this designation does not fully resolve the ethical tension between its autonomy and its manipulative dependence [54]. When neural-organism feedback couples with algorithmic training, human-machine control becomes intertwined; this not only obscures ethical dominance [55] but also creates a novel ambiguity in moral status, positioning OI in a liminal space between mere tool and potential subject. As a system with neural plasticity and algorithmic adaptability, OI is neither traditional AI nor fully human. Its human-like perception, memory, and learning intent place it in an ambiguous “human-like but not human” zone, beyond the “tool-subject” dichotomy [44]. When OI integrates with AI training or BCI platforms, ethical impacts transcend single structures. Neural feedback that optimizes algorithms may improve efficiency but also instrumentalizes neural tissue within the information flow, creating “functional demands” or “manipulative implantation” for brain-like structures [56]. This bio-signal interaction reshapes cognitive pathways and ethically redefines behavioral responsibility and value attribution in technological systems.
Overall, OI-AI fusion reveals an emerging mechanism of ethical co-construction, shifting the basis of moral reflection from single-subject norms to cross-system synergy. Concepts like consciousness, autonomy, control, and moral rights need interdisciplinary reconstruction to address the complexities of bio-algorithmic hybrid intelligence. The convergence of cerebral organoids and BCIs pushes the reproduction, expansion, and externalization of human neural activity to a new stage. The “human-machine boundary” becomes dynamic, plastic, and programmable, and ethical issues arise from deep entanglement in cognitive control, subject representation, and information attribution [44]. As a “biological intermediary” in BCI systems, organoid technology, which simulates human cortical structures with perceptual and learning potential, blurs the boundary between human intent and organoid computation [46]. “Neural intermediaries” in chimeras are neither pure brain extensions nor traditional algorithms [50], affecting responsibility, will, and ethical agency [45]. From an information ethics perspective, BCI-connected brain-like structures form a “humanized processing unit” spanning neural data sourcing, processing, and feedback, so privacy protection and exemption from intervention become ethical and legal disputes [47, 51]. Cross-domain integration may also challenge human self-consistency and irreplaceability, as hybrid cognitive output blurs human boundaries and ontology.
Following the integration trajectory of organoids and BCIs, ethical debates have evolved from narrow behavioral rules to the fundamental reshaping of system architecture and the boundaries of moral agency. As cerebral organoids evolve into increasingly interactive hybrid intelligent systems, the accompanying ethical challenges have become layered and multidimensional. Most existing research still examines isolated technical scenarios, focusing on discrete dilemmas. However, increasingly close technical cross-integration now demands a structured, layered framework capable of depicting how different issues interrelate and evolve over time. To meet this need, we synthesize recent academic research on organoids and related neurotechnologies and propose a four-tier concentric-circle framework of ethical concerns (Fig. 1). The framework distinguishes four layers according to their typical order of emergence and increasing governance complexity. Foundational layer: preconditional obligations, such as informed consent, donor rights, and material management, that enable any organoid research. Core layer: central controversies surrounding neural function, emergent consciousness, and cognitive capabilities. Risk layer: new challenges from xenogeneic chimeras, OI, and BCI coupling, marking the frontier of ethical uncertainty under technological convergence. Governance layer: social, legal, and policy responses, including guideline development, public communication, regulatory tools, and policy design. This framework integrates the technology-ethics issues discussed above, provides a coherent scaffold for the subsequent analysis of governance mechanisms, and offers both vertical (layered) and horizontal (cross-domain) perspectives for future ethical assessment.
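To indicate how the framework could support structured assessment in practice, the sketch below encodes the four layers as an ordered taxonomy and tags research scenarios by the layers of concern they activate. The layer contents follow the text above; the scenarios and their tags are illustrative assumptions.

```python
# The four-tier framework as an ordered taxonomy (inner to outer layer).
FRAMEWORK = {
    "foundational": ["informed consent", "donor rights", "material management"],
    "core":         ["neural function", "emergent consciousness", "cognitive capabilities"],
    "risk":         ["xenogeneic chimeras", "organoid intelligence", "BCI coupling"],
    "governance":   ["guidelines", "public communication", "regulatory tools", "policy design"],
}

# Hypothetical scenarios tagged by the layers of concern they activate.
scenarios = {
    "iPSC-derived disease model":         {"foundational"},
    "organoid with long-range synchrony": {"foundational", "core"},
    "organoid-in-mouse chimera":          {"foundational", "core", "risk"},
    "Brainoware-style OI platform":       {"foundational", "core", "risk", "governance"},
}

layer_order = list(FRAMEWORK)
for name, layers in scenarios.items():
    # The outermost activated layer indicates the required depth of review.
    depth = max(layer_order.index(layer) for layer in layers)
    print(f"{name}: review up to the {layer_order[depth]} layer")
```

Read vertically, the outermost activated layer sets the required depth of review; read horizontally, scenarios touching the same layer can share assessment criteria.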
2.4 Current governance frameworks and future trends
As cerebral organoid and OI technologies advance beyond the cellular level toward the simulation of complex neural and cognitive function, existing ethical regulatory frameworks face challenges of applicability. Technically, the self-organization and enhanced synaptic activity of brain-like structures push them beyond “passive tissues” into a gray area of potential perception, autonomy, or consciousness [57, 58]. Cognitively, public sensitivity to “human-likeness” has increased, extending ethical concerns to transparency and the controllability of terminal functions [59, 60]. Institutionally, current ethical frameworks, based on ESCs, animal experiments, or data governance, lack dynamic response mechanisms for evolving brain-like structures and cross-domain integration (e.g., AI interaction, interspecies chimeras, neural simulation) [61, 62].
This paper compares ethical guidelines from key international and national bodies, including the Organisation for Economic Co-operation and Development (OECD), the International Society for Stem Cell Research (ISSCR), the US National Institutes of Health (NIH), China’s National Science and Technology Ethics Committee, and the UK’s Nuffield Council on Bioethics. The OECD’s “Recommendation on Responsible Innovation in Neurotechnology” emphasizes proactive responsibility and cross-sectoral coordination, embedding technology governance within social governance and focusing on identity, privacy, and human rights [57, 63]. The ISSCR’s “Guidelines for Stem Cell Research and Clinical Translation” (2021) set ethical boundaries for organoid research, requiring stricter review where enhanced electrophysiological activity or potential consciousness arises [64]. Their implementation relies on research institutions’ self-discipline, raising questions about “soft law” ethical constraints [58]. The NIH framework is more conservative, anchored in traditional norms for the use of human material. Its “Guidelines for Human Stem Cell Research” cover organoids but remain cautious on neural complexity and moral status, focusing on informed consent compliance rather than sophisticated assessment tools for potential consciousness [12, 65]. This “preventive ethical compliance” model may struggle with emerging brain-like models [61, 62].
Differences in institutional design reflect varying responses to ethical pressures: the OECD emphasizes coordination and public responsibility, the ISSCR focuses on self-discipline and graded management, and the NIH prioritizes legal procedures and individual rights. This divergence shows that global ethical governance still lacks technological consensus and normative synergy [66]. China’s “Ethical Guidelines for Human-Derived Organoid Research” was released by the National Science and Technology Ethics Committee [67]. Unlike the ISSCR’s self-discipline model, China’s guideline mandates ethics committee review, classifies organoid research by risk, and proposes early warning for consciousness-related features. It aims to prevent ethical conflicts by pre-setting boundaries, emphasizing traceability and cross-border cooperation, and forming a closed-loop regulatory model [68]. This reflects a state governance logic centered on administrative compliance and strengthens national discourse in ethically ambiguous areas. Distinct from these regulatory or self-regulatory bodies, the UK’s Nuffield Council on Bioethics (NCB) acts as an independent advisory institution. Its policy briefing on neural organoids does not set rules but instead identifies emerging ethical challenges and urges the development of functional “markers” for assessment, thereby shaping the ethical discourse and guiding future policy-making [69]. While the ethical guidelines of the ISSCR, OECD, NIH, China, and the NCB have developed relatively independent governance logics within their respective institutional contexts, they all face dilemmas concerning the implementation of ethical practice, the definition of functional boundaries, and the clarity of enforcement mechanisms. To clarify the distinctions among these governance approaches, Table 1 provides a detailed comparison. In essence, these frameworks represent a spectrum of governance philosophies: the ISSCR guidelines embody a model of scientific self-regulation driven by the research community; the OECD recommendations promote a transnational policy logic focused on responsible innovation and societal values; the NIH framework reflects a more conservative, compliance-driven approach tied to federal funding in the US; and China’s guidelines signal a state-led, top-down regulatory model emphasizing clear boundaries and national oversight. Adding a distinct perspective, the NCB provides forward-looking ethical analysis intended to inform and guide future policy.
Future governance needs stronger interdisciplinary cooperation, proactive ethical review, and public participation. The core tension in organoid and OI ethics is not a lack of principles but a structural disconnect between judgment criteria and governance logic. Rapid organoid evolution creates an urgent need to redefine moral status on the basis of “sentience-like” or “consciousness-like” capabilities. Some ethicists propose “gradual empowerment” based on organizational level and functional indicators [13], while others strictly limit organoids to a “non-subject” status, emphasizing human-centered ethics [70].
Governance logic diverges from “soft law” ethics to mandatory regulation. The OECD and ISSCR advocate transnational cooperation and self-discipline [63, 64], while some countries have implemented legal regulation, incorporating “brain-like activity thresholds” into ethical review [67]. These differences reflect cultural understandings of “life” and “human dignity,” national interests, technology strategies, and public expectations [51, 71]. Ethical judgment tends to be proactive and risk-averse, while policy governance focuses on measurable outcomes and implementability. This inconsistency of objectives creates structural tension over risk boundaries, technology tolerance, and public participation [72]. Reconciling these logics under uncertainty is a key challenge for organoid ethical governance.
3 Conclusion and future outlook
Cerebral organoids and the burgeoning field of OI represent a paradigm shift in neuroscience and computing, yet they also push existing ethical and regulatory systems to their limits. This paper has systematically analyzed the multi-layered ethical challenges, ranging from foundational issues of consent and disposal to profound controversies surrounding potential consciousness and the risks of technological convergence. We have highlighted a critical “governance gap” between the pace of scientific innovation and the development of adequate oversight. The central argument of this paper is that a fundamental shift is required: from reactive, compliance-based regulation to a model of proactive, adaptive governance.
To bridge this gap and foster responsible innovation, we propose the following concrete, actionable recommendations. (1) Establishment of specialized ethics committees: research institutions should form specialized Neuro-Organoid Ethics Committees (NOECs), composed of interdisciplinary experts in neuroscience, ethics, law, and computer science, to provide tailored oversight for this unique research area. (2) Definition of functional thresholds: international bodies, such as the ISSCR, must collaborate with neuroscientists and ethicists to define minimal functional thresholds (e.g., specific EEG-like patterns, evidence of long-range network synchrony) that would automatically trigger a higher level of ethical scrutiny and review. (3) Fostering international collaborative governance: a dedicated international collaborative platform, perhaps under the aegis of the OECD or the World Health Organization (WHO), should be established to harmonize ethical baselines, share best practices in regulation, and prevent a “race to the bottom” in ethical standards.
Looking ahead, the trajectory of this technology points toward increasingly complex scenarios that will further challenge our ethical and legal frameworks. Future technological developments may include the creation of more sophisticated assembloids [74] with sensory inputs, or even the deployment of organoids on platforms like the International Space Station for microgravity research, which will raise novel jurisdictional questions for regulatory oversight [75]. Who, for instance, would regulate an organoid with advanced functions in orbit?
Consequently, future governance must be dynamic. We anticipate a move toward “adaptive governance,” where policies are not static but are designed to be reviewed and updated on a fixed schedule (e.g., every two to three years) in response to key technological milestones. This approach embraces uncertainty and builds institutional resilience. Ultimately, the ethical journey with cerebral organoids is not about finding final answers but about building robust processes for continuous dialog, reflection, and negotiation. As we explore the frontiers of biological intelligence, we are simultaneously compelled to redefine the ethical boundaries of what it means to be both “human” and “intelligent.” Ensuring this exploration is guided by wisdom and foresight is the paramount challenge for scientists, ethicists, and society as a whole.