Two opposing trends shape contemporary understandings of science and technology. One is scientism, which holds that science is reasonable and is characterized by a defense of science; the other is anti-scientism, which sharply criticizes science and technology as harmful to the modern world. This article argues that both positions have limitations and that a stance of critical reconsideration (shendu) toward science should be adopted. Marx’s analysis of science and technology offers rich insights, adopts a unique historical-practical perspective, and provides an important benchmark for this reconsideration. This article advocates a comprehensive framework of values, at once pluralistic and complementary, to navigate the dynamic tensions between objectivity and uniqueness, universality and locality, rationality and irrationality, and instrumental rationality and value rationality. This pluralistic and open philosophy of science and technology aims to realize the unity of pursuing truth, aspiring to goodness, attaining beauty, and reaching sagacity through the interaction and integration of science and the humanities, including ethics, religion, art, and faith.
The era of big data is a new phase of highly developed informatization, in which information technology is deeply integrated with human production and daily life, creating new spaces for human existence. This integration has injected new impetus into the transformation and innovation of social governance, raising the key question of how to make social governance smarter. Making social governance smarter is an inevitable requirement of Chinese modernization, reflects the developmental direction of the governance modernization of socialism with Chinese characteristics, and answers the call of a smart society in the era of big data. To explore how to make social governance smarter in the era of big data, it is necessary to determine its goals, directly confront the essential concerns and challenging issues in the current development of smart social governance, and explore logical approaches to enhancing smart social governance along dimensions such as institutions, thinking, subjects, methods, and capabilities.
The formidable capabilities of large artificial intelligence (AI) models are profoundly reconfiguring human society, establishing them as a critical locus of contemporary academic inquiry and, unequivocally, a vital domain for Marxist theoretical investigation. Drawing on Marx’s theory of the living labor subject, this article argues that large AI models exhibit characteristics of intentionality, autonomy, and the capacity for valorization, thereby manifesting key features of the general intellect and the essential powers of humans. The integration of large models and the knowledge-based living labor subject is expressed as a synthesis of antithetical pairs: “passivity” and “agency,” the “static” and the “dynamic,” and “value transfer” and “value creation.” This trend toward human-machine integration not only profoundly reconstructs the knowledge-based living labor subject but also necessitates a new development and enrichment of Marx’s theory of the knowledge-based living labor subject.
Currently, the ethical risks posed by frontier science and technology, such as the life sciences and artificial intelligence, are raising serious concerns in the international community, while the ethical governance of science and technology in China is progressing at the institutional level. To advance this process, it is necessary to identify the distinctive characteristics of frontier science and technology, expose the deep-seated ethical risks they generate, deepen our understanding of the structural characteristics of the sociotechnical system, and explore and reconstruct the fundamental logic of risk perception. First, we should recognize how emerging science and technology transform the sociotechnical system, thereby comprehending the structural nature of the ethical risks in frontier S&T. Second, drawing on open practices in the ethical governance of frontier S&T, we should view risk perception and ethical norms as forms of collective cognition and as provisional working assumptions continuously updated by relevant groups. Third, we should emphasize that risk perception in frontier science and technology is systemic and malleable, which both shapes and enlarges the space for innovation. Based on this analysis, we propose preemptive mitigation strategies: define ethical boundaries for scientific inquiry; set appropriate ethical and legal thresholds; develop open and comprehensive governance mechanisms; enhance public literacy in science and technology ethics; and uphold the principle of proportionality.
The philosophy of engineering science lies at the intersection of the philosophy of engineering and the philosophy of science. This paper reviews research progress in the philosophy of engineering science across five key areas: the history of thought, ontological issues, epistemological issues, methodological issues, and axiological issues. Building on this foundation, the paper discusses the basic strategies and specific methodologies for research in the philosophy of engineering science, thereby clarifying the direction of future work in this field.
Within the context of broadly construed embodied cognitive theory, examining several focal issues across the three most representative and currently most active approaches in cognitive science (cognitive psychology, neuroscience, and artificial intelligence) can reveal the theoretical dilemmas these approaches confront and their potential resolutions. For the representation problem in cognitive psychology, beyond propositional representation, body-based perceptual representations can be acknowledged as primary representations, offering a causal account of cognition and action at the foundational level. For neuroscience, a viable approach to the hard problem of consciousness is a synthetic study combining first-person and third-person perspectives. For artificial intelligence, integrating top-down and bottom-up methods to construct a cognitive architecture of mind suggests that reliable moral agents, ones that both uphold human values and consider machine interests, should be human-machine integrated extended cognitive systems. These examinations reveal that, under certain conditions, philosophy and cutting-edge cognitive science not only challenge each other but also mutually advance scientific and philosophical development, offering significant insights for achieving their interdisciplinary integration.