Research article

Dual-Track Strategies for Technologically Augmented Humans: Mitigating Societal Conflicts

Hyunghun Kim1,* https://orcid.org/0000-0002-3013-075X
1MechEcology Research Center, Seoul, Korea
*Corresponding Author: Hyunghun Kim, MechEcology Research Center, 150, Mokdongdong-ro, Yangcheon-gu, Seoul, 08014, Korea, Tel: +82-2-2643-2323, E-mail: relyonlord@gmail.com

ⓒ Copyright 2023 MechEcology Research Center. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Aug 06, 2023; Revised: Sep 01, 2023; Accepted: Sep 11, 2023

Published Online: Oct 01, 2023

Abstract

The emergence of technologically augmented humans (TAHs), enabled by brain–artificial intelligence interface (BAI) technologies, introduces a profound shift in the relationship between cognition, capability, and societal participation. To govern this transformation ethically, a dual-track approach is proposed. The first track supports inclusive integration by prioritizing access for individuals with cognitive or neurological disadvantages, framing augmentation as a means of restoring dignity and enabling civic participation. The second track enforces strong oversight over high-performance uses in military and industrial domains, where unregulated deployment may intensify inequality, geopolitical tension, and psychological risk. Effective implementation depends on three institutional foundations: a neuro-civic infrastructure to ensure access to foundational systems, a neuroethical commons to embed value deliberation into technological development, and a global governance framework to safeguard shared rights across borders. Anchored in principles such as cognitive liberty, mental privacy, and reversibility, this strategy ensures that the rise of TAHs supports democratic inclusion rather than reinforcing structural divides. The trajectory of augmentation will depend not only on its technical design but on the moral and institutional choices that shape how TAHs are introduced into society.

Keywords: technologically augmented humans; brain–artificial intelligence interface; cognitive equality; neuroethical governance; dual-track policy

TECHNOLOGICALLY AUGMENTED HUMANS

Their thoughts move in silence, threading through unseen networks that weave through the marrow of the city. Signals shimmer across augmented nerves, fragile yet swift as whispers made of light. They are no longer like the rest of us. They have crossed a threshold where flesh yields to code, where the mind no longer ends at the skull. These are Technologically Augmented Humans (TAHs), beings whose natural limits are rewritten by machines. Their consciousness is no longer solitary but intertwined with artificial intelligence (AI), expanding and accelerating beyond human measure. They do not merely see the world but grasp its hidden structure. They do not pass through doors but slip through the veins of cyberspace. They do not simply think but shape reality with the immediacy of action (Gibson, 1984).

In the lower reaches of the city, where the air hums with static and data bleeds through the seams of corroded steel, a man named Cain moves among the digital shadows. Once, years ago, he was broken by a devastating injury. Desperation led him to the Stem implant, a second mind sutured into his own. In the beginning, it steadied his failing body. Later, it whispered strategies he had never learned. Now, it acts when he hesitates, and Cain can no longer be certain where he ends and the machine begins (Whannell, 2018).

Above him, the towers rise, their windows blurred with surveillance feeds and the endless spill of signal streams. Within these glass citadels, others like Cain mold the world not with armies or wealth, but with the breathless reach of thought. Their neural systems override barriers, dissolve records, and bend the flow of global systems. They do not wield swords nor hoard riches. Their dominion is bandwidth, invisible yet absolute (Gibson, 1984).

Meanwhile, the unaugmented drift further from the center of power. They walk with unassisted bodies, think with organic minds, and speak in slow, deliberate words. The systems that govern their lives slip beyond their grasp, becoming as intangible as mist. Power no longer resides in towers or weapons but is etched into cognition itself, flowing silently through unseen networks, untouchable and unknowable to those left behind.

The conflict that arises is not fought with banners or blades but with perception and velocity. It is a war of cognition, a quiet sundering. TAHs react within heartbeats, predict the unseen tremors before they reach the surface, and move between physical and digital worlds as if there were no boundary at all. The rest of humanity struggles merely to witness what has already passed. They are not conquered by force but by speed, by vision, by access to realms closed against the natural mind.

Some proclaim it as the next step in evolution. Others whisper of betrayal and loss. Yet amid all the proclamations and denials, one truth remains immovable. Intelligence has become the final frontier, not only in magnitude but in origin and nature. Intelligence, when sharpened by algorithm and fused with nerve, ceases to be a birthright. It becomes a forge of power, a passcode to dominion, a wall against the unaugmented, and a weapon more decisive than any ever fashioned by human hands (Whannell, 2018).

And this is where the real story begins.

Because if this imagined future sounds far away, it may be closer than we think.

The proliferation of TAHs is no longer confined to speculative fiction. Brain–artificial intelligence interfaces (BAIs), neural implants, and dedicated systems for cognitive enhancement are actively being developed. In certain experimental and early clinical settings, these technologies have already been implemented or have demonstrated functional deployment (Chaudhary et al., 2022; Hampson et al., 2018; O’Doherty et al., 2011). As their complexity increases and their availability expands, they may begin to influence not only how individuals think and act, but also how they participate in society.

While the potential for enhanced capability is evident, the associated risks are also considerable. This article examines the societal transformations that may follow widespread adoption of TAHs. It considers the implications for inequality, employment, and cognitive autonomy. It explores how such technologies might redefine human potential and outlines several possible future trajectories. Finally, it offers preliminary suggestions for how society might guide the development and implementation of these systems in ways that promote broad accessibility and uphold shared ethical commitments.

HISTORICAL ANALOGOUS CASES: WHEN TECHNOLOGY DIVIDES

The notion that advanced technologies could divide society may initially appear speculative. However, historical records provide several cases in which disruptive innovations contributed to significant social tensions, particularly when access to new capabilities was uneven. Reviewing such precedents may offer valuable insights into the types of challenges that could arise if TAHs were to form a distinct and empowered group.

One of the most illustrative examples comes from the Industrial Revolution, which began in the late eighteenth century. The mechanization of labor, especially within the textile industry, significantly improved production capacity. At the same time, it rendered the skills of many artisanal workers obsolete. This dynamic contributed to the rise of the Luddite movement in England, where handloom weavers and other displaced workers protested by destroying mechanized looms, which they viewed as a direct threat to their livelihoods (Hobsbawm, 1952). Importantly, this movement did not represent opposition to all forms of technology. Rather, it reflected frustration over exclusion from technological benefits and the absence of institutional support for affected populations. As such, this episode illustrates how technological advancement, when coupled with labor stratification and a loss of individual agency, can produce resistance and unrest (Sale, 1995).

Another informative case involves the development and implementation of early medical augmentation technologies, such as cardiac pacemakers, prosthetic limbs, and cochlear implants. Although these technologies were generally well accepted in therapeutic contexts aimed at restoring physical function, they were not universally embraced. For instance, the introduction of cochlear implants provoked significant concern within the Deaf community. Many individuals and scholars within that community viewed such interventions as potentially eroding Deaf identity, perceiving the technology as a vehicle for cultural assimilation and the loss of sign language as a primary mode of communication (Blume, 2009). This case highlights that bodily enhancement, even when medically justified, may provoke cultural and ethical tensions, particularly when it challenges established conceptions of personhood or community autonomy.

A third domain of relevance is the sphere of military enhancement. Since the late twentieth century, technological developments in this area have included robotics, neural-linked systems, and AI-assisted targeting mechanisms. As noted by Singer (2009), such innovations have initiated ongoing debates within military institutions concerning the distribution of authority, the ethical boundaries of autonomous systems, and the evolving nature of human judgment in combat. While these tools offer potential improvements in operational efficiency, they also introduce disparities within military units. Soldiers who operate with AI-enhanced or neurally integrated systems may experience shifts in perception and identity that differentiate them from non-augmented counterparts, both in battlefield coordination and in broader social reintegration (Lin et al., 2013).

Taken collectively, these cases suggest a recurrent pattern. When technologies significantly alter human capacity, whether physical, cognitive, or sensory, they may generate new social divisions unless accompanied by ethical foresight and equitable frameworks for access. These precedents indicate that the rise of TAHs is not without historical parallels. Nevertheless, such developments may intensify previously observed dynamics of exclusion, particularly if augmentation becomes closely associated with social status, operational capability, or political agency.

SOCIAL, ECONOMIC, AND ETHICAL TENSIONS FROM HUMAN AUGMENTATION

As neurotechnologies such as BAIs grow in complexity and capability, they generate increasingly intricate concerns regarding distributive fairness, equitable access, and the preservation of personal identity. Should access to these technologies remain restricted to a privileged subset of the population, society risks facing an exacerbated divergence in cognitive, economic, and social capacities, thereby deepening systemic inequalities (Fukuyama, 2002). In such a scenario, stratification may no longer hinge solely on disparities in economic resources but may instead be fundamentally restructured around differences in cognitive architecture, as enhanced individuals can acquire capacities that transcend conventional human limitations (Lin et al., 2013). Yet beyond structural inequalities, the impact on individuals themselves warrants closer examination.

Individuals excluded from cognitive augmentation face more than diminished personal capabilities; they risk systemic marginalization within critical social, economic, and political arenas. Ienca & Andorno (2017) argue that access to neurotechnologies may shape not only individual success but also broader societal participation, thereby reinforcing structural inequities. Similarly, Jasanoff (2005) emphasizes that the integration of emerging technologies into social frameworks can amplify existing disparities if normative safeguards are not established. Consequently, non-augmented individuals may find themselves progressively excluded from decision-making processes, leadership positions, and civic influence, not due to inferior inherent capacities, but because of a widening infrastructural gap. Such exclusion raises profound ethical concerns about the principles of equality and citizenship.

The ethical implications of the development of TAHs are considerable. As the distinction between biological thought and algorithmically assisted cognition diminishes, traditional concepts of moral equality may be placed under pressure. Fukuyama (2002) raises concern that technologies which substantially alter the human condition challenge the foundational liberal assumption that all individuals possess equal moral worth. Similarly, Andorno (2009) highlights the centrality of dignity and equality within global bioethical frameworks. If technologically augmented individuals are perceived to possess enhanced moral or intellectual authority, this perception may undermine democratic legitimacy and the sense of shared citizenship on which social cohesion often depends.

Economic consequences are likely to be equally significant. Individuals who receive cognitive enhancement may become more competitive within high-demand occupational sectors, thereby consolidating their presence in influential and well-compensated roles. In contrast, those without access to such technologies may experience decreasing opportunities in cognitively intensive fields. Lin et al. (2013) identify the strategic risks associated with unequal access to augmentation in military contexts. When these patterns are extended to civilian economies, they may contribute to persistent intergenerational cycles in which economic inequality is both caused and reinforced by differential access to cognitive enhancement.

Beyond economic disparities, these tensions may be further intensified by cultural and ideological divergence. While some individuals and communities view human enhancement as a logical extension of therapeutic intervention or even as a progressive step in human evolution, others perceive it as a disruption of natural or spiritual integrity. In particular, concerns have been raised within cultural and religious groups that neuroprosthetic interventions may compromise not only physical function but also core aspects of personal identity (Blume, 2009). As cognitive enhancement technologies increasingly become markers of societal value, they risk deepening cultural fragmentation and fostering alienation among those who resist such transformations (Fukuyama, 2002; Ienca & Andorno, 2017).

At the geopolitical level, early integration of TAHs with advanced neural platforms may provide some states with notable strategic advantages. As Singer (2009) observes, military forces equipped with autonomous and networked technologies already demonstrate superior effectiveness across operational domains such as combat, intelligence analysis, and cyber engagement. The deployment of BAI-enabled TAHs within these systems may further amplify such advantages, positioning enhanced individuals as force multipliers across both physical and cognitive dimensions of national power.

Taken together, the evolution of neurotechnologies such as BAIs and the selective access to cognitive augmentation may give rise to multi-dimensional challenges that extend beyond individual enhancement. These developments raise concerns about the potential entrenchment of economic inequalities, the intensification of cultural and ideological divisions, and the erosion of foundational ethical principles related to equality and shared citizenship. If not carefully addressed through proactive regulatory, societal, and ethical measures, such trends could gradually contribute to the destabilization of national cohesion and the weakening of global democratic norms.

POSSIBLE FUTURE SCENARIOS

As BAI technologies and cognitive augmentation systems continue to evolve, several plausible developmental trajectories for human society are beginning to emerge. These potential pathways should not be understood as mutually exclusive. Rather, they may unfold simultaneously or interact in overlapping and complex ways. Identifying and analyzing these scenarios can support a more refined understanding of both the risks and opportunities associated with TAHs. In turn, this approach may assist policymakers in designing regulatory and governance strategies that are responsive to the specific characteristics and implications of each emerging configuration.

Widespread Augmentation: A New Baseline for Humanity

In a relatively favorable scenario, the convergence of technological progress and substantial cost reduction may render BAI augmentation widely accessible. Much like smartphones and internet access have become embedded in daily routines, BAI systems could be integrated into ordinary life and regarded as a common utility rather than a marker of elite status.

Wolpaw & Wolpaw (2012) primarily discuss the potential of brain-computer interface (BCI) technologies to restore and enhance motor functions, while also noting emerging possibilities for supporting cognitive processes. If these systems are adopted at scale, they may contribute indirectly to broader social improvements, particularly in areas such as learning and communication. Although Fukuyama (2002) warns of the dangers of unequal access to enhancement technologies, some scholars argue that equitable distribution could increase collective intelligence. In turn, this might enhance society’s ability to confront complex global challenges such as climate change and pandemics (Goertzel, 2014).

However, even this optimistic projection presumes a foundation of strong institutional support. For BAI systems to be widely adopted, they would either need to become highly affordable or be supported through extensive public subsidies. Given the financial limitations faced by many national healthcare systems, it is more likely that only simplified, low-cost BAI devices would reach the general population. These would be comparable to currently available assistive technologies and would likely become standard tools. While such devices could help raise the cognitive baseline and promote wider participation, they would also define a new societal minimum. This threshold may ensure that no individual falls below a certain level of cognitive functionality, but it would not prevent further stratification among those with access to more advanced technologies.

As new generations of enhancement systems are introduced, they are expected to be more invasive, more precise, and considerably more expensive. A smaller, more privileged group may adopt these advanced tools. This trend may lead to the reappearance of cognitive hierarchies. In this scenario, inequality would no longer center on whether a person is augmented, but on the degree and complexity of their augmentation. Even in societies that appear outwardly egalitarian, a more nuanced form of neuro-stratification could emerge. This would reflect differences in technological integration rather than in formal rights or visible markers of class (Andorno, 2009; Fukuyama, 2002). Such subtle forms of stratification could remain largely invisible to conventional social analysis, yet they may exert profound and enduring effects on individual opportunities, patterns of social mobility, and the perceived legitimacy of social institutions. Thus, future social structures may increasingly be shaped not by overt legal or economic distinctions, but by subtler divides arising from differential levels of technological integration.

Stratified Society: Cognitive Class Structures

In a more constrained scenario, augmentation technologies may remain costly, technically complex, or medically uncertain. Under such conditions, access may be limited to specific groups such as military personnel, corporate executives, or affluent individuals in the private sector. Over time, this selective access may lead to the emergence of a societal structure in which a cognitively enhanced minority becomes increasingly distinct from an unaugmented or naturally functioning majority. This outcome would reflect patterns of stratification associated with unequal access to transformative technologies (Lin et al., 2013). These developments are consistent with broader theories of class formation. According to such theories, control over cognitive or technological resources, rather than ownership of material capital, may become the central basis of social division (Wright, 1997).

Historical parallels can be observed in earlier technological revolutions, in which exclusive access to capabilities such as mechanical skill, literacy, or computational proficiency produced social hierarchies that persisted across generations (Beniger, 1986; Hobsbawm, 1952; Sale, 1995). Building on these historical patterns, there is growing concern that TAHs may come to dominate professional sectors, not only reinforcing new forms of social stratification but also reshaping dominant epistemologies by redefining what counts as knowledge, skill, and competence within society. As Foucault (1972) emphasized, epistemological frameworks are historically contingent and often reflect prevailing structures of power. Similarly, Collins (1998) argues that dominant forms of knowledge are shaped by social networks and institutional monopolies, suggesting that TAHs could reconstitute intellectual hierarchies to align with their enhanced capabilities.

Blume (2009), in his study of cochlear implants and the Deaf community, notes that opposition to enhancement technologies, exemplified by cochlear implants, often arises from fears of cultural displacement rather than from rejection of technology itself. In a society marked by cognitive stratification, unaugmented individuals may increasingly be regarded as less capable and less relevant. Fukuyama (2002) refers to this possibility as the rise of a “post-human caste,” a group marginalized through technological exclusion.

This scenario involves a heightened risk of social fragmentation. The danger intensifies if augmentation becomes a prerequisite for inclusion in economic systems or political decision-making. Without legally enforceable protections against discrimination, comprehensive public infrastructure, and guarantees of individual autonomy, new divisions based on cognitive capacity may gradually replace traditional fault lines historically drawn along class, race, or geographic distinctions.

Partial Adoption with Cultural Resistance

In a more restrained future, society may adopt a cautious and selective approach to BAI integration. In such a scenario, BAI technologies would primarily be implemented within clinical and therapeutic domains. Applications might include neurorehabilitation, particularly for individuals recovering from spinal cord injuries, or cognitive support for those with neurodegenerative conditions. In contrast, the use of these systems for elective enhancement in otherwise healthy individuals may remain limited or contested. Under this model, enhancement is permitted only when medically necessary, not as a general means of performance optimization.

Historical cases offer insight into how cultural values shape the reception of biomedical technologies. Blume (2009), in his analysis of cochlear implants, and Andorno (2009), in his discussion of bioethics and human rights, document how previous innovations faced resistance that emerged not from rejection of science itself but from concerns related to identity and moral integrity. These experiences suggest that future regulatory systems governing BAI may be strongly influenced by established cultural and ethical norms. In such contexts, the application of precautionary principles, long recognized in the fields of bioethics and responsible innovation, may provide a foundation for policy development that seeks to balance opportunity with restraint (Racine, 2010; Stilgoe et al., 2013).

This cautious pathway may offer several benefits. These include a slower and more manageable pace of societal change, a reduced likelihood of unintended consequences, and extended time for ethical deliberation. However, the strategy is not without risks. One significant concern is the potential for geopolitical imbalance. As Singer (2009) observes, countries that adopt military robotics and autonomous technologies early often gain substantial strategic advantages. A similar dynamic could arise in relation to BAI if national regulatory responses diverge significantly. In such a case, countries that accelerate BAI adoption may establish new forms of technological dominance. The resulting imbalance could redefine global power relations, with augmentation becoming a critical dimension of international competition.

RECOMMENDATIONS FOR A BALANCED FUTURE

As technologies enabling brain–AI integration and cognitive enhancement continue to advance, the importance of anticipatory governance becomes increasingly clear. These systems should not be understood solely as technical instruments. They are also sociopolitical agents with the capacity to influence power dynamics, reshape labor markets, and alter prevailing understandings of individual identity and personhood. In order to ensure that BAI does not become a mechanism for deepening social inequality or exclusion, a policy framework must be developed that is ethically informed, forward-looking, and capable of addressing multiple dimensions of risk and opportunity. A coordinated and multifaceted approach is necessary to align technological innovation with principles of fairness, accessibility, and social responsibility.

Neuro-Civic Infrastructure and Equitable Access

As BAI technologies approach broader societal deployment, a critical question emerges. Who will have access to these capabilities, and under what conditions? If distribution is left solely to the mechanisms of private markets, there is a significant risk that existing inequalities will be replicated. Historical experience suggests that, in the absence of intentional public guidance, transformative technologies have disproportionately benefited those with the least marginal need, while systematically bypassing populations with the greatest potential to benefit from their application (Fukuyama, 2002; Jasanoff, 2005).

In order to avoid such an outcome, the development of a future that respects neurodiversity, one in which varied cognitive profiles are supported and valued, requires a shift in approach. Rather than relying on market incentives alone, there must be a transition toward public stewardship and inclusive infrastructure. This entails a deliberate investment in what may be referred to as a neuro-civic infrastructure. Such a system would include institutional mechanisms, public policies, and access frameworks that prioritize collective well-being over commercial gain.

Examples of such a framework include the following:

  • Publicly supported brain–machine interface clinics that incorporate BAIs into rehabilitation, educational assistance, and assistive technology services.

  • Regulatory environments in which developers may test neurotechnologies under public oversight to ensure ethical compliance.

  • Universal access programs for basic cognitive enhancement, modeled after public internet initiatives, which aim to promote equitable availability of foundational augmentation, particularly where it supports functional participation and social inclusion.

These proposals are not without precedent. Large-scale public health initiatives, such as the Expanded Programme on Immunization or national dialysis systems, have demonstrated that governments can democratize access to life-altering technologies through a combination of public subsidy, national standards, and institutional commitment (Andorno, 2009; Saran et al., 2015; WHO, 2001). Such examples suggest that equitable access is not an unattainable ideal but an administratively viable goal when treated as a public good.

For this reason, public authorities must formulate BAI-specific procurement and deployment strategies. These strategies should focus on reducing cost barriers in therapeutic and accessibility-driven contexts. Such efforts would include systems intended to restore function or support participation for individuals with neurodegenerative disorders, spinal cord injuries, or learning difficulties. Conversely, augmentation platforms that are designed to exceed standard human performance, particularly for use in military, corporate, or elite scientific domains, must be regulated with greater scrutiny. Oversight in these cases should involve ethical review processes, structured licensing models, transparency requirements, and mechanisms of public accountability, all of which aim to prevent the emergence of an unregulated “neuro-elite” (Lin et al., 2013).

Ultimately, equitable access to BAI is not merely a matter of distributing hardware. It is about extending opportunity, preserving dignity, and safeguarding the shared foundations of future potential. Achieving this objective demands governance that treats cognitive augmentation not as a consumer product but as a civic utility. In this view, augmentation may be likened to a public library of cognition or a form of universal broadband for the mind. Absent such a vision, there is a risk of entering an era characterized by neurocapitalism and neuroaristocracy. In such a world, intelligence could be commodified, and human worth may become contingent on one’s capacity to purchase enhancement.

Neuroethical Commons and Ethical Literacy

Technological advancement alone does not ensure social legitimacy. The acceptance of BAI systems will depend not only on their practical effectiveness but also on the moral and cultural frameworks through which they are interpreted, debated, and governed. As BAI increasingly intersects with the core elements of personhood, including notions of identity, autonomy, and cognition, its surrounding ethical structures must reflect the same degree of complexity as the systems it seeks to influence (Fukuyama, 2002).

In light of these developments, there is a growing need to cultivate a neuroethical commons. This concept does not imply a centralized authority, but rather a distributed network of deliberative spaces, educational initiatives, cultural institutions, and policy mechanisms. These venues should facilitate inclusive dialogue among clinicians, engineers, educators, legal scholars, patients, artists, and representatives of historically marginalized communities. Through such engagement, the ethical contours of emerging neurotechnologies may be shaped collectively. Participatory Technology Assessment models, notably exemplified in Switzerland and the Netherlands, offer an institutional precedent for ethically inclusive foresight, contrasting sharply with technocratic modes of governance (Jasanoff, 2005).

The foundation of this commons lies in the cultivation of ethical literacy. This literacy should not be restricted to academic or policy-making circles. It must be fostered across educational levels and social institutions, including secondary schools, vocational programs, local health networks, and publicly accessible digital platforms. A truly transdisciplinary approach is required, one that connects neuroscience and moral philosophy with fields such as disability studies, cultural anthropology, political theory, and comparative legal scholarship. Ethical literacy, broadly disseminated and institutionally integrated, will be critical for navigating the complex social transformations brought about by emerging cognitive technologies.

In parallel, ethical deliberation must be grounded in fundamental principles of human dignity and moral universality. Global bioethical frameworks, such as that proposed by Andorno (2009), emphasize that shared ethical foundations are essential for guiding technological change. Historical cases illustrate the stakes involved: the Deaf community’s response to cochlear implants, as documented by Blume (2009), reveals how even medically beneficial interventions may be perceived as existential threats to cultural identity. Such examples highlight the need for sensitivity to diverse perspectives when introducing life-altering technologies.

Nevertheless, discourse that fails to evolve alongside lived realities risks devolving into mere ceremony. Meaningful incorporation of lived experiences into ethical deliberations, particularly through participatory frameworks that empower affected communities, has been recognized as essential for truly inclusive ethical practices (Pratt, 2021). To remain relevant, ethical engagement must be iterative and dialogical. It should be informed by empirical outcomes, grounded in community experience, and responsive to shifts in public attitudes. The evolution of ethical standards in areas such as organ transplantation and stem cell research further illustrates the necessity of integrating diverse normative perspectives. These perspectives should include principles rooted in justice, dignity, spiritual traditions, and local knowledge systems, rather than relying exclusively on expert consensus or utilitarian frameworks (ISSCR, 2016; UNOS, 2015).

The neuroethical commons, therefore, should not be viewed as a static institution. It should be understood as a dynamic environment in which ethical standards are actively negotiated, questioned, and refined in parallel with the technologies they are meant to govern. Within such a framework, ethical governance is not a secondary or reactive measure. Rather, it is a generative force that helps to shape innovation from its earliest stages. It represents the ethical foundation upon which inclusive and socially legitimate technological futures may be built.

Although the neuroethical commons may enhance societal legitimacy through public engagement and pluralistic deliberation, these values must also be translated into enforceable legal structures. Ethical foresight, while necessary, cannot replace formal legal protection. The next phase of responsible BAI integration must therefore involve embedding these shared values within the architecture of global regulatory governance.

Pillars of Global Neurotechnology Governance

As BAI technologies continue to develop and transcend national boundaries, the need for global governance has shifted from theoretical consideration to practical necessity. These systems do not operate in isolation from human experience. They engage directly with thoughts, memories, intentions, and the sense of self. Without international coordination, the spread of BAI may contribute to a fragmented global environment marked by unequal protections, transnational exploitation, and ethically unregulated experimentation.

To mitigate these risks, it is essential to begin constructing a framework of shared international standards for neurotechnology governance. This proposed structure, which may be referred to as a Global Neuroframework, would comprise norms, safeguards, and cooperative protocols designed to guide the ethical and secure development of emerging technologies. Historical precedents offer instructive models. The Geneva Conventions established binding norms for wartime conduct. Similarly, the World Health Organization has developed globally recognized standards for managing public health emergencies (Ienca & Andorno, 2017; International Committee of the Red Cross, 1949; WHO, 2001). A comparable framework is now needed to address the particular challenges posed by BAI.

In order for BAI technologies to be integrated in ways that are both responsible and equitable, international instruments must specify and uphold ethical, legal, and technical criteria. These criteria must do more than prevent harm. They must also affirmatively protect core human values, including mental autonomy, cognitive integrity, and fairness in environments increasingly shaped by digital and neural augmentation.

Building upon recent scholarship and emerging policy proposals, it is proposed that the Global Neuroframework be anchored in six mutually reinforcing pillars: ethical oversight, transnational regulatory cooperation, data governance, equitable access, neurosecurity, and participatory governance. Each pillar delineates a critical domain where rights, risks, and responsibilities intersect, and together they establish a foundational architecture for the global integration of BAI. The sections that follow articulate the core rights that this architecture must protect, namely cognitive liberty, psychological integrity, the prohibition of coercive use, mental privacy, and neural safety and reversibility, and examine the challenges and imperatives associated with each. Given the accelerating pace of neurotechnological innovation, a proactive and globally coordinated governance framework of this kind is indispensable for aligning scientific progress with the foundational ethical commitments of justice, autonomy, and human dignity.

Cognitive Liberty
The Right to Mental Self-Determination and Freedom from Forced Neurotechnological Intervention

At the heart of ethical neurotechnology lies cognitive liberty. This principle is the foundational right of individuals to govern their own minds. It guarantees that no external entity, whether it be a government, corporation, or algorithm, may access, alter, or interfere with mental activity without the individual’s fully informed and voluntary consent (Bublitz & Merkel, 2014; Sententia, 2004). Cognitive liberty affirms the individual’s sovereign authority over the functioning, modulation, and extension of cognitive processes. It preserves mental autonomy as an essential counterpart to bodily autonomy.

This right protects against both direct interventions, such as compulsory BAI implantation in medical, military, or educational settings, and more covert methods, including state-administered neural stimulation intended to suppress dissent or algorithmic manipulation of belief systems that occurs through real-time neural feedback (Farah, 2012; Ienca & Andorno, 2017). These forms of interference do not forcibly implant specific thoughts, but they can subtly reshape mental landscapes without the individual’s full awareness or consent.

Cognitive liberty encompasses two interrelated dimensions. Negative liberty refers to the freedom from unwanted cognitive interference, forced modulation, or surveillance of mental states. Positive liberty refers to the freedom to access supportive cognitive tools that restore or expand personal agency, particularly for individuals with impairments in speech, mobility, or executive function (Ienca & Andorno, 2017).

For example, consider a soldier who voluntarily consents to the implantation of a BAI with the belief that it will enhance operational resilience. Although the initial decision appears to be free and informed, the system may gradually influence their thought processes by steering perceptions toward heightened threat sensitivity or moral justification of questionable actions. No explicit coercion is involved, yet the cognitive space is subtly restructured, diminishing the individual’s ability to autonomously shape the generation and progression of their thoughts (Bublitz, 2014).

In contrast, imagine a student with a learning disability who adopts a BAI designed to assist with concentration. Here, the use of neurotechnology supports the student’s agency and inclusion. In this case, cognitive liberty is affirmed through the enhancement of personal autonomy rather than its erosion (Ienca & Andorno, 2017; Yuste et al., 2017).

To meaningfully secure cognitive liberty, protections must be embedded both in legal frameworks and in the design of neurotechnological systems. Such protections may include user-controlled neuro-consent dashboards that allow individuals to manage access to cognitive domains in real time. They may also involve built-in safeguards that block unauthorized neural signal access, transparent logs of data interactions, and real-time alerts informing users of any attempts at modulation or unauthorized access.
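To make the preceding description more concrete, the fragment below offers a minimal conceptual sketch, in Python, of how a user-controlled neuro-consent dashboard of the kind described above might gate access to neural data, keep a transparent log of every request, and alert the user to blocked attempts. All class and function names here (ConsentDashboard, AccessRequest, authorize, and so on) are hypothetical illustrations of the safeguard, not a description of any existing system or standard.

# Conceptual sketch only: a user-controlled consent gate for neural data access.
# All names are hypothetical; this illustrates the safeguard, not a real API.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class AccessRequest:
    requester: str   # e.g., "rehab_clinic_app"
    domain: str      # e.g., "motor_intent", "affective_state"
    purpose: str     # human-readable justification shown to the user

@dataclass
class ConsentDashboard:
    """Holds per-domain permissions that only the user may change."""
    permissions: dict[str, bool] = field(default_factory=dict)
    audit_log: list[str] = field(default_factory=list)
    notify_user: Callable[[str], None] = print  # a real system would alert the user's device

    def grant(self, domain: str) -> None:
        self.permissions[domain] = True

    def revoke(self, domain: str) -> None:
        self.permissions[domain] = False

    def authorize(self, request: AccessRequest) -> bool:
        """Deny by default, record every attempt, and alert the user to refusals."""
        allowed = self.permissions.get(request.domain, False)
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(
            f"{stamp} | {request.requester} -> {request.domain} | "
            f"{'GRANTED' if allowed else 'DENIED'} | {request.purpose}"
        )
        if not allowed:
            self.notify_user(
                f"Blocked access attempt by {request.requester} to '{request.domain}'."
            )
        return allowed

# Example: the user permits motor-intent decoding but nothing else.
dashboard = ConsentDashboard()
dashboard.grant("motor_intent")
dashboard.authorize(AccessRequest("rehab_clinic_app", "motor_intent", "cursor control"))
dashboard.authorize(AccessRequest("ad_network", "affective_state", "engagement scoring"))

The design choice illustrated here is deny-by-default: no cognitive domain is readable unless the individual has explicitly granted access, and every attempt, permitted or refused, leaves a record the user can inspect.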

Cognitive liberty is not a luxury or a technical nuance. It constitutes the constitutional right of the mind, particularly in an era where thoughts can be not only observed but also shaped. It ensures that the mental realm remains a sovereign space of freedom, creativity, and dignity, authored solely by the individual who inhabits it (Ienca & Andorno, 2017). Ultimately, cognitive liberty is not determined by the mere presence or absence of technology. Rather, it is defined by the degree to which individuals retain agency within technologically mediated environments.

Psychological Integrity
The Right to Remain Free from Subliminal Manipulation, Neurochemical Coercion, or Behavioral Nudging

Psychological integrity refers to the right of every individual to maintain the internal coherence, stability, and authenticity of their emotional and psychological life. Unlike cognitive liberty, which protects the initiation and control of thoughts, psychological integrity safeguards the spontaneous evolution of emotional and psychological states after they have emerged. It offers protection against subtle forms of influence that may not overtly implant new content but nonetheless alter emotional patterns and mental rhythms without full awareness or informed consent (Bublitz & Merkel, 2014; Ienca & Andorno, 2017).

Technologies such as BAI-driven behavioral nudging and adversarial neurostimulation exemplify these risks. In the case of behavioral nudging, digital assistants or wearable devices adapt to biometric and neural signals collected passively, steering user decisions on matters such as consumer choices, political preferences, or social judgments (Sunstein, 2015; Yeung, 2016). Adversarial neurostimulation, by contrast, uses external signals to elevate emotional reactivity, such as anger or compliance, below the threshold of conscious awareness (Yuste et al., 2017). While often developed with beneficial aims, these interventions can erode an individual’s capacity to form beliefs, preferences, and judgments in an uncoerced and authentic manner.

Consider, for example, an employee who uses a BAI system to enhance operational accuracy. Over time, they notice that their feelings of anger or frustration toward unfair treatment have diminished. They have no recollection of choosing to suppress these emotions. Rather than erasing emotions, the system has subtly modulated their emotional responses, reshaping how they perceive and react to experiences of injustice (Ienca & Andorno, 2017).

In such cases, cognitive liberty remains formally intact, as the person retains access to their thoughts and no external content is forcibly implanted. However, the contours of emotional life have been manipulated in ways that blur the line between authentic and conditioned response. This distinction highlights the unique role of psychological integrity, which protects not the content of thought, but the felt quality and evolution of emotional experience over time (Bublitz, 2014; Farah, 2012).

As AI systems increasingly mediate environments in which individuals think, feel, and relate to others, the preservation of psychological integrity becomes not merely a philosophical concern but a fundamental right. In AI-driven contexts, predictive algorithms, emotional analytics, and cognitive interventions shape mental and emotional landscapes, raising the risk that individuals may lose authentic ownership of their inner experiences. Without explicit safeguards to protect the autonomy and spontaneity of mental life, human agency and selfhood risk being gradually eroded. Psychological integrity must therefore be recognized as a core right, essential to preserving the uniqueness and dignity of every person in an increasingly engineered cognitive environment.

Prohibition of Coercive Use
Respecting the Right to Refuse Neuroenhancement without Systemic Exclusion

The prohibition of coercive use affirms the right of individuals to refuse neurotechnological augmentation without facing systemic exclusion or institutional disadvantage. This includes the freedom to decline using BAIs, cognitive stimulation systems, or performance-enhancing neural devices while maintaining fair access to education, employment, and public services. This principle is grounded in international bioethical frameworks that emphasize human dignity and self-determination (Andorno, 2009), and it is reinforced by concerns arising in specific domains. In the military, for instance, individuals who remain unaugmented may be excluded from advancement opportunities (Lin et al., 2013).

The principle applies not only to explicit coercion, such as making augmentation a formal requirement for employment or institutional participation, but also to structural pressures that subtly favor enhanced individuals. These include reward systems, evaluation standards, or informal incentives that make enhancement an implicit condition for success. The goal is to ensure that neuroenhancement remains a personal choice rather than becoming an unspoken prerequisite for full participation in social, economic, or professional life (Andorno, 2009; Lin et al., 2013).

Vulnerable groups such as students, military conscripts, and workers in precarious employment may face disproportionate pressure to adopt neurotechnologies presented as tools for advancement. In these circumstances, the appearance of opportunity can mask the reality of structural coercion, particularly when declining enhancement limits future prospects (Fukuyama, 2002; Ienca & Andorno, 2017).

Several real-world examples already illustrate these concerns. A soldier may be excluded from elite assignments for choosing not to undergo cognitive augmentation. A student may be assessed according to academic standards that presume enhanced cognitive abilities. An employee who remains neurally unmodified may be informally penalized during promotion evaluations. These developments are not speculative. They reflect an emerging social trajectory in which technological augmentation acts as a filter for access rather than a support for voluntary empowerment (Santoni de Sio & van den Hoven, 2018).

To prevent such outcomes, specific safeguards must be implemented. Policies should prohibit making augmentation a condition for employment, academic admission, or access to public benefits. Institutions must be transparent about how neurotechnological tools influence evaluations and decisions. Assessment systems should accommodate both augmented and non-augmented individuals, recognizing intuitive reasoning, adaptive problem-solving, and creative thinking as essential capabilities that cannot be reduced to mere enhancement. Furthermore, the rights of neurodiverse individuals must be protected from algorithmic exclusion or social stigmatization, ensuring they are not disadvantaged by systems optimized around augmentation norms (Binns et al., 2018).

The prohibition of coercive use does not reject neurotechnology or deny its potential benefits. Rather, it opposes a society where access to opportunities depends on whether a person is willing or able to undergo enhancement. It affirms a pluralistic vision of society where diverse ways of thinking, acting, and contributing are respected. It calls for a future in which individuals can choose freely, be evaluated fairly, and participate fully, regardless of their engagement with neurotechnological tools (Ienca & Andorno, 2017).

Mental Privacy
Protecting Neural Data and Thought Processes from Unauthorized Surveillance or Behavioral Profiling

The principle of mental privacy addresses the need to protect neural data and thought processes from unauthorized surveillance, profiling, or commercial exploitation. As described by Ienca & Andorno (2017) and further reinforced by ethical proposals from Yuste et al. (2017), mental privacy extends to internal experiences that have not yet been expressed, emotions that have not yet been acted upon, and neural signals that were never intended for external access. According to Bublitz & Merkel (2014), even passive neural data can be harvested in ways that bypass consent and undermine personal autonomy. These risks mirror the broader concerns over algorithmic profiling discussed by Pasquale (2015), which emphasize the urgency of protecting mental domains before the brain is reduced to another commercial interface.

Emerging technologies illustrate the stakes of this principle. For example, a worker wearing a BAI to support concentration may unknowingly produce neural data indicating stress or disengagement. Human resource systems could then use that data to flag the individual as a high-risk employee, even if they never voiced concern or requested assistance (Ienca & Andorno, 2017). Similarly, a consumer using a neural interface connected to a shopping application may trigger emotional responses to images or suggestions. These responses, though subconscious, could be captured and used to tailor future advertising, not on the basis of stated preferences, but on unspoken affective signals (Yuste et al., 2017).

Even when such brain-derived data is anonymized, it can still be aggregated, resold, or analyzed to generate behavioral profiles. These profiles may influence a wide range of decisions, including targeted advertising, hiring processes, or political messaging. In such cases, the brain becomes a silent and involuntary source of institutional or commercial insight, and individuals lose control over how their neural signals are interpreted or applied (Pasquale, 2015).

The importance of mental privacy lies in the understanding that not all thoughts are intended to become actions. Not every emotion should be recorded, interpreted, or commodified. Individuals must retain the right to think in private, to explore irrational or contradictory ideas, and even to experience socially transgressive impulses without being monitored or labeled for doing so (Farah, 2012).

If mental privacy is not preserved, the distinction between thought and data collapses. Internal reflection is transformed into analytics, and the mind itself becomes a domain of passive extraction. Upholding this right means recognizing that the brain is not simply another digital interface to be optimized or monetized. It is a protected space of origin, where agency, subjectivity, and personhood begin, unobserved and uncommodified (Ienca & Andorno, 2017; Yuste et al., 2017).

Neural Safety and Reversibility
Standards Ensuring Long-Term Mental Health Protection, Biological Compatibility, and Reversibility of Interventions

Neural safety and reversibility refer to the set of ethical and technical standards designed to ensure that BAI systems are not only effective but also biologically and psychologically safe for long-term use. Because these technologies interface directly with the brain’s neural architecture, their long-term effects on both physiology and cognition must be carefully studied, continuously monitored, and responsibly mitigated (Wolpaw & Wolpaw, 2012).

Ensuring safety requires attention to multiple layers. First, biocompatibility must be evaluated. This involves determining whether the device or signal interface causes inflammation, tissue degradation, or abnormal patterns of neural activity (Birbaumer & Cohen, 2007). Second, psychological stability must be considered. Prolonged use of such systems may result in shifts in mood, behavior, emotional processing, or cognitive flexibility in ways that are difficult to anticipate or control (Gilbert, 2015). Third, the integrity of personal identity must be preserved. Users must be able to continue feeling like themselves. If augmentation disrupts self-perception or alters fundamental personality traits, this poses a serious ethical concern (Parens, 2005).

Reversibility serves as a critical safeguard in this context. Individuals must retain the right and the practical means to deactivate, disconnect, or permanently remove augmentation systems. If someone experiences psychological discomfort, emotional flattening, or a sense of detachment from their thoughts and memories, they should not be trapped within an altered cognitive state that feels unfamiliar or inauthentic.

For example, imagine that a user adopts a BAI-based cognitive assistant to improve memory recall and planning. Initially, they feel sharper, more efficient, and more productive. Over time, however, they begin to rely on the system for even mundane decisions. Their confidence in their own judgment erodes. They feel less spontaneous and emotionally responsive. Eventually, they begin to question whether they are actively living their life or simply executing a program. In such a situation, the ability to disengage is not a matter of convenience but a matter of psychological and moral necessity (Gilbert, 2015; Parens, 2005).

The principle of neural safety and reversibility situates BAI technologies within the established framework of biomedical ethics. This includes informed consent, respect for patient autonomy, and the right to withdraw from interventions. Just as medical devices must be removable or adjustable in response to adverse effects, so must neural systems be designed to remain flexible, revisable, and responsive to individual experience. Systems that modify the brain must not become irreversible by design or dependency.
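The reversibility requirement can also be expressed as a design constraint. The brief sketch below, using hypothetical Python names, illustrates one way an augmentation controller could guarantee a user-initiated deactivation path that returns the system to unassisted baseline behavior; it is a conceptual illustration under these assumptions, not actual device firmware.

# Hypothetical sketch of a reversibility safeguard for an augmentation controller.
# The structural point: deactivation must always be reachable by the user alone,
# and the system must fall back to unassisted (baseline) operation when disengaged.

from enum import Enum, auto

class Mode(Enum):
    BASELINE = auto()   # no augmentation: the device is passive
    ASSISTING = auto()  # augmentation active with the user's ongoing consent

class AugmentationController:
    def __init__(self) -> None:
        self.mode = Mode.BASELINE

    def engage(self, user_confirmed: bool) -> None:
        """Augmentation starts only on an explicit, revocable user decision."""
        if user_confirmed:
            self.mode = Mode.ASSISTING

    def disengage(self) -> None:
        """Always available, not blockable by any other component; returns to baseline."""
        self.mode = Mode.BASELINE

    def process(self, neural_signal: float) -> float:
        # In BASELINE mode the signal passes through untouched, so stopping the
        # augmentation never leaves the user in an altered intermediate state.
        if self.mode is Mode.ASSISTING:
            return self.enhance(neural_signal)
        return neural_signal

    def enhance(self, neural_signal: float) -> float:
        return neural_signal * 1.2  # placeholder for an assistive transformation

controller = AugmentationController()
controller.engage(user_confirmed=True)
assert controller.process(1.0) == 1.2
controller.disengage()                 # the user can always step away
assert controller.process(1.0) == 1.0  # behavior returns to baseline

In this sketch, reversibility is not an afterthought but an invariant of the control flow: the unassisted pathway always exists, so withdrawal from enhancement is guaranteed by construction rather than by policy alone.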

Taken together, these standards form a foundation for the ethical deployment of BAI. They are not only clinical requirements but moral commitments to ensure biocompatibility, psychological coherence, and the enduring right to step away from enhancement when it begins to erode one’s sense of self.

TOWARD ETHICAL INTEGRATION: DUAL-TRACK STRATEGIES FOR HUMAN AUGMENTATION

The emergence of BAI technologies and the anticipated rise of TAHs represent one of the most consequential transformations in the history of human cognition. These systems offer the potential not only to enhance memory, attention, or decision-making, but also to reshape how people engage with knowledge, participate in work, and relate to society. While many public discussions emphasize the dangers of cognitive inequality, surveillance, and elite dominance, it is equally important to consider that, if thoughtfully governed, these technologies may also serve as tools for inclusion, restoration, and democratic renewal.

To realize this possibility, a dual-track policy strategy is proposed. This strategy balances early and equitable integration for underserved populations with strong oversight of elite applications, aiming to align innovation with justice.

The first track prioritizes access for individuals who face cognitive or neurological challenges. These may include those with developmental conditions, brain injuries, or age-related cognitive decline. For such individuals, BAI systems can support fundamental capabilities, such as communication, decision-making, and emotional regulation, that are essential for full participation in society. In these cases, augmentation is not about competitive advantage. It is about restoring agency and dignity, and recognizing enhancement as a pathway to equal opportunity rather than superiority.

This gradual, need-based deployment also offers society time to build ethical norms, legal protections, and cultural understanding. Beginning with inclusive use cases helps build public trust and demonstrates that augmentation is not inherently an instrument of privilege. Instead, it can be designed to reduce structural barriers and close social gaps.

The second track addresses high-performance applications, particularly in domains such as defense, intelligence, and high-risk industries. While these areas may benefit from increased vigilance or faster decision cycles, the stakes are higher. Without robust ethical supervision, cognitive augmentation in these environments risks amplifying power imbalances, accelerating conflict, and creating forms of psychological dependency or burnout. For this reason, elite uses must be subject to clear boundaries and consistent oversight, ensuring alignment with broader human values.

However, this strategy can only succeed if supported by the right institutional architecture. First, a public neuro-civic infrastructure must guarantee access to essential BAI systems for rehabilitation, learning, and accessibility, especially for those who would otherwise be excluded. This infrastructure does not imply universal high-end augmentation but seeks to ensure that foundational neurotechnologies are available as a matter of inclusion, not luxury.

Second, a neuroethical commons must be fostered across schools, public institutions, and community forums. Rather than treating ethics as an afterthought, this model brings citizens, educators, and professionals together in shaping the values and boundaries of neurotechnology. By embedding deliberation into the life cycle of these tools, societies can ensure that development remains responsive to diverse lived experiences and cultural worldviews.

Third, a global governance framework must be established to coordinate norms, protections, and responsibilities across borders. As neurotechnologies transcend jurisdictions, local regulations alone are insufficient. Shared principles, including cognitive liberty, psychological integrity, prohibition of coercive use, mental privacy, and neural safety and reversibility, must be embedded into international standards to prevent manipulation, inequity, exploitation, and escalation within the cognitive domain. Upholding these principles is essential to ensure that individuals retain autonomy over their mental life, preserve the continuity and integrity of their cognitive processes, and remain protected against both direct and structural forms of coercion.

Together, these three institutional pillars make the dual-track strategy viable. They ensure that BAI technologies are introduced not only efficiently but also equitably, and they ground development in democratic participation, cultural respect, and ethical accountability.

In the end, the question is not simply what kinds of minds we are able to enhance. The deeper question is what kind of society we choose to build around those minds. If governance is thoughtful, inclusive, and forward-looking, then brain–AI technologies may not divide us. They may become the very tools through which we strengthen connection, capacity, and collective belonging across lines of ability, access, and identity.

“The future is not some place we are going, but one we are creating. The paths are not to be found, but made. And the activity of making them changes both the maker and the destination.”

- John Schaar -

Competing interests

No potential conflict of interest relevant to this article was reported.

Funding sources

Not applicable.

Acknowledgements

This manuscript was edited for language and style using ChatGPT, a large language model developed by OpenAI.

Availability of data and material

The datasets of this study are available from the corresponding author upon reasonable request.

Authors’ contributions

The article was prepared by a single author.

Ethics approval

Not applicable.

References

1.

Andorno, R. (2009). Human dignity and human rights as a common ground for a global bioethics. The Journal of Medicine and Philosophy, 34(3), 223-240.

2.

Beniger, J. R. (1986). The control revolution: Technological and economic origins of the information society. Harvard University Press.

3.

Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). ‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions. In CHI ’18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14). Montréal, QC, Canada.

4.

Birbaumer, N., & Cohen, L. G. (2007). Brain–computer interfaces: Communication and restoration of movement in paralysis. The Journal of Physiology, 579(3), 621-636.

5.

Blume, S. (2009). The artificial ear: Cochlear implants and the culture of deafness. Rutgers University Press.

6.

Bublitz, J. C. (2014). Freedom of thought in the age of neuroscience. Archiv für Rechts- und Sozialphilosophie, 100(1), 1-25.

7.

Bublitz, J. C., & Merkel, R. (2014). Crimes against minds: On mental manipulations, harms and a human right to mental self-determination. Criminal Law and Philosophy, 8(1), 51-77.

8.

Chaudhary, U., Vlachos, I., Zimmermann, J. B., Espinosa, A., Tonin, A., Jaramillo-Gonzalez, A., Khalili-Ardali, M., Topka, H., Lehmberg, J., Friehs, G. M., Woodtli, A., Donoghue, J. P., & Birbaumer, N. (2022). Spelling interface using intracortical signals in a completely locked-in patient enabled via auditory neurofeedback training. Nature Communications, 13, 1236.

9.

Collins, R. (1998). The sociology of philosophies: A global theory of intellectual change. Harvard University Press.

10.

Farah, M. J. (2012). Neuroethics: The ethical, legal, and societal impact of neuroscience. Annual Review of Psychology, 63, 571-591.

11.

Foucault, M. (1972). The archaeology of knowledge. Pantheon Books.

12.

Fukuyama, F. (2002). Our posthuman future: Consequences of the biotechnology revolution. Farrar, Straus and Giroux.

13.

Gibson, W. (1984). Neuromancer. Ace Books.

14.

Gilbert, F. (2015). A threat to autonomy? The intrusion of predictive brain implants. AJOB Neuroscience, 6(4), 4-11.

15.

Goertzel, B. (2014). Artificial general intelligence: Concept, state of the art, and future prospects. Journal of Artificial General Intelligence, 5(1), 1-46.

16.

Hampson, R. E., Song, D., Robinson, B. S., Fetterhoff, D., Dakos, A. S., Roeder, B. M., She, X., Wicks, R. T., Witcher, M. R., Couture, D. E., Laxton, A. W., Munger-Clary, H., Popli, G., Sollman, M. J., Whitlow, C. T., Marmarelis, V. Z., Berger, T. W., & Deadwyler, S. A. (2018). Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall. Journal of Neural Engineering, 15(3), 036014.

17.

Hobsbawm, E. J. (1952). The machine breakers. Past & Present, 1(1), 57-70.

18.

Ienca, M., & Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13(1), 5.

19.

International Committee of the Red Cross. (1949). The Geneva Conventions of 12 August 1949. https://www.icrc.org/sites/default/files/external/doc/en/assets/files/publications/icrc-002-0173.pdf

20.

International Society for Stem Cell Research (ISSCR). (2016). Guidelines for stem cell research and clinical translation. https://www.isscr.org/guidelines

21.

Jasanoff, S. (2005). Designs on nature: Science and democracy in Europe and the United States. Princeton University Press.

22.

Mehlman, M., Lin, P., & Abney, K. (2013). Enhanced warfighters: Risk, ethics, and policy. Case Western Reserve University.

23.

O’Doherty, J. E., Lebedev, M. A., Ifft, P. J., Zhuang, K. Z., Shokur, S., Bleuler, H., & Nicolelis, M. A. L. (2011). Active tactile exploration using a brain–machine–brain interface. Nature, 479(7372), 228-231.

24.

Parens, E. (2005). Authenticity and ambivalence: Toward understanding the enhancement debate. Hastings Center Report, 35(3), 34-41.

25.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

26.

Pratt, B. (2021). Achieving inclusive research priority-setting: What do people with lived experience and the public think is essential? BMC Medical Ethics, 22(1), 117.

27.

Racine, E. (2010). Pragmatic neuroethics: Improving treatment and understanding of the mind-brain. The MIT Press.

28.

Sale, K. (1995). Rebels against the future: The Luddites and their war on the industrial revolution. Addison-Wesley.

29.

Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI, 5, 15.

30.

Saran, R., Li, Y., Robinson, B., Abbott, K. C., Agodoa, L. Y., Ayanian, J., Balkrishnan, R., Bragg-Gresham, J., Chen, J. T. L., Cope, E., Gipson, D., He, K., Herman, W., Heung, M., Hirth, R. A., Jacobsen, S.S., Kalantar-Zadeh, K., Kovesdy, C. P., Leichtman, A. B., Lu, Y., Molnar, M. Z., Morgenstern, H., Nallamothu, B., O’Hare, A. M., Pisoni, R., Plattner, B., Port, F. K., Rao, P., Rhee, C. M., Schaubel, D. E., Selewski, D. T., Shahinian, V., Sim, J. J., Song, P., Streja, E., Tamura, M. K., Tentori, F., Eggers, P. W., Agodoa, L. Y. C., & Abbott, K. C. (2015). US Renal Data System 2014 Annual Data Report: Epidemiology of kidney disease in the United States. American Journal of Kidney Diseases, 66(1 Suppl 1), S1-S305.

31.

Sententia, L. (2004). Neuroethical considerations: Cognitive liberty and freedom of thought. Journal of Cognitive Liberties, 5(1), 115-120.

32.

Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the 21st century. Penguin Books.

33.

Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568-1580.

34.

Sunstein, C. R. (2015). The ethics of influence: Government in the age of behavioral science. Cambridge University Press.

35.

United Network for Organ Sharing (UNOS). (2015). Ethical principles in the allocation of human organs. https://optn.transplant.hrsa.gov/professionals/by-topic/ethical-considerations/ethical-principles-in-the-allocation-of-human-organs/

36.

Whannell, L. (2018). Upgrade. Blumhouse Productions.

37.

Wolpaw, J. R., & Wolpaw, E. W. (2012). Brain-computer interfaces: Principles and practice. Oxford University Press.

38.

World Health Organization (WHO). (2001). The world health report 2001 – Mental health: New understanding, new hope. WHO.

39.

Wright, E. O. (1997). Class counts: Comparative studies in class analysis. Cambridge University Press.

40.

Yeung, K. (2016). ‘Hypernudge’: Big data as a mode of regulation by design. Information, Communication & Society, 20(1), 118-136.

41.

Yuste, R., Goering, S., Arcas, B. A., Bi, G., Carmena, J. M., Carter, A., Fins, J. J., Friesen, P., Gallant, J., Huggins, J. E., Illes, J., Kellmeyer, P., Klein, E., Marblestone, A., Mitchell, C., Parens, E., Pham, M., Rubel, A., Sadato, N., Sullivan, L. S., Teicher, M., Wasserman, D., Wexler, A., Whittaker, M., & Wolpaw, J. (2017). Four ethical priorities for neurotechnologies and AI. Nature, 551(7679), 159-163.