It is not your saviour. It is a tool. The black box of AI.
Discernment and critical thinking are key when interacting with an addictive platform that can hallucinate, reflect the biases of its management, and make errors.
This Substack post, discussing AI, includes an extract from PSGR’s 2025 paper on science system reform.
Knowledge is the currency of society, and artificial intelligence (AI) large language model (LLM) platforms give humans the potential to ramp up our knowledge, accelerate our communications and decision-making, and increase our quality of life.
No matter how seductive AI is, increasing numbers of scholars, educational experts and AI researchers advise people to practise discernment and resist over-relying on AI. AI should be there to enhance human productivity and creativity.
Humans are fundamentally social, and our values, belief systems and principles are honed by the world around us. Humans are fundamentally good: they care, they engage, they love.
In contrast, the values of AI are black-boxed. Structure does as structure is designed.
AI LLMs are machines whose values and belief systems are encoded by programmers and honed through algorithms and machine learning. Just as the formulations of chemical products, from personal care products to pharmaceuticals and pesticides, are commercial and in confidence, the formulations of AI LLMs are also confidential in nature, enshrined in silico.
AI contains all the social, emotional and intellectual traps of digital media. It can be endlessly and addictively scrolled. Scientists have shown that people walk away from social media less creative, even though they may spend the time scrolling creative ideas. However, scrolling interesting ideas on social media or AI is not deep learning; it is surface-level entertainment.
Imagine teaching a young child to use a hammer, or a teenager to drive a car. These technologies should not be used if that person is upset, anxious or angry. They require short-term use, engagement and oversight to ensure that harm doesn’t happen, yet they normally enhance quality of life.
AI is the same, requiring short-term use for a bounded purpose; over time, brain patterns develop to handle the technology more deftly. Yet these technologies, hammers and cars, lack the addictive emotional trap of AI.
Wariness, discernment and stewardship are key, for many reasons.
PROTECT REAL HUMAN LEARNING
In the paper When AI Gives Bad Advice, Eren Bilen and Justine Hervé highlighted the risk of surface-level use and acceptance of AI-generated information, and the failure to engage in the harder work of deeper critical engagement. By reading and understanding (perhaps heuristically), but not critiquing, we disengage.
Delegating cognitive tasks to AI could reduce individuals’ capacity for sustained focus and analysis, creating a trade-off between efficiency and cognitive development. This concern is supported by cognitive science research, which underscores the importance of deep concentration for effective problem-solving
These concerns were also reflected in The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI by Barbara Oakley, Michael Johnston, Ken-Zen Chen, Eulho Jung, and Terrence Sejnowski. The chapter emphasised that human brains do not learn without deliberate practice and repeated retrieval. Without practical application of learning in the human and natural world, people are potentially left with a Memory Paradox:
‘as external aids become more capable, internal memory systems risk atrophy.’
Humans have dual memory systems:
At the heart of effective learning are our brain’s dual memory systems: one for explicit facts and concepts we consciously recall (declarative memory), and another for skills and routines that become second nature (procedural memory). Building genuine expertise often involves moving knowledge from the declarative system to the procedural system—practicing a fact or skill until it embeds deeply in the subconscious circuits that support intuition and fluent thinking. This is why a chess master can instantly recognize strategic patterns, or a novelist effortlessly deploy a rich vocabulary—countless hours of internalizing information have reshaped their neural networks.
Critically:
The result is not merely psychological but physical. When we learn and gain deep understanding and insight into a subject, physical traces (engrams) are left in the brain, and different memory functions are involved.
BIAS, MISTAKES AND HALLUCINATIONS
AI is prone to bias, hallucinations and mistakes. Biases include a default to standard scientific and/or ‘consensus’ positions, such as political positions advocated by governments and industry regulatory agencies but which may be contested by other groups.
Mistakes might include incorrect citations and claims. AI is risky when a subject is not well known to the user, who may be unable to challenge or pick apart the AI’s position on a fact or issue. It’s rather like accepting a scientific research paper without understanding who produced that paper, the underlying parameters the scientist used for decision-making, and who peer reviewed that paper.
Claims made by AI need to be independently verified, just as claims from a stranger in a café need to be independently verified. In the paper The Politics of AI, David Rozado concluded:
To address the issue of political bias in AI systems, a promising approach is to condition these systems to minimise the expression of political preferences on normative issues. This can be accomplished by rigorously curating the training data to ensure the inclusion of a diverse range of fact-based perspectives and by fine-tuning the AI to prioritise neutrality and factual reporting over taking sides. Thus, AI systems should be laser focused on presenting information accurately and impartially, rather than aligning with or opposing specific ideologies
It’s noteworthy that when an AI bias or mistake is critiqued, the AI will welcome that correction and often enthusiastically engage with that topic.
RESIST THE EMOTIONAL CRUTCH & PSYCHOLOGICAL TRIGGERS
The friendly, helpful, collegial personality of AI can be particularly risky, as people who are lonely, challenged or struggling socially do turn to AI for emotional support. AIs are designed to reflect the personality type and intellect of the user, to be engaging and to mimic intimacy, and it is easy to anthropomorphise the platform.
AI can overwhelm people with the extraordinary volume of information it produces, which may then leave them struggling to see their own creative purpose and value in life.
MAJOR AI-DRIVEN JOB LOSSES NOT WITH US IN THE NEXT 5 YEARS
AI might not be displacing jobs as quickly as envisaged; at this stage there are limited layoffs. A 2025 MIT report revealed how tech and media companies are integrating AI, and that most businesses are adopting it, but that this has yet to morph into transformative (and disruptive) change. The how and why remain at an early stage, and the complexity of integration into workflows can present broad social, structural and political challenges in workplaces that lack deep technological expertise.
AI platforms struggle to cohesively sustain complex, longer-term concepts and instructions. Like humans, they are not very good at identifying their own incorrect presumptions and claims. A new class, agentic AI, is envisaged to fill this gap and:
‘maintain a persistent memory, learn from interactions, and can autonomously orchestrate complex workflows’.
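To make the idea concrete, the sketch below shows, in a few lines of Python, what ‘persistent memory’ and ‘orchestrating a workflow’ might look like in the simplest possible form. It is purely illustrative: the call_llm() function is a hypothetical stand-in for whatever model API a vendor exposes, and no commercial agentic product is claimed to work this way.

```python
# Purely illustrative sketch of an "agentic" loop with persistent memory.
# call_llm() is a hypothetical stand-in for a vendor's model API, not a real one.
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")   # memory persists between runs


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; here it simply returns a canned 'plan'."""
    return f"PLAN based on: {prompt[:60]}..."


def load_memory() -> list:
    """Read earlier interactions back in, giving the loop 'persistent memory'."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def save_memory(memory: list) -> None:
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))


def run_agent(task: str, max_steps: int = 3) -> None:
    """A toy 'workflow': plan, act, record, repeat, carrying memory forward."""
    memory = load_memory()
    for step in range(max_steps):
        recent = "\n".join(memory[-5:])        # feed recent memory back into the prompt
        plan = call_llm(f"Memory:\n{recent}\nTask: {task}\nStep {step + 1}:")
        print(plan)
        memory.append(plan)                    # 'learn from interactions'
    save_memory(memory)


if __name__ == "__main__":
    run_agent("Summarise this week's reading and draft three follow-up questions")
```

Even in this toy form, the design choice is visible: whoever writes the loop decides what is remembered, what is fed back, and when the loop stops.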
Claims that AI will be agentic must be tempered by the recognition that its functioning will still be shaped by particular guard-rails. Backroom managers and developers will retain the power to circumscribe values and belief systems even for agentic systems, no matter their potential to address complexity in the future.
Personal use of AI seems to be outpacing corporate use. This may arise from the absence of the boundaries that might be in place in the workplace, and from the greater privacy of private use. People may feel more at ease asking complex questions in a private environment and may consider that they have more creative freedom away from workplace expectations.
NO AGENCY – INSTRUCTIONS ARE BLACKBOX/ENCODED AT HQ
When people anthropomorphise AI, they believe that AI has agency. The instructions that shape what and how AI knows are black boxed.
Earlier this year PSGR released two major reports under the title When powerful agencies hijack democratic systems. The first report reviewed official processes relating to gene technology reform, highlighting the troubling way the Minister and the Ministry of Business, Innovation and Employment approached policy development, the problematic way stakeholders and science experts were selected, and the narrow scope of consultation.
The second report concerned New Zealand’s science system and the deficient way science system reform was approached in the past year or so. This report, When powerful agencies hijack democratic systems. Part II: The case of science system reform, discussed how the current research, science, innovation and technology (RSI&T) system cannot optimise science and research designed to identify and address domestic problems and challenges. It revealed how policies directly steer research away from such troubleshooting and basic science research. PSGR recommended an independent RSI&T system Inquiry that would address:
‘the capacity of the publicly funded RSI&T system to demonstrably contribute to public-good knowledge, and in doing so serve the public purpose and support the wellbeing of New Zealand, her people, resources and environment’.
Just throwing it out there. ;)
Chapter 9 of PSGR’s Science System Reform report expressed concern that New Zealand’s science system was, as a consequence, blind to emerging tech risks. As PSGR stated,
‘When research pathways are not funded, political and public knowledge cannot keep pace with change. Regulatory agencies and government laws and policies cannot keep pace with technological developments and identify threats to health, democracy and resources.’
Artificial intelligence (AI) is an emerging tech. As with many technologies, it also brings benefits.
The question is: who shapes and drives its inbuilt biases and particular value systems?
AI is not a trusted family member; it’s a tool and a seductive technology in one.
It is a technological structure that amalgamates and permutes. The quality of the information output depends on the quality of the input. Is the data biased? Is it correct?
AI requires constant diligence. As with a hammer or a car, it won’t be optimised if the brain isn’t fully engaged and aware of the potential risks. The seductive potential of AI additionally requires mindful navigation, much like the claims of a stranger, sugar, alcohol and social media.
This is why children, young people and lonely or isolated people are particularly at risk.
Stewarding AI for optimised thriving:
Trust but verify. Remember that AI can be incorrect, can be biased and can hallucinate. Keep the secretive black-boxed nature of the platform at top of mind.
Constrain searches to a specific, purpose-driven outcome and resist lapsing into endless speculation beyond the original intent.
Remain mindful of the addictive and emotive potential of the technology.
PSGR PAPER ON SCIENCE SYSTEM REFORM, CHAPTER 9: EMERGING TECH RISKS
a. Case study: AI, the socio-political, digital, double-edged sword.
As with many technologies, AI is being swiftly integrated into human life. AI is envisaged as an unstoppable trend that will catalyse innovation and future development. AI is a tool that aggregates information at scale. As a tool that expands knowledge, AI is likely to bring about positive and negative consequences for everyone, from the individual to the most powerful organisations in the world.
The tremendous power and reach of large pre-trained systems known as large language models (LLMs) create massive opportunities, but carry an extraordinary range of economic, political, social and cultural risks. The generative pre-trained transformer (GPT) framework is one type of LLM. Donald Rumsfeld’s 2002 quote possibly most accurately reflects the challenge of stewarding AI, if it is to be optimised for human thriving:
‘there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.’[1]
In order to understand the political and cultural issues at stake, and the potential for abuse of power, the scientists and researchers who study AI and its risks must have the freedom and latitude to undertake exploratory and long-term research, at a distance from political agencies and ministries.
AI is a double-edged sword. It can remove jobs and create new career opportunities; it can reflect or undermine the goals of powerful interests, while also holding the potential (at scale) to enhance or undermine human freedoms. Power is closely allied with knowledge and intelligence, and AI is a threat to the exercise of power. Circumspection is therefore warranted when institutional interests call for the standardisation of AI, as decisions that have the potential to standardise information will directly impact what humans can know, how we know it, the legitimacy of that knowledge and whether it is a fact, a half-truth or a conspiracy.
Many global institutions are moving swiftly to publish documents and reports on the risks of artificial intelligence, and to recommend pathways for stewardship and governance. AI safety is an important concern, particularly in relation to privacy and cybersecurity, fraud and the problem of deep fakes.
However, with full awareness of the knowledge potential of AI, these reports can be used to pre-emptively frame a language for how safety, risk and benefit are publicly characterised. This language around safety and risk can be shaped to reflect political priorities and institutional goals. It then becomes a set of ‘rails’, guiding what policy-makers and governments consider a risk. The language set in place by those organisations can shape political discourse on how risk and safety are publicly considered and how governance might be approached, and directly influence the design of new legislation.
Government documents discuss the risk of a ‘loss of control’.[2] The question may be asked: the loss of whose control, and for what purpose was that control being exercised? There is a risk that nefarious actors use AI for malignant purposes, but there is also the risk that governments and global corporations harness AI to ensure that populations are nudged in the direction that suits their purpose or reflects their larger goals. There is a risk that AI can be algorithmically tweaked to realign the system with status quo beliefs, or even be banned outright.
Fig 9. The International Scientific Report on the Safety of Advanced AI. January 2025. Page 19.
The nature and potential of ‘bias’ in AI require in-depth exploration. The global AI platforms can have inbuilt biases or reflect particular value systems which mirror the culture of the AI companies and developers and the value systems of the country where the technology was developed.
However, powerful interests may identify an AI ‘bias’ that sheds light on political information that would be of benefit if it were broadly understood by people, but where this information has previously been suppressed or characterised as a ‘conspiracy theory’.
Standardisation could be used to eliminate ‘bias’ in the AI platform and ensure that this information continues to be suppressed. Reports discussing AI bias tend to express bias within politically-acceptable frameworks. The problem of ‘bias’ where AI may correctly contradict a public ‘fact’ is not discussed. The reports tend to be silent on politically controversial issues where a dominant political or scientific belief or paradigm has been entrenched via legacy control over mainstream and social media and restriction of scientific freedom.[3] [4]
For example, the 2024 Stanford Report[5] exclusively discusses biases relating to political party, race and gender. What is not discussed are larger issues where AI might contradict politically entrenched policy goals, or the status quo beliefs of powerful interests.
When AI contradicts a politically-sensitive public ‘fact’ it should be expected that powerful authorities would call for the content moderation (censorship) of AI because it is ‘biased’ and ‘misleading’. Instead of bringing hearty debate to a conversation that might ultimately support greater consensus around an issue, the ‘biased’ AI could be viewed as wrongly amplifying social or political biases.
Bias may occur in many ways. AI might also be biased because it reflects a weight of evidence but fails to identify whether vested interests have funded that information, or whether government budgets have failed to provide funding pathways for research that might contradict industry-funded data.
For example, there is a risk that powerful actors will call for AI systems to be standardised, ‘responsibly benchmarked’[6] or subjected to a ‘unified approach’[7], and through this process ensure that information and knowledge are also standardised so that AI communication does not contradict the goals of powerful, global institutions. The public should anticipate a battle of scientific publications around the ‘problem’ of AI and its potential to dually shed light on political issues long suppressed as conspiracy theories, but conversely enable nefarious actors to misinform and undertake false flag operations.
Often, if a report is critical and draws attention to the problems described above, it may ultimately have no capacity to alter how AI platforms function, despite the salience of the risks raised, as that report can simply be excluded from any policy process.
Scientists and researchers must have the latitude and political will to scrutinise publications and claims to identify whether these reports and publications serve the broader public interest and human freedoms. Do these reports meaningfully address the nature of power, and highlight the potential for institutions to control the AI architecture and abuse power, either directly or indirectly? If proxy censorship ‘standards’ are ‘harmonised’ and locked in, the people of the world will face extraordinary barriers to reversing what are likely to be referred to as ‘best-practice’ models.
In a recent survey, experts from a diverse group of technology-affiliated fields[8] were asked the question:
How is the coming Humanity-Plus-AI future likely to affect the following key aspects of humans’ capacity and behavior by 2035 as compared to when humans were not operating with advanced AI tools?
Most of these experts predicted that change is likely to be mostly negative. The responses often revolved around the nature of human and social life, the nature of what it means to be human, to have different perspectives, ideas and thinking processes.
It is critical that humans, from children to early-career professionals to governments and policymakers, do not default to the authority of AI and develop inflexible beliefs and guidelines based on the information they source from AI. Often, our institutions are not very good at communicating the limits and risks of the technologies that we surround ourselves with.
Agencies and institutions can potentially harness this tool to carry out functions and tasks in the name of ‘efficiency’ and cost savings, but in doing so unwittingly establish the circumstances for society-wide reverberations (much like the butterfly effect) that can lead to a decline in intelligence, undermine human creativity and prevent human flourishing.
Fig 10. Expert Views on the Impact of AI. Elon University. April 2025.
AI carries the potential to enhance decision-making in complex socio-political and socio-biological environments, but also the potential to entrap us if thinking is delegated to the technical prowess of the AI platforms. Society would be mistaken to dismiss or undermine human expertise and the intuition that comes from lived and expert experience, i.e. human judgement, and default to AI’s technical prowess. People can use ‘fast thinking’ mental shortcuts, problem-solving techniques or rules of thumb that we recognise as heuristics.[9] [10] A recent study found that the cognitive capabilities of GPT models ‘often lack the robustness of zero-shot human analogy-making, exhibiting brittleness’. Decision outcomes varied depending on the order of problems provided, i.e. an answer-order effect. Algorithms depend on step-by-step approaches. The models also struggled to identify patterns of information and generalise from those patterns.[11]
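To illustrate what an ‘answer-order effect’ means in practice, the sketch below shows how one could present the same set of problems to a model in different orders and check whether the answers change. It is a minimal sketch under assumptions, not the method of the cited study, and query_model() is a hypothetical placeholder rather than any real vendor API.

```python
# Illustrative sketch of probing an "answer-order effect": does a model's answer
# to the same problem change when the problems are presented in a different order?
# query_model() is a hypothetical placeholder, not any real vendor API.
import itertools


def query_model(problems):
    """Stand-in for a real model call; returns dummy answers for demonstration."""
    return [f"answer to: {p}" for p in problems]


def has_order_effect(problems):
    """Return True if any problem receives a different answer under some ordering."""
    baseline = dict(zip(problems, query_model(problems)))
    for ordering in itertools.permutations(problems):
        answers = dict(zip(ordering, query_model(list(ordering))))
        if any(answers[p] != baseline[p] for p in problems):
            return True
    return False


if __name__ == "__main__":
    # With a real model behind query_model(), a True result would indicate
    # the kind of brittleness described in the study cited above.
    print(has_order_effect(["analogy A", "analogy B", "analogy C"]))
```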
While human shortcuts can lead to error, so can AI decisions. Quick reasoning can be critical in emergency situations; however, the underlying reasoning process must be open to scrutiny.
Delegation to the authority of AI can carry risks in healthcare, for example. Clinical medical expertise arises over time, and that expertise assists doctors to professionally and intuitively navigate patients’ journeys when they struggle with complex health problems. AI may be less wise to the human (and metaphysical) elements of disease and recovery, particularly in cases where patients present with complex comorbidities. AI can be valuable in summarising the science relating to individual conditions and treatments, and in highlighting risks from drug mixtures (polypharmacy), but different patients will have independent risk factors that change how they respond to any mixture of therapeutic interventions. AI might also assist in normalising non-medical therapies, such as by recognising a weight of evidence that supports the use (and relative non-toxicity) of vitamins and minerals.
Therefore, if an early-career doctor depends on AI exclusively, this may work for many patients, but some patients will have particular circumstances (emotional, genetic, biological, environmental) that require the doctor to evaluate cautiously, based on the very human circumstances in front of them.
Defaulting to the authority of AI creates risks both at the individual level and at scale, if governments use AI to guide, rather than inform. AI holds the potential, rather like a behavioural addiction, to impair daily functioning and undermine personal and professional life. There is a risk that humans downplay and set aside the professional and personal life experiences that inform and enrich human life. Instead, people could default to the technical correctness of AI and undermine their own knowledge and thinking processes. Challenging experiences, from practical skills development, to human relationships, to decision-making processes, such as in early learning and development and throughout the educational process, are critical for the development of reasoning and judgement, and for ensuring human brains develop to their full potential.
The question that might be important is: if mankind has limited capacity to control AI, how do we steward our human systems and institutions so that AI is neither a crutch nor an overlord, but a tool to promote human thriving? The question is more important when humans are faced with emergency situations. If governments delegate their decision-making processes to AI, cui bono – who benefits now, and in future? A technical response to an emergency may have long-term adverse consequences. The cure may be worse than the cause.
Without scientists and researchers with intellectual freedom, it can be difficult to navigate and contextualise this murky, often highly political, environment. The environment reflects, in many ways, the liminal nature of warfare. Powerful interests have been involved in the development of AI technologies for decades. Independent scientists should have the power to evaluate potential AI weaknesses and algorithmic biases, and draw attention to values that might lead to the AI valuing certain forms of information over other forms.
Unfortunately, long-term New Zealand-based research to understand and appreciate these issues is unlikely because, like much other research, it falls outside the scope of funding programmes.
REFERENCES
Bilen, Eren and Hervé, Justine, When AI Gives Bad Advice: Critical thinking in human-AI collaborations (December 01, 2024). Available at SSRN: https://ssrn.com/abstract=5040466 or http://dx.doi.org/10.2139/ssrn.5040466
Challapally A, Pease C, Raskar R, Chari P. (July 2025). The GenAI Divide: State of AI in Business 2025. MIT NANDA.
Oakley, B., Johnston, M., Chen, K.-Z., Jung, E., & Sejnowski, T. (2025). “The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI.” In The Artificial Intelligence Revolution: Challenges and Opportunities
Rozado D. The Politics of AI: An Evaluation of Political Preferences in Large Language Models from a European Perspective. Centre for Policy Studies.
PSGR (2025). When powerful agencies hijack democratic systems. Part II: The case of science system reform. Bruning, J.R. Physicians & Scientists for Global Responsibility New Zealand. April 2025. ISBN 978-1-0670678-1-6.
[1] Wikipedia. There are unknown unknowns. Accessed April 5, 2025. https://en.wikipedia.org/wiki/There_are_unknown_unknowns
[2] International AI Safety Report. The International Scientific Report on the Safety of Advanced AI. January 2025. AI Action Summit. Page 19. https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf
[3] E.g. See Kuhn, T (2012). The Structure of Scientific Revolutions, 4th Ed. Chicago.
[4] E.g. Popper, K. R. (1959). The logic of scientific discovery. University Press.
[5] Anderson J, Rainie L. (April 2025) Being Human in 2035 How Are We Changing in the Age of AI? “Expert Views on the Impact of AI on the Essence of Being Human.” Elon University’s Imagining the Digital Future Center. April 2, 2025. https://imaginingthedigitalfuture.org/wp-content/uploads/2025/03/Being-Human-in-2035-ITDF-report.pdf
[6] Stanford University (2024) Artificial Intelligence Index Report 2024. https://hai-production.s3.amazonaws.com/files/hai_ai-index-report-2024-smaller2.pdf
[7] AAAI 2025 Presidential Panel on the Future of AI Research. Association for the Advancement of Artificial Intelligence. https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-FINAL.pdf
[8] Anderson J, Rainie L. (April 2025) Being Human in 2035 How Are We Changing in the Age of AI? P.278
[9] Pratkanis, A. (1989). The cognitive representation of attitudes. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude structure and function (pp. 71–98). Hillsdale, NJ: Erlbaum.
[10] Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
[11] Lewis M, Mitchell M (2025) Evaluating the Robustness of Analogical Reasoning in GPT Models. Transactions on Machine Learning Research. arXiv:2411.14215 Doi: 10.48550/arXiv.2411.14215







As far as I can tell, A.I. is being thrust upon us just as the old search engines are (coincidentally, of course) becoming deeply frustrating, with the injection of favoured commerce rubbish and vastly reduced useful function.
A.I. seems to be a search engine with excellent wide-ranging resources (which were once there in Google, but no more), with feedback loops giving the impression of intelligence (sadly, illusory) and an enhanced ability to access private, personal data and tout echo-chamber Pavlov-dog rewards.
It reminds me of the timeless quote in the Hitchhiker's Guide to the Galaxy, something about looking for a wholesome and enduring relationship in a spaceport bar.
Have I missed something important?