The problem with AI is us

Produced by DALL-E

“If knowledge can create problems, it is not through ignorance that we can solve them.” – Isaac Asimov

As organisations rush to embrace generative AI, a surprising obstacle emerges: ourselves. While AI’s potential seems boundless, human nature and societal structures often stand in the way of its most ambitious proposals. Here are three key ways in which we, as humans, limit AI’s effectiveness:

1. Implementation Roadblocks

Consider an AI system proposing a comprehensive solution to urban homelessness. Despite being data-driven and cost-effective, implementation faces significant human-centric challenges:

  • Social resistance: Local communities oppose new housing developments (NIMBYism – “not in my backyard”)
  • Political complexity: Legislators struggle to enact necessary policy changes
  • Institutional inertia: Existing systems resist rapid, transformative changes

As Navneet Alang notes in a recent Guardian article, while AI shows promise in areas like optimising solar panel placement in India, the main obstacles are often “the lack of resources, the absence of political will, the power of entrenched interests and, more plainly, money.”

Alang astutely observes: “In some cases, the solutions to these problems are superficially simple. Homelessness, for example, is reduced when there are more and cheaper homes. But the fixes are difficult to implement because of social and political forces, not a lack of insight, thinking, or novelty.”

2. Unrealistic Expectations

There’s a growing tendency towards “solutionism” – a term popularised by Evgeny Morozov for the belief that technology can cure complex societal problems. This mindset often leads to unrealistic expectations of AI’s capabilities.

We risk viewing these systems as infallible oracles, capable of solving intricate societal problems with a few prompts. This misalignment between expectation and reality can result in disillusionment when AI-generated solutions face real-world challenges.

The truth is, many global challenges aren’t primarily caused by a lack of intelligence or computing power. The solutions are often known, but implementation is hindered by human and societal factors that AI alone cannot overcome.

3. Inherited Bias

Perhaps the most insidious way we limit AI’s potential is through the biases we unknowingly embed in these systems from the start. Large Language Models (LLMs), trained on vast amounts of human-generated data, inadvertently absorb and perpetuate our prejudices and biases.

This “inheritance” can lead to skewed outputs, unfair decision-making, and the reinforcement of existing societal inequalities. For instance, an AI system trained on historical hiring data might perpetuate gender or racial biases in job candidate recommendations, mirroring past discriminatory practices.
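To make this concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The data are invented toy values, not a real dataset: a classifier fitted to historical hiring decisions that were skewed against one group will faithfully reproduce that skew for new candidates.

```python
# A minimal, hypothetical sketch: a classifier trained on biased
# historical hiring data reproduces the bias. The data below are
# invented toy values for illustration only.
from sklearn.linear_model import LogisticRegression

# Features per candidate: [years_of_experience, group]
# where "group" (0 or 1) stands in for a protected attribute.
# The historical labels reflect past discrimination: group-1
# candidates were rejected despite identical experience.
X = [[4, 0], [5, 0], [6, 0],
     [4, 1], [5, 1], [6, 1]]
y = [1, 1, 1,   # group 0: hired
     0, 0, 0]   # group 1: rejected, for no job-related reason

model = LogisticRegression().fit(X, y)

# Two new candidates, equal in every respect except group membership:
print(model.predict([[5, 0], [5, 1]]))  # -> [1 0]: the past bias is inherited
```

Nothing in the algorithm itself is prejudiced; the bias lives entirely in the labels we supplied.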

The challenge of bias in AI is a stark reminder that these systems are not objective oracles, but rather mirrors reflecting our own flawed perspectives and societal inequities. Addressing this requires not just technical solutions, but a deep examination of the data we feed into these systems and the societal contexts in which they operate.

As we stand at this technological frontier, the question isn’t whether AI is capable, but whether we are prepared to evolve alongside it – not just technologically, but ethically, socially, and politically. To leverage generative AI effectively, we must recognise and address these human factors, cultivating a culture that embraces technological change while critically examining the limitations inherent in both AI systems and ourselves.

> END OF LINE