Understanding AI Through Four Crucial Filters

This piece is an excerpt from The Algorithmic Bridge, an educational newsletter designed to bridge the gap between algorithms and everyday people. It aims to clarify AI's influence on our lives and equip readers with the tools they need to navigate the future.

The Algorithmic Bridge was established to address a significant gap: educating non-experts about AI, especially how it intersects with the many facets of everyday life. It focuses on the broader implications of AI beyond its technical details.

I firmly believe that a foundational understanding of AI is essential for everyone—not necessarily the intricate technicalities, but its effects on our daily experiences. That's the core objective of The Algorithmic Bridge.

Despite my emphasis on applying "critical thinking" and maintaining "healthy skepticism" while learning about AI, I acknowledge that I've never detailed the practical steps to achieve this.

This article aims to elucidate the "how" by introducing four critical filters.

When I apply my "critical thinking" lens, I identify four distinct filters that stand between me and the true nature of AI. These filters aren't always at work, but when they are, it's crucial to remain vigilant.

I consciously apply these filters when I consume news articles, corporate announcements, research papers, books, and more. Neglecting them can distort my perception, leading to a false sense of understanding.

Today, I will illustrate these filters with examples, explaining their existence and how to recognize them.

While I may be biased, I consider this a vital guide for anyone looking to sift through the noise in AI discussions. If you want to differentiate between marketing hype and genuine insights, keeping this framework in mind will be invaluable.

(Note: It's important to acknowledge that The Algorithmic Bridge—and this article—are also influenced by these filters. Although I strive for objectivity, achieving it fully is challenging. For instance, to engage readers, I often craft articles to be appealing, which may conflict with complete impartiality.)

Knowledge is Not Neutral

Before diving into the filters, it's essential to clarify why this discussion is critical.

The primary reason is straightforward: knowledge is not neutral. AI, being a highly strategic technology, amplifies this non-neutrality. Understanding how knowledge about AI becomes biased can help mitigate those biases.

While it may seem evident, many individuals perceive AI through a simplistic lens—as merely an emerging technology.

In reality, AI intertwines with global economic dynamics, geopolitical relations, and cultural evolution.

Accepting this premise implies that individuals involved in AI—be it executives, researchers, or government officials—are motivated by specific interests.

Where interests exist, there is often a tendency to manipulate information and knowledge accordingly. This manipulation might not always equate to outright deception; instead, it often comprises half-truths, selective emphases, and exaggerations.

It is crucial for you, as the knowledge consumer, to recognize these biases, which arise from the four filters I will discuss.

Now, let's explore these filters in the order they typically manifest (I won't delve into the financial motivations behind them, to keep the explanations engaging).

1. The Enhancement of Findings

The first filter emerges from academia and industry researchers, who often emphasize their most favorable results. They may also have personal motivations to portray their findings as significant.

Highlighting Positive Results

A recent example is Meta's Galactica, a language model that was touted as a groundbreaking tool for "organizing science" based on impressive benchmark performance.

However, when AI experts evaluated it, they found it readily produced authoritative-sounding nonsense. The authors' choice to showcase only the model's best examples obscured its substantial shortcomings.

It wasn't until substantial public backlash that the extent of Galactica's failures became clear. Researchers, adhering to common practices, presented their findings in the most favorable light, which misled those who skimmed the documents or utilized the model without sufficient scientific rigor.

Seeking Significance

Researchers, like anyone else, are driven by personal aspirations and needs. These motivations can significantly influence their scientific endeavors and interpretations of their findings.

For example, a scientist who believes that artificial general intelligence (AGI) will emerge in five years may perceive AI's capabilities differently than one who thinks it is still decades away.

While it is impossible to account for each author's personal motivations every time you read a study, it's essential to remain aware of this potential bias.

2. The Tendency to Overstate

The second filter arises from the institutions to which researchers belong—such as companies, universities, and governments. I will focus primarily on private companies, as their incentives tend to be more pronounced.

AI companies and their representatives often have the motivation to exaggerate the capabilities of their technologies.

This contrasts with merely highlighting the best results. Overclaiming means asserting that an AI model can perform tasks it is not actually capable of.

For instance, while discussing Galactica, I noted:

> "We must discern the differences between what the authors claim Galactica can do (but can't) and what it genuinely accomplishes."

Notably, the term "reasoning" appears numerous times in the model's documentation, yet it lacks this capability altogether—a clear instance of overclaiming.

This tendency is prevalent in the generative AI space, where terms like "understand," "reason," and "think" are frequently misapplied.

Another notorious example of overstatement occurred when OpenAI's Ilya Sutskever suggested that today's large neural networks may be "slightly conscious." Such unfounded claims can lead to widespread misconceptions about AI's actual capabilities.

3. The Allure of Sensationalism

The third filter is shaped by how information is disseminated.

This filter is perhaps the most visible and pervasive since many non-experts rely on news outlets for AI updates rather than reading academic papers or corporate reports.

While news outlets are generally incentivized to present truthful information, the pressure to attract readers often leads to sensationalized headlines.

As public expectations rise for groundbreaking AI advancements, media outlets may feel compelled to exaggerate their stories to drive engagement.

This tendency is not exclusive to major news sources; bloggers, freelance writers, and content creators can also succumb to this pressure.

While reputable outlets usually maintain journalistic standards, the widespread availability of online platforms makes it difficult to ascertain the accuracy of the information being presented.

4. The Human Tendency to Anthropomorphize

The final filter is intrinsic to all humans, including you and me.

We have an evolutionary predisposition to attribute human characteristics to entities that exhibit even slight human-like traits, such as agency or communication skills.

AI systems, particularly advanced language models like GPT-3, often trigger this anthropomorphism due to their seemingly human-like responses.

For instance, an exchange between GPT-3 and another AI model, J1-Jumbo, appeared eerily similar to human conversation, leading some readers to perceive them as sentient beings.

Even if researchers present a balanced view of their findings, companies accurately represent their technology, and news outlets adhere to factual reporting, the tendency to anthropomorphize can still cloud judgment.

Conclusions

AI is poised to alter our world significantly. Google's CEO has said it may prove more profound than "electricity or fire." Philosopher Nick Bostrom has even suggested it could be the "last invention" humanity ever needs to make.

These comparisons are difficult to quantify, but one thing is clear: understanding AI is crucial.

By approaching AI literature with awareness of these filters, you will be better equipped to discern the motivations of various stakeholders and grasp how they shape perceptions, ultimately uncovering the underlying truths.

While this framework may not provide a foolproof method for uncovering truth, it serves as a valuable guide for navigating the complexities of AI.

To stay informed, consider subscribing to The Algorithmic Bridge, a newsletter designed to connect algorithms with the people they impact.

You can also support my work on Medium directly by becoming a member using my referral link here! :)
