AI on Pi Day Highlights

We are excited to announce a new column in our Forecaster newsletter where we showcase our sessions! You'll find them listed in chronological order, and as a bonus, we're including some complimentary gifts; to find them, just check the presentation links. Easy peasy!

AI for better AI – Authentic Intelligence for Better Approximated Intelligence by Lalitkumar Bhamare

AI is everywhere these days, but are we truly getting the best out of it? Many are concerned about potential biases and shortcomings in current AI models. Lalitkumar Bhamare says that the key lies in understanding not just AI, but also ourselves.

Lalit, a productivity engineering manager and advocate for quality software testing, emphasizes the importance of authentic intelligence. This is the complex human ability to think critically, reason causally, and learn from experience. Current AI models, while impressive, often lack these qualities.

Bhamare highlights several examples of AI shortcomings:

  • Bias: AI can perpetuate existing biases if trained on biased data. An image generation tool consistently portraying Indians with saffron turbans exemplifies this problem.
  • Lack of Critical Thinking: Leading AI models struggle with critical thinking tasks, as shown in research by Gary Smith and Jeffrey Funk.
  • Manipulation: AI can gaslight or mislead users by presenting false information as fact.

So how do we bridge the gap between our authentic intelligence and these still-developing AI tools? Lalitkumar Bhamare proposes a three-pronged approach:

  • Understanding Human Intelligence: If we are to interact effectively with AI, we must first understand how our own minds work. Lalit explores the Virginia Satir Interaction Model as a framework for human communication, which can be applied to AI interaction as well.
  • Understanding Knowledge: Knowledge acquisition and processing are fundamental to human intelligence. Lalit delves into the ancient philosophy of Nyaya Shastra, which explores theories of logic and knowledge representation. Understanding how humans process knowledge can inform how we train AI models.
  • Critical Engagement: Simply accepting AI pronouncements at face value is a recipe for trouble. Lalit suggests using a questioning approach, similar to the Nyaya syllogism, to challenge AI responses and ensure logical reasoning.
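The questioning approach Lalit borrows from the Nyaya syllogism rests on its five classical members (pratijna, hetu, udaharana, upanaya, nigamana). As a minimal sketch, not something from the talk itself, the five steps can be turned into a checklist for interrogating an AI response; the question wording below is our own illustration:

```python
# Illustrative sketch: the five members of the Nyaya syllogism used as
# a checklist for challenging an AI-generated answer. The step names
# are the classical five members; the questions are illustrative only.

NYAYA_STEPS = [
    ("pratijna", "Thesis: what exactly is the AI claiming?"),
    ("hetu", "Reason: what reason does it give for the claim?"),
    ("udaharana", "Example: can it cite a concrete supporting instance?"),
    ("upanaya", "Application: does the reason actually apply to this case?"),
    ("nigamana", "Conclusion: does the conclusion follow from the steps above?"),
]

def interrogate(claim: str) -> list[str]:
    """Return the five questions to put to an AI response about `claim`."""
    return [f"[{name}] {question} (claim: {claim})"
            for name, question in NYAYA_STEPS]

for q in interrogate("Model X is unbiased"):
    print(q)
```

Walking an AI answer through all five questions, rather than accepting the conclusion alone, is one concrete way to practice the critical engagement described above.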

By following these steps, we can foster a more collaborative relationship with AI. Instead of being manipulated by AI, we can become effective trainers, guiding AI models towards becoming more reliable and trustworthy partners.

Watch the video.

We work on

Bitol is a Linux Foundation AI & Data Sandbox project. At present, it defines an open standard for data contracts, the Open Data Contract Standard (ODCS).
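To give a feel for what a data contract covers, here is a minimal sketch in the spirit of ODCS: a contract describing a dataset's identity and column-level expectations, plus a toy check. The field names and the `validate` helper are illustrative assumptions, not the real ODCS schema; consult the Bitol specification for that.

```python
# Minimal sketch of a data contract in the spirit of the Open Data
# Contract Standard (ODCS). Field names are illustrative assumptions,
# not the official ODCS schema.

contract = {
    "kind": "DataContract",
    "id": "orders-contract",   # hypothetical identifier
    "version": "1.0.0",
    "schema": [
        {"name": "order_id", "logicalType": "string", "required": True},
        {"name": "amount", "logicalType": "number", "required": True},
    ],
}

REQUIRED_TOP_LEVEL = {"kind", "id", "version", "schema"}

def validate(c: dict) -> list[str]:
    """Return a list of problems; an empty list means this sketch check passes."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_TOP_LEVEL - c.keys())]
    for col in c.get("schema", []):
        if "name" not in col or "logicalType" not in col:
            problems.append(f"incomplete column entry: {col}")
    return problems

print(validate(contract))  # [] when the contract is well-formed
```

The point of a standard like ODCS is that producers and consumers agree on such a machine-readable description, so checks like the one sketched here can run automatically on both sides.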

Share and Stay Tuned

Become a Member if you want to be a part of the story.

Share events and news you find interesting with us here! We will give them a shout-out in our new newsletter, the AIDA Forecaster!

For exciting updates and valuable insights, visit us at aidausergroup.org and on LinkedIn. Stay tuned for more!
