Next event: AI
When: 4pm, Sunday the 27th of July
Where: The Caterpillar Club, 92 Pitt St, Sydney
Cost: by cash donation
Resources
For this topic I asked two different AI systems to “give me a list of three resources to best understand AI and the issues it creates” and then to summarise each into three bullet points. These are the results.
Perplexity
Artificial Intelligence: A Modern Approach by Stuart Russell & Peter Norvig
1. The Intelligent Agent Framework
The book centers on the concept of intelligent agents—systems that perceive their environment through sensors and act upon it using actuators, aiming to achieve goals rationally. This framework underpins the study of AI, guiding how agents should be designed to operate effectively in diverse and sometimes uncertain environments.
2. Core Techniques and Approaches in AI
Russell and Norvig systematically cover the major subfields and techniques of AI, including problem-solving, knowledge representation, reasoning, planning, machine learning, natural language processing, computer vision, and robotics. They emphasize formal models and algorithms as the foundation for building intelligent systems that can learn, adapt, and make decisions.
3. Four Approaches to Defining AI
The authors categorize AI research using two key distinctions: ‘thinking’ vs. ‘acting’ and ‘humanly’ vs. ‘rationally.’ This creates four approaches: thinking humanly, acting humanly, thinking rationally, and acting rationally, providing a comprehensive framework for understanding the full range of AI research and its goals.
The Alignment Problem: Machine Learning and Human Values by Brian Christian
1. The Alignment Problem Defined
The central challenge is ensuring that AI systems and machine learning models truly reflect and serve human values, rather than diverging from our intentions—sometimes in subtle, harmful, or unpredictable ways.
2. Real-World Consequences and Ethical Risks
Christian illustrates how misaligned AI already impacts society, from biased hiring algorithms to discriminatory criminal justice tools, and highlights the ethical and existential risks as these systems increasingly replace human judgment in critical decisions.
3. Technical and Philosophical Solutions
The book explores technical efforts (like reinforcement learning, inverse reinforcement learning, and transparency initiatives) and philosophical debates on how to encode human values into AI, emphasizing the ongoing, interdisciplinary struggle to align increasingly powerful systems with complex human goals.
Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford
1. AI as Extraction and Exploitation
Crawford argues that AI is not just code and algorithms but a vast industrial system that extracts value from natural resources, human labor, and personal data, causing significant environmental damage and relying on the exploitation of marginalized workers.
2. Power, Inequality, and Colonial Logic
The book highlights how AI technologies reinforce and amplify existing power structures and inequalities, serving the interests of a few dominant tech companies and governments, often operating with a colonial logic of control and domination.
3. Challenging AI Myths and Demanding Accountability
Crawford challenges the myth of AI as neutral or purely technical, urging readers to recognize its deep entanglement with politics, culture, and capital. She calls for greater scrutiny, transparency, and justice in how AI is developed and deployed, emphasizing the need to address its societal and planetary costs.
Grok
“Artificial Intelligence: A Guide for Thinking Humans” by Melanie Mitchell
1. AI’s Capabilities and Limitations: Mitchell explains that while AI excels in narrow tasks like image recognition or game-playing (e.g., AlphaGo), it lacks general intelligence akin to human reasoning. Current systems rely on pattern recognition and vast data, not true understanding, leading to brittleness in unfamiliar scenarios.
2. Ethical and Societal Challenges: The book highlights AI’s potential to amplify biases (e.g., in hiring or policing algorithms) and disrupt labor markets. Mitchell emphasizes the need for transparency, accountability, and human oversight to mitigate risks like unintended consequences or loss of agency.
3. Human-Centric Perspective on AI: Mitchell advocates for a realistic view of AI, demystifying hype around “superintelligence” while urging readers to critically assess its impact. She stresses that AI’s development should prioritize human values, fostering collaboration between technologists and society.
AI Ethics Course by the University of Helsinki (Free Online)
1. Understanding AI’s Ethical Implications: The course introduces core ethical issues in AI, such as fairness, privacy, and accountability, using real-world examples like biased algorithms in hiring or facial recognition. It equips learners to identify how AI systems can inadvertently harm individuals or groups.
2. Practical Frameworks for Ethical AI: It provides tools and frameworks to evaluate and address ethical challenges, including principles for transparent AI design and governance. Learners explore how to balance innovation with societal values, applicable to both developers and policymakers.
3. Interdisciplinary and Accessible Approach: Designed for beginners and experts, the course blends philosophy, technology, and policy perspectives. It uses case studies and interactive content to make complex issues like algorithmic discrimination or data misuse relatable and actionable.
The AI Now Institute Reports
1. Algorithmic Bias and Discrimination: The reports highlight how AI systems often perpetuate biases, such as in hiring, policing, or healthcare, due to skewed training data or flawed design. They document cases like discriminatory facial recognition and advocate for rigorous auditing to ensure fairness.
2. Labor and Economic Impacts: AI Now emphasizes AI’s disruption of labor markets, including gig economy exploitation and automation-driven job displacement. The reports call for policies to protect workers’ rights and address power imbalances between tech companies and employees.
3. Need for Robust Regulation: The institute critiques the lack of enforceable AI governance, noting risks like unchecked surveillance or opaque decision-making. They propose regulatory frameworks that prioritize public accountability, transparency, and community input to mitigate AI’s societal harms.
To make sure you don’t miss any events, subscribe to this Substack and join the Telegram group.
Beauty, I will go for sure. Glenn Greenwald reported this week that the Big Beautiful Bill contains billions for Silicon Valley AI firms to integrate with the military, and discussed how Afghanistan was the testing ground for AI that scored people based on their mobile phone contacts with enemies and their social media activity. When someone’s score passed a benchmark, they would be killed by drone, all automated and AI-run. And who did they find scored the highest? Journalists who interview the enemy for their job, and government officials who have to negotiate with them.