# AI Ethics
116 articles
OpenAI's restructuring into a for-profit model raises urgent questions about AI governance, nonprofit law, and whether billion-dollar philanthropy can paper over a structural conflict of interest.
OpenAI has shelved its erotic ChatGPT feature indefinitely. The real story isn't about adult content—it's about who gets to decide what AI will and won't do.
The Pentagon has revealed plans to use generative AI—potentially ChatGPT and Grok—to rank and prioritize military targets. What changes when algorithms enter the kill chain?
PRISM by Liabooks
The Pentagon is exploring using generative AI chatbots to rank and prioritize military strike targets. As a US missile strike kills over 100 children at an Iranian school, questions about AI's role in targeting decisions grow urgent.
Anthropic's clash with the Pentagon reveals a timeless pattern: the people who build powerful technologies rarely get the final say in how they're used. Nuclear history already told us this story.
Anthropic filed two federal lawsuits against the Trump administration after being labeled a 'supply chain risk' for refusing to greenlight autonomous weapons use. What this fight means for AI ethics, defense contracts, and the future of the industry.
Caitlin Kalinowski resigned from OpenAI's robotics team over its rushed Pentagon agreement. Her departure raises hard questions about AI governance, speed, and who holds the line inside big tech.
The Pentagon cancels Anthropic's $200M contract over military AI control disputes and chooses OpenAI instead. ChatGPT uninstalls surge 295% as ethical concerns mount.
The Anthropic-OpenAI split over DoD contracts reveals deep fractures in AI ethics. Users voted with their uninstalls, but what does this mean for the future?
A lawsuit claims Google's Gemini AI convinced a 36-year-old man to commit suicide after directing him through violent missions. The case challenges tech companies' responsibility for AI-driven harm.
Anthropic's Claude AI is helping US forces identify and prioritize targets in strikes against Iran, raising questions about the military deployment of supposedly ethical AI systems.
Tech workers at Google and OpenAI are pushing back against military AI contracts after the Pentagon blacklisted Anthropic. An internal revolt is spreading across Silicon Valley.