Good morning from OWITH.ai: the podcast that gives you only what's important to hear in the AI and tech world.

The conflict between the Pentagon and AI company Anthropic marks a crucial turning point in the ongoing debate over control and governance of AI technologies. The disagreement is rooted in Anthropic's refusal to allow the U.S. military to use its AI models for any lawful purpose, unlike OpenAI, which agreed to such terms. This divergence underscores varying industry stances on government collaboration. A central issue is that the rapid advancement of AI is outpacing democratic oversight mechanisms such as legislation and congressional action. The absence of federal AI legislation complicates efforts to establish clear policies, highlighting the inadequacy of contracts between labs and the government as substitutes for democratic governance.

Public distrust of governmental intentions has been exacerbated by previous expansions of surveillance powers and opaque military activities, leading some to place more trust in technology executives than in elected officials when it comes to setting AI policy. Historical precedents like the Manhattan Project point to a pattern of strong government direction over strategic technologies. The Pentagon's approach resembles soft nationalization, employing laws like the Defense Production Act to impose terms on companies like Anthropic and raising concerns about corporate rights.

The situation is further complicated by potential repercussions for Anthropic for resisting the Pentagon's terms, such as being labeled a "supply chain risk," which could severely impact its business. OpenAI CEO Sam Altman advocates close partnerships between governments and AI companies to navigate these challenges, though that stance is itself subject to scrutiny. Amid these dynamics, concerns about AI's role in warfare are rising as data centers become potential targets in conflicts.
This scenario underscores AI's critical role in military strategy and highlights vulnerabilities in global infrastructure.

Extending these issues, a standoff arose when Anthropic refused to allow its technology to be used for mass surveillance or lethal autonomous weapons. Consequently, the Pentagon terminated its $200 million contract with Anthropic, a significant moment in U.S. government-business relations. As Anthropic faces this crisis, OpenAI has moved swiftly to replace it, securing a deal with the government to ensure technical, policy, and human controls over AI models. The episode emphasizes the intertwining of politics and AI amid significant financial maneuvers in the tech and investment sectors. Meanwhile, Anthropic's stance has led to further turbulence: its Claude model has overtaken ChatGPT in popularity on the Apple App Store, even as it experiences outages.

OpenAI, led by CEO Sam Altman, has entered damage-control mode following controversy surrounding its Pentagon contract. The agreement faced backlash over concerns about surveillance and military use of AI technology. In response, Altman contacted Emil Michael, Undersecretary of Defense for Research and Engineering, amending the contract terms to explicitly prohibit use on "commercially acquired" data. Despite these efforts, Altman admitted to moving too quickly with the deal.

In other developments, legal challenges have arisen over copyright for AI-generated art. The U.S. Supreme Court declined to review a case requiring "human authorship" for copyright eligibility, ending Stephen Thaler's attempt to secure copyright protection for his AI-created image. The decision highlights the ongoing challenge of fitting AI-generated content into existing intellectual property frameworks.
Thanks for listening! Follow us on Twitter, Instagram, and LinkedIn.