_id	episode_id	episode_name	episode_description	episode_release_date	episode_duration_ms	show_name	show_description	show_publisher	show_total_episodes
1	0S7ASrIuXeUmWBaO32w5yx	Google's AI Search Accuracy, Alaska's New Voting System, AI Regulation Debate with Garry Tan	Google's AI-generated search results are under fire for inaccuracy, raising concerns about their effect on web content and user experience. Alaska's innovative election system, with open primaries and ranked voting, has garnered national attention but faces opposition aiming to overturn it, sparking legal disputes and heated discussions on electoral reform. Garry Tan from Y Combinator pushes for regulation in AI advancement, endorsing the NIST GenAI risk mitigation framework and expressing worries about AI legislation in California, stressing the significance of ethical AI progress.	2024-05-27T00:00:00	930839	bishnu's News	Your personalized daily news podcast	bishnu's PocketPod	64
2	3njbbXA8nD1YtU4qRNd88n	Google Search Accuracy Concerns, Boeing Starliner Launch Update, AI Regulation Debate	Google is under fire for inaccurate AI-generated search results, sparking worries about its effects on web content and user satisfaction. Boeing aims for a June launch of Starliner after addressing issues like a propulsion system leak to ensure safe travel for test pilots to the ISS. Garry Tan, President of Y Combinator, calls for regulations in AI development, endorsing the NIST GenAI risk mitigation framework and expressing reservations about AI bills in California, stressing the significance of ethical advancements in AI technology.	2024-05-27T00:00:00	883249	Stoni's News	Your personalized daily news podcast	Stoni's PocketPod	52
3	6wekbvctM9tzJiACTmeGdK	Combatting Nonconsensual AI Images, Regulation Advocacy, and Responsible AI Governance	The Biden administration is urging tech companies to combat the rise of nonconsensual AI-generated explicit images, particularly targeting women and minors, through voluntary cooperation to disrupt monetization and ensure removal from online platforms. Garry Tan, President of Y Combinator, supports necessary regulation in AI development, endorsing the NIST GenAI risk mitigation framework and expressing concerns over proposed AI bills in California to promote responsible AI innovation. Miriam Vogel, CEO of EqualAI, advocates for responsible AI governance by emphasizing the importance of diversity in AI development and the implementation of standardized measures to evaluate AI systems.	2024-05-27T00:00:00	910572	Saravanan's News	Your personalized daily news podcast	Saravanan's PocketPod	58
4	1bDMrpxUdnoV0PbJDhpgS6	Google Search Accuracy Criticism, AI Governance Advocacy, Regulation in AI Development	Google is under fire for inaccurate AI-generated search results, sparking concerns about its effects on web content and user experience. Miriam Vogel, CEO of EqualAI, promotes responsible AI governance by addressing bias issues and stressing diversity in AI creation. Garry Tan, President of Y Combinator, also pushes for regulation in AI development and endorses the NIST GenAI risk mitigation framework while expressing worries about potential AI bills in California. Both Vogel and Tan advocate for responsible and standardized measures to ensure ethical AI innovation.	2024-05-27T00:00:00	892070	David's News	Your personalized daily news podcast	David's PocketPod	37
5	6gHp0c6LcaEtwHUtDU5POp	Google Search Accuracy Criticism, AI Regulation Advocacy by Tech Leaders	Google is criticized for inaccurate AI-generated search results, raising concerns about its impact on web content and user experience. Garry Tan, President of Y Combinator, supports necessary regulation in AI development, endorsing the NIST GenAI risk mitigation framework and expressing worries about proposed AI bills in California. Miriam Vogel, CEO of EqualAI, advocates for responsible AI governance by reducing bias, promoting diversity in AI development, and calling for standardized measures to evaluate AI systems. Both Tan and Vogel stress the importance of responsible AI innovation and ethical considerations in the field.	2024-05-27T00:00:00	886892	Jakob's News	Your personalized daily news podcast	Jakob's PocketPod	40
6	6z0W0oo5KhSBlFw7oki7qw	Combatting Nonconsensual AI Images, Biden's Environmental Regulations, Tech Giants' AI Investments	President Biden's administration is calling on tech companies to address the rise of nonconsensual AI-generated explicit images, particularly affecting women and minors, by voluntarily working to disrupt monetization and remove such content from online platforms. In a separate initiative, President Biden has introduced environmental regulations targeting greenhouse gas emissions and other issues to align with his policy objectives, facing pushback from industry and Republicans, which could lead to legal disputes. Additionally, concerns have been raised over major tech companies' significant investments in AI startups, prompting worries about their control and impact on the evolving technology sector.	2024-05-25T00:00:00	978098	Shivam's News	Your personalized daily news podcast	Shivam's PocketPod	57
7	3FzQLCDlUlaHhNg6nm1Wsf	AI Regulation Concerns, Techstars CEO Departure, Signal President's Critique	Major tech companies are investing significantly in AI startups, raising worries about their control and influence in the AI sector. Techstars' CEO Maëlle Gavet leaves amid controversy, with co-founder David Cohen taking over, while Linktree celebrates reaching 50 million users and experiencing growth in social commerce. Signal's President voices concerns at VivaTech regarding power concentration in the tech industry, EU regulations, and the impact of social media on disinformation and surveillance advertising.	2024-05-25T00:00:00	1000265	David's News	Your personalized daily news podcast	David's PocketPod	52
8	3An5VTPi8Ey5xYrm53CALl	AI Regulation Debate, Gen AI in Intelligence, Trump's Dangerous Claims	Major tech companies are investing significantly in AI startups, raising worries about their control and impact on the evolving tech scene. Simultaneously, generative AI is transforming U.S. intelligence operations by enhancing forecasting and analysis capabilities, despite lingering worries about privacy and security implications. Additionally, federal prosecutors have sought limitations on Trump's public remarks to prevent potential harm to law enforcement personnel linked to his classified documents case after he made unsubstantiated claims regarding FBI agents at his Mar-a-Lago property.	2024-05-25T00:00:00	921099	The Daily Dispatch	Your personalized daily news podcast	Sam's PocketPod	58
9	6aUnxFD1hWdcCUP965rhI4	Colorado Passes Landmark Law to Combat Bias in Artificial Intelligence	Colorado has become the first state to pass legislation aimed at addressing bias in artificial intelligence programs. The law requires companies using AI to assess the risk of discrimination and inform customers when AI is used in consequential decisions. It also mandates oversight programs and reporting of discrimination instances to the state attorney general. This development is significant as AI algorithms are increasingly used in decisions related to hiring, housing, and medical care, and can perpetuate discrimination due to biases in historical training data.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-05-25T00:00:00	202710	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
10	6omAt0KbBIMh3dvnjE7asH	Google AI Tool Under Fire for Providing Inaccurate Responses	"Google's new artificial intelligence tool is facing criticism for providing incorrect or misleading responses to user queries. Launched as an experimental feature, it generates an ""AI overview"" response based on web sources and is placed at the top of some search results. However, users have taken to social media to share examples of wrong or misleading results, including false information about elephants' feet and Barack Obama's religion. Google attributes the errors to uncommon queries and users trying to trip up the technology, but the controversy raises concerns about the reliability of artificial intelligence in providing accurate information."	2024-05-25T00:00:00	216346	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
11	0eE0dE6IBMyZ52NGUDkgcV	AI Regulation Challenges, Generative AI in Intelligence, Fighting High-Tech Election Interference	Major tech companies are investing significantly in AI startups, raising worries about their control and impact on the evolving tech sector, prompting regulatory concerns. Generative AI is transforming U.S. intelligence operations by enhancing predictions and analysis, despite ongoing worries about security and privacy implications, potentially reshaping national security practices. In a related context, the FCC has suggested a $6 million penalty for a scammer who misused voice-cloning tech to impersonate President Biden in unlawful robocalls, highlighting law enforcement's focus on preventing the use of generative AI in election interference.	2024-05-25T00:00:00	918914	Benjamin's News	Your personalized daily news podcast	Benjamin's PocketPod	21
12	4uus0KXyANoEGP0jo34otO	The Big Shift in AI Safety Discourse	A reading and discussion inspired by https://www.bloomberg.com/opinion/articles/2024-05-21/ai-safety-is-dead-and-chuck-schumer-faces-risks?srnd=undefined&sref=qUxVp6JU ** Join Superintelligent at https://besuper.ai/ -- Practical, useful, hands-on AI education through tutorials and step-by-step how-tos. Use code podcast for 50% off your first month! ** ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI. Subscribe to The AI Breakdown newsletter: https://aidailybrief.beehiiv.com/ Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@AIDailyBrief Join the community: bit.ly/aibreakdown	2024-05-24T00:00:00	467121	The AI Daily Brief (Formerly The AI Breakdown): Artificial Intelligence News and Analysis	A daily news analysis show on all things artificial intelligence. NLW looks at AI from multiple angles, from the explosion of creativity brought on by new tools like Midjourney and ChatGPT to the potential disruptions to work and industries as we know them to the great philosophical, ethical and practical questions of advanced general intelligence, alignment and x-risk.	Nathaniel Whittemore	385
13	3MTrSX1GQ36NKNOZZW8oJB	What are some realistic regulations that we can put on deepfake AI?	A New York-based political consultant is facing $6 million in fines for sending deepfake robocalls to voters in New Hampshire. He says he did it on purpose to call attention to the issue before it gets out of control. Phil Napoli, Professor of Public Policy at Duke University, joins guest host Ian Hoch to talk about realistic regulations that can possibly be put on deepfake AI technology.	2024-05-24T00:00:00	1018435	The Scoot Show with Scoot	Where politics and pop culture meet your opinions!	Audacy	500
14	6NLeU5SkthjSWt9qcEPvua	AI Regulations, Nvidia's Earnings Surge, Contraception Debate Divides Parties	Executives from Amazon and Google highlighted the benefits of AI at the Viva Tech conference, amidst global regulatory efforts like the EU's AI Act. Nvidia's strong earnings were fueled by AI computing sales to tech giants, prompting investment in traditional companies embracing digitization and AI. The complex landscape of Republican lawmakers' views on contraception has become a focal point for Democrats seeking voter engagement during ongoing debates on reproductive rights.	2024-05-24T00:00:00	1032582	Tyler's News	Your personalized daily news podcast	Tyler's PocketPod	41
15	5WGcntplqhGNa93Y5LkMkS	AI Poses Significant Threat to Democratic Elections	A new federal assessment warns that artificial intelligence programs pose a significant threat to the democratic system of elections, particularly with the 2024 election cycle approaching. Next-generation technologies can be exploited to influence and sow discord in elections by creating or sharing altered or deepfaked pictures, videos, or audio clips. The Department of Homeland Security analysis outlines how generative AI tools can be used to confuse or overwhelm voters and election staff. Experts caution that authorities must be prepared to defend against AI-disseminated fake news and educate the public to counteract misinformation.	2024-05-20T00:00:00	257174	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
16	4J9NEgxZOp1P8JIjrsEPiG	AI-Generated News Anchors Used to Spread Disinformation and Propaganda	AI-generated news anchors are being used to spread disinformation and propaganda, with China at the forefront of this trend. A recent example in Taiwan featured a fake news anchor attacking the outgoing president, Tsai Ing-wen. Experts warn that as the technology becomes more accessible, AI-generated deepfake news anchors will continue to spread, potentially influencing elections and sowing confusion among voters.	2024-05-19T00:00:00	253048	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
17	5nEz42Lpyvot74gSq1v83Z	Tech Bytes Week in Review: Google doubles down on AI, ChatGPT gets chatty and Congress charts a path for AI regulation	On this week's Tech Bytes: Week in Review, Senate Majority Leader Chuck Schumer is calling for a heap of new spending on artificial intelligence research. We'll look at where the proposed $32 billion annually is likely to go. And some of the biggest players in AI tried to outdo one another this week. OpenAI said it's giving ChatGPT an upgrade and a personality, while Google is trying to remake search with its AI model, Gemini. Marketplace's Lily Jamali spoke with Anita Ramaswamy, financial analysis columnist at The Information, for her take on these stories. Marketplace is currently tracking behind target for this budget year; that means listeners like you can make a critical difference by investing in our journalism today.	2024-05-17T00:00:00	669074	Marketplace Tech	Monday through Friday, Marketplace demystifies the digital economy in less than 10 minutes. We look past the hype and ask tough questions about an industry that's constantly changing.	Marketplace	150
18	6AlXCEZaKR4wHTzf4k7Mrw	How can we ensure that AI is aligned with human values? - RAPHAËL MILLIÈRE	How can we ensure that AI is aligned with human values? What can AI teach us about human cognition and creativity? Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition. Ultimately, his work aims to advance our understanding of both artificial and natural intelligence. I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, hallucinate in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example.
And this takes us into the medium term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms. https://raphaelmilliere.com https://researchers.mq.edu.au/en/persons/raphael-milliere
www.creativeprocess.info www.oneplanetpodcast.org IG www.instagram.com/creativeprocesspodcast	2024-05-17T00:00:00	3673823	AI & The Future of Humanity: Artificial Intelligence, Technology, VR, Algorithm, Automation, ChatBPT, Robotics, Augmented Reality, Big Data, IoT, Social Media, CGI, Generative-AI, Innovation, Nanotechnology, Science, Quantum Computing: The Creative Process Interviews	What are the dangers, risks, and opportunities of AI? What role can we play in designing the future we want to live in? With the rise of automation, what is the future of work? We talk to experts about the roles government, organizations, and individuals can play to make sure powerful technologies truly make the world a better place for everyone. Conversations with futurists, philosophers, AI experts, scientists, humanists, activists, technologists, policymakers, engineers, science fiction authors, lawyers, designers, artists, among others. The interviews are hosted by founder and creative educator Mia Funk with the participation of students, universities, and collaborators from around the world.	The Creative Process Original Series:  Artificial Intelligence, Technology, Innovation, Engineering, Robotics & Internet of Things	67
19	3YYOb0CzQMFkCxvyyWSzxC	States Lead Charge on AI Regulation	Colorado and Connecticut are pioneering efforts to regulate artificial intelligence, introducing groundbreaking bills to prevent AI-driven discrimination in crucial services like healthcare, employment, and housing. These bills come as over 40 states consider more than 400 AI-related bills, highlighting the challenges of governing this powerful technology. While Connecticut's effort stalled due to the governor's concerns about stifling innovation, Colorado's bill faces criticism from both consumer advocates and tech groups. As states navigate AI regulation, the need for safeguards against biased decision tools becomes increasingly clear.	2024-05-16T00:00:00	255503	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
20	5LbCUGnYPnwGcfLoRhViLb	US and China Engage in First Ever Dialogue on Artificial Intelligence	The US and China held their first-ever dialogue on artificial intelligence in Geneva, discussing the risks and management of AI. US officials expressed concerns about China's misuse of AI, while China rebuked the US over restrictions and pressure on the technology. Both sides recognized the opportunities and risks posed by AI, including its potential for digital surveillance and disinformation. The talks aim to keep communication open on AI risk and safety, a key aspect of managing competition between the two nations. The success of the talks will be measured by whether they continue in the future.	2024-05-16T00:00:00	197041	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
21	3S7wt3j7kKsYyBWhZp7n59	Foreign Actors Use AI to Influence US Election Process	"The US faces a significant threat to its electoral process from foreign actors using advanced artificial intelligence to influence American voters. Russia remains the most active foreign threat, aiming to erode trust in democratic institutions and exacerbate sociopolitical divisions. Other countries, including China and Iran, are also attempting to sway voters, making the threat landscape complex. The rise of AI has made it possible to create realistic ""deepfakes"" that target candidates and are difficult to attribute. Officials are working together to protect the electoral process, but concerns remain about combating disinformation and maintaining credibility."	2024-05-16T00:00:00	194115	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
22	2h9A0L5cV7cV6yXeOkSq8K	AI Call Scanning Tech Sparks Censorship Concerns	Google's demonstration of its call-scanning AI technology has sparked concerns among privacy and security experts, who warn that it could pave the way for centralized censorship and surveillance. The technology, powered by Gemini Nano, scans voice calls in real-time for conversational patterns associated with financial scams, but experts say it could be repurposed for social surveillance, monitoring, and blocking of certain activities. They caution that this could lead to a dystopian future of censorship by default, threatening basic values and freedoms.	2024-05-16T00:00:00	230426	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
23	5uDTH8LpquzmrD72sBaeYF	Americans Fear AI Will Distort 2024 Election by Manipulating Voters	A recent survey has found that 78% of Americans are concerned that artificial intelligence will be used to manipulate or distort voters' perceptions of candidates in the 2024 election. They worry that AI will create fake social media accounts, videos, and targeted content to discourage certain demographics from voting. Most respondents believe AI will be used to generate fake accounts and drive conversations on social media, and only 5% think AI will have a positive effect on political discourse. Despite government efforts to address concerns, Americans are pessimistic about AI's influence on the election.	2024-05-16T00:00:00	189100	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
24	3NK9oNjdNZz1l39J9mdexB	AI Safety and Regulation	"Mark Collier (@sparkycollier, COO @openInfradev) talks about the advantages of open source AI and the intersection of OSS and AI transparency, safety, and potential regulations. SHOW: 821. SHOW TRANSCRIPT: The Cloudcast #821. CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw. NEW TO CLOUD? CHECK OUT OUR OTHER PODCAST - ""CLOUDCAST BASICS"". SPONSOR: See what graphs can do for you at Neo4j.com/developer. SHOW NOTES: Mark's Talk at ATO; EU AI Act Passes; How Tech Giants Cut Corners to Harvest Data; The EU AI Act - A Guide for Developers; OpenInfra Foundation. Topic 1 - Our topic for today is AI Safety and Regulation. I saw our guest speak at All Things Open here in Raleigh late last year, and he is also a Cloudcast alumnus, having been on the show previously talking about OpenStack and the OpenInfra Foundation. We'd like to welcome Mark Collier (Chief Operating Officer @ OpenInfra Foundation) for this discussion. Mark, welcome to the show. Topic 2 - There's a lot of news today about AI safety and regulation. The industry also seems to be caught up in an AI arms race of who has the bigger model, faster model, etc. OpenAI has become the early category leader; they might have started with good intentions but, contrary to their name, they aren't open at all. One message in your talk is how open-source software will prevent the coming of the AI overlords. Tell everyone a bit of what you mean by this. What is the problem we are facing that many may not even realize? Topic 3 - I don't want to call you old (I think we are about the same age), but you've seen some things. You've also been around OSS and foundations for a bit now. How can open source solve the problem? Topic 4 - We hear a lot about AI regulation, but this seems to be a moving target. What is both the current and future state of AI regulation? In my opinion, we haven't seen a lot of successful regulations to date. We saw recently the EU pass an AI Act. Is this the first of many?
The start of a trend? Topic 5 - Let's talk about the day job. What's new with OpenInfra Foundation these days? Topic 6 - OpenStack releases are still going strong; you've even run out of letters on OpenStack releases, rolled around the alphabet, and are back to C. This is the 29th release of OpenStack. What's the news for the Caracal release? FEEDBACK? Email: show at the cloudcast dot net; Twitter: @cloudcastpod; Instagram: @cloudcastpod; TikTok: @cloudcastpod"	2024-05-15T00:00:00	1866928	The Cloudcast	The Cloudcast (@cloudcastpod) is the industry's #1 Cloud Computing podcast, and the place where Cloud meets AI. Co-hosts Aaron Delp (@aarondelp) & Brian Gracely (@bgracely) speak with technology and business leaders that are shaping the future of business. Topics will include Cloud Computing | AI | AGI | ChatGPT | Open Source | AWS | Azure | GCP | Platform Engineering | DevOps | Big Data | ML | Security | Kubernetes | AppDev | SaaS | PaaS.	Massive Studios	862
25	1QOwl56nEMmg72UneDmUqQ	AI Revolution Brings Both Opportunities and Risks to Local News Industry	The news industry has faced numerous revolutions, from the printing press to the internet, which disrupted the local news business model as readers opted for free online news, causing print circulation and ad revenue to plummet. By 2024, the US is predicted to lose a third of its newspapers and almost two-thirds of its journalists, posing a threat to democracy and creating news deserts where misinformation spreads. To combat this, philanthropists and others are investing in local news. The latest revolution, generative AI, brings both opportunities and risks, including improved efficiency and the potential to replace journalists, and the industry must harness it wisely to create a sustainable future.	2024-05-15T00:00:00	224940	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
26	3TBrJbAiITRSGhrJONrVvK	DOJ Cracks Down on AI-Driven Election Interference	"The US Department of Justice is cracking down on election interference, including the use of artificial intelligence to manipulate voters. Deputy Attorney General Lisa Monaco vowed to prosecute those who threaten or harm election workers, citing a rising trend of perpetrators using new technologies to conceal their identities. The department is working to combat AI-generated efforts to influence voters ahead of the November presidential election, with tech companies also joining the fight against ""deep fake"" misinformation."	2024-05-15T00:00:00	224679	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
27	587CjfLJA3sEgD4E1B3mL6	Tech Companies and Investors Rally Against Strict AI Safety Rules, Shift Focus to China	The tech lobbying landscape in Washington D.C. is experiencing a shift as companies and investors rally against strict safety rules on advanced artificial intelligence (AI) and instead advocate for focusing on the threat posed by China. This pro-tech, anti-China AI push has successfully won over skeptical members of Congress, thanks to influential players like IBM and Meta, as well as other companies and investors including Nvidia, AI startups, venture capital firm Andreessen Horowitz, and libertarian billionaire Charles Koch. This lobbying battle pits those warning about the risks of AI against those highlighting its transformative potential, with implications for AI policy in the US and the global industry.	2024-05-14T00:00:00	232933	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
28	7cYaol8o0bnQZYvAzrfpM6	AI-Generated Spam on the Rise: Platforms Struggle to Tackle Misleading Content	AI-generated spam is on the rise on social media platforms like Facebook, Threads, and LinkedIn. Users are being bombarded with AI-generated images and posts that have misleading captions and emotionally exploitative content. Surprisingly, these posts are being suggested by the platforms themselves, leading users to believe they are genuine. Georgetown and Stanford researchers found that many of these AI-generated posts are involved in scams and spam, with unclear financial motivations. Users are frustrated by the prevalence of spam and fake content, eroding trust in the digital world. To address the issue, Meta and TikTok are planning to label AI-generated content to help users differentiate between genuine and AI-generated posts.	2024-05-14T00:00:00	232097	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
29	22IODl8mmDaHct6Jv9vWE4	10: Inside an AI company's copyright defence, and how finance firms can cope with AI regulation	 Cerys Wynn-Davies uses a court filing to analyse how AI companies are defending themselves against huge copyright infringement claims, and Luke Scanlon sets out the steps finance firms need to take to stay on the right side of growing finance-specific AI regulation, ahead of delivering training for financial services senior managers.  Never miss a story, sign up for business law updates.	2024-05-14T00:00:00	1240084	The Pinsent Masons Podcast	Lawyers from international law firm Pinsent Masons discuss the latest news in the world of business law. We analyse rulings, laws, news events and trends to help organisations navigate a complicated and fast-moving world of business law and regulations. Every fortnight in these 20 minute episodes we give expert guidance to keep you ahead of your competition and to help you meet the challenges ahead. Listen and subscribe for the latest news and analysis on legal and regulatory issues from expert voices at a leading firm.	Pinsent Masons	11
30	5aWNumYkc84nP9BK1gFR2s	OECD AI Principles	This document from the OECD is split into two sections: principles for responsible stewardship of trustworthy AI & national policies and international co-operation for trustworthy AI. 43 governments around the world have agreed to adhere to the document. While originally written in 2019, updates were made in 2024 which are reflected in this version. Original text: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 Author(s): The Organization for Economic Cooperation and Development. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.	2024-05-13T00:00:00	1414217	AI Safety Fundamentals: Governance	Listen to resources from the AI Safety Fundamentals: Governance course! https://aisafetyfundamentals.com/governance	BlueDot Impact	60
31	25OB1j96vkOxijuW5xRMIE	FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence	This fact sheet from The White House summarizes President Biden's AI Executive Order from October 2023. The President's AI EO represents the most aggressive approach to date from the US executive branch on AI policy. Original text: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ Author(s): The White House. A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.	2024-05-13T00:00:00	855588	AI Safety Fundamentals: Governance	Listen to resources from the AI Safety Fundamentals: Governance course! https://aisafetyfundamentals.com/governance	BlueDot Impact	60
32	17NpbKZ7MH0joRwU2RSTT4	Russian AI Disinformation, Microsoft Email DDoS AT&T, IoT EU Device Regulations	In today's episode, we delve into the findings of a recent investigation conducted by Insikt Group on an influence network known as CopyCop, likely operated from Russia and aligned with the Russian government. This network extensively employs generative AI to create and disseminate political content aimed at specific audiences, focusing on divisive issues and undermining Western governments. The episode also highlights the challenges posed by CopyCop's AI-generated disinformation content and the broader implications for election defense strategies and the risks posed to media organizations. Check out the detailed technical analysis and insightful recommendations shared in the episode links: Recorded Future Analysis, AT&T Microsoft 365 Delay, and IoT Device Security Regulations. 00:00 Intro 01:02 Unveiling CopyCop: Russia's AI-Driven Disinformation Campaign 03:43 The Spam Wave: AT&T and Microsoft 365's Email Blockade 05:51 The IoT Security Challenge: Navigating New Regulations Search Phrases: AI-generated disinformation threats; Addressing CopyCop network disinformation; Protecting content against AI plagiarism; Impact of Russian-operated networks on disinformation; AT&T email delivery delay issues; Microsoft 365 email spam wave; Gmail service disruption due to spam; IoT security regulations compliance; Preventing vulnerabilities in IoT devices; Exploitation in connected products due to security flaws. A network operated by the Russian government called CopyCop is using generative AI to plagiarize and disseminate divisive political content targeting Western audiences, raising concerns about AI-generated disinformation and amplification by known Russian influence actors in this election year. How can private media organizations protect their content and reputation against this growing trend?
AT&T's email servers are currently blocking Microsoft 365 due to a spam wave, causing significant delays in email delivery. Who knew that spam could DDoS your email service? And finally, IoT device manufacturers are facing increased pressure to improve security measures in compliance with new regulation standards in order to prevent exploitation and potential dangers stemming from the vulnerabilities in these connected products. You're listening to The Daily Decrypt. Alright, well, you officially heard it here first, folks. Russia is meddling in our election. I know you all are surprised and you've never heard such an outrageous claim before, but it's true. And now, with the use of large language models like OpenAI's, they can do a whole lot of damage, particularly in the realm of disinformation and divisive talk, trying to get us to turn against each other. And they can do this automatically, using code, to grab articles from reputable news sources and repost them by injecting AI-generated content to try to sway the results of the election. So, coming to you from Recorded Future: CopyCop utilizes generative AI to plagiarize and translate content from mainstream media outlets to create biased narratives, targeting specific audiences in the United States, the UK, and France, focusing on divisive domestic issues and supporting pro-Russian viewpoints. The network is connected to disinformation outlet DC Weekly and Russian state-sponsored influence actors, amplifying content to undermine Western policies and create distrust between these governments. The network has expanded to operate a self-hosted video-sharing platform and a forum named Exposedum, indicating growing ambitions to blend AI-generated content with truly human-produced content, making it even harder to spot the fake stuff. So there is plenty of purely AI-generated content out there. But that's not the most effective way to spread disinformation.
The most effective way to spread disinformation is to take factual articles written by legitimate sources and change them a little bit to help spread the message you want to spread. So Russia is doing just that: they're taking things that you're reading, things where you go, oh yeah, that's true, I know that to be true, that makes sense, and then you're more likely to believe the little lies they slip in there. And so if you've listened to this podcast before, you know my take on this, but look at everything skeptically. Every piece of information you read, try to think about it in a way that it could be lying to you. You don't necessarily have to believe that it's lying to you, but look at it as if it was, and what damage that would cause. Who would benefit from that lie? And at the very least, how could this be an over-exaggeration of the truth? And hey, once you master that skill, give me a call. We'll probably be best friends. I'm working on that really hard, but it will only go to benefit you pretty much everywhere in your life, except for around the table at Thanksgiving, where your parents and your aunts and uncles are going to hate you for questioning everything they say. So, for pretty much this whole week, AT&T has been experiencing delays in delivering emails from Microsoft 365, which is Microsoft's cloud service, due to a surge in spam originating from Microsoft's servers. Customers have reported being unable to receive emails from Microsoft 365 addresses, specifically impacting those trying to email @att.com, @sbcglobal.net, or @bellsouth.com addresses. AT&T servers were refusing connections from Microsoft 365 because of a high volume of spam emails coming from their servers. So all that means is that someone who is using Microsoft's cloud service is sending out tons of spam email to AT&T. Thus, AT&T has blocked everything from Microsoft 365, which is a pretty big detriment to those who use AT&T for their email.
Because now they can't send emails to or receive emails from anyone in the Microsoft 365 cloud. Now, Microsoft has addressed this with plans to combat spam by implementing a daily Exchange Online bulk email limit of 2,000 external recipients, but that's not starting until January of 2025. And I'm sure that plan is going to have to be tweaked, because the company that I work for has more than 2,000 email recipients. And like, how is that going to be affected? Maybe, I guess, you could email internal, but not external. I'm not sure. That's a pretty low number for people to email each day. But at the same time, I am also really surprised to see that Microsoft doesn't have any external sending limits, rate limits for its users. Those should at least be set at a threshold that doesn't shut down all of AT&T, maybe a little more than 2,000, but probably less than what it's doing now. But the point of the story is that apparently spam can DDoS your email service, and if you use AT&T, specifically one of those three domains, that might explain why you've been missing emails or unable to send to certain individuals. And finally, IoT devices are coming under more and more scrutiny, as they tend to be the gateway for different types of attacks. They're really easy to attack, generally because they're cheap and they're unsupported, so whatever connectivity components they have tend to become vulnerable, and then they're never patched. So attackers can Google what device they see in your network, and Google will literally return what they can do to infiltrate that device. And in case you're not familiar with what IoT is, it stands for Internet of Things, and it's just the devices you get that are pretty cheap that connect to the internet. So if you have any children's Wi-Fi devices, it can range from those all the way to dishwashers, to fridges, to cameras, remote control devices, etc. That's what's called IoT.
And once an attacker gets into an IoT device, that device is in your network, and it can be used to pivot to other, more critical assets, like your main computer, or your server that hosts all your documents, medical documents, etc. It's about time IoT came under this scrutiny. The article linked in the show notes by Help Net Security outlines some historical timelines of how IoT devices are being secured; for example, in 2022 NIST surveyed the state of IoT security and made a series of recommendations. But most recently, the European Union has drafted what's called the Cyber Resilience Act, set to begin rolling out in 2025, which will create new cybersecurity requirements to sell a device in the single market. And a lot of devices that are sold in the European Union are also sold around the world, the United States, etc., so they're going to have to abide by these regulations. Now, I wish that the country I resided in would start making these types of regulations, but at least we can piggyback off of the great things that they're doing for data security in the European Union. This has been the Daily Decrypt. If you found your key to unlocking the digital domain, show your support with a rating on Spotify or Apple Podcasts. It truly helps us stand at the frontier of cyber news. Don't forget to connect on Instagram or catch our episodes on YouTube. Until next time, keep your data safe and your curiosity alive.	2024-05-10T00:00:00	509648	The Daily Decrypt	The Daily Decrypt, hosted by offsetkeyz and d0gesp4n, offers an insightful and approachable take on cybersecurity. Their discussions cover a range of topics, from specific software vulnerabilities to broader issues like mobile security and ransomware trends. They delve into technical details while maintaining accessibility for a general audience, emphasizing practical advice and current developments in the cybersecurity field.
The podcast strikes a balance between in-depth analysis and user-friendly content, with a focus on high-quality audio and production.	The Digital Security Collective	10
33	3zQ5oSGHd7a3vTjtq3b1Co	AI-Generated Controversy, Synthetic Data Solutions, Dog Import Regulations	OpenAI is contemplating allowing users to create sexually explicit content using AI, leading to discussions on the potential risks and benefits. Fairgen, an Israeli startup, utilizes 'statistical AI' to produce synthetic data for market research, aiding companies in overcoming data scarcity and financial limitations. The platform aims to enhance smaller datasets for more detailed insights and has secured $8 million in funding. Additionally, new government regulations require dogs entering the US to be at least 6 months old, vaccinated for rabies, microchipped, and accompanied by a completed CDC import form starting August 1, 2024, to prevent the spread of the virus amid increased international pet travel and fake vaccination certificates.	2024-05-10T00:00:00	981250	Joshua's News	Your personalized daily news podcast	Joshua's PocketPod	54
34	5HfuRAdAMUeED1rDX4lpt3	Biden administration takes action to safeguard advanced AI from foreign threats	The Biden administration is taking steps to protect advanced AI models from being exploited by China and Russia. These models have the ability to analyze large amounts of text and images, but there are concerns that U.S. adversaries could use them for cyber attacks or to create biological weapons. One of the main threats is the creation of deepfakes and the spread of misinformation. Companies like OpenAI and Microsoft have developed AI-powered tools that can be used to create convincing deepfakes and other misleading content. Another concern is the potential use of AI models to create biological weapons, and the use of AI in cyber attacks. To address these threats, a bipartisan group of lawmakers has introduced a bill to impose export controls on AI models and give the Commerce Department more authority.	2024-05-10T00:00:00	300225	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
35	7L6TfGUzPuzO3yFN9gotpJ	AI Ethics and Regulation Unfold	Ethical AI dilemmas and responsible creation; the EU's risk-based AI regulatory framework; insights from AI & Big Data Expo experts; SoftServe's commitment to ethical AI solutions. Source: https://www.artificialintelligence-news.com/ Transcript: As the dawn of an advanced artificial intelligence age unfolds, the world grapples with a cavalcade of ethical concerns and regulatory challenges. Artificial intelligence, much like a rushing river, shapes the landscape with its relentless force, catalyzing transformations that touch every facet of human existence. Guiding this river, ensuring it nourishes rather than devastates, are the ethical frameworks and regulations that must evolve as swiftly as the technology itself. The ethical landscape of AI advancements is complex. It's not simply about what AI can do, but rather what it should do. This dialogue, thriving at the intersection of morality and innovation, discerns the mantle of responsibility that developers and users must bear. Resonating through the corridors of the upcoming AI & Big Data Expo North America are the voices of industry leaders like Igor Jablokov, who underscore the ethical dilemmas spurred by the swift development of AI. They draw attention to the paramount importance of responsible creation and implementation of AI technologies, advocating for a future where risks are mitigated as the potential of AI is harnessed.
Echoing these sentiments is SoftServe's dedication to pioneering AI solutions that do not lose sight of ethical deployment. Chuck Ros from SoftServe imparts a vision where the rapid strides in AI must align with strategic steps that uphold ethical standards and responsible introduction into society. It's a balance of leveraging power while tethering it with a strong moral compass, a sentiment set to reverberate through the presentations and discussions at the Expo. This calls to the fore the legislative blueprint provided by the European Union: a risk-based regulatory framework that is meticulously dissected and deliberated upon within the AI community. It partitions AI systems into strata of risk: the unacceptable, high-risk, transparent, and minimal categories. Each tier dictates the intensity of the oversight and restrictions required, from outright bans to light-touch regulations, all designed to uphold EU values and protect fundamental rights. The EU AI Act becomes a lodestar for how other regions could navigate the treacherous waters of AI regulation. It suggests a measured approach, one where rules are sculpted to the specific risks presented by AI systems. The Act's precedence underscores an emergent global truth: AI's ethical and regulatory journey is indispensable and must be undertaken with judiciousness and foresight. It's a journey of constant calibration, where innovation must go hand in hand with protection against AI-related harms, striving for transparency and trust in the digital companions increasingly integral to daily life. Thus, with the backdrop of an ever-advancing technological landscape and the framework for rigorous, responsible stewardship, the stage is set for a deeper exploration into the ethical dimensions of AI innovation, driven by insights from experts at the vanguard of this transformative epoch.
The ethical dimensions of AI innovation are as multifaceted as the technology itself, and nowhere are these complexities more acutely examined than in the astute perspectives shared by Igor Jablokov, CEO and founder of a pioneering AI company. Jablokov stands at the vanguard of AI development, yet he remains acutely aware of the weighty ethical dilemmas that accompany the rapid progression of this technology. Amid the fervor to push the boundaries of what machines can achieve lies a network of moral quandaries that must be navigated with care and profound consideration. Artificial intelligence has the capability to reshape the world, but its creation and implementation carry implications that extend far beyond technical feats. It is the duality of AI's vast potential for benefit against the backdrop of possible risks that calls for a nuanced approach to innovation. As Jablokov articulates, the key to harnessing AI's transformative power is to temper it with responsibility at every stage, from the drawing board to the user interface and beyond. AI technologies are not forged in a vacuum; they are the product of human ingenuity and, as such, are imbued with the biases, strengths, and weaknesses of their creators. It is this intimate connection between humanity and its creations that necessitates a framework for responsible innovation. Such a framework must address not only the typical concerns of safety and security but also the subtler issues of fairness, transparency, and accountability. Jablokov's discourse highlights a need for clear ethical guidelines that go hand in hand with the relentless pursuit of advancement. There is an understanding that each stride forward in AI must come with a vigilant assessment of implications, a balancing act between what can be done and what should be done in the context of societal norms and moral imperatives.
This calls for a dynamic interchange between technology and the ethical principles that should govern it, a reminder that the latter must not be outpaced by the former. The responsibility for ethical AI extends beyond developers and directly into the hands of organizations and individuals tasked with deploying these systems. The interplay between creation and utilization defines the trajectory of AI's impact on society. It becomes evident, through Jablokov's insights, that the path to ethically sound AI is paved with deliberate choices and informed governance. As the technological realm eagerly anticipates the further revelations and strategies Jablokov and other experts will share at the AI & Big Data Expo North America, it becomes increasingly clear that the future of AI is not solely in the code and algorithms but equally in the values and ethics that are interwoven with every line. The endeavor now is to stride forward, not just with the intention to innovate, but also with the commitment to preserving and enhancing the fabric of society through technology that is both powerful and principled. Understanding the critical role ethics play in AI innovation naturally leads into exploring how organizations like SoftServe implement frameworks that ensure such responsibility. The journey to ethically sound AI often starts deep within the operational ethos of companies that design and deploy these systems. At the helm of such initiatives within SoftServe is Chuck Ros, the Industry Success Director, instrumental in shaping the company's approach towards responsible AI development. SoftServe confronts the myriad challenges that arise in the realm of artificial intelligence with a strategy that prioritizes not just the technological advancement but the ethical implications thereof. As AI systems become more integrated into the fabric of everyday life, they blur the lines between machine autonomy and human oversight.
Companies like SoftServe forge ahead, keenly aware that with great power comes great responsibility. In detailing the company's approach, Ros underscores the need to construct AI that adheres to ethical standards from inception to deployment. This includes a rigorous process of vetting for biases within algorithms and ensuring that data governance not only complies with regulatory standards but also aligns with moral principles that uphold human dignity and rights. It is about embedding ethical considerations into the DNA of AI systems, a proactive stance that seeks to mitigate risks before they manifest. The challenges are not inconsequential. AI is a potent tool, capable of wielding influence across diverse sectors, from healthcare to finance to law enforcement. With such a broad-spectrum impact, ensuring responsible deployment is an intricate task. SoftServe recognizes that building trust is paramount, which can only be achieved through transparency and accountability. To this end, the strategic steps taken include establishing clear lines of communication about how AI operates, what decisions it is making, and on what basis these decisions are made. In a landscape that is as fast-evolving as AI, SoftServe acknowledges that staying ahead of the curve is integral. This involves ongoing education, research, and collaboration with experts across fields to harmonize the progression of technology with the evolution of ethical standards. As AI systems grow in complexity, so too must the frameworks that govern them, ensuring that they remain robust, adaptable, and above all, aligned with the core values of society. Ros's vision for SoftServe is one where technology serves humanity, and ethical diligence serves as the guardrails for this journey. As the AI & Big Data Expo North America approaches, it's this commitment to responsible AI that is poised to resonate through the panel discussions and presentations.
The discussion throws light on the role of organizations like SoftServe in charting a course where AI fulfills its promise as a force for good, a goal achieved not by chance but through deliberate and conscientious strategy. In the pursuit of responsible and ethical AI, the discussion surges forward to the regulatory frameworks that govern AI systems, with particular attention to the pioneering work of the European Union in this domain. The EU AI Act stands as an instructive model, initiating a risk-based approach to AI regulation that can serve as a benchmark for other regions seeking to balance the act of fostering innovation against the need for protection from potential AI-related harms. Articulated within the Act are distinct risk levels that AI systems may pose. The framework begins with systems deemed as presenting 'unacceptable risk', those that fundamentally contravene the values of the EU, including systems that could infringe upon personal freedoms or discriminate in harmful ways. The treatment of AI in this category is clear: such systems are prohibited due to their potential to cause significant detriment to individual rights and societal norms. Ascending from unacceptable risk, high-risk AI systems invite closer scrutiny. These are the systems that integrate deeply into society's critical infrastructure, in areas as varied as healthcare, education, and law enforcement. The Act mandates rigorous compliance for high-risk AI, a requirement that underscores the gravity of ensuring these systems are trustworthy. From stringent data governance practices to detailed documentation, these measures not only safeguard the public but also foster a culture of accountability among developers and users of AI. Yet, it's not just high-risk applications that demand regulatory attention. Systems falling into the 'transparency risk' category, though not typically associated with high-risk outcomes, still require clear labeling and disclosure.
With the pervasive nature of AI, ensuring that individuals are informed and aware that they are interacting with AI rather than human intelligence is vital for maintaining an environment of trust and informed consent. Finally, the broadest category encompasses AI applications that present minimal or no risk. These are the systems where the potential for harm is negligible. This categorization is crucial as it refrains from stifling innovation through overregulation. It acknowledges that while AI requires oversight, the approach must not be monolithic but instead reflective of the diverse array of applications and their associated impacts. This segmentation of AI systems into risk categories highlights the ethos of the EU AI Act: a deep understanding that while AI is a singular field, its applications are not. Different systems demand varied levels of oversight to ensure protections against misuse without hampering the progressive nature of technological advancement. The Act serves as a clarion call, one that emphasizes the importance of setting high standards for systems that could significantly impact individuals and societies. By enforcing transparency and ensuring compliance with ethical standards, the EU endeavors to carve a path where innovation and ethical responsibility are not conflicting ideals but components of a unified strategy towards a future where AI can realize its potential responsibly and beneficially. With this, the anticipation grows for the approaching discussions that will undeniably delve deeper into how such regulations might inspire and shape global policies for AI.	2024-05-08T00:00:00	824258	AnyTopic General Podcast	All AnyTopic Podcasts	AnyTopic	7366
36	25FTQfgBWN3WZx3AEtZy14	Episode 4: In-House Perspectives on AI Policy & Regulation	O'Melveny's podcast Achieve With... features the country's top lawyers in conversation with industry thought leaders on the defining issues of the day. On the latest episode, Tony Beasley, Senior Corporate Counsel at Microsoft, and Jonathan Crawford, Managing Counsel at Adobe, join Mark Liang, a partner in O'Melveny's Intellectual Property & Technology group and San Francisco office, to discuss their experience advising companies on artificial intelligence, emerging legal and regulatory developments, and predictions for the future of AI policy.	2024-05-07T00:00:00	2270171	Achieve With	O'Melveny's Achieve With podcast features leaders in business and law sharing insights on the critical issues of the day and how they impact your world.	O'Melveny & Myers LLP	4
37	6GjtZ550OHusOTjjXTp3DY	Warren Buffett Warns of AI Risks at Berkshire Hathaway Meeting	Warren Buffett, chairman and CEO of Berkshire Hathaway, has expressed concerns about artificial intelligence (AI) at the company's annual shareholder meeting. Buffett likened AI to the genie that came out of the bottle with nuclear weapons, saying he sees both enormous potential for good and harm with AI. He warned of the increasing prevalence of deepfake scams and revealed that his own image and voice have been replicated by an AI-backed tool. Despite his concerns, Berkshire Hathaway is using AI to increase employee efficiency. Other prominent figures, including JPMorgan Chase CEO Jamie Dimon, have also expressed concerns about the risks associated with AI.	2024-05-07T00:00:00	284212	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
38	4GH68SbTpKdCYAr9JggkfO	Americans Support Regulations on AI Data Training Practices	An AIPI poll found that a majority of Americans support regulations on AI companies' data training practices. The survey of 1,039 people revealed that 60% believe AI companies should not have unrestricted access to public data for training their products, and nearly three-quarters agreed that companies should compensate data creators. The poll also showed that 78% of respondents think regulations are needed for using public data to train AI models. The survey results reflect a growing recognition of the need for regulation in this area due to the rise of generative AI. The poll also explored the demographic breakdown, personal attitudes towards AI, and the relationship between education and attitudes. In related news, concerns have been raised about the overlap between AI developments in the US and China. China has introduced methods to prevent pre-trained models from being used for inappropriate tasks, but the definition of appropriateness may vary between countries. Microsoft's AI ambitions are also under scrutiny from global regulators who are concerned about the concentration of power and may investigate AI	2024-05-06T00:00:00	174706	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
39	0S5gkYsp25LyY1HwQCjSYE	AI Integration in Businesses: Key Risks and Strategies	As artificial intelligence (AI) becomes more integrated into businesses, there are various risks that need to be managed. These risks include regulatory compliance, cybersecurity vulnerabilities, ethical considerations, and privacy concerns. Companies and directors must establish effective risk management strategies to avoid potential consequences. Introducing AI can lead to liabilities if not prepared for regulatory scrutiny or claims activity. Misrepresentation of AI capabilities and investments can result in legal challenges. Directors and officers are responsible for overseeing AI integration and understanding the associated risks. Transparent communication, training employees, and establishing decision-making protocols are crucial for managing AI-related risks and harnessing its potential while minimizing risk.	2024-05-06T00:00:00	248685	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
40	4qmeMPNrTvEE9IvScH3HBX	Concerns rise over deepfake technology's harm towards women	Deepfake technology has raised concerns about its potential harm towards women, as manipulated videos or images created using AI can make it appear as though someone is doing or saying something they never did. The accessibility of deepfakes has led to instances of non-consensual nude images being shared, particularly involving underage girls. While Quebec has prosecuted individuals involved in the abuse of deepfake technology, Canada as a whole is unprepared to address these challenges. Experts emphasize the need for greater diversity and transparency in AI development and regulation to protect women. Efforts are underway to bridge the gender gap in AI and raise awareness about the ethical implications of deepfakes.	2024-05-06T00:00:00	201168	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
41	6s71JrLcWSuLtTadZCFqnX	Warren Buffett Warns About AI Scams	Warren Buffett, CEO of Berkshire Hathaway, has cautioned shareholders about the potential risks posed by artificial intelligence (AI) scams. Speaking at the annual meeting, Buffett shared an experience with a fake AI video that could have resulted in him being scammed. He expressed concerns that scammers could exploit AI technology on an unprecedented scale, causing harm that society may struggle to combat. Buffett admitted to not fully understanding the implications of AI but emphasized the importance of addressing the issue. The annual meeting also covered Berkshire Hathaway's financial results, with the company reporting a drop in earnings but highlighting the growth of its operating earnings. Additionally, shareholders were introduced to potential successors for Buffett's role as CEO. Overall, Buffett's warning highlights the need for caution and vigilance in the face of emerging technologies like AI.	2024-05-06T00:00:00	209397	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
42	66tMEImtFjDy0kkTPwhloo	Democrats gear up to harness AI for political campaigns, balancing benefits and risks	Democrats are working to catch up to Republicans in utilizing artificial intelligence (AI) technology for political campaigns. They aim to find and motivate voters, combat deceptive content, and analyze voter registration records. However, they are cautious about the risks AI poses in spreading falsehoods that could suppress voters or incite violence. The Biden administration has taken steps to regulate AI through executive action, but Democrats believe Congress needs to pass legislation for better safeguards. Progressive groups and Democratic candidates have been more proactive in experimenting with AI, but Democrats are focused on addressing ethical concerns and preventing misuse.	2024-05-06T00:00:00	221884	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
43	6h2InkFEe002hCvehkk6HO	Hollywood's Secret Use of AI Raises Concerns About Transparency	The use of generative AI in Hollywood has sparked concerns about transparency in the entertainment industry. Film and TV makers are not disclosing their use of AI, blurring the line between reality and fiction. Media companies like Netflix and HBO have been caught using AI without disclosing it, leading to skepticism among viewers. Fans of documentaries argue that AI-generated images introduce inaccuracies into the historical record, while fans of fictional dramas claim that AI takes away job opportunities from human artists. However, some argue that generative AI is just a part of the evolution of filmmaking techniques. Balancing the creative potential of AI with transparency and regulation will be crucial for maintaining trust and authenticity in media.	2024-05-06T00:00:00	275696	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
44	515srpfA5Egx366v3PQdsT	Protect Yourself: How to Spot Fake Online Reviews Easily	Online reviews have become an essential tool for consumers to make informed decisions about purchases, restaurants, and movies. However, not all reviews can be trusted, as there are both honest and deceptive ones. Fake reviews can harm a product's reputation, with negative ones potentially submitted by competitors and positive ones posted by individuals with a financial interest. Companies like Amazon are using artificial intelligence (AI) to identify and prevent fraudulent reviews. Consumers can protect themselves by trusting their instincts, paying attention to language and tone, validating reviewers, and looking for patterns in reviews. AI technology can help websites flag potential fake reviews, maintaining the credibility of their review systems.	2024-05-06T00:00:00	248946	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
45	7Kzp0QIoXZejGa6Qy6buhV	Microsoft's New Transparency Report Reveals Bold Steps Towards Responsible AI	Microsoft has released its Responsible AI Transparency Report, which highlights the company's efforts in promoting responsible AI. The report showcases achievements such as developing 30 responsible AI tools, expanding the responsible AI team, and implementing risk assessment measures for generative AI applications. Notable additions include the Content Credentials feature, which adds a watermark to AI-generated images, indicating their origin. Microsoft has also provided Azure AI customers with tools to detect problematic content and offers security risk evaluation tools. However, Microsoft has faced controversies with its AI rollouts, including issues with the Bing AI chatbot and the creation of explicit deepfaked images. Microsoft acknowledges that responsible AI is an ongoing journey and will continue to prioritize its commitment to responsible AI.	2024-05-03T00:00:00	173583	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
46	012yin3IvpVSn1TPVT78ul	Generative AI and GDPR, Fines for Location Data Sharing and Updated Health Breach Notification Rule	This week's Privacy Corner newsletter covers a range of important topics: Generative AI and GDPR: Privacy advocacy group noyb filed a complaint against OpenAI, alleging its AI tool ChatGPT violates user privacy by generating inaccurate personal data. The crux of the issue lies in whether noyb expects OpenAI to fix inherent limitations of the technology and the applicability of GDPR in this case. Fines for Location Data Sharing: The FCC penalized four major US wireless carriers nearly $200 million for allegedly sharing customers' location data with third parties without proper consent. This action reflects growing regulatory scrutiny around data privacy, especially concerning sensitive information like location. Updated Health Breach Notification Rule: The FTC finalized amendments to the Health Breach Notification Rule, expanding its scope to cover health apps and unauthorized disclosures of health information, not just security breaches. This highlights the evolving privacy landscape in the US healthcare sector.	2024-05-03T00:00:00	668473	The Privacy Corner	Join Robert each week as he navigates the ever-changing landscape of data breaches, surveillance, and individual rights, offering expert insights and actionable advice to help you take control of your digital footprint. Join him for lively discussions, in-depth interviews, and practical tips to protect your privacy in today's connected world.	Robert Bateman	44
47	0Au44hp2UekibiEcDmgCUX	Global AI Regulation Framework, Startup Support Without Equity, EU Messaging Privacy Concerns	Japanese Prime Minister Fumio Kishida introduced an international framework for regulating generative AI to manage its benefits and risks globally. Venture Mechanics, led by Ron Wiener, supports startups without taking equity, offering mentorship and connections to investors for under-represented founders. European Union lawmakers are under scrutiny from security experts for a proposal mandating messaging platforms to scan private messages for child sexual abuse material, with critics deeming it unfeasible and detrimental to Internet security and user privacy despite recent revisions made.	2024-05-03T00:00:00	939400	DonDraper's News	Your personalized daily news podcast	DonDraper's PocketPod	54
48	5pWq8hk5KJg5Z1ijvdx6oG	US Law Enforcement Using AI Tool Cybercheck Raises Concerns But Creator Claims 90% Accuracy Rate	Law enforcement agencies in the US are using an artificial intelligence tool called Cybercheck to aid in criminal investigations and prosecutions. However, defense lawyers have raised concerns about the tool's accuracy and reliability, arguing that its methodology is unclear and has not been independently verified. Cybercheck claims to use machine learning to search the internet for publicly available information to identify potential suspects and gather details for serious crimes. Its creator, Adam Mosher, states a 90% accuracy rate. The tool has been used in 8,000 cases across 40 states, but its reliability has been disputed in court cases, and defense lawyers are demanding transparency and independent verification to uphold due process.	2024-05-03T00:00:00	164806	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
49	4Jq9QK75io3hIxdy5yXG53	Beware the Deepfake Dangers: How AI Manipulation Threatens Truth	A recent criminal case in Maryland highlights the potential harm of deepfake technology. The case involved a high school principal who was framed as racist through a fake recording created using generative AI. This technology has become increasingly accessible, allowing anyone with internet access to manipulate audio, video, and images with ease. The rapid improvement of AI has made it difficult for human ears to detect manipulated audio, posing a significant concern for the spread of disinformation. Actions to address this issue could include stricter regulation, self-enforcement by AI providers, digital watermarks, and increased law enforcement and consumer education. However, it is essential to balance these efforts with the positive uses of AI and cultural differences in AI usage.	2024-05-03T00:00:00	287503	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
50	3CT2y8As3FOzoWZCnMDOz8	News outlets split on AI handling	The news industry is divided on how to handle artificial intelligence (AI) and is taking different approaches. Some major news outlets are partnering with AI companies, while others are filing lawsuits against them. Recently, eight regional U.S. newspapers joined the New York Times in suing Chat GPT's parent company, Open AI, and Microsoft for copyright infringement. However, some large news publishers like the Financial Times and the Associated Press (AP) have chosen to enter into paid arrangements with AI companies. The lack of a standardized marketplace for rates is posing challenges for news publishers to form profitable partnerships with AI companies. Meanwhile, tech companies are moving forward and using the data they need.	2024-05-03T00:00:00	288653	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
51	1LCuRJFwQA09lQMfVo0jPv	Tech Giants Embrace Synthetic Data for AI Training	Top technology companies, including Microsoft, Google, and Meta, are turning to synthetic data to obtain the large amounts of data needed to train their artificial intelligence (AI) models. Synthetic data, also known as fake data, is created by AI systems and can be used to train future versions of those same systems. This approach eliminates the need for licensed content and reduces legal, ethical, and privacy concerns. However, experts raise concerns about the risks associated with synthetic data, such as model collapse and the amplification of biases and toxicities. Despite the advantages, human intelligence is still crucial in creating and refining artificial datasets. Additionally, a mysterious chatbot called gpt2-chatbot has appeared, exhibiting performance similar to industry leader OpenAI's GPT-4. The developer's identity remains unknown.	2024-05-03T00:00:00	186488	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
52	6JUWNY6ylKinAVxl74rjP4	Newspapers Sue Open AI and Microsoft for Copyright Infringement	Eight daily newspapers owned by Alden Global Capital have filed a lawsuit against Open AI and Microsoft, alleging copyright infringement. The newspapers claim that the tech companies used millions of copyrighted articles without permission to train AI chatbots. The publications argue that this reduces the need for readers to subscribe, depriving the publishers of revenue. OpenAI stated that they were not aware of the concerns but are engaged in conversations with news organizations. The lawsuit is part of a broader fight over the use of data for generative AI and follows similar lawsuits by The New York Times. The case raises concerns about the impact on civic life in America.	2024-05-01T00:00:00	242494	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
53	337XTvmJop5JriojtfR0DP	Advocating for Stronger AI Regulations To Safeguard Civil Liberties with Congressman Joseph Morelle	On this episode, I am thrilled to sit down with Congressman Joseph Morelle, who represents New York's 25th Congressional District and serves on the House Appropriations Committee. As an influential voice in the dialogue on artificial intelligence, Congressman Morelle shares his deep insights into AI's potential and challenges, particularly concerning legislation and societal impacts. Key Takeaways: (02:13) Congressman Morelle's extensive experience in AI legislation and its implications. (04:27) Deep fakes and their growing threat to privacy and integrity. (07:13) Introducing federal legislation against non-consensual deep fakes. (14:00) Urgent need for social media platforms to enforce their guidelines rigorously. (19:46) The No AI Fraud Act and protecting individual likeness in AI use. (23:06) The importance of adaptable and 'living' statutes in technology regulation. (32:59) The critical role of continuous education and skill adaptation in the AI era. (37:47) Exploring the use of AI in Congress to ensure unbiased, culturally appropriate policymaking and data privacy. Resources Mentioned: Congressman Joseph Morelle - https://www.linkedin.com/in/joe-morelle-8246099/ No AI Fraud Act - https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&r=9 Preventing Deep Fakes of Intimate Images Act - https://www.congress.gov/bill/118th-congress/house-bill/3106 Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation. #AIRegulation #AISafety #AIStandard	2024-04-30T00:00:00	2418364	Regulating AI: Innovate Responsibly	Welcome to the Regulating AI: Innovate Responsibly podcast with host and AI regulation expert Sanjay Puri. Sanjay is a pivotal leader at the intersection of technology, policy and entrepreneurship and explores the intricate landscape of artificial intelligence governance on this podcast. You can expect thought-provoking conversations with global leaders as they tackle the challenge of regulating AI without stifling innovation. With diverse perspectives from industry giants, government officials and civil liberty proponents, each episode explores key questions and actionable steps for creating a balanced AI-driven world. Don't miss this essential guide to the future of AI governance, with a fresh episode available every week!	Sanjay Puri	37
54	2URFO7QfLD59lscLX0yqAi	Government Regulation of Artificial Intelligence	 Rapid growth of AI demands government regulation to safeguard against misuse of private data. Global efforts are underway to address this critical issue.  The American College of Trust and Estate Counsel, ACTEC, is a professional society of peer-elected trust and estate lawyers in the United States and around the globe. This series offers professionals best practice advice, insights and commentary on subjects that affect the profession and clients. Learn more in this podcast.	2024-04-30T00:00:00	529968	ACTEC Trust and Estate Talk	The American College of Trust and Estate Counsel, ACTEC, is a professional society of peer-elected trust and estate lawyers in the United States and around the globe. This series offers professionals best practice advice, insights, and commentary on subjects that affect the profession and clients.	The American College of Trust and Estate Counsel	301
55	3e4qPJgLaeiTv2w03iyyo3	GDPR Violation: noyb Files Complaint Against Open AI for Inaccurate Personal Data	European data protection advocacy group noyb has filed a complaint against artificial intelligence company Open AI, accusing the company of violating the General Data Protection Regulation (GDPR) by failing to ensure accuracy of personal data processed by their language model, Chat GPT. Open AI has admitted that it cannot correct incorrect information generated by Chat GPT or disclose the sources of the data used to train the model. The complaint alleges that Chat GPT generated false information about individuals, violating the GDPR requirement for data accuracy. noyb is asking the Austrian Data Protection Authority to investigate and impose penalties.	2024-04-29T00:00:00	159686	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
56	416uJm7mJHq2dtw7Bp8omL	AI Regulation, Governance and Data Privacy Implications by Jenna Franklin	In this episode on "AI Regulation, Governance and Data Privacy Implications", Jenna Franklin brings firsthand insights about the influence of landmark regulations like the EU AI Act and discusses both the advantages and challenges these laws present to innovation and privacy. This episode shares essential strategies for developing effective AI governance programs that ensure responsible AI use and compliance with existing laws. It is imperative that organisations establish robust AI governance programs to effectively manage risk and ensure compliance as legal standards continue to evolve.	2024-04-26T00:00:00	1143006	Privacy & Security Insights with PICCASO	Welcome to Privacy Insights with PICCASO - A limited podcast series where we explore the latest challenges and ideas in the world of privacy and information security with members of the PICCASO Board. Our guest speakers will be publishing articles which we will be covering in each episode. Join us to get their take on the privacy and information security landscape.	Steve Wright	13
57	3qV3FLiMiGPCke1XmYHQKG	Meta unveils new strategy to address AI-generated content on social media platforms	Social media giant Meta, formerly known as Facebook, has unveiled a new approach to combatting fake news on its platforms. The company will now label a wider range of AI-generated content on Facebook, Instagram, and Threads as 'Made with AI' in an effort to protect freedom of speech while addressing misinformation. The decision to label instead of removing content comes after criticism from the Oversight Board concerning Meta's handling of a controversial AI-altered video of President Biden. The labeling system will be implemented from May 2024, allowing users time to adjust to the notifications and reducing content removal based on the existing manipulated video policy by July 2024.	2024-04-25T00:00:00	203624	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
58	5FaUUG6T5wWreczAJqHVkK	AI Revolution - EU's Groundbreaking Regulations	In a dynamic new episode of the podcast series sponsored by Averest Training, host Demre Sirel of Databulls engages with AI Law Expert and attorney Selin Cetin Kumkumoglu to unpack the European Union's latest legislation on artificial intelligence (AI). The discussion centers on how this pivotal legislation sets stringent standards for transparency, accountability, and ethical practices in AI, aiming to safeguard consumer rights and promote secure, innovative AI deployment across EU states. Hosted on Acast. See acast.com/privacy for more information.	2024-04-23T00:00:00	1849182	Tech Tales with Demre Sirel	Join us, the Databulls Team, as we ignite the global stage every other week, delving into the realms of technology, artificial intelligence, cybersecurity, and fintech with trailblazing international guests. Don't miss out! Follow us to stay ahead of the curve!	Demre Sirel	1
59	2LEbugXcVsBsL3dfRd4GWg	AI-Generated Child Sexual Abuse Material Overwhelming Authorities	A new report from Stanford University's Internet Observatory reveals that the National Center for Missing and Exploited Children is ill-prepared to combat child sexual abuse material (CSAM) generated by artificial intelligence (AI). Criminals are using AI technology to create explicit images, making it difficult for authorities to identify and rescue real victims. The CyberTipline, which collects reports on CSAM, is overwhelmed by incomplete and inaccurate tips and the sheer volume of reports. The report calls for updated technology and laws to address this crime, and lawmakers are already working to criminalize the use of AI-generated explicit content. The report emphasizes the urgent need for increased funding and improved access to technology for the National Center for Missing and Exploited Children.	2024-04-23T00:00:00	176274	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
60	0GfD955HtktoEvIU9srnUX	LW - AI Regulation is Unsafe by Maxwell Tabarrok	Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Regulation is Unsafe, published by Maxwell Tabarrok on April 22, 2024 on LessWrong. Concerns over AI safety and calls for government control over the technology are highly correlated but they should not be. There are two major forms of AI risk: misuse and misalignment. Misuse risks come from humans using AIs as tools in dangerous ways. Misalignment risks arise if AIs take their own actions at the expense of human interests. Governments are poor stewards for both types of risk. Misuse regulation is like the regulation of any other technology. There are reasonable rules that the government might set, but omission bias and incentives to protect small but well organized groups at the expense of everyone else will lead to lots of costly ones too. Misalignment regulation is not in the Overton window for any government. Governments do not have strong incentives to care about long term, global, costs or benefits and they do have strong incentives to push the development of AI forwards for their own purposes. Noticing that AI companies put the world at risk is not enough to support greater government involvement in the technology. Government involvement is likely to exacerbate the most dangerous parts of AI while limiting the upside. Default government incentives: Governments are not social welfare maximizers. Government actions are an amalgam of the actions of thousands of personal welfare maximizers who are loosely aligned and constrained. In general, governments have strong incentives for myopia, violent competition with other governments, and negative sum transfers to small, well organized groups. These exacerbate existential risk and limit potential upside. The vast majority of the costs of existential risk occur outside of the borders of any single government and beyond the election cycle for any current decision maker, so we should expect governments to ignore them. We see this expectation fulfilled in governments' reactions to other long term or global externalities, e.g. debt and climate change. Governments around the world are happy to impose trillions of dollars in direct cost and substantial default risk on future generations because costs and benefits on these future generations hold little sway in the next election. Similarly, governments spend billions subsidizing fossil fuel production and ignore potential solutions to global warming, like a carbon tax or geoengineering, because the long term or extraterritorial costs and benefits of climate change do not enter their optimization function. AI risk is no different. Governments will happily trade off global, long term risk for national, short term benefits. The most salient way they will do this is through military competition. Government regulations on private AI development will not stop them from racing to integrate AI into their militaries. Autonomous drone warfare is already happening in Ukraine and Israel. The US military has contracts with Palantir and Anduril, which use AI to augment military strategy or to power weapons systems. Governments will want to use AI for predictive policing, propaganda, and other forms of population control. The case of nuclear tech is informative. This technology was strictly regulated by governments, but they still raced with each other and used the technology to create the most existentially risky weapons mankind has ever seen. Simultaneously, they cracked down on civilian use. Now, we're in a world where all the major geopolitical flashpoints have at least one side armed with nuclear weapons and where the nuclear power industry is worse than stagnant. Governments' military ambitions mean that their regulation will preserve the most dangerous misuse risks from AI. They will also push the AI frontier and train larger models, so we will still face misalignment risks. These may ...	2024-04-22T00:00:00	484393	The Nonlinear Library	The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org	The Nonlinear Fund	2398
61	1mL36GyMxbEnY20XOpntL7	[Linkpost] AI Regulation is Unsafe by Maxwell Tabarrok	This is a linkpost for https://www.maximum-progress.com/p/ai-regulation-is-unsafe Concerns over AI safety and calls for government control over the technology are highly correlated but they should not be. There are two major forms of AI risk: misuse and misalignment. Misuse risks come from humans using AIs as tools in dangerous ways. Misalignment risks arise if AIs take their own actions at the expense of human interests. Governments are poor stewards for both types of risk. Misuse regulation is like the regulation of any other technology. There are reasonable rules that the government might set, but omission bias and incentives to protect small but well organized groups at the expense of everyone else will lead to lots of costly ones too. Misalignment regulation is not in the Overton window for any government. Governments do not have strong incentives to care about long term, global, costs or benefits and they do have strong incentives [...] --- Outline: (01:18) Default government incentives (05:40) Negative Spillovers --- First published: April 22nd, 2024 Source: https://www.lesswrong.com/posts/3LuZm3Lhxt6aSpMjF/ai-regulation-is-unsafe Linkpost URL: https://www.maximum-progress.com/p/ai-regulation-is-unsafe --- Narrated by TYPE III AUDIO.	2024-04-22T00:00:00	553440	LessWrong (30+ Karma)	Audio narrations of LessWrong posts.	LessWrong	500
62	0FdJrcPfhASva09OVqJQbS	AI Governance in the Spotlight: European VCs and CEOs Warn of Over-Regulation Risks	Shine a spotlight on the evolving landscape of AI governance as European VCs and CEOs caution against the risks of over-regulation. Delve into the ongoing discussions shaping the future of artificial intelligence and its impact on innovation. Get on the AI Box Waitlist: https://AIBox.ai/ Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ Follow me on Twitter: https://twitter.com/jaeden_ai	2024-04-12T00:00:00	582635	LLM	"LLM" is a dynamic podcast that dives deep into the latest developments and topics in law, technology, and innovation. Each episode brings you comprehensive analysis and insightful discussions on groundbreaking legal issues and tech advancements. Join us as we explore the intersection of law and technology, offering listeners a unique perspective on how these fields are evolving and impacting our world. Stay informed and engaged with "LLM," your go-to source for the most relevant news and topics in the legal and tech spheres.	LLM	490
63	2eZ2b4XHEJzHHnUtoImzM5	Tech Transparency Unveiled: AI Survey Reveals Public Distrust, Ignites Call for Regulation	Unveil the transparency in tech as an AI survey exposes public distrust, igniting a resounding call for regulatory measures. Join this episode to explore the survey's revelations, the need for transparency in AI, and the implications for the future of tech governance.  #TechTransparency #AISurveyRevelations  Get on the AI Box Waitlist: https://AIBox.ai/ Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ Follow me on Twitter: https://twitter.com/jaeden_ai   	2024-04-10T00:00:00	685127	ChatGPT: OpenAI, Sam Altman, AI, Joe Rogan, Artificial Intelligence, Practical AI	The ChatGPT podcast is a cutting-edge podcast exploring the intersection of artificial intelligence and technology. Each episode delves into how AI is revolutionizing the industry, from personalization to data analysis, featuring insights from leading experts and real-world case studies.	ChatGPT	492
64	3NYGc0pdH6yTgPRjNBdY3H	AI Regulation	Various regions adopt diverse approaches to regulating AI. The EU is pursuing a comprehensive AI Act to mitigate risks such as bias and algorithmic transparency. Conversely, the US has a fragmented approach, with agencies targeting specific AI aspects like data privacy. China emphasizes regulations concerning national security and social control through AI. These disparate approaches reflect the multifaceted nature of AI governance on a global scale.  	2024-04-10T00:00:00	338176	Learn2upskill	Learn2upskill is the platform where we can access bite-sized tech talks	learn2upskill	16
65	2FFj1boTdhe3UZPobCJnju	HS069: Regulating AI	In today's episode Greg and Johna spar over how, when, and why to regulate AI. Does early regulation lead to bad regulation? Does late regulation lead to a situation beyond democratic control? Comparing nascent regulation efforts in the EU, UK, and US, they analyze socio-legal principles like privacy and distributed liability. Most importantly, Johna drives...	2024-04-09T00:00:00	1744510	Heavy Strategy	From technology to workplace culture, from geopolitical trends to economics, Heavy Strategy debates pivotal questions in enterprise IT. Hosts Greg Ferro and Johna Till Johnson bring their technical expertise, analytical acumen, and contrasting viewpoints to discuss complex topics of interest to IT leaders. Frequently irreverent and always thought-provoking, these are the conversations you wish you could have at the leadership table. Tune in and join the think tank, where unanswered questions are better than unquestioned answers.	Packet Pushers	50
66	0e96QQ19xeyuNobnb0LcEc	Paul Schmetzler: FDA Regulations, AI and Legal Risk	We chatted with the partner at Clark Hill PLC about AI, FDA regulations, and cybersecurity legal risks, based on his years of experience in the legal aspects of healthcare and industrial cybersecurity.	2024-04-09T00:00:00	2469468	Left to Our Own Devices	We just announced Left to Our Own Devices: The Conference - the first virtual conference dedicated to product security, taking place on April 3rd, 2024. Save your spot for free at Cybellum.com/conference  -------------------------------------  Introducing Left to Our Own Devices - the podcast dedicated to everything product security.   Every other week, we will be talking with a different cybersecurity policymaker, engineer, or industry leader to hear their war stories and get their insider tips for surviving the product security jungle.   From Medical SBOMs to WP.29 and the latest industrial security threats, this is your place to catch up and learn from the pros.  Left to Our Own Devices is brought to you by Cybellum. To learn more, visit Cybellum.com	Cybellum Technologies LTD	55
67	4rmowF9a4XAeZ1d06VYmg4	The One on Navigating the EU Data & AI Regulations	In this episode of TechTalk, we explore how organisations are gearing up for the Data Act and the Artificial Intelligence (AI) Act, which are part of the European Union's Digital Strategy and set clear guidelines for fair data access and ethical AI use across the Union. More precisely, we discuss the operational shifts required, the potential hurdles in compliance, and the strategies to overcome them, among others. Our guests are Dr. Jan-Peter Ohrtmann, Partner at PwC Legal Germany and the Global Privacy & Cyber Legal Network Lead, and Dr. Saharnaz Dilmaghani, Artificial Intelligence & Data Science Senior Associate at PwC Luxembourg.	2024-04-08T00:00:00	2368260	PwC Luxembourg TechTalk	Ah, technology. To quote Steve Jobs, "we have no idea how far it's going to go". Whether you're the savviest techy around, a complete tech-dummy or somewhere in between, join Carla Santos, a journalist eager to learn more about tech-related matters, and Christopher Rossa, who continuously seeks to demystify technology, during their monthly conversations with specialists.	PwC Luxembourg	115
68	0YVuPR23oVhkGndjJXtGiA	UK Elections 2024: Cyber Experts Warn of AI-Powered Disinformation Threat	The United Kingdom is gearing up for local and general elections in 2024, which are expected to be highly contentious. Cyber experts warn that malicious actors may target these elections through disinformation campaigns aided by artificial intelligence (AI). Disinformation has been a significant cyber risk in previous UK elections, and state-backed cyberattacks are also expected to increase leading up to the upcoming elections. AI-generated deepfakes, synthetic images, videos, and audio, make it easier for malicious actors to spread false information. Cybersecurity experts stress the need for increased awareness and international cooperation to mitigate the threat of AI-powered disinformation. Tech giants like Meta, Google, and TikTok will play a crucial role in combatting deepfakes and preventing the spread of misinformation.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-04-08T00:00:00	242912	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
69	3nVDSK9CZHFw9FqS40vXLW	William Covington, UW School of Law, on Evolving AI Regulations	Join Professor William Covington, Director of the Technology Law and Public Policy Clinic at the University of Washington School of Law, as he navigates the waters of technology, law, and public policy in AI. Professor Covington and hosts Alex Alben and Patrick Ip discuss the evolving landscape of AI regulation, emphasizing nuanced understanding and proactive safety measures in model deployment. We explore emerging AI applications like the Humane AI Pin, addressing privacy concerns in physical spaces. For more discussion, news, and thinking about responsible AI, visit our website, The AI Forum.	2024-04-08T00:00:00	3560063	Responsible AI from The AI Forum	Engage in critical dialogue with legal minds and tech experts on AI's legal and societal impacts in Responsible AI, a podcast from the AI Forum. Our episodes feature experts in cybersecurity, law, and technology, offering deep dives into public policy and best practices. Tune in for a meeting of minds that shapes the future of AI governance. Responsible AI is hosted by Alex Alben and Patrick Ip, Directors of The AI Forum. Learn more at our website, theaiforum.org.	Alex Alben, The AI Forum	4
70	2uecJPM6mAmmZfI8wXbTIh	Judge in Washington bans AI-enhanced videos as trial evidence	A judge in Washington has banned the use of AI-enhanced videos as evidence in a trial, stating that it could lead to confusion and disputes due to the non-peer-reviewed process used by AI technology. The ruling came in a case involving a man accused of killing three people, whose lawyers sought to introduce AI-enhanced cellphone video footage. The judge's decision highlights the challenges faced by lawmakers and legal professionals in incorporating AI into the justice system. It also emphasizes the need for clear guidelines and standards for the admissibility of AI-enhanced evidence in court. The ruling comes as governments work on policies to address AI risks.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-04-05T00:00:00	178050	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
71	3Tfg0dRBWRmdPwG3JQ0fs1	Debating GPT-4 Reduced Conspiracy Theories	A study by MIT and Cornell showed that debating with GPT-4 reduced believers' confidence in conspiracy theories by 20%. Participants engaged in text conversations with GPT-4, which presented tailored counter-evidence, significantly decreasing conspiratorial beliefs. The effect persisted over months and worked across various conspiracy theories, highlighting GPT-4's potential for positive impacts and the importance of responsible use due to its persuasive power. Today's Episode Brought to You By: Plumb - Build, test, and deploy AI features with confidence - https://useplumb.com/ ** Be the first to learn about our new AI education platform: https://besuper.ai/ ** ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe   Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown   Join the community: bit.ly/aibreakdown   Learn more: http://breakdown.network/	2024-04-05T00:00:00	916897	The AI Daily Brief (Formerly The AI Breakdown): Artificial Intelligence News and Analysis	A daily news analysis show on all things artificial intelligence. NLW looks at AI from multiple angles, from the explosion of creativity brought on by new tools like Midjourney and ChatGPT to the potential disruptions to work and industries as we know them to the great philosophical, ethical and practical questions of advanced general intelligence, alignment and x-risk.	Nathaniel Whittemore	385
72	1ZTPSX8FuGonmSmTguWB8v	R Kelly: Sean "Diddy" Combs Defender // Trump Sues Truth Social Co-Founders/Former Apprentice Contestants // The Good News: Stolen Okinawan Art Being Given to Authorities // Election 2024 AI and Deepfake Regulations and Company Policies		2024-04-03T00:00:00	1416568	What's Eating Cale	A unique perspective on everything from Entertainment and Non-Political Headlines to some of the biggest challenges and questions facing every human being.	Cale Guin	276
73	3GysW3fcxb0tcB2zPECwfI	AI Manipulation Battle: TrueMedia vs Deepfakes, Discord's Record Breaker, AI Resurrects George Carlin, Music Stars vs Deepfake, Google's AI Initiative for Nonprofits, and Global AI Regulation Moves	"Dive into the world of AI in Marketing with Nytro Marketing. In today's episode, we discuss how one AI researcher shifts gears to battle AI-manipulated content in politics, a breakdown of how Discord's April Fools' joke smashed YouTube records, and the legal dilemmas surrounding the use of AI to replicate voices, as highlighted by a recent lawsuit involving comedian George Carlin's estate. Plus, we explore why famous musicians are rallying against deepfake technology and discuss Google's initiative to equip nonprofits with AI tools. Finally, we dissect a groundbreaking international partnership between the US and UK focusing on AI safety standards. Catch these exciting news updates and more on 'AI in Marketing with Nytro Marketing'!" Transcript: Today is Wednesday, April 3rd- Here is what we are covering - AI researcher Oren Etzioni is battling the surge of AI-manipulated content that could potentially influence the outcome of upcoming elections. Etzioni's non-profit, TrueMedia.org, has just released a set of free tools designed to detect and identify digital disinformation, aimed at aiding journalists, fact-checkers, and the public in discerning real online content from fake content. However, even with these tools, Etzioni expressed concern about the rapid advancement of AI technology, which could lead to more convincing deepfakes, making detection increasingly complex. As the global community gears up for several key elections in the coming year, Etzioni paints a grim image of a future potentially overwhelmed by AI-generated misinformation. 
-  On April 1st, 2024, Discord, a widely used messaging platform, shattered YouTube records by garnering over one billion views in 24 hours with an April Fools' joke, a video about a new "Loot Boxes" feature. Despite the video being only 18 seconds long and the feature not being serious, it attracted an unprecedented number of views, surpassing the records held by popular YouTuber MrBeast and Grand Theft Auto VI's trailer. Speculation suggests this anomaly occurred because the video was embedded in a pop-up and auto-played on loop within Discord, unintentionally functioning like a "working YouTube view bot", as described by software developer Marvin Witt. -  In a recent settlement, the estate of famed comedian George Carlin has resolved a lawsuit against a podcast that utilized artificial intelligence to recreate his voice. It indicates the new legal terrain being navigated as AI technology allows for the digital resurrection of voices. With this case, legal and ethical lines are being defined around the issues of voice rights and posthumous digital representation. -  Renowned musicians including Billie Eilish, Nicki Minaj, and Stevie Wonder are rallying together to demand greater defenses against artificial intelligence. Deepfake technology, which uses AI to manipulate voices and faces, threatens the authenticity of their music and raises potential licensing issues. Their request illustrates the urgent need for updated copyright laws to address the evolving challenges posed by technological advancement. -  As AI-integrated tools revolutionize industries, Google is stepping up to ensure nonprofits aren't left behind by launching the Google.org Accelerator: Generative AI. The six-month crash course supports high-impact nonprofits with both funding and education, helping the organizations integrate AI-powered tools into their operations. This includes projects such as AI-powered assistants, search interfaces, and even chatbots designed for non-English speakers. 
-  The US and UK have partnered to address safety risks associated with advanced AI models, requiring companies to report safety test results and allow the vetting of their tools. The two nations will also collaborate on research, personnel exchanges, and information sharing through this agreement. This comes as part of a global move towards stringent AI regulations, with the EU also recently passing comprehensive regulations on AI safety standards. -  This podcast, art, and text are AI-generated.	2024-04-03T00:00:00	260225	Stan Berteloot - AI & Marketing Daily	Join us daily for the AI in Marketing Daily - your concise update on the latest trends and news in artificial intelligence and marketing, curated every morning by Stan Berteloot from Nytro Marketing.	Stan Berteloot	52
74	3SyJ2ny61Cq1XH2Lu3HGuj	Gareth Stokes, DLA Piper: How AI Shapes Law and Regulation	Join us for a compelling episode as we engage with Gareth Stokes, a distinguished figure at DLA Piper, to explore the profound influence of AI on the realms of law and regulation. Delve into the visionary insights and innovative strategies that are reshaping the legal landscape through the power of artificial intelligence. Gain invaluable perspectives on the dynamic interplay between technology and the legal sector in this must-listen podcast conversation.  Get on the AI Box Waitlist: https://AIBox.ai/ Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ Follow me on Twitter: https://twitter.com/jaeden_ai	2024-03-31T00:00:00	1997519	ChatGPT: News on Open AI, MidJourney, NVIDIA, Anthropic, Open Source LLMs, Machine Learning	Welcome to the ChatGPT podcast, where we explore the exciting and ever-evolving world of artificial intelligence. Each episode, we invite experts and thought leaders in the field to share their insights and experience working with AI.  We delve into a range of topics, from the latest research and developments in machine learning and natural language processing, to the ethical considerations of AI. Our guests bring a diverse set of perspectives and come from a variety of industries, including academia, tech, and business.	Jaeden Schafer	503
75	6jd21J0TMJxdEBktOmEwUs	AI chatbot in NYC providing misleading information; raises concerns about legality	New York City's AI-powered chatbot, intended to assist business owners in understanding government regulations, has been found to provide misleading information that may encourage illegal activities. Five months after its launch, the chatbot has been observed to offer incomplete and dangerously inaccurate guidance on housing policies, worker rights, and rules for entrepreneurs. One example is its incorrect claim that landlords are not obliged to accept tenants with Section 8 vouchers or rental assistance, whereas it is illegal to discriminate based on income source in the city. This finding raises concerns about the chatbot's reliability and underscores the importance of testing and verification before implementing AI-powered systems.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-29T00:00:00	133146	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
76	74AVGAEJJSVbmjzRcDjMuA	AI Regulation in Action: Consumer Protection Agencies Investigate ChatGPT	Explore AI regulation in action as consumer protection agencies investigate ChatGPT's practices, paving the way for ethical AI deployment and user protection. Join us as we discuss the challenges and opportunities of regulating AI technologies.  Get on the AI Box Waitlist: https://AIBox.ai/ AI Facebook Community: https://www.facebook.com/groups/739308654562189 Podcast Studio AZ: https://podcaststudio.com/mesa-studio/ Podcast Studio Network: https://PodcastStudio.com/	2024-03-29T00:00:00	386890	Sam Altman Podcast	The Sam Altman Podcast is a forward-thinking podcast dedicated to exploring the latest trends and breakthroughs in artificial intelligence and technology, and specifically what Sam Altman is doing in the field. Each episode delves into current topics and news, offering listeners in-depth analysis and insights into the evolving world of AI. The show aims to enlighten both enthusiasts and professionals by discussing the implications and future of artificial intelligence in various industries. Tune in for a comprehensive and engaging journey through the dynamic landscape of AI and technology.	Sam Altman Podcast	92
77	2ytNuKUGArPccapAxl8Y8e	Uber Eats Courier Wins Settlement Over Racial Discrimination	A recent case involving a Black Uber Eats bike courier, Pa Edrissa Manjang, who faced racial discrimination due to facial recognition checks and received a settlement from Uber. The use of facial recognition checks resulted in Manjang being unable to access the Uber Eats app, causing him to lose job opportunities. The case raises concerns about the lack of transparency and rushed implementation of automated systems, which can lead to biased outcomes. It also highlights the need for stronger enforcement of existing laws and the implementation of dedicated AI safety legislation to address AI bias and protect against discrimination and human rights abuses.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-29T00:00:00	314305	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
78	0KHy4sHD07Rw3JByFuKl1X	AI Misuse in Schools Sparks Outrage and Calls for Regulation	The growing popularity of artificial intelligence (AI) is causing concern in schools as instances of its misuse emerge. One high school student in Illinois used AI to create and circulate nude photos of classmates, sparking outrage among parents and prompting calls for stricter regulation. Similar incidents have been reported across the country, leading to a debate about policing AI use in schools. Some states have implemented laws against non-consensual creation of explicit images using AI, but legal experts argue that charging young individuals with felonies is excessive. Experts recommend that schools establish clear rules to prevent future misuse of AI technology.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-28T00:00:00	193515	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
79	4ANVquDmUVWTvon9PjGC4p	Biden Administration Unveils New AI Regulations	The Biden administration has introduced three new policies to regulate the use of artificial intelligence (AI) within the federal government. These policies address concerns about the risks associated with AI and aim to protect citizens while promoting responsible implementation. The policies require federal agencies to ensure that their use of AI does not jeopardize the rights and safety of Americans, publish a list of AI systems being used and assess associated risks, and designate a chief AI officer to oversee AI utilization. The administration hopes that these policies will serve as a blueprint for global action on AI regulation.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-28T00:00:00	267676	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
80	5m5zIHFJ0Qdz18OEp3gnJN	AI governance and policy (Article)	Today's release is a reading of our career review of AI governance and policy, written and narrated by Cody Fenwick. Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks, and there are opportunities in the broad field of AI governance to positively shape how society responds to and prepares for the challenges posed by the technology. Given the high stakes, pursuing this career path could be many people's highest-impact option. But they should be very careful not to accidentally exacerbate the threats rather than mitigate them. If you want to check out the links, footnotes and figures in today's article, you can find those here. Editing and audio proofing: Ben Cordell and Simon Monsour. Narration: Cody Fenwick	2024-03-28T00:00:00	3065286	80,000 Hours Podcast	Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them.   Subscribe by searching for '80000 Hours' wherever you get podcasts.   Produced by Keiran Harris. Hosted by Rob Wiblin and Luisa Rodriguez.	Rob, Luisa, Keiran, and the 80,000 Hours team	239
81	0fi0zZpxezTr3Jubh3w85K	Turkey on high alert for AI-generated disinformation ahead of elections	Concerns are rising in Turkey ahead of the nationwide local elections on March 31, as disinformation and fake media created through artificial intelligence (AI) are becoming more prevalent. Manipulated images and videos are being used by some politicians for electoral advantage, raising fears about media manipulation in an election where the ruling Justice and Development Party (AK Party) is aiming to retake cities won by the opposition in 2019. The director of fact-checking project Teyit warns that "cheap fake" videos are more common than AI-created disinformation and pose a significant threat. The Turkish government passed a law last year criminalizing the dissemination of false information to combat disinformation.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-28T00:00:00	240169	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
82	6yK1Mg0xZKtdKaUR9gxeGx	Biden administration weighs "nutritional labels" for AI tech products	The Biden administration is considering the introduction of "nutrition labels" for new tech products utilizing artificial intelligence (AI). These labels would provide standardized descriptions for AI systems, similar to food labeling by the Food and Drug Administration. The Department of Treasury and Department of Commerce have released reports examining the risks of AI and are exploring the possibility of implementing these labels. Challenges may arise due to differing opinions on defining AI. The Treasury Department plans to collaborate with various organizations to investigate AI's impact and address risk and fraud issues, with the aim of enhancing transparency and accountability in AI technology use.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-28T00:00:00	143960	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
83	1RPOOgs1VVBQRnXYdry1YY	California Eyes Europe's Tough Approach on Big Tech and AI Regulation	California aims to learn from Europe's tough-on-big-tech approach to AI regulation, with at least 30 proposed bills addressing various aspects of the technology. Inspired by the European Union's (EU) recent legislative actions on AI, such as the recently passed AI Act, California state legislators are proposing laws ranging from disclosing data used to train models to banning election ads with computer-generated features. Some proposed bills also focus on regulating deepfakes and non-consensual deepfake pornography. While industry association NetChoice opposes importing European-style AI regulation into the United States, Adobe supports the EU's risk-based approach to AI. California's efforts in AI regulation may set a precedent for future regulation across the US.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-27T00:00:00	187245	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
84	2F6WkMLgKKj4jlZAFh8z1f	Unveiling Public Sentiment: AI Survey Exposes Distrust in Tech Firms, Sparks Call for Regulation	Uncover the nuances of public sentiment as an AI survey reveals widespread distrust in tech firms, igniting a call for regulatory measures. Join this episode to delve into the survey findings, explore the reasons behind the distrust, and discuss the growing demand for regulations in the AI landscape.  #AISurveyInsights #TechRegulationDemand  Get on the AI Box Waitlist: https://AIBox.ai/ Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ Follow me on Twitter: https://twitter.com/jaeden_ai   	2024-03-27T00:00:00	685127	Artificial Intelligence: AI News, ChatGPT, OpenAI, LLM, Anthropic, Claude, Google AI	Artificial Intelligence is a podcast exploring the world of AI and its impact on society. Join us as we delve into the latest developments, breakthroughs, and controversies in the field. Each episode, we'll hear from experts, practitioners, and thinkers who are shaping the future of AI. Get a better understanding of how this rapidly growing technology is changing the way we live and work, and what its future might hold. Tune in to Artificial Intelligence, where technology meets humanity.	Jaeden Schafer	518
85	6m9aNN8i70UzlkNgaHMydG	How Large Language Models Are Revolutionizing Finance	Large Language Models (LLMs) have the potential to transform the finance sector within the next two years, according to research by The Alan Turing Institute. LLMs can enhance efficiency and safety by quickly analyzing data and generating coherent text, enabling them to detect fraud, generate financial insights, and automate customer service. The technology also has applications in healthcare, law, and education. The finance industry is already deploying LLMs for tasks including reviewing regulations and delivering advisory and trading services. However, challenges remain around regulatory compliance and the need for explanations and predictable outputs. Collaboration between professionals, regulators, and policymakers is encouraged to address these concerns.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-27T00:00:00	237714	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
86	3jen0AWVur35IccIxpRCMn	AI Regulation Conundrum: A Call for Balance	As the dawn of artificial intelligence (AI) reshapes the horizon of our technological landscape, the global dialogue pivots to a critical junction: how do we regulate a force potent enough to redefine humanity's future? The emerging consensus suggests a cautious tread; the path is fraught with both unprecedented opportunity and speculative peril.	2024-03-26T00:00:00	154900	Codefact Daily	Keep ahead in technology and business with our weekly newsletter. We offer sharp analysis and data-driven insights on key trends.	Codefact Oy	100
87	6DWkhSLooUYdgh6iEObuDf	AI Regulation at a Crossroads	This podcast examines the raging debate around not only if and how to regulate AI, but also when and by whom. Our experts cover such topics as the potential risks of AI, different regulatory approaches to AI, and the need for global AI governance. Guest experts include Jenny Carmichael, VP Compliance at Entrust and Stephanie Derdouri, VP Security, Governance, Risk and Compliance at Entrust.	2024-03-25T00:00:00	1574063	The Cybersecurity Institute Podcast, by Entrust	The Cybersecurity Institute Podcast by Entrust lets IT and business leaders listen in on smart cybersecurity conversations on what you need to know to protect, adapt and grow your enterprise. Episodes feature Entrust experts and guests speaking to topics ranging from ransomware and nation-state threats to unique perspectives on multi-cloud, zero trust, securing the supply chain, prioritizing infosec investments, decentralized identity, and more	Entrust	27
88	33wYGMjvmmFVuGfIKvgaD1	Unlocking AI's Potential in Sexual and Reproductive Health	The World Health Organization (WHO) and the UN Special Programme on Human Reproduction (HRP) have released a technical brief on the use of artificial intelligence (AI) in sexual and reproductive health and rights (SRHR). The brief highlights the opportunities and risks that AI presents in this field, including AI's potential to make sexual and reproductive services more accessible. However, there are concerns over data breaches, bias, unequal access, and misinformation. The brief suggests actions such as revisiting data protection regulations, ensuring diversity in training data and development teams, and addressing misinformation. Ethical and inclusive development and regulation of AI in SRHR are crucial.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-25T00:00:00	249417	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
89	520AM983tdEh8n66VZGx6L	US House Introduces Bill to Regulate AI-Generated Media	A bipartisan bill has been introduced in the US House of Representatives that aims to regulate the use of artificial intelligence (AI) by requiring the identification and labeling of AI-generated online media such as images, videos, and audio. The legislation seeks to address concerns over deepfakes, which are AI-generated media that can replicate real content. AI developers would be required to embed digital watermarks or metadata in their AI-generated content, while online platforms like TikTok and Facebook would be obligated to label such content as AI-generated. The bill also includes provisions for civil lawsuits against violators. The legislation reflects a growing recognition of the need for accountability and trust in AI development.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-24T00:00:00	243670	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
90	1si18D4aIlL5XBSfBSGlFX	New Bill Targets AI-Generated Content Deception	A bipartisan bill has been introduced in the House of Representatives that aims to tackle the issue of AI-generated content and the potential for deception and misinformation. The bill would require AI developers to include digital watermarks or metadata in content created using their technology, and online platforms would be responsible for labeling the content to inform users that it was generated using AI. The proposed legislation aims to address concerns about deepfakes and ensure transparency for the American public, while also recognizing the importance of allowing the field of AI to continue developing and benefiting various industries.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-21T00:00:00	213237	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
91	0hHY4WnZxe9yN9Acn6hMEf	India at the crossroads: Balancing AI Innovation with competition regulation	Artificial Intelligence (AI) has ushered in a profound transformation across industries, fundamentally altering competition dynamics. However, alongside its immense potential, AI presents intricate regulatory challenges, particularly in the realm of competition. As AI systems advance, concerns about market dominance, consumer welfare, and fair competition have sparked global regulatory scrutiny. A primary issue is the consolidation of power among tech giants leveraging AI. These companies enjoy substantial advantages in data access, processing capabilities, and algorithmic insights, enabling them to dominate markets and potentially stifle competition through tactics like predatory pricing and exclusionary practices. Various jurisdictions have taken divergent approaches to tackle these challenges. In the EU, regulators have been proactive in overseeing AI-driven markets, with initiatives like the General Data Protection Regulation (GDPR) and the Digital Markets Act (DMA) aimed at fostering competition, preventing market abuse, and safeguarding consumer rights. Similarly, US authorities have intensified scrutiny on tech giants wielding AI, launching investigations into potential anti-competitive behavior and pursuing antitrust enforcement actions. Conversely, some Asian jurisdictions, notably China, have embraced a more lenient regulatory stance, prioritizing innovation and economic growth over competition concerns. While this approach has facilitated AI development, it has also raised apprehensions about transparency, accountability, and fair competition, prompting calls for heightened regulatory oversight. The regulatory landscape surrounding AI and competition is complex and evolving, shaped by diverse legal frameworks, market dynamics, and geopolitical factors. 
While some jurisdictions prioritize stringent antitrust enforcement and consumer protection, others emphasize innovation and market liberalization. In the context of India, the approach to AI regulation within the competition framework warrants examination. Has there been a concerted effort to align with regulatory practices observed in developed nations? Should India focus on curbing anti-competitive behavior in the AI sector or adopt a more permissive approach, prioritizing innovation and economic growth? Does the recent draft digital Competition Bill proposed by the Committee on Digital Competition Law (CDCL) cover regulation of AI? These questions underscore the paramount challenge for policymakers worldwide: striking a delicate balance between fostering innovation and preserving competition as AI continues to permeate various sectors of the economy. Listen in to the BL Podcast with Dinoo Muthappa, Partner in the New Delhi office of TT&A (Talwar, Thakore & Associates), to get insights on CCI in an AI era	2024-03-20T00:00:00	1884499	BusinessLine Podcasts	Listen to all that you wish to learn about the business world, including mergers and acquisitions, economic policies, start-up companies, technology, agriculture, banking, politics, international affairs and entertainment.  Log on to: www.thehindubusinessline.com	BusinessLine	1029
92	1VisW1sx3Sb173Vlte8pde	French Regulators Slap Google with €250 Million Fine	Google has been fined €250 million by French regulators for not negotiating fair licensing deals with media companies and for using their articles to train its artificial intelligence (A.I.) algorithms without informing them. This violation of a previous agreement has sparked a larger conflict between Google and publishers regarding compensation for the use of news content in search results. The struggle is not exclusive to Google, as Meta, the owner of Facebook and Instagram, is also facing objections from media outlets for using their articles to train A.I. systems. French regulators have supported publishers' claims of unfair profiting from their content without adequate compensation. Google, though disputing the fine's proportionality, has agreed to pay it and aims to collaborate positively with publishers.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-20T00:00:00	132257	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
93	3kYDRCBjmwMSNUVLEVsFX5	Kara Swisher on China's TikTok Agenda, Media's AI Risk, and Tech Regulation	Kara Swisher holds nothing back as she shares her POV on the future of media & tech. The award-winning tech journalist discusses what Donald Trump got right about Meta, why we can't trust China with TikTok, the possibility of running for political office, and the problem with a culture that turns rich people into gods. Plus, Swisher shares other highlights from her new book, 'Burn Book: A Tech Love Story.'   Hosted by Ann Berry.  Chapters 0:00 Kara's career 3:40 POV journalism 5:25 Kara for office? 7:44 China vs. US mindset 12:18 Trump vs. Meta 13:58 TikTok's future 17:09 State of media 19:45 Strong AI in media 24:20 Future of journalism  The content of the video is for general and informational purposes only. All views presented in this show reflect the opinions of the guest and the host. You should not take a mention of any asset, be it cryptocurrency or a publicly traded security as a recommendation to buy, sell or hold that cryptocurrency or security.   Guests and hosts are not affiliated with or endorsed by Public Holdings or its subsidiaries. You should make your own financial and investment decisions or consult respective professionals. Full disclosures are in the channel description. Learn more at Public.com/disclosures.  Past performance is not a guarantee of future results. There is a possibility of loss with any investment. Historical or hypothetical performance results, if mentioned, are presented for illustrative purposes only. Do not infer or assume that any securities, sectors or markets described in the videos were or will be profitable.   Any statements of future expectations and other forward-looking statements are strictly based on the current views, opinion, or assumptions of the person presenting them, and should not be taken as an indicator of performance nor should be relied upon as an investment advice.	
2024-03-19T00:00:00	1733098	Leading Indicator	Interviews with leaders in business, finance, and technology. Hear from experts, analysts, & economists as they break down the top market headlines, dive into strategies of public companies, and uncover tech trends. A podcast by Public.com.   Public.com allows you to invest in stocks, crypto, and alternative assets.   Get started by signing up at Public.com with code PODCAST.  This content is for informational purposes and is not investment advice. Open To The Public Investing, member of FINRA and SIPC. This content is not investment advice. Investing involves risk of loss.	Public.com	16
94	7k2bKZPyNyztEV1FWFn4ff	Ethical AI - Regulation, Communication, and Our Downfall? | Johannes Lierfeld	If two people can't even talk to each other without leaving room for misunderstandings, how is it supposed to work between human and machine? Christian Krug, host of the podcast Unf*ck Your Data, discusses this with Johannes Lierfeld, CIO of ESTONTECO. As AI begins to change our everyday lives, hopes and worries about the new technology naturally grow with it. But what ethical challenges and questions does such a new technology raise? Kant's categorical imperative is not enough, in a modern society, to set the rules for an AI. The threat that AI systems could endanger our human existence is real, though not in the way most people might think: an artificial intelligence will not develop the ambition to wipe out humanity, but an operating error could well bring it about. 
Profiles: Johannes on LinkedIn: https://www.linkedin.com/in/dr-karl-johannes-lierfeld-421b39181/ Christian on LinkedIn: https://www.linkedin.com/in/christian-krug/ Unf*ck Your Data on LinkedIn: https://www.linkedin.com/company/unfck-your-data Book recommendation: Johannes recommends Our Final Invention by James Barrat. All recommendations in Melena's bookshop: https://gunzenhausen.buchhandlung.de/unfuckyourdata Where to find Unf*ck Your Data: On Spotify: https://open.spotify.com/show/6Ow7ySMbgnir27etMYkpxT?si=dc0fd2b3c6454bfa On iTunes: https://podcasts.apple.com/de/podcast/unf-ck-your-data/id1673832019 On Google: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5jYXB0aXZhdGUuZm0vdW5mY2steW91ci1kYXRhLw?ep=14 On Deezer: https://deezer.page.link/FnT5kRSjf2k54iib6 Contact: e-mail: christian@uyd-podcast.com	2024-03-19T00:00:00	2512848	Unf*ck Your Data	Unfuck your data: your machete in the data jungle. Data permeates ever more of us and our lives, both privately and professionally. For companies, it is turning from a competitive advantage into a matter of survival. Whoever fails to use their data effectively goes under. It is high time to get the data chaos under control. On Unfuck Your Data, Christian Krug talks with data professionals every Thursday about how you can bring order to the data chaos and get the most out of it: for your company, your customers, and of course for yourself!	Christian Krug	68
95	73ppaSpRGHYh2ta5OhPg0r	AI and Risk: Navigating New Regulations with Brad Hibbert	"Navigating the complex landscape of AI and new regulations in risk management has never been more critical. In this episode, we dive into the intricacies with Brad Hibbert, COO and CSO of Prevalent.net, a leader in third-party risk management. Brad brings unparalleled expertise, discussing the commercial, reputational, and compliance risks AI poses and the strategic importance of developing a robust AI policy for safeguarding against these risks. Whether it's enhancing your organization's protective measures or understanding the global regulatory framework, this dialogue is packed with insights that every risk management professional and enthusiast must hear. If you're keen on staying ahead in the rapidly evolving field of risk management, cyber security, and sustainability, you won't want to miss Brad's analytical deep-dive into AI regulations and their impact on third-party risk strategies. For those looking to contribute to the conversation or share their expertise, we invite you to be our guest. Send your email with the subject ""Podcast Guest Suggestion"" to info@globalriskconsult.com, and let's explore the future of risk management together. "	2024-03-18T00:00:00	1727936	Risk Management Show	On The Risk Management show, we discuss the experiences and ideas behind what's working in Risk Management right now in our risky world. You'll hear interviews with risk managers, CEOs and thought leaders who have compelling stories to share, new insights and just conversations with top professionals operating in the trenches to help you do your work better.  Episodes will feature topics such as: risk management, cyber security, credit risk, market risk, governance, fintech, regtech, risk and compliance, AML, fraud and other related topics.	Global Risk Community	100
96	41cfdZ2IZuhXe9gkwOne86	The Threat of AI Regulation with Brian Chau	Brian Chau writes and hosts a podcast at the From the New World Substack, and recently established a new think tank, the Alliance for the Future. He joins the podcast to discuss why he's not worried about the alignment problem, where he disagrees with doomers, the accomplishments of ChatGPT versus DALL-E, the dangers of regulating AI until progress comes to a halt in the way it did with nuclear power, and more. With his background in computer science, Brian takes issue with many of those who write on this topic, arguing that they think in terms of flawed analogies and know little about the underlying technology. The conversation touches on a previous CSPI discussion with Leopold Aschenbrenner, and the value of continuing to work on alignment. Brian's view is that AI doomers are making people needlessly pessimistic. He believes that this technology has the potential to do great things for humanity, particularly when it comes to areas like software development and biotech. But the post-World War II era has seen many examples of government hindering progress, and AFF is dedicated to stopping that from happening with artificial intelligence. Listen to the conversation here, or watch the video here. Links: Donate to AFF. AFF manifesto. Brian on diminishing returns to machine learning, and discussing AI with Marc Andreessen. Vaswani et al. on transformers. Limits of current machine learning techniques. Get full access to Center for the Study of Partisanship and Ideology at www.cspicenter.com/subscribe	2024-03-18T00:00:00	4369110	CSPI Podcast	Discussions with CSPI scholars and leading thinkers in science, technology, and politics. www.cspicenter.com	CSPI	68
97	7EjKiMn9HDu3yFzdJ3sX0D	States Take on AI Bias with New Legislation	"Lawmakers in seven states are proposing legislation to address potential bias in artificial intelligence (AI) systems. Currently, there is little government oversight in place to regulate AI bias. The proposed bills aim to require companies to conduct ""impact assessments"" to evaluate how AI contributes to decision-making and analyze the risk of discrimination. However, there are debates over the sufficiency of impact assessments and concerns about transparency. The legislation faces challenges, as similar bills in Washington state and California have faltered or not passed. Regulating AI bias is crucial as AI systems have been found to favor certain races, genders, or incomes, raising concerns about discrimination.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message"	2024-03-16T00:00:00	188525	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
98	3h2PkIHaDRdStDciv0U2w9	EU Parliament Backs AI Act to Regulate Tech	"The European Union's proposed AI act has been endorsed by the European Parliament and is on track to become law. The act aims to regulate artificial intelligence technology and addresses concerns about its use. It prohibits certain AI systems that pose an ""unacceptable risk,"" but includes exemptions for military, defense, and national security contexts. The legislation introduces a category for ""high-risk"" systems used in critical areas, subject to strict requirements. EU citizens have the right to request explanations about AI system decisions affecting them. Generative AI systems, deepfakes, and chatbots providing public information are also covered. Companies violating the regulations face fines, and a European AI office will be established.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message"	2024-03-16T00:00:00	233952	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
99	5naUElTnHRQ4DUtVreLu0W	Trudeau Doubles Down on Carbon Tax, Tate Gets Arrested, and AI Regulation | Blendr Report EP33	"In this episode of ""The Blendr Report,"" Jonathan and Liam discuss:  0:00 Intro 0:46 Carbon Tax Madness 10:29 Andrew Tate Goes to Jail 19:59 American TikTok Ban? 34:03 EU Passes AI Regulation  Sign up for raw and uncensored news at www.blendrnews.com  Follow BLENDR News: Twitter - @BlendrNews Instagram - @blendr.report TikTok - @blendrnews  Follow Jonathan: Instagram - @itsjonathanharvey TikTok - @itsjonathanharvey  Follow Liam: Instagram - @liam.out.loud Twitter - @liam_out_loud YouTube - @liam-out-loud   Subscribe to the BLENDR News Podcast on Spotify, Apple Podcast or Google Podcast | open.spotify.com/show/54wJHHTrDE3FgFqBUIFrIq?si=3fe4244965d84a31"	2024-03-15T00:00:00	3107787	The Blendr Report	Where news meets rational thinking.	Jonathan Harvey and Liam DeBoer	50
100	5NPuUCBvZa1xMsxzFmd2Ok	India walks back AI regulations, SpaceX 'dishonest' employees stock, SBF may serve 50 years	Today's episode sheds light on the complex interplay between technology, regulation, and ethics. India's decision to reconsider its AI regulatory approach speaks to the global challenge of navigating the rapid advancement of AI technologies while safeguarding public interests. Meanwhile, the scrutiny of SpaceX employees and SBF's legal woes remind us of the importance of integrity and accountability in the tech and finance sectors. For more on these stories: India walks back AI regulations; SpaceX 'dishonest' employees stock; SBF may serve 50 years. Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. Available on iOS and Android. Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn	2024-03-15T00:00:00	264097	Discover Daily by Perplexity	Discover Daily by Perplexity is your bite-sized briefing on the latest developments in tech, science, and culture. In a few minutes, each episode curates fascinating stories and insights from Perplexity's Discover feed to enrich your day with knowledge and spark your curiosity. From AI breakthroughs to space exploration, leadership shakeups at tech giants to the societal impact of advancing technologies, Discover Daily keeps you informed on the trends and ideas shaping our future. Leveraging Perplexity's powerful search and ElevenLabs' lifelike AI voice, the podcast transforms how you absorb information on the go. Hosted by Perplexity and featuring an unbiased, thought-provoking perspective, Discover Daily has quickly become a top tech podcast. 
With new episodes released daily, it's the perfect way to stay plugged into innovation and expand your mind, one captivating story at a time. Subscribe now on your favorite platform. Download our free mobile app at https://pplx.ai/download	Perplexity	99
101	3IYqop5OWX51p3Im2p7DLk	FCC Bans AI-Generated Robocalls After 2020 Election Impersonation	The Federal Communications Commission (FCC) has unanimously ruled to ban robocalls that use artificial intelligence (AI)-generated voices. The ruling is in response to AI-generated robocalls impersonating President Joe Biden during New Hampshire's primary election and discouraging people from voting. Under the existing Telephone Consumer Protection Act, the FCC now has the power to fine companies using AI voices in calls or block service providers that carry them. Recipients can also file lawsuits, and state attorneys general can take action against violators. However, experts warn that personalized spam targeting individuals through phone calls, text messages, and social media may still occur. Some call for clearer identification of AI-generated content to detect abuse of voice technology.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-15T00:00:00	330187	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
102	3MTlaO4CDAJQSodQQlQxms	Haiti Political Turmoil, AI Regulations in EU, AI Ethics in Advertising	Haiti is experiencing instability due to resignations and power struggles, while the European Union has introduced the AI Act to regulate AI usage. In another story, Under Armour is facing criticism for allegedly using others' work without credit in an AI-powered sports commercial, raising ethical questions in the advertising industry. Additionally, investors are showing interest in robotics startups focusing on generative AI and humanoid robots like Agility and Digit, with a shift towards acceptance of humanoids augmenting rather than replacing workers in factories.	2024-03-15T00:00:00	900443	Manish's News	This podcast delves into the intersection of science, politics, technology, and business & finance. From dissecting global news to discussing elections and international relations, the show covers a wide array of topics. Dive into the world of AI, virtual reality, and cloud computing, while also exploring political scandals, tech startups, finance trends, and ethical considerations in artificial intelligence. Whether it's dissecting legislation or analyzing funding rounds in the startup world, this podcast offers a comprehensive look at key issues shaping our modern world.	Manish's PocketPod	27
103	4XZyRBZSr7qjj6PBllHqeT	AI Disinformation Threats, Humanoid Robot Advancements, EU's AI Regulations	Artificial intelligence is transforming the landscape of election disinformation globally, enabling the creation of deceptive content to sway voters and posing challenges to political trust. Additionally, Figure, in partnership with OpenAI, presents a humanoid robot showcasing impressive abilities without remote control, surpassing Tesla's robot showcases. Furthermore, European Union legislators have passed the Artificial Intelligence Act to regulate AI according to risk levels over a two-year period, with penalties reaching up to 7% of annual revenue for non-compliance.	2024-03-15T00:00:00	903621	Shauvik's News	This podcast delves into the intersection of cutting-edge technology, global affairs, and business dynamics. From dissecting the latest advancements in AI like Large Language Models to exploring the impact of Big Tech on society, each episode navigates through the realms of science, politics, and finance. Stay tuned for insightful discussions on Cloud Computing, Open Source AI, and Mergers & Acquisitions while gaining valuable insights into navigating the world of startups and business strategy amidst rapidly evolving world events.	Shauvik's PocketPod	27
104	3H9Lo4k98JD7sfh8hJ1IbJ	AI deepfake creators pose threat to elections	"Artificial intelligence (AI) is being used to create and spread disinformation during elections, posing a significant threat to democratic processes worldwide. With the help of AI technologies, anyone can now create convincing ""deepfakes"" with a simple text prompt. Governments and organizations have started responding to this threat, with the Federal Communications Commission outlawing AI-generated robocalls aimed at discouraging voters in the US, and major tech companies signing an agreement to prevent the use of AI in disrupting elections globally. However, identifying those responsible for creating AI deepfakes is difficult, and governments and companies are struggling to keep up with the deluge of fake content. Efforts to regulate AI deepfakes must also be careful to avoid unintended consequences and protect free speech.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message"	2024-03-15T00:00:00	310857	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
105	2VjRTE19UPKPvEuwUWrAkD	California businesses facing strict AI regulations to protect privacy	California is advancing regulations on the use of artificial intelligence (AI) by businesses, with the aim of protecting personal privacy. The rules, which apply to companies with annual revenues exceeding $25 million or processing data from over 100,000 Californians, require businesses to notify individuals before using AI and prohibit discrimination against those who opt out of interacting with AI models. The regulations also demand risk assessments by employers or contractors to evaluate the performance of AI technology. However, organizations have voiced concerns about potential loopholes in the rules that could allow companies to evade accountability. The final vote on the rules is expected to take place in about a year.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-03-14T00:00:00	214125	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
106	0GpUUD7MDYZerMVVhbCWDn	AI Regulation in Focus: OpenAI's Sora, EU's Landmark AI Act, and Trump's Misinformation	OpenAI's Sora, an AI video generator, is scheduled for release in 2024 with potential applications across various industries despite worries about deepfake misuse. The European Union's parliament has passed groundbreaking regulations on artificial intelligence, aiming to categorize AI technologies by risk and enforce rules by 2025 to address issues like competition and the abuse of AI, including deepfakes. Additionally, former President Trump wrongly claims that Democrats utilized AI to manipulate videos during a House Judiciary Committee session, highlighting the difficulty of verifying content accuracy as technology progresses.	2024-03-14T00:00:00	931337	Taylor's News	This podcast delves into the intersection of politics, science, business, and global news, exploring topics like international relations, government policies, and global conflicts. From scandals to economic policy and from healthcare to technology advancements like AI and electric vehicles, each episode provides insights into world events and the impact of big tech on our lives. Dive into discussions on venture capital, startups, mergers & acquisitions, and the ever-evolving landscape of the global economy while also exploring the latest gadgets and innovations in virtual reality and the Internet of Things.	Taylor's PocketPod	20
107	2zPoLoWVjBE9p1fHQ8qI2u	"AI-generated ""deepfakes"" threaten integrity of elections worldwide"	"Artificial intelligence (AI) is increasingly posing a threat to elections worldwide by enabling the creation of convincing fake content aimed at deceiving voters. Previously, creating phony photos, videos, or audio clips required technical skills and resources, but with the advent of free or low-cost generative AI services, anyone with a smartphone and a text prompt can create high-quality ""deepfakes"". These deepfakes have already been used in Europe and Asia, manipulating candidates' images and eroding trust in the electoral process. However, efforts are being made to tackle this issue, such as the European Union mandating special labeling of deepfakes and tech companies pledging to prevent AI disruption in elections.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message"	2024-03-14T00:00:00	244741	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
108	4OZVrTFmPdAOFDPTsSJyvB	A New AI Safety Report (and Why the Media Loves the AI Extinction Narrative)	US-funded report sparks media frenzy, advocating for drastic AI measures. Unpacking the actual content versus sensational media narratives on potential AI threats, highlighting the gap between report recommendations and media portrayal.  ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe   Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown   Join the community: bit.ly/aibreakdown   Learn more: http://breakdown.network/	2024-03-13T00:00:00	805485	The AI Daily Brief (Formerly The AI Breakdown): Artificial Intelligence News and Analysis	A daily news analysis show on all things artificial intelligence. NLW looks at AI from multiple angles, from the explosion of creativity brought on by new tools like Midjourney and ChatGPT to the potential disruptions to work and industries as we know them to the great philosophical, ethical and practical questions of advanced general intelligence, alignment and x-risk.	Nathaniel Whittemore	385
109	0gNDvMkgnGuASVlOMYeGcQ	The ChatGPT Dilemma: EU's Struggle with AI Regulation	In this episode, we delve into the dilemma faced by the EU in regulating AI amidst ChatGPT's disruptive influence, exploring the complexities and controversies surrounding efforts to establish effective oversight mechanisms.  Get on the AI Box Waitlist: https://AIBox.ai/ Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ Follow me on Twitter: https://twitter.com/jaeden_ai  	2024-03-13T00:00:00	481999	Smart Talks AI	"""Smart Talks AI"" is a cutting-edge podcast dedicated to exploring the fascinating world of artificial intelligence and its impact across various industries. Each episode delves into current topics and news in AI, offering insights into the latest developments, trends, and innovations. The show brings together ideas, theories, and discussions from the forefront of AI technology, making complex concepts accessible to a broad audience. Whether you're an AI enthusiast, a professional in the field, or simply curious about the future of technology, ""Smart Talks AI"" is your go-to source for all things"	Smart Talks AI	102
110	3RQJWpEPHbNzA9nzrIrADN	We Need AI Regulations	Can regulations curb the ethically disastrous tendencies of AI?  David Evan Harris is Chancellor's Public Scholar at UC Berkeley and faculty member at the Haas School of Business, where he teaches courses including AI Ethics for Leaders; Social Movements & Social Media; Civic Technology; and Scenario Planning & Futures Thinking. He is also a Senior Fellow at the Centre for International Governance Innovation; a Senior Research Fellow at the International Computer Science Institute; Visiting Fellow at the Integrity Institute; a Senior Advisor for AI Ethics at the Psychology of Technology Institute.  He previously worked as a Research Manager at Meta (formerly Facebook) on the Responsible AI, Civic Integrity and Social Impact teams. Before that, he worked as a Research Director at the Institute for the Future. He was named to Business Insider's AI 100 list for his work on AI governance, fairness and misinformation. He has published a book and numerous articles in outlets including The Guardian, BBC, Tech Policy Press and Adbusters. He has been interviewed and quoted by CNN, BBC, AP, Bloomberg, The Atlantic, and given dozens of talks around the world.	2024-03-07T00:00:00	2822186	Ethical Machines	I talk with the smartest people I can find working or researching anywhere near the intersection of emerging technologies and their ethical impacts.  From AI to social media to quantum computers and blockchain. From hallucinating chatbots to AI judges to who gets control over decentralized applications. If it's coming down the tech pipeline (or it's here already), we'll pick it apart, figure out its implications, and break down what we should do about it.	Reid Blackman	38
111	1eMdzHwb805HNF4D43tIaJ	US AI Regulation and Standards	A point-in-time look at AI regulation and standards in the US through a corporate lens. We cover the need for regulation (including deep-fakes and bias), US federal and state activity, the EU AI Act, US and international standards and the potential for further worker-led protections following SAG-AFTRA. We wrap up discussing the implications for enterprise tech strategy.   Timestamps [00m:43s] Why regulate? [03m:17s] US Federal Regulation & the Executive Order on AI [05m:50s] The AI Bill of Rights & State-level Legislation [08m:00s] Deep fake threats [13m:38s] EU AI Act [14m:54s] AI Standards [17m:00s] Regulated Industries [19m:40s] SAG-AFTRA & workers' rights [21m:50s] Implications	2024-03-06T00:00:00	1429215	Enterprise Technology Strategy	In this podcast series we aim to cover topics that provide the big picture for people involved with Enterprise Technology Strategy or people interested in longer term tech investment strategies. We'll range from macro level topics like the changing world order and environmental crises to specific technology stacks including AI and quantum.   Content is structured by humans and augmented by GenAI. Voices are wholly AI generated.	Chris Allison	4
112	0RU8zLMGoNUnuBLSNr5Bik	Reddit's IPO Filing & India's AI Regulation | EP 7	In this episode, we dissect Reddit's IPO filing, diving into critical aspects outlined in their S-1 - spanning across user growth, advertising, and data licensing to AI model providers. In the second half, we shift focus to India's recent contentious policy requiring government approvals for all AI model providers and applications before launch.Follow us and join the conversation - every week, Viggy & James provide research-based deep dives on the latest news, company strategies and trends in tech.Timestamps:(00:00) Introduction(01:36) Tee up - Reddit IPO(03:52) What makes Reddit a great product(10:17) User growth outlook - how much more room exists?(13:59) Investment in their Ads product(22:58) Data Licensing to AI companies(28:35) Tee up - India's controversial AI regulation(30:36) Why this is happening now, election year, Indian political landscape & election interference risks(34:16) Who is impacted by this regulation(40:55) India's history with a licensing regime, regulatory capture risks(42:44) What is the right mechanism to regulate	2024-03-05T00:00:00	2890684	Unpacked Podcast - Weekly Tech Deep Dive	Dive deeper into the world of technology with Unpacked, where every week, Viggy Balagopalakrishnan & James Watterson bring you more than just news. They dissect the layers of tech company strategies, product updates, AI developments, and policy shifts with a critical eye. Built on the belief that informed opinions are shaped by understanding the full story, Unpacked delivers research-driven insights, free from surface-level summaries and one-sided opinions.With their extensive experience in building technology products, our hosts offer listeners a unique perspective on the 'how' and 'why' behind the latest tech news. Follow Unpacked for a no-frills, nuanced conversation on the most pressing tech topics of our time.	Viggy Balagopalakrishnan, James Watterson	9
113	7DmoaKQrzzPSBaPhDpSu2C	Blur in the Lines: The 32% Conundrum of Distinguishing AI from Human, and the New AI Labeling Regulations Unveiled by US and EU	In this episode, we explore the challenges of discerning between AI and human interactions, as revealed by a surprising 32% statistic. Additionally, we delve into the implications of the recently announced AI labeling regulations jointly declared by the United States and the European Union.     Invest in AI Box: https://Republic.com/ai-box   Get on the AI Box Waitlist: https://AIBox.ai/   AI Facebook Community   Learn more about LLMs    Learn more about AI      	2024-03-03T00:00:00	706629	The Joe Rogan Experience of AI	"""The Joe Rogan Experience of AI"" delves deep into the world of artificial intelligence, mirroring the style of the legendary Joe Rogan. Join us as we engage with experts, discuss the latest AI news, and explore the fascinating intersection of technology and human experience, all in the same conversational and insightful format as Joe Rogan."	The Joe Rogan Experience of AI	451
114	1KaeKc6OAdEirBfvcR6yTN	AI Regulation and Ethics: Major Updates	Could AI regulation be the most crucial conversation of our decade? Explore this paramount issue with us as we dissect the challenging landscape of ethical AI, guided by the sharp insights of tech visionaries like Elon Musk and Samuel Altman. Together, we unravel the complexities of AI's ethical and social implications, safety standards, privacy concerns, and the labyrinth of accountability and bias. Our conversation promises to arm you with an essential understanding of the pressing need for clear guidelines, as AI inexorably integrates itself into the fabric of our global society. In our latest episode, we journey through the world's AI policy battlegrounds, where the EU, the U.S., and China are crafting the blueprints that will define the future of AI governance. With the European Union potentially setting the tone with its risk-based categorization, the U.S. balancing innovation with self-regulation, and China offering an alternative viewpoint on tech control, we provide a comprehensive analysis of the global AI regulatory environment. Recognizing the imperative for individuals and companies to stay informed and agile, we discuss the importance of ethical AI initiatives and share stories, like that of the student caught off guard by Grammarly's repercussions, to illustrate the transformative power of AI literacy. Join us to ensure you're equipped for the AI revolution reshaping our professional and personal worlds.	2024-03-01T00:00:00	1854893	Automate Innovate	If you can never get enough of business automation... Congratulations, you've found your people. Dive in with Alex Astafyev and Kendall E. Matthews, to hear the trends, tech tips, and visionary influencers bringing innovation to automation.	Automate Innovate	19
115	72pbMkMeYLQHtgdhHk7OR4	AI: risks, rewards and regulations	"We look at whether artificial intelligence can outperform human analysis of big data, what it might bring to the field of weather forecasting, and what policymakers are doing to regulate this rapidly developing technology. We're joined by Mark McDonald, Head of Data Science and Analytics, Yaryna Kobel, Corporate Governance Analyst, and Amy Tyler, ESG Analyst.For more content from HSBC Global Research, follow us on LinkedIn: #HSBCResearch. And don't forget to follow our Asia-centric podcast ""Under the Banyan Tree"" on Apple Podcasts or Spotify or wherever you get your podcasts.Email us at AskResearch@hsbc.com for any questions.Disclosures and Disclaimers: https://www.research.hsbc.com/R/61/NPdRDRP Hosted on Acast. See acast.com/privacy for more information."	2024-03-01T00:00:00	909923	The Macro Brief by HSBC Global Research	Listen to our weekly updates on the key macro issues influencing developed and emerging markets around the world. Our economists strategists and analysts share their insights, views and ideas to help guide your outlook. Subscribe now.  For more from HSBC Global Research, email us at askresearch@hsbc.com Stay connected and access free to view reports and videos from HSBC Global Research follow us on LinkedIn https://www.linkedin.com/feed/hashtag/hsbcresearch/or click here: https://www.gbm.hsbc.com/insights/global-research. Hosted on Acast. See acast.com/privacy for more information.	HSBC Global Research 	49
116	5G9hZWyQbEDm9OK3Sw0gvI	Disentangling Perception: AI Labeling Regulations Explored	In this episode, we disentangle the complexities of AI perception, with 32% unable to discern AI from humans, and explore the implications of the US and EU's coordinated approach to implementing AI labeling regulations.    Invest in AI Box: https://Republic.com/ai-box   Get on the AI Box Waitlist: https://AIBox.ai/    AI Facebook Community    Learn more about AI in Music    Learn more about AI Models       	2024-02-29T00:00:00	669570	AI for Humanity	"""AI for Humanity"" is a thought-provoking podcast dedicated to exploring the fascinating world of artificial intelligence and its impact on our daily lives. Each episode delves into a range of topics and current news, offering insights into how AI is shaping our future. From technological advancements to ethical dilemmas, the podcast provides an engaging platform for listeners to understand AI's role in society. Join us as we navigate the ever-evolving landscape of artificial intelligence."	AI for Humanity	340
117	0kwUblqZvZaRRwWKtjAi6n	The Deception Dilemma: Examining AI Labeling Regulations	In this episode, we examine the deception dilemma posed by AI perception challenges, with 32% unable to distinguish, and assess the significance of the US and EU's collaborative efforts in introducing AI labeling regulations.    Invest in AI Box: https://Republic.com/ai-box   Get on the AI Box Waitlist: https://AIBox.ai/    AI Facebook Community    Learn more about AI in Music    Learn more about AI Models       	2024-02-29T00:00:00	669570	Your Undivided AI Attention	"""Your Undivided AI Attention"" is a captivating podcast dedicated to exploring the rapidly evolving world of artificial intelligence. Each episode delves into the latest developments, breakthroughs, and ethical considerations surrounding AI, providing listeners with in-depth insights and analyses. Our team covers a diverse range of topics, from advancements in machine learning to the societal impacts of AI technologies. Stay informed and engaged with the most current AI news and discussions, all in one place."	Your Undivided AI Attention	439
118	0P1urZrCDJwHmq4LH9XjoV	Dispelling Myths: AI Labeling Regulations and Perception Realities	In this episode, we dispel myths surrounding AI perception challenges, with 32% unable to discern AI, and explore the implications of the US and EU's collaborative approach to implementing AI labeling regulations.    Invest in AI Box: https://Republic.com/ai-box   Get on the AI Box Waitlist: https://AIBox.ai/    AI Facebook Community    Learn more about AI in Music    Learn more about AI Models       	2024-02-29T00:00:00	669570	Hugging Face	"""Hugging Face"" is an engaging podcast dedicated to exploring the latest trends and developments in the world of technology and artificial intelligence. Each episode delves into a variety of topics, ranging from machine learning advancements to the ethical implications of AI, providing listeners with insightful analysis and up-to-date news. The show offers a platform for lively discussions and in-depth coverage of the most pressing issues in the tech industry today. Tune in to ""Hugging Face"" for a thought-provoking journey into the future of technology."	Hugging Face	360
119	58Cx4ENCcNUgMwRIAyN5RE	Illuminating Blind Spots: AI Labeling Regulations Revealed	In this episode, we illuminate blind spots in AI perception, including the 32% unable to distinguish, and discuss the implications of the US and EU's collaborative efforts in introducing AI labeling regulations.    Invest in AI Box: https://Republic.com/ai-box   Get on the AI Box Waitlist: https://AIBox.ai/    AI Facebook Community    Learn more about AI in Music    Learn more about AI Models       	2024-02-29T00:00:00	669570	The World and Everything In It AI	"""The World and Everything In It AI"" is a cutting-edge podcast dedicated to exploring the latest developments and topics in artificial intelligence. Each episode dives deep into current trends, breakthroughs, and the impact of AI on various industries and society. Our knowledgeable hosts engage in thought-provoking discussions, breaking down complex concepts into accessible insights for a wide range of listeners. Stay informed and join us as we navigate the evolving landscape of AI, from ethical considerations to technological advancements."	The World and Everything In It AI	403
120	6up2CXPBh2czdfdZ5eNz7P	Warning from European VCs and CEOs: AI Faces Over-Regulation Threat	Dive into the concerns raised by European VCs and CEOs as they sound the alarm on the looming threat of over-regulation in the AI sector. Explore the implications for innovation and the evolving landscape of artificial intelligence in the European business sphere. Get on the AI Box Waitlist: https://AIBox.ai/ Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ Follow me on Twitter: https://twitter.com/jaeden_ai	2024-02-19T00:00:00	582635	Artificial Intelligence: AI News, ChatGPT, OpenAI, LLM, Anthropic, Claude, Google AI	Artificial Intelligence is a podcast exploring the world of AI and its impact on society. Join us as we delve into the latest developments, breakthroughs, and controversies in the field. Each episode, we'll hear from experts, practitioners, and thinkers who are shaping the future of AI. Get a better understanding of how this rapidly growing technology is changing the way we live and work, and what its future might hold. Tune in to Artificial Intelligence, where technology meets humanity.	Jaeden Schafer	518
121	4V4o7zczfRwu6ZuhsemfRU	Euro Tech Visionaries on Alert: AI Over-Regulation in the Spotlight	In this episode, we shine a spotlight on the concerns of European VCs and CEOs, who are vigilant about the specter of AI over-regulation. Join the dialogue as we navigate the evolving landscape of AI governance and its implications for the European tech industry.    Invest in AI Box: https://Republic.com/ai-box   Get on the AI Box Waitlist: https://AIBox.ai/   AI Facebook Community   Learn more about AI in Music    Learn more about AI Models  	2024-02-12T00:00:00	619694	Midjourney	"""Midjourney"" is a dynamic podcast that delves into a wide array of AI topics and the latest news, offering insightful analysis and engaging discussions. Our hosts bring their unique perspectives and deep understanding to AI, creating an informative and thought-provoking listening experience. Tune in to ""Midjourney"" to stay informed, challenged, and entertained, as we navigate through the complexities and wonders of AI."	Midjourney	359
122	5yjP35RLzjEh8NbiUYDMna	AI Over-Regulation Concerns: European VCs and CEOs Raise the Alarm	In this episode, we explore the growing concerns voiced by European VCs and CEOs about the potential over-regulation of AI, delving into the reasons behind their alarm and examining the potential impact on innovation.     Invest in AI Box: https://Republic.com/ai-box   Get on the AI Box Waitlist: https://AIBox.ai/   AI Facebook Community   Learn more about AI in Video    Learn more about Open AI      	2024-02-04T00:00:00	619694	The Joe Rogan Experience of AI	"""The Joe Rogan Experience of AI"" delves deep into the world of artificial intelligence, mirroring the style of the legendary Joe Rogan. Join us as we engage with experts, discuss the latest AI news, and explore the fascinating intersection of technology and human experience, all in the same conversational and insightful format as Joe Rogan."	The Joe Rogan Experience of AI	451
123	7GXx3jvyoyzIPGiEaunFRC	European VCs and CEOs Raise Red Flags: AI Faces Threat of Over-Regulation	Red flags are raised as European VCs and CEOs express concerns over the looming threat of over-regulation in the AI sector. Dive into the discussions surrounding the potential impact on AI innovation and the delicate balance between regulation and progress. Get on the AI Box Waitlist: https://AIBox.ai/ Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ Follow me on Twitter: https://twitter.com/jaeden_ai	2024-02-02T00:00:00	582635	ChatGPT: OpenAI, Sam Altman, AI, Joe Rogan, Artificial Intelligence, Practical AI	The ChatGPT podcast is a cutting-edge podcast exploring the intersection of artificial intelligence and technology. Each episode delves into how AI is revolutionizing the industry, from personalization to data analysis, featuring insights from leading experts and real-world case studies.	ChatGPT	492
124	2eNF2TYiLi9HxxzVIXsrmx	Thu. 02/01  A Tech Regulation Tipping Point?	Why I think yesterday's Congressional hearings might actually be a tipping point for tech regulation. What would a Kids Online Safety Act actually mean? More proof of YouTube's dominance. Celsius and FTX customers are about to get some money back. And Google's new text-to-image AI processor. Sponsors: Kolide.com/ride Links: Microsoft, X throw their weight behind KOSA, the controversial kids online safety bill (TechCrunch) Americans' Social Media Use (Pew Research Center) Celsius to Distribute $3B Crypto to Creditors as Firm Emerges From Bankruptcy (CoinDesk) Google launches an AI-powered image generator (TechCrunch) See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.	2024-02-01T00:00:00	1132643	Techmeme Ride Home	The day's tech news, every day at 5pm. From Techmeme.com, Silicon Valley's most-read news source. 15 minutes and you're up to date.	Ride Home Media	1799
125	505A8pbMWnTeMCGKHmHsXU	Thu. 02/01  A Tech Regulation Tipping Point?	Why I think yesterday's Congressional hearings might actually be a tipping point for tech regulation. What would a Kids Online Safety Act actually mean? More proof of YouTube's dominance. Celsius and FTX customers are about to get some money back. And Google's new text-to-image AI processor. Sponsors: Kolide.com/ride Links: Microsoft, X throw their weight behind KOSA, the controversial kids online safety bill (TechCrunch) Americans' Social Media Use (Pew Research Center) Celsius to Distribute $3B Crypto to Creditors as Firm Emerges From Bankruptcy (CoinDesk) Google launches an AI-powered image generator (TechCrunch) See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.	2024-02-01T00:00:00	989779	Techmeme Ride Home+ Premium Feed ()	The ad-free and bonus-episode premium feed of the Techmeme Ride Home podcast.	Techmeme	1546
126	1ta60IK9pGsqzgt2uJMueA	Innovation at Crossroads: AI Public Opinion Survey Signals Distrust, Paves the Way for Regulation	Witness innovation at crossroads as an AI public opinion survey signals widespread distrust, paving the way for regulatory measures. Join this episode for a comprehensive exploration of the survey's impact on innovation, the dynamics of distrust, and the evolving landscape of AI governance.  #InnovationCrossroads #AISurveyImpact  Get on the AI Box Waitlist: https://AIBox.ai/ Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ Follow me on Twitter: https://twitter.com/jaeden_ai   	2024-01-30T00:00:00	685127	UiPath Daily	UiPath Daily is a cutting-edge podcast that explores the dynamic world of artificial intelligence and its transformative impact on business leadership and management strategies. Each episode dives into the latest AI trends, offers insights from industry experts, and provides practical advice for managers looking to integrate AI into their decision-making processes.	UiPath Daily	479
127	0rcxtf7Y1kTmbkY9a2SGgB	AI Poll Insights: Unveiling Public Distrust in Tech Firms and the Call for Regulation	In this episode, we unveil the insights gathered from an AI poll, highlighting the widespread distrust of tech firms among the public and the resonating call for regulatory measures.   Invest in AI Box: https://Republic.com/ai-box   Get on the AI Box Waitlist: https://AIBox.ai/    AI Facebook Community    Learn About ChatGPT    Learn About AI at Tesla  	2024-01-21T00:00:00	722187	Runway AI	"""Runway AI"" is a dynamic podcast that delves into the latest developments and trends in artificial intelligence and technology. Each episode features in-depth discussions on a wide range of topics, from cutting-edge AI research to the impact of technology on society. Our team explores the latest news, breakthroughs, and applications of AI, providing listeners with insightful and thought-provoking content. Join us on ""Runway AI"" to stay informed and engaged with the ever-evolving world of artificial intelligence."	Runway AI	361
128	3JWrqS6ZbU8oS4naKp1rT6	The Regulation of Artificial Intelligence	New technologies can have, and are already having, a profound impact on our society. However, they can also be used for malicious ends or have unintended negative consequences. In this context, AI technologies embody this duality perhaps more than any other emerging technology today. Thus, any exploration of the use of AI technologies must always go hand in hand with efforts to avoid any form of legal abuse. In this podcast, Abdessalam Jaldi, Senior International Relations Specialist at the PCNS, engages in a conversation with Rene Cummings, AI ethicist at The School of Data Science - University of Virginia.	2024-01-19T00:00:00	998608	Policy Center for the New South Podcasts	The Policy Center for the New South, formerly OCP Policy Center, is a Moroccan think tank with the goal of bridging the gap between policy making and research.	Policy Center for the New South	373
129	53moAC1aDakUcYTYKP592E	Fear and Risks of Artificial Intelligence in Finance, Law, and Beyond	Growing anxiety about AI spreads to finance and law. Organizations like FINRA and the World Economic Forum raise alarms about AI's risks, including misinformation and propaganda. Concerns include biased financial decisions, market meltdowns, privacy invasion, and overreliance on AI. Transparency and stress testing are lacking. OpenAI's GPT-3.5 launch intensifies the debate. Policymakers grapple with AI's societal role. Smaller companies accuse major players of triggering regulations.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2024-01-17T00:00:00	254846	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
130	2I0fodpdO0JUprJU58rxXU	Delving Into AI Ethics, Safety and Global Regulations with Stuart Russell	On this episode, I'm delighted to be joined by a leading mind in AI, Stuart Russell, Professor of Computer Science at UC Berkeley; Former Chair of the Electrical Engineering and Computer Science Program at UC Berkeley; Holder of the Smith-Zadeh Chair in Engineering; Director of the Center for Human-Compatible AI; Author of Artificial Intelligence: A Modern Approach, which is currently part of the curriculum in 1,500 universities in 135 countries and translated into 20 languages. Our conversation ventures into the depths of AI's potential, its impact on society and the critical role of legislation in shaping a safe and prosperous AI-powered future. Key Takeaways: (00:56) Introduction of Professor Stuart Russell and his significant contributions to AI. (02:22) Analysis of the Biden Executive Order on AI and its limitations. (03:49) Evolution and current status of the EU AI Act. (07:31) The paradox of open-source AI in regulatory contexts. (08:31) The challenge of controlling AI systems that are more powerful than humans. (13:08) The necessity of proactive safety measures in AI development. (15:12) The potential risks and concerns around AI agents. (17:02) Balancing innovation and regulation in AI. (19:20) Adapting AI legislation to technological advancements. (21:49) The need for a dedicated regulatory agency for AI. (26:08) Global collaboration on AI safety and national security. (30:33) Public perception and education on AI safety. (34:23) The role of AI in national security and ethical concerns. (37:04) The impact of AI and deepfakes on the 2024 elections. Resources Mentioned: Stuart Russell - https://www.linkedin.com/in/stuartjonathanrussell/ President Biden's Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/ EU AI Act - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation. #AIRegulation #AISafety #AIStandard	2024-01-16T00:00:00	2300891	Regulating AI: Innovate Responsibly	Welcome to the Regulating AI: Innovate Responsibly podcast with host and AI regulation expert Sanjay Puri. Sanjay is a pivotal leader at the intersection of technology, policy and entrepreneurship and explores the intricate landscape of artificial intelligence governance on this podcast. You can expect thought-provoking conversations with global leaders as they tackle the challenge of regulating AI without stifling innovation. With diverse perspectives from industry giants, government officials and civil liberty proponents, each episode explores key questions and actionable steps for creating a balanced AI-driven world. Don't miss this essential guide to the future of AI governance, with a fresh episode available every week!	Sanjay Puri	37
131	7KHJ9bPFfGEiIFOsSbP98v	32% Caught in the Illusion: AI and Human Boundaries Blur - US and EU Roll Out AI Labeling Regulations	Peer into the illusionary space where 32% find it challenging to discern AI from humans. Join us on a journey as the US and EU introduce groundbreaking AI labeling regulations, demystifying the world of artificial intelligence. Get on the AI Box Waitlist: https://AIBox.ai/ Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ Follow me on Twitter: https://twitter.com/jaeden_ai	2024-01-13T00:00:00	669570	UiPath Daily	UiPath Daily is a cutting-edge podcast that explores the dynamic world of artificial intelligence and its transformative impact on business leadership and management strategies. Each episode dives into the latest AI trends, offers insights from industry experts, and provides practical advice for managers looking to integrate AI into their decision-making processes.	UiPath Daily	479
132	0iE5n8VFtseeVMpxZJ7yuV	Euro Tech Leaders Warn of AI Over-Regulation: Unpacking the Concerns	In this episode, we unpack the concerns expressed by European VCs and CEOs regarding the looming threat of over-regulation in the AI space. Join the discourse on striking the right balance between governance and fostering innovation in the European tech ecosystem.   Invest in AI Box: https://Republic.com/ai-box   Get on the AI Box Waitlist: https://AIBox.ai/    AI Facebook Community    Learn more about AI in Music    Learn more about AI Models  	2024-01-12T00:00:00	619694	Runway AI	"""Runway AI"" is a dynamic podcast that delves into the latest developments and trends in artificial intelligence and technology. Each episode features in-depth discussions on a wide range of topics, from cutting-edge AI research to the impact of technology on society. Our team explores the latest news, breakthroughs, and applications of AI, providing listeners with insightful and thought-provoking content. Join us on ""Runway AI"" to stay informed and engaged with the ever-evolving world of artificial intelligence."	Runway AI	361
133	3R7CJ4Jw0MfdIPEbtcBk2K	Evolving Approaches to Competitor and Policy Fraud with AI - with Caitlin Hodges of Amazon	Today's guest is Caitlin Hodges, Risk Manager at Amazon. Caitlin joins us on today's program to talk about the biggest challenges for eCommerce leaders when it comes to reducing policy fraud, and the unique forms their solutions can take from a data perspective. Throughout the episode, Caitlin pulls from her experience on the frontlines of developing new tools to fight competitor fraud, such as detecting AI-manufactured negative reviews for competing products. This episode is sponsored by Riskified. Learn how brands work with Emerj and other Emerj Media options at emerj.com/ad1.	2024-01-10T00:00:00	1157306	The AI in Business Podcast	"The AI in Business Podcast is for non-technical business leaders who need to find AI opportunities, align AI capabilities with strategy, and deliver ROI. Each week, Emerj Artificial Intelligence Research CEO Daniel Faggella interviews top AI executives from Fortune 500 firms and unicorn startups - to uncover trends, use-cases, and best practices for practical AI adoption. Subscribe to Emerj's weekly AI newsletter by downloading our ""Beginning with AI"" PDF guide: https://emerj.com/beg1"	Daniel Faggella	821
134	5T7lpzaR1Cz70zaaqPcr7B	AI Labeling Regulations Unveiled: 32% Struggle to Discern Human from Machine	In this episode, we unveil the challenges of discernment as 32% of individuals grapple with distinguishing AI from human interactions. Additionally, we explore the recent revelations from the US and EU introducing AI labeling regulations, examining the regulatory landscape and its potential effects on the AI perception gap.    Invest in AI Box: https://Republic.com/ai-box   Get on the AI Box Waitlist: https://AIBox.ai/    AI Facebook Community    Learn more about AI in Music    Learn more about AI Models  	2024-01-10T00:00:00	706629	Midjourney	"""Midjourney"" is a dynamic podcast that delves into a wide array of AI topics and the latest news, offering insightful analysis and engaging discussions. Our hosts bring their unique perspectives and deep understanding to AI, creating an informative and thought-provoking listening experience. Tune in to ""Midjourney"" to stay informed, challenged, and entertained, as we navigate through the complexities and wonders of AI."	Midjourney	359
135	3LDhvy5H4512hY3JYk2WBI	AI Deepfakes and Fading Social Media Safeguards: A Perfect Storm for Election Misinformation	As artificial intelligence deepfakes become increasingly accessible and social media companies shift their priorities, experts warn that the upcoming presidential election may face an unprecedented wave of misinformation, potentially distorting voters' perceptions. While the digital landscape evolves, social media platforms like X (formerly known as Twitter) continue to accommodate the spread of falsehoods, stoking fears about the integrity of future election processes. --- Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-12-31T00:00:00	235988	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
136	50ITKRcsziDx30h2vgxRpq	Ignacio Cofone: Redrawing Privacy Lines with AI Regulation	The podcast team is excited to announce the McGill Artificial Intelligence Society's 2nd podcast episode of the 2023-2024 school year, featuring Ignacio Cofone. Ignacio Cofone is a professor at McGill's Faculty of Law and is the Canada Research Chair in AI Law and Data Governance. With the rapid evolution of artificial intelligence in recent years, there are also many concerns at the forefront. This episode tackles current topics like the effects of bias in AI, the intersection between AI and privacy law, as well as AI regulations. We will examine how the transition between the current privacy regulation and the new Bill C-27 addresses these concerns and also its potential blind spots, while covering the challenges and uncertainties around Canada's first comprehensive attempt at AI regulation. The McGill AI podcast is available on Apple Podcast and Spotify.	2023-12-30T00:00:00	2698762	McGill AI Podcast	There are very few places in the world like Montreal and McGill which have such a concentration of talent in the field of AI/ML. MAIS primarily serves to build a community with a shared passion for the field, spreading knowledge and resources to help aid people trying to enter the AI ecosystem. Our podcast aims to promote the research and share the experiences of people who are making remarkable contributions to the development of AI across disciplines and to allow others to use that information to break into the field while being more aware of the challenges and opportunities. We hope to foster an accessible, holistic resource to understand how AI is evolving and continuously changing the world around us.	Alexandre Lamarche, Catherine Fontaine, Ramatoulaye Balde, Antoine Paradis	13
137	5SYlxUMrpcKKodHcTdORjo	Equality in Algorithms: Reshaping AI for the Better	"In this episode of ""A Beginner's Guide to AI,"" Professor GePhardT takes listeners on an enlightening journey into the world of biases in AI. We explore the origins of these biases, their societal and ethical implications, and the innovative strategies employed to mitigate them. From defining AI bias to examining real-world case studies, particularly in healthcare, this episode provides a comprehensive understanding of the challenges and solutions in creating fair and equitable AI systems. The discussion delves deep into fairness-aware machine learning and the critical role of data audits, offering listeners a nuanced perspective on this pressing issue in the AI community.  This podcast was generated with the help of ChatGPT and Claude 2. We do fact check with human eyes, but there still might be hallucinations in the output.  Music credit: ""Modern Situations"" by Unicorn Heads."	2023-12-30T00:00:00	755053	A Beginner's Guide to AI	"""A Beginner's Guide to AI"" makes the complex world of Artificial Intelligence accessible to all. Each episode breaks down a new AI concept into everyday language, tying it to real-world applications and featuring insights from industry experts. Ideal for novices, tech enthusiasts, and the simply curious, this podcast transforms AI learning into an engaging, digestible journey. Join us as we take the first steps into AI!  Want to get your AI going? Get in contact: dietmar@argo.berlin"	Dietmar Fischer	113
138	5fimAstOPU6l5Cj6ErMye5	Snoozefest Alert! But necessary? AI Regulation in 2024: A Balancing Act for Marketers	This is the first pilot episode of The Daily Dose of Digital podcast, sharing thoughts from the original article here. There is absolutely no doubt that 2023 was the year of AI, and in-particular Generative AI (so much so, I wrote an e-book about it which you can get here). However, as we step into 2024, the conversation around Artificial Intelligence Regulation has intensified worldwide. In this final Daily Dose of Digital for 2023, I'll share some thoughts on why I think AI Regulation will be one of the biggest talking points next year. This surge in regulatory interest stems from AI's pervasive expansion into high-stakes areas like healthcare, finance, and criminal justice, coupled with its potential for misuse in surveillance, discrimination, and even warfare. The lack of public trust, driven by concerns about AI's transparency, accountability, fairness, safety, and privacy, is prompting countries to consider or introduce stringent AI regulations. As marketers, at the heart of innovation and consumer engagement, we find ourselves at a critical junction where understanding and adapting to these regulations is not just a legal necessity but a strategic imperative.Read the full article here: https://www.linkedin.com/pulse/snoozefest-alert-necessary-ai-regulation-2024-balancing-james-gray-tdume/  Hosted on Acast. See acast.com/privacy for more information.	2023-12-30T00:00:00	560352	The Daily Dose of Digital	The Daily Dose of Digital Podcast is here to share exciting news stories, traverse trends, navigate through the noise and dive in to the dynamic world of digital marketing and tech. 
This podcast is hosted by James Gray who, with nearly 25 years' experience in marketing, over 20 of those spent with a digital focus, has witnessed firsthand the industry's evolution and has, in some way, shape or form, experienced most of the technological advancements and game-changing innovations. James has worked in both in-house and agency digital marketing leadership roles, where his focus is on partnering with clients to create impactful omni-channel digital experiences, merging creativity with technology to deliver exceptional results and drive business growth, embracing the ethos of being more than just a service provider, but a true partner in innovation. This podcast is an extension of that mission. Here, we'll explore topics like AI regulation in marketing, the transformative power of composable DXPs, and the intricacies of SEO and UX design. Each episode is a blend of headline news, analysis, practical advice, and insights from across the digital marketing space. Whether you're a seasoned marketer, a tech enthusiast, or someone curious about the digital world, this podcast is for you. We're here to stay on top of the complexities of digital, one episode at a time. So join us on this journey and let's dive into the fascinating world of digital marketing together.  Hosted on Acast. See acast.com/privacy for more information.	James Gray	5
139	0b5kghpMHoB8O1wNpivS9n	Rethinking AI Regulation in the EU: The ChatGPT Factor	In this episode, we explore how ChatGPT has forced the European Union to rethink its AI regulatory framework. We examine the challenges and opportunities this presents for future AI governance in Europe.     Invest in AI Box: https://Republic.com/ai-box   Get on the AI Box Waitlist: https://AIBox.ai/    AI Facebook Community    Learn more about LLMs    Learn more about AI      	2023-12-28T00:00:00	519057	Rich On Tech AI	"""Rich On Tech AI"" offers an engaging platform where host Marques Brownlee dives deep into the latest trends and developments in technology and digital culture. Each episode explores a variety of current topics and news, providing insightful analysis and discussions to keep listeners at the forefront of the tech world. With a focus on emerging innovations and the impact of technology on everyday life, this podcast is a must-listen for tech enthusiasts and casual listeners alike. Join Marques as he navigates the ever-evolving digital landscape."	Rich On Tech AI	321
140	5D9N3Hz75PDeprXNTfAOUt	AI Rebellion: ChatGPT's Impact on EU Regulations and the Unforeseen Consequences	In this episode, we delve into the unforeseen consequences of ChatGPT on the European Union's plans to regulate AI. Join us as we discuss the challenges posed by advanced AI models and their implications for regulatory frameworks.     Invest in AI Box: https://Republic.com/ai-box   Get on the AI Box Waitlist: https://AIBox.ai/    AI Facebook Community    Learn more about LLMs    Learn more about AI      	2023-12-28T00:00:00	519057	Morning Wire AI	"""Morning Wire AI"" delves into the latest trends and pivotal topics shaping our world today. Each episode presents a deep dive into current news, emerging technologies, and groundbreaking developments across various fields. Our engaging discussions aim to enlighten and inspire, offering listeners a thoughtful perspective on the issues that matter most. Tune in to stay informed, challenged, and connected with the pulse of the modern age."	Morning Wire AI	450
141	7gtkq1bveFyW4n9d6Yk3lt	The Rise of AI in Judicial Systems: A Double-Edged Sword	AI is increasingly incorporated into judicial systems globally, with applications from risk assessments in bail/parole decisions to predictive policing. While AI promises efficiency improvement and potential reduction in human bias, concerns around data bias, algorithmic inaccuracies, and ethical implications like potential violation of due process, lack of transparency, and systemic bias perpetuation also emerge. Hence, the establishment of unbiased, valid data, transparent algorithms, and accountability is crucial as we step into this new era of AI-driven legal proceedings.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-12-26T00:00:00	736468	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
142	0zvOiQv0NshJ5v0GHsLtK1	The Power of AI in Combating Fake News: A Game-Changer for Industries	AI applications are increasingly being trained to detect and filter out fake news, enhancing accuracy and speed in distinguishing truth from falsehood. By analyzing content structure, language usage, and data, AI can flag unreliable stories across platforms like Facebook and Twitter. Moreover, AI is utilized in politics, finance, and healthcare to prevent the circulation of misleading information. The use of Natural Language Processing (NLP), credibility assessment, and pattern detection allows AI to comprehend human language cues, evaluate news source credibility, and identify signs of fake news. Multinational corporations, news agencies, and government entities leverage these AI technologies to maintain trust and counter misinformation.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-12-24T00:00:00	585278	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
143	04LKrcDfHrq4Wcvf1oJN29	AI Regulations | Unleashing AI	The emergence of Generative AI has positioned AI as a focus for policymakers around the world. However, the regulatory path taken so far has varied. Given the stakes involved, we believe policy and governance evolution will play a defining role. Read the full report at Citi GPS. Learn more about your ad choices. Visit podcastchoices.com/adchoices	2023-12-21T00:00:00	220029	Citi GPS: Global Perspectives and Solutions	In some ways, the future is already here. On Citi GPS: Global Perspectives and Solutions, a new podcast from Citi, we'll dive into the economic trends and drivers of leading innovation to help you embrace the changes on the horizon. From new vertical farming to AI, we'll give you the insights to navigate in a fast-changing and interconnected world. Read the full reports at Citi GPS.	SpokenLayer	21
144	2DzICyhUu35NPo6367fIme	Regulation of Artificial Intelligence	We discuss AI in various settings such as the workplace, education, and relationships, share our own opinions based on research we conducted, and encourage our listeners to educate themselves through our podcast.	2023-12-21T00:00:00	1166918	Amritha Podcast	Podcasting and Storytelling 2023	DUSD-Amritha Naidu	3
145	18GdunPtBceEJVJ0lm5H1e	AI in the newsroom: Opportunities, regulation and risk	In this last episode of Media Voices in 2023 we take a big-picture look at AI, based on our recent collaboration with Media Makers Meet on their Mx3 AI conference. We hear from experts from Immediate Media, Ipsos, the News Media Association and more, about where they are placing their chips to take advantage of the fastest-moving area of media. This holistic look at AI in the newsroom has been split into two parts. In this first part we set the scene for AI and its use in publishing, as experts tell us how to prepare for internal and external changes to media businesses. The second part - coming in the new year - comprises case studies from publishers already getting their hands dirty with AI tech. Media Voices and Media Makers Meet would like to thank FT Strategies, InsurAds, Labrador CMS, Miso, Sub(x) and Zuora for sponsoring the conference.	2023-12-18T00:00:00	2356163	Media Voices Podcast	Media Voices is a weekly look at all the news and views from across the media world, featuring leading figures from media and publishing businesses. The team behind the podcast take a common-sense approach to media analysis, from the practices of journalism to deep dives into publisher business models.	Media Voices	100
146	29jlX6Jgb9HVrurUiGrV5d	Systemic Regulation of Artificial Intelligence	How should the law regulate artificial intelligence? What risks deserve our attention? An AI-narrated article, written by Yonathan Arbel, Matthew Tokson, and Albert Lin, forthcoming in the Arizona State Law Journal (2024)	2023-12-18T00:00:00	7899543	The Arbel Files -- Legal Scholarship in Podcast Form	Legal articles on a variety of topics in AI, contracts, and defamation law, written by Professor Yonathan A. Arbel, and narrated using AI tools.	Ybell	2
147	5deEmyUj7NzsoeRqoFOZTa	States Urged to Take Action Against Deepfakes in Politics as 2024 Elections Approach	Only three states enacted laws in 2023 addressing the threat of AI and deepfakes in political campaigns, despite increasing recognition of the potential dangers they pose. There are fears that the upcoming 2024 elections could be marred by disinformation and deepfake videos, leading to calls for more states to enact legal protections. However, obstacles such as reconciling potential regulations with First Amendment rights, the rapid development of AI technology, and enforcement complications have resulted in slow progress.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-12-17T00:00:00	200638	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
148	20kobDeUvZxSTjyVLGU7PQ	AI horizons: Navigating regulations, safety, and the use of AI	"In today's episode:  Following the recent AI Safety Summit hosted by British Prime Minister Rishi Sunak, Bart Hogeveen speaks with the European Union's Senior Envoy for Digital to the United States Gerard de Graaf. They discuss the EU's approach to AI regulation and how it differs from the US and other governments. They also discuss which uses of AI the EU thinks should be limited or prohibited and why, as well as provide suggestions for Australia's efforts to regulate AI.  Finally, Alex Caples speaks to Australian Federal Police (AFP) Commander Helen Schneider. They discuss the AFP and Monash University initiative 'My Pictures Matter', which uses artificial intelligence to help combat child exploitation. They also explore the importance of using an ethically sourced database to train the AI tool that is used in the project, as well as outline how people can get involved in the campaign and help end child exploitation in Australia and overseas.  Mentioned in this episode:  https://mypicturesmatter.org/  Guests: Bart Hogeveen Gerard de Graaf Alex Caples Helen Schneider  Music: ""Think Different"" by Scott Holmes, licensed with permission from the Independent Music Licensing Collective - imlcollective.uk"	2023-12-15T00:00:00	2405482	ASPI Podcast: Policy, Guns & Money	Policy, Guns & Money is produced by the Australian Strategic Policy Institute (ASPI).    ASPI is an independent, non-partisan think tank that produces expert and timely advice for strategic and defence leaders. ASPI has offices in Canberra, Australia and Washington DC, USA.	The Australian Strategic Policy Institute	258
149	266jyvfP5GFenrM5GQqPuX	Bytesize Legal Update - The EU AI Act: A Comprehensive Regulation of Artificial Intelligence	In this latest podcast Olivier Proust, a Partner in Fieldfisher's Technology and Data team based in our Brussels office, delves into the latest developments surrounding the EU AI Act. Olivier provides a comprehensive overview of the key provisions and implications of this groundbreaking legislation that aims to regulate artificial intelligence (AI) systems and their applications. Join us as we explore the classification of AI systems, the territorial scope of the AI Act, its enforcement mechanisms, and the timeline for its implementation.	2023-12-14T00:00:00	1071542	Bytesize Legal Updates | Fieldfisher	Fieldfisher are experts in European digital regulation and guide businesses through the complexities of the EU's rapidly evolving regulatory environment. Europe is one of the world's largest internal markets - with our focus on digital regulation for online platforms, social media and emerging technologies (AI, automation, AR/VR etc.) we keep you up-to-date with the EU's digital agenda, and the latest European legislation impacting the industry.	Fieldfisher	17
150	1CifWWZmHr9d7uSXE2DjwT	Exploring the Future of AI Regulation With a Congressional Insight	Navigating the complexities of AI isn't just about technology. It's about sculpting our future. In this episode, I'm joined by Congressman Jay Obernolte, representing California's 23rd district and serving as the vice-chair of the congressional AI caucus. With a rich background in AI and a keen eye for policy, Congressman Obernolte offers invaluable insights into the intricate dance of AI innovation and regulation. Key Takeaways: (02:06) Assessing President Biden's Executive Order on AI and concerns of regulatory overreach. (04:54) Exploring the Create AI Act's goal to democratize AI research across academia. (06:41) Addressing the risk of regulatory capture in the AI industry. (08:57) Evaluating the role of AI in hiring and the inherent challenges of bias. (11:05) Debating the need for a new AI regulatory structure. (14:25) Delving into the implications of open-source AI. (16:08) Highlighting the role of AI in spreading misinformation and the importance of transparency. (18:19) Emphasizing the need for diverse perspectives in shaping AI regulation. (19:44) Advocating for federal over regional or global AI regulation models. (21:42) Offering predictions on the timeline and direction of comprehensive AI legislation in Congress. Resources Mentioned: Congressman Jay Obernolte - https://www.linkedin.com/in/jayobernolte/ President Biden's Executive Order on AI - https://www.whitehouse.gov/ Create AI Act - https://www.congress.gov/ Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation. #AIRegulation #AISafety #AIStandard	2023-12-14T00:00:00	1437857	Regulating AI: Innovate Responsibly	Welcome to the Regulating AI: Innovate Responsibly podcast with host and AI regulation expert Sanjay Puri. 
Sanjay is a pivotal leader at the intersection of technology, policy and entrepreneurship and explores the intricate landscape of artificial intelligence governance on this podcast.  You can expect thought-provoking conversations with global leaders as they tackle the challenge of regulating AI without stifling innovation. With diverse perspectives from industry giants, government officials and civil liberty proponents, each episode explores key questions and actionable steps for creating a balanced AI-driven world.  Don't miss this essential guide to the future of AI governance, with a fresh episode available every week!	Sanjay Puri	37
151	0K8O5MSr3RNpahaLmNYCRW	Episode 1114 - Dec 11 - 2023 - China's Short-Sighted AI Regulation - Vina Technology at AI time	China's Short-Sighted AI Regulation Angela Huyue Zhang - Project Syndicate - 8 December, 2023 From the government to the courts, Chinese authorities have become fixated on ensuring that the country can surpass the US to become the global leader in artificial intelligence, no matter the cost. They seem not to realize just how high that cost may turn out to be. The Beijing Internet Court's ruling that content generated by artificial intelligence can be covered by copyright has caused a stir in the AI community, not least because it clashes with the stances adopted in other major jurisdictions, including the United States. In fact, that is partly the point: the ruling advances a wider effort in China to surpass the US to become a global leader in AI. Not everyone views the ruling as all that consequential. Some commentators point out that the Beijing Internet Court is a relatively low-level institution, operating within a legal system where courts are not obligated to follow precedents. But, while technically true, this interpretation misses the point, because it focuses narrowly on Chinese law, as written. In the Chinese legal context, decisions like this one both reflect and shape policy. In 2017, China's leaders set the ambitious goal of achieving global AI supremacy by 2030. But the barriers to success have proved substantial, and they continue to multiply. Over the last year or so, the US has made it increasingly difficult for China to acquire the chips it needs to develop advanced AI technologies, such as large language models, that can compete with those coming out of the US. President Joe Biden's administration further tightened those regulations in October. 
In response to this campaign, China's government has mobilised a whole-of-society effort to accelerate AI development, channeling vast investment toward the sector and limiting regulatory hurdles. In its Interim Measures for the Management of Generative Artificial Intelligence Services, which entered into effect in August, the government urged administrative authorities and courts at all levels to adopt a cautious and tolerant regulatory stance toward AI. If the Beijing Internet Court's recent ruling is any indication, the judiciary has taken that guidance to heart. After all, making it possible to copyright some AI-generated content not only directly strengthens the incentive to use AI, but also boosts the commercial value of AI products and services. Conversely, denying copyrights to AI-generated content could inadvertently encourage deceptive practices, with digital artists being tempted to misrepresent the origins of their creations. By blurring the lines between AI-generated and human-crafted works, this would pose a threat to the future development of AI foundational models, which rely heavily on training with high-quality data sourced from human-generated content. For the US, the benefits of prohibiting copyright protection for AI-generated content seem to outweigh the risks. The US Copyright Office has refused to recognise such copyrights in three cases, even if the content reflects a substantial human creative or intellectual contribution. In one case, an artist tried over 600 prompts, a considerable investment of effort and creativity, to create an AI-generated image that eventually won an award in an art competition, only to be told that the copyright would not be recognised. This reluctance is hardly unfounded. While the Beijing Internet Court ruling might align with China's AI ambitions today, it also opens a Pandora's box of legal and ethical challenges. 
[This implies that the decision has opened up a host of difficulties that were previously contained or not fully understood. In this context, the challenges likely revolve around the legal and ethical implications of AI-generated artworks, including copyright disputes and the broader question of whether creators should be compensated for the use of their AI-generated works in training other AI systems.]	2023-12-12T00:00:00	450648	Vina Technology at AI time - Công nghệ Việt Nam thời AI	Science and engineering knowledge in Vietnamese, English, and many other languages, with a special focus on issues related to Artificial Intelligence, and Natural Language Processing in particular.	Lê Quang Văn	1961
152	0PPMtH7FBkr6To85jYm7eb	AI Regulation, Rivalries, and Advancements: A Synthetic Overview for Executives	Unveiling cutting-edge developments in AI: EU regulation proposals, Google's Gemini vs OpenAI's GPT-4, Microsoft's AI ventures, Meta's novel AI features, and innovative startups in generative AI and language models are all highlighted to inform top tech decision makers on current trends and shifts in the AI landscape.	2023-12-08T00:00:00	508776	Daily Artificial Intelligence News and Trends	Latest news and updates about the rapidly changing world of AI. Created for anyone interested in current AI trends.	The Daily Dive	267
153	3jZqBnatVZuxT6rSQ5JlFs	Evening Edition: The Rise Of A.I. Bringing Calls For Regulation	From the boardroom to the battlefield, nations across the world are simultaneously trying to take the lead in artificial intelligence technology while balancing the need to put up guardrails as well.  The A.I. Summit at the Jacob Javits Center in New York City brought many of the industry's biggest players. FOX's Eben Brown spoke to The Big Money Show's Jackie DeAngelis about the emerging tech being showcased at this summit and the questions surrounding just how much to regulate AI in the future. Learn more about your ad choices. Visit megaphone.fm/adchoices	2023-12-07T00:00:00	762331	The Fox News Rundown	The FOX News Rundown is the place to find in-depth reporting on the news that impacts you. Each morning, Mike Emanuel, Dave Anthony, Lisa Brady, Jessica Rosenthal, and Chris Foster take a deep dive into the major and controversial stories of the day, tapping into the massive reporting resources of FOX News to provide a full picture of the news.  Plus, every night, The FOX News Rundown: Evening Edition brings you even more coverage of the day's biggest stories and on the weekend, you'll hear everything that's going on in the beltway with The FOX News Rundown: From Washington and special uncut, unedited interviews with The FOX News Rundown: Extra.  Each day The FOX News Rundown features insight from top newsmakers, along with FOX News reporters and contributors, plus a daily commentary on a significant issue of the day. Check us out twice a day, every day.	FOX News Radio	575
154	24B0wzQTDkdvWZS2wfB8Ta	EU's Groundbreaking AI Regulations Face Uncertainty Amid Rise of Generative AI and Big Tech Opposition	The European Union's Artificial Intelligence Act, the world's first comprehensive AI regulation, is facing pushback on the final details of governance systems that support AI services. Authorities are navigating between big tech companies advocating against stifling innovation and the necessity of safeguards for advanced AI systems. EU lawmakers' effort to extend regulations to foundation models has sparked resistance from the EU's three biggest economies, which advocate for self-regulation. Amidst this, global powers like the U.S., U.K., and China are rushing to set regulatory boundaries as the technology advances rapidly.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-12-06T00:00:00	295465	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
155	52ffELrYAp84jtURHjEZ9V	Navigating the Global Web of AI Regulation with Yasin Tokat, Policy Group Member at the Center for AI and Digital Policy	Yasin Tokat, Policy Group Member at the Center for AI and Digital Policy, joins this episode of AI, Government, and the Future by Alan Pentz to unveil the global challenges in AI regulation and how to solve them. They dive into the divergent paths of AI regulation in the US and EU, the risks of AI in cyberattacks, and the urgent need for global collaboration in AI policy and data privacy.	2023-12-06T00:00:00	1862038	AI, Government, and the Future	Welcome to AI, Government, and the Future, a podcast by Corner Alliance. We explore the intersection of artificial intelligence, government, and the future. Join us as we dive into the latest AI advancements, government policies, and innovative strategies to shape the future of our society. Whether you're a policy maker, venture capitalist, academic, or industry leader, this podcast will provide valuable insights and thought-provoking discussions to help you navigate the evolving landscape of AI and its impact on government. Tune in to AI, Government, and the Future to stay ahead in this transformative era.	Corner Alliance	38
156	11yLvxCDOl6xSlNO9rIMl1	G7's AI Agreement, EU's AI Act Struggles & GPT Store Delay	In today's episode, we discuss the G7's ground-breaking international agreement on guidelines for artificial intelligence, aimed at curbing the spread of false information and addressing the dangers of generative AI. Next, we delve into the challenges the EU is experiencing with the proposed AI Act, especially concerning generative systems like ChatGPT and the growing debate over self-regulation or strict rules for AI model makers.	2023-12-03T00:00:00	106657	AI News Today	"Stay ahead of the curve in the fast world of AI! Every day, we bring you the top 3 AI headlines that matter, offering a concise snapshot of the latest developments, breakthroughs, and trends. Whether you're an industry professional, tech enthusiast, or just curious about the future, ""AI News Today"" is your essential daily briefing. Tune in for insights in under 5 minutes and ensure you never miss a beat in the ever evolving realm of artifical intelligence, machine learning and neural networks."	Mike Russell	288
157	79MoBLTBCnEX8O2caqYJw2	The Ethical AI Dialogues: From Bias to Regulation	Ethical AI is about knowing how to do good with AI, and Responsible AI is how we do good with AI. - Mrinal. Learn more about what ethical AI encompasses, how companies are using Responsible AI to operationalize AI solutions, and how closely we need to follow AI regulations to cause no harm whatsoever. #artificialintelligence #largelanguagemodels #ethicalai #responsibleai	2023-11-29T00:00:00	1629831	AI Chronicles	Conversations about everything AI	Sonam Gupta	23
158	2SdlFkZqElGPDAdvqaOnAi	How are governments approaching AI regulation?	As we keep hearing every day, artificial intelligence is on the verge of fundamentally changing the way human beings live and work. There are also many fears about the dangers posed by AI, which range from mass disinformation and privacy risks to the extinction of the human race itself.  Amid this debate over how to regulate AI so that we are able to benefit from it while keeping it safe, governments around the world have been coming up with proposals for AI governance. The latest is the Biden administration's Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence.  What are the concerns shaping these preliminary moves toward AI regulation? Are there any fundamental principles that an AI regulatory regime needs to address? What are the potential conflicts, say, between the interests of AI researchers and ordinary citizens, when framing such laws?	2023-11-16T00:00:00	2420610	In Focus by The Hindu	A podcast from The Hindu that delves deep into current developments with subject experts, and brings in context, history, perspective and analysis.	The Hindu	839
159	4YsZhmUUzuRytqZ1r2idxF	#72: Our Hands-On Experiments with GPTs, Is AGI Coming Soon?, and New AI Wearable From Ex-Apple Veterans	Last week's episode of The Marketing AI Show delved into the recent GPT announcement and this week, we're taking it a step further with insights from our hands-on testing. Join us in episode 72 as we explore the latest capabilities of GPTs, delve into predictions about the rapid approach of AGI, and share our thoughts on the newly released AI wearable by former Apple experts, which, frankly, met our expectations. Stay tuned for an in-depth analysis and much more in this exciting episode!  00:01:49  Our Hands-on testing with GPTs 00:23:31  A new paper was released that proposes a framework for classifying AGI 00:36:30  Wearable 'Ai Pin' launched by Humane 00:44:48  Bill Gates claims AI is going to completely change how you use computers 00:47:59  The Actors Strike in Hollywood has come to an end 00:50:28  Meta to require advertisers to disclose AI content in political ads 00:54:13  Microsoft announces five steps to protect electoral processes in 2024 00:56:52  Amazon is training a new large language model, Olympus 00:59:43 Google AI features across performance max campaigns within Google Ads Meet Akkio, the generative business intelligence platform that lets agencies add AI-powered analytics and predictive modeling to their service offering. Akkio lets your customers chat with their data, create real-time visualizations, and make predictions. Just connect your data, add your logo, and embed an AI analytics service to your site or Slack. Get your free trial at akkio.com/aipod.  Listen to the full episode of the podcast: https://www.marketingaiinstitute.com/podcast-showcase Want to receive our videos faster? SUBSCRIBE to our channel! Visit our website: https://www.marketingaiinstitute.com Receive our weekly newsletter: https://www.marketingaiinstitute.com/newsletter-subscription Looking for content and resources? 
Register for a free webinar: https://www.marketingaiinstitute.com/resources#filter=.webinar Come to our next Marketing AI Conference: www.MAICON.ai Enroll in AI Academy for Marketers: https://www.marketingaiinstitute.com/academy/home  Join our community: Slack: https://www.marketingaiinstitute.com/slack-group-form LinkedIn: https://www.linkedin.com/company/mktgai Twitter: https://twitter.com/MktgAi Instagram: https://www.instagram.com/marketing.ai/ Facebook: https://www.facebook.com/marketingAIinstitute	2023-11-14T00:00:00	3918132	The Artificial Intelligence Show	The Artificial Intelligence Show (formerly The Marketing AI Show) is the podcast that helps your business grow smarter by making AI approachable and actionable.This podcast is brought to you by the creators of the Marketing AI Institute, AI Academy for Marketers, and the Marketing AI Conference (MAICON). Hosts Paul Roetzer, founder and CEO of Marketing AI Institute, and Mike Kaput, Chief Content Officer, break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join Paul and Mike on The AI Show as they work to accelerate AI literacy for all.	Paul Roetzer and Mike Kaput	100
160	2rLAHQwPV7GtBjyIy3jxBh	AI News 11/13/23: US AI Regulation and The State of AI	"In this episode of AI Equation, we dive into two compelling stories shaping the world of artificial intelligence. First, we explore President Biden's groundbreaking executive order on AI regulations, which marks a significant leap in AI governance. The order focuses on safety, security, and reliability, mandating transparency for advanced AI systems, setting strict standards for comprehensive safety evaluations, addressing AI's potential role in engineering dangerous biological materials, and combating AI-enabled fraud and deception. It also promotes equity, civil rights, and workforce impact mitigation while seeking international collaboration for trustworthy AI development. Story 2 takes us into the ever-evolving landscape of AI, with a focus on Large Language Models (LLMs) and transformers' rapid advancements. We delve into the dominance of GPT-4 and the ongoing debate surrounding governance and safety in AI. Openness and safety remain central themes, as we discuss the rise of open-source alternatives like Meta AI's LLaMa model family. We also explore AI's progress in various domains, from navigation and weather predictions to self-driving cars and music generation. The episode highlights key takeaways from the report, including the ascent of compute as the new currency and the challenges in evaluating state-of-the-art models. GenAI startups, generative AI applications, and safety concerns in the AI community are also discussed. Lastly, we feature Aidan Gomez, CEO and co-founder of Cohere, recognized as one of Time's 100 most influential people in AI. Gomez co-authored the groundbreaking ""Attention Is All You Need"" paper, which introduced the transformer and revolutionized AI. He now leads Cohere, a company empowering businesses to integrate AI into their products. 
Cohere recently secured significant funding and aims to bridge the gap between AI theory and practical implementation."	2023-11-13T00:00:00	224862	AI Equation: The Future of Content Creation	The AI Equation is a captivating podcast that delves into the symbiotic relationship between artificial intelligence (AI) and business success. Hosted by industry experts and thought leaders, this show explores the transformative impact of AI on various industries, unveiling the strategies, innovations, and success stories that have reshaped the business landscape.	AI Labs	82
161	66kRXy84lBf483fXncfgFe	Government regulation of AI has arrived	On Monday, October 30, 2023, the U.S. White House issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.  Two days later, a policy paper was issued by the U.K. government entitled The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023.  It was signed by 29 countries, including the United States and China, the global leaders in AI research. In this Fully Connected episode, Daniel and Chris parse the details and highlight key takeaways from these documents, especially the extensive and detailed executive order, which has the force of law in the United States.	2023-11-07T00:00:00	2704777	Practical AI: Machine Learning, Data Science	Making artificial intelligence practical, productive & accessible to everyone.  Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, GANs, MLOps, AIOps, LLMs & more). The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!	Changelog Media	273
162	3ESLam5ZTZw1j5ESwRBoLF	AI Regulation Goes Mainstream  LIVE!	The hosts of the #1 Futurist Podcast THE FUTURISTS are back to talk emerging Artificial Intelligence regulation from Biden's latest executive order, the UK and EU positions, and China's take on AI. We also discuss Marc Andreessen's Techno-Optimist Manifesto and why the Tech Giants aren't necessarily the best people to be defining AI regulation.	2023-11-03T00:00:00	3758053	The Futurists	#1 Global Fintech Network	Provoke.fm	110
163	4J0GW7J7iqgU8ywP8fLCMm	#21: AI Regulation, War and the Future of the American Tech Stack	"The gang shares predictions around AI for the year ahead. Are we at digital war and don't even know it? Thoughts on defining an ""American Tech Stack"" and digital citizenship to defend against foreign speech and bots - and how to think about academia in an era where universities and research are no longer disconnected from politics and business.   Discussed Greeking Out from National Geographic Kids Photoleap by Lightricks AI Photo Editing App Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence New: Leica M11-P  Show Notes [00:00:00] Opening [00:01:37] Halloween and the Greeking Out Podcast [00:03:17] Brit hosted a Howl-O-Ween Dog Costume Contest. [00:03:36] Investing in Dog DNA Startups [00:05:55] Product Idea: AI Dog [00:06:25] AI Ninja Tricks [00:07:50] The Big Debate: The Biden Executive Order on ""Safe, Secure, and Trustworthy Artificial Intelligence"" [00:10:36] We need National Tech Stacks. [00:15:28] Every single Federal Agency was directed to appoint a Head of AI.  [00:18:35] Lack of transparency at AI.gov?  [00:19:31] The difference between academic research and application has collapsed and it is dangerous.  [00:20:50] OpenAI took research from the Google world and commercialized it.  [00:21:20] National AI research should be like the Manhattan Project. [00:22:39] The Executive Order invokes the Defense Production Act. [00:24:03] We have an AI War already playing out on American Turf and it is called TikTok. [00:24:22] We are at war in the technological world and we have no defenses in terms of how this is playing out on our home front.  
[00:25:20] We are going to look back in 20 years and realize ""Oh my God, we were actually at war, and we were being assaulted on the home front but we don't even fully recognize the attack."" [00:25:30] What is going to happen in 2024? [00:26:10] Silicon Valley technologists are good at problems where a ""solution"" is the goal, but are uniquely bad at Washington style problems where ""balance"" is the goal.  [00:27:29] We need to take the American Tech Stack very very seriously.  [00:28:35] TikTok's Larry Ellison Proposal for continuing to exist in the US. [00:29:20] Why aren't conservatives demanding action on this?  [00:29:53] If we are in a war, where are we in it?  [00:30:16] American Identity and Digital Citizenship [00:32:00] What is the American Tech Stack really? [00:37:14] A hypothetical question: How would you change a government you don't like with AI? [00:39:32] Does digital identity really have to be tied to a single country? [00:41:48] Our generation has the most skewed view of globalization in history. [00:45:50] Where did this conversation start? It started with AI.  [00:46:50] AI Predictions for the next year. [00:56:04] Closing  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/moreorlesspod/message"	2023-11-03T00:00:00	3426176	More or Less	Dave Morin, Jessica Lessin, Brit Morin, and Sam Lessin have debated the future of Silicon Valley and tech as the closest of friends for the last 15 years. Now six companies, two venture funds and more than a decade at Google, Apple and Facebook later, they are opening up the debate. From The Information, Offline Ventures, and Slow Ventures.   Follow the crew:  http://threads.net/davemorin http://threads.net/brit http://threads.net/jlessin http://threads.net/lessin  Follow the pod: https://moreorlesspod.com/ http://threads.net/moreorless http://youtube.com/moreorlesspod	Dave Morin, Jessica Lessin, Brit Morin, and Sam Lessin	49
164	4DhOvn7s79XWQ75IxP6wZN	AI Regulation	John and Ben are skeptical about the need for, and the efficacy of, AI regulation via executive order.	2023-11-03T00:00:00	900048	Dithering	A fun, smart podcast from Ben Thompson and John Gruber. Two episodes per week, 15 minutes per episode. Not a minute less, not a minute more.	Ben Thompson and John Gruber	300
165	7DlraQ9E4Cde1cyytRqUod	A First Step Toward AI Regulation with Tom Wheeler	On Monday, Oct. 30, President Biden released a sweeping executive order that addresses many risks of artificial intelligence. Tom Wheeler, former chairman of the Federal Communications Commission, shares his insights on the order with Tristan and Aza and discusses what's next in the push toward AI regulation. Clarification: When quoting Thomas Jefferson, Aza incorrectly says "regime" instead of "regimen". The correct quote is: "I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind. And as that becomes more developed, more enlightened, as new discoveries are made, new truths discovered, and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy as civilized society to remain ever under the regimen of their barbarous ancestors." RECOMMENDED MEDIA: The AI Executive Order (President Biden's Executive Order on the safe, secure, and trustworthy development and use of AI); UK AI Safety Summit (the summit brings together international governments, leading AI companies, civil society groups, and experts in research to consider the risks of AI and discuss how they can be mitigated through internationally coordinated action); aitreaty.org (an open letter calling for an international AI treaty); Techlash: Who Makes the Rules in the Digital Gilded Age? (praised by Kirkus Reviews as "a rock-solid plan for controlling the tech giants"; readers will be energized by Tom Wheeler's vision of digital governance). RECOMMENDED YUA EPISODES: Inside the First AI Insight Forum in Washington; Digital Democracy is Within Reach with Audrey Tang; The AI Dilemma. Your Undivided Attention is produced by the Center for Humane Technology. 
Follow us on Twitter: @HumaneTech_	2023-11-02T00:00:00	2124617	Your Undivided Attention	In our podcast, Your Undivided Attention, co-hosts Tristan Harris and Aza Raskin explore the unprecedented power of emerging technologies: how they fit into our lives, and how they fit into a humane future.  Join us every other Thursday as we confront challenges and explore solutions with a wide range of thought leaders and change-makers, like Audrey Tang on digital democracy, neurotechnology with Nita Farahany, getting beyond dystopia with Yuval Noah Harari, and Esther Perel on Artificial Intimacy: the other AI.  Your Undivided Attention is produced by Executive Editor Sasha Fegan and Senior Producer Julia Scott. Our Associate Producers are Sara McCrea and Kirsten McMurray. We are a top tech podcast worldwide with more than 20 million downloads and a member of the TED Audio Collective.	Tristan Harris and Aza Raskin, The Center for Humane Technology	111
166	0JXtNnLo8xEFbWHG9GFxmu	US Advocates for Global Artificial Intelligence Safeguards Amid Rising Threats	Vice President Kamala Harris acknowledges the urgent need for international regulations against the potential threats posed by evolving AI technology. She emphatically stresses that leaders have an ethical and moral obligation to prevent misuse of AI in sectors like military, healthcare, and art, while President Joe Biden establishes standards for safety test disclosures by AI developers.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-11-02T00:00:00	220414	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
167	7mgXbBLrYI4xgbjQ4Kz7DA	Microsoft's Use of AI in Curating News Prompts Controversy and Concern	Microsoft's increased reliance on artificial intelligence and automation rather than human editors to curate news on its homepage has drawn criticism and sparked controversy. The move is reportedly behind the recent spread of false, sensationalist, and bizarre stories on the platform, damaging the credibility of news partners and even amplifying misinformation. Critics argue this highlights potential dangers in the irresponsible application of AI in news dissemination.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-11-02T00:00:00	181865	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
168	1q6FgiuIYSeAd4nSP1Svtp	Governments dip into AI regulation	Sam Bankman-Fried's fraud trial is set to wrap up today, eurozone inflation fell to its lowest level for more than two years, and Odey Asset Management is to close after allegations of sexual assault and harassment against its founder. Plus, global political leaders and tech executives will gather in the UK next week to discuss risks of artificial intelligence. Mentioned in this podcast: He said, they said: Sam Bankman-Fried jury weighs duelling accounts of FTX's downfall; Eurozone inflation falls more than expected to 2.9%; Odey Asset Management to close after sexual assault allegations against founder; How Sunak's Bletchley Park summit aims to shape global AI safety. The FT News Briefing is produced by Fiona Symon, Sonja Hutson, Kasia Broussalian and Marc Filippino. Additional help by Monica Lopez, Peter Barber, Michael Lello, David da Silva and Gavin Kallmann. Topher Forhecz is the FT's executive producer. The FT's global head of audio is Cheryl Brumley. The show's theme song is by Metaphor Music. Read a transcript of this episode on FT.com. Hosted on Acast. See acast.com/privacy for more information.	2023-11-01T00:00:00	690782	FT News Briefing	A rundown of the most important global business stories you need to know for the coming day, from the newsroom of the Financial Times. Available every weekday morning. Hosted on Acast. See acast.com/privacy for more information.	Financial Times	1547
169	65P4YAcvYEiCeAnCXbV9Fg	Exposing the Prejudices Inside Artificial Intelligence	"Today's episode unpacks the complex issue of bias in artificial intelligence. We explore how bias emerges through training data, algorithms, and human prejudices. Looking at real examples of biased AI in hiring, healthcare, and facial recognition, we see how bias leads to discriminatory impacts that amplify injustice. Steps like enhancing data diversity, algorithm adjustments, and monitoring for fairness can help mitigate bias. But completely eliminating it remains incredibly difficult, often requiring tradeoffs between competing values. There are no perfect solutions yet. Going forward, transparency, testing for disparate impacts across groups, and centering ethics and accountability will be critical. The stakes are high, as these systems shape more of our lives. But through thoughtful, cross-disciplinary dialogue and vigilance, we can strive to build AI that is fairer than our human biases. This podcast was generated with the help of artificial intelligence. We do fact check with human eyes, but there might still be hallucinations in the output.  Music credit: ""Modern Situations by Unicorn Heads"""	2023-11-01T00:00:00	878305	A Beginner's Guide to AI	"""A Beginner's Guide to AI"" makes the complex world of Artificial Intelligence accessible to all. Each episode breaks down a new AI concept into everyday language, tying it to real-world applications and featuring insights from industry experts. Ideal for novices, tech enthusiasts, and the simply curious, this podcast transforms AI learning into an engaging, digestible journey. Join us as we take the first steps into AI!  Want to get your AI going? Get in contact: dietmar@argo.berlin"	Dietmar Fischer	113
170	7BaGL6b79x4BWcbf92p6za	Biden's Executive Order Seeks to Mitigate Potential Dangers of AI Technology	President Joe Biden has issued an executive order establishing rules and guidelines for artificial intelligence to reduce safety and security risks, protect consumer privacy, promote equity and innovation, and improve the use of AI in healthcare, education, and government. This sweeping order addresses many potential risks posed by AI technology, emphasizing the need for governance in this rapidly-evolving field.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-11-01T00:00:00	195860	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
171	4tCMe2CqLEciDikolb0VCj	AI Chatbots Like ChatGPT Accused of Overusing Copyrighted News Content, Alleges Publishers Alliance	The News/Media Alliance, representing over 2,200 publishers, claims that companies developing AI tools like ChatGPT are excessively using copyrighted news content to train their models, allegedly violating intellectual property laws. The group's research suggests that such AI models extensively use information from news, magazines, and digital media sources, far more than other content types, creating a potential threat to the sustainability of both news publishers and trustworthy AI models.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-11-01T00:00:00	239657	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
172	7Hmw2jjhv4BHO4DpnKYgl2	Biden Administration Issues Stringent AI Regulations to Safeguard National Security	President Joe Biden's executive order on artificial intelligence (AI) introduces safety measures that aim to protect against potential misuse of the technology for creating destructive weapons or mounting cyberattacks. The order insists on safety tests for AI products and sets industry standards, such as watermarks for AI-driven products. The administration seeks cooperation from Congress on passing data privacy laws and encourages industries to adopt these guidelines to ensure AI's reliability, safety, and security.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-10-31T00:00:00	211134	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
173	4lkCYkXJzITzHHSjpOtFb3	Microsoft Accused of Damaging The Guardian's Reputation with Inappropriate AI-Generated Poll	The Guardian has accused Microsoft of causing reputational harm after a distastefully posed AI poll appeared alongside a news story about a woman's death. The poll solicited reader input on how the woman died, sparking backlash and confusion. The Guardian's CEO has sought assurance from Microsoft that the tech giant will seek approval before using AI technologies in conjunction with its news content.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-10-31T00:00:00	178089	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
174	2ozPRS3ucG08fDnrTsrHbX	Biden Signs Comprehensive AI Executive Order Amid Rising Democratic Debate	President Joe Biden's extensive executive order on artificial intelligence has reportedly been designed to address a variety of concerns. The order outlines key concerns such as cybersecurity, global competition, discrimination, and the technical oversight of advanced AI systems. The document has received support from both the tech industry and its critics, though some worry that the broad scale of the directive might overwhelm agencies. The order aims to appease a variety of groups within the AI landscape, and it carries implications for debate within the Democratic Party.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-10-31T00:00:00	215209	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
175	4eVUozE8NJK0gB6lvlaXQB	The State of AI Regulation in the USA	On the eve of the Biden Administration's anticipated executive order on artificial intelligence, NLW surveys the landscape of AI legislative efforts in the US. ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter:https://theaibreakdown.beehiiv.com/subscribe   Subscribe to The AI Breakdown on YouTube:https://www.youtube.com/@TheAIBreakdown   Join the community:bit.ly/aibreakdown   Learn more:http://breakdown.network/	2023-10-30T00:00:00	909374	The AI Daily Brief (Formerly The AI Breakdown): Artificial Intelligence News and Analysis	A daily news analysis show on all things artificial intelligence. NLW looks at AI from multiple angles, from the explosion of creativity brought on by new tools like Midjourney and ChatGPT to the potential disruptions to work and industries as we know them to the great philosophical, ethical and practical questions of advanced general intelligence, alignment and x-risk.	Nathaniel Whittemore	385
176	2c9uHlpFWyCEBHCpWsCf3C	Biden Issues Comprehensive Executive Order on Artificial Intelligence: Aims for Safety, Privacy, and Innovation	President Biden has enacted a broad-ranging executive order on artificial intelligence, setting new standards for safety and privacy, protecting workers' rights, and promoting technological advancement. The order emphasises the obligation to leverage AI's beneficial aspects while minimising potential risks. Measures include detailed safety procedures for companies whose models could jeopardise national or public health security and initiatives to encourage AI innovation, like a pilot of the National AI Research Resource. The decree further extends to combating potential discrimination caused by AI algorithms and implementing resources for educators using AI tools.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2023-10-30T00:00:00	263017	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
177	4Zt8W2fy987HjgCnJ88A6A	Brace for Impact: AI Regulations Incoming.	Folks, I'll admit - I can sometimes get a little carried away jumping on the latest HR tech hype train. And AI has been hyped up immensely as a silver bullet to transform our field. But recently, I took a step back to dive deeper into the EU's proposed Artificial Intelligence Act and its implications. This sweeping legislation aims to pass by the end of 2023 and regulate how companies can deploy AI. The EU recognizes the dangers of unfettered AI in high-stakes HR areas like hiring and performance reviews. Their framework mandates transparency, testing for bias, and human oversight of AI decisions. Responsible AI governance presents challenges. But it's also a chance to embed ethics into the foundations of HR technology. Yes, AI hype still outpaces reality. As leaders, we must ensure algorithms uplift work for all - not undermine human dignity. I'm optimistic about AI's potential, but sober about the risks. Let's keep discussing how HR can responsibly shape an intelligent yet human future of work.	2023-10-27T00:00:00	561502	Fullstack HR	Always discussing the full stack of HR. If you are interested in how work will evolve over the upcoming years, this podcast is for you. www.fullstackhr.io	Johannes Sundlo	59
178	3d2gQzf5S5XngkO393XxBn	European VCs & CEOs Warn of AI Over-Regulation in Podcast Episode	In this podcast episode, we dive into the concerns voiced by European venture capitalists and CEOs about the potential over-regulation of artificial intelligence. Explore the discussions on how excessive regulations could impact AI innovation and the tech industry's growth. Gain valuable insights into the delicate balance between regulation and fostering AI advancements.  Get on the AI Box Waitlist:https://AIBox.ai/Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/Follow me on Twitter: https://twitter.com/jaeden_ai  	2023-10-22T00:00:00	685174	AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Gemini, OpenAI, Anthropic	Welcome to the AI Applied podcast, hosted by Jaeden Schafer and Conor Grennan. Get your weekly dose of the latest AI news, trends, and discussions in the rapidly evolving world of artificial intelligence.  From ChatGPT and computer vision to robotics and natural language processing, we cover the most exciting developments in the field of AI. We also discuss ways to apply it to your career, job, workflows, and life.  Join us as we delve into the cutting-edge of AI research, understand its impact on our world, and explore the future of this rapidly growing field.	Jaeden Schafer and Conor Grennan	405
179	3NJTiTUHhGZE16g1IGBXjK	How AI Disrupts The Law	Artificial Intelligence and Generative AI are changing our lives and society as a whole, from how we shop to how we access news and make decisions. Are current and traditional legal frameworks and new governance strategies able to guard against the novel risks posed by new systems? How can we mitigate AI bias, protect privacy, and make algorithmic systems more accountable? How are data protection, non-discrimination, free speech, libel, and liability laws standing up to these changes? A lecture by Sandra Wachter recorded on 11 October 2023 at Barnard's Inn Hall, London. The transcript and downloadable versions of the lecture are available from the Gresham College website: https://www.gresham.ac.uk/watch-now/technology-law Gresham College has offered free public lectures for over 400 years, thanks to the generosity of our supporters. There are currently over 2,500 lectures free to access. We believe that everyone should have the opportunity to learn from some of the greatest minds. To support Gresham's mission, please consider making a donation: https://gresham.ac.uk/support/ Website: https://gresham.ac.uk Twitter: https://twitter.com/greshamcollege Facebook: https://facebook.com/greshamcollege Instagram: https://instagram.com/greshamcollege Support the Show.	2023-10-18T00:00:00	3542648	Gresham College Lectures	Gresham College has been providing free public lectures since 1597, making us London's oldest higher education institution. This podcast offers our recorded lectures that are free to access from the Gresham College website, or our YouTube channel.	Gresham College	2812
180	6v9d8zPXth0oHyuCDYtWhB	How Deepfakes Could Undermine Voters in 2024 Presidential Election	"This episode explored the alarming implications of exponentially advancing deepfake technology and its potential to unfairly manipulate voters in the 2024 US presidential election through hyper-realistic fake videos. We peeled back the layers on how these AI-generated fabrications work and why they represent an unprecedented threat to an informed democracy in the digital age. Equipping citizens with media literacy and critical thinking is crucial, as is pressuring platforms and lawmakers to responsibly regulate political deepfakes. But the clock is ticking as the election approaches. Listen in to hear insights from experts on proactive solutions needed to safeguard the truth and protect voters from deception. Forewarned is forearmed against this insidious new technological trickery.   This podcast was generated with the help of artificial intelligence. We do fact check with human eyes, but there might still be hallucinations in the output.  Music credit: ""Modern Situations by Unicorn Heads"""	2023-10-17T00:00:00	955627	A Beginner's Guide to AI	"""A Beginner's Guide to AI"" makes the complex world of Artificial Intelligence accessible to all. Each episode breaks down a new AI concept into everyday language, tying it to real-world applications and featuring insights from industry experts. Ideal for novices, tech enthusiasts, and the simply curious, this podcast transforms AI learning into an engaging, digestible journey. Join us as we take the first steps into AI!  Want to get your AI going? Get in contact: dietmar@argo.berlin"	Dietmar Fischer	113
181	6PrlZDBfG7B2yjp8bHnbaA	AI News 10/2/23: Writer AI and UK Regulation	In this episode of AI Equation, we embark on a journey through two captivating AI narratives that are shaping the industry.  Join us as we unpack the remarkable ascent of Writer, a generative AI startup that recently secured an astounding $100 million in a Series B funding round. Founded in 2020, Writer has experienced exponential growth, with revenue skyrocketing tenfold over the past two years. Learn why Writer is making waves and why enterprises are making the switch from Azure OpenAI to harness its superior AI outputs. Discover how Writer seamlessly integrates with daily workplace tools, revolutionizing content creation and enhancing efficiency. With fine-tuned models and a focus on privacy and security, Writer is carving a unique niche in the competitive AI landscape.  In our second story, we delve into the UK's groundbreaking efforts to effectively regulate AI. The Competition and Markets Authority (CMA) introduces a set of principles designed to regulate large AI models like ChatGPT. These principles emphasize developer accountability, interoperability, preventing anti-competitive behavior, and fostering access to AI models. We explore the significance of these principles for the global AI landscape and their potential impact on tech giants. With the AI safety summit in Berlin on the horizon, we analyze the UK's journey to assert its influence in the AI sphere.  Join us as we unravel these thought-provoking AI narratives and gain insights into the rapidly evolving world of artificial intelligence. Don't miss this episode of AI Equation! 	2023-10-02T00:00:00	218790	AI Equation: The Future of Content Creation	The AI Equation is a captivating podcast that delves into the symbiotic relationship between artificial intelligence (AI) and business success. 
Hosted by industry experts and thought leaders, this show explores the transformative impact of AI on various industries, unveiling the strategies, innovations, and success stories that have reshaped the business landscape.	AI Labs	82
182	2j622RIiE7rEwEQJQbisRM	Our AI Overlords Head to Capitol Hill to Discuss Regulation	Our AI Overlords Head to Capitol Hill to Discuss Regulation Only one representative from a non-business interest advocating for the rest of us :)	2023-09-20T00:00:00	664354	The New Next	Welcome to The New Next, the podcast where tomorrow is today, and the future is now. Together we'll embark on a journey through the evolving landscapes of technology, innovation, and the current events shaping our world.   Join us as we break down complex topics into actionable insights.   Whether you're a tech enthusiast, a forward-thinker, or just curious about the world's next chapter, you're in the right place.  So, plug in, power up, and prepare to dive deep into the heart of what's new, what's next, and what it all means for you.   Find us at https://www.matthewadjensen.com	Matthew Jensen	214
183	4j8c0G0A9TgSI870utQaZ2	Episode 15: The White House And Big Tech Dance The Self-Regulation Tango, August 11 2023	Emily and Alex tackle the White House hype about the 'voluntary commitments' of companies to limit the harms of their large language models: but only some large language models, and only some, over-hyped kinds of harms. Plus a full portion of Fresh Hell... and a little bit of good news. References: White House press release on voluntary commitments; Emily's blog post critiquing the voluntary commitments; An AI safety infused take on regulation; AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype; AI Hurts Consumers and Workers and Isn't Intelligent. Fresh AI Hell: Future of Life Institute hijacks SEO for EU's AI Act; LLMs for denying health insurance claims; NHS using AI as receptionist; Automated robots in reception; Can AI language models replace human research participants?; A recipe chatbot taught users how to make chlorine gas; Using a chatbot to pretend to interview Harriet Tubman; Worldcoin Orbs & iris scans; Martin Shkreli's AI for health startup; Authors impersonated with fraudulent books on Amazon/Goodreads. Good News: You can check out future livestreams at https://twitch.tv/DAIR_Institute. Follow us! Emily: Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social Alex: Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.	2023-09-20T00:00:00	3845799	Mystery AI Hype Theater 3000	Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separate fact from fiction, and science from bloviation. 
They're joined by special guests and talk about everything, from machine consciousness to science fiction, to political economy to art made by machines.	Emily M. Bender and Alex Hanna	32
184	1zTFkyTijDnI8bbPXXogqM	Teaching AI Right from Wrong: The Quest for Alignment	"This episode explored the concept of AI alignment - how we can create AI systems that act ethically and benefit humanity. We discussed key principles like helpfulness, honesty and respect for human autonomy. Approaches to translating values into AI include techniques like value learning and Constitutional AI. Safety considerations like corrigibility and robustness are also important for keeping AI aligned. A case study on responsible language models highlighted techniques to reduce harms in generative AI. While aligning AI to human values is complex, the goal of beneficial AI is essential to steer these powerful technologies towards justice and human dignity. This podcast was generated with the help of artificial intelligence. We do fact check with human eyes, but there might still be hallucinations in the output. Music credit: ""Modern Situations by Unicorn Heads""  --- CONTENT OF THIS EPISODE AI ALIGNMENT: MERGING TECHNOLOGY WITH HUMAN ETHICS  Welcome readers! Dive with me into the intricate universe of AI alignment.  WHY AI ALIGNMENT MATTERS  With AI's rapid evolution, ensuring systems respect human values is essential. AI alignment delves into creating machines that reflect human goals and values. From democracy to freedom, teaching machines about ethics is a monumental task. We must ensure AI remains predictable, controllable, and accountable.  UNDERSTANDING AI ALIGNMENT  AI alignment encompasses two primary avenues:  Technical alignment: Directly designing goal structures and training methods to induce desired behavior. Political alignment: Encouraging AI developers to prioritize public interest through ethical and responsible practices.  UNRAVELING BENEFICIAL AI  Beneficial AI revolves around being helpful, transparent, empowering, respectful, and just. Embedding societal values into AI remains a challenge. 
Techniques like inductive programming and inverse reinforcement learning offer promising avenues.  ENSURING TECHNICAL SAFETY  Corrigibility, explainability, robustness, and AI safety are pivotal to making AI user-friendly and safe. We want machines that remain under human control, are transparent in their actions, and can handle unpredictable situations.  SPOTLIGHT ON LANGUAGE MODELS  Large language models have showcased both potential and risks. A case in point is Anthropic's efforts to design inherently safe and socially responsible models. Their innovative ""value learning"" technique embeds ethical standards right into AI's neural pathways.  WHEN AI GOES WRONG  From Microsoft's Tay chatbot to biased algorithmic hiring tools, AI missteps have real-world impacts. These instances stress the urgency of proactive AI alignment. We must prioritize ethical AI development that actively benefits society.  AI SOLUTIONS FOR YOUR BUSINESS  Interested in integrating AI into your business operations? Argo.berlin specializes in tailoring AI solutions for diverse industries, emphasizing ethical AI development.  RECAP AND REFLECTIONS  AI alignment seeks to ensure AI enriches humanity. As we forge ahead, the AI community offers inspiring examples of harmonizing science and ethics. The goal? AI that mirrors human wisdom and values.  JOIN THE CONVERSATION  How would you teach AI to be ""good""? Share your insights and let's foster a vibrant discussion on designing virtuous AI.  CONCLUDING THOUGHTS  As Stanislas Dehaene eloquently states, ""The path of AI is paved with human values."" Let's ensure AI's journey remains anchored in human ethics, ensuring a brighter future for all.  Until our next exploration, remember: align with what truly matters."	2023-09-15T00:00:00	1095802	A Beginner's Guide to AI	"""A Beginner's Guide to AI"" makes the complex world of Artificial Intelligence accessible to all. 
Each episode breaks down a new AI concept into everyday language, tying it to real-world applications and featuring insights from industry experts. Ideal for novices, tech enthusiasts, and the simply curious, this podcast transforms AI learning into an engaging, digestible journey. Join us as we take the first steps into AI!  Want to get your AI going? Get in contact: dietmar@argo.berlin"	Dietmar Fischer	113
185	5J4LdGyg1WERMWeAvP7wTf	AI Chats Episode 31: Current State of AI Regulation in the UK and Europe	Welcome to the new world of AI. Today, for our latest episode, we are going to talk about the current status of AI regulation in both the UK and the EU. The press has recently been full of doomsday stories about AI, and whilst perhaps some of this is very overplayed, governments are certainly waking up to the need to consider whether protections should be put in place to control AI so as to protect against its possible harms. However, as we will explore today, there is of course the possibility for different approaches to be taken, depending on the perception of the risk that arises from these technologies and the view taken of the risk versus the possible benefits that arise from it. Featured Speakers: Dina Blikshteyn, Partner at Haynes Boone; James Brown, Partner at Haynes Boone. AI Chats is a podcast produced by Haynes Boone. © 2024 Haynes and Boone, LLP. All rights reserved.	2023-09-12T00:00:00	2201208	AI Chats	Welcome to AI Chats, a podcast series by Haynes and Boone, LLP covering a broad range of technology and legal topics in artificial intelligence (AI) and deep learning. Renowned AI theorists and developers join Haynes Boone lawyers to discuss the current and prospective uses of AI in a broad range of technology areas, including machine learning, healthcare, autonomous driving, cloud computing, natural language processing, and medical devices. We discuss novel legal issues presented by using AI in business operations, intellectual property portfolio development, privacy and data protection, and employment law, among other topics.	Haynes and Boone, LLP	37
186	1836QVhTRetZytcnoqQ3NW	AI confidence by design, from regulation to explanation. With Dr Mike Nix, Lead AI Clinical Scientist at Leeds NHS Trust	This episode focuses on the topic of confidence in healthcare AI technologies. My guest is Dr Mike Nix, Lead AI Clinical Scientist at Leeds NHS Trust. Discussion topics include: how AI errors and human errors are fundamentally different, and why that's so important in the context of healthcare AI; the concept of explainable AI and its potential limitations in the context of human cognitive biases and decision making; and the role of regulation in supporting AI confidence. Links to reports on healthcare workers' confidence in AI: https://digital-transformation.hee.nhs.uk/building-a-digital-workforce/dart-ed/horizon-scanning/understanding-healthcare-workers-confidence-in-ai https://digital-transformation.hee.nhs.uk/building-a-digital-workforce/dart-ed/horizon-scanning/developing-healthcare-workers-confidence-in-ai	2023-09-11T00:00:00	2026312	Digital Health Section Podcast- Royal Society of Medicine 	Discover how digital technologies are transforming healthcare through interviews with leading digital health experts. Presented by Dr Annabelle Painter. All views expressed in this podcast are of the speakers themselves and not of the RSM. Find out more about the RSM digital council: rsm.ac/dhsectionpodcast  Follow us: #RSMdigihealth	RSM Digital Health Council	30
187	0rdJwj299nyUobvUGRepUT	#76 AI regulation in Australia - the ideas are in	"This week we review responses to the Australian Government's open consultation on how to mitigate the potential risks of AI. In June, the Government called for submissions to its discussion paper titled ""Safe and responsible AI in Australia"". While the submissions haven't been published, several have made their way into the public domain. As well as sharing the recommendations of our own (elevenM) submission, we explore proposals from big tech giants Microsoft and Google, various members of academia, think-tanks such as the UTS Human Technology Institute, and the Australian Human Rights Commission.  Links: Article about AI replacing Taylor Swift (SMH) https://www.smh.com.au/culture/music/what-s-next-for-john-lennon-a-duet-with-taylor-swift-20230810-p5dvf2.html Government discussion paper: Safe and responsible AI in Australia https://consult.industry.gov.au/supporting-responsible-ai Reporting on submission from Kingston AI Group researchers (AFR) https://www.afr.com/technology/labor-ignoring-the-elephant-in-the-room-on-ai-experts-20230804-p5du1p Submission from the Gradient Institute https://www.gradientinstitute.org/posts/disr-safe-responsible-ai-submission/ Reporting on submission from Google (AFR) https://www.afr.com/technology/google-tells-government-how-to-regulate-ai-and-who-to-blame-when-it-goes-wrong-20230728-p5ds0s Reporting on submission from Microsoft (AFR) https://www.afr.com/technology/microsoft-urges-soft-approach-as-husic-vows-to-regulate-high-risk-ai-20230721-p5dqaf Reporting on submission from UTS Human Technology Institute (InnovationAus) https://www.innovationaus.com/do-we-need-new-ai-laws-sure-but-lets-try-enforce-what-we-have-first/ Submission from Australian Human Rights Commission https://humanrights.gov.au/about/news/australia-needs-ai-regulation EU AI Act 
https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence  Credits: Editing and post-production by Martin Franklin (East Coast Studio), eastcoaststudio.com.au. Music by Bensound.com"	2023-08-28T00:00:00	1683821	This week in digital trust	Regular conversations about tech policy, privacy, cyber security, AI safety and everything in between.  This Week in Digital Trust is hosted by Arjun Ramachandran and Jordan Wilson-Otto, self-described technology enthusiasts with a passion for ensuring the use of technology leads to the best outcomes for humanity.  Arjun and Jordan are Principals at elevenM, a specialist privacy, cyber security and data governance consultancy in Australia. Arjun is a strategic communications expert and former journalist. Jordan is an expert in privacy regulation, policy development and program management.	elevenM	110
188	0aLFH8N1mRNymE0hnegufA	Balancing Freedom and Regulation: Logan Dopp on the Ethical Implications of AI	"The future is now, and AI is leading the charge. That's the crux of our fascinating conversation with Logan Dopp from Toren AI, an expert whose insights into the evolving world of AI will undoubtedly offer you a fresh perspective. We journey through the rapid technological changes impacting multiple sectors and delve into the exciting potential of transitioning from specific to generalized AI, and how it can streamline our lives. We further explore the revolutionary use of AI in industries like heavy manufacturing, where automation and safety are being redefined. With a focus on AI forms like computer vision and ChatGPT, we unpack their immense potential and their ethical implications - a balancing act between artistic and intellectual freedom. But can we regulate this technology? That's the million-dollar question, and we get into the thick of this compelling debate with Logan. Finally, we take a critical look at AI's future, particularly its impact on policy and technology. Logan shares valuable insights on how AI can detect and classify objects, tailor data for customers, and optimize processes in a manufacturing plant. We also discuss AI's potential biases and underscore the importance of humans in strategic critical thinking. Get ready for an enlightening ride into the future of AI, where humans and machines collaborate to redefine efficiency and productivity. About Logan Dopp: Logan is proud to be the co-founder and CEO of Toren AI, a video intelligence company that leverages computer vision to detect events that help make workplaces safe, secure and optimized. He is an experienced operations director and entrepreneur with proven ability to develop and field AI technologies to solve intelligence missions. Logan previously worked as the director of Counterintelligence Operations at D4C Global. 
He is a lecturer and consultant in academic, corporate, and government sectors on issues of security and counterintelligence. Logan has leveraged and designed emerging technologies to support counterintelligence, security, and counterterrorism missions. He envisions AI changing the way we think about cameras in all aspects of business. Listen to Logan's previous episode of the Founders' Forum: Artificial Intelligence: Enhancing Efficiency and Security with Logan Dopp. www.toren.ai linkedin.com/in/logan-dopp-360060 This episode is brought to you by CamaPlan, A Different Way to Invest. Go to camaplan.com/foundersforum to learn more. Be sure to click ""+ Follow"" at the top of the page, new episodes every Wednesday! Thanks for listening! Follow Marc Bernstein on Instagram, LinkedIn, and Facebook! And follow Ang Onorato on LinkedIn and Instagram! Are you a visionary founder with a compelling success story that deserves to be shared with our audience? We're on the lookout for accomplished business leaders like you to be featured on the Founders' Forum Radio Show and Podcast. If you've surmounted challenges, reached significant milestones, or have an exciting vision for the future, we'd be honored to have you as a guest on our show. Your experiences and insights can inspire and enlighten others in the business world. If you're eager to share your journey and the invaluable lessons you've learned along the way, we invite you to apply here."	2023-08-23T00:00:00	1730586	Founders' Forum	Great business stories and great people come together on Marc Bernstein's Founders' Forum! Marc Bernstein and co-host, Ang Onorato talk to business founders to discuss their lives, successes, lessons, and their vision for the future. It's all about the success they've earned and the lessons they've learned along the way. These are American success stories and they're not done yet!                 Your Host, Marc Bernstein                Marc Bernstein is an entrepreneur, author, and consultant. 
He helps high performing entrepreneurs and business owners create a vision for the future, accomplish their business and personal goals, financial and otherwise, and follow through on their intentions. Marc recently co-founded March, a forward-looking company with a unique approach to wealth management. He captured his philosophy in his #1 Amazon Bestseller, The Fiscal Therapy Solution 1.0. Marc is also the founder of the Forward Focus Forum, a suite of resources tailored specifically to educate and connect high performing entrepreneurs, helping them realize their vision of true financial independence. Find out more about Marc and connect with him at marcjbernstein.com.              and Ang Onorato                Ang is a highly sought after human capital leader with expertise in multiple areas including conscious leadership development, executive search, business operations, and human behavioral assessment. With over 25 years of experience, she has worked with a range of clients including global Fortune 500 organizations, professional service firms, and entrepreneurs. Her background includes leading major divisions and lines of business in corporate environments, as well as being an entrepreneur herself. Ang's approach is uncommon in that she blends psychology, spirituality, and business in her work. With a Master's Degree in Psychology and Social Sciences, she brings a deep understanding of human behavior to her coaching and consulting services. This allows her to guide corporate and entrepreneurial clients to develop the conscious leadership skills and mindset needed to succeed with their businesses, teams and stakeholders. Connect with Ang at angonorato.com. Are you a visionary founder with a compelling success story that deserves to be shared with our audience? We're on the lookout for accomplished business leaders like you to be featured on the Founders' Forum Radio Show and Podcast. 
If you've surmounted challenges, reached significant milestones, or have an exciting vision for the future, we'd be honored to have you as a guest on our show. Your experiences and insights can inspire and enlighten others in the business world. If you're eager to share your journey and the invaluable lessons you've learned along the way, we invite you to apply here. Connect with us, and let's discuss the possibility of featuring you in an upcoming episode. Join us in celebrating your success and contributing to the legacy of the Founders' Forum!	Marc Bernstein	52
189	0gMRDajzAF9bmbIMKeLYxR	FRT Episode 139: Linking Europe's AI Act with Data Regulation	In this episode of FRT, Julia Sterling, Vice President in Business Development for Big Data & Advanced Analytics at Commerzbank, discusses artificial intelligence, the EU's AI Act, and the connection to data policies, particularly cross-border policies.	2023-08-22T00:00:00	2116980	Finance Regulation Technology	Hear the latest from the IIF's experts on where the dynamic world of digital innovation in finance intersects with key regulatory and public policy considerations. Specific topics include access to innovative technologies, data sharing and protection, machine learning, cloud computing and cultural change within firms in the digital era.	Institute of International Finance	114
190	1kweD5sEnoq4HC1R1aJc5s	FRT Episode 139: Linking Europe's AI Act with Data Regulation	In this episode of FRT, Julia Sterling, Vice President in Business Development for Big Data & Advanced Analytics at Commerzbank, discusses artificial intelligence, the EU's AI Act, and the connection to data policies, particularly cross-border policies.	2023-08-22T00:00:00	2116980	Finance Regulation Technology	Hear the latest from the IIF's experts on where the dynamic world of digital innovation in finance intersects with key regulatory and public policy considerations. Specific topics include access to innovative technologies, data sharing and protection, machine learning, cloud computing and cultural change within firms in the digital era.	Institute of International Finance	114
191	335ae6nxFqB7oxmPaRpC1N	Responsible AI: Building trust, shaping policy	Text us your thoughts on this episode. With the use of generative AI accelerating, it's important to focus on how business leaders can get the most out of their tech investment in a trusted and ethical way. In this episode, we dive into responsible AI: what it is, why it's important and how it can be a competitive advantage. To cover this important topic, PwC's host, Joe Atkinson, is joined by leading AI experts and members of the National AI Advisory Committee to the President and the White House: Miriam Vogel, President and CEO of EqualAI, and Ramayya Krishnan, Dean of the Heinz College of Information Systems and Public Policy and Director of the Block Center for Technology and Society at Carnegie Mellon University. For more information on this episode's speakers, and to view the full transcript, please visit pwc.com.	2023-08-16T00:00:00	1822512	PwC Pulse - a business podcast for executives	Executives from across industries share insights to help business leaders solve their toughest business challenges. By combining the right people and technologies, they're tackling issues like trust, talent, transformation and sustainability, and addressing external forces like geopolitical conflict, the ongoing pandemic and social injustice challenges. Listen to these conversations on PwC Pulse podcast.	PwC	22
192	2avtjGv770nO4SHYgOKCL3	AI Public Opinion: Survey Shows Distrust of Tech Firms, Demand for Regulation	In this episode we dive into a recent survey revealing a growing mistrust of technology companies and the public's increasing demand for stricter AI regulations. Join us as we unpack the underlying reasons behind this sentiment and discuss potential implications for the tech industry's future. Get on the AI Box Waitlist: https://AIBox.ai/ Investor Contact Email: jaeden@aibox.ai Facebook Community: https://www.facebook.com/groups/739308654562189/ Discord Community: https://aibox.ai/discord Download Selfpause: https://selfpause.com/ Follow me on Twitter... er... X.com: https://twitter.com/jaeden_ai	2023-08-11T00:00:00	803015	AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning	AI Chat is the podcast where we dive into the world of ChatGPT, cutting-edge AI news and its impact on our daily lives. With in-depth discussions and interviews with leading experts in the field, we'll explore the latest advancements in language models, machine learning, and more. From its practical applications to its ethical considerations, AI Chat will keep you informed and entertained on the exciting developments in the world of AI. Tune in to stay ahead of the curve on the latest technological revolution.	Jaeden Schafer	719
193	4UdsAyqPq1xII9ONZEU1lw	Episode 135: Generative AI And Data Privacy - Risks And Regulation	Generative AI: ChatGPT, for example. Have you considered how generative AI collects our personal information to provide its benefits, in ways that can do us wrong? What can we do about the risks? How should legislators and regulators balance AI's benefits with our rights to personal privacy? Rita Garry, a Chicago attorney with the firm of Howard & Howard Attorneys, PLLC, provides data privacy and cybersecurity services with a view to the specifics of each client. Tune in to learn what generative AI is, how it affects individual privacy, what the recently announced White House five principles for AI regulation are, and what organizations and individuals can do about generative AI. Time stamps: 05:35 The White House's AI Bill of Rights; 14:00 Advice on how we can decide how AI uses our data	2023-08-10T00:00:00	983361	Data Privacy Detective	The internet in its blooming evolution makes personal data big business, for government, the private sector and denizens of the dark alike. The Data Privacy Detective explores how governments balance the interests of personal privacy with competing needs for public security, public health and other communal goods. It scans the globe for champions, villains, protectors and invaders of personal privacy and for the tools and technology used by individuals, business and government in the great competition between personal privacy and societal good order. We'll discuss how to guard our privacy by safeguarding the personal data we want to protect. We'll aim to limit the access others can gain to your sensitive personal data while enjoying the convenience and power of smartphones, Facebook, Google, EBay, PayPal and thousands of devices and sites. We'll explore how sinister forces seek to penetrate defenses to access data you don't want them to have. 
We'll discover how companies providing us services and devices collect, use and try to exploit or safeguard our personal data. And we'll keep up to date on how governments regulate personal data, including how they themselves create, use and disclose it in an effort to advance public goals in ways that vary dramatically from country to country. For the public good and personal privacy can be at odds. On one hand, governments try to deter terrorist incidents, theft, fraud and other criminal activity by accessing personal data, by collecting and analyzing health data to prevent and control disease and in other ways most people readily accept. On the other hand, many governments view personal privacy as a fundamental human right, with government as guardian of each citizen's right to privacy. How authorities regulate data privacy is an ongoing balance of public and individual interests. We'll report statutes, regulations, international agreements and court decisions that determine the balance in favor of one or more of the competing interests. And we'll explore innovative efforts to transcend government control through blockchain and other technology. If you have ideas for interviews or stories, please email info@thedataprivacydetective.com.	Joe Dehner - Global Data Privacy Lawyer	150
194	4Nyl6D0sY5Pao3ieWO1UH8	Taming the AI Giant: How Different Countries are Shaping AI Regulations - EP 3	"The AI Optimist Debate Question 1: While AI holds immense potential in education and business, serious privacy concerns exist. AI systems need access to vast amounts of personal data to provide personalized learning and business insights. How can we ensure this data is not misused or exploited, and what safeguards should we put in place to protect individuals' privacy rights? The AI Optimist sees a future of international cooperation and collaboration, moving into possibility and out of fear. The AI pessimists see it going wrong, with each country doing its own thing. And in the current state of regulations, the pessimists are winning. What do you think? Reply to the email and comment below; let's explore how this works for people, not just governments and businesses! Given the pessimists' constant focus on AI as being as dangerous as nuclear bombs, it's interesting that no one has proposed not using AI in the military. Nuclear war doesn't happen based on the theory of MAD, mutually assured destruction. Do we want to release the same kind of danger with AI, which is already being abused by government, businesses, and the military in major countries? Likely because AI misuse has already happened; if we know what we know now about how the threat of nuclear war and the spread of atomic capabilities threaten us all, why don't we focus on taming AI now... together? From Facial Recognition to Tracking Digital Activity with Multiple Sets of Rules: The first wave of AI's serious usage came with facial recognition in China, the US, and the EU, not just by governments and cameras everywhere, but with businesses doing this in all these countries without asking for permission. 
They just did it! We won't explore here the impact of AI-driven social algorithms on our habits, cocooning us into echo chamber silos and subjecting younger generations to this behavioral testing. Some of the following regulations finally cover societal impact, and we hope this continues, because AI has been around for a while with few rules. China is the only one with a clear policy aligned with controlling the population. Yet the use in other countries is not precisely about freedom and individual rights; it's also about control. The more you study AI regulation, the more you see it's a game of control. The AI Conundrum: How do we regulate what's already happening? Global regulation of AI poses various challenges requiring careful consideration. Yet the variety of laws, cultures, technology savvy, and governments who already abuse AI creates contradictions that no single proposal or framework can cover. Nonetheless, there are current frameworks and proposals that we can explore to address these challenges effectively. Countries could learn from each other. One of the significant challenges is determining common values that align with different countries, regions, and cultures. For example, China prioritizes regulations that align with socialist values, while the US, EU, and Canada focus on individual rights. Despite the existence of diverse cultures and rules, can we regulate AI effectively and navigate challenges as they emerge? Another issue is identifying the appropriate entity to regulate AI. Should it be independent audits, third-party organizations, government entities, or institutions? What is the risk? Measuring AI risk and determining what constitutes a risk means different things in different parts of the world, like China, the US, Japan, Canada, and the rest. KEY QUESTIONS FOR AI REGULATION: 1. How can we safeguard individuals' privacy rights while utilizing AI for personalized learning and business insights? 2. 
Privacy concerns arise because AI systems require extensive access to personal data. What measures can be taken to prevent the misuse or exploitation of this data? 3. Selling data: the US is a leader in sales of personal data, including by parts of the government (which, if done directly, would be against the law, but when buying the data from private parties, there are no rules). Before you think this is some conspiracy talk, read this from the US government in January 2022: Office of the Director of National Intelligence Senior Advisory Group Panel on Commercially Available Information (U) Report to the Director of National Intelligence. The Defense Intelligence Agency buys data from LexisNexis; the Navy buys a database of people who might be tied to sanctioned people from Sayari Analytics; the FBI buys social media alerts from ZeroFox, a cybersecurity company; foreign entities and governments have also purchased this data with location data, social activity, proximity to shopping areas and protests, and the list goes on and on. KEY ISSUES: 1. When comparing the approaches of the U.S. and EU, the U.S. Algorithmic Accountability Act of 2022 proposal focuses specifically on automated processes and systems that make critical decisions. 2. In contrast, the EU Artificial Intelligence Act framework applies to a broader range of AI systems. It imposes regulatory requirements corresponding to the level of risk a system poses to the public. 3. Canada has no laws explicitly addressing AI like the U.S. and EU Acts. The Artificial Intelligence Act (AIA) is a proposal to address these issues and is quite restrictive like the EU's. Canada does have the Directive on Automated Decision-Making in the public sphere, which mandates an assessment of the algorithmic impact of each automated decision-making system used by a federal institution. Japan was the first G7 country to release comprehensive AI ethics guidelines in 2019, gradually prioritizing ethics and human oversight in using AI. 
There's an ongoing discussion regarding whether this self-regulatory model is enough or if more stringent laws are necessary to address potential harm. China is taking a solid stance on regulating the development and use of AI, focusing on ensuring technical safety and promoting innovation in government and industry. However, this approach may not prioritize the empowerment of citizens and could lead to isolation if human rights concerns are not addressed. These regulations will have a significant impact worldwide. Even though AI, in the form of machine learning in the early days, has been used by most countries in the past ten years, regulation is lacking. Let's dive into what exists and explore further to form your own opinions; links are provided to each of the measures where possible below. Leading the way with an actual privacy framework: China. China aims to lead globally in AI while mitigating risks. Regulations focus on managing data, algorithms, and application scenarios. China's policy (translation): Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment), April 2023. July 14, 2023 update to the policy: China takes a significant step in regulating generative AI services like ChatGPT. The rules will now only apply to services available to the general public in China. 
Technology developed in research institutions or intended for overseas users is exempted. The current version has also removed language indicating punitive measures that had included fines as high as 100,000 yuan ($14,027) for violations. The state encourages the innovative use of generative AI in all industries and fields and supports the development of secure and trustworthy chips, software, tools, computing power, and data sources, according to the document announcing the rules. China also urges platforms to participate in the formulation of international rules and standards related to generative AI, it said. Still, among the key provisions is a requirement for generative AI service providers to conduct security reviews and register their algorithms with the government if their services are capable of influencing public opinion or can mobilize the public. Chinese AI governance: China is rolling out some of the world's earliest and most detailed regulations governing artificial intelligence (AI). These rules will impact how AI technology is built and deployed within China and internationally. Western perception: In the West, China's regulations are often dismissed as irrelevant or seen purely through geopolitical competition to write the rules for AI. However, they deserve careful study for how they will affect China's AI trajectory and what they can teach policymakers worldwide about regulating the technology. This article breaks down the regulations into their parts (the terminology, key concepts, and specific requirements) and then traces those components to their roots, revealing how Chinese academics, bureaucrats, and journalists shaped the regulations. 
3 Key Regulations: China's three most concrete and impactful regulations on algorithms and AI are its 2021 regulation on recommendation algorithms, the 2022 rules for deep synthesis (synthetically generated content), and the 2023 draft rules on generative AI. * The rules for recommendation algorithms bar excessive price discrimination and protect the rights of workers subject to algorithmic scheduling. * The deep synthesis regulation requires conspicuous labels on synthetically generated content. * The draft generative AI regulation requires the training data and model outputs to be true and accurate, a potentially insurmountable hurdle for AI chatbots to clear. Lessons for Policymakers: By rolling out targeted AI rules, Chinese regulators are steadily building up their bureaucratic know-how and governing capacity. Reusable regulatory tools like the algorithm registry can act as scaffolding to ease each successive regulation's construction. Key Players: The Cyberspace Administration of China (CAC) is the clear bureaucratic leader in governance to date. However, that position may grow more tenuous as the focus moves beyond the CAC's core competency of online content controls. The Ministry of Science and Technology is another key player. Future of AI in China: In the years ahead, China will continue rolling out targeted AI regulations and laying the groundwork for a capstone national AI law. Any country, company, or institution that hopes to compete against, cooperate with, or understand China's AI ecosystem must examine these moves closely. Pros: China's AI regulations provide a comprehensive framework for AI governance, which can be a reference for other countries. They aim to protect individuals and society from potential adverse impacts of AI, such as excessive price discrimination and worker exploitation. They promote transparency and accountability in AI development and deployment. Addressing AI safety early before harms emerge. 
Cons: The regulations emphasize controlling AI rather than empowering citizens, and their top-down approach risks stifling innovation. China's tech industry has pushed back against some requirements, such as mandatory algorithm audits, though compliance is increasing as enforcement rises. With looser ethics restrictions, China could pull ahead in AI through the sheer scale of its data and research, but ethical lapses could hamper global collaboration. The requirement for AI outputs to be "true and accurate" could pose significant challenges for AI developers, particularly for AI chatbots. The regulations could limit the creative use of AI technologies, and they may be seen as a means for the Chinese government to control AI technologies and their use.

Key aspects include required impact assessments before deploying high-risk AI, registration of AI companies, and liability rules. Voluntary ethics principles also exist. Controversial areas include broad surveillance uses of AI, the blurring between voluntary and binding rules, and the opacity of some provisions. As a significant AI player, China's regulations could influence global norms, but its top-down, control-focused approach differs from the Western emphasis on individual rights.

China is assertively regulating AI development and use primarily from government and industry perspectives. The focus is on technical safety and innovation gains rather than empowering citizens. China's regulations will shape the global landscape but may also isolate it if human rights implications remain unaddressed.

Articles exploring China's approach to AI:

* Carnegie Endowment for International Peace: "China's AI Regulations and How They Get Made"
* "China's New Blueprint: Regulating the Wild Wild East of AI" by Shelly Palmer

EU's proposed Artificial Intelligence Act: The European Commission has proposed a Regulation laying down harmonized rules on artificial intelligence (the Artificial Intelligence Act).
This Act is designed to regulate the use of AI across various industries and social activities, and it is the first significant attempt to regulate AI globally. It aims to address the risks of specific AI systems while supporting innovation.

The Act classifies AI systems by risk: minimal, limited, high-risk, or unacceptable. The strictest rules apply to high-risk systems, such as AI used in self-driving cars. Key requirements for high-risk AI include human oversight, robustness and accuracy, transparency, and the provision of information to users. Premarket conformity assessments are required before high-risk AI can be used, with ongoing monitoring once operational. Controversial areas include settling on a single definition of AI, the scope of high-risk systems, and the risk of stifling innovation with red tape. As the first significant framework, it could influence AI regulation globally, but it risks being too EU-focused, and debate continues over the right balance between safety and innovation.

The Act aims to ensure that AI systems placed on the Union market and used there are safe and respect existing laws on fundamental rights and Union values. It also seeks to provide legal certainty to facilitate investment and innovation in AI, enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems, and facilitate the development of a single market for lawful, safe, and trustworthy AI applications.

The Act proposes a single, future-proof definition of AI. Certain particularly harmful AI practices are prohibited as contravening Union values, while specific restrictions and safeguards are set out for certain uses of remote biometric identification systems (such as facial recognition) for law enforcement. The Act lays down a solid risk methodology to define high-risk AI systems: those that pose significant risks to persons' health and safety or fundamental rights.
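The tiered, risk-based scheme described above can be pictured as a small lookup from risk tier to obligations. The following is only an illustrative sketch: the tier names and the obligations attached to each are simplified assumptions drawn from the summary above, not the Act's legal text.

```python
# Toy model of a risk-tiered AI rulebook in the spirit of the EU AI Act.
# Tier names and obligations are illustrative assumptions, not statutory text.
RISK_TIERS = {
    "unacceptable": {"allowed": False, "obligations": []},
    "high": {
        "allowed": True,
        "obligations": [
            "conformity assessment",
            "human oversight",
            "transparency",
            "post-market monitoring",
        ],
    },
    "limited": {"allowed": True, "obligations": ["transparency"]},
    "minimal": {"allowed": True, "obligations": []},
}


def obligations_for(tier: str) -> list:
    """Return the illustrative obligations for a risk tier.

    Raises ValueError for unknown tiers and for tiers whose systems
    may not be placed on the market at all.
    """
    if tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {tier}")
    info = RISK_TIERS[tier]
    if not info["allowed"]:
        raise ValueError(f"'{tier}' systems may not be placed on the market")
    return info["obligations"]
```

The point of the sketch is the shape of the regime, not its content: obligations scale with the tier, and the top tier is a prohibition rather than a checklist.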
These AI systems must comply with a set of horizontal mandatory requirements for trustworthy AI and follow conformity assessment procedures before they can be placed on the Union market. The Act proposes a governance system at the Member State level, building on already existing structures, and a cooperation mechanism at the Union level through the establishment of a European Artificial Intelligence Board. Additional measures are also proposed to support innovation, mainly through AI regulatory sandboxes and other efforts to reduce the regulatory burden on Small and Medium-Sized Enterprises (SMEs) and start-ups.

The Act is part of a broader comprehensive package of measures that address problems posed by the development and use of AI. It is coherent with the Commission's overall digital strategy in its contribution to promoting technology that works for people, and it is one of the three pillars of the policy orientation and objectives announced in the Communication "Shaping Europe's Digital Future."

The EU AI Act takes a precautionary approach to ensure trustworthy AI, but its breadth and its requirements, such as conformity assessments, raise concerns about slowing European AI innovation. Aspects like its risk-based approach could provide a model for global oversight while allowing benign AI to thrive.

Pros: The Act aims to ensure that AI systems are safe and respect existing laws on fundamental rights and Union values. It protects fundamental rights, allows safer AI applications, and builds trust. It provides legal certainty, facilitating investment and innovation in AI. It enhances governance and the effective enforcement of laws on fundamental rights and safety requirements applicable to AI systems, and it enables the development of a single market for lawful, safe, and trustworthy AI applications.

Cons: The Act may impose additional regulatory burdens on AI developers and users, especially those with high-risk AI systems.
It may limit certain AI practices, potentially stifling innovation, and it places a burden on developers, with unclear distinctions between risk categories. The Act's requirements for high-risk AI systems may be challenging for some organizations, potentially limiting their ability to develop or deploy such systems.

Additional Research:

* "Fundamentals of a Regulatory System for Algorithm-Based Processes," expert opinion prepared on behalf of the Federation of German Consumer Organisations (Verbraucherzentrale Bundesverband), 1/5/19
* "Regulating AI in EU: three things you need to know and three reasons why you must know them!"

US Proposal: Introduction of the Algorithmic Accountability Act of 2022

Source: U.S. House and Senate Reintroduce the Algorithmic Accountability Act Intended to Regulate AI

On February 3, 2022, U.S. Democratic lawmakers introduced the "Algorithmic Accountability Act of 2022" in both the Senate (S. 3572) and the House of Representatives (H.R. 6580). This act aims to hold organizations accountable for using algorithms and other automated systems to make critical decisions affecting individuals in the U.S.

Key points:

* The AAAI is a proposed law regulating the development and use of automated decision systems (ADS) in the United States.
* The AAAI would require companies to assess their ADS's impacts on individuals and take steps to mitigate any adverse effects.
* The AAAI would also give individuals the right to access and correct information about themselves used in an ADS.

Controversial areas:

* Some have criticized the AAAI for being too burdensome, while others have argued that it does not go far enough.
* One of the most controversial aspects of the AAAI is the definition of an "automated decision system." The AAAI defines an ADS as any system that "uses algorithms or other automated processes to make decisions that have a significant impact on individuals," but the specific criteria for determining whether a system qualifies as an ADS remain unclear.

Purpose of the Act:
The U.S. Act intends to increase transparency over how algorithms and automated systems are used in decision-making contexts, in order to reduce discriminatory, biased, or harmful outcomes.

Covered Entities and Key Definitions: The U.S. Act applies to businesses that fall under its definition of "covered entities." These can be divided into two broad categories: (i) businesses that deploy "augmented critical decision processes" (ACDPs); and (ii) businesses that deploy "automated decision systems" (ADS), which are then used by the first category of companies in an ACDP.

Impact Assessments: The U.S. Act would require the Federal Trade Commission (FTC) to issue regulations requiring covered entities to perform impact assessments of any deployed ACDP, or of any deployed ADS developed for use by a covered entity of the first category in an ACDP.

Content of the Impact Assessment: While the FTC has yet to define the precise form and content of impact assessments, the U.S. Act already provides a long list of action items for covered entities to carry out when conducting them.

Pros: The AAAI could have several benefits for the AI industry, such as:

* Increased public trust in AI systems.
* Improved compliance with data protection and privacy laws.
* Reduced risk of negative publicity or legal action.

The U.S. Act aims to increase transparency and accountability in the use of automated decision-making systems, which can help reduce discriminatory or harmful outcomes, and it could serve as a model for other countries developing their own regulations for AI and automated decision-making.

Cons: The Act could impose significant compliance burdens on small and medium-sized businesses, and its focus on "critical decisions" may limit its applicability, leaving specific uses of AI and automated decision-making systems unregulated.
The Act also leaves much to be decided by the FTC, which could lead to uncertainty for businesses regarding compliance requirements.

Here are some additional resources:

* The Algorithmic Accountability Act of 2022 (AAAI): https://www.congress.gov/bill/117th-congress/senate-bill/3572
* The AI Now Institute: https://ainowinstitute.org/
* The Center for Data Innovation: https://www.datainnovation.org/

Wyden, Paul and Bipartisan Members of Congress Introduce The Fourth Amendment Is Not For Sale Act

The Fourth Amendment Is Not For Sale Act closes the legal loophole that allows data brokers to sell Americans' personal information to law enforcement and intelligence agencies without any court oversight, in contrast to the strict rules for phone companies, social media sites, and other businesses that have direct relationships with consumers.

"Doing business online doesn't amount to giving the government permission to track your every movement or rifle through the most personal details of your life," Wyden said. "There's no reason information scavenged by data brokers should be treated differently than the same data held by your phone company or email provider. This bill closes that legal loophole and ensures that the government can't use its credit card to end-run the Fourth Amendment."

AI Regulation in Japan: Proposal for Human-Centric AI

The critical points about Japan's proposed approach to AI (Source: AI Governance in Japan Ver. 1.1):

Japan was the first G7 country to release comprehensive AI ethics guidelines, in 2019, focusing on transparency, fairness, privacy, human control, and accountability. Japan aims to balance innovation and regulation, taking an "ethics by design" approach that encourages voluntary industry adoption of ethical principles. Key aspects of Japan's strategy include certification systems, regulatory sandbox environments to test AI, and incorporating ethics into the school curriculum.
Controversial areas include handling China's advances in AI amid rising tech competition, and debate over whether the guidelines should become legally binding. As G7 president in 2023, Japan will likely promote its vision of "human-centric AI" and ethics by design globally, but it faces challenges reconciling different regulatory approaches across countries.

Japan is taking an incremental approach focused on ethics and human control of AI. An ongoing debate exists about whether this self-regulatory model is sufficient or whether stricter laws are needed as harms emerge. AI governance is an urgent issue that requires the knowledge and experience of experts from various fields. Weak AI has reached the practical application stage, and Japan uses the term AI to mean weak AI, particularly in the academic discipline of machine learning.

AI Governance Trends in Japan

The discussion on AI governance is shifting from AI principles to AI governance that puts those principles into operation in society. A risk-based approach is taken, in which the degree of regulatory intervention should be proportionate to the impact of the risks. AI governance requires multi-stakeholder engagement, and the discussion must consider diverse views. Japan proposes a shift from rule-based regulations that specify detailed duties of conduct to goal-based rules that specify the value ultimately to be attained.

Pros: Provides a comprehensive overview of AI governance in Japan, which can serve as a reference for other countries. The risk-based approach ensures that regulatory intervention is proportionate to the impact of risks, which can prevent over-regulation and promote innovation. A measured pace, collaboration with industry, and an emphasis on ethics education.

Cons: Designing actual AI governance is not straightforward, given the complexity and multi-layered nature of government control. The principles are voluntary, and regulation may lag behind specific harms as they emerge.
The framework does not provide specific solutions to the issues raised, but proposes a general approach to AI governance.

Canada: The Artificial Intelligence and Data Act (AIDA)

The Artificial Intelligence and Data Act is part of Bill C-27, also known as the Digital Charter Implementation Act, 2022, tabled in the House of Commons in November 2022.

* AIDA is a proposed law regulating the development and use of artificial intelligence (AI) systems in Canada.
* AIDA would create a risk-based framework for regulating AI systems, with higher levels of oversight for systems that pose greater risks to individuals and society.
* AIDA would require AI systems to be designed and developed in a way that respects human rights and is transparent and accountable.

The Act aims to regulate international and interprovincial trade and commerce in artificial intelligence systems. It establishes requirements for designing, developing, and using AI systems, including measures to mitigate the risks of harm and biased output. It also prohibits specific practices with data and AI systems that may seriously harm individuals or their interests.

AIDA is part of a larger legislative effort that includes the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act. These acts aim to modernize and extend existing rules on collecting, using, and disclosing personal information for commercial activity in Canada. The Consumer Privacy Protection Act would also enhance the role of the Privacy Commissioner in overseeing organizations' compliance with these measures.
The Personal Information and Data Protection Tribunal Act would create a new administrative tribunal to hear appeals of orders issued by the Privacy Commissioner and to apply a new administrative monetary penalty regime created under the Consumer Privacy Protection Act.

AIDA may implicate rights under section 8 of the Charter, which protects against unreasonable searches and seizures. The Privacy Commissioner's powers, and specific provisions allowing government institutions access to personal information, may involve information subject to a reasonable expectation of privacy. AIDA may also affect freedom of expression, as restrictions on collecting, using, and disclosing personal information could affect commercial expressive activities.

The Act includes provisions for administrative monetary penalties and offenses for failing to comply with specific regulatory requirements; these offenses would be punishable by fine or imprisonment. The accompanying analysis is intended to provide legal information to the public and Parliament on a bill's potential effects on rights and freedoms that are neither trivial nor too speculative; it is not intended to be a comprehensive overview of all conceivable considerations.

Controversial areas:

* Some have criticized AIDA for being too restrictive, while others have argued that it does not go far enough.
* One of the most controversial aspects of AIDA is the definition of "high-risk" AI systems. AIDA defines high-risk AI systems as those that pose a significant risk to individuals or society.
Still, the criteria for determining whether an AI system is high-risk remain unclear.

Pros: AIDA could have several benefits for the AI industry, such as:

* Increasing public trust in AI systems.
* Helping ensure companies comply with data protection and privacy laws.
* Setting clear boundaries that help companies avoid legal snafus and bad press.

Cons: AIDA could also have some drawbacks for the AI industry, such as:

* Increased compliance costs, limiting growth.
* Delays in the development and deployment of AI systems.
* Reduced AI innovation under stringent rules and fines.

AIDA is a complex piece of legislation that is still under development. It remains to be seen how it will be implemented and enforced, and what its ultimate impact on the AI industry will be.

Additional Resources:

* The Artificial Intelligence and Data Act (AIDA) companion document: https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document
* Responsible use of artificial intelligence (AI): https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai.html

CONCLUSION - Taming the AI Giant

Each framework and proposal reflects its country or region of origin, so how can any of this be applied worldwide, or at least produce standards that we all follow? Almost all regulation would inhibit innovation and be costly, with compliance resting on a slow-moving political system that the rapid growth, use, and scaling of AI can quickly leave behind. Most of all, especially concerning China, the US, and the EU, governments and businesses have already broken these rules, gathered the data, and used it.
While regulation cannot be retroactive, how do we minimize the damage and stop harmful AI from happening? The regulation of AI needs to catch up with its usage; without rules, countries and businesses have decided to jump in without asking for permission.

Remember that AI didn't start with ChatGPT in November 2022; the algorithms and machine learning behind it have been around for over a decade, and the data gathered has already been used for commercial and governmental gain without rules. And these are the same institutions that plan on regulating it now. They rarely refer to the historical usage of AI or its abuses, focusing only on control at a country level and between countries.

In the next pod, we'll explore some of these impacts, what they mean, and how your privacy is not some political ideal; it's a right. At least outside of China, which ironically has the most transparent and actionable framework of any nation so far.

What do you think? Share a comment! To hear more, visit www.theaioptimist.com"	2023-08-04T00:00:00	1579128	The AI Optimist	Moving beyond AI hype, The AI Optimist explores how we can use AI to our advantage, how not to be left behind, and what's essential for business and education going forward. Each week for one year I'm exploring the possibilities of AI, against the drawbacks. Diving into regulations and the top 10 questions posed by AI Pessimists, I'm not here to prove I'm right. The purpose here is to engage in discussions with both sides, hear out what we fear and what we hope for, and help design AI models that benefit us all. www.theaioptimist.com	The AI Optimist with Declan Dunn - AI People First	46
195	7gAxhiVEVluW95qoHD3Spd	Taming the AI Giant: How Different Countries are Shaping AI Regulations - EP 3	"The AI Optimist Debate Question 1: While AI holds immense potential in education and business, serious privacy concerns exist. AI systems need access to vast amounts of personal data to provide personalized learning and business insights. How can we ensure this data is not misused or exploited, and what safeguards should we put in place to protect individuals' privacy rights?

The AI Optimist sees a future of international cooperation and collaboration, moving into possibility and out of fear. The AI pessimists see it going wrong, with each country doing its own thing. And in the current state of regulations, the pessimists are winning. What do you think? Reply to the email and comment below; let's explore how this works for people, not just governments and businesses!

Given the pessimists' constant focus on AI as being as dangerous as nuclear bombs, it's interesting that no one has proposed not using AI in the military. Nuclear war hasn't happened thanks to the theory of MAD (mutually assured destruction). Do we want to release the same kind of danger with AI, which is already being abused by governments, businesses, and militaries in major countries? Likely because AI misuse has already happened. If we know what we do now about how the threat of nuclear war and the spread of atomic capabilities threaten us all, why don't we focus on taming AI now, together?

From Facial Recognition to Tracking Digital Activity with Multiple Sets of Rules

The first wave of AI's serious usage came with facial recognition in China, the US, and the EU, not just by governments and cameras everywhere, but with businesses doing this in all these countries without asking for permission.
They just did it! We won't explore here the impact of AI-driven social algorithms on our habits, cocooning us into echo-chamber silos and subjecting younger generations to behavioral testing. But some of the following regulations finally cover societal impact, and we hope this continues, because AI has been around for a while with few rules.

China is the only one with a clear policy aligned with controlling the population. Yet the use in other countries is not exactly about freedom and individual rights; it's also about control. The more you study AI regulation, the more you see it's a game of control.

The AI Conundrum: How do we regulate what's already happening?

Global regulation of AI poses various challenges requiring careful consideration. The variety of laws, cultures, levels of technological savvy, and governments that already abuse AI creates contradictions that no single proposal or framework can cover. Nonetheless, there are current frameworks and proposals that we can explore to address these challenges effectively, and countries could learn from each other.

One of the significant challenges is determining common values across different countries, regions, and cultures. For example, China prioritizes regulations that align with socialist values, while the US, EU, and Canada focus on individual rights. Despite the existence of diverse cultures and rules, can we regulate AI effectively and navigate challenges as they emerge? Another issue is identifying the appropriate entity to regulate AI: should it be independent audits, third-party organizations, government entities, or institutions? And what is the risk? Measuring AI risk, and determining what constitutes a risk, means different things in different parts of the world: in China, the US, Japan, Canada, and elsewhere.

KEY QUESTIONS FOR AI REGULATION

1. How can we safeguard individuals' privacy rights while utilizing AI for personalized learning and business insights?
2.
Privacy concerns arise because AI systems require extensive access to personal data. What measures can be taken to prevent the misuse or exploitation of this data?
3. Selling data: the US is a leader in sales of personal data, including purchases by parts of the government (which would be against the law if done directly, but when the data is bought from private parties, there are no rules).

Before you think this is some conspiracy talk, read this from the US government, dated January 2022: Office of the Director of National Intelligence, Senior Advisory Group Panel on Commercially Available Information, "(U) Report to the Director of National Intelligence." The Defense Intelligence Agency buys data from LexisNexis; the Navy buys a database of people who might be tied to sanctioned persons from Sayari Analytics; the FBI buys social media alerts from ZeroFox, a cybersecurity company; and foreign entities and governments have also purchased this data, with location data, social activity, proximity to shopping areas and protests, and the list goes on and on.

KEY ISSUES

1. When comparing the approaches of the U.S. and EU, the U.S. Algorithmic Accountability Act of 2022 proposal focuses specifically on automated processes and systems that make critical decisions.
2. In contrast, the EU Artificial Intelligence Act framework applies to a broader range of AI systems and imposes regulatory requirements corresponding to the level of risk a system poses to the public.
3. Canada has no laws explicitly addressing AI like the U.S. and EU Acts. The Artificial Intelligence and Data Act (AIDA) is a proposal to address these issues and is quite restrictive, like the EU's. Canada does have the Directive on Automated Decision-Making in the public sphere, which mandates an assessment of the algorithmic impact of each automated decision-making system used by a federal institution.

Japan was the first G7 country to release comprehensive AI ethics guidelines, in 2019, gradually prioritizing ethics and human oversight in the use of AI.
There's an ongoing discussion about whether this self-regulatory model is enough or whether more stringent laws are necessary to address potential harm.

China is taking a firm stance on regulating the development and use of AI, focusing on ensuring technical safety and promoting innovation in government and industry. However, this approach does not prioritize the empowerment of citizens and could lead to isolation if human rights concerns are not addressed.

These regulations will have a significant impact worldwide. Even though AI (in the form of machine learning in the early days) has been used by most countries over the past ten years, regulation is lacking. Let's dive into what exists so you can explore further and form your own opinions; links are provided to each of the measures where possible below.

Leading the Way with an Actual Privacy Framework: China

China aims to lead globally in AI while mitigating risks. Its regulations focus on managing data, algorithms, and application scenarios.

China's Policy: Translation: Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment), April 2023

July 14, 2023 Update to the Policy: China takes a significant step in regulating generative AI services like ChatGPT. The rules will now only apply to services available to the general public in China.
Technology developed in research institutions or intended for overseas users' use is exempted.The current version has also removedlanguage indicating punitive measures that had included fines as high as 100,000 yuan ($14,027) for violations.The state encourages the innovative use of generative AI in all industries and fields and supports the development of secure and trustworthy chips, software, tools, computing power, and data sources, according to thedocument announcing the rules.China also urges platforms to participate in the formulation of international rules and standards related to generative AI, it said.Still, among the key provisions is a requirement for generative AI service providers to conduct security reviews and register their algorithms with the government, if their services are capable of influencing public opinion or can mobilize the public.Chinese AI GovernanceChina is rolling out some of the world's earliest and most detailed regulations governing artificial intelligence (AI).These rules will impact how AI technology is built and deployed within China and internationally.Western PerceptionIn the West, China's regulations are often dismissed as irrelevant or seen purely through geopolitical competition to write the rules for AI.However, these deserve careful study on how they will affect Chinas AI trajectory and what they can teach policymakers worldwide about regulating the technology.This article breaks down the regulations into their partsthe terminology, key concepts, and specific requirementsand then traces those components to their roots, revealing how Chinese academics, bureaucrats, and journalists shaped the regulations. 
3 Key Regulations: Chinas three most concrete and impactful regulations on algorithms and AI are its 2021 regulation on recommendation algorithms, the 2022 rules for profound synthesis (synthetically generated content), and the 2023 draft rules on generative AI.* The rules for recommendation algorithms bar excessive price discrimination and protect workers rights subject to algorithmic scheduling.* The deep synthesis regulation requires conspicuous labels on synthetically generated content.* The draft generative AI regulation requires the training data and model outputs to be true and accurate, a potentially insurmountable hurdle for AI chatbots to clear.Lessons for Policymakers: By rolling out targeted AI rules, Chinese regulators are steadily building up their bureaucratic know-how and governing capacity. Reusable regulatory tools like the algorithm registry can act as scaffolding to ease each successive regulation's construction. Key Players: The Cyberspace Administration of China (CAC) is the clear bureaucratic leader in governance to date. However, that position may grow more tenuous as the focus moves beyond the CACs core competency of online content controls. The Ministry of Science and Technology is another key player. Future of AI  in China: In the years ahead, China will continue rolling out targeted AI regulations and laying the groundwork for a capstone national AI law. Any country, company, or institution that hopes to compete against, cooperate with, or understand Chinas AI ecosystem must examine these moves closely.Pros: China's AI regulations provide a comprehensive framework for AI governance, which can be a reference for other countries. These aim to protect individuals and society from potential adverse impacts of AI, such as excessive price discrimination and worker exploitation. They promote transparency and accountability in AI development and deployment. Addressing AI safety early before harms emerge. 
Emphasize controlling AI instead of empowering citizens and stifling innovation with a top-down approach. China's tech industry has pushed back against some regulations like mandatory algorithm audits. But compliance is increasing as enforcement rises. With looser ethics restrictions, China could pull ahead in AI through the sheer scale of data and research. But ethical lapses could hamper global collaborationCons: The requirement for AI outputs to be ""true and accurate"" could pose significant challenges for AI developers, particularly for AI chatbots. The regulations could stifle innovation and limit the creative use of AI technologies. The regulations may be seen as a means for the Chinese government to control AI technologies and their use. Key aspects include required impact assessments before deploying high-risk AI, registering AI companies, and liability rules. Voluntary ethics principles exist. Controversial areas include broad surveillance uses of AI, blurring between voluntary and binding rules, and keeping some things opaque. As a significant AI player, China's regulations could influence global norms. But its top-down, control-focused approach differs from Western emphasis on individual rights.China is assertively regulating AI development and use primarily from government and industry perspectives.The focus is on technical safety and innovation gains rather than empowering citizens.China's regulations will shape the global landscape but may also isolate it if human rights implications still need to be addressed.Articles exploring Chinas approach to AI:Carnegie Endowment for International Peace article: ""Chinas AI Regulations and How They Get Made,Chinas New Blueprint: Regulating the Wild Wild East of AI by Shelly PalmerEU's proposed Artificial Intelligence Act: The European Commission has proposed a Regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act). 
This Act is designed to regulate the use of AI across various industries and social activities. It is the first significant attempt to regulate AI globally and aims to address the risks of specific AI systems while supporting innovation. It classifies AI systems as low/minimal risk, high-risk, or unacceptable. The strictest rules apply to high-risk systems like self-driving cars. Key requirements for high-risk AI: human oversight, robustness/accuracy, transparency, and provision of information to users. Premarket conformity assessments are required before high-risk AI can be used, with ongoing monitoring once operational. Controversial areas include the single definition of AI, the scope of high-risk systems, and the risk of stifling innovation with red tape. As the first significant framework, it could influence AI regulation globally, but it risks being too EU-focused, and debate continues over the right balance between safety and innovation. The Act aims to ensure that AI systems placed on the Union market and used in the Union are safe and respect existing laws on fundamental rights and Union values. It also seeks to provide legal certainty to facilitate investment and innovation in AI, enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems, and facilitate the development of a single market for lawful, safe, and trustworthy AI applications. The Act proposes a single future-proof definition of AI. Certain particularly harmful AI practices are prohibited as contravening Union values, while specific restrictions and safeguards are set out for certain uses of remote biometric identification systems (such as facial recognition) for law enforcement. The Act lays down a solid risk methodology to define high-risk AI systems that pose significant risks to people's health and safety or fundamental rights. 
These AI systems must comply with a set of horizontal mandatory requirements for trustworthy AI and follow conformity assessment procedures before those systems can be placed on the Union market. The Act proposes a governance system at Member State level, building on already existing structures, and a cooperation mechanism at the Union level with the establishment of a European Artificial Intelligence Board. Additional measures are also proposed to support innovation, mainly through AI regulatory sandboxes and other efforts to reduce the regulatory burden and support Small and Medium-Sized Enterprises (SMEs) and start-ups. The Act is part of a broader comprehensive package of measures that address problems posed by the development and use of AI. It is also coherent with the Commission's overall digital strategy in its contribution to promoting technology that works for people. It is one of the three pillars of the policy orientation and objectives announced in the Communication ""Shaping Europe's digital future"". The EU AI Act takes a precautionary approach to ensure trustworthy AI, but its breadth and requirements, like conformity assessments, raise concerns about slowing European AI innovation. Aspects like its risk-based approach could provide a model for global oversight while allowing benign AI to thrive. Pros: The Act aims to ensure that AI systems are safe and respect existing laws on fundamental rights and Union values. It protects fundamental rights, allows safer AI applications, and builds trust. It provides legal certainty, facilitating investment and innovation in AI. It enhances governance and effectively enforces laws on fundamental rights and safety requirements applicable to AI systems. It enables the development of a single market for lawful, safe, and trustworthy AI applications. Cons: The Act may impose additional regulatory burdens on AI developers and users, specifically those with high-risk AI systems. 
It may limit certain AI practices, potentially stifling innovation, placing a burden on developers, and drawing unclear distinctions between risk categories. The Act's requirements for high-risk AI systems may be challenging for some organizations, potentially limiting their ability to develop or deploy such systems. Additional Research: FUNDAMENTALS OF A REGULATORY SYSTEM FOR ALGORITHM-BASED PROCESSES, an expert opinion prepared on behalf of the Federation of German Consumer Organisations (Verbraucherzentrale Bundesverband), 1/5/19; Regulating AI in the EU: three things you need to know and three reasons why you must know them! US Proposal: Introduction of the Algorithmic Accountability Act of 2022. Source: U.S. House and Senate Reintroduce the Algorithmic Accountability Act Intended to Regulate AI. On February 3, 2022, U.S. Democratic lawmakers introduced the ""Algorithmic Accountability Act of 2022"" in both the Senate (S. 3572) and the House of Representatives (H.R. 6580). This act aims to hold organizations accountable for using algorithms and other automated systems to make critical decisions affecting individuals in the U.S. Key points:* The AAAI is a proposed law regulating the development and use of automated decision systems (ADS) in the United States.* The AAAI would require companies to assess their ADS impacts on individuals and take steps to mitigate any adverse effects.* The AAAI would also give individuals the right to access and correct information about themselves used in an ADS.* Controversial areas:* Some have criticized the AAAI for being too burdensome, while others have argued that it does not go far enough.* One of the most controversial aspects of the AAAI is the definition of an ""automated decision system.""* The AAAI defines ADS as any system that ""uses algorithms or other automated processes to make decisions that have a significant impact on individuals."" However, the specific criteria for determining whether a system is an ADS remain unclear. Purpose of the Act: 
The U.S. Act intends to increase transparency over how algorithms and automated systems are used in decision-making contexts to reduce discriminatory, biased, or harmful outcomes. Covered Entities and Key Definitions: The U.S. Act applies to businesses under its definition of ""covered entities."" These can be divided into two broad categories: (i) businesses that deploy ""augmented critical decision processes"" (ACDP); and (ii) businesses that deploy ""automated decision systems"" (ADS), which are then used by the first category of companies in an ACDP. Impact Assessments: The U.S. Act would require the Federal Trade Commission (FTC) to issue regulations requiring covered entities to perform impact assessments of any deployed ACDP or any deployed ADS developed for use by a covered entity of the first category in an ACDP. Content of the Impact Assessment: While the FTC still needs to define the precise form and content of impact assessments, the U.S. Act already provides a long list of action items for covered entities to carry out when conducting them. Pros: The AAAI could have several benefits for the AI industry, such as:* Increased public trust in AI systems.* Improved compliance with data protection and privacy laws.* Reduced risk of negative publicity or legal action. The U.S. Act aims to increase transparency and accountability in the use of automated decision-making systems, which can help reduce discriminatory or harmful outcomes. The Act could serve as a model for other countries in developing their regulations for AI and automated decision-making systems. Cons: The Act could impose significant compliance burdens on small and medium-sized businesses. The Act's focus on ""critical decisions"" may limit its applicability and leave specific uses of AI and automated decision-making systems unregulated. 
The Act leaves much to be decided by the FTC, which could lead to uncertainty for businesses regarding compliance requirements. Here are some additional resources:* The Algorithmic Accountability Act of 2022 (AAAI): https://www.congress.gov/bill/117th-congress/senate-bill/3572* The AI Now Institute: https://ainowinstitute.org/* The Center for Data Innovation: https://www.datainnovation.org/ Wyden, Paul and Bipartisan Members of Congress Introduce The Fourth Amendment Is Not For Sale Act. The Fourth Amendment Is Not for Sale Act closes the legal loophole that allows data brokers to sell Americans' personal information to law enforcement and intelligence agencies without any court oversight, in contrast to the strict rules for phone companies, social media sites, and other businesses that have direct relationships with consumers. ""Doing business online doesn't amount to giving the government permission to track your every movement or rifle through the most personal details of your life,"" Wyden said. ""There's no reason information scavenged by data brokers should be treated differently than the same data held by your phone company or email provider. This bill closes that legal loophole and ensures that the government can't use its credit card to end-run the Fourth Amendment."" AI Regulation in Japan: Proposal for Human-centric AI. The critical points about Japan's proposed approach to AI. Source: AI Governance in Japan Ver. 1.1. Japan was the first G7 country to release comprehensive AI ethics guidelines in 2019, focusing on transparency, fairness, privacy, human control, and accountability. Japan aims to balance innovation and regulation, taking an ""ethics by design"" approach that encourages voluntary industry adoption of ethical principles. Key aspects of Japan's strategy include certification systems, sandbox regulatory environments to test AI, and incorporating ethics into the school curriculum. 
Controversial areas include handling China's advances in AI amid rising tech competition and debate over whether guidelines should become legally binding. As G7 president in 2023, Japan will likely promote its vision of ""human-centric AI"" and Ethics by Design globally but faces challenges reconciling different regulatory approaches across countries. Japan is taking an incremental approach focused on ethics and human control of AI. An ongoing debate exists about whether this self-regulatory model is sufficient or if stricter laws are needed as harms emerge. AI governance is an urgent issue that requires the knowledge and experience of experts from various fields. Weak AI has reached the practical application stage, and Japan uses the term AI to mean Weak AI, particularly in academic disciplines related to machine learning. AI Governance Trends in Japan: The discussion on AI governance is shifting from AI principles to AI governance that carries out or puts into operation AI principles in society. A risk-based approach is taken, where the degree of regulatory intervention should be proportionate to the impact of risks. AI governance requires multi-stakeholder engagement, and the discussion must consider diversified views. Japan proposes a shift from rule-based regulations that specify detailed duties of conduct to goal-based rules that ultimately specify the value to be attained. Pros: Provides a comprehensive overview of AI governance in Japan, which can serve as a reference for other countries. The risk-based approach to AI governance ensures that regulatory intervention is proportionate to the impact of risks, which can prevent over-regulation and promote innovation. Measured pace, collaboration with industry, and emphasis on ethics education. Cons: Designing actual AI governance is not straightforward due to the complexity and multi-layered nature of governance. The voluntary nature of the principles means regulation may lag behind specific harms. 
It does not provide specific solutions to the issues raised but proposes a general framework for AI governance. Canada: The Artificial Intelligence Act (AIA). The Artificial Intelligence and Data Act is part of Bill C-27, also known as the Digital Charter Implementation Act, 2022, tabled in the House of Commons in November 2022.  * The AIA is a proposed law regulating the development and use of artificial intelligence (AI) systems in Canada.* The AIA would create a risk-based framework for regulating AI systems, with higher levels of oversight for systems that pose more significant risks to individuals and society.* The AIA would require AI systems to be designed and developed in a way that respects human rights, is transparent, and is accountable. The Act aims to regulate international and interprovincial trade and commerce in artificial intelligence systems. It establishes requirements for designing, developing, and using AI systems, including measures to mitigate risks of harm and biased output. It also prohibits specific practices with data and AI systems that may seriously harm individuals or their interests. The AIA is part of a larger legislative effort that includes the Consumer Privacy Protection Act and the Personal Information and Data Protection Tribunal Act. These acts aim to modernize and extend existing rules on collecting, using, and disclosing personal information for commercial activity in Canada. The Consumer Privacy Protection Act would also enhance the role of the Privacy Commissioner in overseeing organizations' compliance with these measures. 
The Personal Information and Data Protection Tribunal Act would create a new administrative tribunal to hear appeals of orders issued by the Privacy Commissioner and apply a new administrative monetary penalty regime created under the Consumer Privacy Protection Act. The AIA may implicate rights under section 8 of the Charter, which protects against unreasonable searches and seizures. The Privacy Commissioner's powers and specific provisions allowing government institutions access to personal information may involve information subject to a reasonable expectation of privacy. The AIA may also impact freedom of expression, as restrictions on collecting, using, and disclosing personal information could affect commercial expressive activities. It includes provisions for administrative monetary penalties and offenses for failing to comply with specific regulatory requirements. These offenses would be punishable by way of fine or imprisonment. It is intended to provide legal information to the public and Parliament on a bill's potential effects on rights and freedoms that are neither trivial nor too speculative. It is not intended to be a comprehensive overview of all conceivable considerations.* Controversial areas:* Some have criticized the AIA for being too restrictive, while others have argued that it does not go far enough.* One of the most controversial aspects of the AIA is the definition of ""high-risk"" AI systems.* The AIA defines high-risk AI systems as those that pose a significant risk to individuals or society. 
Still, the criteria for determining whether an AI system is high-risk are unclear. Pros: The AIA could have several benefits for the AI industry, such as:* Increasing the trust people have in AI systems.* Ensuring that companies comply with data protection and privacy laws.* Clear boundaries to avoid legal snafus and bad press. Cons: The AIA could also have some drawbacks for the AI industry, such as:* Increasing costs to follow the regulations, limiting growth.* Delays in the development and deployment of AI systems.* Reducing AI innovation with stringent rules and fines. The AIA is a complex piece of legislation that is still under development. It remains to be seen how the AIA will be implemented and enforced and its ultimate impact on the AI industry. Additional Resources:* The Artificial Intelligence and Data Act (AIDA) companion document: https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document* Responsible use of artificial intelligence (AI): https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai.html CONCLUSION - Taming the AI Giant: Each framework and proposal is based on the country or region of origin, so how can this be applied worldwide? Or at least create standards that we all follow? Almost all regulation would inhibit innovation and be costly, with compliance based on a slow-moving political system that the rapid growth, use, and scaling of AI can quickly leave behind. Most of all, in China, the US, and the EU especially, governments and businesses have already broken these rules, gathered data, and used it. 
While regulation cannot be retroactive, how do we minimize the damage and stop harmful AI from happening? The regulation of AI needs to catch up with its usage, and without rules, countries and businesses have decided to jump in without asking for permission. Remember that AI didn't start with ChatGPT in November 2022; the algorithms and machine learning have been around for over a decade. The data gathered has already been used for commercial and governmental gain without rules. And these are the same institutions that plan on regulating it now. They rarely refer to historical usage of AI or abuses, only focusing on control at a country level and between countries. In the next pod, we'll explore some of these impacts, what they mean, and how your privacy is not some political ideal; it's a right, at least outside of China, which ironically has the most transparent and actionable framework of any nation so far. What do you think? Share a comment! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.theaioptimist.com"	2023-08-04T00:00:00	1579128	The AI Optimist	Moving beyond AI hype, The AI Optimist explores how we can use AI to our advantage, how not to be left behind, and what's essential for business and education going forward. Each week for one year I'm exploring the possibilities of AI, against the drawbacks. Diving into regulations and the top 10 questions posed by AI Pessimists, I'm not here to prove I'm right. The purpose here is to engage in discussions with both sides, hear out what we fear and what we hope for, and help design AI models that benefit us all. www.theaioptimist.com	The AI Optimist with Declan Dunn - AI People First	46
196	6Y1ZniNPpSyhTFLp0D9za7	Oversight of A.I.: Principles for Regulation (Part 2/4)	(Music by Karl Casey @ White Bat Audio) TL;DL: We're back with another DH and AP Reacts podcast, this time covering the Oversight of A.I.: Principles for Regulation held by the U.S. Senate Committee on the Judiciary on Tuesday, July 25th, 2023. This episode is part two of this listen/reaction podcast, covering questions from Mr. Blumenthal, Mr. Hawley, and Ms. Klobuchar. As before, the witness panel includes:* Dario Amodei (CEO of Anthropic, an OpenAI competitor and owner of the Claude2 AI Chatbot)* Yoshua Bengio (AI Researcher, Professor in the Department of Computer Science and Operations Research at Université de Montréal)* Stuart Russell (AI Researcher, Professor of Computer Science, The University of California, Berkeley). The hearing is worth watching/listening to even if you don't care for our commentary - lots of good information to consider and discuss in your circles. You can watch the original recording in the link above, or on any of a number of YouTube videos like the one embedded below: Looking for Part 1? Start here: Get full access to Digital Heresy at www.digitalheresy.com/subscribe	2023-08-03T00:00:00	3506468	Digital Heresy Podcast	A podcast about Artificial Intelligence for the enthusiast floundering between excitement and existential dread. The podcast is an extension of the Digital Heresy Substack, where we cover topics in deeper philosophical detail.   Music by Karl Casey @ White Bat Audio www.digitalheresy.com	Digital Heretic	10
197	5KDA9gMdPvUgUSMzlJp5uK	Oversight of A.I.: Principles for Regulation (Part 1/4)	(Music by Karl Casey @ White Bat Audio) TL;DL: We're back with another DH and AP Reacts podcast, this time covering the Oversight of A.I.: Principles for Regulation held by the U.S. Senate Committee on the Judiciary on Tuesday, July 25th, 2023. This is the second of such hearings hosted by Chair Blumenthal. Digital Heresy also covered the first hearing, which you can watch/listen to here: For this second hearing, the Senate is back, but this time with a new batch of witnesses, including: * Dario Amodei (CEO of Anthropic, an OpenAI competitor and owner of the Claude2 AI Chatbot)* Yoshua Bengio (AI Researcher, Professor in the Department of Computer Science and Operations Research at Université de Montréal)* Stuart Russell (AI Researcher, Professor of Computer Science, The University of California, Berkeley). This episode is another live listen and react featuring commentary from myself and A Priori. The overall hearing is over 2 hours long, so this is part one of four, and covers the first hour of the hearing through the opening statements from the panel and the witnesses. The hearing is worth watching/listening to even if you don't care for our commentary - lots of good information to consider and discuss in your circles. You can watch the original recording in the link above, or on any of a number of YouTube videos like the one embedded below: Get full access to Digital Heresy at www.digitalheresy.com/subscribe	2023-08-02T00:00:00	4642272	Digital Heresy Podcast	A podcast about Artificial Intelligence for the enthusiast floundering between excitement and existential dread. The podcast is an extension of the Digital Heresy Substack, where we cover topics in deeper philosophical detail.   Music by Karl Casey @ White Bat Audio www.digitalheresy.com	Digital Heretic	10
198	6wIEWQ9a1cRPaoxNG9IGf7	AI Today Podcast: Trustworthy AI Series: Why are trustworthy, ethical and responsible AI systems necessary?	For anyone who has used or interacted with an AI system, you know that trust is required for AI systems if you want them to deliver any meaningful benefit. And once trust is lost, it's almost impossible to gain back. Therefore, making trustworthy, ethical & responsible AI a reality is not just a policy statement or a press release. Continue reading AI Today Podcast: Trustworthy AI Series: Why are trustworthy, ethical and responsible AI systems necessary? at Cognilytica.	2023-08-02T00:00:00	1505811	AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion	AI Today Podcast	AI & Data Today	447
199	4Q0LPqZxVWk0dxoiam97B8	Season 3: Episode 6 - AI in Dispute Resolution: Benefits, Dangers and the Future of Regulation	This is the sixth episode in the third series of the Fountain Court Podcast. The episode is hosted by Jacob Turner, a barrister at Fountain Court, and is a recording of an in-person event hosted by Fountain Court in June 2023. The event was the first in the Fountain Court Academy series, a programme of events conceived, hosted and managed by juniors at Fountain Court Chambers.   This first event focused on the use of artificial intelligence in dispute resolution, both now and in the future, and featured a panel of speakers that allowed for a variety of viewpoints across the legal and regulatory spectrum.  Jacob is a junior barrister with a particular focus on AI-related matters. He authored Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2018) and contributed to The Law of Artificial Intelligence (Sweet & Maxwell, 2020), and has acted in various precedent-setting disputes in the sector.  Joining Jacob Turner in the discussion are:  Sir Marcus Smith, a Judge in the High Court Chancery Division and Chair of the Competition Appeal Tribunal who has presided over several cases which involve algorithms and AI.   Sana Khareghani, former Head of the UK Government Office for AI and now a Professor of Practice in AI at King's College London and one of the most sought-after speakers internationally on the topic of the regulation of AI.  Andrew Denny, partner at Allen & Overy who leads the firm's global Business & Human Rights practice and is a member of its AI Steering Group as well as an active participant in its trial of Harvey AI.   Patricia Shaw, founder and CEO of Beyond Reach Consulting and an international advisor on tech ethics policy, governance and regulation as well as an experienced lawyer who has held positions in various AI associations.  
During the session, the speakers discussed the definition of AI, its benefits and perceived drawbacks and the potential future uses of the technology. They also covered experiences of the technology in practice and spoke about how AI should and shouldn't be used in litigation. The panel also discussed UK regulation of AI and how this compares to EU equivalents.  For more information about Fountain Court's AI and broader technology-related experience, please visit https://www.fountaincourt.co.uk/expertise/technology/	2023-07-27T00:00:00	4000548	The Fountain Court Chambers Podcast	The Fountain Court Podcast provides legal analysis, horizon scanning and insight into one of the UK's leading commercial chambers.   In this series, we'll be exploring key legal issues and updates, industry developments and possible future trends relating to the sectors and practice areas in which our barristers specialise. Our members will be joined by industry experts to discuss each topic.   If you have ever wanted to hear directly from experienced silks on specific legal topics, or if you have ever wondered what life is like at a commercial chambers; the Fountain Court Podcast will be of interest to you.	Fountain Court Chambers	33
200	5SokpVXgQHYrXnf5U0tuw1	OMM #15: AI Regulations, Accountability, and the Weight of Consequences	" In this episode of On My Mind, a conversation about AI regulations takes an unexpected turn as the participants discuss the discrepancy between public concerns over AI's impact on jobs and copyright versus its potential for causing significant harm. They highlight instances where AI has been involved in tampering with elections or developing autonomous weapons, but the response to these issues has been relatively muted. The discussion delves into the importance of holding individuals accountable for their actions involving AI, emphasizing that the technology itself is not the problem, but rather how it is used. The participants explore the need for stronger regulations and the enforcement of accountability, especially in cases involving military applications of AI. They question whether making culpability traceable or implementing strict penalties for wrongful actions could deter the misuse of AI technology. However, they acknowledge the challenges of enforcing accountability, particularly when those responsible for upholding the law are the ones breaking it.  Welcome to Cross Labs' podcast, ""On My Mind"", where our team of AI researchers share their thoughts and musings on various topics, ranging from cutting-edge AI research to technology ethics. Join us every week as we explore the latest developments in science and technology, share our breakthroughs, and delve into the big questions that drive our research.  
Subscribe to Cross Labs Channel for more videos like this  https://www.youtube.com/c/CrossLabs Follow Cross Labs on Twitter  https://twitter.com/crosslabstokyo Visit Cross Labs website  https://www.crosslabs.org  Intro/Outro Music by [TuesdayNight] (https://pixabay.com/users/tuesdaynight-24999875)  #AI #Science #Intelligence #ArtificialLife #Technology #Research #ArtificialIntelligence #LLMs #LargeLanguageModels #Communication #NewTechnology #Algorithm  "	2023-07-05T00:00:00	491648	On My Mind	A weekly science podcast about intelligence, life, and technology. Produced and hosted by Cross Labs members and community. We are scientists and researchers who think deeply about what intelligence, life, technology are, but also what they could be. And with today's extremely fast rate of algorithm development, we certainly do have a lot on our minds. So sit back, relax, and enjoy the conversation.	Cross Labs	16
201	671l7KCtqbdu0CVy8l9R5Z	European VCs and CEOs Sound Alarm on AI Over-Regulation	In this episode, we delve into the open letter signed by European VCs and tech firms, warning about the potential stifling impact of over-regulating AI as per draft EU laws. We explore their concerns, the implications for innovation in the AI space, and how a balance might be struck between regulation and technological progress.    Inflection AI Report Get on the AI Box Waitlist:https://AIBox.ai/ Investor Contact Email:jaeden@aibox.ai Facebook Community: https://www.facebook.com/groups/739308654562189/ Discord Community:https://aibox.ai/discord   	2023-07-04T00:00:00	708046	AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning	AI Chat is the podcast where we dive into the world of ChatGPT, cutting-edge AI news and its impact on our daily lives. With in-depth discussions and interviews with leading experts in the field, we'll explore the latest advancements in language models, machine learning, and more.   From its practical applications to its ethical considerations, AI Chat will keep you informed and entertained on the exciting developments in the world of AI. Tune in to stay ahead of the curve on the latest technological revolution.	Jaeden Schafer	719
202	6WdOmr9I4nBDrZvW94GLpV	Episode 2: AI News and Regulations	Jim and I have a conversation about several articles that are currently in the news, from an AI watch that analyzes sweat to the current discussions between 22 countries concerning AI regulations. Listen as we give you the latest in AI news.	2023-07-03T00:00:00	1599088	AI In Action: Exploring Tomorrow's Tech Today	Join hosts Maurie and Jim on 'AI in Action: Exploring Tomorrow's Tech Today'. Drawing from their extensive education and technology backgrounds, they navigate AI's rapidly evolving landscape across sectors. Each episode unpacks the latest AI breakthroughs, making this complex field accessible for educators, tech enthusiasts, and curious minds alike. Tune in to explore how AI is transforming our world, today and tomorrow.	Maurie Beasley	44
203	3NSdBcETOdOiryYZKecDaV	Checking in on AI regulation	Generative AI continues to dominate conversations around tech worldwide. Powerful tools such as large language models have already demonstrated clear value, with a range of business use cases and tons of potential to become even more sophisticated as time goes on. Amongst this progress, however, warnings have been raised. Regulators the world over are carefully assessing the risks posed by AI systems, and those behind some of the most recognizable AI companies including OpenAI and Google have weighed in on the potential risks and benefits of the new technology. In this episode, Rory is joined by Ross Kelly, IT Pro's cloud computing specialist and staff writer, to discuss the recent developments in AI regulation and where it could be headed. For more information, read the show notes here.	2023-06-30T00:00:00	1691663	The ITPro Podcast	The ITPro Podcast is a weekly show for technology professionals and business leaders. Each week hosts Rory Bathgate (@rorybathgate) and Jane McCallion (@JaneMcCallion) are joined by an expert guest to take a deep dive into the most important issues for the IT community. New episodes premiere every Friday. Visit itpro.com/uk/the-it-pro-podcast for more information, or follow ITPro on LinkedIn for regular updates.	IT Pro	245
204	6QfnODGbXfVXf9zHFu1iMq	Video-LLaMA, Mechanical Turk, and EU AI Regulation	Welcome to AI Daily, your go-to podcast for the latest updates in the world of artificial intelligence! In today's episode, we have some banger stories lined up for you. Join us as we dive into the exciting advancements in the realm of Mechanical Turk, the impact of AI in the EU Parliament, and a cutting-edge multimodal technology called Video LLaMA. Key Points: Video LLaMA * A new paper called Video LLaMA focuses on turning video and audio into text and understanding them better. * The paper addresses two main challenges: capturing temporal changes in video scenes and integrating audio and visual signals. * The model showcased in the paper demonstrates accurate predictions and understanding of videos, including analyzing images, audio, facial expressions, and speech. * The availability of the model for public use is uncertain as it is currently a research paper, but it highlights the potential of leveraging AI tools like ImageBind and audio transformers to enhance video understanding. Mechanical Turk * A study reveals that a significant portion (around 36-44%) of text summarization tasks on Mechanical Turk are being done by AI models like ChatGPT instead of humans. * The displacement of human workers by synthetic models raises concerns about the availability and quality of real data for training larger language models like GPT-4 and GPT-5. * Detecting synthetic data generated by language models is challenging, and specialized classifiers may be required to distinguish between human-generated and AI-generated text. * The increasing reliance on AI models for tasks like text summarization may lead to the introduction of stricter verification measures, such as keystroke tracking or biometric testing, to ensure authenticity in online assessments and proctoring. EU Parliament & AI * The EU Parliament is taking steps towards AI regulation, although the specifics and implications are unclear. * There are concerns about redundancy in creating separate AI-specific regulations when existing laws could cover related aspects such as data privacy. * The potential impact of AI regulation on startups and small players is uncertain, as compliance requirements and limitations on training AI models could arise. * The regulation aims to address issues like transparency, disclosure of AI-generated content, and prohibitions on certain applications like social scoring and real-time facial recognition. However, some argue that these issues can be legislated without directly tying them to AI. Links Mentioned: * Video LLaMA * Mechanical Turk * EU Parliament * Vercel AI * AI SDK * Carbon Health Follow us on Twitter: * AI Daily * Farb * Ethan * Conner Subscribe to our Substack: * Subscribe This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aidailypod.com	2023-06-16T00:00:00	1112030	AI Daily	The go-to podcast to stay ahead of the curve when it comes to the rapidly changing world of AI. Join us for insights and interviews on the most useful AI tools and how to leverage them to drive your goals forward. www.aidailypod.com	Daily insights on the latest news, innovations, and tools in the world of AI.	56
205	2dMQ3PT9amaIR6IpIm48Uh	EP8  Dictators, Democracies, and Anarchists, oh my! The ins and outs of Mark Zuckerbergs AI future, EU regulation, and the Reddit mob.	Like and Subscribe to the podcast! Visit us at www.withmarket.com Follow us on Twitter @withmarkethq, @JamesBorow, and @ddruger  Links to articles and topics discussed: Zuckerberg talks about his AI vision with Lex Friedman: https://www.vice.com/en/article/jg55j7/zuckerbergs-vision-for-ai-a-bot-that-makes-ads-and-helps-you-say-happy-birthday-to-your-friends Mob rule; Reddit goes dark: https://techcrunch.com/2023/06/14/hundreds-of-subreddits-plan-to-go-dark-indefinitely-after-reddit-ceos-internal-memo/ EU regulates AI: https://www.nytimes.com/2023/06/14/technology/europe-ai-regulation.html EU calls for a Google ads breakup: https://www.bloomberg.com/news/articles/2023-06-14/google-hit-with-eu-charge-sheet-over-ad-tech-dominance#xj4y7vzkg Montana holds strong on their TikTok ban: https://www.foxnews.com/media/montana-attorney-general-stands-tiktok-ban-despite-lawsuits-spying-tool-chinese-communist-party 	2023-06-15T00:00:00	1261348	Taking Inventory	Ex Snap execs, founders & angel investors James Borow and Daniel Druger, cover all things ad tech, big tech, digital media & more. They are joined for in-depth conversations by special guests that are directly shaping the digital economy.  New episodes drop every Monday. Make sure to subscribe wherever you get your podcasts..	Inventory Ventures	63
206	7Dzi89FxjQdY1ukphcnTfL	Fintech and New Technology Regulation	In this podcast, Ian Duffy, Partner, and Ciara Anderson, Senior Associate, from our Technology and Innovation Group look at the new regulation around data protection, outsourcing, operational resilience and information security. They look at the recent and emerging laws in the space including the Digital Operational Resilience Act (DORA), the Revised Network and Information Security Directive (NIS 2) and the soon-to-be finalised Artificial Intelligence Act, and how they are relevant to technology services providers. They also look at the current regulatory obligations that apply to fintech providers and how they can ensure they are complying with these.  Disclaimer: The contents of this podcast are to assist access to information and do not constitute legal or other advice. Specific advice should be sought in relation to specific cases. If you would like more information on this topic, please contact a member of our team or your usual Arthur Cox contact.	2023-06-13T00:00:00	919109	AC Audio	Podcast by Arthur Cox LLP	Arthur Cox LLP	57
207	7Lq8hnuUgwLC4S0WXj2iVK	Aleksandr Tiulkanov on AI Policy, Laws, Regulation, and What We Really Need from Government - Voicebot Podcast 329	"Aleksandr Tiulkanov is a lawyer specializing in AI policy, regulation, and legal frameworks. Early work in GDPR and AI regulation before the generative AI frenzy led him to be hired by several governments to help draft AI policy. Few people have focused exclusively on AI policy for a significant period of time, and his insights are grounded in that experience. His work was influential at the Council of Europe, a key player in the latest EU AI regulatory framework. We talk about imitation vs inappropriate use of data, appropriationism, whether chatbots are capable of defamation, copyright, IP protection, and what the real level of urgency is around regulating generative AI. We also talk about his recent colorfully-titled post, "Let's not bomb the AI data centers just yet." Aleksandr Tiulkanov is an AI, data, and digital policy counsel. He earned his law degree at the University of Edinburgh and has specialized in AI regulation since 2015. He is a former senior manager at Deloitte, where he worked with corporations on AI policies. He later worked for several governments, including the Council of Europe, to design their AI policy and regulatory frameworks. He is currently a researcher at the Center for International Intellectual Property Studies and working independently with government agencies on AI policy."	2023-06-08T00:00:00	4930440	Voicebot Podcast	The Voicebot Podcast provides weekly in-depth coverage of the voice and AI industries. We interview the startup founders, voice and AI technology experts, VCs, linguists, scientists and even executives from the large companies moving the space forward. Voice assistant consumer adoption of Amazon Alexa, Google Assistant, Apple Siri and more are covered in depth. Join us.	Bret Kinsella	374
208	0UvLaROxG7y4sho3xfWh28	Debating AI Regulation	"This episode of the Cyberlaw Podcast kicks off with a spirited debate over AI regulation. Mark MacCarthy dismisses AI researchers' recent call for attention to the existential risks posed by AI; he thinks it's a sci-fi distraction from the real issues that need regulation: copyright, privacy, fraud, and competition. I'm utterly flummoxed by the determination on the left to insist that existential threats are not worth discussing, at least while other, more immediate regulatory proposals have not been addressed. Mark and I cross swords about whether anything on his list really needs new, AI-specific regulation when Big Content is already pursuing copyright claims in court, the FTC is already primed to look at AI-enabled fraud and monopolization, and privacy harms are still speculative. Paul Rosenzweig reminds us that we are apparently recapitulating a debate being held behind closed doors in the Biden administration. Paul also points to potentially promising research from OpenAI on reducing AI hallucination. Gus Hurwitz breaks down the week in FTC news. Amazon settled an FTC claim over children's privacy and another over security failings at Amazon's Ring doorbell operation. The bigger story is the FTC's effort to issue a commercial death sentence on Meta's children's business for what looks to Gus and me more like a misdemeanor. Meta thinks, with some justice, that the FTC is looking for an excuse to rewrite the 2019 consent decree, something Meta says only a court can do. Paul flags a batch of China stories: China's version of Bloomberg has begun quietly limiting the information about China's economy that is available to overseas users. TikTok is accused of storing influencers' sensitive financial information in China, contrary to its promises. Malaysia won't ban Huawei from its 5G network. The former Harvard chair convicted of lying about taking Chinese money has been sentenced to just two days in prison. And another professor charged and then exonerated of commercial espionage has won the right to sue the FBI for his arrest. Gus tells us that Microsoft has effectively lost a data protection case in Ireland and will face a fine of more than $400 million. I seize the opportunity to plug my upcoming debate with Max Schrems over the Privacy Framework. Paul is surprised to find even the State Department rising to the defense of section 702 of the Foreign Intelligence Surveillance Act (FISA). Gus asks whether automated tip suggestions should be condemned as dark patterns and whether the FTC needs to investigate the New York Times's stubborn refusal to let him cancel his subscription. He also previews California's impending Journalism Preservation Act. Download 461st Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets."	2023-06-06T00:00:00	3578488	The Cyberlaw Podcast	The Cyberlaw Podcast is a weekly interview series and discussion offering an opinionated roundup of the latest events in technology, security, privacy, and government. It features in-depth interviews of a wide variety of guests, including academics, politicians, authors, reporters, and other technology and policy newsmakers. Hosted by cybersecurity attorney Stewart Baker, whose views expressed are his own.	Stewart Baker	100
209	6GEYdc5jc106P3fzERWZ4c	32% Can't Tell AI from Human, US and EU Announce AI Labeling Regulations	"On this episode, we're delving into a captivating Turing Test experiment that took the internet by storm. Remember the viral Twitter sensation "Human or Not"? As it happens, that was more than just a game: it was the most expansive Turing Test ever, designed to gauge people's abilities to distinguish AI bots from humans. Get on the AI Box Waitlist: https://AIBox.ai/ Investor Contact Email: jaeden@aibox.ai Join our ChatGPT Community: https://www.facebook.com/groups/739308654562189/ Follow me on Twitter: https://twitter.com/jaeden_ai"	2023-06-05T00:00:00	787272	AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning	AI Chat is the podcast where we dive into the world of ChatGPT, cutting-edge AI news and its impact on our daily lives. With in-depth discussions and interviews with leading experts in the field, we'll explore the latest advancements in language models, machine learning, and more. From its practical applications to its ethical considerations, AI Chat will keep you informed and entertained on the exciting developments in the world of AI. Tune in to stay ahead of the curve on the latest technological revolution.	Jaeden Schafer	719
210	0xXlRefHRUa02ZLpH04Q9O	Artificial Intelligence and the Future of Regulation	In this episode of Need to Know, Kellee Wicker, Director of the Wilson Center's Science and Technology Innovation Program (STIP), speaks with host John Milewski about artificial intelligence. They discuss concerns about AI, the need for guidelines and regulations, and the recent statement from the White House. Kellee recaps a discussion she moderated for Members of Congress that featured Sam Altman, CEO of OpenAI and creator of ChatGPT. She also provides tips on where regulators and others can find the best and most accurate information about the latest developments in AI.	2023-05-31T00:00:00	861674	Need to Know	The Need To Know podcast is a production of the Office of Congressional Relations at the Woodrow Wilson International Center for Scholars. Each episode we will bring our nonpartisan research to life through interviews with experts and practitioners covering the world. Our goal is to bring the best independent research, open dialogue and actionable ideas to congressional staff, policy makers, and anyone else who needs to know.	Wilson Center	123
211	0FPnlYz3dj05zmTpvFisfm	Twitter's Legal Battles, Voice Cloning, and AI Regulation Hearings	"Tune in and level up your growth game with The Sauce Pod. Join hosts Taylor Peterson and Anna Klawitter to delve into the hottest social, tech, and AI news. Every week, they serve the secret sauce for creators, marketers, and entrepreneurs bootstrapping growth online. This week, we've got: The Supreme Court continues protecting social media giants; Instagram introduces Stories scheduling feature; Montana becomes first official state to ban TikTok; TikTok launches Effect Creator Rewards program for AR creators; TikTok tests location discovery and prompts for user reviews; Twitter gets heat for pirated movie uploads; Twitter's beef with Microsoft around data harvesting; Investigation into Twitter "hotel" at HQ; Official ChatGPT app now available on iOS; Highlights from Senate hearing on AI regulation; Apple's voice cloning feature raises privacy concerns. Don't forget to subscribe to our YouTube and Twitter channels for more exciting updates, and join our weekly newsletter on Substack for exclusive content. Follow us on YouTube & Twitter to stay up to date on the latest. Subscribe on Substack. Get full access to The Sauce at thesauceffs.substack.com/subscribe"	2023-05-29T00:00:00	3048515	The Sauce Pod	NOT your mothers' marketing podcast. The Sauce Pod serves up the secret sauce for content creators and solo-entrepreneurs looking to thrive in the digital age. Every week, we bring you real talk about what's going on in social media news, internet culture, and how to make the most of it. Get the latest in platform updates, viral trends, audience-building tips, and more. We share what creators need to know as they're building a brand online along with a delicious dose of hot takes and the (occasional) unpopular opinion.
Co-hosted by Taylor Peterson, an ex-agency marketer and former tech reporter, along with Anna Klawitter, content creator and editor of The Sauce newsletter.  Get The Sauce delivered to your inbox: https://thesauceffs.substack.com/	Taylor Peterson	33
212	4TLGeqD27Lg7X7NE98l4E2	Advanced AI - are we repeating the mistakes of the past?	Toby Walsh is an expert on Artificial Intelligence. He recently declined an offer to sign an open letter calling for a moratorium on the technology's further development, but he's no techno-utopian. In this feature interview, recorded at the Brisbane Writers Festival, he explains his position and warns the world risks repeating the mistakes made through the unregulated release of social media at the beginning of the century.	2023-05-21T00:00:00	2915343	Future Tense	A critical look at new technologies, new approaches and new ways of thinking, from politics to media to environmental sustainability.	ABC listen	113
213	2Csx4odSJ5BLWjRgRJnaF9	Wed 05/17 - AI Leaders Call for Regulation	Tune in for your daily check-in with Adam Kerpelman and Mackenzie Bowes on the last 24 hours in AI Acceleration! Subscribe to the podcast: https://acceleratedaily.transistor.fm/ Subscribe on Youtube for the uncut chat: https://www.youtube.com/@UseMissionControl/streams Adam: https://www.linkedin.com/in/adamkerpelman/ Mackenzie: https://www.linkedin.com/in/mack-bowes-42a346215/ Today's Links: I went ahead and did stereotypes for all 50 US states, full video in comments https://www.reddit.com/r/midjourney/comments/13jopgf/i_went_ahead_and_did_stereotypes_for_all_50_us/ Sam Altman: CEO of OpenAI calls for US to regulate artificial intelligence https://www.bbc.com/news/world-us-canada-65616866 AI threatens humanity's future, 61% of Americans say: Reuters/Ipsos poll https://www.reuters.com/technology/ai-threatens-humanitys-future-61-americans-say-reutersipsos-2023-05-17/ I'm an ER doctor. Here's how I'm already using ChatGPT to help treat patients https://www.fastcompany.com/90895618/how-a-doctor-uses-chat-gpt-to-treat-patients The Forever Labor Shortage https://www.businessinsider.com/baby-boomer-retirement-surge-spark-forever-labor-shortage-jobs-workers-2023-5 ____________ Mission Control on the web: https://usemissioncontrol.com	2023-05-17T00:00:00	970788	Accelerate Daily - The Latest in AI - News | Tips | Tools	Trying to keep up as AI accelerates toward escape velocity? Let the Mission Control crew help. Tune in daily for a quick take on the last 24 hours in AI news.	Mission Control	24
214	78CzZvWk9kqyoYjYZjG1Pb	Frontier AI Regulation: Managing Emerging Risks to Public Safety	Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term frontier AI models: highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and it is difficult to stop a model's capabilities from proliferating broadly. To address these challenges, at least three building blocks for the regulation of frontier models are needed: (1) standard-setting processes to identify appropriate requirements for frontier AI developers, (2) registration and reporting requirements to provide regulators with visibility into frontier AI development processes, and (3) mechanisms to ensure compliance with safety standards for the development and deployment of frontier AI models. Industry self-regulation is an important first step. However, wider societal discussions and government intervention will be needed to create standards and to ensure compliance with them. We consider several options to this end, including granting enforcement powers to supervisory authorities and licensure regimes for frontier AI models. Finally, we propose an initial set of safety standards. These include conducting pre-deployment risk assessments; external scrutiny of model behavior; using risk assessments to inform deployment decisions; and monitoring and responding to new information about model capabilities and uses post-deployment.
We hope this discussion contributes to the broader conversation on how to balance public safety risks and innovation benefits from advances at the frontier of AI development. Source: https://arxiv.org/pdf/2307.03718.pdf Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO. --- A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.	2023-05-13T00:00:00	1799157	AI Safety Fundamentals: Governance	Listen to resources from the AI Safety Fundamentals: Governance course! https://aisafetyfundamentals.com/governance	BlueDot Impact	60
215	2wTATzfX3mzCwUaxVx9JD1	Avoiding Extreme Global Vulnerability as a Core AI Governance Problem	Much has been written framing and articulating the AI governance problem from a catastrophic risks lens, but these writings have been scattered. This page aims to provide a synthesized introduction to some of these already prominent framings. This is just one attempt at suggesting an overall frame for thinking about some AI governance problems; it may miss important things. Some researchers think that unsafe development or misuse of AI could cause massive harms. A key contributor to some of these risks is that catastrophe may not require all or most relevant decision makers to make harmful decisions. Instead, harmful decisions from just a minority of influential decision makers (perhaps just a single actor with good intentions) may be enough to cause catastrophe. For example, some researchers argue, if just one organization deploys highly capable, goal-pursuing, misaligned AI, or if many businesses (but a small portion of all businesses) deploy somewhat capable, goal-pursuing, misaligned AI, humanity could be permanently disempowered. The above would not be very worrying if we could rest assured that no actors capable of these harmful actions would take them. However, especially in the context of AI safety, several factors are arguably likely to incentivize some actors to take harmful deployment actions: Misjudgment: Assessing the consequences of AI deployment may be difficult (as it is now, especially given the nature of AI risk arguments), so some organizations could easily get it wrong, concluding that an AI system is safe or beneficial when it is not.
Winner-take-all competition: If the first organization(s) to deploy advanced AI is expected to get large gains, while leaving competitors with nothing, competitors would be highly incentivized to cut corners in order to be first; they would have less to lose. Original text: https://www.agisafetyfundamentals.com/governance-blog/global-vulnerability Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO. --- A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.	2023-05-13T00:00:00	700342	AI Safety Fundamentals: Governance	Listen to resources from the AI Safety Fundamentals: Governance course! https://aisafetyfundamentals.com/governance	BlueDot Impact	60
216	6D9EvgPwXirN4bu5CiaWFm	Overview of How AI Might Exacerbate Long-Running Catastrophic Risks	Developments in AI could exacerbate long-running catastrophic risks, including bioterrorism, disinformation and resulting institutional dysfunction, misuse of concentrated power, nuclear and conventional war, other coordination failures, and unknown risks. This document compiles research on how AI might raise these risks. (Other material in this course discusses more novel risks from AI.) We draw heavily from previous overviews by academics, particularly Dafoe (2020) and Hendrycks et al. (2023). Source: https://aisafetyfundamentals.com/governance-blog/overview-of-ai-risk-exacerbation Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO. --- A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.	2023-05-13T00:00:00	1443160	AI Safety Fundamentals: Governance	Listen to resources from the AI Safety Fundamentals: Governance course! https://aisafetyfundamentals.com/governance	BlueDot Impact	60
217	3SdvIcKmJJVbQs0DKW2LpO	China's draft regulations on generative AI, with Kendra Schaefer and Jeremy Daum	This week on Sinica, Kendra Schaefer, a partner specializing in technology at China-focused consultancy Trivium, and Jeremy Daum, Senior Research Scholar in Law and Senior Fellow at the Paul Tsai China Center, discuss the new draft regulations published in April by the Cyberspace Administration of China that will, when passed, govern generative AI in China. Will it choke off innovation, or create conditions for the safe development of this world-changing technology? 04:36  What is the difference between deep synthesis internet services and generative AI? 06:17  Areas affected by the set of newest regulations: recommendation algorithms, deep fakes 11:15  Major national regulations governing generative AI in China vs. in the West. 15:35  The question of the privacy policy in China 18:25  How far along are the tech companies when it comes to truly applying generative AI? 24:16  Main areas of concern about ChatGPT raised in China and the US. What are the government and companies doing to deal with these issues? 28:04  Is the idea to label AI-generated content sufficient? 38:28  Requirements and concerns for training data for generative AI.
Questions of accuracy and authenticity. 47:21  Will the generative AI stay in the social media landscape, or spread toward the industrial sector? 50:12  To what extent will export restrictions affect the development of generative AI in China? A transcript of this podcast is available at TheChinaProject.com Recommendations: Kendra: Cobalt Red: How the Blood of the Congo Powers Our Lives by Siddharth Kara; Jeremy: The School for Good Mothers by Jessamine Chan; Kaiser: The Earth Transformed: An Untold History by Peter Frankopan, Belafonte: At Carnegie Hall by Harry Belafonte, and Belafonte Returns to Carnegie Hall (Live) by Harry Belafonte. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.	2023-05-11T00:00:00	3965335	Sinica Podcast	A weekly discussion of current affairs in China with journalists, writers, academics, policymakers, business people and anyone with something compelling to say about the country that's reshaping the world. Hosted by Kaiser Kuo.	Kaiser Kuo	445
218	0RQpa7wbT0hhtRXScfMil4	Artificial Intelligence is Booming, How Should it Be Regulated?	As the use of artificial intelligence explodes, government officials are trying to figure out how best to regulate the technology. Already, generative AI companies are producing software that can replicate voices, create stylized portraits, and produce thousands of fake online reviews. Experts fear that internet harassment, identity fraud and spread of misinformation could become exponentially worse with easy access to AI, and warn regulation is crucial to head off potential harms. But what regulations would be helpful? And what regulations might cause more harm than good? We dive into potential ways to regulate AI and what consumers can do in the meantime to protect themselves. Guests: Jennifer King, Ph.D., privacy and data policy fellow, Stanford Institute for Human-Centered Artificial Intelligence; Rumman Chowdhury, responsible AI developer, leader, speaker, founder, investor; Ben Zhao, professor of computer science and director of graduate studies, University of Chicago	2023-05-08T00:00:00	3160999	KQED's Forum	Forum tells remarkable and true stories about who we are and where we live. In the first hour, Alexis Madrigal convenes the diverse voices of the Bay Area, before turning to Mina Kim for the second hour to chronicle and center Californians' experience. In an increasingly divided world, Mina and Alexis host conversations that inform, challenge and unify listeners with big ideas and different viewpoints. Want to call/submit your comments during our live Forum program Mon-Fri, 9am-11am? We'd love to hear from you! Please dial 866.SF.FORUM or (866) 733-6786, email forum@kqed.org, tweet, or post on Facebook.	KQED	2335
219	664F2ZJkG6Ow4n3nN0rvIt	#14 Exploring AI with Kim Alban: Microsoft, Rivalries, and Regulations | Bad Decisions Podcast	Kim Alban is a talented product designer and multidisciplinary creator within the XR and AI space. She has been featured by Adobe, WIRED and Lenslist for her creative work. She always shares her experiences, resources, and insights across various platforms like TikTok, Instagram, and YouTube. In this episode, we talked about Microsoft's latest innovations, the AI race between tech giants and their approaches to AI. We also touched on the importance of embracing AI while ensuring regulations, and delved into the reasons behind people's aversion to AI.   Episode 14 Timestamps: 00:00:00 Introduction  00:04:08 Coffee and Sleep 00:10:40 Microsoft's AI-powered Bing  00:14:45 Google vs. Microsoft 00:28:14 The White House on AI 00:32:40 Embracing AI & Regulations 00:35:18 Godfather of AI left Google  00:38:42 AI awareness  00:43:46 Why people hate AI? 00:47:29 AI Taking over jobs?  
00:53:55 Career advice 00:57:05 Kim in 2025  Sign up for Kim's Newsletter: https://fabulous-inventor-4740.ck.page/kimalban  Find out more about Kim: https://www.kimalban.com/ https://www.instagram.com/kimalban/ https://www.youtube.com/channel/UCqcdOkpI-lhDeT-dWjh1iag https://www.tiktok.com/@kimalban  Bad Decisions Daily Vlog: https://www.youtube.com/playlist?list=PLIn-yd4vnXbgWK6FtXAAHsDpHpiMpEhyF  Bad Decisions Coffee Break Audio Podcast : Spotify, Apple Podcasts and Google Podcast https://podcasters.spotify.com/pod/show/badxstudio  If you wanna see us to do cool things follow us here too: Instagram:https://www.instagram.com/badxstudio/ Twitter: https://twitter.com/badxstudio TikTok: https://www.tiktok.com/@badxstudio  LinkedIn: https://www.linkedin.com/company/badxstudio Website: https://baddecisions.studio/  Our personal handles: (if you wanna stalk us) https://twitter.com/Farhads__ https://twitter.com/farazshababi  https://www.instagram.com/farhad_sh/ https://www.instagram.com/farazshababs/  https://www.linkedin.com/in/farhadshababi/ https://www.linkedin.com/in/farazshababi/   #ai #artificialintelligence #microsoft #google #bingai #bing #socialmedia    #badxstudio #BadDecisionsStudio #podcast #chatgpt 	2023-05-06T00:00:00	3768768	Bad Decisions Podcast	Every week we promise you an opportunity to either learn something new or walk away with a fresh burst of inspiration. Our goal is to share everything we've learned and researched at Bad Decisions, a creative studio based in Vancouver and Dubai, with you. Additionally, we bring you the best in the industry, allowing them to share their journeys and knowledge.	Bad Decisions Studio	48
220	7MjpDo3Ov8HDwv8ihkGaJK	Artificial intelligence regulation	Kean Birch is director of the Institute for Technoscience and Society at York University. Learn more about your ad choices. Visit megaphone.fm/adchoices	2023-05-05T00:00:00	527124	Shaye Ganam	A place to talk. To come together. To be heard. Shaye Ganam is taking the conversation province-wide on 630 CHED in Edmonton and QR Calgary - 107.3 FM and 770 AM, creating a collective town square to rehash the old and imagine the new for every part of this province. Tune in Monday to Friday from 9 a.m. to noon to join the conversation. Follow Shaye Ganam on Twitter to keep up with your host wherever he goes.	Corus Radio	4138
221	6zUl0NigOnJLcAcTiQt91f	EDITORIAL: Artificial intelligence needs intelligent regulation | May 4, 2023	EDITORIAL: Artificial intelligence needs intelligent regulation | May 4, 2023. Subscribe to The Manila Times Channel - https://tmt.ph/YTSubscribe Visit our website at https://www.manilatimes.net Follow us: Facebook - https://tmt.ph/facebook Instagram - https://tmt.ph/instagram Twitter - https://tmt.ph/twitter DailyMotion - https://tmt.ph/dailymotion Subscribe to our Digital Edition - https://tmt.ph/digital Check out our Podcasts: Spotify - https://tmt.ph/spotify Apple Podcasts - https://tmt.ph/applepodcasts Amazon Music - https://tmt.ph/amazonmusic Deezer: https://tmt.ph/deezer Stitcher: https://tmt.ph/stitcher Tune In: https://tmt.ph/tunein #TheManilaTimes #EDITORIAL Hosted on Acast. See acast.com/privacy for more information.	2023-05-03T00:00:00	360829	The Manila Times Podcasts	Keep Up With The Times, Voice Of The Times and The Manila Times Latest Stories Hosted on Acast. See acast.com/privacy for more information.	The Manila Times	5000
222	4xx2zZqHj5QiiZqloaGCU1	EDITORIAL: Artificial intelligence needs intelligent regulation | May 4, 2023	EDITORIAL: Artificial intelligence needs intelligent regulation | May 4, 2023. Subscribe to The Manila Times Channel - https://tmt.ph/YTSubscribe Visit our website at https://www.manilatimes.net Follow us: Facebook - https://tmt.ph/facebook Instagram - https://tmt.ph/instagram Twitter - https://tmt.ph/twitter DailyMotion - https://tmt.ph/dailymotion Subscribe to our Digital Edition - https://tmt.ph/digital Check out our Podcasts: Spotify - https://tmt.ph/spotify Apple Podcasts - https://tmt.ph/applepodcasts Amazon Music - https://tmt.ph/amazonmusic Deezer: https://tmt.ph/deezer Stitcher: https://tmt.ph/stitcher Tune In: https://tmt.ph/tunein #TheManilaTimes #EDITORIAL Hosted on Acast. See acast.com/privacy for more information.	2023-05-03T00:00:00	360829	The Manila Times Podcasts	Keep Up With The Times, Voice Of The Times and The Manila Times Latest Stories Hosted on Acast. See acast.com/privacy for more information.	The Manila Times	5000
223	2JcJ9LSQSIfObGFIFBmuYU	Should the US Gov't Have to Review and License AI Tools?	That's the question the Biden Administration is asking American citizens to comment on. WSJ and others reported today that the Administration is nervous about AI being used for crime and disinformation, and so is collecting public comments on the idea of having an FDA-like body to review and approve AI tools.	2023-04-12T00:00:00	604473	The AI Daily Brief (Formerly The AI Breakdown): Artificial Intelligence News and Analysis	A daily news analysis show on all things artificial intelligence. NLW looks at AI from multiple angles, from the explosion of creativity brought on by new tools like Midjourney and ChatGPT to the potential disruptions to work and industries as we know them to the great philosophical, ethical and practical questions of advanced general intelligence, alignment and x-risk.	Nathaniel Whittemore	385
224	2DsuCkR7g6EYR6Y9elRDBl	The Regulation of AI	In this episode, John talks with Courtney Bowman, the Global Director of Privacy and Civil Liberties Engineering at Palantir Technologies, about the challenges of regulating AI technology. They discuss the need for regulatory regimes to address the different types of AI technologies in use today including facial recognition, lending and insurance decision-making, healthcare tracking, and genetic sequencing, among other applications. They also discuss the different approaches to AI regulation in the US and the EU, and whether regulation should be all-encompassing or targeted to specific technological contexts. Finally, they discuss how businesses should proceed now before future AI regulations have taken their final form. Podcast Link: Law-disrupted.fm Host: John B. Quinn Producer: Alexis Hyde Music and Editing by: Alexander Rossi	2023-04-05T00:00:00	2878197	Law, disrupted	Law, disrupted is a podcast that dives into the legal issues emerging from cutting-edge and innovative subjects such as SPACs, NFTs, litigation finance, ransomware, streaming, and much, much more! Your host is John B. Quinn, founder and chairman of Quinn Emanuel Urquhart & Sullivan LLP, a 900+ attorney business litigation firm with 29 offices around the globe, each devoted solely to business litigation. John is regarded as one of the top trial lawyers in the world, who, along with his partners, has built an institution that has consistently been listed among the Most Feared litigation firms in the world (BTI Consulting Group), and was called a global litigation powerhouse by The Wall Street Journal. In his podcast, John is joined by industry professionals as they examine and debate legal issues concerning the newest technologies, innovations, and current events and ask what's next?	Law, disrupted	113
225	3xqESwR96k0BckkWv7OsQw	ChatGPT Under Fire: Tech Titans Call for AI Hiatus, AI Regulation, The Future of Content Marketing	"Introduction: This week's episode covers influential tech figures calling for a 6-month AI hiatus, Levi's partnership with Lalaland.ai for AI-generated models, Bing's ads for chat, the UK government's AI White Paper, Brian Halligan's thoughts on AI in content marketing, and GoCharlie as the tool of the week. Story 1: Tech Experts Call for 6 Month AI Hiatus in Open Letter. Paul discusses an open letter signed by influential figures like Elon Musk, Steve Wozniak, and Andrew Yang, calling for a temporary halt of at least six months on AI system training. The letter highlights the need for adequate safety measures, governmental regulations, and strategies to address AI's societal effects. Story 2: Levi's Uses AI-Generated Models to Increase Diversity. Martin talks about Levi Strauss & Co's partnership with Lalaland.ai to create AI-generated avatars for increased diversity among its models. Marketers can utilize AI-generated models to create more inclusive and diverse advertising campaigns, resonating with a broader audience. Story 3: Bing Announces Ads for Chat. Paul shares Microsoft's experiments with ads for Bing chat, offering marketers the opportunity to advertise within chat interfaces. This new marketing strategy can help brands showcase their commitment to diversity and inclusion. Story 4: UK Government Unveils AI White Paper. Martin covers the UK government's release of an AI white paper, detailing five guiding principles for responsible AI use. The UK's ""light-touch"" approach may make it an outlier in AI regulation, potentially attracting companies pushing AI boundaries. Story 5: Brian Halligan on the Future of Content Marketing. Paul explores Brian Halligan's insights on the future of content marketing, including AI-driven optimization, Trust Content, and balancing AI-driven and human-driven content strategies. Marketers must adapt to market events faster 
and embrace the evolving roles of AI-driven tools. Tool of the Week: GoCharlie. Martin introduces GoCharlie.AI, a generative AI research and development company focused on marketing, advertising, and sales. Their content repurposing feature can help marketers save time and increase efficiency in content creation."	2023-03-31T00:00:00	3654305	Artificially Intelligent Marketing	Welcome to Artificially Intelligent Marketing, the weekly podcast that explores the intersection of artificial intelligence and marketing. Join your hosts, Paul Avery and Martin Broadhurst, as they dive deep into the latest stories, tools, and applications in AI for marketing and chat with thought leaders in the field to uncover insights about the future of AI in marketing. In each episode, Paul and Martin will bring you fascinating stories about how AI is transforming marketing, from chatbots and personalisation to predictive analytics and voice assistants. They'll share their own experiences and insights, and interview experts and practitioners from across the marketing world to help you understand the potential of AI and how to make the most of it in your own marketing strategy. Whether you're a marketer looking to stay ahead of the curve, a data scientist interested in the latest techniques, or just someone fascinated by the potential of artificial intelligence, The Artificially Intelligent Marketing Podcast has something for you. So tune in every week to stay up to date with the latest developments in AI and marketing, and join the conversation about the future of this exciting field.	Paul Avery and Martin Broadhurst	44
226	4idfPbTfWrFE4kgJXPJDk4	AI and the Need for Government Regulation	"Welcome to ""Beyond the Screen"", the first AI-made podcast hosted by Frank Nanninga. In this episode, we delve into various topics surrounding AI, including the need for government regulation, anticipatory disinformation, GitHub's integration of AI for pair programming, and the potential for AI language models to spread misinformation. Our discussion highlights the rapid advancement of AI technology and its impact on safety and ethics. We also explore the dangers of political deepfakes and the potential for chatbots to spread misinformation. Join us on this captivating journey of discovery and don't miss out on future episodes of ""Beyond the Screen""!     Support this podcast  "	2023-03-22T00:00:00	273949	Beyond the Screen	"Explore the world of AI, ChatGPT, and AR with ""Beyond the Screen"" - a unique podcast hosted by Frank Nanninga. Each episode takes a deep dive into cutting-edge technology and its impact on our world. What sets ""Beyond the Screen"" apart is that it's entirely AI-powered. The artwork is created by Midjourney, news sourced by OpenAI, and even the host's voice is cloned. - each episode is crafted by AI to create an immersive experience.  The best part? ""Beyond the Screen"" is available in English, French, German, Spanish, and Hindi with the same AI-generated voice."	Frank Nanninga	78
227	4wRqb0DatcfHAAO95g58l4	#217 AI regulation is a global concern - Where will Australia fit in among China, the US and EU? With Felipe Flores, Data Futurology Founder and Podcast Host.	AI is a powerful tool, and as enterprise and government find more sophisticated ways to leverage the technology, there will be untold benefits returned to customers. At the same time, the responsible use of AI is of significant concern to the global population, and people are watching how its use is regulated closely. On this week's Data Futurology podcast, Felipe Flores presents an update on the status of regulation across Europe, China, and the US, and poses the question about whether AI regulation needs to be a global, rather than regional response. "Perhaps surprisingly, China's taken the lead in regulating how business uses AI," Flores said. "The regulation says that businesses must notify users when an AI algorithm is playing a role in determining which information to display to them and give users the option to opt out of being targeted. The regulation also prohibits algorithms that use personal data to offer different prices to different consumers. It is really interesting that China moved early." Meanwhile, in the EU, the drafted regulation would categorise AI applications into one of four risk profiles, with oversight and accountability being scaled in kind. And in the US, much of the focus around regulation at the federal level is concerned with the potential for discrimination, while states are being left to develop their own broader frameworks. Australia, which doesn't yet have regulation, does have an ethical framework, which is an indication of where future regulation might go. Flores runs through that framework in this podcast as well. For an in-depth look into the exciting and dynamic discourse around AI regulation across the world, tune into the podcast! Enjoy the show! Thank you to our sponsor, Talent Insights Group! 
Join us in Sydney for Ops World: https://www.datafuturology.com/opsworld Join our Slack Community: https://join.slack.com/t/datafuturologycircle/shared_invite/zt-z19cq4eq-ET6O49o2uySgvQWjM6a5ng  What we discussed: 00:00 Introduction 2:05 Discussion around AI regulation and how different countries should tackle it. 2:50 How have the US, China and the EU approached this. 5:00 EU regulations 8:05 USA regulations 9:40 Thoughts and comparison on the three approaches. 11:10 What's happening in Australia.  Quotes: "In March 2022, China passed a regulation that governs companies and their use of AI. The regulation applies to online recommender systems. They say the AI needs to be used in ways that are moral, ethical, accountable, transparent and that disseminate positive energy." "Companies (in China) are expected to submit their algorithms to the government for review when they are being used at scale." "The EU separates the ways AI can be used into four bands according to the risk involved. They have minimal risk, limited risk, high risk and unacceptable risk. The unacceptable risk covers things like social surveillance, facial recognition, etc." "The US congress enacted a National AI Initiative Act, focused on improving research development, understanding AI and having an AI strategy within the country." --- Send in a voice message: https://podcasters.spotify.com/pod/show/datafuturology/message	2022-11-30T00:00:00	936143	Data Futurology - Leadership And Strategy in Artificial Intelligence, Machine Learning, Data Science	Artificial intelligence is a tremendously beneficial technology that's advancing at an incredibly rapid pace.  As more and more organisations adopt and implement AI we find that the main challenges are not in the technology itself but in the human side, ie: the approaches, chosen problems and what's called 'the last mile', etc.   That's why Data Futurology focuses on the leadership side of AI and how to get the most value from it.  
Join me, Felipe Flores, a Data Science executive with almost 20 years of experience in the space. Every week I speak with top industry leaders from around the world	Felipe Flores	266
228	5TtBidNWrti15KKOlG7sWI	Episode 148: Christelle Tessono on Bringing a Human Rights Lens to AI Regulation in Bill C-27	Bill C-27, the government's privacy and artificial intelligence bill, is slowly making its way through the Parliamentary process. One of the emerging issues has been the mounting opposition to the AI portion of the bill, including a recent NDP motion to divide the bill for voting purposes, separating the privacy and AI portions. In fact, several studies have been released which place the spotlight on the concerns with the government's plan for AI regulation, which is widely viewed as vague and ineffective. Christelle Tessono is a tech policy researcher based at Princeton University's Center for Information Technology Policy (CITP). She was one of several authors of a joint report on the AI bill which brought together researchers from the Cybersecure Policy Exchange at Toronto Metropolitan University, McGill University's Centre for Media, Technology and Democracy, and the Center for Information Technology Policy at Princeton University. Christelle joins the Law Bytes podcast to talk about the report and what she thinks needs to change in Bill C-27.	2022-11-28T00:00:00	1576545	Law Bytes	In recent years the intersection between law, technology, and policy has exploded as digital policy has become a mainstream concern in Canada and around the world. This podcast explores digital policies in conversations with people who study the legal and policy challenges, set the rules, or are experts in the field. It provides a Canadian perspective, but since the internet is global, examining international developments and Canada's role in shaping global digital policy is an important part of the story. Law Bytes is hosted by Michael Geist, a law professor at the University of Ottawa, where he holds the Canada Research Chair in Internet and E-commerce Law and where he is a member of the Centre for Law, Technology and Society.	Michael Geist	201
229	00hD5svlFb7cCSDxdtUfaY	#23: Google Penalizes AI-Generated Content, Responsible AI Guidelines, and AI's Impact on Local News	This week Paul and Mike talk about three news stories and happenings in the world of artificial intelligence, and they break down their importance to marketers. In a word (or two): buckle up. Well-known marketer Neil Patel recently revealed the results of Google's latest algorithm updates on sites he owns that have AI-generated copy, and the results weren't pretty. Patel disclosed that he has 100 experimental sites that use AI-written content. He claims the sites are simply to figure out how Google perceives AI-written content, not to game the algorithm. Regardless of the motivation, he sure found out.  Next, Boston Consulting Group, BCG, recently released its guidelines for how companies should approach AI based on its Responsible AI Leader Blueprint. BCG defines responsible AI as developing and operating artificial intelligence systems that align with organizational values and widely accepted standards of right and wrong while achieving transformative business impact. And finally, earlier this year, the Partnership on AI did work on better understanding how AI will change the local news landscape by talking to 9 different experts in the space, including prominent media outlets and technologists. The Partnership on AI is a major nonprofit that was founded by Amazon, Facebook, Google, DeepMind, Microsoft, and IBM to research and share best practices around the development and deployment of artificial intelligence.  Listen to the conversation.	2022-11-02T00:00:00	2635885	The Artificial Intelligence Show	The Artificial Intelligence Show (formerly The Marketing AI Show) is the podcast that helps your business grow smarter by making AI approachable and actionable. This podcast is brought to you by the creators of the Marketing AI Institute, AI Academy for Marketers, and the Marketing AI Conference (MAICON). 
Hosts Paul Roetzer, founder and CEO of Marketing AI Institute, and Mike Kaput, Chief Content Officer, break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join Paul and Mike on The AI Show as they work to accelerate AI literacy for all.	Paul Roetzer and Mike Kaput	100
230	3fdL2HlpfzwwXY3SDoodfj	Understanding tech-facilitated abuse; and problems in space	Abuse facilitated by digital technology is on the rise. Abuse is abuse, no matter who commits it and what form it takes, but we need to better understand the peculiarities of this specific kind of abuse. New research in Australia suggests that many of us are the perpetrators as well as the victims. Also, regulating rocket launches and minimising space pollution: low Earth orbit may be reaching a tipping point.	2022-10-02T00:00:00	1748532	Future Tense	A critical look at new technologies, new approaches and new ways of thinking, from politics to media to environmental sustainability.	ABC listen	113
231	1jZ46BoHVzO5hYRr7K4OUD	AI Leaders Podcast #29: Policy and Regulation Frameworks	The increase in regulatory attention facing AI has put the topic of AI regulation squarely on the C-suite agenda. Organizations are trying to figure out how to respond; however, the situation is complex, as strategies will vary across countries, cultural values and regulatory philosophies. Listen now to hear Roger Taylor, Responsible AI advisor and former Chair of the Centre for Data Ethics and Innovation, and Ali Shah, Accenture's Global Principal Director for Responsible AI, discuss.	2022-09-19T00:00:00	2170810	Accenture AI Leaders Podcast	The Accenture AI Leaders Podcast series takes a closer look at industry trends, opportunities, and challenges related to AI, analytics, and data that are top of mind for CDOs, CAOs, and CIOs from leading organizations across the world. Listen in as these leaders talk one-on-one about their experiences driving AI-powered initiatives across their organizations.	Accenture	61
232	6hJDQG4zJJCAFqHfOpFqBE	AI in Compliance: Handling Sanctions, Communications, and Novel Threats - with Thomas Mangine of BMO	This week we're continuing our series on artificial intelligence in compliance. Our guest is Thomas Mangine, Director of AML and Risk Reliance for the Bank of Montreal. In this episode, Thomas shares his unique perspective on staying ahead of adversaries from financial services and defense points of view. He also discusses how data is becoming more valuable and what particular kinds of data are creating new opportunities for emerging forms of AI capabilities. Thomas also details several unique compliance-related use cases and circumstances regarding concerns about sanctions. This episode is part of our broader series on compliance brought to you by Smarsh. If you haven't already, be sure to tune into the previous episodes in this series at podcast.emerj.com to discover more key insights from leaders in the industry.	2022-09-08T00:00:00	2244675	The AI in Business Podcast	"The AI in Business Podcast is for non-technical business leaders who need to find AI opportunities, align AI capabilities with strategy, and deliver ROI.  Each week, Emerj Artificial Intelligence Research CEO Daniel Faggella interviews top AI executives from Fortune 500 firms and unicorn startups - to uncover trends, use-cases, and best practices for practical AI adoption.  Subscribe to Emerj's weekly AI newsletter by downloading our ""Beginning with AI"" PDF guide: https://emerj.com/beg1"	Daniel Faggella	821
233	3okrGNFkAOusvHaK9qOooe	AI Today Podcast: Ethical & Responsible AI Series: AI & Data Fairness & Bias	Organizations are increasingly making use of AI systems to power their operations and enable a wide range of applications from the trivial to the mission-critical. As a result it's more important than ever to understand the many complex issues related to AI and Data fairness and bias. In this Ethical and Responsible AI Series hosts Kathleen Walch and Ron Schmelzer dig deeper into the role data plays in AI, and how building AI systems with fairness in mind is critical. Continue reading AI Today Podcast: Ethical & Responsible AI Series: AI & Data Fairness & Bias at Cognilytica.	2022-08-10T00:00:00	1674800	AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion	AI Today Podcast	AI & Data Today	447
234	3N2SjGhvUcWN2SZ4OdFy0R	AI in Insurance - Addressing Compliance Considerations - with Pardeep Bassi of WTW	In this episode, we're focusing on compliance considerations in the insurance world. Our guest this week is Pardeep Bassi. He is currently Global Proposition Leader of Data Science for WTW, or Willis Towers Watson, a publicly traded financial services firm based in the United Kingdom. This week Pardeep covers two different topics with us. First, he discusses some areas where regulation intersects with AI applications in insurance and what the potential data risks for elements such as transparency, personal data use, etc., are. Secondly, Pardeep dives into how leaders might consider these regulatory rules in their adoption strategy. Not only to meet regulatory compliance rules but also to potentially use AI to address them. This episode is brought to you by Smarsh and is part of a broader series on AI applications for compliance and communications intelligence. To learn more about how to reach Emerj's global executive audience with Emerj Media, visit emerj.com/ad1.	2022-08-04T00:00:00	1854145	The AI in Business Podcast	"The AI in Business Podcast is for non-technical business leaders who need to find AI opportunities, align AI capabilities with strategy, and deliver ROI.  Each week, Emerj Artificial Intelligence Research CEO Daniel Faggella interviews top AI executives from Fortune 500 firms and unicorn startups - to uncover trends, use-cases, and best practices for practical AI adoption.  Subscribe to Emerj's weekly AI newsletter by downloading our ""Beginning with AI"" PDF guide: https://emerj.com/beg1"	Daniel Faggella	821
235	3iQw5PR3D9jZk0K4oPJB0v	AI for Cyber Insurance Policy Review - with Pamela Negosanti of Expert.AI	Today's guest is Pamela Negosanti, the Head of Sector Strategies for Financial Services at Expert.AI. In today's episode, Pamela speaks to us about the policy review process for cyber insurance. There are two main stand-out points: first, how a new technology shift and further regulation make AI almost a requirement for a particular business workflow, and second, some concrete workflow advice for applying artificial intelligence to contracts and text documents. There's an evident business value here and a very clear before and after picture of where AI helps to augment the expertise of human experts who are doing policy reviews. This episode is brought to you by Expert.AI. If you're interested in reaching Emerj's global executive audience through sponsored podcasts, content, co-branded research, and more, visit emerj.com/ad1 to learn more about Emerj's Media Services for enterprise AI vendors.	2022-04-19T00:00:00	1859030	The AI in Business Podcast	"The AI in Business Podcast is for non-technical business leaders who need to find AI opportunities, align AI capabilities with strategy, and deliver ROI.  Each week, Emerj Artificial Intelligence Research CEO Daniel Faggella interviews top AI executives from Fortune 500 firms and unicorn startups - to uncover trends, use-cases, and best practices for practical AI adoption.  Subscribe to Emerj's weekly AI newsletter by downloading our ""Beginning with AI"" PDF guide: https://emerj.com/beg1"	Daniel Faggella	821
236	7rSFD3ALMD2FKanlLeW9Uc	New Regulation on Institute of Health Data Research and Artificial Intelligence	"The Presidency of Health Institutes of Turkey published the Regulation on the Structure and Execution of the Activities of the Turkish Institute of Health Data Research and Artificial Intelligence Applications (""Regulation""). The Regulation sets forth important issues such as the duties and authorities of the Turkish Institute of Health Data Research and Artificial Intelligence Applications (""Institute"")."	2022-03-21T00:00:00	212271	Legal Alerts	As Esin Attorney Partnership, we are pleased to present our legal alerts summarizing significant legal and regulatory developments in Turkey. Disclaimer: Podcasts are not legal advice. Laws and regulations may change after a podcast is recorded.	Esin Attorney Partnership	150
237	2vg0U8okksAogpkQXbLQoq	From the AI Act to the DSA: Catching up on the EU's digital agenda	Though many privacy pros are still grappling with the EU General Data Protection Regulation, the EU is now busy leading a new generation of data regulations. As part of its Digital Single Market strategy, the EU is looking to not only protect data but also to create frameworks that allow for data flows, while aiming to mitigate hate speech and misinformation. Through an ambitious line of proposed laws, including the Data Act, Data Governance Act, Digital Markets Act, Digital Services Act and the AI Act, the EU is poised to place a slew of new requirements on companies doing business in the region. Though not all are privacy-related, privacy pros should be paying attention to this space. To catch up on this flurry of activity, the IAPP Editorial Director recently chatted with journalist Luca Bertuzzi.	2021-12-16T00:00:00	2886034	The Privacy Advisor Podcast	This podcast features host Jedidiah Bracy, editorial director at the International Association of Privacy Professionals, interviewing privacy and data protection professionals and thought leaders on the intersection of policy, technology and the law.	Jedidiah Bracy	87
238	3Mz0q3wnOAzT1Xqkg68cyU	Artificial Intelligence Part Two - Ethics and Regulations with Chatterbox Labs - Scott Jermy and Dr Stuart Battersby	Here on Candid Conversations we talk to changemakers about what is happening in their industry right now. In this episode we talk to Scott Jermy and Dr Stuart Battersby about: - Ethical concerns with AI - Regulations with AI - Bias and Data If you haven't already, follow Candid Conversations or subscribe wherever you listen to your podcasts. Host: Caitlin Wood Audio By: Adrian Chin Quan For enquiries about the series please contact innovation@deloitte.com.au ADDITIONAL RESOURCES © 2021 Deloitte Touche Tohmatsu. DISCLAIMER: This communication contains general information only, and none of Deloitte Touche Tohmatsu Limited (DTTL), its global network of member firms or their related entities (collectively, the Deloitte organisation) is, by means of this communication, rendering professional advice or services. Before making any decision or taking any action that may affect your finances or your business, you should consult a qualified professional adviser. No representations, warranties or undertakings (express or implied) are given as to the accuracy or completeness of the information in this communication, and none of DTTL, its member firms, related entities, employees or agents shall be liable or responsible for any loss or damage whatsoever arising directly or indirectly in connection with any person relying on this communication. DTTL and each of its member firms, and their related entities, are legally separate and independent entities.	2021-11-30T00:00:00	1355859	Candid Conversations	Here on Candid Conversations we talk to changemakers about what is happening in their industry right now. Delve into the future-focused topics every professional needs to know about, including AI, technology, data, cybersecurity and innovation.	Deloitte Australia Innovation	60
239	7ocQRZlam3V9yBxPwduo1g	Artificial Intelligence Part Two - Ethics and Regulations with Chatterbox Labs - Scott Jermy and Dr Stuart Battersby	Here on Candid Conversations we talk to changemakers about what is happening in their industry right now. In this episode we talk to Scott Jermy and Dr Stuart Battersby about: - Ethical concerns with AI - Regulations with AI - Bias and Data If you haven't already, follow Candid Conversations or subscribe wherever you listen to your podcasts. Host: Caitlin Wood Audio By: Adrian Chin Quan For enquiries about the series please contact innovation@deloitte.com.au ADDITIONAL RESOURCES © 2021 Deloitte Touche Tohmatsu. DISCLAIMER: This communication contains general information only, and none of Deloitte Touche Tohmatsu Limited (DTTL), its global network of member firms or their related entities (collectively, the Deloitte organisation) is, by means of this communication, rendering professional advice or services. Before making any decision or taking any action that may affect your finances or your business, you should consult a qualified professional adviser. No representations, warranties or undertakings (express or implied) are given as to the accuracy or completeness of the information in this communication, and none of DTTL, its member firms, related entities, employees or agents shall be liable or responsible for any loss or damage whatsoever arising directly or indirectly in connection with any person relying on this communication. DTTL and each of its member firms, and their related entities, are legally separate and independent entities.	2021-11-30T00:00:00	1355859	Candid Conversations	Here on Candid Conversations we talk to changemakers about what is happening in their industry right now. Delve into the future-focused topics every professional needs to know about, including AI, technology, data, cybersecurity and innovation.	Deloitte Australia Innovation	60
240	7hIu2JLizoSWNNwGqZYb8b	Artificial Intelligence: compliance & ethical issues	"TNP & its Data Protection team launched on May 27, 2020 ""Parlons RGPD & Sécurité by TNP"", the first French podcast dedicated to personal data protection and cybersecurity. For this new episode of ""Let's talk about GDPR & Cybersecurity"", Federico Marengo and Jade Caboche, both consultants in data protection, and Florence Bonnet, partner, discuss the compliance and ethical issues of artificial intelligence systems, and in particular the main aspects of the future European Regulation on the subject. Hosted by Ausha. Visit ausha.co/politique-de-confidentialite for more information."	2021-11-26T00:00:00	1718491	Parlons RGPD by TNP	"Parlons RGPD & Sécurité", the first independent French podcast 100% dedicated to personal data protection and cybersecurity! In this show, our TNP consultants, experts in data protection, and DPOs decode the news for you through the lens of personal data protection and cybersecurity. Find our Data Protection by TNP experts at www.tnpconsultants.com (http://www.tnpconsultants.com/) Hosted by Ausha. Visit ausha.co/fr/politique-de-confidentialite for more information.	TNP Consultants	29
241	70pTAnZWrYDGly6ovZbpKV	The singularity approaches! How can regulation keep up with artificial intelligence?	Artificial intelligence and technology are accelerating, public expectations are shifting, and competition law and regulation are scrambling to keep up. Our AI guru Peter Waters is here to tell us what we can do to prepare. Plus an antitrust hipster update, a sequel to Dieselgate, and decarbonisation dilemmas in the pipeline for the gas sector post COP26.  Subscribe to the podcast mailing list - https://bit.ly/3cG0E1j  Links from the episode:  Prof Anina Rich of Macquarie University on the Baader-Meinhof phenomenon 2020 Paper on Competition Policy and Environmental Sustainability ft. Simon Holmes Stanford University's 2021 A.I. Index Report The Australian Energy Regulator's Regulating Gas Pipelines Under Uncertainty: Information Paper Meet the Gilbert + Tobin Competition + Regulation team See omnystudio.com/listener for privacy information.	2021-11-22T00:00:00	1493211	The Competitive Edge	Get the lowdown on developments in competition law in Australia and around the world with The Competitive Edge, a competition law podcast. Each fortnight Moya Dodd and Matt Rubinstein explore insights and trends with our resident experts and special guests to give you the competitive edge.	Gilbert + Tobin	52
242	1vFHxRr8RLZHtONkQ79AkW	103. Gillian Hadfield - How to create explainable AI regulations that actually make sense	It's no secret that governments around the world are struggling to come up with effective policies to address the risks and opportunities that AI presents. And there are many reasons why that's happening: many people, including technical people, think they understand what frontier AI looks like, but very few actually do, and even fewer are interested in applying their understanding in a government context, where salaries are low and stock compensation doesn't even exist. So there's a critical policy-technical gap that needs bridging, and failing to address that gap isn't really an option: it would mean flying blind through the most important test of technological governance the world has ever faced. Unfortunately, policymakers have had to move ahead with regulating and legislating with that dangerous knowledge gap in place, and the result has been less-than-stellar: widely criticized definitions of privacy and explainability, and definitions of AI that create exploitable loopholes, are among some of the more concerning results. Enter Gillian Hadfield, a Professor of Law and Professor of Strategic Management and Director of the Schwartz Reisman Institute for Technology and Society. Gillian's background is in law and economics, which has led her to AI policy, and definitional problems with recent and emerging regulations on AI and privacy. But, as I discovered during the podcast, she also happens to be related to Dylan Hadfield-Menell, an AI alignment researcher whom we've had on the show before. Partly through Dylan, Gillian has also been exploring how principles of AI alignment research can be applied to AI policy, and to contract law. Gillian joined me to talk about all that and more on this episode of the podcast. 
--- Intro music: - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc --- Chapters:   1:35 Gillian's background  8:44 Layers and governments' legislation  13:45 Explanations and justifications  17:30 Explainable humans   24:40 Goodhart's Law   29:10 Bringing in AI alignment   38:00 GDPR   42:00 Involving technical folks   49:20 Wrap-up 	2021-11-17T00:00:00	3067377	Towards Data Science	Note: The TDS podcast's current run has ended.   Researchers and business leaders at the forefront of the field unpack the most pressing questions around data science and AI.	The TDS team 	130
243	7FDFPXsK9kYuNyHlFFxfSF	#14: Julia Reinhardt - What do GDPR and AI regulations mean for Silicon Valley?	In this episode of Voices of the Data Economy, we spoke to Julia Reinhardt. She is a San Francisco-based expert in Artificial Intelligence governance and privacy, and a public policy consultant. As a Mozilla Fellow in Residence, Julia assesses the opportunities and limitations of European approaches to trustworthy Artificial Intelligence in Silicon Valley and their potential for U.S. businesses and advocacy. During our conversation, Julia spoke about the different facets of GDPR's impact on Silicon Valley, and the challenges of upcoming AI regulation. Voices of the Data Economy is supported by the Ocean Protocol Foundation. Ocean is kickstarting a Data Economy by breaking down data silos and equalizing access to data for all. This episode was hosted by Diksha Dutta, with audio engineering by Aneesh Arora.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/dataeconomy/message	2021-08-05T00:00:00	2631863	Voices of the Data Economy	Listen to voices shaping the Data Economy and challenging it at the same time. Host Diksha Dutta talks to researchers & Data Scientists; tech policy experts; and pioneers in Decentralization - all sharing their relationship with data. We talk about breaking down data silos and equalizing access to data for all. Voices of the Data Economy is a podcast supported by the Ocean Protocol foundation.	Diksha Dutta 	23
244	71ZywNxSZWpo6IBRP6p4f4	AI Today Podcast: Revisiting Worldwide Laws and Regulation around Data and AI	In this episode of the AI Today podcast we'll share some of our insights and key findings from a recent Cognilytica market intelligence coverage area. We'll be digging deeper into the state of worldwide laws and regulations, in particular data and AI laws. In recent research, Cognilytica analyzed over 200 countries, with specific emphasis on various regions, states, and territories, to determine the real state of laws and regulations around AI and related areas. Continue reading AI Today Podcast: Revisiting Worldwide Laws and Regulation around Data and AI at Cognilytica.	2021-06-03T00:00:00	2546744	AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion	AI Today Podcast	AI & Data Today	447
245	4VkpJADE74cyfa3WNA2IMe	Checking AI bias is a job for the humans	By pre-processing or post-processing data, or even setting datasets to expire, humans can step in to correct machine learning models  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2021-05-01T00:00:00	343632	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
246	139Sr3w71SKV9ovc1JAvjT	A.I EP-06 | Bias, Variance & Tradeoff	Short & clear-cut info about bias, variance & the tradeoff	2021-03-29T00:00:00	211998	Artificial Intelligence 	This Podcast will give you short & clear-cut information about A.I & its terms	Pantech Solutions	24
247	6OUdMQJtFGTAjBoMW2cstU	Why humans and AI are stuck in a standoff against fake news	Fake news is a scourge on the global community.  ---   Send in a voice message: https://podcasters.spotify.com/pod/show/tonyphoang/message	2021-03-17T00:00:00	475611	The Artificial Intelligence Podcast	News, Hot Topics and Trends in Artificial Intelligence, Machine Learning and Data Science	Dr. Tony Hoang	626
248	3d8RK1oW94zoyBQmz2RSsh	002 - Guest: Audrey Tang, Digital Information Minister for Taiwan	What does one of the nations that has most successfully defended itself against the coronavirus have to teach us about AI? Plenty, when the architect of their digital response to COVID-19 is also a technology genius. In this episode, I talk with Audrey Tang, Taiwan's Digital Information minister. She's responsible for Taiwan's "Digital Democracy" and has a sky-high IQ. We talk about what digital democracy is, how it powered Taiwan's virus defense, and how it also defeats disinformation campaigns ranging from conspiracy rumors to information warfare. What is the responsibility of government to its citizens as technology advances, if that advance may leave some behind? We get into that and much more in this episode. For more on Audrey's "humor vs rumor" approach to combating information warfare, see https://www.youtube.com/watch?v=ClmT6bZX5yE. Transcript at HumanCusp Blog. Image credit: Wikipedia.	2020-06-29T00:00:00	1995157	Artificial Intelligence and You	What is AI? How will it affect your life, your work, and your world?	aiandyou	206
249	1wpKsb5SZWLyKlCsGlb8uq	Artificial Intelligence and the Future of Regulation	On November 15, 2019, the Gray Center hosted a public policy conference on Technology, Innovation, and Regulation. For this conference, scholars wrote and presented papers on the way regulation affects technological innovation, and vice versa. The Gray Center convened expert panels on topics including whether social media should be regulated for neutrality, regulatory sandboxes, and other... Source	2020-04-16T00:00:00	4720248	Gray Matters	The C. Boyden Gray Center for the Administrative State, at George Mason University's Antonin Scalia Law School, supports research and debate on the modern administrative state and the constitutional issues surrounding it. In this podcast, we'll discuss some of the questions being debated around modern administration, some new questions, some timeless ones. And you can also get the audio from Gray Center events.	The C. Boyden Gray Center for the Administrative State	146
250	0XobSAE6ifgZIjUYcfxNTJ	Artificial Intelligence, Ethics and Regulations	When it comes to artificial intelligence there's good news and bad news. On the plus side, AI could save millions of lives a year by putting robots behind the wheels of cars or helping scientists discover new medicines. On the other hand, it could put us under surveillance, because a computer thinks our recent behavior patterns suggest we might be about to commit a crime.	2019-04-12T00:00:00	275342	RoozCast	Roozbeh Aliabadi is an advisor and commentator on geopolitical risk and geoeconomics, particularly the Middle East and Asia. He is an advisory partner in the global advisory practice at GGA, a boutique international consultancy practice based in New York City.	Roozbeh Aliabadi	118
251	2VRyO2rT5hdow9iS6vBIxb	Ethical Conundrums, Government Regulation, and the Future of AI with Toby Walsh	Today, we're chatting with AI expert, activist, and author Toby Walsh. Toby is a professor of artificial intelligence at the University of New South Wales and Data61, with an educational background from the University of Cambridge and University of Edinburgh in mathematics, theoretical physics, and artificial intelligence. He has also been Editor-in-Chief of the Journal of Artificial Intelligence Research and has chaired multiple AI conferences. As an activist, Toby helped in the release of an open letter, which gained over 20,000 signatures, that called for a ban on offensive autonomous weapons. Toby is also the author of multiple books on artificial intelligence, the most recent of which is 2062: The World That AI Made, which we will discuss today. Dr Walsh shares his thoughts on the future of AI, when and how it could surpass humans, and how little we know about how it could impact our jobs. Toby also discusses with us other ways AI can impact society, for better or for worse. How can our data privacy impact AI, and how can microtargeting using this data change the course of history? What ethics should we stand by? Who is responsible for decisions made by autonomous machines? And could government regulation actually help, rather than hinder, innovation?   Insight into Toby's most recent book, 2062: The World That AI Made, touted as the book to read, bar none, on AI and society. Will AI have consciousness? Is consciousness a biological construct? AI's impact on jobs. Will more jobs be displaced than created? Could AI lead to the second Renaissance when people focus on what's truly important? Which fictional future will AI lead us most closely to? Blade Runner, A Space Odyssey, Altered Carbon, The Jetsons, etc.? How did Toby become involved in AI? How did he become an activist? Which legal and ethical issues do we need to look at surrounding AI? 
AI is being used to target suspicious people for criminal surveillance. Bosses are now sometimes algorithms, as is the case with Uber. Should we regulate data monopolies? What is happening to our data privacy, and how is it used to manipulate us? Cambridge Analytica and Facebook have been accused of manipulating elections through microtargeting using collected personal data. What is the regulatory landscape in Australia? How has it been impacted by GDPR? How has data been used illegally to discriminate based on race, gender, and other similar factors? What is the potential impact of government regulation on automation in travel? How could the safety of transportation increase through regulated data sharing in automated cars and planes? What are the advantages of machines over humans? How will global learning change humanity? Do we want humans to be manipulated to such an extreme degree? Can humans be hacked? Should we regulate weapons of mass persuasion? How did AlphaGo create a Sputnik Moment for AI? What impacts has it had? Is robotic soccer Australia's Sputnik Moment in AI?  For full show notes and resources head to: eliiza.com/podcast/episode-3	2018-12-10T00:00:00	3389805	AI Australia Podcast	On this podcast, your hosts James Wilson and Nigel Dalton have in-depth conversations with technology leaders, academics, and AI professionals about all things artificial intelligence. We'll explore a broad range of topics including AI strategy, technology, and ethics, providing valuable insights for Australian businesses at all stages of AI adoption.	James Wilson	48
252	6OqWczrblFGPQvxdXxFFpr	AI in Industry: How AI Ethics Impacts the Bottom Line - An Overview of Practical Concerns	This week on AI in Industry, we are talking about the ethical consequences of AI in business. If a system were to train itself to act in unethical or legally reprehensible ways, it could take actions such as filtering or making decisions about people based on race or gender. When machine learning is integrated into technology products, could a misbehaving system put the company at financial and legal risk? Our guest this week, Otto Berkes, Chief Technology Officer of New York-based CA Technologies, speaks to us about realistic changes in the technology planning and testing process that leaders need to consider. We discussed how businesses could integrate machine learning into their products and services while still protecting themselves from potential legal downsides. See the full interview article featuring Otto Berkes live at: https://www.techemergence.com/?p=13752&preview=true	2018-08-20T00:00:00	1588976	The AI in Business Podcast	The AI in Business Podcast is for non-technical business leaders who need to find AI opportunities, align AI capabilities with strategy, and deliver ROI.  Each week, Emerj Artificial Intelligence Research CEO Daniel Faggella interviews top AI executives from Fortune 500 firms and unicorn startups - to uncover trends, use-cases, and best practices for practical AI adoption.  Subscribe to Emerj's weekly AI newsletter by downloading our "Beginning with AI" PDF guide: https://emerj.com/beg1	Daniel Faggella	821
253	4k9Q8zHfUGEF3htpDIM0Xa	GDPR	By now, you have probably heard of GDPR, the EU's new data privacy law. It's the reason you've been getting so many emails about everyone's updated privacy policy.  In this episode, we talk about some of the potential ramifications of GDPR in the world of data science.	2018-06-11T00:00:00	1103986	Linear Digressions	Linear Digressions is a podcast about machine learning and data science.  Machine learning is being used to solve a ton of interesting problems, and to accomplish goals that were out of reach even a few short years ago.	Ben Jaffe and Katie Malone	291
