Can AI and Human Creativity Coexist? The Battle for the Future of Visual Content
Can AI and human creativity truly coexist—or are we watching the beginning of the end for original artistry?
In this episode of Today in Tech, host Keith Shaw dives deep into the future of visual content with Alessandra Sala, Shutterstock's Head of AI and Data Science. Together, they explore how generative AI is transforming the creative industry, from image perfection and stock photography disruption to copyright chaos, ethical dilemmas, and artistic identity.
Discover:
Why Shutterstock chose to embrace, not resist, generative AI
How AI-generated content is both exciting and dangerously generic
The ongoing legal battle over AI authorship and content ownership
How artists can stay relevant (and possibly even thrive) with AI
What ethical guardrails and transparency measures are needed now
Whether a backlash to “too perfect” imagery is already underway
Follow TECH(talk) for the latest tech news and discussion!
--------
36:14
Inside Shadow AI: The Hidden Cyber Threat Already Inside Your Company
Shadow AI is already inside your company—and your security team can’t see it. Employees are using AI tools without approval, confidential data is leaking into public LLMs, and attackers are weaponizing AI faster than we can secure it. In this episode of Today in Tech, host Keith Shaw is joined by Etay Maor of Cato Networks, a cybersecurity expert and adjunct professor at Boston College, to reveal how Shadow AI is now one of the biggest threats to enterprise security.
We discuss how AI tools slip past IT monitoring, why AI is now the weakest link, how attackers jailbreak AI models, and why agentic AI could open the next wave of cyberattacks. Etay also shares real-world examples of AI-enabled cybercrime, and what companies MUST do now to gain AI visibility, enforce policies, and prevent data leaks.
Topics Covered:
What is Shadow AI and why is it dangerous?
38% of employees sharing sensitive data with AI tools
Why 90% of enterprise AI use is invisible
AI misuse by employees and insider risks
Jailbroken AI models and zero-knowledge threat actors
AI-powered phishing, deepfakes & identity fraud
Agentic AI and excessive permissions
How to monitor, detect and contain Shadow AI
--------
45:43
Why AI upskilling is failing, and how you can fix it | Ep. 255
In this episode of Today in Tech, host Keith Shaw is joined by Yvette Brown, co-founder of XPROMOS and a leading voice in generative AI education. They dive deep into the growing disconnect between AI adoption and employee readiness, with new research revealing that many AI projects are failing because upskilling efforts are falling short.
Yvette breaks down:
* Why relying on a “Debbie the AI gal” approach won't scale
* How AI “work slop” is flooding organizations with low-quality content
* What causes the “garbage in, garbage out” problem
* Why iteration, specificity, and context are critical when prompting
* The surprising power of tools like deep research and agentic AI pilots
They also explore practical AI fluency tips for marketers, managers, and knowledge workers, and discuss whether the holiday shopping season could be a breakthrough moment for consumer-facing AI agents.
Don’t miss this episode if you care about:
* Upskilling your team for AI success
* Avoiding common prompt engineering mistakes
* Using AI as a true collaborator, not just a shortcut
* Navigating the rise of agentic AI safely
Watch now and take on Yvette’s AI homework challenge: Ask an AI to analyze your job and help you work smarter.
--------
46:10
The hidden legal dangers of AI hiring tools, agentic decision-making | Ep. 254
As companies rush to implement AI and automated decision-making tools, they may be walking into a legal minefield. On this episode of Today in Tech, host Keith Shaw speaks with attorney Rob Taylor from Carstens, Allen & Gourley about the growing legal risks tied to agentic AI, automated hiring, and the rise of ADM (automated decision-making) regulations.
Rob breaks down:
* Why AI tools used in hiring and insurance may trigger liability
* How companies are getting ADM compliance wrong
* What laws already apply even without new AI regulations
* Real-world examples like credit scoring, job screening, and sentiment analysis
* Why disclosure, explainability, and data retention are essential
* Who’s liable: the company or the AI developer?
Chapters
00:00 Legal risks in AI and ADM
01:00 Common mistakes companies make
06:00 High-risk use cases: hiring, credit, insurance
10:00 Disclosure and consent pitfalls
15:00 Explainability and record-keeping laws
20:00 Unintentional bias in hiring algorithms
28:00 Who is liable: developer or deployer?
34:00 What future lawsuits might target
37:00 Fixing flawed AI governance
41:00 Litigation as the great teacher
--------
44:30
Why Zero Trust is struggling, and how AI could save it | Ep. 253
Zero trust was once the leading cybersecurity strategy, but has it lost momentum? In this episode of Today in Tech, host Keith Shaw speaks with Morey Haber, Chief Security Advisor at BeyondTrust, about whether zero trust is failing or is simply misunderstood.
They explore why many companies struggle to implement zero trust effectively, the gap between intention and execution, and how vendor marketing may have added confusion to the conversation. Morey explains why identity and privileged access management are now critical, how lateral movement works during attacks, and why many AI agents are dangerously over-privileged.
Topics include:
The misconception that zero trust is a product
How AI is reshaping the need for zero trust
The role of identity in modern cybersecurity
Real-world deployment challenges and mistakes
Why secure-by-design is often an afterthought
This episode is ideal for IT leaders, cybersecurity professionals, and anyone looking to better understand how zero trust fits into a world increasingly influenced by AI.
Host Keith Shaw and his expert guests discuss the latest technology news and trends happening in the industry. Watch new episodes twice each week or listen to the podcast.