
The Marketing AI Show


Marketing AI Institute
The Marketing AI Show makes artificial intelligence actionable and approachable for marketers. Brought to you by the creators of the Marketing AI Institute and ...

Available episodes

5 of 51
  • #50: Prompt Engineering Best Practices from OpenAI, How GPT-4 Could Reshape Healthcare, and The Hidden Costs of AI Adoption
    Thanks for joining us for episode 50! While AI breakthroughs slowed down this week, insights, best practices, and conversations continued. Paul Roetzer and Mike Kaput catch up on the artificial intelligence news impacting marketing and business leaders.
    OpenAI dropped chat prompt suggestions
    Logan Kilpatrick from OpenAI gave us helpful tips on crafting prompts. Kilpatrick offers six seemingly simple strategies for getting better results: write clear instructions, provide reference text, split complex tasks into simpler subtasks, give GPTs time to "think", use external tools, and test changes systematically. Is it really that easy? What has OpenAI learned, and how can marketers follow these strategies while still differentiating themselves?
    Could generative AI transform healthcare?
    One expert thinks so. Dr. Robert M. Wachter, professor and chair of the Department of Medicine at the University of California, San Francisco, outlines why in a new essay commissioned by Microsoft. In it, Dr. Wachter says he's optimistic that generative AI systems like GPT-4 have the potential to reshape how healthcare works. The article caught Paul's attention, and Paul and Mike break it down on the podcast, discussing not only marketing but also better patient outcomes and lower healthcare costs.
    High costs and AI adoption
    According to a new report from The Information: "More than 600 of Microsoft's largest customers, including Bank of America, Walmart, Ford, and Accenture, have been testing the AI features in its Microsoft Office 365 productivity apps, and at least 100 of the customers are paying a flat fee of $100,000 for up to 1,000 users for one year, according to a person with direct knowledge of the pilot program." The proposed pricing models for AI features will shape how business leaders, especially at small businesses, make decisions about AI adoption.
    This helpful episode of The Marketing AI Show can be found on your favorite podcast player. Be sure to explore the links below.
    6-6-2023
    42:35
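As a rough illustration of those six prompting strategies, the first four can be sketched as a small prompt-building helper. This is a hypothetical function of our own for illustration only; it is not an OpenAI API, and the prompt structure is one plausible reading of the advice, not OpenAI's exact format:

```python
def build_prompt(instruction, reference_text, subtasks):
    """Assemble a prompt following the prompting strategies discussed above.
    Hypothetical helper for illustration; not part of any OpenAI library."""
    parts = [
        # Strategy 1: write clear instructions.
        f"Task: {instruction}",
        # Strategy 2: provide reference text for the model to ground its answer in.
        f"Reference:\n{reference_text}",
        # Strategy 3: split the complex task into simpler subtasks.
        "Steps:\n" + "\n".join(f"{i}. {s}" for i, s in enumerate(subtasks, 1)),
        # Strategy 4: give the model time to "think" before answering.
        "Work through each step before giving your final answer.",
    ]
    return "\n\n".join(parts)

prompt = build_prompt(
    "Summarize the product launch email for a LinkedIn post.",
    "Acme Widget 2.0 ships June 1 with offline mode.",
    ["List the key facts", "Draft a one-sentence hook", "Write the post"],
)
```

Strategies five and six (use external tools, test changes systematically) apply to the workflow around the prompt rather than the prompt text itself, so they don't appear in the sketch.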
  • #49: Google AI Ads, Microsoft AI Copilots, Cities and Schools Embrace AI, Top VC’s Best AI Resources, Fake AI Pentagon Explosion Picture, and NVIDIA’s Stock Soars
    Google Introduces AI-Powered Ads
    Google just announced new AI features within Google Ads, from landing page summarization to generative AI that helps with relevant and effective keywords, headlines, descriptions, images, and other assets for your campaign.
    Microsoft Rolls Out AI Copilots and AI Plugins
    Two years ago, Microsoft rolled out its first AI "copilot." This year, Microsoft introduced copilots across core products and services, including AI-powered chat in Bing, Microsoft 365 Copilot, and others across products like Microsoft Dynamics and Microsoft Security.
    Cities and Schools Embrace Generative AI
    We see some very encouraging action from schools and cities regarding generative AI. According to Wired, New York City Schools have announced they will reverse their ban on ChatGPT and generative AI. Additionally, the City of Boston's chief information officer sent guidelines to every city official encouraging them to start using generative AI to understand its potential.
    AI Resources from Andreessen Horowitz
    Andreessen Horowitz recently shared a curated list of resources, their "AI Canon," that they've relied on to get smarter about modern AI. It includes papers, blog posts, courses, and guides that have had an outsized impact on the field over the past several years.
    DeepMind's AI Risk Early Warning System
    In DeepMind's latest paper, co-authored with colleagues from a number of universities and organizations, they introduce a framework for evaluating novel threats such as misleading statements, biased decisions, or repeated copyrighted content.
    OpenAI's Thoughts on the Governance of Superintelligence
    Sam Altman, Greg Brockman, and Ilya Sutskever recently published their thoughts on the governance of superintelligence. They say that proactivity and risk mitigation are critical, alongside special treatment and coordination of superintelligence efforts.
    White House Takes New Steps to Advance Responsible AI
    Last week, the Biden-Harris Administration announced new efforts that "will advance the research, development, and deployment of responsible artificial intelligence (AI) that protects individuals' rights and safety and delivers results for the American people." This includes an updated roadmap and a new report on the risks and opportunities related to AI in education.
    Fake Image of Pentagon Explosion Causes Dip in the Stock Market
    A fake image purporting to show an explosion near the Pentagon was shared by multiple verified Twitter accounts on Monday, causing confusion and leading to a brief dip in the stock market. Based on the actions and reactions of the day, are we unprepared for this technology?
    Meta's Massively Multilingual Speech Project
    Meta announced its Massively Multilingual Speech (MMS) project, which combines self-supervised learning with a new dataset that provides labeled data for over 1,100 languages and unlabeled data for nearly 4,000 languages. Meta is also publicly sharing its models and code so that others in the research community can build upon its work.
    More Funding Rounds
    Anthropic raised $450 million in Series C funding. Figure raised a $70M Series A to accelerate robot development, fund manufacturing, design an end-to-end AI data engine, and drive commercial progress. OpenAI CEO Sam Altman raised $115 million in a Series C funding round for Worldcoin, which aims to distribute a crypto token to people "just for being a unique individual."
    NVIDIA Stock Soars on Historic Earnings Report
    Nvidia's stock blew past already-high expectations in last Wednesday's earnings report. Dependency on Nvidia is so widespread that Big Tech companies have been working on developing their own competing chips, much like Apple spent years developing its own chips so it could avoid having to rely on (and pay) other companies to outfit its devices.
    30-5-2023
    59:45
  • #48: Artificial Intelligence Goes to Washington, the Biggest AI Safety Risks Today, and How AI Could Be Regulated
    AI came to Washington in a big way. OpenAI CEO Sam Altman appeared before Congress for his first-ever testimony, speaking at a hearing called by Senators Richard Blumenthal and Josh Hawley. The topic? How to oversee and establish safeguards for artificial intelligence.
    The hearing lasted nearly three hours and focused largely on Altman, though Christina Montgomery, an IBM executive, and Gary Marcus, a leading AI expert, academic, and entrepreneur, also testified. During the hearing, Altman covered a wide range of topics, including the different risks posed by generative AI, what should be done to address those risks, and how companies should develop AI technology. Altman even suggested that AI companies be regulated, possibly through the creation of one or more federal agencies and/or some type of licensing requirement.
    The hearing was divisive. Some experts applauded what they saw as much-needed urgency from the federal government to tackle important AI safety issues. Others criticized the hearing for being far too friendly, citing worries that companies like OpenAI are angling to have undue influence over the regulatory and legislative process.
    An important note: this hearing appeared to be informational in nature. It was not called because OpenAI is in trouble, and it appears to be the first of many such hearings and committee meetings on AI moving forward.
    In this episode, Paul and Mike tackle the hearing from three different angles as the three main topics, and also discuss a series of lower-profile government meetings that occurred. First, they do a deep dive into what happened, what was discussed, and what it means for marketers and business leaders. Then they take a closer look at the biggest issues in AI safety that were discussed during the hearing and that the hearing was designed to address.
    At one point during the hearing, Altman said, "My worst fear is we cause significant harm to the world." Lawmakers and the AI experts at the hearing cited several AI safety risks they're losing sleep over. Overarching concerns included election misinformation, job disruption, copyright and licensing, generally harmful or dangerous content, and the pace of change.
    Finally, Paul and Mike talk through the regulatory measures proposed during the hearing and what dangers there are, if any, of OpenAI or other AI companies tilting the regulatory process in their favor. Some tough questions were raised in the process. Senate Judiciary Chair Senator Dick Durbin suggested the need for a new agency to oversee the development of AI, and possibly an international agency. Gary Marcus said there should be a safety review, similar to the FDA's process for drugs, to vet AI systems before they are deployed widely, advocating for what he called a "nimble monitoring agency." On the subject of agencies, Senator Blumenthal cautioned that the agency or agencies must be well-resourced, with both money and the appropriate experts. Without those, he said, AI companies would "run circles around us." As expected, this discussion wasn't without controversy.
    Tune in to this critically important episode of The Marketing AI Show. Find it on your favorite podcast player and be sure to explore the links below.
    Listen to the full episode of the podcast
    Want to receive our videos faster? SUBSCRIBE to YouTube!
    Visit our website
    Receive our weekly newsletter
    Register for a free webinar
    Come to our next Marketing AI Conference
    Enroll in AI Academy for Marketers
    Join our community on Slack, LinkedIn, Twitter, Instagram, and Facebook.
    23-5-2023
    54:18
  • #47: Huge Google AI Updates, Teaching Large Language Models to Have Values, and How AI Will Impact Productivity and Labor
    Another week of big news from Google
    Google just announced major AI updates, including an AI makeover of search. The updates were announced at Google's I/O developers conference, and some of the more important ones are discussed on the podcast. A new next-generation large language model called PaLM 2 "excels at advanced reasoning tasks, including code and math, classification and question answering, translation and multilingual proficiency better than our previous state-of-the-art LLMs." Next, an AI makeover of search through Google's "Search Generative Experience" will deliver conversational results to search queries. This will become available to users who sign up for Google's Search Labs sandbox. Additional improvements include new AI writing tools for Gmail, the removal of the waitlist for Bard, and the ability to create full documents, generate slides, and fill in spreadsheets across tools like Docs, Slides, and Sheets.
    What's next for Claude
    Anthropic, a major AI player and creator of the AI assistant Claude, just published research that could have a big impact on AI safety. In it, the company outlines an approach it is using called "Constitutional AI": giving a large language model "explicit values determined by a constitution, rather than values determined implicitly via large-scale human feedback." The concept is designed to address the limitations of large-scale human feedback, which traditionally determines the values and principles of AI behavior, and aims to enhance the transparency, safety, and usefulness of AI models while reducing the need for human intervention. The constitution of an AI model consists of a set of principles that guide its outputs; in Claude's case, it encourages the model to avoid toxic or discriminatory outputs, refrain from assisting in illegal or unethical activities, and aim to be helpful, honest, and harmless. Anthropic emphasizes that this living document is subject to revisions and improvements based on further research and feedback.
    More on the economy and knowledge workers
    In a recent Brookings Institution article titled "Machines of Mind: The Case for an AI-Powered Productivity Boom," the authors explore the potential impact of AI, specifically large language models (LLMs), on the economy and knowledge workers. The authors predict LLMs will have a massive impact on knowledge work in the near future: "We expect millions of knowledge workers, ranging from doctors and lawyers to managers and salespeople, to experience similar ground-breaking shifts in their productivity within a few years, if not sooner." The productivity gains from AI will be realized directly through output created per hour worked (i.e., increased efficiency), and indirectly through accelerated innovation that drives future productivity growth. The authors say they broadly agree with a recent Goldman Sachs estimate that AI could raise global GDP by a whopping 7%. But there's more to it, so be sure to tune in.
    Listen to the full episode of the podcast: https://www.marketingaiinstitute.com/podcast-showcase
    Want to receive our videos faster? SUBSCRIBE to our channel!
    Visit our website: https://www.marketingaiinstitute.com
    Receive our weekly newsletter: https://www.marketingaiinstitute.com/newsletter-subscription
    Looking for content and resources? Register for a free webinar: https://www.marketingaiinstitute.com/resources#filter=.webinar
    Come to our next Marketing AI Conference: www.MAICON.ai
    Enroll in AI Academy for Marketers: https://www.marketingaiinstitute.com/academy/home
    Join our community:
    Slack: https://www.marketingaiinstitute.com/slack-group-form
    LinkedIn: https://www.linkedin.com/company/mktgai
    Twitter: https://twitter.com/MktgAi
    Instagram: https://www.instagram.com/marketing.ai/
    Facebook: https://www.facebook.com/marketingAIinstitute
    16-5-2023
    54:10
  • #46: Geoff Hinton Leaves Google, Google and OpenAI Have “No Moat,” and the Most Exciting Things About the Future of AI
    Hinton departs Google
    Geoffrey Hinton, a pioneer of deep learning and a VP and engineering fellow at Google, has left the company after 10 years due to new fears he has about the technology he helped develop. Hinton says he wants to speak openly about his concerns, and that part of him now regrets his life's work. He told MIT Technology Review: "I have suddenly switched my views on whether these things are going to be more intelligent than us. I think they're very close to it now and they will be much more intelligent than us in the future. How do we survive that?" He worries that extremely powerful AI will be misused by bad actors, especially in elections and war scenarios, to cause harm to humans. He's also concerned that once AI is able to string together different tasks and actions (like we're seeing with AutoGPT), intelligent machines could take harmful actions on their own. This isn't necessarily an attack on Google specifically. Hinton said that he has plenty of good things to say about the company, but he wants "to talk about AI safety issues without having to worry about how it interacts with Google's business."
    "No Moats"
    "We have no moat, and neither does OpenAI," claims a leaked Google memo revealing that the company is concerned about losing the AI competition to open-source technology. The memo, written by a senior software engineer, states that while Google and OpenAI have been focused on each other, open-source projects have been solving major AI problems faster and more efficiently. The memo's author says that Google's large AI models are no longer seen as an advantage, with open-source models being faster, more customizable, and more private. What do these new developments and rapid shifts mean?
    The exciting future of AI
    We talk about a lot of heavy AI topics on this podcast, and it's easy to get concerned about the future or overwhelmed. But Paul recently published a LinkedIn post that's getting a lot of attention because it talks about what excites him most about AI. Paul wrote, "Someone recently asked me what excited me most about AI. I struggled to find an answer. I realized I spend so much time thinking about AI risks and fears (and answering questions about risks and fears), that I forget to appreciate all the potential for AI to do good. So, I wanted to highlight some things that give me hope for the future..." We won't spoil it here, so tune in to the podcast to hear Paul's thoughts.
    Listen to this week's episode on your favorite podcast player and be sure to explore the links below for more thoughts and perspectives on these important topics.
    Visit our website: https://www.marketingaiinstitute.com
    Receive our weekly newsletter: https://www.marketingaiinstitute.com/newsletter-subscription
    Looking for content and resources? Register for a free webinar: https://www.marketingaiinstitute.com/resources#filter=.webinar
    Come to our next Marketing AI Conference: www.MAICON.ai
    Enroll in AI Academy for Marketers: https://www.marketingaiinstitute.com/academy/home
    Join our community:
    Slack: https://www.marketingaiinstitute.com/slack-group-form
    LinkedIn: https://www.linkedin.com/company/mktgai
    Twitter: https://twitter.com/MktgAi
    Instagram: https://www.instagram.com/marketing.ai/
    Facebook: https://www.facebook.com/marketingAIinstitute
    9-5-2023
    51:29


About The Marketing AI Show

The Marketing AI Show makes artificial intelligence actionable and approachable for marketers. Brought to you by the creators of the Marketing AI Institute and the Marketing AI Conference (MAICON), join us for weekly conversations with top authors, entrepreneurs, AI researchers, and executives as they share case studies, strategies, and technologies that have the power to transform your business and career.