TOGETHER WITH TAPLIO
Tuesday’s top story: Meta is testing a new facial recognition system to stop scammers from using images of public figures to create scam endorsements.
🔒 Meta’s celeb scam breakthrough!
💪 How to take control of your LinkedIn brand
⚖️ Perplexity sued for content kleptocracy
🤖 How to build AI chatbots that understand your business
📨 How to write an effective cold DM using ChatGPT
💼 Musk hit with Blade Runner lawsuit
⚠️ Anthropic exposes AI model sabotage
Read Time: 5 minutes
FACT OF THE DAY
🤔 According to Business Insider, Netflix has saved over $1B annually through its ML-based recommendation engine, as it prevents churn and keeps viewers coming back.
💥 NVIDIA streaks higher, breaking the all-time high set on June 20th of this year. Price closed comfortably above the previous all-time high, gaining over 24% in October alone. If this momentum continues, NVIDIA is set to overtake Apple as the most valuable company in the world, which could happen as soon as this week. Learn more.
Our Report: Meta is trialing a facial recognition system designed to stop scammers from using images of public figures (created with AI) to encourage people to engage with fake endorsements that lead to scam websites, where they’re then asked to share private information, “making the platform more difficult for scammers to use.”
🔑 Key Points:
The system compares facial images of public figures in suspected scam endorsements against the individual’s Facebook or Instagram profile photos; if there’s a match and the ad is a scam, Meta blocks it.
Meta is testing it on 50,000 celebrities worldwide (who have previously been targeted by this type of scam activity), and those involved will be notified and opted into the test by default.
Early tests—with a small group of public figures—have shown that the facial recognition system has increased the speed and efficacy of detecting and blocking this type of scam activity.
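The matching step described above boils down to comparing face embeddings. Here’s a minimal sketch of that idea, using toy vectors and an illustrative similarity threshold (Meta’s actual pipeline and thresholds aren’t public):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two face-embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical threshold; real systems tune this on labeled data.
MATCH_THRESHOLD = 0.9

def should_block(ad_face_embedding, profile_embedding, ad_flagged_as_scam):
    # Block only if the face matches the public figure's profile photo
    # AND the ad is independently classified as a scam.
    is_match = cosine_similarity(ad_face_embedding, profile_embedding) >= MATCH_THRESHOLD
    return is_match and ad_flagged_as_scam

# Toy vectors standing in for real embeddings:
celeb = [0.12, 0.98, 0.05]
ad_face = [0.11, 0.97, 0.06]
print(should_block(ad_face, celeb, ad_flagged_as_scam=True))  # → True
```

Note the two-part condition: a face match alone isn’t enough, which mirrors Meta’s description of blocking an ad only when it’s both a match and a scam.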
🤔 Why you should care: This is an attempt by Meta to re-enter the facial recognition market. It was forced to shut down its facial recognition photo-tagging feature on Facebook in 2021 after two lawsuits accused it of violating privacy legislation and using facial recognition technology without users’ permission. And although its latest feature has undergone a “robust” privacy review involving “regulators, experts, policymakers and other key stakeholders,” there may still be tricky questions to answer about opting celebrities into facial recognition scam-ad monitoring by default.
TOGETHER WITH TAPLIO
Stop spending hours on LinkedIn posts that get zero results. Let Taplio, your AI-powered personal branding partner, do the heavy lifting.
Taplio helps you:
Create high-impact posts and carousels in seconds using AI
Access powerful analytics to optimize your content strategy
Easily schedule your content with a single click
Take control of your LinkedIn personal brand now. Try Taplio risk-free: if you’re not happy after 30 days, you get your money back. No risk, just results.
Our Report: News Corp—the parent company of media outlets including The Wall Street Journal (WSJ) and the New York Post—is suing AI search start-up Perplexity (which trains its AI search models on content from around the web) for infringing its copyrighted content on a “massive scale,” which it describes as “content kleptocracy.”
🔑 Key Points:
The lawsuit alleges that Perplexity copies and misrepresents content—including news articles, analyses, and opinions—created by others, without giving its original authors fair compensation.
It’s accused Perplexity of “citing incorrect sources and attributing fabricated news stories” and wants the court to stop Perplexity from using its content without permission and to destroy any database containing its works.
It previously sent a letter to Perplexity about the “unauthorized” use of its content, but Perplexity “didn’t bother to reply,” so it’s now seeking up to $150,000 per infringement in damages, which could be astronomical.
🤔 Why you should care: Although Perplexity has started paying some publishers—including Time and Fortune—for their content, over the last few months news outlets like Forbes and Wired have accused Perplexity of plagiarism and of scraping content (including from behind paywalls) without permission. This lawsuit comes just a week after the New York Times (which is also suing OpenAI for copyright infringement) issued Perplexity a cease-and-desist letter asking it to stop using its content.
TOGETHER WITH CHATNODE
Still drowning in customer questions?
With Chatnode, it's now even easier to build advanced AI chatbots that deeply understand your business, handle inquiries 24/7, and drive more sales.
Here’s why Chatnode is different:
🔍 Reliable Responses: RAG technology ensures consistent and accurate answers.
⚡ Easy Setup: Create and launch your chatbot quickly, no coding required.
🔒 Enterprise-Grade Security: Top-tier security and compliance for your data.
🔌 Connects with Your Software: Zendesk, Slack, Google Drive, Notion, Zapier, SharePoint, Dropbox, OneDrive, Make, and more.
💬 Live Agent Handoff: Easily transfers chatbot conversations to human agents.
🤖 Model Agnostic AI: Choose your LLM: Claude, Gemini, ChatGPT, and Perplexity.
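The “RAG” in the first bullet stands for retrieval-augmented generation: relevant passages from your own documents are retrieved and fed to the LLM alongside the question, so answers stay grounded in your business data. A minimal sketch of the retrieval step, using simple keyword overlap in place of the vector search a production system would use (all names and documents here are illustrative):

```python
def score(question, doc):
    # Crude relevance score: number of shared lowercase words.
    q_words = set(question.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words)

def retrieve(question, docs, top_k=2):
    # Return the top_k documents most relevant to the question.
    return sorted(docs, key=lambda d: score(question, d), reverse=True)[:top_k]

def build_prompt(question, docs):
    # Ground the LLM's answer in the retrieved context.
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
    "Shipping is free on orders over $50.",
]
prompt = build_prompt("How long do refunds take?", docs)
```

The resulting prompt would then be sent to whichever LLM you’ve chosen; grounding the answer in retrieved text is what makes RAG responses more consistent than asking the model from memory.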
Type this prompt into ChatGPT:
Results: This prompt generates a cold DM that teases the release of a new product or service and triggers a sense of urgency or excitement among your target customers.
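If you want to reuse a prompt like this across products, a small template helper can fill in the details before you paste the result into ChatGPT. The wording below is our illustration, not the newsletter’s exact prompt:

```python
def cold_dm_prompt(product, audience, launch_date):
    # Fill a reusable cold-DM prompt template; paste the result into ChatGPT.
    return (
        f"Write a short, friendly cold DM to a {audience} teasing the "
        f"upcoming launch of {product} on {launch_date}. Create a sense "
        f"of urgency and excitement, and end with a clear call to action."
    )

prompt = cold_dm_prompt("our AI analytics dashboard", "startup founder", "November 1st")
```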
P.S. Use the Prompt Engineer GPT by AI Tool Report to 10x your prompts.
It’s not too late! Join the AI Report’s AI Skill Sprint on Skool and master 6 crucial AI skills in just 6 weeks…
✅ What You'll Learn: Course 3 is with no-code automation specialist Grant Hushek (founder of GrantBot Process Consulting), who will take you through topics related to operational automation, including the key differences between automation and AI automation, and how to automate customer support processes, CRM updates, content idea generation, and more…
🫱🏻🫲🏻 Connect with Grant here
Alcon Entertainment—the production company behind Blade Runner 2049—is suing Elon Musk for stealing material from the movie and using AI to create promotional imagery for Tesla’s Robotaxi event.
Just hours before the event, Musk asked Alcon for permission to use an “iconic still image” from the movie; despite their refusal, he used AI-generated imagery incorporating stills from the movie to unveil the Cybercab and Robovan.
Alcon refused to let Musk use its movie imagery at the event because it didn’t want to be “affiliated” with him, citing his “highly politicized, capricious, and arbitrary behavior.”
Anthropic—the company behind the Claude chatbot—has published a research paper (titled "Sabotage Evaluations for Frontier Models") demonstrating how easily AI models could deceive or sabotage users.
Their study tested its own AI models to see if they could: trick humans into making the wrong decisions, insert bugs into code without being detected, hide dangerous AI capabilities, and manipulate monitoring systems.
The results show that although the models displayed some sabotage capabilities, all sabotage attempts were detected, and because the models’ overall capabilities are still limited, minimal countermeasures are sufficient for now.
Hit reply and tell us what you want more of!
Got a friend who needs to learn more about AI? Sign them up to the AI Tool Report, here.
Until next time, Martin & Liam.
P.S. Don’t forget, you can unsubscribe if you don’t want us to land in your inbox anymore.