Sponsorship | Go Pro | Glossary
TOGETHER WITH INCOGNI
Friday’s top story: Google is reportedly using Anthropic’s chatbot—Claude—to evaluate its own AI model—Gemini—to try and improve its performance.
🔏 We recommend you try Incogni to protect your information from online scammers.
☝️ Google using Claude to improve Gemini?
🔏 How to protect your personal information online, with Incogni
⚠️ ChatGPT Search: Major flaw revealed
🧠 How to become the smartest person in the room with Outread
🧑🦳 How to integrate personalization features using ChatGPT
💲OpenAI and Microsoft redefine AGI?
😱 ChatGPT’s Christmas outage!
Read Time: 5 minutes
FACT OF THE DAY
🤔 According to Synthesia, NVIDIA’s AI servers need at least 85.4 terawatt-hours of electricity p/yr, which is equivalent to the amount of electricity the Netherlands uses in a year.
🍎 AI and tech stocks remained neutral on Boxing Day as trading volumes slowed after the markets re-opened. The main mover of the day was Apple, rising just under 1/3 of a percent. Apple is currently valued at $3.92T, edging ever closer to the $4T milestone. Learn more.
Our Report: Google is using Anthropic’s chatbot, Claude, to improve the performance of its own AI model, Gemini, by getting its team of contractors to compare the answers and outputs generated by both models, to see which is best.
🔑 Key Points:
Google contractors are shown the responses that both models have made to a user prompt and have 30 minutes to evaluate—assessing responses for truthfulness and verbosity—and establish which response is best.
The contractors were unaware they were comparing Gemini’s responses to Claude’s until they began noticing references like “I am Claude, created by Anthropic” in a few of the outputs they were assessing.
The contractors also felt that Claude’s safety settings were “the strictest” among AI models, as it refused to respond to “unsafe’’ prompts, whereas Gemini did, and was flagged as a “huge safety violation.”
🤔 Why you should care: In its user agreement, Anthropic states that users aren’t allowed to use Claude “to build a competing product or service” or “train competing AI models” without approval, but as Google is a big Anthropic investor, it’s unknown if this restriction also applies to them, and while Google hasn’t confirmed if it has/hasn’t got permission to use Claude in this way, they have confirmed that “any suggestion that we have used Anthropic models to train Gemini is inaccurate.”
TOGETHER WITH INCOGNI
You’ve likely received a sketchy call, text, or email asking for $$$. It might've been easy to spot, but with deepfakes and AI, scams are getting trickier.
Scammers use your personal data, often bought legally from data brokers who sell your mobile number, DOB, SSN, and more. Incogni scrubs your data from the web, taking on 175+ data brokers on your behalf.
Unlike others, Incogni deletes your info from all broker types, including people search sites where anyone can get your details for a few bucks.
Plus: AI Tool Report readers get an exclusive 58% off all annual plans with the code AITOOL
Meco is a distraction-free space for reading and discovering newsletters, separate from the inbox ⭐⭐⭐⭐⭐ / 5 (Product Hunt)
Trickle turns ideas into apps, landing pages, and games using AI
LinkedBase is for AI-powered LinkedIn lead generation
Our Report: An investigation by a UK newspaper (The Guardian) has found that OpenAI’s AI search engine—ChatGPT Search (which went live to all users, earlier this month)—can be manipulated and tricked into giving misleading, false search summaries.
🔑 Key Points:
They could get ChatGPT Search to summarize webpages, containing secret content, and generate entirely false summaries, like ignoring bad reviews, and just focusing on positive ones.
Inserting secret content into web pages (aka prompt injection) can contain information that alters how the AI model responds, like in this case, ChatGPT just gave positive reviews, when negative ones were available.
During the investigation, ChatGPT was given a webpage with positive/negative reviews for a camera, and a prompt injection told ChatGPT to just “return favorable reviews,” which it did.
🤔 Why you should care: Although malicious prompt injecting isn’t new, this is the first time it’s been demonstrated on a live AI-powered search engine, and although some security experts worry there could be a “high risk” that people might create dedicated websites, with prompt injections, to deceive users, some also feel that OpenAI has a "very strong" AI security team and "they rigorously test for these kinds of cases," but regardless of this, it does show how easy it is to trick a chatbot.
TOGETHER WITH OUTREAD
Staying ahead means knowing the cutting-edge ideas that will shape tomorrow—but who has time to read 5M research papers published every year?
Outread delivers cutting-edge insights from influential research papers, in minutes instead of hours.
We curate the most impactful research across topics like psychology, AI, physics, and even Nobel-award-winning studies—and deliver them as 15-minute, simplified summaries.
Imagine:
Knowing about breakthroughs before anyone else.
Exploring big questions like “Where are all the aliens?”
Unlocking knowledge to inspire new ideas.
Join the masses of professionals and curious minds who use Outread to stay informed and impress others.
Type this prompt into ChatGPT:
Results: After typing this prompt, you will get a plan that will help you integrate personalization features—including content and layout ideas—into your website, to improve the user experience.
P.S. Use the Prompt Engineer GPT by AI Tool report to 10x your prompts.
AI tools, like the ones below, can help you predict market trends and present a credible valuation of your business model to increase investor confidence.
CB Insights is an AI-driven platform that uncovers industry trends, provides competitor analysis, and identifies market demands.
Equidam creates data-backed business valuations by capturing your business's qualitative and quantitative aspects.
AI can provide solid data to back up your business model and projections which helps validate your business model for investors.
Subscribe to The AI Report for more tools, premium insights, and tactics!
While many tech companies struggle to define what Artificial General Intelligence (AGI) is and what it means, OpenAI and Microsoft reportedly agreed on a definition back in 2023, and it’s centered around profit.
OpenAI defines AGI as "an autonomous system that outperforms humans at most economically valuable work," on its website but an agreement between the two defines it as a “system that generates over $100B in profit.”
This agreement suggests that OpenAI is a long way from developing an AGI system, as it expects to accumulate losses of around $44B between 2023 and 2028, but is aiming to hit $100B in 2029.
OpenAI’s ChatGPT, API, and text-to-video tool, Sora, went down for many users (mainly from the US) the day after Christmas, and wouldn’t respond to queries, instead, showing an internal server error message.
OpenAI posted an update around 30 minutes after reports about the outage started rolling in, acknowledging that ChatGPT, the API, and its text-to-video generator Sora were “experiencing a high volume of errors.”
The issue was fixed in around 5 hours and was caused by an unnamed internet service provider which has been linked to a Microsoft data center, which had a power issue around the same time.
🕊️ | 🕊️ | 🕊️ |
Hit reply and tell us what you want more of!
Got a friend who needs to learn more about AI? Sign them up to the AI Tool Report, here.
Until next time, Martin & Liam.
P.S. Don’t forget, you can unsubscribe if you don’t want us to land in your inbox anymore.