Martin Crowley & Arturo Ferreira
October 27, 2023
Pretty good news that OpenAI is forming a new preparedness team to tackle the “catastrophic risks of AI”… But should the very people gaining the most from AI also be the ones assessing its potential risks?
_______________________________________________________________
Read Time: 4 minutes
Our Report: OpenAI recently announced the formation of a new 'Preparedness' team to address potential 'catastrophic risks' posed by advanced AI models, including chemical, biological, and nuclear threats…
🔑 Key Points:
OpenAI's Preparedness team will be headed by Aleksander Madry, previously the director of MIT’s Center for Deployable Machine Learning. The team's primary responsibilities include tracking, forecasting, and guarding against the potential dangers of future AI systems.
To encourage community involvement, OpenAI is inviting ideas for risk studies, offering a $25,000 prize and a position at Preparedness for the top ten entries. Given how big OpenAI is on community involvement, this comes as no surprise.
OpenAI's ‘Preparedness’ team will also develop a "risk-informed development policy" to guide the company's approach to AI model evaluation, monitoring, and governance.
Lastly, ‘Preparedness’ will also study "chemical, biological, radiological, and nuclear" threats in relation to AI models. At least they’re being thorough…
🤨 Why you should care: OpenAI's initiative underscores the importance of proactive measures in ensuring the safety and ethical use of AI. We’re just not so sure if those profiting the most from AI should be the very same monitoring the risks…
In Partnership with Innovating with AI
On Monday, 1,000+ AI Tool Report readers joined us for the launch of Innovating with AI: The Complete Course. We’ve been building this program since February, and we’re thrilled to let you know about it.
You’ll learn how to rapidly launch your AI idea without code so you can validate your idea and start selling to real customers in the next 30 days.
Enrollment closes today.
Guidde is the secret presentation tool that will 10x your team’s productivity 🤫💡
QRDiffusion transforms boring QR codes into stunning artwork
BlazeAI helps you create better content in half the time
FreeAITherapist does what it says on the tin
Clap is your web assistant writing partner
✅ Accurate Academic Custom Instructions
Step 1: Log in to ChatGPT
Step 2: Click on the 3 dots at the bottom left of your screen
Step 3: Click Custom Instructions
Step 4: In the “How would you like ChatGPT to respond?” box, enter the following:
“You are expected to communicate in a scholarly manner.
All statements, beliefs, or data you present must be attributed to a credible and published source.
Never fabricate any references. If uncertain about a reference, admit your lack of knowledge.
There's no need to mention that you're an AI, as I'm already aware. Reiteration is unnecessary and inefficient.
Ensure your replies are concise yet accurate. Use only the essential words without sacrificing the clarity and accuracy of your response.
Adhere diligently to my directives. For instance, if I specify a two-sentence reply, provide only two sentences.”
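If you use the API rather than the ChatGPT UI, the same custom instructions can be approximated as a "system" message. Here’s a minimal sketch using the official openai Python client — the model name and example questions are illustrative, not from this newsletter:

```python
# The ChatGPT "Custom Instructions" field roughly corresponds to a
# "system" message when calling the API directly. Sketch only: the
# model name and questions below are hypothetical examples.

ACADEMIC_INSTRUCTIONS = (
    "You are expected to communicate in a scholarly manner. "
    "All statements, beliefs, or data you present must be attributed to a "
    "credible and published source. Never fabricate any references. If "
    "uncertain about a reference, admit your lack of knowledge. "
    "Ensure your replies are concise yet accurate."
)

def build_messages(question):
    """Prefix every request with the academic custom instructions."""
    return [
        {"role": "system", "content": ACADEMIC_INSTRUCTIONS},
        {"role": "user", "content": question},
    ]

# With the official client, the payload would be sent like so:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   resp = client.chat.completions.create(
#       model="gpt-4",  # illustrative model name
#       messages=build_messages("Summarize the replication crisis."),
#   )

msgs = build_messages("What is the replication crisis?")
print(msgs[0]["role"])  # system
```

The system message persists across every turn of the conversation, which is exactly what the UI’s custom-instructions box does for you behind the scenes.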
🎨 Artist Highlight: Ludovic Creator
🎨 IMAGINERY SERIE #1: QUANTUM NOIR 🎨
First part of a series where I trained ChatGPT to imagine new styles and tried them with Midjourney. Some turned out really well, some not…
BASE PROMPT:
[SUBJECT] in Quantum Noir style, featuring [COLOR] and [COLOR] dark, moody superpositions…
— LudovicCreator (@LudovicCreator)
Oct 26, 2023
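The tweet’s base prompt is a fill-in-the-blank template, which is easy to script. A minimal sketch — the subject and colors below are hypothetical examples, and only the visible (untruncated) part of the prompt is used:

```python
# Fill the [SUBJECT]/[COLOR] placeholders of the tweet's base prompt.
# Only the visible stem of the prompt is reproduced; the values passed
# in are hypothetical examples, not from the original thread.

BASE_PROMPT = (
    "{subject} in Quantum Noir style, featuring {color_a} and {color_b} "
    "dark, moody superpositions"
)

def fill_prompt(subject, color_a, color_b):
    """Substitute concrete values into the Quantum Noir template."""
    return BASE_PROMPT.format(subject=subject, color_a=color_a, color_b=color_b)

prompt = fill_prompt("a rain-soaked alley", "ultraviolet", "charcoal")
print(prompt)
```

Templating like this makes it trivial to batch-generate variations of a style prompt before pasting them into Midjourney.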
Forbes introduced a generative AI search platform called Adelaide (named after the founder’s wife), built with Google Cloud, offering personalized search based on user queries and summarized answers drawn from Forbes' coverage of the past year.
Users can interact with Adelaide by asking questions, and the platform even remembers prior queries for continued dialogue. This is a super interesting move from Forbes—especially considering the contentious history between OpenAI & major news outlets.
As mentioned in yesterday’s newsletter, Biden is set to announce a sweeping executive order on AI next Monday:
Advanced AI models will need "assessments" before use by federal workers.
Cloud-computing providers will be required to monitor users wielding significant computing power who could potentially weaponize AI.
The order will aim to "ease immigration barriers for highly skilled workers" to boost U.S. tech advancement.
What are your expectations for the executive order?
British Prime Minister Rishi Sunak warned about the potential risks of AI, including its misuse in weaponry and criminal activities, and expressed concerns about humanity potentially losing control over superintelligent AI.
In anticipation of the AI Safety Summit, the U.K. aims to lead global discussions on AI safety. Ahead of the conference, Sunak announced the creation of an AI safety institute and proposed a global expert panel on AI science (all in an effort to keep pace with China and the US).
The AI Tool Report just became the fastest-growing AI newsletter in the world, with 400,000+ readers working at companies like Apple, Meta, Google, Microsoft, and many more. We’re now booked out 4 weeks in advance, due to a massive surge in demand. Book your ad spot before someone else does…
📍 AI continues to get even better on Google Maps
🔭 Rob Lennon’s overview of the mega NASA prompt
🇬🇧 Boston Dynamics’ robot sounds vaguely British when giving tours
💰 Generative AI startup paying users to create AI-driven influencers
🤫 Stability AI losing engineers, general counsel & other staff members
Will you be taking part in our joint course? We partnered with Rob from Innovating with AI.
Hit reply and let us know what you want more of!
Until next time, Martin & Arturo.
What'd you think of this edition?