
🔎 Find Out Where and How AI Gets its Info

PLUS: Top AI Presentation Tools

AI is great, but how do these powerful systems get their information? How can we trust that the output is reliable? Is AI biased?

If you’ve ever had any of these questions, then you are in the right place. Join us this week as we dive deeper into where LLMs get their information and whether we can actually trust it.

What to Expect:

  • Where Do LLMs Get Their Information?

  • How to Find the Best AI Presentation Maker

  • OpenAI’s Co-Founder Launches New AI Safety Venture

  • Level Up Your Fitness Journey

  • More AI Tech, Tools, and Talks

CONCEPT CORNER
Where Do LLMs Get Their Information?

Source: Made by The Drip Team via DALL-E

One of the biggest questions facing us today is: Where does AI get its information?

Short answer: LLMs get their information from their training data.

Long answer: LLMs are trained on vast datasets pulled from the internet, including websites, books, articles, forums, and other publicly available text sources. These models are not directly connected to live data or the internet; instead, they learn only from the data they were trained on, which usually covers content up to a certain cut-off date (e.g., GPT-4 was originally trained on data through September 2021, with later versions extending the cut-off into 2023).

The selection of training data for AI models is decided primarily by the AI’s developers, so it is important that these teams follow ethical guidelines and filter out biased or harmful content.

👉️ Remember: AI is not reasoning or forming opinions on its own. It is using statistical patterns to predict and generate responses based on this training data. Think of it like teaching a student by giving them a massive library of information and asking them to synthesize answers only based on what they’ve read, rather than continuously checking current news or facts.
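To make the “statistical patterns” idea concrete, here is a toy sketch: a bigram model that simply counts which word tends to follow which. (This is not how real LLMs work internally; they use neural networks over tokens trained on billions of examples. But the core principle of predicting the next token from patterns in training data is the same. The tiny training text below is made up for illustration.)

```python
from collections import defaultdict, Counter

# A miniature "training dataset" -- the model will only ever know
# the patterns present in this text, nothing beyond it.
training_text = "the cat sat on the mat . the cat ran to the door ."

# Count how often each word follows each preceding word (a bigram model).
counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training -- nothing more."""
    if word not in counts:
        return None  # no training data for this word: the model simply can't answer
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))     # most often followed by "cat" in the training text
print(predict_next("banana"))  # None -- never appeared in training
```

Just like the student with the library, this model can only remix what it has seen: ask it about a word outside its training text and it has nothing to offer, and if the training text contained errors, its predictions would faithfully repeat them.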

❗️ Bottom Line: Even OpenAI’s own CTO couldn’t say exactly where the training data came from… hard to believe, right?

So if we don’t know exactly where the data is coming from, how can we trust LLMs aren’t biased?

Bias is an inherent challenge in any AI system because it reflects the data it was trained on. LLMs don’t have intentions or agendas, but they can inherit biases present in the data. Meaning… if the training data includes biased viewpoints, the model might reflect that bias in its responses.

Several safeguards are in place to minimize these issues:

  1. Diverse Training Data: Efforts are made to include a broad range of sources to balance perspectives.

  2. Bias Mitigation Techniques: Researchers use methods to detect and reduce biases, such as refining how the model handles sensitive topics (we all remember Google overdoing it a bit…).

  3. Human Feedback: Continuous fine-tuning through feedback helps align the models with human values and expectations, although this is an ongoing process.

How Can We Trust The Information?

Unlike search engines that provide links to specific sources, LLMs don’t directly cite or provide references in their responses. They generate text based on a blend of all the content they’ve learned from, making it challenging to pinpoint exact sources.

To assess the reliability of LLM responses:

  1. Cross-Check Information: Use LLMs as a starting point, not the final word. Always verify critical facts with trusted, current sources!

  2. Look for Hallmarks of Trustworthy Information: Reliable responses usually align with known facts, come with logical reasoning, and avoid making definitive statements on uncertain topics.

⚠️ WARNING : LLMs can inadvertently generate misinformation if the data they were trained on contained errors. This risk is why some experts advocate for models that can better track and cite their sources or have real-time access to updated databases. Current research focuses on developing systems that can transparently show the origins of the information used, but this capability is still evolving.

So, What Can You Do?

  • Ask for Sources: When using LLMs, you can often ask, “What are some sources that support this?” or “Can you explain where this information might come from?” This won’t give direct citations but can help you understand the general context.

  • Stay Skeptical: Treat LLM responses as educated guesses, especially when dealing with critical decisions. Cross-referencing with up-to-date sources remains essential.

  • Follow Updates: AI companies frequently release updates on model improvements, including steps taken to improve accuracy and reduce bias. Staying informed on these developments can help you better understand and trust the tools you’re using.

But don’t worry - for those of you who want to know where your information comes from, you can always use Perplexity AI. Perplexity’s model will give you a list of all the sources it used to synthesize its answer, giving you full visibility into where your information is coming from. Let us know in the survey below if you want to learn more about Perplexity.

HOW TO: HACKS
Finding the Best AI Presentation Maker

AI presentation makers can save you time and elevate your slides, but not all tools are created equal. Whether you're looking for quick, simple slides or detailed, professional presentations, finding the right tool depends on your specific needs.

Knowing what you want to ultimately do will help you pick the best tool. We’ve reviewed the top tools so you don’t have to 👇️ 

Beautiful.AI

  • Pros: Easy drag-and-drop features; sleek, professional design suggestions

  • Cons: Less flexibility for highly personalized slides

  • Best for: Sales pitches, marketing reports, and pitch decks

  • Pricing: Basic plan starts at $12/month

Canva

  • Pros: User-friendly interface for all levels; large library of templates, images, and animations

  • Cons: Lacks advanced AI capabilities

  • Best for: Creative presentations (for educators, social media content, and other creatives)

  • Pricing: Free, with optional Pro subscription at $12.99/month

Magic Slides

  • Pros: Quick slide creation with simple suggestions; easy integration with Google Slides

  • Cons: Very basic design and limited customization options

  • Best for: Fast, straightforward presentations without complex elements

  • Pricing: Free, with optional Pro subscription at $12/month

Plus AI

  • Pros: Highly customizable; easy Google Slides add-on

  • Cons: May have a learning curve for beginners; requires subscription for full access

  • Best for: Business, educational, and formal presentations

  • Pricing: Free 7-day trial, then $10/month


Other Tools to Check Out:

  • Gamma for Best Visuals and Narrative Decks

  • Decktopus for Best Slide Design

  • Tome for Best Business Decks

⭐️ Our Recommendation: Plus AI

❓️ Why:

- Works directly in Google Slides and PowerPoint

- Converts any PDFs/Documents into PowerPoint

- Edits Slide Content with AI-Powered Improvements

- Use a Prompt to Generate PowerPoint from Scratch (How to write a good prompt)

Check out this guide for more info or watch the video below.

AI NEWS
OpenAI’s Co-Founder Launches New AI Safety Venture

Source: Made by The Drip Team via DALL-E

  • 💼 What happened: Ilya Sutskever, one of OpenAI’s co-founders, is back with a new startup: Safe Superintelligence (SSI). After a dramatic exit from OpenAI—thanks to a messy fallout involving CEO Sam Altman’s brief ousting—Sutskever needed a fresh start. His old “Superalignment” team at OpenAI, which was all about keeping AI on humanity’s side, was dismantled. So, what’s an AI legend to do? Start a new, safety-focused venture, of course.

  • 🤖 The goal: SSI aims to build super-smart AI that is safe for humans and won’t accidentally (or intentionally) ruin our day or go rogue. With $1 billion from venture heavyweights like Andreessen Horowitz and Sequoia Capital, SSI is now valued at $5 billion, just three months after being founded.

  • ❓️ What’s different: SSI is rewriting the AI playbook by building a tight-knit crew that values character over credentials and safety over profit. They plan to spend years on safety-first research before any products hit the market, in an effort to make sure AI doesn’t go all Terminator on us.

  • 🖼️ The bigger picture: With AI evolving rapidly, concerns are growing about its potential misuse. Tech leaders like Sam Altman and Elon Musk are sounding the alarm on risks like AI-driven misinformation and cybersecurity threats. The takeaway? We need strong oversight and smart safety research to ensure AI doesn’t turn into a Pandora’s box of problems.

PROMPT OF THE WEEK
For: Fitness Newbies


I’m committed to kickstarting a fitness journey, but I need a realistic plan that I can stick to without feeling overwhelmed. I’m not a gym pro, so let’s keep it simple but effective.

Here’s my current fitness level and daily routine:
[Share details about your activity level, typical day, any physical limitations, and fitness goals.]

Based on this, help me create:

- A beginner-friendly 4-week workout plan that I can do at home or at the gym, including quick workouts that fit into a busy schedule.
- Tips for maintaining motivation and how to keep workouts fun (think music, apps, or workout buddies).
- Strategies for overcoming common hurdles, like missing a workout or feeling unmotivated.

Design a plan that’s as sustainable as it is motivational, so I can build a healthy habit that lasts!

Additional Tips for an Ideal Response:

  • Be Specific About Your Goals: Clearly state your goals (e.g., lose 5 kgs, run a 5K, gain muscle tone).

  • Mention Any Physical Limitations or Preferences: If you have injuries, don’t like certain exercises, or prefer indoor workouts, include that information.

  • Outline Potential Barriers: Identify possible challenges, like time constraints, lack of space, or motivation issues.

If you’re looking for nutrition tips or personalized meal plans, check out last week’s prompt here.

TECH TOOLS, TIPS, AND TALKS

📖 What we’re reading: The Hard Thing About Hard Things - Navigating tough business challenges with practical advice and insights.
📻 What we’re listening to: The AI Podcast - Accelerating the biopharmaceutical industry with AI.
💻 What we’re using: Plus AI - Supercharge your presentations.

MORE READING

We want to empower our readers with genuinely insightful knowledge so that they can be more confident, informed leaders. Because let's face it, AI could be running the world pretty soon… so shouldn't we at least know how it works? If you are curious about a topic and want to learn more, drop us a message below👇🏼