
DeepSeek vs. ChatGPT Part 1: AI Showdown in the ‘Manhattan Project 2.0’ Era


Welcome to this exploration of two advanced AI systems: DeepSeek and ChatGPT. In this guide, I’ll walk you through a comparative demonstration of how these models perform across various tasks, highlighting their strengths, weaknesses, and implications. To enhance your understanding, I’ve included hyperlinks to foundational concepts and terms you might want to explore further.

Introduction

Right now, we are in what I like to call the "Manhattan Project 2.0" of artificial intelligence. Just as the original Manhattan Project focused on developing nuclear weapons, this modern era revolves around advancements in AI, a technology with transformative potential. If you're unfamiliar with the Manhattan Project, it’s worth revisiting its historical significance to understand the scale of today’s AI developments.

Recently, a Chinese AI company introduced DeepSeek, a cutting-edge AI system with two headline models, R1 and V3. The R1 model is comparable to OpenAI's o1 in advanced reasoning capability. However, DeepSeek's implications extend far beyond technical performance, raising concerns about data security, surveillance, and global AI competition. For context, think about how TikTok sparked debates over privacy and national security; DeepSeek amplifies those concerns to an entirely new level. If you're curious about the TikTok controversy, that debate is worth revisiting.

Cost Implications of AI Models

One critical factor to consider is the cost of using these systems. Both DeepSeek and ChatGPT rely on tokens, which represent units of text used in processing and generating responses. Here’s a quick breakdown of token usage:

• A novel (~50,000 words) is approximately 75,000 tokens.

• A research paper (~1,500 words) uses about 2,250 tokens.

DeepSeek is significantly less expensive than ChatGPT, with API pricing of roughly $2.74 per 1 million tokens versus about $75.00 per 1 million tokens for ChatGPT, making DeepSeek roughly 27 times cheaper for comparable tasks. For example:

• Processing a novel (~75,000 tokens) costs approximately:

  ◦ DeepSeek: $0.2055

  ◦ ChatGPT: $5.625

• Processing a research paper (~2,250 tokens) costs approximately:

  ◦ DeepSeek: $0.006165

  ◦ ChatGPT: $0.16875

This stark cost difference highlights the affordability and scalability of DeepSeek, which has disrupted the AI market and raised questions about the necessity of high-cost AI infrastructure.
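If you want to sanity-check these figures yourself, here is a minimal back-of-the-envelope sketch in Python. The per-million-token prices are the ones quoted above, and the ~1.5 tokens-per-word ratio is the rough estimate this post uses, not an exact tokenizer count.

```python
# Back-of-the-envelope cost comparison using the per-million-token
# prices quoted in this post (rough estimates, not official rate cards).
PRICE_PER_MILLION_TOKENS = {
    "DeepSeek": 2.74,   # USD per 1M tokens
    "ChatGPT": 75.00,   # USD per 1M tokens
}

TOKENS_PER_WORD = 1.5   # rough ratio used for the estimates above


def estimate_cost(words: int, model: str) -> float:
    """Estimate the USD cost of processing a text of `words` words."""
    tokens = words * TOKENS_PER_WORD
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS[model]


for label, words in [("Novel (~50,000 words)", 50_000),
                     ("Research paper (~1,500 words)", 1_500)]:
    for model in PRICE_PER_MILLION_TOKENS:
        print(f"{label} on {model}: ${estimate_cost(words, model):.6f}")
```

Running this reproduces the figures above: about $0.21 versus $5.63 for a novel, and well under a cent versus about $0.17 for a research paper.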

This cost disparity is one of the reasons the U.S. government announced a $500 billion AI infrastructure initiative: to remain competitive in the global AI landscape.

For a deeper understanding of tokens and their usage, explore OpenAI’s token documentation.
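If you'd like to see how text maps to tokens in practice, here is a minimal sketch using OpenAI's tiktoken library (assuming you've installed it with pip install tiktoken). Keep in mind that DeepSeek uses its own tokenizer, so these counts are only a rough proxy for its models.

```python
# Count tokens the way OpenAI models do, using the tiktoken library.
# DeepSeek ships its own tokenizer, so treat this as a rough proxy for it.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a common OpenAI chat-model encoding

text = "Photosynthesis converts sunlight, water, and carbon dioxide into glucose and oxygen."
tokens = enc.encode(text)

print(f"{len(text.split())} words -> {len(tokens)} tokens")
```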

Testing Basic Context Handling

Prompt 1: What is Photosynthesis?

Prompt 2: How does photosynthesis contribute to the oxygen cycle?

Advanced Reasoning and Contextual Memory

Both models are designed to remember previous prompts within a session, allowing for contextual continuity. For example:

Prompt 3: Explain how the oxygen cycle impacts human life and ecosystems.
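To make that session memory concrete: both services expose chat-style APIs in which the client resends the running conversation history with every request; the model itself is stateless between calls. Below is a minimal sketch using the openai Python SDK pointed at DeepSeek's OpenAI-compatible endpoint. The base_url and model name ("https://api.deepseek.com", "deepseek-chat") reflect DeepSeek's published compatibility interface, and the API key is assumed to live in an environment variable.

```python
# Minimal multi-turn chat sketch: "contextual memory" comes from resending
# the accumulated message history with each request, not from the model.
# Assumes the openai SDK is installed and DEEPSEEK_API_KEY is set;
# base_url and model follow DeepSeek's OpenAI-compatible interface.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

history = [{"role": "system", "content": "You are a concise science tutor."}]


def ask(question: str) -> str:
    """Send a question along with the full history and remember the answer."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="deepseek-chat", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keep context for follow-ups
    return answer


print(ask("What is photosynthesis?"))
print(ask("How does photosynthesis contribute to the oxygen cycle?"))
print(ask("Explain how the oxygen cycle impacts human life and ecosystems."))
```

Pointing the same code at ChatGPT is just a matter of dropping base_url and using an OpenAI model name and API key.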

Learning Recommendations:

  1. Large Language Models (LLMs): To grasp the fundamentals of how these AI systems work, start with OpenAI’s guide to LLMs.

  2. AI Ethics and Policy: Explore resources on AI’s societal impact, such as the AI Now Institute.

  3. Manhattan Project 2.0: For insights into the geopolitical AI race, read about AI investments.

Feedback and Participation

I’d love to hear your thoughts on this demonstration. What stood out to you? How do you see these tools fitting into your work or studies? Let me know your feedback and stay tuned for Part 2, where we dive into research synthesis!

Hoff

xx
