This Week in AI: Apple refuses to disclose its machine learning processes

This Week in AI

Kyle Wiggers, Senior Reporter, Enterprise

As we continue to navigate the rapidly evolving world of artificial intelligence, this week’s edition of This Week in AI brings you the latest news and developments from across the industry.

Apple Unveils Apple Intelligence

Apple has made a significant move into the AI space with the announcement of Apple Intelligence, a suite of AI-powered tools designed to enhance the user experience. The new platform promises more personalized experiences, leveraging machine learning to improve functionality across Apple’s apps and services.

According to Apple, the on-device model contains 3 billion parameters, putting it in the same size class as Google’s Gemini Nano. The server model is larger and more capable, with Apple claiming that it "compares favorably" to OpenAI’s GPT-3.5 Turbo.

GPT-1 Turns Six

As we mark the sixth anniversary of the release of GPT-1, the progenitor of OpenAI’s latest flagship generative AI model, we take a moment to reflect on how far the field has come. GPT-1 was groundbreaking in its approach to training, using unlabeled data to "learn" how to perform a range of tasks.
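
To make that concrete, here is a minimal, purely illustrative sketch of the self-supervised setup GPT-1 popularized: the next token in raw, unlabeled text serves as the training target, so no human annotation is required. The whitespace tokenizer and toy sentence below are stand-ins of our own, not anything from OpenAI’s actual pipeline.

    # Illustrative sketch of next-token prediction on unlabeled text,
    # the self-supervised objective GPT-1 popularized. The tokenizer
    # and sentence are toy stand-ins, not OpenAI's code.
    text = "the quick brown fox jumps over the lazy dog"
    tokens = text.split()  # stand-in for a real subword tokenizer

    # Each training pair is (context, next token): the "label" is just
    # the next word in the raw text, so no human labeling is needed.
    pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

    for context, target in pairs[:3]:
        print(f"context={context} -> predict {target!r}")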

While some experts believe deep learning might be hitting a wall and that we won’t see another paradigm shift as meaningful as GPT-1’s anytime soon, it’s incredible to think about how far the field has come in just six years. Consider that it took a month to train GPT-1 on a dataset of 4.5 gigabytes of text (the BookCorpus, containing ~7,000 unpublished fiction books). GPT-3, which is nearly 1,500x the size of GPT-1 by parameter count and significantly more sophisticated in the prose it can generate and analyze, took 34 days to train.
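
As a quick back-of-the-envelope check of that "nearly 1,500x" figure, here is a sketch assuming the commonly cited parameter counts of roughly 117 million for GPT-1 and 175 billion for GPT-3 (numbers not stated above):

    # Sanity check of the "nearly 1,500x" claim, assuming the commonly
    # cited parameter counts (not stated in this article):
    gpt1_params = 117_000_000        # GPT-1: ~117 million parameters
    gpt3_params = 175_000_000_000    # GPT-3: ~175 billion parameters

    ratio = gpt3_params / gpt1_params
    print(f"GPT-3 is ~{ratio:,.0f}x GPT-1 by parameter count")  # ~1,496x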

Subscribe

Stay up to date on the latest news in artificial intelligence by subscribing to our newsletter.