Data Insights

Bite-sized insights on how the world is changing, published every few days.

Artificial Intelligence

Californians now travel millions of miles each month in driverless taxis

A bar graph illustrates the growth of robotaxi usage in California over a two-year period, highlighting monthly passenger miles in paid driverless taxis. The y-axis ranges from 0 to 4 million miles, with annotations at each million-mile mark. The x-axis covers a timeline from August 2023 to May 2025. The bars show a steady upward trend, with particularly sharp growth after April 2024, reaching close to 4 million miles by May 2025. The title states that robotaxi usage has grown eightfold in just a year.

Data sources are listed as the California Public Utilities Commission (2025). The image is licensed under CC BY.

After only two years, California’s driverless taxis now transport passengers for more than four million miles per month. Although they still make up only a fraction of taxi trips in the state, they are expanding quickly.

This chart shows the monthly distance traveled in driverless trips in California. It measures the total number of passenger-miles, summing up the distance traveled by all passengers.
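As a purely illustrative sketch of this metric, the calculation is a sum over trips of the distance driven multiplied by the number of passengers on board. The Python below uses made-up trip records, not CPUC data.

```python
# Minimal sketch of the passenger-miles metric: for each trip, multiply the
# distance driven by the number of passengers on board, then sum over all trips.
# These trip records are invented for illustration; they are not CPUC data.
trips = [
    {"miles": 3.2, "passengers": 1},
    {"miles": 5.8, "passengers": 2},
    {"miles": 1.4, "passengers": 1},
]

passenger_miles = sum(t["miles"] * t["passengers"] for t in trips)
print(f"Total passenger-miles: {passenger_miles:.1f}")  # 3.2 + 11.6 + 1.4 = 16.2
```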

In August 2023, California regulators fully approved self-driving taxi services in San Francisco for two companies, Cruise and Waymo. However, Cruise stopped operating in late 2023 due to safety and regulatory issues, so the recent growth reflects only Waymo’s service.

Trips stayed under half a million miles per month until mid-2024. But since then, growth has taken off. Within a year, usage multiplied eightfold, climbing past four million miles by May 2025, the latest data available.

This is a new chart on Our World in Data — we will update it every quarter based on the latest reports.

The length of software tasks AI systems can do on their own has been increasing quickly

A chart illustrates the improvement of AI systems in performing longer software tasks over time. The horizontal axis spans from 2019 to mid-2025, marking the development of various AI models, such as GPT-2, GPT-3, GPT-3.5, and several iterations of GPT-4. The vertical axis indicates the length of time, in minutes, that the tasks take human professionals. Key points highlighted include:

- "GPT 3.5 (which came out in Spring 2022) could only do tasks that take humans a few seconds, such as selecting the right file"
- "OpenAI's o3(which came out in April 2025) can do tasks on its own that take humans 20 minutes," such as finding and fixing small bugs in code
- The observed trend shows a rapid progression in AI capability

Accompanying notes indicate that the data is based on 170 tasks across fields like software engineering and machine learning. The source for this data is Model Evaluation & Threat Research (METR), 2025, presented under a Creative Commons Attribution license.

How will artificial intelligence (AI) impact people’s jobs?

This question has no simple answer, but the more AI systems can independently carry out long, job-like tasks, the greater their impact will likely be.

The chart shows a trend in this direction for software-related tasks. The length of tasks — in terms of how long they take human professionals — that AIs can do on their own has increased quickly in the past couple of years.

Before 2023, even the best AI systems could only perform tasks that take people around 10 seconds, such as selecting the right file.

Today, the best AIs can fairly reliably (with an 80% success rate) do tasks that take people 20 minutes or more, such as finding and fixing bugs in code or configuring common software packages.
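To make that "80% success rate" threshold concrete, here is a minimal sketch of one way a time horizon could be estimated from per-task results: fit a logistic curve of success against the logarithm of task length, then solve for the length at which predicted success is 80%. Both the method and the numbers are illustrative assumptions, not METR's actual procedure or data.

```python
# Sketch: estimate the task length (in human-minutes) at which an AI system's
# predicted success rate falls to 80%. Illustrative only, not METR's method.
import math

import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented per-task results: how long each task takes a human professional,
# and whether the AI completed it (1) or failed (0).
human_minutes = np.array([0.2, 0.5, 1, 2, 5, 10, 20, 45, 90, 240])
ai_success = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])

# Model P(success) as a logistic function of log(task length).
X = np.log(human_minutes).reshape(-1, 1)
model = LogisticRegression().fit(X, ai_success)

# Solve sigmoid(a * log(t) + b) = 0.8 for t.
a, b = model.coef_[0][0], model.intercept_[0]
horizon = math.exp((math.log(0.8 / 0.2) - b) / a)
print(f"Estimated 80%-success time horizon: {horizon:.1f} human-minutes")
```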

It’s unclear how much these results generalize; other factors, like reliability, need to be considered.

But AI capabilities continue to improve, and if developments keep pace for the next few years, we could see systems capable of performing tasks that take people days or even longer.

Read more about how we can help make our future with AI go well

Since 2010, the training computation of notable AI systems has doubled every six months

A chart showing the computation used to train notable AI systems, measured in total floating-point operations (FLOP) and highlighting two distinct eras. In the first era from 1950 to 2010, the training computation doubled approximately every 21 months. With the rise of deep learning since 2010, it has been doubling approximately every 6 months. The y-axis ranges from 100 FLOP to 100 septillion FLOP. Several systems are highlighted, from early systems such as Theseus and the Perceptron Mark 1 to recent systems such as GPT-4 and Gemini 1.0 Ultra.

Artificial intelligence has advanced rapidly over the past 15 years, fueled by the success of deep learning.

A key reason for the success of deep learning systems has been their ability to keep improving with a staggering increase in the inputs used to train them — especially computation.

Before deep learning took off around 2010, the amount of computation used to train notable AI systems doubled about every 21 months. But, as you can see in the chart, this has accelerated significantly with the rise of deep learning, now doubling roughly every six months.
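To see what these doubling times mean in practice, the compounding arithmetic can be written out in a few lines of Python; the doubling times come from the chart, and the rest is straightforward exponentiation.

```python
# Growth factor implied by a doubling time: compute 2 raised to the number of
# doublings that fit into the given period.
def growth_factor(doubling_time_months: float, years: float) -> float:
    return 2 ** (years * 12 / doubling_time_months)

print(f"21-month doubling, 1 year:   x{growth_factor(21, 1):.1f}")    # ~x1.5
print(f" 6-month doubling, 1 year:   x{growth_factor(6, 1):.0f}")     # x4
print(f" 6-month doubling, 10 years: x{growth_factor(6, 10):,.0f}")   # ~x1,000,000
```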

As one example of this pace, consider AlexNet, the system that represented a breakthrough in computer vision in 2012: just 11 years later, Google’s Gemini 1.0 Ultra used 100 million times more training computation.

To put this in perspective, training Gemini 1.0 required roughly the same amount of computation as 50,000 high-end graphics cards working nonstop for an entire year.
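A rough back-of-the-envelope version of that comparison is sketched below. The effective per-GPU throughput is an assumption chosen for illustration (a few tens of trillions of floating-point operations per second, sustained), not a figure from the chart or its sources.

```python
# Back-of-the-envelope check of the "50,000 GPUs running for a year" comparison.
SECONDS_PER_YEAR = 365 * 24 * 3600       # ~3.15e7 seconds
FLOP_PER_SECOND_PER_GPU = 3e13           # assumed sustained throughput (illustrative)
NUM_GPUS = 50_000

total_flop = NUM_GPUS * FLOP_PER_SECOND_PER_GPU * SECONDS_PER_YEAR
print(f"Implied training compute: {total_flop:.1e} FLOP")  # roughly 5e25 FLOP
```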

Read more about how scaling up inputs has made AI more capable in our new article by Veronika Samborska

Investment in generative AI has surged recently

Generative AI is a type of artificial intelligence that can create various media, including text, images, and music. It learns from existing data to generate novel outputs. Examples include language models like GPT-4 and Claude, which can write essays or answer questions, and image generation models like Midjourney and DALL·E, which can create artwork based on textual descriptions.

In 2023, funding for generative AI soared to $22.4 billion, nearly nine times more than in 2022 and about 25 times the amount from 2019. This surge occurred despite overall investment in AI declining since its 2021 peak.

The data is produced by Quid and made available through the AI Index Report. Quid analyzes investment data from over 8 million companies, using natural language processing to uncover patterns and insights from vast datasets. We have recently updated our charts on Our World in Data with the report's latest edition.

Read more on how investment in AI has been changing over time here →

Language-based AI systems have grown rapidly in recent years

The rapid growth of language-based AI systems

In recent years, there has been a notable shift towards artificial intelligence (AI) systems focused on language. These systems have outpaced advancements in other domains, such as image recognition, gaming, and biology.

This is visible in the chart, which shows the number of AI systems that researchers at Epoch AI consider notable.

The shift is primarily due to technical advancements in AI algorithms, particularly the introduction of “transformers” around 2017. As shown in the chart, the rapid development of language-based AI systems began around this time.

Transformers have radically changed natural language processing by evaluating chunks of text — “tokens” — instead of focusing on one word at a time. For example, by considering the whole sentence "The bank can ensure your money is safe", transformers can quickly discern that "bank" refers to a financial institution, not the side of a river.

This capability has significantly enhanced AI’s performance on complex language tasks, improving machine translation and text generation, and making interactions more intuitive and effective.
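As a hands-on illustration of this idea, the sketch below compares the vectors a transformer assigns to the word "bank" in different sentences. It uses the Hugging Face transformers library and a BERT model, which are not mentioned in the text above; they are simply convenient, widely used tools for the demonstration.

```python
# Contextual token representations: the same word gets a different vector
# depending on the surrounding sentence.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bank_vector(sentence: str) -> torch.Tensor:
    """Return the contextual embedding of the token 'bank' in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, hidden_dim)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index("bank")]

financial = bank_vector("The bank can ensure your money is safe.")
river = bank_vector("We sat on the bank and watched the river flow by.")
loan = bank_vector("The bank approved the loan for your new house.")

cos = torch.nn.functional.cosine_similarity
# The 'bank' surrounded by money-related words typically ends up closer to the
# other financial 'bank' than to the riverside one.
print("financial vs loan: ", cos(financial, loan, dim=0).item())
print("financial vs river:", cos(financial, river, dim=0).item())
```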

Explore this data