In Austin, Texas, Waymo's robotaxis passed school buses with flashing red lights and extended stop arms at least 19 times between September 2024 and January 2025, including moments when children were crossing in front of the vehicles. The company issued a federal recall in December acknowledging at least 12 of the incidents, deployed software fixes, and even worked with the school district to collect training data from its buses in a parking lot. None of it worked. By mid-January, the incidents were still occurring. [1]
This is not a story about a company failing to deploy a fix fast enough. It is a story about the fundamental architecture of how autonomous systems learn—and the specific ways that architecture breaks down when confronted with edge cases that don't fit the training data's underlying assumptions.
---
Dispatch
AUSTIN, TEXAS — December 2024 to January 2025
In early December 2024, Waymo's Emergency Response and Outreach manager, Rob Patrick, contacted the Austin Independent School District's police department with a proposal: the company wanted to collect data on the district's school buses to improve its vehicles' ability to recognize when they should stop. [1] What followed was a coordinated effort that reads like a textbook case of good-faith collaboration between a school district and a technology company, and a textbook case of why that collaboration failed.
One of the purported advantages of self-driving car tech is that every car can learn from one vehicle's mistakes. Here's how Waymo puts it on its website: "The Waymo Driver learns from the collective experiences gathered across our fleet, including previous hardware generations." But in Austin, Waymo's vehicles struggled for months to learn how to stop for school buses as drivers picked up and dropped off children. An official with the Austin Independent School District (AISD) alleged that the vehicles had, in at least 19 instances, illegally and dangerously passed the district's school buses while their red lights were flashing and their stop arms were extended rather than coming to complete stops, as the law requires. [1]
— WIRED, March 2025
In mid-December, the school district assembled seven buses representing every model in its 550-vehicle fleet at the district's athletic complex, where Waymo collected specifications on the buses' light configurations. By that Wednesday afternoon, the company said it had the data it needed. [1]
It did not. On January 12, 2025, more than a month after the data-collection session, Waymo robotaxis again passed a school bus that was stopped with active signals. The National Transportation Safety Board (NTSB) later found that a Waymo remote assistant, a human operator based in Michigan, had incorrectly told the autonomous system that the school bus ahead had no active signals; six vehicles passed the stopped bus. [1]
In early December, Waymo even issued a federal recall related to the incidents, acknowledging at least 12 of them to federal regulators at the National Highway Traffic Safety Administration (NHTSA), which oversees road safety. According to federal filings, engineers with the self-driving vehicle company had developed software changes to address the behavior weeks before. But even after the recall, the school-bus-passing incidents continued, according to school officials and a report from the National Transportation Safety Board (NTSB), an independent federal safety watchdog that's also investigating the situation. [1]
— WIRED, March 2025
The school district's police department offered its own assessment in January: "The data we collected from the beginning of the school year to the end of the semester shows that about 98 percent of people that receive one violation do not receive another. That tells us that the person is learning, but it does not appear the Waymo automated driver system is learning through its software updates, its recall, what have you, because we are still having violations." [1]
---
What's Really Happening
The Real Stakes
For Waymo: Regulatory containment and market expansion risk.
Confirmed: Waymo has already issued a federal recall and faces an NTSB investigation. [1] The company did not respond to WIRED's requests for comment. [1] What matters now is whether regulators will restrict Waymo's operations in school zones during pickup and drop-off hours—a move that would be precedent-setting and economically significant for a company betting on dense urban deployment.
Missy Cummings, a George Mason University researcher who studies autonomous systems, has already made the argument publicly: "Waymo should not be allowed to operate around schools during school pickup and drop-off until they get this problem fixed and can demonstrate it with specific tests." [1] If NHTSA or state regulators adopt this position, Waymo would lose operational flexibility in some of its highest-value markets (Austin, San Francisco, Los Angeles). The company's competitive advantage rests partly on unrestricted 24/7 operation; geofenced restrictions around schools would undermine that claim.
For the autonomous-vehicle industry: Proof that "learning from experience" has hard limits.
Projected: The school-bus incidents will accelerate regulatory scrutiny of how autonomous-vehicle companies validate edge-case safety. The industry's marketing narrative holds that AI systems improve continuously through real-world deployment, but that narrative assumes the system can perceive the problem in the first place. When it cannot, more data does not help, and more miles do not help. The humbling implication is that some safety problems require a different kind of solution: pre-deployment testing under controlled conditions, not post-deployment learning. [1]
One scenario: If other autonomous-vehicle operators, such as Cruise, encounter similar school-bus failures in their own fleets, regulators could move toward a blanket requirement that all autonomous vehicles undergo specific, standardized testing for school-zone safety before deployment in any district. That would slow the industry's rollout timeline significantly.
For school districts: Leverage, but limited.
Confirmed: The Austin Independent School District's lawyer warned that Austin ISD is evaluating all potential legal remedies at its disposal and intends to take whatever action is necessary to protect the safety of its students, if required. [1] The district has clear legal standing: Waymo's vehicles violated state law (Texas Transportation Code § 545.066 requires drivers to stop for school buses with flashing red lights). The district could pursue damages, seek injunctions against Waymo operations in school zones, or both.
However, the district's actual leverage is limited. School districts cannot unilaterally ban autonomous vehicles from public roads. They can document incidents, report them to regulators, and pursue civil claims. What they cannot do is force Waymo to fix the problem faster than the company's engineering timeline permits.
---
Industry Context
The autonomous-vehicle industry has long marketed itself as solving safety through scale and learning. Every mile driven makes the system safer, the pitch goes. The Austin school-bus incidents expose the flaw in that narrative: if the system has a perceptual blind spot, more miles and more data do not eliminate the blind spot. They may even amplify it, because the system accumulates more false negatives (instances where it should have recognized a school-bus stop signal but did not) without any mechanism to correct them.
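To make that dynamic concrete, here is a minimal illustrative sketch, with entirely hypothetical numbers and no connection to Waymo's actual pipeline: events a perception system never flags never generate labels or interventions, so they never feed the learning loop, no matter how many fleet miles accumulate.

```python
# Illustrative sketch only: hypothetical numbers, not Waymo's system or data.
# A detector with a systematic blind spot misses a fixed fraction of events.
# Those misses are never flagged, so they produce no corrective labels,
# and adding miles scales detections and silent misses in the same proportion.

def fleet_exposure(miles: int, events_per_mile: float, detection_rate: float):
    """Return (detected, silently_missed) school-bus stop events for a given mileage."""
    events = miles * events_per_mile
    detected = events * detection_rate   # these enter logs and future training data
    missed = events - detected           # false negatives: unflagged and unlabeled
    return detected, missed

if __name__ == "__main__":
    for miles in (1_000_000, 10_000_000, 100_000_000):
        detected, missed = fleet_exposure(miles, events_per_mile=0.001, detection_rate=0.6)
        # The miss rate stays at 40 percent at every scale; only something outside
        # the loop (targeted testing, audits, external reports) surfaces the failures.
        print(f"{miles:>12,} miles: {detected:>8,.0f} detected, {missed:>8,.0f} missed")
```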
Waymo's response—issuing a recall, collecting data in a parking lot, updating software—followed the playbook that works for mechanical failures and clear sensor malfunctions. It does not work for perception gaps that sit at the intersection of computer vision, context recognition, and exception handling.
The company's remote-assistant system, which is supposed to intervene when the autonomous system is uncertain, failed on January 12. [1] This suggests that the problem is not just in the AI; it is in the human-machine interface. If human operators cannot reliably identify school-bus stop signals either—or if they are not given enough real-time information to make that judgment—the entire safety architecture breaks down.
---


Impact Radar
Watch For
1. NTSB investigation conclusion and recommendations (final report likely in 2026). The NTSB's preliminary report was published in early March 2025, and final reports typically follow 12–18 months after the preliminary report. If the NTSB recommends operational restrictions on autonomous vehicles in school zones, NHTSA will face pressure to implement them. [1]
2. NHTSA enforcement action or guidance on school-zone testing (no public timeline established). Regulators could issue a notice requiring all autonomous-vehicle operators to demonstrate specific competency in school-zone scenarios before deployment. Watch for this in NHTSA's public docket or Federal Register notices.
3. Similar incidents reported by other autonomous-vehicle operators (ongoing). If Cruise, Tesla, or other companies report school-bus-passing incidents, the pattern becomes industry-wide, triggering faster regulatory response. Currently, only Waymo incidents are documented in public sources.
4. Waymo's response to operational restrictions (Q2 2025 onward). The company could accept geofenced school-zone restrictions, fight them in regulatory proceedings, or both. Watch for filings with NHTSA or state transportation departments.
---
Bottom Line
Waymo's school-bus failures are not a temporary operational glitch or a lag in software deployment. They reveal a hard boundary in how autonomous-vehicle systems learn: collective learning works only when the underlying perception problem is already solved. When a system cannot see the problem—flashing school-bus stop signals in real-world conditions—more data, more miles, and more remote-assistant interventions do not fix it. Regulators will likely respond by restricting autonomous-vehicle operations in school zones until companies can prove competency through controlled testing, not just real-world deployment. This will slow industry expansion and reset expectations about how quickly autonomous vehicles can operate in safety-critical contexts.
---