Full Self-Driving Is Foreshadowing the Ethical Challenges of the AI Age

  • Writer: Justin Goeglein
  • 5 days ago
  • 3 min read

A recent Atlantic article on Tesla's Full Self-Driving program stopped me mid-scroll. Not because it was surprising, but because it was honest, and I found it highly relatable, almost too close to home. I worked as a director at an autonomous-vehicle startup, May Mobility, I drive a Tesla with FSD on occasion, and I could see the same sort of accident happening to me.

It named the thing that engineers and product leaders working in advanced technology already know but rarely say out loud: we don't yet have a reliable way to determine when autonomous software is actually good enough to replace human attention and execution.

That's the core problem. And it's harder than it sounds.

It made me think not only about my own behavior, but also about the implications of where we are headed across all aspects of engineering and product safety.

Justin showing off the hands-free driving of his Tesla Model 3

The technology makes us think it's working

Tesla's branding isn't accidental. "Full Self-Driving" is a confidence signal, not a technical specification. And it works, both consciously and unconsciously. Drivers become complacent because the system performs well enough, often enough. But there is a non-zero risk baked into every mile driven, and we have no transparent way to measure it. Worse, drivers have almost no visibility into when that risk increases, whether it's poor lighting, edge-case road geometry, or degraded sensor performance. The system doesn't tell you when it's struggling.

Distracted driving is already killing people

Cell phones made it measurably worse. Conditioning drivers to pay even less attention through a product marketed as fully autonomous will make it worse again. This isn't speculation. It's a predictable outcome from human-factors research.

There's another layer that gets less attention: the transition problem. When I switch from a Tesla to a Rivian to a Kia, I don't automatically recalibrate my trust. Subconsciously, I treat "autonomous driving" as a category, not a spectrum. But these systems are not equivalent. In driver-assist capability, Rivian < Kia < Tesla, and the driver's required level of attention and awareness needs to shift accordingly. Right now, most drivers have no framework for making that adjustment, and no one is building one for them.

Leaders have a specific obligation here

Efficiency and technology are not in conflict with safety, but efficiency cannot be used to justify decisions that quietly erode it. Going slow and thinking critically about how we live, how we drive, and how we work is not timidity. It's engineering discipline.

And here is the accountability reality that I don't think gets said clearly enough: I am liable for an accident no matter how much confidence I have in my Tesla's ability to drive itself. The driver is responsible. The engineer is responsible. Technology does not transfer that responsibility.

So back to the fundamental question the Atlantic piece raised: Is it acceptable to stop paying full attention to driving if my car is proven to perform better than 95% of human drivers? What about 98%? I don't have a firm answer. But my follow-up question is this: how will Tesla prove that to me in a way that is transparent, audited, and standardized? Right now, they can't. And until they can, the automotive industry needs increased transparency, traceable guardrails, independent regulation, and common standards for measuring the real-world effectiveness of autonomous functions.

Illustration: a human hand reaching toward an AI hand, symbolizing the connection between humanity and technology

These ethics don't stop at the car door

The same questions that haunt FSD are heading fast toward every industry where AI is being layered onto work that has real safety consequences. Engineering is one of them. At SwitchBox, we build embedded controls, ADAS evaluation systems, and EV integration stacks for customers working at the cutting edge of vehicle technology. The work our engineers do can directly affect the safety of end products and the people who use them.

AI must be used in these contexts, but it must be used sparingly, under heavy scrutiny, with structured approval gates, and with expert human oversight at every critical decision point. The ability of AI to make decisions automatically, without that oversight, is exactly the failure mode we're already watching play out on public roads. The opportunity for human work to be automated is growing faster than our frameworks for governing it responsibly.
