It's certainly not trivial, but lidar is only needed for perception and wouldn't necessarily factor into the rest of the driving logic. Since they're already performing sensor fusion (cameras + radar + GPS + odometry), they could add it as one more input that strengthens existing predictions of object distance and orientation. Hardware is the bigger lift: at minimum it means redesigning various structures of the car to accommodate the sensors (wiring, mounting points, integrating them seamlessly into body panels, etc.) and retooling production processes to support them, not to mention offsetting the increased power consumption and whatever the maintenance costs end up being.
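To illustrate the "strengthen existing predictions" point, here's a minimal sketch (all numbers and the `fuse` helper are hypothetical, and real pipelines are far more involved, e.g. Kalman filters over full state vectors): inverse-variance weighting of independent range estimates, where adding a low-variance lidar measurement tightens the fused estimate without touching anything downstream.

```python
# Hypothetical sketch: inverse-variance fusion of independent
# distance estimates from different sensors.

def fuse(estimates):
    """estimates: list of (mean_m, variance_m2) -> fused (mean, variance)."""
    total_precision = sum(1.0 / var for _, var in estimates)
    mean = sum(m / var for m, var in estimates) / total_precision
    return mean, 1.0 / total_precision

camera = (25.0, 4.0)   # vision depth: noisiest (made-up numbers)
radar = (24.0, 1.0)    # radar range: better
lidar = (24.2, 0.25)   # lidar: lowest variance of the three

# Fused variance with lidar drops below even lidar's own variance,
# since every extra independent sensor adds precision.
print(fuse([camera, radar]))         # variance 0.8
print(fuse([camera, radar, lidar]))  # variance ~0.19
```

The appeal of this kind of fusion is exactly what the comment describes: the new sensor just becomes another term in the sum, so the driving logic consuming the fused estimate doesn't have to change.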
But I only say all this as someone more familiar with software development than hardware, and I'm likely underestimating the effort of fusing a new sensor into existing NNs trained on vision alone.