OpenPilot has two layers of safety: some C++ code running on the actual panda (the interface to CAN), and the Python decision code that translates the model's predicted turns into commands to send over CAN. The model also requires a minimum confidence before returning a path to steer on, but you can accept 0% confidence if you want to see how far your wheel can turn at 1 MPH (hint: superhuman).
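The confidence gate in the Python layer could look something like this. This is a minimal sketch with made-up names and a made-up threshold, not the actual openpilot code; it just shows the idea of refusing to steer on a low-confidence prediction, and how a threshold of 0.0 would accept anything:

```python
# Hypothetical sketch of the confidence gate described above.
# CONFIDENCE_MIN and steer_command are illustrative names, not real openpilot APIs.

CONFIDENCE_MIN = 0.5  # made-up threshold; setting this to 0.0 accepts any prediction


def steer_command(model_output, confidence_min=CONFIDENCE_MIN):
    """Return a steering angle command, or None if the model isn't confident enough."""
    angle, confidence = model_output
    if confidence < confidence_min:
        return None  # don't steer on a low-confidence prediction
    return angle


# With the gate in place, low-confidence predictions are dropped:
assert steer_command((12.0, 0.9)) == 12.0
assert steer_command((12.0, 0.1)) is None
# With the gate disabled (0% minimum), anything goes:
assert steer_command((12.0, 0.1), confidence_min=0.0) == 12.0
```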
Those things said, the steering rack also has some safety code Tesla sets, including disallowing steer commands if the human steers (which causes Autopilot disengagements).
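The driver-override rule amounts to a torque check: if the measured torque on the wheel exceeds some threshold, assist disengages. A rough sketch, with a completely made-up threshold (Tesla's actual limits are not public in this post):

```python
# Hypothetical driver-override check; the threshold value is invented for illustration.

DRIVER_TORQUE_LIMIT_NM = 1.5  # made-up limit, in newton-metres


def should_disengage(driver_torque_nm):
    """Disengage steering assist when the driver applies noticeable torque to the wheel."""
    return abs(driver_torque_nm) > DRIVER_TORQUE_LIMIT_NM


assert should_disengage(2.0) is True    # driver grabs the wheel -> disengage
assert should_disengage(-2.0) is True   # works in either direction
assert should_disengage(0.3) is False   # light touch stays engaged
```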
All these things add up to a lot of safety, but YOU are responsible for safety on all current systems; they are driving aids, not full self driving.
I just did Indiana to DC and it drove the large majority. OpenPilot doesn't nag anymore; instead it watches that your eyes are on the road. It worked beautifully.. although there's little to no gas/brake, only steer (I and another guy successfully forged the cruise control speed up / slow down controls), but that isn't really enough. We will have a gas pedal probably in a week or so for early testing, then we will have full longitudinal control but still no braking, since our early cars do not have computer controlled brakes.
It's VERY nice on interstate driving, VERY..