We've all seen the flood of stories in the financial media about Tesla Autopilot. So much of the coverage is misleading, aggressive, and speculative. As investors, that mass of noise can feel like an outrageous attack. But let's not allow the media swirl around the company we're invested in to blind us to fair and important feedback that can save lives. Here are a couple of observations I had today on what I see as legitimate issues for Tesla to look at.

In its coverage today, Consumer Reports criticized the use of the name Autopilot (as others have in the past few weeks). Up until this morning, such criticism always irritated me. From day one, Elon Musk was very clear that he chose the name "Autopilot" because it is common knowledge that the commercial planes we fly always have a human being in command of the cockpit, whether autopilot is engaged or not. Now I think Tesla does have something to learn here. I still think "Autopilot" was a very good name for this driver-assistance tool; almost everyone knows pilots remain responsible while autopilot is in use on an aircraft. What's become apparent now is the huge difference between the screening and training required to be licensed to pilot an aircraft versus a car. The assumption that the general public will be as attentive as a Delta or United pilot to the fact that Autopilot is an assist tool, not a replacement for a driver in command, is almost certainly false. I've never heard of a YouTube video of a Delta pilot laughing about how no one is in the cockpit, but we've seen this sort of thing with people driving their Teslas. It's obviously a very small percentage of Tesla owners who would so blatantly and purposely run roughshod over the requirement of a human in charge, but a much larger number of us can be lulled into increasingly passive confidence that the car has things under control.
Second observation: the accident in Florida highlights two extremely rare limitations of the system that consumers could easily have falsely assumed they were protected from. First, despite the highest crash-test scores of any vehicle, there are some outlier circumstances that find a weak spot in Tesla's otherwise tank-like protection of the driver. As safe as Teslas are, they are not a guarantee of surviving an accident. Second, Tesla's Autopilot is known to give a strong alert if the system detects a situation where the driver needs to take control. What's not so clear is that there are situations where the system is unable to detect that it is facing something it cannot handle. While this is extremely rare on a divided highway, people operating Autopilot need to know that a situation needing their attention can occur without the car alerting them to it. Of course, Tesla has already told consumers that they need to be alert at all times. However, Tesla could be explicit that it might be necessary for the driver to notice Autopilot is in over its head, rather than assume the system will say so itself. (To be fair to Tesla, I think their restriction to divided highways was designed to cover this issue without dramatically underrepresenting Autopilot's usefulness, but obviously we now have an example of where that restriction was not enough.)

Last observation: the "beta" label. I think a mea culpa is in order on this one. I give Tesla the benefit of the doubt that to them "beta" has always meant the reasonable definition they shared in the past week, and after all, how do we ever get to a billion miles without the cars being in use? However, it was not the best choice to use the word "beta" without being explicit about their definition at the time they first used the term. From the first time I heard Tesla use that word, it sounded off to me.

Didn't mean this to be such a long post... curious to hear what you all think.