
Neuro-symbolic AI & Autonomous Cars

Discussion in 'Autopilot & Autonomous/FSD' started by diplomat33, Jan 5, 2020.

  1. diplomat33 (Well-Known Member)

    I stumbled upon this article called "Neuro-symbolic AI is the future of artificial intelligence. Here's how it works":
    https://www.digitaltrends.com/cool-tech/neuro-symbolic-ai-the-future/

    The idea is to combine neural nets with symbolic AI to create better AI.

    "The idea of neuro-symbolic A.I. is to bring together these approaches to combine both learning and logic. Neural networks will help make symbolic A.I. systems smarter by breaking the world into symbols, rather than relying on human programmers to do it for them. Meanwhile, symbolic A.I. algorithms will help incorporate common sense reasoning and domain knowledge into deep learning. The results could lead to significant advances in A.I. systems tackling complex tasks, relating to everything from self-driving cars to natural language processing. And all while requiring much less data for training."



    In the article, David Cox, director of the MIT-IBM Watson AI Lab and co-founder of Perceptive Automata, explains that deep neural nets alone are actually not good at handling edge cases in autonomous driving because they require large amounts of annotated data. The problem is that edge cases are, by definition, rare, so you probably won't have enough data on a given edge case to properly train your NN for it. And if the NN is not properly trained on it, it will fail that edge case. He gives the example of a traffic light on fire. It is such a rare event that you are unlikely to have the thousands of annotated images of burning traffic lights needed to train the NN, so the car will not recognize the burning traffic light as a traffic light at all and will blow right through the intersection.
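    To make the idea concrete, here is a minimal, purely illustrative Python sketch of how that split of labor might look. Nothing in it comes from the article or from any real driving stack; every function name and symbol is invented for illustration. A neural perception stage reports what it sees as symbols, and a small hand-written rule layer applies common-sense logic that never needed training examples:

```python
from dataclasses import dataclass

# Hypothetical neuro-symbolic sketch: a learned perception stage emits
# symbolic facts; a symbolic rule layer reasons over them.

@dataclass
class Fact:
    subject: str     # e.g. "object_17"
    predicate: str   # e.g. "is_a", "state", "condition"
    value: str       # e.g. "traffic_light", "unknown", "on_fire"

def perceive(image):
    """Stand-in for a neural network that detects objects and attributes
    and reports them as symbols rather than raw pixels."""
    # A real system would run a detector/classifier here. For the
    # traffic-light-on-fire scenario it might emit:
    return [
        Fact("object_17", "is_a", "traffic_light"),
        Fact("object_17", "state", "unknown"),      # lights unreadable
        Fact("object_17", "condition", "on_fire"),
    ]

def decide(facts):
    """Symbolic layer: small, human-readable rules encoding common-sense
    driving knowledge. Adding a rule needs no training data."""
    objects = {}
    for f in facts:
        objects.setdefault(f.subject, {})[f.predicate] = f.value

    for attrs in objects.values():
        if attrs.get("is_a") == "traffic_light":
            # Common-sense rule: a burning or unreadable signal is a
            # non-functioning signal -> stop, then proceed with caution.
            if attrs.get("condition") == "on_fire" or attrs.get("state") == "unknown":
                return "treat_as_stop_sign"
            if attrs.get("state") == "red":
                return "stop"
    return "proceed"

print(decide(perceive(image=None)))   # -> treat_as_stop_sign
```

    The point is not the toy code but the division of labor: the learned part only has to say what it is looking at, while the rule "a damaged signal is treated as a stop sign" costs one line instead of thousands of labeled images.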

    I think this might explain some of the issues that Tesla has had with FSD and why vision-only autonomous driving is so hard. If you collect a large amount of data, statistically you will get far more data than you need on the common cases and not enough on every single edge case you need to solve for. And even when you get a NN that works really well in most cases, chasing those edge cases is hard: you have to identify every single edge case, collect enough data for it, annotate that data, and then retrain the NN. That takes time.
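    A rough back-of-the-envelope illustration of that long-tail problem (the numbers below are made up for illustration, not anything from Tesla or the article):

```python
# Made-up numbers: even a huge, uniformly collected dataset ends up with
# far more examples than needed of common scenes and almost none of a rare event.
dataset_size = 10_000_000     # labeled frames collected from a fleet
p_common     = 0.05           # ordinary, fully visible traffic lights
p_edge_case  = 1e-7           # something like a traffic light on fire

print(f"common case: ~{dataset_size * p_common:,.0f} examples")    # ~500,000
print(f"edge case:   ~{dataset_size * p_edge_case:,.0f} examples")  # ~1
# One example is nowhere near enough to train a supervised network,
# which is exactly the gap the symbolic/common-sense layer is meant to fill.
```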

    Cox is working on neuro-symbolic AI and believes it will do better: it requires much less data, which makes it easier to cover those edge cases, and the resulting system is also smarter.

    It sounds like neuro-symbolic AI could help with autonomous driving, not just by solving edge cases but also by making the car smarter in how it responds to what it sees.
     
