
Game AI approach to help autonomous cars avoid incidents

Discussion in 'Autonomous Vehicles' started by kurdakov, Sep 27, 2016.

  1. kurdakov

    kurdakov Member

    Joined:
    Jul 25, 2016
    Messages:
    30
    Location:
    Moscow Russia
    I found and decided to share an interesting approach to increasing AI effectiveness, one that is also potentially applicable to autonomous cars:

    Next Big Future: Artificial Intelligence Agent outplays humans and the in-game AI in the Doom video game
    (the paper is here: [1609.05521] Playing FPS Games with Deep Reinforcement Learning)

    On autonomous cars, the authors say:

    "the deep reinforcement learning techniques they used to teach their AI agent to play a virtual game might someday help self-driving cars operate safely on real-world streets and train robots to do a wide variety of tasks to help people"

    Indeed, this approach learns only from what it observes in 3D and still outperforms human players. So
    if there are several trained networks (as proposed) that handle both navigation and action-taking using the network's memory, and those networks make better decisions than a human driver can, then the safety of autonomous cars might increase well beyond what we could expect even from an experienced driver.
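    To make the idea concrete, here is a minimal, self-contained sketch of the reinforcement-learning loop that approach builds on. Tabular Q-learning on a toy five-cell corridor stands in for the deep network over screen pixels; everything in it (states, rewards, hyperparameters) is illustrative, not from the paper.

```python
import random

# Toy stand-in for the deep RL loop: tabular Q-learning on a 5-cell
# corridor with a reward at the far right. The VizDoom agent replaces
# the Q-table with a deep network over raw screen observations.
N_STATES = 5          # cells 0..4; reaching cell 4 ends the episode
ACTIONS = (-1, 1)     # move left / move right
ALPHA, GAMMA = 0.5, 0.9

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment transition: clamp to the corridor, reward at cell 4."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, reward > 0

random.seed(0)
for _ in range(2000):                 # episodes with a random behavior policy
    s = 0
    for _ in range(30):               # step limit per episode
        a = random.choice(ACTIONS)    # pure exploration; Q-learning is off-policy
        s2, r, done = step(s, a)
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s2
        if done:
            break

# The greedy policy learned from observed transitions should now
# move right in every non-terminal cell (cell 4 is terminal, unused).
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
print(policy[:4])
```

    The same loop scales up by swapping the table for a network and the corridor for camera or radar observations; that is the sense in which "learning from what it observes" carries over to driving.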
     
  2. kurdakov

    kurdakov Member

    #2 kurdakov, Sep 27, 2016
    Last edited: Sep 27, 2016
    BTW, while fleet learning already helps Tesla Autopilot a lot, given the huge amount of data available (what the vehicles 'see' on the road), it might be a good idea to create a vehicle simulator which, on one hand, simulates a Tesla using real data and the decisions made by Autopilot, so that the strange AP decisions users report here on the forum can be reproduced and fixed.

    It would then also be possible to create new dangerous situations beyond what real cars have encountered.

    Tesla AP might then progress much faster by training its neural networks in this semi-game-like simulator.

    As for creating a 3D world close to what the vehicle sees via radar, some ideas could be taken from projects such as the OSM-3D Globe: public elevation data can be merged with street maps (OpenStreetMap is used in that link), and then a virtual Tesla with all the other data can be placed there. Another example is here: osgEarth — OSGeo-Live 10.0 Documentation
    Non-free elevation data can be as good as 2 m resolution: AW3D World 3D Topographic Data - Global Digital Elevation Model
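    The replay part of the idea can be sketched very simply: feed logged sensor frames back through the same decision function the car ran, so a reported incident reproduces deterministically and can be re-tested after a fix. Every name below (`Frame`, `naive_planner`, the fields) is hypothetical, invented for illustration; it is not Tesla's API or logic.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    """One logged sensor snapshot (illustrative fields only)."""
    lead_gap_m: float      # distance to the lead vehicle, metres
    ego_speed_mps: float   # own speed, metres per second

def naive_planner(f: Frame) -> str:
    """Toy stand-in for the policy under test: brake when the
    time gap to the lead vehicle drops below 2 seconds."""
    return "brake" if f.lead_gap_m / max(f.ego_speed_mps, 0.1) < 2.0 else "cruise"

def replay(log: List[Frame], planner: Callable[[Frame], str]) -> List[str]:
    """Run the planner over a recorded log, frame by frame."""
    return [planner(f) for f in log]

# A logged cut-in: the gap collapses from 40 m to 5 m at 20 m/s.
log = [Frame(40, 20), Frame(25, 20), Frame(10, 20), Frame(5, 20)]
print(replay(log, naive_planner))
```

    With real fleet logs in place of the toy frames, the same harness lets a reported cut-in incident be replayed against a candidate fix before it ever ships.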
     
  3. Zybd1201

    Zybd1201 Member

    Joined:
    May 30, 2016
    Messages:
    229
    Location:
    California
    This is already being done by tons of people, including Tesla.
     
  4. kurdakov

    kurdakov Member

    #4 kurdakov, Sep 28, 2016
    Last edited: Sep 28, 2016
    And how do you know?

    Those silly errors (like AP not slowing down when someone cuts in from another lane with only a few inches to spare in front of the Tesla) or the strange behavior near big trucks could be learned from simulations and eliminated, yet those errors still persist.

    More: the approach of using the depth buffer, as in the VizDoom article, is quite innovative and was not used much earlier (at least in training game agents).

    So I'm sure many companies use simulators for engineering (I made some simulations for VW back in the early 2000s, working remotely for a small German research firm), but it still does not look like these possibilities are used to their full extent: the errors users report could be caught in well-prepared simulations, yet they are not caught and eliminated.
     
  5. kurdakov

    kurdakov Member

    Another take on this.

    Up to version 8.0, the radar was a secondary detection device that did not produce coherent '3D' info, only a sparse cloud of points. Temporal smoothing, which provides something like restoration of object contours, was announced just this summer.

    Thus there was no reason to make simulations with 3D information like a depth buffer: it was not applicable to what a Tesla 'saw' via camera and radar.

    And a purely visual simulation used just to tune some parameters is quite different; it could not be as helpful as what is possible now: merging the radar's smoothed 3D world representation with the nearly identical representation from the depth buffer of a 3D rendering built from detailed elevation data plus streets and houses merged in from vector maps. Now it is really possible to tune the actual radar AI with a game engine.
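    The key point is that once both sources live in the same depth-map representation, they can be compared cell by cell. A minimal sketch of that comparison, with made-up 2x2 depth grids (nothing here reflects Tesla's actual pipeline or data formats):

```python
# Compare a (hypothetical) smoothed radar depth map against the depth
# buffer rendered by a game engine for the same scene. A small mean
# error means the simulated scene is a usable stand-in for real input.
def depth_error(radar_depth, rendered_depth):
    """Mean absolute difference between two row-major depth grids, in metres."""
    total, n = 0.0, 0
    for row_r, row_s in zip(radar_depth, rendered_depth):
        for d_r, d_s in zip(row_r, row_s):
            total += abs(d_r - d_s)
            n += 1
    return total / n

radar = [[10.0, 10.5], [30.0, 29.0]]    # smoothed radar depth map (toy values)
render = [[10.2, 10.4], [30.5, 29.1]]   # depth buffer from the 3D rendering
print(depth_error(radar, render))
```

    In practice such a metric (or a learned variant of it) is what lets rendered scenes serve as training and validation input for the radar-side network, rather than just as a visual demo.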

    Whereas with the previous approach (a third-party camera with a proprietary detection algorithm, which just output detections without saying how, plus a radar that provided only secondary data about the surrounding world), tuning the actual AI onboard real Teslas with computer renderings and simulations would have been almost useless: it could provide some design hints, but not actual tuning of the real device.
     
