Alongside the continuing coverage of GTA VI leaks, investigations, and updates, recent weeks have also brought reports on the capabilities of autonomous vehicles. What matters for vehicles, autonomous or not, is safety amid crashes and losses.
Car crashes remain numerous in spite of collision avoidance systems, smarter infrastructure, newer sensors, and so on.
This suggests that whatever the capability of those products or features, they do not help the vehicle understand what it means to crash, crumple, or cause harm.
AI is an excellent memory for products and devices. It takes in a lot of data and makes exceptional inferences, but when it comes to what it means to get struck, to fall, or to be placed beside something else, it does not have the consciousness of those experiences.
Consciousness is generally described as what it means to be and to know, or simply being and knowing. Knowing is not just reporting the weather, displaying images, sounding a seat-belt alarm, indicating distances or levels, or winning at games; it is having subjective feelings or phenomenal experiences, as extended stretches of what can be known.
It is possible to know what it means to be cold, or to have empathy for someone who is cold, without experiencing it. It is also possible to feel cold when it is cold. The cold may become an emotion of coolness or distress, and the reaction to that emotion can be parallel or perpendicular to it.
So while intelligent devices have different forms of efficiency, they do not know what it means to feel. Sensors can report, but they can crash with the vehicle or go up in flames and feel nothing, even though they told of the risks.
So how does fear work in the human brain, determining the emotion so that it becomes possible to avoid or escape danger?
Sensory inputs are converted to a new identity, in which they are processed or integrated in the brain. They are then relayed to memory, where they travel to locations in sequences that tell what is abnormal about the situation; the anomaly may give the what-if, or feel-like, before being sent on to the destination for actual feelings, and then to reaction.
Though this happens fast, across stages and properties of the relayed quantity, fear is determined by what is known, mostly about the situation.
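That staged relay might be sketched, very loosely, in code. Everything here, from the memory table to the severity thresholds, is a hypothetical illustration of the sequence described above, not a model of any real brain or vehicle system:

```python
# Loose, illustrative sketch of the staged fear relay.
# All names and values are invented for this example.

# 'Memory' of situations that have gone wrong before, with a severity weight.
SENSED_DANGER_MEMORY = {
    "sudden_braking_ahead": 0.6,
    "pedestrian_close": 0.9,
    "lane_drift": 0.4,
}

def fear_response(situation: str) -> str:
    """Relay a sensed situation through 'memory' to a reaction."""
    # Stage 1: integration -- the raw input becomes an identity to look up.
    known_severity = SENSED_DANGER_MEMORY.get(situation)

    # Stage 2: memory decides -- an unknown situation elicits no fear,
    # mirroring risky experiences that never went bad for the individual.
    if known_severity is None:
        return "no_fear"

    # Stage 3: the anomaly gives the what-if before the reaction.
    if known_severity >= 0.8:
        return "escape"   # strong fear: immediate avoidance
    elif known_severity >= 0.5:
        return "avoid"    # moderate fear: cautious adjustment
    return "alert"        # mild fear: heightened attention
```

The key point the sketch echoes is the essay's: the reaction depends on what is already known about the situation, not on the raw input alone.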
There are risky experiences that do not cause fear because they have never gone badly for the individual before. Others elicit fear in some people because they are aware of what could go wrong.
So there is a way the memory decides what becomes fear, so that everything reacts and the person finds safety.
Computers have memory, but perhaps the task of that memory is not to process a fear emotion of their own, but to tell the individual what should cause fear.
Individual fear has not been effective at preventing car crashes, so vehicles need a fear of their own, to process risks and consequences as inputs arrive from navigation.
All the ways a vehicle can crash could be stored in the memory of a sensor, including what happened before, and what the outcome would be for the passengers, the vehicle, bystanders, the surroundings, and so on.
This data, not just of outcomes but of what leads to outcomes, including the sequences involved, could become a path of safety for vehicles across highways.
The sensor itself could have a 'neuroplasticity' quality, so that as risk builds it begins to change or adjust, expressing that change while sending out warnings of how close danger is.
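One loose way to picture that 'neuroplasticity' quality is a sensor whose warning threshold drops as risk accumulates, so it warns earlier the longer danger persists. The class and the specific decay constants are assumptions made for this sketch, not a proposed design:

```python
# Hypothetical adaptive sensor: sustained risk lowers its warning
# threshold, a rough stand-in for 'plasticity'. All constants invented.

class AdaptiveRiskSensor:
    def __init__(self, base_threshold: float = 0.7):
        self.threshold = base_threshold
        self.accumulated_risk = 0.0

    def observe(self, risk_level: float) -> str:
        """Take a risk reading and adjust sensitivity as danger builds."""
        # Exponential moving average of recent risk readings.
        self.accumulated_risk = 0.8 * self.accumulated_risk + 0.2 * risk_level

        # 'Plasticity': sustained risk pulls the threshold down (to a
        # floor of 0.3), so warnings fire earlier as danger persists.
        self.threshold = max(0.3, 0.7 - 0.4 * self.accumulated_risk)

        if risk_level >= self.threshold:
            return "warn: close to danger"
        return "ok"
```

With these constants, a single moderate reading passes quietly, but the same reading repeated a few times crosses the lowered threshold and triggers the warning.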
This parallel sensor would sit in different areas, understanding scenarios with nothing taken for granted or excluded.
It could become a new way to mirror how fear works in the brain, giving vehicles safety. The sensor would be applicable regardless of a vehicle's Advanced Driver Assistance Systems (ADAS) or Automated Driving Systems (ADS) capability.
There is much talk about building conscious systems, or emotional AI, but the important and urgent need is in vehicles, for safety.