Setting a driverless course for the future

Robotaxi revs its engine

(Image: sensor visualisation)

As windy autumn curled into frigid winter on the streets of Tokyo, curious crowds gathered at select street corners in the busy Shinjuku district. Holding up their phones and tablets to film a small, black, well-accoutred taxi with two smiling passengers in the back seat, they marveled and pointed out to each other that no one sat behind the steering wheel.

Yet the taxi rolled smoothly out of the luxury hotel’s parking lot onto the busy street and away in traffic.

Autonomous driving had made its auspicious debut in one of the busiest metropolises in the world. The successful late-2020 tests of the Robotaxi were designed to check the safety, comfort and punctuality of the self-driving car, an increasingly likely conveyance of the future.

The very near future. The companies involved hope to put such taxis into actual street operation in 2022 or soon thereafter.

Among many comments collected by the Tokyo press from more than 100 passengers who took the driverless rides, the vice governor of Tokyo said, “I felt safer than when I was in my friend’s car. It was a comfortable ride.” Clearly, this was a significant milestone for safer mobility in Tokyo and for autonomous vehicle technology all over the world.

PACMod3 drive-by-wire system

A drive-by-wire (DBW) system complements or replaces a vehicle’s mechanical controls such as the steering column, brake pedal and other linkages with electromechanical actuators that can be activated remotely or autonomously. In the current version of the Robotaxi, the DBW system sits alongside the traditional mechanical linkages so that, for the present at least, a human safety operator can retake command of the test vehicle if necessary.

“The scope of our work for the Robotaxi was installing our drive-by-wire system for the Toyota vehicle in the Japanese market,” recalled Lee Baldwin, segment director, core autonomy at Hexagon’s Autonomy & Positioning division.

“We took delivery of the Tier IV taxi at our headquarters in Morton, Illinois. We had to develop an interface to the onboard systems so we could provide a DBW system to Tier IV.

“The vehicle has a steering system, a braking system and an acceleration system. We have to tap into those so we can alter them and receive feedback through those systems. We do that through the PACMod3” (figure 1).

The Platform Actuation and Control Module (PACMod), a proprietary system designed and built by AutonomouStuff engineers, provides precise by-wire control of core driving functions and ancillary components. It can be fully customized to accommodate a wide range of applications aboard any vehicle.

PACMod controls by-wire the accelerator, brakes, steering and transmission. It also sends commands to the turn signals, headlights, hazard lights and horn.

It has a controller area network (CAN) bus interface and collects vehicle feedback, such as speed, steering wheel angle and individual wheel speeds, for subsequent analysis, which is particularly important in the R&D vehicles for which it is primarily designed.
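To make the idea concrete, the sketch below shows how a by-wire steering command and its feedback report might travel over a CAN bus. The frame IDs, payload layout and scaling are hypothetical placeholders for illustration; they are not the actual PACMod3 protocol.

```python
# Minimal sketch (not the actual PACMod3 protocol): sending a by-wire steering
# command over CAN and reading back a feedback frame. Message IDs, scaling and
# payload layout here are hypothetical placeholders for illustration only.
import struct

import can  # python-can

STEER_CMD_ID = 0x100   # hypothetical command frame ID
STEER_RPT_ID = 0x200   # hypothetical feedback (report) frame ID

def send_steer_command(bus: can.BusABC, angle_rad: float, enable: bool = True) -> None:
    """Pack a steering angle command into a CAN frame and transmit it."""
    payload = struct.pack("<Bf3x", int(enable), angle_rad)  # enable byte + float angle + padding
    bus.send(can.Message(arbitration_id=STEER_CMD_ID, data=payload, is_extended_id=False))

def read_steer_feedback(bus: can.BusABC, timeout_s: float = 0.05):
    """Wait briefly for the steering report frame and decode the measured angle."""
    msg = bus.recv(timeout=timeout_s)
    if msg is None or msg.arbitration_id != STEER_RPT_ID:
        return None  # no feedback this cycle; the caller decides whether to disengage
    (measured_angle,) = struct.unpack_from("<f", msg.data, 1)
    return measured_angle

if __name__ == "__main__":
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    send_steer_command(bus, angle_rad=0.10)
    print("measured steering angle:", read_steer_feedback(bus))
```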

Finally, it has an in-built safety design with intuitive features, such as an immediate return to full manual control in urgent situations. This qualifies it for road approval in Europe, the U.S. and Japan.

AutonomouStuff’s third iteration of the system, PACMod3, has three years of accumulated experience and a considerable amount of road-test mileage under its belt (figure 2).

For example, PACMod sends a command to the steering system to steer along a certain trajectory. It then receives feedback from the vehicle confirming that the command was actually executed, similar to a handshake. This matters because, if that confirmation does not arrive, the DBW system is disengaged so the safety driver can take over (figure 3).
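The handshake logic can be sketched in a few lines. The thresholds, timing and helper functions below are illustrative assumptions, not PACMod3 internals; the point is simply that an unconfirmed command leads to disengagement.

```python
# Sketch of the command/feedback "handshake" idea described above: if the
# reported steering angle does not track the commanded angle within a timeout,
# by-wire control is dropped so the safety driver can take over. All thresholds
# and helper functions are illustrative, not PACMod3 internals.
import time

ANGLE_TOLERANCE_RAD = 0.05   # how closely feedback must track the command
FEEDBACK_TIMEOUT_S = 0.2     # how long to wait before declaring a fault

def verify_command(send_cmd, read_feedback, disengage, target_angle: float) -> bool:
    """Send a steering command and confirm the vehicle actually executed it."""
    send_cmd(target_angle)
    deadline = time.monotonic() + FEEDBACK_TIMEOUT_S
    while time.monotonic() < deadline:
        measured = read_feedback()
        if measured is not None and abs(measured - target_angle) < ANGLE_TOLERANCE_RAD:
            return True           # handshake complete: command verified
        time.sleep(0.01)
    disengage()                   # no confirmation: hand control back to the safety driver
    return False
```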

“To take control with DBW, that’s simply controlling the vehicle,” Baldwin continued. “The lowest level is the steering electronic control unit (ECU), the computer that controls the steering wheel. We’re the next layer up, that the autonomy layer can access. The autonomy layer is after all that. You give somebody a platform and say, ‘This is robot-ready, you can control it.’”

“The other thing we provided was a speed and steering controller (SSC). The SSC is a piece of software that sits on top of the DBW system, in between it and the autonomy stack: the perception system, the planning system and a positioning system. Autoware is the perception, planning and implementation system that interfaces with the sensors: the GNSS from Hexagon | NovAtel, the cameras and the LiDAR.”

The autonomy system is the one that replaces the driver. The SSC interprets the commands from the autonomy system and directs commands to the DBW system.
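Conceptually, the SSC’s job can be illustrated with a simplified controller that turns a desired speed and curvature into steering, throttle and brake requests. The kinematic model and gains below are placeholders for illustration, not the actual SSC implementation.

```python
# Illustrative sketch of what a speed-and-steering-controller layer does:
# translate a high-level command (desired speed and curvature) from the autonomy
# stack into low-level steering, throttle and brake requests for the DBW system.
# The gains and the kinematic model below are simplified placeholders.
import math
from dataclasses import dataclass

@dataclass
class DbwCommand:
    steering_wheel_rad: float  # steering wheel angle request
    throttle: float            # 0..1
    brake: float               # 0..1

def ssc_step(desired_speed: float, desired_curvature: float,
             current_speed: float,
             wheelbase_m: float = 2.79, steering_ratio: float = 14.8,
             accel_gain: float = 0.15, brake_gain: float = 0.25) -> DbwCommand:
    # Kinematic bicycle model: road-wheel angle from curvature, then scale to the wheel.
    road_wheel_angle = math.atan(wheelbase_m * desired_curvature)
    steering_wheel = road_wheel_angle * steering_ratio

    # Simple proportional speed control split into throttle and brake channels.
    speed_error = desired_speed - current_speed
    throttle = min(max(accel_gain * speed_error, 0.0), 1.0)
    brake = min(max(-brake_gain * speed_error, 0.0), 1.0)
    return DbwCommand(steering_wheel, throttle, brake)
```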

The DBW must be custom-tuned to the specific vehicle, so the operator or the DBW knows how much to turn the steering wheel: a lot or a little. Different car makes and models have dramatically different responses, and even within the same model, individual vehicles vary in that respect. Human drivers sense this quickly when taking the wheel of an unfamiliar vehicle. The DBW system needs to be told, or properly tuned.

“You need that to parametrize a lot of different things so it behaves smoothly, in a human-like manner,” Baldwin said. “We tweaked the SSC, which mimics the human behavior. We simulated what the autonomy stack would do, in order to tune the SSC for different speeds.”
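A per-vehicle tuning profile of the kind described might look like the following sketch; the parameter names and values are invented for illustration and are not calibration data for any real vehicle.

```python
# Hypothetical per-vehicle tuning profile of the kind a DBW/SSC layer needs so a
# generic controller "knows how much to turn the wheel" on a given platform.
# Values are illustrative, not calibration data for any real vehicle.
from dataclasses import dataclass

@dataclass(frozen=True)
class VehicleTuning:
    steering_ratio: float         # steering-wheel angle per road-wheel angle
    wheelbase_m: float            # affects curvature-to-steering conversion
    max_steering_rate_rad_s: float
    throttle_gain: float          # speed-error to throttle mapping
    brake_gain: float             # speed-error to brake mapping
    steering_deadband_rad: float  # ignore tiny corrections for smoothness

# Two made-up profiles to show why the same command needs different actuation.
MINIVAN = VehicleTuning(14.8, 3.03, 6.0, 0.12, 0.22, 0.005)
SMALL_EV = VehicleTuning(13.2, 2.57, 8.0, 0.18, 0.30, 0.003)
```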

AutonomouStuff engineers installed the DBW and the SSC, then shipped the Robotaxi back to Japan. Tier IV took it to their shop and installed the many sensors and the company’s own proprietary version of Autoware.

PACMod3 has a safety layer built in; if there’s a fault, it disengages. Fault checking for the actual health of the DBW system occurs constantly, and the feedback loop ensures commands were actually carried out.

Autoware, the brains commanding the DBW body

Autoware consists of modular, customizable software stacks, each with a special purpose within the autonomous vehicle. It includes modules for perception, decision-making and control. At the top level, the control module formulates the actual commands the system gives to the actuators via the DBW system, achieving what the planning module wants the vehicle to do to get from point A to point B. Because each function is an independent module, functionality is easy to add, remove or modify based on particular project needs (figure 5).
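The modular idea can be illustrated with a minimal ROS 2 launch description, in which perception, planning and control run as independent nodes wired together by topics, so any one of them can be swapped without touching the others. The package and executable names below are placeholders, not Autoware’s actual package layout.

```python
# A minimal ROS 2 launch sketch showing the modular idea: perception, planning
# and control run as independent nodes connected by topics, so any one of them
# can be added, removed or swapped. Package and executable names below are
# placeholders, not the actual Autoware package layout.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description() -> LaunchDescription:
    return LaunchDescription([
        Node(package="demo_perception", executable="lidar_camera_fusion",
             name="perception",
             remappings=[("objects", "/perception/objects")]),
        Node(package="demo_planning", executable="behavior_planner",
             name="planning",
             remappings=[("objects", "/perception/objects"),
                         ("trajectory", "/planning/trajectory")]),
        Node(package="demo_control", executable="trajectory_follower",
             name="control",
             remappings=[("trajectory", "/planning/trajectory"),
                         ("ctrl_cmd", "/control/command")]),
    ])
```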

Autoware supplies and supports the world’s largest autonomous driving open-source community, developing applications from advanced driver assistance systems (ADAS) to autonomous driving. The software ecosystem is administered by the non-profit Autoware Foundation, which has more than 50 corporate, organizational and university members. Tier IV and AutonomouStuff are members and core participants of the foundation.

“Autoware is used in projects in the U.S., Japan, China, Taiwan and Europe,” said Christian John, president of Tier IV North America. “All the learning, testing, debugging, all that experience comes back into the open-source platform. Everyone benefits from those enhancements.

“It takes a large number of partners to implement and deploy an autonomous technology. The various sensors, the LiDARs, the cameras, the ECUs running the software, all these have to come together to implement autonomy.”

Autoware.AI, the original Autoware project, was built on the Robot Operating System (ROS™ 1), was launched as a research and development platform for autonomous driving technology, and is available under the Apache 2.0 license. A further iteration, Autoware.Auto, is based on the more complex ROS 2 and is targeted at commercial applications of autonomous driving requiring higher performance, security and safety.

Autoware.Auto integrates sensor data to provide absolute (GNSS) and relative (inertial and LiDAR) localization, perception (camera), planning and control, along with adherence to high-definition maps. A sensor-agnostic and ECU-agnostic methodology enables advanced fusion techniques, producing higher accuracy, robustness and applicability. Industrial users can port, modify and customize their version of the software to adapt to their hardware, performance and operational design domain (ODD) requirements.

Autoware.Auto source code and documentation enable development work to get underway quickly. Modularized code makes the overall structure comprehensible, as well as simplifying testing, modification and integration of additional functionality. Today, Autoware.Auto is being used in fully autonomous applications including cargo delivery, last-mile transportation and mobility-as-a-service (MaaS) platforms.

The primary job of ROS is to bring sensor data, including those mentioned above plus wheel odometry and other elements from the vehicle’s CAN bus, into the computing process and into the core functions of the autonomy software stack. For example, sensor data is used by programmers to develop various simultaneous localization and mapping (SLAM) algorithms to identify the vehicle’s position and direction, commonly referred to as the pose of the ego vehicle. Autoware’s control output to the vehicle is a spatial velocity, or “twist”, comprising a linear velocity and an angular velocity.
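A minimal rclpy node illustrates that data flow: a sensor-derived pose comes in, and a twist command (linear and angular velocity) goes out. The topic names and the trivial control law are assumptions made for illustration; they are not Autoware’s actual interfaces.

```python
# Minimal rclpy sketch of the data flow described above: sensor-derived pose in,
# a "twist" (linear and angular velocity) command out. Topic names and the toy
# control law are assumptions for illustration, not Autoware's actual interfaces.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped, TwistStamped

class TwistCommander(Node):
    def __init__(self):
        super().__init__("twist_commander")
        self.create_subscription(PoseStamped, "/localization/pose", self.on_pose, 10)
        self.cmd_pub = self.create_publisher(TwistStamped, "/control/cmd_vel", 10)

    def on_pose(self, pose: PoseStamped) -> None:
        cmd = TwistStamped()
        cmd.header.stamp = self.get_clock().now().to_msg()
        cmd.twist.linear.x = 2.0    # creep forward at 2 m/s (placeholder behaviour)
        cmd.twist.angular.z = 0.0   # no yaw rate requested
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(TwistCommander())

if __name__ == "__main__":
    main()
```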

AutonomouStuff, Tier IV and the other foundation partners continually develop upgrades and new features within Autoware and share them to help customers with their work. Recent enhancements include traffic-light classification software, object detection and avoidance algorithms, and dynamic path planning.

AutonomouStuff and Tier IV have worked together since early 2020 in a strategic partnership to create, support and deploy autonomy software solutions around the world and across a variety of industries.

Making decisions

Overall success in autonomous vehicle control requires solving a plethora of sub-problems. The thorniest may be decision-making. Naturally, this function is performed by the brain, in this case Autoware. It must connect perception (sensors) with action (commands to the DBW) to control the car and move it forward through the environment. For perception, a human can easily identify the different objects in front of or adjacent to them. For control, a human can plan and follow a trajectory without much effort. Neither is nearly so easy for an autonomous car (figure 8).

For autonomous vehicles, decision making usually starts with high-level choices that must then be implemented by a motion planner to create the trajectory for execution by the controller. The car’s goals are to reach the destination, avoid collisions, respect road regulations and offer a comfortable ride. The challenge comes from finding decisions that best comply with these goals.

If a road situation becomes too complex, doing nothing becomes the only safe decision; the car will stop, putting it in conflict with the reach-destination goal. Sensor input may be faulty; surrounding cars or pedestrians may behave unexpectedly; the perceived environment may be blocked or obscured in some way by sharp corners or large parked vehicles. Many factors can create uncertainty that a human may be able to balance or rationalize, whereas a computer and software, however sophisticated, are neither so subtle nor so sensible.

Autoware employs mixed-integer programming (MIP), an optimization technique that maximises (or minimises) a linear function of decision variables, some of which must take integer values, subject to a set of constraints. A route-determination problem can then be solved for an optimal solution, corresponding to the best values to assign to the variables, such as whether to change lanes, while satisfying the constraints. While MIP processes can solve many complex driving situations, they cannot process uncertain or partial data; thus the extraordinary difficulty with ordinary real-life situations.
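As a toy example of the technique (not Autoware’s actual formulation), the following sketch uses the PuLP library to pose a single lane-change decision as a mixed-integer program: a binary variable chooses the manoeuvre that minimises cost while respecting a safety-gap constraint, with all numbers invented for illustration.

```python
# A toy mixed-integer program (using the PuLP library) in the spirit described
# above: a binary "change lane" decision chosen to minimise a cost while
# respecting a safety-gap constraint. The costs and gap values are invented and
# are far simpler than a real planner's formulation.
from pulp import LpProblem, LpVariable, LpMinimize, value

# Scenario parameters (hypothetical): cost of staying behind a slow vehicle,
# comfort cost of a lane change, measured gap and the minimum safe gap.
STAY_COST = 8.0
CHANGE_COST = 3.0
GAP_M = 35.0
MIN_SAFE_GAP_M = 25.0

prob = LpProblem("lane_change_decision", LpMinimize)
change = LpVariable("change_lane", cat="Binary")  # 1 = change lane, 0 = stay

# Objective: total delay/discomfort cost of the chosen manoeuvre.
prob += CHANGE_COST * change + STAY_COST * (1 - change)

# Constraint: a lane change is only permitted if the measured gap is large enough.
# (If GAP_M were smaller than MIN_SAFE_GAP_M, this would force change = 0.)
prob += MIN_SAFE_GAP_M * change <= GAP_M

prob.solve()
print("change lane" if value(change) > 0.5 else "stay in lane")
```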

Public demonstrations

In November and December 2020, Tier IV and its partners conducted a total of 16 days of public automated driving tests in Nishi-Shinjuku, a busy commercial centre in central Tokyo. Government officials and members of the public were recruited as passengers to ride and comment. The routes ranged from 1 to 2 kilometres in traffic (figure 8). On some days, a safety driver sat behind the wheel, ready to take control if something unexpected happened (it never did). On others, the driver’s seat was empty; a remote driver monitored screens showing the vehicle’s surroundings and progress, ready to assume remote control (figure 9).

The November tests ran along a single predetermined route. The December tests allowed participants to choose among three different departure and arrival points on their smartphones, summon the taxi to them and then ride it to their desired destination. Therefore, the vehicle had to compute and decide among a large number of potential routes, making the implementation more challenging.

A total of more than 100 test riders participated. It was an intensive learning experience for the designers and operators of the Robotaxi, as some of the Tier IV engineers subsequently recounted in online blogs about the demonstrations. The challenging environment and conditions of Nishi-Shinjuku—heavy traffic, many left and right turns, lane-change decisions and more—combined to fully test Robotaxi’s capabilities.

One of the unexpected lessons learned concerned false detection of obstacles. High curbs on some roadsides and even accumulations of leaves in gutters created problems for the perception system. Autoware is programmed to recognize what must be detected, such as automobiles and pedestrians, and to distinguish these objects from others such as rain or blowing leaves, which can be ignored. However, this remains a work in progress. Differentiating between falling leaves and objects falling off the back of a truck is not as easy for Robotaxi as it is for human eyes and brains.

Another problem indicating future work to be done was the performance of unprotected turns at non-signalled intersections, when either the view of oncoming traffic was obscured (something that calls for a good deal of human judgment and quick reaction) or the rate of approach of the oncoming traffic was difficult to estimate. To compound this, Autoware is programmed not to accelerate suddenly to take advantage of a gap in traffic as a human might do; the comfort and ease of passengers have value. Such balances of conservativeness and aggressiveness, so natural to humans, can be difficult to achieve in a programmed system in heavy traffic.

The LiDAR sensors also experienced occasional difficulty in environments without distinctive features, such as open park areas and tunnels. Further, the relatively high expense of LiDAR sensors may create difficulties at mass-market scale, when many vehicles need to be outfitted.

To solve this, some Tier IV engineers blogged that they are experimenting with a technique called Visual SLAM, using a relatively inexpensive camera coupled with an inertial measurement unit (IMU) in place of a LiDAR sensor. Visual SLAM builds a map from visual information while simultaneously estimating the vehicle’s position within that map. In addition, a technology called re-localization, which estimates the vehicle’s position within a map created in advance, is being actively researched. But Visual SLAM has its own challenges: it does not operate well in darkness, nor with many simultaneously and divergently moving objects.

Scaling up to the future

Nevertheless, Tier IV and AutonomouStuff relish the challenges. “A lot of innovation is happening in this space,” John said. “The OS (Open Source) allows many players to bring their solutions into the ecosystem: cost, power consumption, safety architectures—these efforts are bringing in the best-of-class solutions and players.”

The fast-developing driverless vehicle market features a lot of players and many variants of sensor combinations and integrations; some are expensive, some less so.

“Other companies are very vertically integrated,” John added, “developing their own software stacks, as opposed to our approach to open source. The market is still pretty early from the standpoint of mass deployment adoption. Some players have been able to demonstrate full Level 4 and deploy in limited markets.

“But at the same time, to really scale up their approaches, there’s another significant round of system optimization that needs to occur. One thousand watts plus of computer power in a car and sensor integration of $100,000 USD per vehicle: this just doesn’t scale to tens of thousands of vehicles across many cities.

“That’s why there’s now all this investment: solid-state LiDAR, imaging radar, all these things that continue to advance perception capabilities. This means new solutions must be integrated and optimized within our perception stack. Then, once you’ve made these changes, how do I verify my new system? How do I demonstrate that it still meets safety requirements?”

John had final words for the future. “To me, it feels like everybody has demonstrated they can make the software work in limited deployments. To scale, it will take another significant investment to redesign and validate the systems, and open source will play a significant role here in optimizing next-generation AD solutions.”
