An Autonomous Vehicles Network with 5G URLLC Technology

An intelligent Internet of Vehicles (IoV) network built on 5G URLLC (Ultra-Reliable Low Latency Communications) technology can be delivered using a MEC (Multi-access Edge Computing) architecture based on GIGABYTE edge servers, such as the H242 Series, equipped with vRAN and AI inferencing capabilities.
A Key Application of 5G: an Autonomous Vehicles Network
Autonomous driving technology promises great benefits to the future of transportation - not only freeing us from the tedious task of driving to focus on our work, leisure time or rest, but also eliminating human error and recklessness, improving reaction times, increasing the efficiency of traffic flow and decreasing the rate of accidents on our roads. It also promises to fully automate freight transportation and taxi services, removing the need for a human employee and making these services cheaper and more widely available where manpower is limited.

To fully deliver the benefits of this technology, however, all vehicles will need to be connected both to each other and to roadside systems (such as traffic light systems, mapping and traffic monitoring / management systems, emergency services or road maintenance services). These connections need to operate in real time with ultra-low latency and ultra-high reliability - and a new category of 5G service, URLLC (Ultra-Reliable and Low Latency Communications), promises to deliver exactly that.
The Challenges of Enabling a URLLC Network
URLLC is a new service category of 5G aimed at mission-critical communications, with a target latency of 1 millisecond and requirements for end-to-end security and 99.999 percent reliability. This ultra-fast and ultra-reliable type of wireless communication is ideal for latency-sensitive applications such as autonomous driving, enabling vehicles to "go online" to share and receive data in real time, both with neighboring vehicles (via vehicle-to-vehicle communications) and with the surrounding roadside environment (via vehicle-to-infrastructure communications).
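To put the 99.999 percent figure in perspective, a short calculation (a sketch, not from the source) converts availability targets into the cumulative downtime they permit per year:

```python
# Translate URLLC's 99.999% ("five nines") reliability target into
# allowed downtime per year, to make the requirement concrete.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def allowed_downtime_seconds(availability: float) -> float:
    """Maximum cumulative outage per year for a given availability."""
    return SECONDS_PER_YEAR * (1.0 - availability)

for label, availability in [("99.9%", 0.999), ("99.99%", 0.9999), ("99.999%", 0.99999)]:
    minutes = allowed_downtime_seconds(availability) / 60
    print(f"{label} availability -> {minutes:.1f} minutes of downtime per year")
```

Five nines allows only about 5.3 minutes of total outage per year, which is why redundancy must be designed in at every level of the network.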
4G LTE vs. 5G and URLLC Usage Scenarios
For example, in fully automated driving with no human intervention, vehicles can use information received from roadside infrastructure or other vehicles to conduct automated overtaking, cooperative collision avoidance or high-density platooning. And at smart intersections, vehicles can coordinate with traffic lights and other systems so that emergency vehicles and buses can be prioritized. All of these applications require a very high level of reliability and strict end-to-end latencies that only a URLLC communications network can guarantee.

However, neither onboard processing nor cloud computing will be sufficient to store and process the massive amount of data generated by intelligent vehicles from their multitude of high-resolution cameras and sensors (including RADAR, LIDAR, SONAR and GPS), or to achieve better safety than the best human driver by processing real-time traffic conditions within a latency of 100 ms (the best human driver can take action within 100 ~ 150 ms).
Internet of Vehicles and Autonomous Vehicles
Onboard processing and storage capabilities are limited by resource and power constraints. For example, the GPUs needed for low-latency computation and inferencing have high power consumption requirements, further magnified by the cooling load needed to meet thermal constraints, which can significantly degrade the driving range and fuel efficiency of the vehicle. And local storage devices such as SSDs can be filled by sensor data within hours. While onboard processing capabilities may be sufficient to manage interaction between passengers and the vehicle, they will not be sufficient to manage vehicle-to-vehicle or vehicle-to-infrastructure workloads.
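A rough sanity check of the storage claim; the aggregate sensor data rate used here is an illustrative assumption (cameras, LIDAR and RADAR combined), not a figure from the article:

```python
# Rough check of the claim that onboard SSDs can fill within hours.
# The 0.3 GB/s aggregate sensor rate and 4 TB capacity are illustrative
# assumptions, not figures from the article.

def hours_to_fill(ssd_capacity_tb: float, data_rate_gb_per_s: float) -> float:
    """Hours until an SSD of the given capacity is full at the given write rate."""
    capacity_gb = ssd_capacity_tb * 1000
    return capacity_gb / data_rate_gb_per_s / 3600

print(f"{hours_to_fill(4.0, 0.3):.1f} hours to fill a 4 TB SSD at 0.3 GB/s")
```

Under these assumptions a 4 TB drive fills in under four hours of driving, so raw sensor data must be offloaded or discarded continuously rather than stored onboard.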

Meanwhile, long latencies and massive data transmission bottlenecks mean that cloud computing is also not a sufficient answer for connecting intelligent vehicles together into an Internet of Vehicles.
A Solution for an Autonomous Vehicles Network
The solution consists of three elements:
MEC-Enabled BBU Servers for Ultra-Low Latency Communications (time savings)
GIGABYTE Server Hardware for Reliability & Redundancy (reliability and consistency)
Network Slicing for Diversified Service Requirements (flexibility, scalability and capacity)
MEC-Enabled BBU Servers for Ultra-Low Latency Communications
The answer will be to deploy storage and computing resources at the wireless network edge, including edge caching, edge computing and edge AI, using a MEC ("Mobile Edge Computing" or "Multi-access Edge Computing") network architecture running on Baseband Unit (BBU) servers at base stations or radio access points along the roadside. A cloud-native BBU server is fully software-defined and reconfigurable on demand, and can be adopted for both CRAN and DRAN installations. It can run Virtualized RAN (vRAN) services (with Network Functions Virtualization) together with MEC applications such as autonomous driving, mapping and others deployed by the vehicle manufacturer, the traffic department or other third parties.
Traditional Network Topology vs MEC
GIGABYTE Server Hardware for Reliability & Redundancy
One of the most important requirements of a URLLC network is reliability and redundancy. This must be fulfilled on many levels: the software, application, networking, transmission and physical levels. A cloud-native BBU server virtualizes the networking, routing, switching and security functions of the wireless radio network, ensuring a high level of redundancy and reliability, since containers or virtual machines can easily be regenerated and hot standbys are always available in case of failure.


Typical MEC Hardware + Software Stack
On the physical hardware level, GIGABYTE's H242 Series server features a multi-node design to ensure adequate physical redundancy: if one node fails, the virtualized environment will immediately shift the workload to the other physical nodes within the server, and the faulty node can be quickly and cheaply replaced with a new one.
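The failover behavior described above can be sketched as a simple reassignment of workloads from a failed node to its surviving peers. This is a minimal illustration of the idea, not GIGABYTE's actual orchestration logic; the node names and workload labels are hypothetical:

```python
# Minimal sketch of multi-node failover: when one node in a multi-node
# server fails, its workloads are redistributed round-robin across the
# surviving nodes. Node names and workloads are hypothetical examples.

def redistribute(assignments: dict, failed_node: str) -> dict:
    """Return a new assignment map with the failed node's workloads moved."""
    survivors = [n for n in assignments if n != failed_node]
    if not survivors:
        raise RuntimeError("no healthy nodes left")
    orphaned = assignments[failed_node]
    new = {n: list(w) for n, w in assignments.items() if n != failed_node}
    for i, workload in enumerate(orphaned):
        new[survivors[i % len(survivors)]].append(workload)
    return new

cluster = {
    "node-1": ["vRAN-DU"],
    "node-2": ["MEC-app"],
    "node-3": ["AI-inference"],
}
print(redistribute(cluster, "node-2"))
```

In a real deployment this decision is made by the virtualization layer (e.g. a container orchestrator), which also restarts the affected containers or VMs on the surviving nodes.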
Network Slicing for Diversified Service Requirements
Network slicing is an architecture that enables multiple virtualized and independent logical networks to run on the same physical network infrastructure, in order to meet diversified service requirements.

Network slicing enables network elements and functions to be easily configured and reused in each slice to meet a specific requirement. Each slice appears as a self-contained network that includes its own core network and RAN. Thanks to the underlying network virtualization software, each slice can have its own network architecture, security, quality of service and network provisioning. This allows one network slice to provide low-security, low-bandwidth services (such as for mMTC), while another slice provides high-security, high-reliability services (such as for URLLC).
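The idea can be sketched as a small slice catalog plus a selection rule. The SST values follow the 3GPP convention (1 = eMBB, 2 = URLLC, 3 = mMTC); the latency and reliability figures are illustrative targets, not measured values:

```python
# Sketch of network slice selection: each slice is an independent logical
# network with its own QoS profile on shared physical infrastructure.
# SST values follow 3GPP convention; QoS figures are illustrative.

SLICES = {
    "eMBB":  {"sst": 1, "max_latency_ms": 20.0,   "reliability": 0.999},
    "URLLC": {"sst": 2, "max_latency_ms": 1.0,    "reliability": 0.99999},
    "mMTC":  {"sst": 3, "max_latency_ms": 1000.0, "reliability": 0.99},
}

def select_slice(required_latency_ms: float, required_reliability: float) -> str:
    """Pick the least demanding slice that still meets the service requirements."""
    candidates = [
        name for name, qos in SLICES.items()
        if qos["max_latency_ms"] <= required_latency_ms
        and qos["reliability"] >= required_reliability
    ]
    if not candidates:
        raise ValueError("no slice meets the requirements")
    # Among the qualifying slices, prefer the one with the loosest latency
    # bound, so stricter slices are reserved for services that need them.
    return max(candidates, key=lambda n: SLICES[n]["max_latency_ms"])

print(select_slice(5.0, 0.9999))   # a V2X safety message maps to URLLC
print(select_slice(50.0, 0.99))    # in-car video streaming fits eMBB
```

A vehicle's safety-critical V2X traffic and its passengers' infotainment traffic can thus travel over the same physical network while receiving entirely different guarantees.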
Network Slicing for Diversified Service Requirements: eMBB, URLLC, mMTC
The fully cloud-native architecture of 5G and MEC is ideal for supporting network slicing, allowing a URLLC network to run on the same physical infrastructure as other 5G services to save on infrastructure investment and network operating costs. Network slicing is the key to meeting 5G's diverse requirements, and is a perfect solution for seamlessly integrating an autonomous vehicles network together with other applications that have different network service requirements, such as eMBB or mMTC.
Conclusion
The realization of URLLC can empower several technological transformations in the transportation industry, including automated driving, road safety and traffic efficiency services. These transformations will get cars fully connected so that they can react to increasingly complex road situations by cooperating with other vehicles rather than relying only on their local information. These trends will require information to be disseminated among vehicles reliably within extremely short time frames. A MEC network architecture running together with vRAN services on GIGABYTE edge servers such as the H242 Series will therefore be an ideal way to enable a URLLC 5G network for an intelligent Internet of Vehicles.
GIGABYTE Server Products for URLLC MEC
GIGABYTE's H242 Series server running vRAN, MEC and AI services will be an ideal way to build a 5G URLLC network for an Intelligent Internet of Vehicles, to improve traffic flow efficiency, decrease accident rates and make autonomous freight and taxi services cheaper and more plentiful. This multi-node edge server includes support for GPGPU cards such as NVIDIA's T4, providing AI inferencing capabilities that will be particularly important for autonomous driving applications. 
H242 Series Multi-Node Edge Server
H242-Z10 (rev. 100): rear-access sliding node trays
H242-Z11 (rev. 100): front-access sliding node trays
Related Technologies
Multi-access Edge Computing
What is Multi-access Edge Computing (Mobile Edge Computing)? Multi-access Edge Computing (MEC), also known as Mobile Edge Computing, is a network architecture that enables cloud computing capabilities and an IT service environment at the edge of a cellular network. MEC technology is designed to be implemented at cellular base stations or other edge nodes, and enables flexible and rapid deployment of new applications and services for customers. MEC is ideal for the next generation of 5G cellular networks.
ADAS
Advanced Driver Assistance Systems (ADAS) constantly monitor the vehicle surroundings, alert the driver of hazardous road conditions, and take corrective actions, such as slowing or stopping the vehicle. These systems use inputs from multiple sensors, such as cameras and radars. The fusion of these inputs is processed and the information is delivered to the driver and other parts of the system.
Edge Computing
Edge computing is a type of computing network architecture, where computation is moved as close to the source of data as possible, in order to reduce latency and bandwidth use. The aim is to reduce the amount of computing required to be performed in a centralized, remote location (i.e. the "cloud") far away from the source of the data or the user who requires the result of the computation, thus minimizing the amount of long-distance communication that has to happen between a client and server. Rapid advances in technology allowing for miniaturization and increased density of computing hardware as well as software virtualization have made edge computing more feasible in recent years. Learn More: 《What is Edge Computing? Definition and Cases Explained.》