Livium

Humanoid Robot Glossary

Learn essential humanoid robotics terminology and technical concepts. From actuators and DOF to SLAM and AI, understand the technology powering the next generation of humanoid robots.

A

Actuator

A mechanical component that converts energy into motion, enabling a robot to move its joints and limbs.

Actuators are the "muscles" of a robot. They can be electric motors, hydraulic cylinders, or pneumatic devices that create controlled movement in response to control signals.

AI (Artificial Intelligence)

Computer systems that can perform tasks typically requiring human intelligence, such as visual perception, speech recognition, and decision-making.

In humanoid robots, AI enables autonomous behavior, learning from experience, natural language processing, and adaptive responses to changing environments.

Autonomous

Operating independently without human control or intervention.

Autonomous robots can perceive their environment, make decisions, and execute actions without constant human guidance, though they may still operate within defined parameters.

B

Bipedal

Walking or moving on two legs.

Bipedal locomotion is a key challenge in humanoid robotics, requiring sophisticated balance systems, sensors, and control algorithms to maintain stability while walking.

C

Center of Mass (COM)

The point where the total mass of a robot can be considered to be concentrated.

Understanding and controlling the center of mass is crucial for maintaining balance in humanoid robots, especially during dynamic movements like walking or reaching.
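
As a toy sketch (the link masses and COM positions below are invented example values), a robot's overall COM is just the mass-weighted average of its link COMs:

```python
# Toy whole-body center-of-mass computation.
# Link masses (kg) and COM positions (x, z in metres) are invented examples.
links = [
    ("torso",     20.0, (0.0, 1.0)),
    ("left_leg",   8.0, (-0.1, 0.5)),
    ("right_leg",  8.0, (0.1, 0.5)),
    ("head",       4.0, (0.0, 1.5)),
]

total_mass = sum(m for _, m, _ in links)
com_x = sum(m * x for _, m, (x, _) in links) / total_mass
com_z = sum(m * z for _, m, (_, z) in links) / total_mass
```

For balance, the ground projection of this point is compared against the foot support area.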

Cobot

Short for "collaborative robot": a robot designed to work safely alongside humans in shared spaces without physical barriers.

Humanoid cobots have safety features like force limiting and collision detection, enabling direct human collaboration.

Compliance

Ability to adjust force and position in response to contact with objects or people.

Critical for safe human interaction and handling delicate objects without damage through adaptive force control.

Computer Vision

Technology that enables robots to derive meaningful information from digital images or videos.

Computer vision allows humanoid robots to identify objects, navigate spaces, recognize faces, read text, and understand their visual environment in real-time.

D

Depth Camera (RGB-D)

A camera that captures both color images and depth information for each pixel.

Essential for humanoid robots to understand 3D space, measure distances to objects, and manipulate items safely in their environment.

Dexterity

Skill and precision in hand and finger movements for complex manipulation.

High dexterity enables humanoid robots to perform intricate tasks like tying knots, using tools, or assembling small components.

Digital Twin

Virtual replica of a physical robot used for simulation, testing, and monitoring.

Enables testing software updates, training scenarios, and troubleshooting without risking real hardware or disrupting operations.

DOF (Degrees of Freedom)

The number of independent ways a robot or robot joint can move.

A human arm has 7 DOF (shoulder: 3, elbow: 1, wrist: 3). More DOF generally means more flexibility and dexterity, but also more complexity in control.

Dynamic Balance

Maintaining stability during fast or unpredictable movements.

Allows humanoids to recover from pushes, walk on moving surfaces, or perform dynamic actions like running and jumping.

E

Embodied AI

AI that learns and operates through physical interaction with the real world.

Unlike software-only AI, embodied AI in humanoids learns through movement, touch, and direct environmental interaction.

Encoder

Device that converts mechanical motion into digital signals for precise position tracking.

Encoders provide continuous feedback on joint positions, enabling accurate movement control and closed-loop positioning in robotic systems.
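
A minimal sketch of the count-to-angle conversion; the resolution and gear ratio are illustrative values, not from any particular encoder's datasheet:

```python
import math

# Count-to-angle conversion for a geared joint encoder.
# Resolution and gear ratio are illustrative, not from any datasheet.
COUNTS_PER_REV = 4096   # encoder counts per motor revolution
GEAR_RATIO = 100        # motor revolutions per joint revolution

def counts_to_joint_angle(counts):
    """Joint angle in radians for a raw encoder count."""
    motor_revs = counts / COUNTS_PER_REV
    joint_revs = motor_revs / GEAR_RATIO
    return joint_revs * 2 * math.pi

angle = counts_to_joint_angle(204800)  # 50 motor revs = half a joint revolution
```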

End Effector

The device at the end of a robotic arm designed to interact with the environment.

End effectors include grippers, hands, tools, or sensors. In humanoid robots, these are often multi-fingered hands capable of complex manipulation tasks.

F

Facial Recognition

Identifying and distinguishing individual human faces for personalized interaction.

Allows robots to recognize users, personalize interactions, and enhance security through computer vision-based identity verification.

Footstep Planning

Determining where and when to place each foot during bipedal locomotion.

Critical for navigating uneven terrain, stairs, or cluttered environments by computing safe, stable foot placement sequences.

Force/Torque Sensor

Sensors that measure forces and torques applied at robot joints or end-effectors.

Enable safe human interaction and delicate object manipulation by detecting contact forces, preventing damage to objects or people.

Form Factor

Physical size, shape, and configuration of a robot's body and structure.

Humanoid form factors range from child-sized service robots to full-size industrial units, affecting capabilities and applications.

Foundation Model

Large AI model trained on vast data that can be adapted for multiple robotic tasks.

Modern humanoids use foundation models like GPT for natural language understanding, reasoning, and task planning.

G

Gait

The pattern of movement during locomotion.

Humanoid robots must develop stable gaits for walking, running, or climbing stairs. Different gaits optimize for speed, stability, or energy efficiency.

Gesture Recognition

Ability to understand and respond to human body language and hand signals.

Enables natural, intuitive human-robot communication without voice commands or physical controllers through visual interpretation.

Gyroscope

A sensor that measures angular velocity and orientation.

Gyroscopes help humanoid robots maintain balance by detecting tilting or rotation, allowing the control system to make rapid corrections.

H

Haptic Feedback

Technology that creates tactile sensations, providing a sense of touch.

Haptic feedback conveys the pressure and texture a robot's tactile sensors detect back to a human operator or control system, enabling delicate manipulation of objects without damaging them.

Hydraulic Actuator

Actuator powered by pressurized fluid for high-force applications.

Used in larger humanoids such as the hydraulic Boston Dynamics Atlas for powerful, dynamic movements requiring high force-to-weight ratios.

I

Imitation Learning

Machine learning approach where robots learn by observing and copying human demonstrations.

Allows robots to quickly learn new tasks from human examples rather than requiring explicit programming or lengthy trial-and-error.

IMU (Inertial Measurement Unit)

A sensor that measures acceleration, angular velocity, and sometimes magnetic field.

IMUs are essential for humanoid robot balance and navigation, providing real-time data about the robot's orientation and movement in 3D space.

Inverse Kinematics

Mathematical process of calculating joint angles needed to achieve a desired end-effector position.

When a robot needs to reach a specific point in space, inverse kinematics determines how each joint must move to get there, solving complex geometric problems.
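
For a planar 2-link arm the problem has a closed-form solution. A sketch, with invented link lengths, returning one of the two solution branches:

```python
import math

# Closed-form inverse kinematics for a planar 2-link arm.
# Link lengths are invented; returns one of the two solution branches.
L1, L2 = 0.3, 0.25  # link lengths in metres (example values)

def ik_2link(x, y):
    """Return (shoulder, elbow) angles in radians reaching target (x, y)."""
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle directly.
    cos_elbow = (r2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow
```

Real humanoid arms have more joints, so IK is usually solved numerically (e.g. with Jacobian-based iteration) rather than in closed form.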

IP Rating

Short for Ingress Protection rating: a standard measure of a robot's resistance to dust and water intrusion (e.g., IP54, IP67).

Higher IP ratings mean robots can work in harsher environments including outdoor, industrial, or wet conditions.

J

Joint

A connection point between two rigid parts that allows relative motion.

Robot joints mimic biological joints like elbows, knees, and shoulders. They can be revolute (rotating), prismatic (sliding), or more complex types.

L

LiDAR (Light Detection and Ranging)

A sensing method that uses laser pulses to measure distances and create 3D maps of the environment.

LiDAR enables robots to build detailed spatial maps, detect obstacles, and navigate complex environments with high precision, even in low light.
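
A 2-D scan arrives as a list of ranges at known beam angles; a sketch of converting it into Cartesian points in the robot frame (the four-beam scan below is made-up data):

```python
import math

# Convert a 2-D LiDAR scan (ranges at known beam angles) into Cartesian
# points in the robot frame. The four-beam scan below is made-up data.
def scan_to_points(ranges, angle_min, angle_increment):
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_increment
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = scan_to_points([1.0, 2.0, 1.5, 1.0],
                     angle_min=0.0, angle_increment=math.pi / 2)
```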

Localization

The process of determining a robot's position within its environment.

Accurate localization is crucial for navigation. Robots use various sensors (cameras, LiDAR, IMU) and algorithms to track their position relative to a map or landmarks.

M

Machine Learning

A subset of AI where systems learn and improve from experience without explicit programming.

Machine learning enables robots to recognize patterns, adapt to new situations, and improve performance over time through training on large datasets.

Manipulation

The ability to physically interact with and control objects in the environment.

Manipulation tasks range from grasping and placing objects to using tools, requiring coordination of vision, touch sensors, and motor control.

Motion Planning

The computational process of finding a path for a robot to move from one configuration to another.

Motion planning algorithms must avoid obstacles, maintain balance, and optimize for factors like speed, energy efficiency, or smoothness of movement.

Multimodal Interaction

Communicating through multiple channels including speech, gesture, facial expression, and touch.

Creates more natural human-robot interaction than single-mode communication by combining voice, vision, and physical interfaces.

N

Natural Language Processing (NLP)

AI technology that enables robots to understand and respond to human language.

NLP allows humanoid robots to have conversations, follow verbal commands, and communicate naturally with humans using speech recognition and generation.

O

Object Recognition

Identifying and classifying objects in the robot's environment through computer vision.

Essential for task performance: distinguishing mugs from cups, doors from walls, or tools from random objects in the workspace.

Odometry

Using sensor data to estimate change in position over time.

Wheel odometry tracks rotations to estimate distance traveled. Visual odometry uses camera images. Both help robots track their movement between position updates.
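
A sketch of differential-drive wheel odometry; each step integrates left/right wheel travel into an (x, y, heading) pose, and the wheel base is an assumed value:

```python
import math

# Differential-drive wheel odometry: each step integrates left/right wheel
# travel into an (x, y, heading) pose. The wheel base is an assumed value.
WHEEL_BASE = 0.4  # distance between wheels in metres

def odometry_step(x, y, theta, d_left, d_right):
    d = (d_left + d_right) / 2.0               # centre-point travel
    dtheta = (d_right - d_left) / WHEEL_BASE   # heading change
    x += d * math.cos(theta + dtheta / 2.0)    # midpoint integration
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

pose = (0.0, 0.0, 0.0)
for _ in range(2):                 # two straight steps of 0.1 m per wheel
    pose = odometry_step(*pose, 0.1, 0.1)
```

Odometry drifts over time, which is why it is normally fused with absolute measurements (cameras, LiDAR) for localization.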

P

Path Planning

Finding a collision-free route from a starting point to a goal location.

Path planning algorithms like A* or RRT compute efficient routes while avoiding obstacles, considering robot dimensions, and optimizing for various constraints.
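
A minimal A* sketch on a 4-connected occupancy grid with a Manhattan-distance heuristic; the grid and endpoints are made-up examples:

```python
import heapq

# A* on a 4-connected occupancy grid (1 = obstacle) with a Manhattan-distance
# heuristic. Grid and endpoints are made-up examples.
def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]        # priority queue of (f-score, cell)
    g = {start: 0}                 # cheapest known cost to each cell
    came_from = {}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:           # reconstruct the path back to start
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[node] + 1
                if tentative < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = tentative
                    came_from[(nr, nc)] = node
                    h = abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(open_set, (tentative + h, (nr, nc)))
    return None  # no collision-free path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))  # must detour around the wall in row 1
```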

Payload

The maximum weight a robot can carry or manipulate effectively.

Payload capacity is a key specification for humanoid robots, determining what tasks they can perform, from carrying boxes to operating tools.

PID Controller

A control algorithm that adjusts outputs based on Proportional, Integral, and Derivative calculations.

PID controllers are fundamental in robotics for maintaining desired positions, velocities, or forces by continuously correcting for errors.
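
A minimal discrete-time PID sketch; the gains and time step are illustrative, not tuned for any real joint:

```python
# A minimal discrete-time PID controller. Gains and time step are
# illustrative, not tuned for any real joint.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # I: accumulated error
        derivative = (error - self.prev_error) / self.dt  # D: rate of change
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
command = pid.update(setpoint=1.0, measurement=0.8)  # position error of 0.2
```

Production controllers add refinements such as integral wind-up limits and derivative filtering.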

R

Reinforcement Learning

Machine learning technique where robots learn through trial and error with reward feedback.

Used to teach complex behaviors like walking, grasping, or navigating obstacles without explicit programming of every movement.

ROS (Robot Operating System)

An open-source framework providing tools and libraries for robot software development.

ROS is widely used in robotics research and development, offering standardized ways to handle sensors, actuators, communications, and control algorithms.

Runtime

Duration a robot can operate continuously on a single battery charge.

Critical specification for commercial applications; most humanoids run 2-8 hours depending on activity level and battery capacity.

S

Safety Certification

Compliance with safety standards (ISO, CE, UL) for safe operation around humans.

Required for commercial deployment; ensures force limits, emergency stops, collision detection, and safe failure modes.

Sensor Fusion

Combining data from multiple sensors to produce more accurate and reliable information.

By integrating data from cameras, IMUs, LiDAR, and other sensors, robots can overcome individual sensor limitations and build robust understanding of their environment.
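
One of the simplest fusion schemes is a complementary filter, which pairs a drifting-but-smooth gyro estimate with a noisy-but-drift-free accelerometer estimate. A sketch with illustrative constants and made-up data:

```python
# Complementary filter fusing gyro and accelerometer pitch estimates:
# integrate the gyro (smooth but drifting), correct with the accelerometer
# (noisy but drift-free). ALPHA and the data stream are illustrative.
ALPHA = 0.98  # weight given to the gyro path
DT = 0.01     # sample period in seconds

def fuse(pitch, gyro_rate, accel_pitch):
    return ALPHA * (pitch + gyro_rate * DT) + (1 - ALPHA) * accel_pitch

pitch = 0.0
for _ in range(100):  # 1 s of data: gyro reads 0, accelerometer reads 0.1 rad
    pitch = fuse(pitch, gyro_rate=0.0, accel_pitch=0.1)
# pitch converges toward the accelerometer's 0.1 rad estimate
```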

Servo

A motorized device that provides precise control of angular or linear position.

Servos are common in robot joints, offering accurate position control with feedback. They enable the precise movements needed for walking, reaching, and manipulation.

Sim-to-Real Transfer

Training robots in simulation, then applying learned behaviors to physical robots.

Accelerates development by testing millions of scenarios safely in virtual environments before deploying to real hardware.

SLAM (Simultaneous Localization and Mapping)

A technique for building a map of an unknown environment while tracking the robot's location within it.

SLAM is essential for autonomous navigation in new spaces, allowing robots to explore while creating spatial maps they can use for future navigation.

T

Tactile Sensing

Detecting touch, pressure, texture, and temperature through artificial skin sensors.

Advanced humanoids use tactile arrays for human-like touch sensitivity, enabling delicate manipulation and safe interaction.

Teleoperation

Remote control of a robot by a human operator.

Teleoperation allows humans to control robots from a distance, useful for dangerous environments, training, or tasks requiring human judgment.

Torque

Rotational force that causes an object to rotate around an axis.

Joint torque determines how much force a robot can apply. Higher torque enables lifting heavier objects or moving more quickly, but requires more powerful actuators.

Trajectory

The path that a robot or robot part follows through space over time.

Trajectory planning ensures smooth, controlled movement from start to goal, considering velocity, acceleration, and avoiding sudden jerky motions.
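
A common minimal approach is a cubic (smoothstep) blend between two joint positions, which starts and ends with zero velocity. A sketch with illustrative values:

```python
# A cubic (smoothstep) point-to-point trajectory with zero start and end
# velocity, a common way to avoid jerky joint motion. Values are illustrative.
def cubic_trajectory(q0, qf, duration, t):
    """Position at time t along a cubic from q0 to qf over `duration` seconds."""
    s = min(max(t / duration, 0.0), 1.0)  # normalised time in [0, 1]
    blend = 3 * s**2 - 2 * s**3           # smoothstep: zero slope at both ends
    return q0 + (qf - q0) * blend

mid = cubic_trajectory(0.0, 1.0, duration=2.0, t=1.0)  # halfway in time
```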

W

Whole-Body Control

Coordinating all robot joints simultaneously to achieve complex multi-limb tasks.

Enables humanoids to walk while carrying objects or performing multiple actions at once through unified motion planning.

Z

Zero Moment Point (ZMP)

The point on the ground where the net moment of gravity and inertial forces has no horizontal component, used as a balance criterion for bipedal robots.

Keeping the ZMP within the support polygon (area of foot contact) is crucial for stable walking. ZMP-based control is a common approach for humanoid locomotion.
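
A sketch of the support-polygon test for a single rectangular foot in stance; the foot dimensions and ZMP coordinates are invented example values:

```python
# Support-polygon test for a single rectangular foot in stance.
# Foot dimensions and ZMP coordinates are invented example values.
FOOT_X = (-0.05, 0.15)  # heel-to-toe extent, metres
FOOT_Y = (-0.05, 0.05)  # foot width extent, metres

def zmp_is_stable(zmp_x, zmp_y):
    """True if the ZMP lies inside the support polygon."""
    return (FOOT_X[0] <= zmp_x <= FOOT_X[1]
            and FOOT_Y[0] <= zmp_y <= FOOT_Y[1])

stable = zmp_is_stable(0.02, 0.0)   # ZMP under the foot: balanced
tipping = zmp_is_stable(0.30, 0.0)  # ZMP past the toe: the robot will tip
```

During double support, the polygon is the convex hull of both feet rather than a single rectangle.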

© 2026 Livium Inc. All rights reserved.
