Research
The project will be based on a number of integrated research work packages which will address the underpinning research challenges. Context awareness (WP1) will be the foundation for more rapid deployment (WP2) and better collaboration (WP3). Self-organisation (WP4) will be the dynamic link to achieve responsiveness and resilience on a system level. All results will be systematically tested and demonstrated (WP5) based on co-created manufacturing challenges.
Work Package 1
Real-time context awareness for responsive robotic services
For automation services to be easier to deploy and set up, and to respond to changes, they must be able to identify, locate and track key product features, objects and people in real time. This requires a new architecture to integrate heterogeneous sensors (cameras, range sensors, 3D vision, etc.) in a ‘plug-and-produce’ style; a digital toolset for heterogeneous sensor calibration, registration and system design optimisation; and new responsive data-processing algorithms that can be easily adapted to robustly detect, locate and identify new objects of interest.
Deliverables:
D1.1: Architecture for real-time awareness in manufacturing settings. A new conceptual framework will be created to define the necessary requirements (e.g. self-x functionality, communication protocols, data structures, and hardware) of complex heterogeneous sensor networks for real-time factory awareness. These will be formalised using system modelling and architecture specification languages (e.g. SysML, xADL). The architecture will be validated through industrial case studies (WP5).
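As an illustration of the plug-and-produce style targeted here, the sketch below shows one way a self-describing sensor interface and runtime registry could look; all class and method names (Sensor, SensorRegistry, etc.) are hypothetical and not part of the specified architecture.

```python
# Illustrative sketch only: a minimal plug-and-produce sensor abstraction.
# All class and method names are hypothetical, not part of the IRaaS design.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict

@dataclass
class SensorDescriptor:
    """Self-description a sensor announces when it joins the network."""
    sensor_id: str
    modality: str            # e.g. "camera", "range", "3d_vision"
    rate_hz: float
    extrinsics: list         # 6-DoF pose in the common factory frame

class Sensor(ABC):
    """Common interface every heterogeneous sensor would implement."""
    @abstractmethod
    def describe(self) -> SensorDescriptor: ...
    @abstractmethod
    def read(self) -> bytes: ...

class SensorRegistry:
    """Registry enabling hot (un)plugging of sensors at runtime."""
    def __init__(self):
        self._sensors: Dict[str, Sensor] = {}

    def register(self, sensor: Sensor) -> None:
        desc = sensor.describe()
        self._sensors[desc.sensor_id] = sensor

    def unregister(self, sensor_id: str) -> None:
        self._sensors.pop(sensor_id, None)
```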
D1.2: Toolset for awareness system management. New procedures for the calibration and registration of heterogeneous sensors will be developed based on natural feature extraction augmented with new multi-sensor calibration artefacts. To optimise sensor placement and calibration procedures, a spatial and temporal voxel model with virtual sensors will be created. The voxel approach will be used in combination with sparse search optimisation algorithms to allow for fast system design and modification. High-resolution digital-twin scene models will then be used for fine-tuning and digital system validation.
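A minimal sketch of the intended voxel-based design optimisation, assuming a virtual-sensor visibility test supplied by the digital twin; the greedy marginal-coverage search below merely stands in for the sparse search optimisation algorithms mentioned above.

```python
# Hypothetical sketch: greedy sensor-placement search over a coarse voxel grid.
# visible(pose, voxel) would come from the virtual-sensor/digital-twin model.
import numpy as np

def coverage(pose, voxels, visible):
    """Boolean mask of voxels seen by a virtual sensor at `pose`."""
    return np.array([visible(pose, v) for v in voxels])

def greedy_placement(candidate_poses, voxels, visible, n_sensors):
    """Pick poses that maximise marginal voxel coverage."""
    covered = np.zeros(len(voxels), dtype=bool)
    chosen = []
    for _ in range(n_sensors):
        best = max(candidate_poses,
                   key=lambda p: np.sum(coverage(p, voxels, visible) & ~covered))
        chosen.append(best)
        covered |= coverage(best, voxels, visible)
    return chosen, covered.mean()   # selected poses and coverage fraction
```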
D1.3: Responsive AI for robust feature extraction and fast algorithm training. The calibrated multi-sensor approach will be used to achieve real-time yet accurate feature extraction from raw data. The fastest, but therefore lower-accuracy, AI algorithms for feature extraction will be deployed on high-performance edge processors. The extracted feature data, greatly reduced in quantity, will be combined using 3D triangulation and consensus-based data fusion to boost accuracy. To enable responsive and fast training of the AI on new, unseen features, a combination of prior feature information (e.g. CAD models), initial human training, and 3D reprojection will be used to create accurate self-labelling algorithms that generate large quantities of training data with minimal human intervention.
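The 3D-reprojection step can be illustrated as follows: assuming a calibrated pinhole camera and feature points known in the factory frame (e.g. from a registered CAD model), 2D labels can be generated automatically. The function names are illustrative only.

```python
# Minimal sketch of reprojection-based self-labelling, assuming a calibrated
# pinhole camera (intrinsics K, rotation R, translation t) and known 3D
# feature points registered in the common factory frame.
import numpy as np

def reproject(point_3d, K, R, t):
    """Project a world-frame 3D point into pixel coordinates."""
    p_cam = R @ point_3d + t          # world -> camera frame
    uvw = K @ p_cam                   # camera -> image plane
    return uvw[:2] / uvw[2]           # perspective division

def auto_label(feature_points_3d, K, R, t):
    """Generate 2D training labels for every known 3D feature in a frame."""
    return {name: reproject(p, K, R, t) for name, p in feature_points_3d.items()}
```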
Work Package 2
Rapid task adaptation and dynamic control methods for robust task execution in changeable environments
Minimising setup and changeover time depends on both fast generation of task execution strategies and fast parameterisation (tuning) of these strategies based on tracking of key process features (WP1). The use of programme reconfiguration and adaptive control will be explored using both machine learning methods and contextual awareness. Learning-from-demonstration approaches will be investigated to allow operators to naturally set up (seed) new tasks. This will remove the need for time-consuming and unintuitive guide-through programming or tele-operation. Self-calibration methods will be explored based on adaptive control and machine learning principles, using contextual responses from the environment to achieve higher-level task autonomy.
Deliverables:
D2.1: Observation-based learning for more natural human task demonstration and changeover. Robotic tasks can be broken down into a combination of simpler tasks called primitives. A method will be developed to observe a human performing a task and then extract the primitives used. Tasks and primitives are normally subject to constraints, e.g. the physical limits of a robot or restrictions of the task (for instance, contact forces cannot exceed certain limits). The online estimation of these constraints from observations will lead to a systematic learning-from-demonstration approach for industrial tasks.
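As a minimal sketch of primitive extraction from observation, the example below segments a demonstrated end-effector trajectory at near-zero-velocity instants; this particular heuristic is an assumption for illustration, not the method specified in the deliverable.

```python
# Illustrative sketch: segmenting an observed demonstration into candidate
# motion primitives at near-zero-velocity points (a common heuristic, assumed
# here rather than specified by the proposal).
import numpy as np

def segment_primitives(positions, dt, vel_eps=1e-2):
    """Split an end-effector trajectory (T x 3 array) at low-speed instants."""
    vel = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    pauses = np.where(vel < vel_eps)[0]              # candidate segment borders
    cuts = sorted({0, len(positions) - 1, *[int(i) for i in pauses]})
    return [positions[a:b + 1]                       # one array per primitive
            for a, b in zip(cuts[:-1], cuts[1:]) if b - a > 1]
```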
D2.2: Self-calibration methods for rapid contextual task adaptation. Observation-based learning can be extended to estimate primitive variation weights (priorities), e.g. variations in manipulator paths may matter much less than variations in orientation when a group of robots handles a large fragile object. Methods for rapidly updating the key parameters of the models used to represent tasks and primitives (e.g. Gaussian Mixture Regression) will be investigated. In addition, by demonstrating the task in multiple ways, task constraints can be generalised to novel situations, allowing for rapid task adaptation.
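The Gaussian Mixture Regression step can be sketched as below: a joint mixture over task input and output is conditioned on an observed input to retrieve the expected task parameters. Shapes and variable names are assumptions for illustration.

```python
# Minimal Gaussian Mixture Regression (GMR) sketch: condition a joint GMM over
# (input x, output y) on an observed x to retrieve an expected output y.
import numpy as np
from scipy.stats import multivariate_normal

def gmr(x, priors, means, covs, dx):
    """priors: (K,); means: (K, dx+dy); covs: (K, dx+dy, dx+dy); x: (dx,)."""
    K = len(priors)
    # Responsibility of each component given the observed input block.
    h = np.array([priors[k] * multivariate_normal.pdf(
            x, means[k][:dx], covs[k][:dx, :dx]) for k in range(K)])
    h /= h.sum()
    # Mixture of conditional means y|x per component.
    y = np.zeros(means.shape[1] - dx)
    for k in range(K):
        mu_x, mu_y = means[k][:dx], means[k][dx:]
        s_xx, s_yx = covs[k][:dx, :dx], covs[k][dx:, :dx]
        y += h[k] * (mu_y + s_yx @ np.linalg.solve(s_xx, x - mu_x))
    return y
```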
D2.3: Robust contextual task control methods for changeable work environments. This will be achieved using constraint-based adaptive control techniques to maintain a desired level of performance under environmental uncertainty, while adhering to the constraints derived from observation-based learning. Goal-oriented control systems (GOCS) will be developed to ensure effectiveness and robustness by building an understanding of the interaction between service robots and their operating environment while adapting to changes in environments and tasks.
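A hedged sketch of one possible constraint-based control step follows, in which a nominal impedance-style command is projected back inside learned constraint limits; simple box bounds on force stand in here for the constraints estimated in D2.1.

```python
# Hypothetical sketch of a constraint-respecting control step: compute a
# nominal impedance command, then clamp it to learned force limits.
import numpy as np

def control_step(x, x_des, dx, stiffness, damping, f_max):
    """Commanded force toward x_des, projected inside the constraint box."""
    f_nominal = stiffness * (x_des - x) - damping * dx   # impedance law
    return np.clip(f_nominal, -f_max, f_max)             # constraint projection
```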
Work Package 3
Safe shared control methods for mixed human-robot teams
Distributed control strategies for collaborative execution of complex tasks by mixed human-robot teams will be explored at sub-system (task) level (O2). These will build on local contextual awareness (WP1) and dynamic task execution strategies (WP2). New autonomous and distributed control algorithms will be investigated to create more natural human-robot interaction methods and more robust shared task control. Ad-hoc formation of mixed teams will explore collaboration and self-organisation protocols inspired by biological systems, combined with rapid contextual learning to adjust for individual group dynamics. Principles derived for the modularisation and motion-primitive-based assignment and coordination of complex tasks in human-robot teams will maximise adaptability. The objective is to achieve robustness to internal and external change.
Deliverables:
D3.1: Human object-manipulation detection method. Multi-modal methods (vision, force/torque, electromyography, electroencephalography, contact pressure) will be used to extract rich features from typical human interactions with objects and devices. Feature extraction techniques will be used to classify human operator intentions.
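A minimal illustration of the multi-modal feature extraction stage is sketched below, using simple sliding-window statistics per modality; the specific features and window length are assumptions, not the project's design.

```python
# Illustrative multi-modal feature extraction for intention classification:
# sliding-window statistics per modality, suitable as input to any
# off-the-shelf classifier. Feature choices are assumptions.
import numpy as np

def window_features(signals, win=50):
    """signals: dict modality -> (T, channels) array; returns one feature row."""
    feats = []
    for name in sorted(signals):                 # deterministic ordering
        w = signals[name][-win:]                 # most recent window
        feats.extend([w.mean(axis=0),            # per-channel mean
                      w.std(axis=0),             # per-channel variability
                      np.abs(np.diff(w, axis=0)).mean(axis=0)])  # mean change
    return np.concatenate(feats)
```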
D3.2: Contextual safety approach for mixed human-robot teams. An online safety analysis module will assess risk levels, captured offline, based on contextual observations of the mixed team in its workspace (WP1). A Hindsight Experience Replay (HER) architecture will be used to train a deep neural network to recognise risks based on simulated and physical observations.
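The HER relabelling idea can be sketched as follows: episodes are stored a second time with the achieved outcome substituted as the goal, so the network also learns from near misses. The goal/reward structure shown is an assumption for illustration.

```python
# Minimal sketch of Hindsight Experience Replay (HER) relabelling: store each
# episode again with the outcome actually reached substituted as the goal.
def her_relabel(episode):
    """episode: list of (state, action, goal, achieved) tuples."""
    final_achieved = episode[-1][3]              # outcome actually reached
    relabelled = []
    for state, action, goal, achieved in episode:
        # Reward is recomputed against the substituted (hindsight) goal.
        reward = 1.0 if achieved == final_achieved else 0.0
        relabelled.append((state, action, final_achieved, reward))
    return relabelled
```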
D3.3: Shared human-robot control strategies for collaborative task execution. Modulation of distributed impedance behaviours of robots working collaboratively based on human intention (T3.1) and contextual awareness of risk (T3.2). Contextual uncertainty characterisation will be used to increase the robustness of shared control as part of the individual robot GOCS (T2.3) and to address non-rigid component/person interactions.
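One possible form of the intended impedance modulation, sketched under the assumption that the contextual risk estimate from D3.2 is available as a scalar in [0, 1]:

```python
# Hypothetical sketch of risk-modulated impedance: each robot softens its
# stiffness as the contextual risk estimate rises, yielding more compliant
# behaviour near people or fragile parts.
import numpy as np

def modulate_stiffness(k_nominal, risk, k_min_frac=0.2):
    """Interpolate stiffness between nominal and a safe minimum as risk -> 1."""
    risk = np.clip(risk, 0.0, 1.0)
    return k_nominal * (k_min_frac + (1.0 - k_min_frac) * (1.0 - risk))
```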
Work Package 4
Distributed dynamic scheduling for responsive and resilient human-robot task allocation
Responsiveness at system level depends on effective coordination and collaboration between diverse actors in response to both planned and unplanned changes. Self-organisation methods combining agent-based approaches with genetic optimisation will be investigated to maximise both resilience and productivity. New meta-heuristics to maximise fast adaptation will be investigated. These will be combined with protocols inspired by social and biological systems to establish emergent resilient behaviour in highly complex, multi-actor systems. The Pareto fronts resulting from central multi-objective optimisation will provide the basis to drive optimality in distributed agent decision policies. To achieve a balance between responsiveness and productivity, the emergent properties of a wide array of distributed decision policies and agent Belief-Desire-Intention models will be explored.
Deliverables:
D4.1: IRaaS service-based systems architecture for distributed dynamic manufacturing system control. The latest cyber-physical systems architectures for Industry 4.0 will be explored in the context of RaaS for industrial robots. The architecture will balance embedded local intelligence against cloud-based reasoning to maximise responsiveness and resilience.
D4.2: Distributed self-organisation protocols for resilient human-robot systems. Establishment of baseline reference models for representative RAS scenarios (layouts, constraints, disturbances). Adaptation of an exponential priority aging policy (PAP) based hybrid self-organisation protocol between distributed service agents. Investigation of metrics for systematic benchmarking of responsiveness and resilience.
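A minimal sketch of an exponential priority aging policy of the kind to be adapted here: a task's effective priority grows exponentially with its waiting time, so no job starves under disturbance. The aging rate is a tunable assumption.

```python
# Sketch of an exponential priority-aging policy (PAP) for distributed
# service agents; the aging rate is illustrative.
import math

def effective_priority(base_priority, wait_time, rate=0.05):
    """Exponentially age a task's priority with its waiting time (seconds)."""
    return base_priority * math.exp(rate * wait_time)

def next_task(queue, now, rate=0.05):
    """Pick the waiting task with the highest aged priority."""
    return max(queue, key=lambda t: effective_priority(
        t["priority"], now - t["submitted"], rate))
```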
D4.3: Meta-heuristics for multi-objective scheduling in distributed responsive systems. Meta-heuristics based on operational and economic objectives will be explored. New mixed distributed and centralised decision policies will be developed that generate agent PAPs from multi-objective Pareto fronts.
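For illustration, the sketch below extracts the Pareto front (non-dominated set) from a set of minimised objective vectors; this is the kind of set a centralised optimiser could hand to distributed agents as candidate decision policies.

```python
# Minimal Pareto-front extraction over minimised objective tuples
# (e.g. (makespan, cost)): keep every point no other point dominates.
def pareto_front(points):
    """points: list of objective tuples; returns the non-dominated subset."""
    front = []
    for p in points:
        dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```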
Work Package 5
Demonstration in a factory-like environment
Three increasingly complex demonstrator case studies will be set up to systematically assess the responsiveness and resilience of the rapidly deployable mobile robot concept under internal and external disturbance scenarios informed by industry partners. Demonstrators will be fully implemented in ROS 2 and use a Data Distribution Service (DDS) framework for communication under real-time conditions. A digital twin for all use case scenarios will be created to support remote working, virtual testing and wider open-community distribution. A detailed technology readiness validation assessment will be carried out for each use case scenario with the partners.
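For concreteness, a minimal ROS 2 (rclpy) node of the kind each demonstrator component would expose is sketched below; the topic name and message type are placeholders, and the DDS transport and QoS come from the ROS 2 middleware configuration rather than from this code.

```python
# Minimal ROS 2 publisher node (rclpy); topic and message are placeholders.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class StatusPublisher(Node):
    def __init__(self):
        super().__init__('demo_status')
        self.pub = self.create_publisher(String, 'demo/status', 10)
        self.create_timer(0.5, self.tick)        # 2 Hz heartbeat

    def tick(self):
        msg = String()
        msg.data = 'ok'
        self.pub.publish(msg)                    # delivered over DDS

def main():
    rclpy.init()
    rclpy.spin(StatusPublisher())
    rclpy.shutdown()
```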
Deliverables:
D5.1: Validation scenarios and design of demonstrators. Co-creation of detailed cross-sectoral scenarios in workshops with partners.
D5.2: Use Case 1: Rapid deployment of robots (physical demonstrator). Alternating a single robot arm and a mobile robot being set up for a new task and/or in a new environment, to measure re-deployment time and the use of local perception for adaptive task execution.
D5.3: Use Case 2: Collaborative mixed human-robot task execution (scaled down physical demonstrator). Joint task execution (e.g. carrying or assembly of large, semi-rigid objects) with a team of mobile robots (instead of dedicated fixtures) with and without the guidance of a human operator within semi-structured environments.
D5.4: Use Case 3: Responsive multi human-robot systems planning and scheduling (hardware-in-the-loop simulation demonstrator). Dynamic task allocation between mobile robots and human actors under several expected and unexpected change scenarios.
D5.5: Responsiveness and resilience of IRaaS. Investigation of the economic, social, and technical impacts resulting from the IRaaS model for the wider UK manufacturing ecosystem.