The sixth of our 2024 AI Predictions Mini-Series touches on AI and the critical concept of human-in-the-loop. It marks the move away from the perception of AI as a replacement for human endeavors and instead envisions how humans are essential for the safe, effective development of AI and for maximizing the potential of humans and AI working together.
– In 2024, the concept of “Human in the Loop” (HITL) in AI systems will become more nuanced and clearly defined.
– AI developers and users will have a deeper understanding of humans’ critical role in fine-tuning AI models, ensuring ethical use, and handling complex edge cases.
– The synergy between AI and human expertise will lead to more responsible and effective AI applications across various domains.
Understanding AI + HITL: Collaboration and Augmentation
One of the overriding responses to generative AI has been the fear that AI will replace human roles. That initial reaction is now shifting to an understanding that humans are essential to the continued, beneficial development of AI, and to an anticipation of human + AI collaboration in which AI augments rather than replaces human activities.
The concept of human-in-the-loop (HITL) has two key facets: first, humans are essential for training, supervising, and testing AI output; second, humans will continually work side-by-side with AI to maximize the outcomes of this still-emerging technology.
Tuning and Testing
The development of effective machine learning models and AI systems should rely on the human-AI interaction of human-in-the-loop. Although the approach will differ for every AI project, the premise can include humans setting up the system, tuning and training the model, providing feedback on the AI's responses, refining parameters or adding restrictions and re-tuning, providing new data, and again reviewing outputs.
The result is a continuous feedback loop that teaches the algorithm and leads to improved, safer results.
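To make that loop concrete, here is a minimal Python sketch of a single tune-review-retune round. The `model` object, its `generate` and `fine_tune` methods, and the `reviewer` callback are illustrative assumptions, not a specific library's API.

```python
# Illustrative sketch of one pass of a human-in-the-loop tuning cycle.
# `model` and `reviewer` are hypothetical stand-ins, not a real library API.

def hitl_tuning_round(model, prompts, reviewer):
    """Generate outputs, collect human feedback, and re-tune the model."""
    corrections = []
    for prompt in prompts:
        draft = model.generate(prompt)           # AI drafts a response
        fixed = reviewer(prompt, draft)          # a human approves it (None) or returns a correction
        if fixed is not None:
            corrections.append((prompt, fixed))  # only corrected pairs go back into training
    if corrections:
        model.fine_tune(corrections)             # human feedback closes the loop
    return model
```

Run repeatedly, each round feeds human corrections back into the model, which is the continuous feedback loop described above.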
HITL is essential; if AI is too “self-sufficient,” there are substantial risks, including “model collapse” and:
- Falsification, misinterpretation, and lack of contextual understanding
- Inappropriate responses
- Inability to learn from feedback
- Laziness or failure to apply knowledge
- Bias, discrimination, and ethical concerns
Tuning and testing AI is essential: it makes systems smarter and more accurate and mitigates risk by addressing ethical considerations and bias.
Across the many applications of AI, there are sectors where problems or errors will cost more than bottom-line profit and where leaders remain skeptical. In these use cases, determining the level of HITL oversight is even more vital.
Adding HITL, even for basic applications of AI, can more safely speed up the deployment of AI for companies afraid of missing out but leery of leaping right in.
Working Together
McKinsey, in a recent podcast write-up, opens with:
“Humans in the loop: It’s the angst-ameliorating mantra for the new age of generative AI”
For the ground-level operations of a business, human employees are less likely to tune, train, and test models and more likely to use off-the-shelf systems to automate certain workflows or content creation. HITL here sees employees learning how to get the best out of AI models, identifying issues, and double-checking or interpreting AI output, as well as handling more complex or expert scenarios themselves.
AI is safer when a human always acts on, or signs off on, the system's output or recommendation, but advances in AI will raise the question of how much a human should remain in the loop.
The answer isn't simple, but as the capabilities of generative AI advance and become clearer, defining the HITL role should get easier. At a minimum, the level of HITL will depend on the complexity of the AI use case, the specialism or expertise of the augmented role, and the impact of an error.
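As a purely illustrative sketch (the scoring scale, thresholds, and review tiers below are assumptions, not an established standard), those three factors could be folded into a simple routing rule:

```python
# Hypothetical rule of thumb: map the three factors discussed above to a
# level of human review. Scores are assumed to lie in [0, 1]; the
# thresholds and tiers are arbitrary examples, not a recognized framework.

def review_level(task_complexity: float, expertise_required: float, error_impact: float) -> str:
    """Return a suggested level of human oversight for an AI-assisted task."""
    risk = max(task_complexity, expertise_required, error_impact)  # worst factor dominates
    if risk > 0.7:
        return "human approves every output before it is used"
    if risk > 0.4:
        return "human spot-checks a sample of outputs"
    return "human audits outputs periodically"

# Example: a high-impact, expert task warrants full human approval.
print(review_level(task_complexity=0.3, expertise_required=0.8, error_impact=0.9))
```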
Wharton School professor Lynn Wu, speaking to high-school graduates as part of the Wharton Global Youth Program’s Cross-Program Speaker Series, tells students:
“Having a human-machine collaboration is a new way to organize firm activities. That’s where you guys come in. You’ve got to figure out how we marry machines and humans in a new way. That is the future of our economy.”
Wu describes the case of DHL using AI to improve shipping efficiency and says that DHL found AI systems “never got it right entirely,” explaining:
“Humans always had to monitor what was going on because machines can’t solve many of the important edge cases – things on the edge, on the border, unusual events. The edge stuff matters a lot, and machine learning is not good at edge cases. Humans had to monitor that and teach AI about how the edge cases went wrong.
Through human-machine collaboration, DHL was able to significantly improve the efficiency of loading pallets onto their cargo planes and cargo trucks. Key to this process was a continuous feedback loop, where humans improved on something, AI learned from it, and then told humans what else was important.”
Wu says AI needs to be thought of as a human augmentation tool rather than a replacement or substitution tool.
Edge-case handling will also be a prevalent feature of AI in 2024 as developers look to improve models and humans learn to work with AI. Edge cases are data deviations, unusual and outlier scenarios, and other situations where human input and oversight are necessary; they can also feed the AI feedback loop to improve future performance.
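One way to picture this in code is a small router that escalates low-confidence cases to a human and queues the human's answers as future training data. This is only a sketch under assumed conditions: the confidence threshold, the queue structures, and the method names are illustrative inventions, not a production design.

```python
# Minimal sketch of edge-case routing: routine cases pass through, edge
# cases go to a human, and human answers are stored for later re-training.
# All names and the 0.8 threshold are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class EdgeCaseRouter:
    confidence_threshold: float = 0.8
    review_queue: list = field(default_factory=list)   # items awaiting human input
    feedback_data: list = field(default_factory=list)  # human answers for future re-training

    def handle(self, item, prediction, confidence):
        """Return the AI prediction for routine cases, or escalate edge cases."""
        if confidence >= self.confidence_threshold:
            return prediction                  # routine case: the AI handles it
        self.review_queue.append(item)         # edge case: hand it to a human
        return None

    def record_human_answer(self, item, answer):
        """Capture the human's resolution so it can feed the feedback loop."""
        self.feedback_data.append((item, answer))
```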
The fifth in our 2024 AI Predictions Mini-Series: AI to Disrupt at Least 30% of Customer Support Norms and CustomGPT for Customer Support: The Next-Level Consumer Experience both discuss AI’s potential to augment roles rather than replace them.
In 2024, the HITL paradigm will not only refine AI’s performance but will also foster trust in AI systems, as people are reassured by the continuous involvement of human judgment.