If AI is best used to augment human expertise, as posed in the first article of this series, then organizations need to map how information flows through and shapes decisions across the organization. We must align values and incorporate the diverse perspectives that inform complex decisions, demonstrate trustworthiness, and show that AI solutions serve the public’s best interest, benefiting the individual, the group, and society at large.
It's time to make the case for shifting human-machine interactions from desirable consumer engagement to human-expertise augmentation that benefits society.
Not an easy feat, but imagine if leaders had a dynamic system map to navigate complex decisions during uncertain times, like a ship captain steering through a perfect storm with a compass. We might discover new system archetypes and redefine processes to be more sustainable. We might highlight misaligned values, inefficient processes, and ineffective decision-making. We might augment front-line workers in healthcare with super-tools that build expertise, enabling exceptional care with less effort.
Our purpose with this article is to offer a new perspective on how we might think about and operationalize AI. We begin this journey by examining the definition of Good and contemplating a fundamental paradigm shift in how we design and build AI to be beneficial, responsible, and sustainable.
Good (Noun): That which exemplifies value and contributes to well-being in pursuit of ideal, worthy, and moral purpose.
As an agency with deep expertise in experience strategy and design, behavior change, systems thinking, and human factors research, and with a broad perspective on the development of human-centered AI applications, we offer our vision of what Good AI could look like.
What is Good AI?
At an operational level, we posit that Good AI could become a process-oriented approach to delivering responsible, trustworthy, beneficial AI solutions. A human-centered approach to augment Digital Transformation strategies and Agile development methodologies. A method for business strategies and behavioral interventions to emerge from the practice of participatory design and continuous learning. An application of systems thinking, humanity-centered principles, and innovative technologies for collective benefit.
We define Good AI as the intersection of the following value drivers:
Beneficial experiences that are inclusive and meet people where they are
People consent and opt into experiences that inspire behavior change and empower human decision-making. Interactions encourage people to pay attention to their own biases, affinities, and risks, and to consider unintended consequences when opting into an AI-driven system. For example, when opting into a socio-technical system like social media, the individual understands that targeted marketing will drive their personalized experience, and the societal benefit (if any) is clear to the individual opting in.
Responsible innovation that protects humanity
We should expand on technology’s responsibility to do no harm: technology that protects people’s privacy, is fair, mitigates bias, is trustworthy, is held accountable, and is explainable by design. Inherently, we understand that social structures help define the context of proper, capable use in order to protect humans from misuse. For example, you wouldn’t give a kitchen knife to a child without proper instruction; it can be a dangerous tool, so building proper skills is a prerequisite for responsible innovation.
Sustainable growth through cooperation
Business strategies and operational protocols create value from processes and relationships rather than from competitive solutions for consumption. Institutional frameworks should support diverse perspectives; when we gain a deeper understanding of the mechanisms of inclusive design, we remove barriers to equitable use. For example, businesses should build long-term incentive structures that strengthen a collective purpose and shift value from counting “things” to mapping connections.
Good AI is a paradigm shift and a multidisciplinary approach.
There are several challenges to applying AI in complex situations and framing systems that lead to Good decision-making. How do we make sense of a complex system, so that we can confidently pull the right levers to generate intentional change with minimal unintended consequences? Answers to this challenge have led to predictive analytics and decision engines that recognize patterns in data, such as executing a trade based on an automated signal, or diagnosing a disease early from a photo.
The new challenge, however, is the transparency of value alignment and the ethics of decision-making at scale. How do organizations determine “whose best interest” and “whose values” drive the delivery of AI technologies? Good AI’s goal is to visualize system dynamics, making value drivers, decision points, tool interactions, data flows, and system boundaries transparent, so we can understand how transforming the system aligns with collective values. Like an active trader monitoring the market through a Bloomberg terminal, a decision-maker needs their dynamic system visualized in order to adjust the right levers in real time, with higher confidence.
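To make that idea concrete, here is a minimal sketch of how such a system map might be represented: a small directed graph whose nodes are value drivers, decision points, tools, and data stores. The structure and every name in it are our own illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch of a dynamic system map as a directed graph.
# All node names and kinds are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str  # "value_driver", "decision_point", "tool", or "data_store"

@dataclass
class SystemMap:
    nodes: dict[str, Node] = field(default_factory=dict)
    edges: list[tuple[str, str, str]] = field(default_factory=list)  # (src, dst, label)

    def add(self, node: Node) -> None:
        self.nodes[node.name] = node

    def connect(self, src: str, dst: str, label: str = "informs") -> None:
        self.edges.append((src, dst, label))

    def decisions_influenced_by(self, value_driver: str) -> list[str]:
        """Trace which decision points a value driver reaches, one hop out."""
        return [dst for src, dst, _ in self.edges
                if src == value_driver and self.nodes[dst].kind == "decision_point"]

# Usage: map a small slice of a care-delivery system and ask what a value touches.
m = SystemMap()
m.add(Node("patient wellbeing", "value_driver"))
m.add(Node("triage priority", "decision_point"))
m.add(Node("intake records", "data_store"))
m.connect("patient wellbeing", "triage priority")
m.connect("intake records", "triage priority", "feeds")
print(m.decisions_influenced_by("patient wellbeing"))  # ['triage priority']
```

Even a toy map like this makes the conversation concrete: the team can ask which decision points a value actually reaches, and where data flows in without a value attached.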
Operationalizing Good AI
A multidisciplinary team with objectives beyond engagement for consumption is the first requirement for Good AI. Delivering Good AI means designing learning environments for skill-building experiences before focusing on adoption and engagement. Organizations need to source a combination of experts in ethics, social science, systems thinking, human factors, psychology, and systems engineering to augment the technical team.
To operationalize Good AI, we reframe the role of user as Partner: an individual or group who participates in an AI system. We also introduce a new role, the Innovation/AI Coach, whom we define as a co-creator of socio-technical systems and a champion for societal benefit. The AI coach is an evolution of the Product Owner, Business Analyst, Designer, or Developer role. They become responsible for the following (a brief code sketch of the second responsibility follows the list):
Mapping social system dynamics to cover a diversity of use scenarios
Translating human values and needs into AI requirements
Mitigating the risk of bias through governance
Explaining model outputs so that their implications are easy to understand and act on
Designing interactions that make it easy for participants to control model parameters
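As one illustration of translating human values and needs into AI requirements, here is a hedged sketch in which each stated value becomes a measurable requirement with a threshold the team can test against before activation. The metric names and thresholds are hypothetical, chosen only to show the shape of the practice.

```python
# A sketch of "values -> testable AI requirements"; metrics and
# thresholds below are illustrative assumptions, not standards.
from dataclasses import dataclass

@dataclass
class Requirement:
    value: str        # the human value a Partner expressed
    metric: str       # how the team agreed to measure it
    threshold: float  # the level the model must meet before activation

def unmet(requirements: list[Requirement], measured: dict[str, float]) -> list[Requirement]:
    """Return requirements whose measured metric falls below its threshold."""
    return [r for r in requirements if measured.get(r.metric, 0.0) < r.threshold]

reqs = [
    Requirement("fairness", "min_subgroup_recall", 0.85),
    Requirement("transparency", "pct_outputs_with_explanation", 0.99),
]
measured = {"min_subgroup_recall": 0.81, "pct_outputs_with_explanation": 1.0}
for r in unmet(reqs, measured):
    print(f"Not ready to activate: {r.value} ({r.metric} < {r.threshold})")
```

The point is not the specific metrics; it is that each value a Partner names gets an agreed, inspectable test rather than living only in a slide deck.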
3 Archetypes for Good AI
Three phases show how the values of Beneficial experiences, Responsible innovation, and Sustainable growth drive the operation of Good AI within a socio-technical system.
1: Discover-Anticipate-Define
The Discover-Anticipate-Define phase gathers values and needs, incorporates future narratives as model inputs, and makes trade-off strategies transparent. The illustration shows interactions between an AI coach and an individual with a specific objective or goal. AI is represented as data to be collected and transformed, patterns and structures from the data that define models, and trade-off strategy visualizations for generating hypotheses.
We apply a design-thinking lens by expanding the Discover+Define phase of the double diamond to introduce Anticipate, a sense-making decision point for gathering socio-technical design requirements. As a form of due diligence and discovery research, this step applies anticipatory design methods to visualize the path from current needs to a future vision, adds a diversity-informed lens, and defines outcomes to avoid. We refocus questions on what is desirable and what reduces friction, and dig into beliefs and decision-making mental models. We then generate a visualization that can be synthesized into data narratives as input to AI models.
This Discover-Anticipate-Define (DAD) archetype integrates the “Design for Values” paradigm to assess data for impact using analogies. Intended uses and unintended consequences are considered through participatory co-design methods, systems-thinking and behavior-change tools are leveraged, and behaviors are mapped to outcomes and goals so that beneficial experiences can emerge.
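What might one of those synthesized data narratives look like as a model input? Here is a rough sketch under our own assumed structure; none of these field names come from a standard, and a real pipeline would be far richer.

```python
# A sketch of a discovery-research "data narrative": desired outcomes
# become targets, outcomes to avoid become constraints. Hypothetical schema.
from dataclasses import dataclass, field

@dataclass
class DataNarrative:
    current_need: str
    future_vision: str
    desired_outcomes: list[str] = field(default_factory=list)
    outcomes_to_avoid: list[str] = field(default_factory=list)

    def as_model_inputs(self) -> dict:
        """Flatten the narrative into labeled targets and constraints."""
        return {
            "targets": {o: +1 for o in self.desired_outcomes},
            "constraints": {o: -1 for o in self.outcomes_to_avoid},
        }

narrative = DataNarrative(
    current_need="reduce clinician documentation time",
    future_vision="clinicians spend visits with patients, not screens",
    desired_outcomes=["less after-hours charting"],
    outcomes_to_avoid=["notes that clinicians stop reviewing"],
)
print(narrative.as_model_inputs())
```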
2: Learn-Activate-Monitor
The Learn-Activate-Monitor steps optimize accuracy and personalize experiences to continually build trust once activated in the wild. After activation, continuous monitoring ensues, helping the individual navigate the journey.
The Learn part of Learn-Activate-Monitor, similar to a prototyping phase during product development, creates a safe space to fail. Just like a toddler learning to walk, our AI system needs to learn to walk with the help of humans. During participatory design workshops, individuals improvise their way through simulated journeys, learning how to tweak attributes and consider alternative approaches. And just as we don’t throw a toddler into the deep end of a pool to learn to swim, learning happens together, within the community. Like a child raised by a village, the model is exposed to diverse scenarios to improve its resilience in the wild.
When deciding to Activate at scale, AI coaches assess whether conditions are right and check confidence levels. Deciding when to activate is like waiting for traffic to subside before driving to a new location. To support deployment at scale, does the organization need to learn to drive first, or is it capable of jumping in and navigating to the new location on its own? Here, organizational maturity models help identify strategies for inclusive participation within a closed socio-technical system.
Opting to Monitor through a social contract, Partners navigate the socio-technical system, learning within guardrails and adjusting to ambient monitoring. Here, AI and humans learn from each other, personalizing fit through the feedback loops described in the next section. In the background, AI coaches steward deployment, triage interventions, and monitor for bias, like air traffic controllers. System-monitoring dashboards help identify alignment issues and unintended consequences for model improvement, while validation research with an inclusive lens checks for value alignment and estimates societal benefit.
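As an illustration, a minimal Monitor step might compare the latest metric snapshot against agreed guardrails and flag anything out of bounds for AI-coach review. The metrics and limits below are assumptions for the sketch, not recommended values.

```python
# A sketch of a guardrail check behind a monitoring dashboard.
# Metric names and limits are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Alert:
    metric: str
    value: float
    limit: float

GUARDRAILS = {                        # metric -> minimum acceptable value
    "accuracy": 0.90,
    "min_subgroup_recall": 0.85,      # a simple bias check across groups
    "opt_in_rate": 0.50,              # are Partners still consenting?
}

def monitor(snapshot: dict[str, float]) -> list[Alert]:
    """Compare the latest metric snapshot against the guardrails."""
    return [Alert(m, snapshot.get(m, 0.0), lim)
            for m, lim in GUARDRAILS.items()
            if snapshot.get(m, 0.0) < lim]

alerts = monitor({"accuracy": 0.93, "min_subgroup_recall": 0.79, "opt_in_rate": 0.60})
for a in alerts:
    print(f"Flag for AI-coach review: {a.metric}={a.value} (guardrail {a.limit})")
```

Note that consent shows up here as a first-class health metric alongside accuracy: if Partners stop opting in, the system is failing even when the model is performing well.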
3: Human-in-the-Loop
When a threshold is crossed, a signal triggers a human-in-the-loop intervention. In a way, the system pauses for fine-tuning, like stopping to recharge an electric car. The human is in control of pausing the system, and only reactivates it when confident in the tune-up.
We can’t anticipate everything that will happen, or guarantee that AI, even Good AI, will behave as expected once deployed. Models drift over time, and in this phase an AI coach may intervene, tweak parameters, and recalibrate, the same way a car needs a tune-up once in a while. The AI coach works to understand what Partners need in order to be comfortable. For example, does the new owner of a self-driving car need instructions on adjusting the controls, or do they first need to unlearn previous skills before controlling a self-driving car?
This fine-tuning experience is a mutual-learning feedback loop designed to build human capabilities. This symbiotic relationship is created through knowledge sharing between AI and humans for mutual benefit. This form of personalization adapts with Partners over time, through informed consent interactions.
The Human-in-the-Loop phase creates space to reflect on the journey and pivot if necessary. The feedback loop embeds human expertise into decision-making systems, encouraging humans and AI to learn from each other.
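A simplified sketch of this pause-and-reactivate pattern is below; the two-state machine, the drift signal, and the threshold are our own illustrative assumptions about how the trigger might be wired.

```python
# A sketch of the human-in-the-loop control described above: a crossed
# threshold pauses the system, and only an explicit human decision,
# made with confidence in the tune-up, reactivates it.
from enum import Enum

class State(Enum):
    ACTIVE = "active"
    PAUSED_FOR_REVIEW = "paused_for_review"

class HumanInTheLoopController:
    def __init__(self, drift_limit: float):
        self.state = State.ACTIVE
        self.drift_limit = drift_limit

    def observe(self, drift: float) -> None:
        """A crossed threshold triggers the pause for fine-tuning."""
        if self.state is State.ACTIVE and drift > self.drift_limit:
            self.state = State.PAUSED_FOR_REVIEW

    def resume(self, coach_confident: bool) -> None:
        """Only a human, confident in the tune-up, reactivates the system."""
        if self.state is State.PAUSED_FOR_REVIEW and coach_confident:
            self.state = State.ACTIVE

ctrl = HumanInTheLoopController(drift_limit=0.2)
ctrl.observe(drift=0.35)             # signal crosses the threshold
print(ctrl.state)                    # State.PAUSED_FOR_REVIEW
ctrl.resume(coach_confident=True)    # human reactivates after recalibration
print(ctrl.state)                    # State.ACTIVE
```

The design choice worth noticing is that reactivation is never automatic: the transition back to active requires a human decision, which is what keeps the loop human-in-the-loop rather than human-on-the-sidelines.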
Operationally, this self-sustaining system continues to learn and adapt to needs over time. We envision a state where users become partners and those who implement become co-creators of dynamic socio-technical systems for exploration, learning, and building personalized experiences.
As a strategic design agency, we believe in translating critical values into tangible outputs, and AI is no different. We hope that by framing AI as super-tools that augment human decision-making and upskill human capability, leaders and decision-makers can better understand how best to leverage AI for increasingly beneficial experiences that are within the control of humans.
In this insight series and in the culminating webinar: Designing Responsible AI Solutions, we explore design principles and experience strategies for implementing responsible AI technologies and what organizations need to do to create beneficial solutions that are good for people and good for business.