The Future of User Experience
In her article “Design x Futures = Design Futures?” design strategist and researcher Corina Angheloiu says: “The process of imagining the future is an active, values-laden social practice, which requires a layered approach to surface and challenge dominant patterns in our mental models.”
As UX professionals, we often preach to others about users’ mental models. According to Angheloiu, now is the time to challenge our own mental models of ourselves and our work. With the very definition of User Experience shifting and perhaps narrowing to make room for Customer Experience—a broader discipline that considers interactions that cross screen-based and real-world channels, products, services, and teams in accomplishing a single task—we are simultaneously realizing the long-promised future of the Internet of Things (IoT). There are so many more things to design than Web sites in a world where our watches, cars, doorbells, and thermostats are connected to the Internet. We need to shift our mental models to make room for these new types of interactions. Many digital interactions will be contactless in the future—that is, they will not require using a touchscreen, but will instead rely on our voices, faces, and other biometrics. Plus, all of these interactions will likely be powered by AI and machine-learning algorithms. This means that big data sets—which many of us might not yet fully understand—will govern the products we design in the future. So, if AI plus data analytics is the bandwagon we need to jump on, how should we engage?
Designing apps that are powered by AI, monitoring and assistive devices and wearables, and augmented-reality and virtual-reality (AR/VR) experiences will be new for many of us. But I’m confident that most of us can figure these things out. Since most of us fall into one of two categories—recent graduates or experienced UX professionals—we either have some recent academic exposure to these new technologies or enough work experience to adapt our process to them, just as we’ve adapted to other new technologies in the past. This isn’t the first time we’ve had to apply UX methodologies to new media. For instance, seasoned UX professionals can remember the time before the Web, and mid-career professionals like me can remember the time before mobile apps. In fact, I recall the first time a prospective client asked me to design for mobile. They asked me how many mobile sites or apps I had designed and whether they could see some samples. I blurted out something like, “None yet, but the process is the same.” I didn’t get that job, but I landed plenty of others and eventually became an expert in mobile design because I was right: the process is the same no matter what you’re designing. Collectively, we’ve weathered these shifts in our career expectations, and now, according to the NN/g survey, 76% of us are designing for mobile. We’ll manage the next shift as well.
There is an important role for UX designers and researchers to play in designing for AI. The tricky part—the part for which formal education, a community of peers, and mentoring would likely be essential for most designers—is twofold: understanding the data analysis and analytics behind AI algorithms and understanding the impacts of designing with big data. The first issue is very tangible, while the second can feel nebulous. However, formal curricula that are dedicated to producing well-rounded humans who research and design on behalf of other humans can and should address designing for both AI and big data.
Formal Education in Data Analysis and Ethics
If you seek them out, you can find many undergraduate, graduate, postgraduate, and certificate programs that teach the basics of data analytics and ethics, as well as AI and machine learning—the methods by which machines turn big data sets into patterns. If you’re still in school, yes, you should take at least one of these courses.
If you’re a working UX professional and are interested in a self-paced program, check out Coursera, edX, Lynda.com, or Udemy for courses on the fundamentals of AI and machine learning. CodeSpaces has created a list of the “Top 10 Artificial Intelligence Courses, Certifications, & Classes Online [2021].” CIO.com goes one step further with its list of “The Top 11 Big Data and Data Analytics Certifications.” If you’re interested in a formal, post-baccalaureate degree program, look no further than your local Google search. Social media serves me ads daily for these and other programs that are being offered everywhere—from my state college system to various private colleges’ continuing-education and executive-education departments around the country.
I can’t say which of these programs is the best. You should choose a program based on your own interests and the time and financial investment each would require. But be sure you choose one that explains how big data sets are gathered and analyzed. Because most UX professionals conduct qualitative research, we might not be as familiar with quantitative techniques and their pitfalls—for example, the dangers of bots filling in survey questions, or the question of how to handle missing data: skipping incomplete records versus imputing, or filling in, the missing values. We should also understand the difference between custom data—commissioned data sets that capture information relevant to a specific product—and synthetic data—data generated from demographic assumptions that might be faulty or inherently biased—and why the former are better than the latter. It’s also imperative to understand the dangers of data sets that are purchased from tech giants such as Amazon, Google, or Microsoft.
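To make that missing-data tradeoff concrete, here is a minimal sketch in Python, using pandas and a wholly hypothetical survey data set:

```python
import pandas as pd

# A tiny, hypothetical survey data set with gaps in the responses.
responses = pd.DataFrame({
    "participant": ["p1", "p2", "p3", "p4"],
    "age": [34, None, 29, 41],
    "satisfaction": [4, 5, None, 3],
})

# Option 1: skip any record with missing values (listwise deletion).
# The sample shrinks and can become biased if data are not missing at random.
complete_only = responses.dropna()

# Option 2: impute, filling each gap with that column's median.
# The sample size is preserved, but an assumption gets baked into the data.
imputed = responses.fillna(responses[["age", "satisfaction"]].median())

print(complete_only)
print(imputed)
```

Neither option is neutral: dropping records shrinks and can skew the sample, while imputation preserves sample size at the cost of an assumption about the people who didn’t answer. Knowing which choice a data set embodies is exactly the kind of literacy these programs should teach.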
Once you get a handle on the technical aspects of big data, it’s vital that you read at least one chapter of a book or take at least one lesson on data ethics. My deeply held personal and professional opinion: it’s time for the practice of User Experience to take responsibility for the impact of our design work and lead the charge on ethics—including issues relating to both sustainability and diversity, equity, and inclusion (DEI). (For the purposes of this article, I’ll stick to ethics and UX education. Watch for my column on UXmatters, starting later in 2021, for more on DEI and sustainability.) Ethics in AI is an emerging topic of concern, as well as a job role in itself. We should not just wade into this dilemma, as Harvard Business Review refers to it; we should be leading the charge. Writing for HBR, Andrew Burt says, “Every AI principle an organization adopts … should also have clear metrics that can be measured and monitored by engineers, data scientists, and legal personnel.” Just as, according to the NN/g survey, many of us still wish we had coding skills, we should now wish we knew more about what happens under the hood of AI and big data so we can make smart research, strategy, and design recommendations and be partners in decision-making about measuring AI’s impact.
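Burt’s call for measurable metrics need not stay abstract. As one illustration (my own, not drawn from the HBR article), here is a short Python sketch of a single, narrow fairness check, the demographic-parity gap, computed over hypothetical loan-approval predictions:

```python
from typing import Sequence

def demographic_parity_gap(predictions: Sequence[int],
                           groups: Sequence[str],
                           group_a: str,
                           group_b: str) -> float:
    """Return the difference in positive-prediction rates between two groups."""
    def positive_rate(group: str) -> float:
        # Keep only the predictions for members of this group.
        group_preds = [p for p, g in zip(predictions, groups) if g == group]
        return sum(group_preds) / len(group_preds) if group_preds else 0.0
    return positive_rate(group_a) - positive_rate(group_b)

# Hypothetical binary loan-approval predictions: 1 = approved, 0 = denied.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps  = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_gap(preds, grps, "a", "b"))  # 0.75 - 0.25 = 0.5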
This is not just my opinion—colleges across the US are developing programs that focus on AI and ethics at a rapid pace. Selecting the right program among these new offerings could be a bit fraught—if you consider the ethical implications of your education in AI and ethics.
Massachusetts Institute of Technology (MIT) was one of the first to announce the founding of its College of Computing, with a mandate to focus on AI and machine learning. Stephen A. Schwarzman, the CEO and cofounder of Blackstone, a private-equity firm, endowed MIT’s new College of Computing, which will bear his name. Private-equity firms are notorious for dismantling the companies they claim to be saving. Let’s hope that Blackstone does not feed the AI algorithms that come out of MIT back into the company’s already dubious business processes—perhaps perpetuating existing inequities.
More recently, David Greene, the president of Colby College—a small liberal-arts college in rural Maine—announced the founding of an AI institute whose stated goal is teaching AI in the context of subjects such as “history, gender studies, and biology.” Andrew Davis, president of Davis Selected Advisors, an investment-management company, endowed the institute, which will bear his name. In a conversation with Marketplace’s Molly Wood, Greene said this about AI:
“I think that we need to have a whole cohort of students from different backgrounds and experiences who are really leading AI and not being led by it. So one of the beauties of people who are trained in the liberal arts is that they really understand how to come at a problem from multiple, different angles. They understand history in context. They understand how things play out over time, and not just the near-term impact of something, but what happens over a longer period. How do you look at that impact and understand and predict what might happen if you actually make this decision versus that decision? And right now, because things are so narrow, we’re missing much of that. And I think the more that we have people who are coming from liberal arts backgrounds, who are really raising the kind of questions that will ultimately shape AI in more positive ways, the better off we’ll be.”
I hope Colby achieves Greene’s goal and that the learnings from this institute won’t be used to power what Cathy O’Neil calls “Weapons of Math Destruction.” After all, in O’Neil’s discussion of algorithmic discrimination, Wall Street specifically and financial services generally are among the biggest offenders.
Indiana University is kicking off a pilot program this summer that starts even earlier in the education funnel. The program, AI Goes Rural, targets middle-school students in Indiana. Tina Closser, the university’s science, technology, engineering, and math (STEM) program coordinator, said the program will incorporate ethics into its curriculum: “There’s a lot out there about the technical side of AI and how it works, but we will be talking about what AI does and how it affects [students’] lives.” One of the goals of the program is to create a STEM pipeline to the Department of Defense, which is funding the program via the Naval Surface Warfare Center. As a parent, I wonder whether parents can opt out of this program on behalf of their kids. I also wonder whether the Trolley Problem should be a mandatory thought exercise in every middle school, high school, and college, starting yesterday.
Someone Has to Advocate for Ethics
Someone has to take on the role of advocating for ethical data analytics and AI. I believe that UX professionals are best positioned to fill this role. We should be challenging both other people’s mental models of our work and our own values. After all, if we are the voice of the user, we should be speaking up for the users we hear from in surveys and user interviews, the users we see during observational studies and usability testing, and the users we cannot see in the big data sets that are driving the algorithms and powering the applications we design. As David Greene has said, we should be leading AI, not being led by it.