November intentions

Curling, ambiguous knowledge work, and authenticity

Y’all, it’s curling season again, and I am just beside myself. After a 19-month pandemic hiatus, I’m curling in two leagues and having the best time. I love the same things I’ve always loved: the challenge of throwing rocks at just the right angle and speed; the immediate feedback when the rock comes to rest; team discussion of tactics and plan Bs for when things don’t go as planned, which is often.

But it feels so much more momentous to be curling this year. As an activity, it’s not without its covid risks — it is, after all, indoor exercise, and the virus seems to especially love the cold. But everyone in the building is double-vaxxed, and with the new covid rules allowing just one sweeper, it’s not a sport of close contact. For me, the risk feels manageable and worth it, because curling is something I do for me, just because it brings me joy. There hasn’t been very much of that since March of 2020, and now it’s built into every week. I’m so appreciative of those few hours of pure fun, both for their own sake, and as a harbinger of normalcy.


Collaborating on Ambiguous Knowledge Work

Are you the sort of man who would put the poison into his own goblet or his enemy's? Now, a clever man would put the poison into his own goblet, because he would know that only a great fool would reach for what he was given. I am not a great fool, so I can clearly not choose the wine in front of you. But you must have known I was not a great fool, you would have counted on it, so I can clearly not choose the wine in front of me.

~Vizzini, demonstrating dizzying recursive self-belief in The Princess Bride

Since the pandemic began, my experience with virtual collaboration has been decidedly mixed. It’s fine when the output can be precisely specified in advance — I try to deliberately over-communicate, and it seems to work out. The challenge comes with collaborating on something I’ll call Ambiguous Knowledge Work (AKW) — the messy job of synthesizing research, finding a creative solution to a problem, or designing a new program. The ambiguity in AKW is not in what we’re making (it’s usually a PowerPoint deck, let’s be real), but in figuring out how to make abstract ideas relevant and resonant. At the outset, we can’t really describe what good looks like; there are many possible good outputs, but many bad ones too. Somehow, we need to get everyone working together to build the same thing, even when we don’t know what that thing is.

And sometimes, remote collaboration on AKW is great! We are using different tools, but the difference between remote and in-person is imperceptible. And then there are the Other Times, when AKW feels so much harder than doing the same tasks in-person. Specifically, it’s the collaboration that breaks down. We are ostensibly working together, but it’s not multiple people contributing ideas and building on each others’ work. Instead, it devolves into one person driving, and everyone else relegated to ‘helping.’ It’s bad for everyone: the driver is over-burdened and holding too many things in their head, while the helpers are frustrated to not be contributing to the fullest of their capabilities.

This observation led me to the empirical work of psychologists Felix Warneken and Michael Tomasello1, where they identified the phenomenon of shared intentions as being a distinctly human ability. Shared intention “creates a shared space of common psychological ground that enables everything from collaborative activities with shared goals to human-style cooperative communication.” Warneken and Tomasello conducted experiments with toddlers and chimpanzees, to see how they would behave on cooperative tasks and games with adults. They found that toddlers are able to form shared, cooperative intentions — they don’t just help adults, but attempt to re-engage adults even when the adults stop pursuing the goals. By contrast, chimpanzees would help adults with a more limited set of goals (for instance, reaching an object), but didn’t display any ongoing joint commitment if the adult stopped engaging. The toddlers formed shared intentions, while the chimpanzees formed individualistic intentions.

Recursive Social Belief

What’s really interesting about shared intentionality is that it requires recursive social belief2. That is, not only do I need to believe that we have the same intention, I also need to believe that you believe that we both have the same intention. Knowing what our collaborators are paying attention to is critical to getting beyond helping on someone else’s goal and actually forming a shared intention.

In a typical real-life collaboration on AKW, we might stand around a whiteboard as a team. We can see everything written on the whiteboard, but we can also tell where our teammates are focusing their attention (and vice versa). If you switch your gaze to the other side of the board, you are conveying valuable information about your own thought processes, and you also know that I am receiving that information. That recursive knowledge of other people’s focal points is doing really important work of keeping us aligned, and keeping our intention truly shared.

Most of the academic work on shared intentions is focused on how it develops in early childhood, which is fascinating but not so helpful for adults in a work context. More applicable was a study of shared intentions among pairs of computer programmers collaborating on AKW3, undertaken by Josh Tenenberg, Wolff-Michael Roth, and David Socha. They found that in-person programmers are “continuously doing alignment.” Each programmer monitors his partner’s work, but also monitors his partner’s monitoring and thereby achieves recursive social belief. They also found that co-located programmers are nevertheless able to work in silence for stretches — the actions they take on the screen do the work of communication and turn-taking4.

By contrast, when pair programmers work remotely, they don’t work in silence, but rather narrate their actions. (“I’m going to copy this line over here…”). We can look at this narration as both a recognition of and compensation for the loss of recursive social belief. The problem with narration as a solution is that it’s unidirectional: the listeners know where the narrator is focusing attention, but the narrator is not receiving any information about the listeners’ attention. In fact, you could imagine the narrator getting a false sense of recursive social belief: they assume their teammates are focusing on the things they’re talking about, even when it’s not the case. That doesn’t require the listeners to be distracted or multi-tasking. They could simply be paying attention to different lines of code on the screen, and thereby forming a subtly different intention.

In the specific context of pair programming (even remotely), any lack of alignment is quickly identified and repaired, because all of the work is done synchronously on a shared screen. But for most remote collaborative AKW, we don’t sit side-by-side with a shared work product. Typically, we meet virtually to align, go our separate ways to do individual work, and then reconvene—only to discover that whoops! We weren’t as aligned as we thought. We didn’t maintain shared intentionality, and we’re back to someone driving an individual intention with teammates helping.

The Right Tools for Ambiguous Knowledge Work

One implication here is that doing AKW is a good reason to choose in-person meetings over remote. You should continue to collaborate in-person until most of the ambiguity has been resolved, and you can go back to working remotely on well-specified tasks.5 But if AKW must be done remotely, then we should change the default tools we use. Based on the pair programming research, I’ve formulated some principles I’m planning to follow the next time I’m collaborating virtually on AKW:

  1. Name the problem: discuss the necessity of recursive social belief and the challenge of knowing where everyone’s attention is focused, so we all know what we’re grappling with.

  2. Stop using video conferencing platforms for AKW. Video-based tools can’t create recursive social belief because you never know where other people are focusing their attention. The built-in screen-sharing functions are also unhelpful because they’re unidirectional, so you only get information about the presenter’s intention. Instead, use audio-only calls, and collaboration tools that let everyone edit simultaneously.

  3. Use a collaboration tool with telepointers. The telepointers are especially important because they let you see the location of your collaborators’ cursors while you are all working in the same file. It’s an imperfect proxy for attention, but it’s a big improvement on having no indication at all.

  4. Do more AKW synchronously in those collaboration tools. I’ve avoided this in the past because watching someone type is a poor spectator sport. I now think it was misguided to frame it as watching — it is turn-taking. Everyone contributes by typing their ideas in real-time, not unlike how everyone has a marker if we’re doing AKW in the same room. It’s a shift from using technology to talk about the work, to using technology to actually do the work.

  5. Shorten the time between meetings, when we’re not working synchronously. I often default to checking in every two days or so, but that seems like an impossibly long time compared to the continuous alignment that pair programmers do. I’m excited by features like Slack’s huddle, which make it easier to do that continuous alignment work without formal scheduling.

  6. Invest in screen real estate. One challenge I have with online collaboration tools is that it’s hard to get the resolution right. In real life, it’s easy to stand so that you can read the whiteboard and see where others are focusing their attention. Once you move to digital, if you’re zoomed out enough to see where everyone’s telepointers are, you often can’t read the text. This is especially true on a laptop screen, so I’ll be purchasing a 27” HD monitor in Black Friday sales.

I’m really curious to see how my next remote AKW collaboration unfolds, with the benefit of these principles. If you try some of them, or have ones to add, please share!


What I’m Working On

Last month I participated in a virtual panel discussion at RBC on the topic of “Leading with Authenticity.” One big theme of the panel was how much authenticity relies on knowing yourself and your values, so that you can bring them to bear no matter the situation. Meanwhile, I’ve begun diving deeper into self-determination theory (SDT) by reading Deci and Ryan’s 2017 review of the framework. (It’s really good, but it’s also 650 pages of academic writing. With luck, I will finish by Christmas.)

One of the questions I was asked on the panel was how organizations and leaders can support authenticity. I wasn’t far enough into this book to use its constructs and language in my answer, but I think I’d sharpen my response thusly: To support authenticity in the workplace, leaders need to help people integrate and internalize the organization’s demands into their own beliefs and sense of self. The more people can do that integration and internalization, the greater their feelings of authenticity and autonomy. And yes, that’s great for employee well-being, but SDT posits it also helps employees be more persistent, strive for higher quality, and achieve more effective results6. In other words, authenticity is good for business.


November is very much not my favourite month. In my mind, it is eternally grey and drizzly, without any redeeming features like snow or holidays. But yesterday we had a gloriously warm and sunny day, such that it was impossible to feel glum7. Even if the weather doesn’t cooperate where you are, I wish you all similar moments of rejuvenation and optimism over the month ahead.

In comradeship,

S.

1

Warneken and Tomasello have published widely on the topic, with a number of different co-authors. The three I read were: Altruistic Helping in Human Infants and Young Chimpanzees, Helping and Cooperation at 14 Months of Age, and Shared Intentionality.

2

This coinage is from Tenenberg, Roth, and Socha’s paper, “From I-Awareness to We-Awareness in CSCW.”

3

In pair programming, each individual has their own keyboard and mouse, but they share a single screen, so they are both working on the same document. In the Tenenberg et al. study, the researchers conducted ethnography at a software company where different duos would pair up for tasks for about two hours at a time.

4

This is called a perceptual gestalt — we are able to collaborate through actions alone, and the collaboration literally ‘goes without saying.’

5

This is one reason I disavow the work-X-days-a-week approach to hybrid work. Some teams may need to spend two consecutive weeks in office while they hash out a thorny problem, and then will be more productive working remotely for the next three months while they do well-specified implementation tasks.

6

Organismic Integration Theory Proposition IV, if you are scoring at home.

7

I also curled *checks notes* three times this weekend, so that might have something to do with it too.

Reconciling October

So-called "residential schools," machine learning, and prosocial motivation

Here in Canada, the last day of September marked the first National Day of Truth and Reconciliation, to commemorate “tragic and painful history and ongoing impacts of ‘residential schools.’” My kids wore orange to school, and we had a lot of books and discussion about what these institutions were and why they were wrong.

It’s remarkable how, in explaining these hard concepts to 3- and 5-year olds, it becomes immediately clear that the label “schools” is just the wrong thing to focus on. Sure, my children are discomfited by the haircutting and physical abuse and insufficient food. But all of their questions and concern and indignation are rooted in the most horrific aspect: children, as young as four, taken away, by force, from their mothers and fathers. They immediately see that the “school” part is almost beside the point. The forced separation of parents and children was cruel and inhumane, and would have been wrong even if the children at these institutions didn’t die at horrifying rates1.

Labels matter, and the word “school” has powerful framing effects. In calling them “residential schools” we endow a profound moral wrong with undeserved positive connotations like growth, learning, expanding horizons. Using the word “school” allows us to sugarcoat these institutions as places of good intention where bad things happened. It allows us to ignore the true extent of the atrocity and avoid accountability.

What we call “residential schools” were nothing less than state-sponsored mass abductions. None of this is to minimize the appalling treatment the children suffered while at these institutions, or their explicit objective to eradicate Indigenous language and culture. But persisting in the “school” label occludes both the historic wrong and the ways we continue to inflict the same injustice on Indigenous parents and children. By adopting language that centres the forced separation of children and parents, we make it harder to persist in comfortable ignorance, and lay bare a systemic human rights violation that was perpetrated on Indigenous people for more than a century.


How does machine learning shape motivation at work?

Over the past month, I’ve been thinking a lot about the interplay of machine learning2 and organizational behaviour. Up to now, most workplace machine learning applications have made what I think of as passive predictions. These are use cases like automatic transcription, where the machine learning algorithm is helping you out, but you as the human are the arbiter of right and wrong. The machine learning prediction isn’t prompting you to any particular action, aside from perhaps correcting mistakes. These passive prediction systems are basically benign and shouldn’t be very different from a non-AI software system.

On the other hand, we have what I think of as prompting predictions. We see these a lot in consumer applications — your streaming service recommends a show you might like, your maps app suggests a faster route given current traffic, your social media feed surfaces posts it thinks you will engage with. These kinds of predictions are meant to prompt you to action, and the actions that you take (or don’t) are fed back into the algorithm, and you usually don’t know if the machine learning prediction was “right.” As these kinds of prompting predictions become increasingly widespread in enterprise applications, I think we’ll see some interesting (pernicious?) effects on organizational behaviour.

To implement machine learning, organizations will need to quantify a lot of things that previously were only defined qualitatively. My hypothesis is that quantification will have some unhelpful knock-on effects for employee motivation, and we should anticipate and design for those when we implement machine learning.

Quantification and employee motivation

In addition to whatever extrinsic rewards they provide, organizations are very reliant on employees’ intrinsic motivation to do great work. Edward Deci and Richard Ryan’s self-determination theory (SDT) says that intrinsic motivation is driven by our desire to satisfy three needs:

  1. competence — controlling outcomes and experiencing mastery

  2. autonomy — choosing actions that accord with your personal values and sense of self

  3. relatedness — feeling connected to others and supported in pursuits

As best we can tell, quantification is helpful for motivation. Most attempts at gamification quantify some kind of score, and we use step trackers to quantify physical activity as steps, because they motivate us to be more physically active. Per SDT, it works because quantifying steps is a proxy for competence — I can see my progress more easily, which reinforces my sense that I am doing well at the task or goal. However, the interaction with autonomy and relatedness is less clear. Quantifying steps can make it easier for me to control my own path towards a goal (autonomy), and meaningfully convey my progress to others (relatedness). But it might also make me feel like I am ceding autonomy to the step tracker, or engender feelings of competition rather than connection and support.

Quantification, in other words, tends to create a very specific kind of motivation: one that over-indexes on competence, as measured by progress on a narrow set of metrics. That’s problematic for organizations, as employees tend to develop tunnel vision for the numbers and lose sight of bigger-picture goals3. For exactly that reason, organizations put a lot of thought and care into choosing the right things to measure for traditional metrics and KPIs. By carefully selecting metrics with a view to what behaviours are incentivized, organizations can mitigate the risk of behaviours that make sense for an individual pursuing a goal, even as they’re counterproductive for the organization as a whole. Unfortunately, that paradigm breaks down when it comes to machine learning.

Machine learning is quantification on steroids

When an organization implements machine learning, the first step is choosing which features of the world to use and figuring out a way to represent them as numbers. This is not straightforward! Consider something as simple as the concept of “hats” — it’s actually encoding a lot of rich, qualitative aspects of the world that are very intuitive at a human level. Even a toddler knows that some hats keep you warm in winter and others keep you cool in summer, or that if you put your underwear on your head, it’s now a hat.

But when we capture “hats” for a machine learning model, we need to get very explicit and make choices about which features of “hatness” to represent and exactly how to quantify them as numbers. We generally try to include as many features as possible—the idea is that the machine learning algorithm is better than people at figuring out what’s important, so we should give the algorithm everything and let it decide. Our choices for numerical representation end up being driven by whatever produces the best prediction accuracy on an existing data set.
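To make that concrete, here’s a toy sketch of what explicit quantification might look like. Every feature name, scale, and lookup value here is invented for illustration — the point is that each choice is a human judgement call, and anything we don’t think to encode is simply invisible to the model.

```python
# A toy, invented encoding of "hatness" as a fixed-length numeric vector.
# Each decision below (which features to include, how to scale them) is a
# judgement call; qualities we leave out are lost to the model entirely.

def encode_hat(hat: dict) -> list[float]:
    """Turn a qualitative description of a hat into a vector of numbers."""
    # Invented proxy: how warm a material keeps you, on a 0-1 scale.
    material_warmth = {"wool": 1.0, "cotton": 0.5, "straw": 0.1}
    return [
        hat.get("brim_width_cm", 0.0),                   # keeps sun off (summer)
        material_warmth.get(hat.get("material"), 0.0),   # keeps you warm (winter)
        1.0 if hat.get("covers_ears") else 0.0,
        # ...but nothing here captures that underwear on your head is a hat.
    ]

toque = {"material": "wool", "covers_ears": True, "brim_width_cm": 0.0}
sunhat = {"material": "straw", "covers_ears": False, "brim_width_cm": 8.0}
print(encode_hat(toque))   # [0.0, 1.0, 1.0]
print(encode_hat(sunhat))  # [8.0, 0.1, 0.0]
```

Even in this tiny example, notice how much of the toddler’s intuitive “hatness” never made it into the vector.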

The resulting numerical representation will have a lot of numbers, but it’s almost a given that we won’t have thought of everything, and some feature of hatness will have been lost. Moreover, spurious features unrelated to hatness might get added in4. Because the numerical representation is basically impenetrable to humans, we don’t really know what features are captured or how much they affect the machine learning prediction. What we end up with is a black box, and not even the person who built it really understands how the inputs relate to the predictions.

With that black box, we’ve effectively created a game for employees to play. Through the lens of SDT, the implications are clear. Employees will expend a lot of energy trying to figure out how their actions affect the algorithm, and even more energy trying to maximize their own outcomes5 against whatever features the machine learning algorithm thinks are important. Employees will be driven by their need for competence (I want to score as well as possible) and their need for autonomy (I don’t want the algorithm to control me). And they will likely engage in all kinds of spurious behaviour that is counter-productive for the organization.

The normal way we solve for misaligned incentives when we’re creating metrics is to refine the data we measure. But if humans could figure out how to refine the data, then the model and problem space probably isn’t complex enough to require machine learning in the first place. To address these challenges in a machine learning context, we can’t rely on just swapping out different self-interested goals. Instead, we need to shift employees’ motivations away from narrow self interest, and inculcate their motivation to serve group interests. The psychological literature on prosocial motivation gives us a good starting point for how we might do that.

Integrating prosocial motivation with machine learning

Prosocial motivation is when people deliberately engage in behaviours that benefit others. There are a number of intriguing possibilities6 in this space, but I’d like to focus on an idea called autonomy support, because it ties together both autonomy and relatedness, which are not as well-served by quantification.

Autonomy support7 is a concept that is completely independent of technology and machine learning. The theory comes from studying interpersonal relationships between employees and their supervisors, parents and children, teachers and students. A relationship is autonomy supportive if the person in authority does the following:

  • Provides a good rationale for a request (“You’ll be late for school if you don’t get ready now.”)

  • Allows the individual some choice in completing the request (“What do you want to do first? Brush your teeth or get dressed?”)

  • Conveys confidence in the individual’s abilities (“You do such a great job getting dressed all by yourself.”)

  • Acknowledges the individual’s feelings towards the request (“You’re really having fun with your trucks and you don’t want to stop playing.”)

In empirical studies, psychologists train supervisors/parents/teachers to be autonomy supportive and then measure outcomes. When you create an autonomy-supportive environment, prosocial motivation increases. Specifically in the workplace, we also see that autonomy support increases trust in organizations, acceptance of organizational change, and overall work satisfaction.8 So how might we implement some of those principles in a machine learning context?

Autonomy-supportive machine learning

The need to provide a rationale speaks to the importance of interpretability in machine learning models. There is some work happening on visualizing machine learning models, so we can more clearly see why a model is making a certain prediction9. To date, a lot of this work is driven by accuracy concerns — the visualizations help us identify instances where the model has learned the wrong things and might make harmful predictions. But autonomy support suggests that interpretability also helps employees apply machine learning outputs in the context of the organization’s holistic goals.
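For the simplest class of models, providing a rationale can be very direct. Here’s a minimal sketch of the idea for a linear model, where each feature’s contribution can be read off and shown to the employee as a rationale. The feature names and weights are invented for illustration; in practice they would come from a trained model, and more complex models need more sophisticated interpretability techniques.

```python
# A minimal sketch of one interpretability idea: for a linear model, each
# feature's contribution (weight * value) can be surfaced as a rationale.
# Weights and feature names below are invented for illustration.

def explain_prediction(weights: dict[str, float], features: dict[str, float]):
    """Break a linear prediction into per-feature contributions."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    prediction = sum(contributions.values())
    return prediction, contributions

weights = {"past_purchases": 0.6, "emails_opened": 0.3, "company_size": 0.1}
features = {"past_purchases": 2.0, "emails_opened": 5.0, "company_size": 1.0}
pred, contribs = explain_prediction(weights, features)
print(f"prediction: {round(pred, 2)}")   # prediction: 2.8
for name, c in contribs.items():
    print(f"  {name}: {round(c, 2)}")
```

The rationale falls out for free here because the model is linear; the harder (and more active) research question is recovering this kind of explanation from models that aren’t.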

Allowing individuals some choice implies that we should avoid algorithms that return a single prediction. Rather, we should prefer to give users a list of likely possibilities and let them use their judgement. Suppose I am using machine learning to predict likely buyers for my sales team. I shouldn’t give my sales team output that says, “here is your most likely prospect.” Rather, I should output a list of high-potential prospects and let the salesperson choose which ones to pursue. Now, my employee retains some choice (and we’ve also built in a sanity check on the model’s accuracy).
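The shift from “here is your answer” to “here are your options” is a tiny change in code but a meaningful change in the employee’s experience. A minimal sketch, with invented prospect names and scores standing in for a real model’s output:

```python
# A minimal sketch of returning a ranked shortlist instead of a single "best"
# prediction, so the salesperson retains some choice. Names and scores are
# invented; in practice they'd come from the trained model.

def top_prospects(scores: dict[str, float], k: int = 5) -> list[tuple[str, float]]:
    """Return the k highest-scoring prospects, not just the argmax."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k]

model_scores = {"Acme Co": 0.91, "Globex": 0.87, "Initech": 0.74,
                "Umbrella": 0.66, "Hooli": 0.52, "Vandelay": 0.31}
for name, score in top_prospects(model_scores, k=3):
    print(f"{name}: {score:.2f}")
```

Showing the scores alongside the names is itself a design choice: it gives the salesperson a sense of the model’s confidence, while still leaving the pursuit decision with them.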

This notion of giving people a list of choices implies tacit confidence in human abilities: not only am I giving you choices, but I think there is value in your discernment, over and above what the statistical models can tell us. I’ve really been influenced by Josh Lovejoy’s writing on human:AI collaboration. He argues that there are capabilities that are uniquely human, and capabilities that are uniquely AI, and we need to design products that make deliberate choices about how humans and machines will collaborate for a given problem space. If we can do that, then confidence in uniquely human abilities should be baked into the cake.

Lastly, I’m skeptical about embedding empathy directly into machine learning. I suspect there is something about a computerized acknowledgement of feelings that will fall short of the mark. But I also think it would be a mistake to focus on embedding autonomy support solely into the machine learning applications. If the broader human environment is autonomy supportive, that will further bolster prosocial motivation, and help keep people focused on the big picture of the group’s goals.


What I’m Working On

I was delighted to give a guest lecture to a Digital Nova Scotia program last month, where I introduced some of my ideas on machine learning and prosocial innovation. I’m really grateful to all the participants for their questions and comments, which were so helpful to me as I refined my thinking on this topic.

I’ve also been collaborating with Rosie Yeung at Changing Lenses on a workshop we call “Radically Reimagined Recruiting.” In the workshop, we explore the meritocracy myth and the limitations of current recruiting practices, and then use human-centred design to prototype more inclusive approaches to screening job applicants. The goal is to reframe the task in terms of applicants’ present and future capabilities, rather than past education and experience. We run a version with a generic job description, which has gotten a really positive reception from conferences and communities of practice. Next up, we’re hoping to run it with organizations who want to use the workshop to redesign their approach to screening, using a live job posting. Like so many HR processes, recruiting is really amenable to small-scale experiments that let organizations test out new approaches in low-risk ways. The workshop provides helpful scaffolding for making practical change.


Putting residential schools and machine learning into the same newsletter was a lot! Thanks for sticking with it…maybe next month I’ll just curate pictures of kittens. As always, I appreciate your time, and hope you found something interesting and useful to take away to your own world.

In comradeship,

S.

1

Mortality rates at “residential schools” were unspeakably high. In 1907, the Chief Medical Officer found that roughly 25% of all children attending “residential schools” died of TB, and that it was caused by bad ventilation and poor standards of care. This was widely known at the time — a 1907 headline from an Ottawa newspaper read “Schools Aid White Plague — Startling Death Rolls Revealed Among Indians — Absolute Inattention to the Bare Necessities of Health.”

2

People can get pretty tied up in the semantics of what it means for an algorithm to be intelligent, whether something is machine learning or predictive analytics or whatever. I’m using the term machine learning quite broadly here, to denote any algorithm that uses large data sets to learn and then make new predictions about cases it hasn’t seen before. It shouldn’t matter what kind of machine learning approach you’re using, although if we’re talking about simple linear regressions, then I think most of the concepts I’m talking about here won’t apply.

3

It can be problematic for individuals too. My activity tracker seems to misinterpret laundry folding as pretty vigorous walking, which inevitably leads to less time walking and more time sitting on my couch, watching Netflix and folding clothes.

4

For instance, if I take lots of pictures of my children in hats because I think it’s adorable, a machine learning algorithm trained on my photos might come to think that the concept of hatness is related to the concept of children.

5

Often, employees can affect input data in meaningful ways. If you’ve ever had a customer service rep impress upon you the importance of giving them a 10/10 on the satisfaction survey, you’ve experienced an example of this.

6

Adam Grant (yes, that one) writes a lot about relational job design and how fostering connections between workers and beneficiaries of their work has positive effects on prosocial motivation. There are also some interesting theories on collectivist norms that seem applicable. But, well, this newsletter is already more than long enough.

7

I took a lot away from Marylène Gagné’s oft-cited 2003 article.

8

Basically, creating an autonomy supportive environment is a good idea irrespective of machine learning.

9

If you’re interested in the ML nitty gritty, this 12-minute video has some neat visualizations of generative additive models for pneumonia treatment. I also appreciated the “Transparency” chapter in Brian Christian’s book, The Alignment Problem.

September zest and ambition

Beach trips, social brawn hypothesis, and virtual reality.

Anne had the golden summer of her life as far as freedom and frolic went. She walked, rowed, berried, and dreamed to her heart’s content; and when September came she was bright-eyed and alert, with a heart full of ambition and zest once more.

~Anne of Green Gables

I hope you had a golden summer. I did. Swims, picnics, bike rides, skinned knees, and an epic trip to New Brunswick to catch up on a year’s worth of missed grandparent time. In New Brunswick, we had one especially magical evening. After a full day of work, we raced to the beach at 5:30, swam in the ocean in the warmth of the evening, walked the beach while kids dug in the sand and the sun set.

After, we went to the wharf for a late supper, where we indulged in fried fish while my children somehow attained the Platonic ideal of child restaurant behaviour1. As we left, the boys marvelled at the moon and asked to see how black the water looked — one thing they learned this summer is that the ocean reflects the colour of the sky.

Work-from-anywhere flexibility affords these kinds of wonderful moments. But it also meant slogging through several weeks where work felt like drudgery, with every cue around me screaming, “You should be on vacation!” I am too much a creature of routine and habit to want to work from far-flung locales more than occasionally. But even if I didn’t get freedom and frolic to my heart’s content, two small boys did. And I wouldn’t trade our night at the beach for anything.

Meanwhile, the realities of September have settled in. One boy started preschool on Tuesday and I’ve just returned from a somewhat tearful drop-off at the first day of senior kindergarten. Not so long ago, we all imagined that September would mark a return to some actual face-to-face working. Delta had other ideas, and so we face down another pandemic wave and another season of virtual meetings. I confess this has not done wonders for my feelings of zest and ambition, but at least it provides a convenient pretext for a deep dive on the causes and cures for so-called ‘Zoom fatigue.’


Is ‘Zoom Fatigue’ Just About the Meetings?

It’s curious that we have ascribed the (undeniable!) fatigue of remote work specifically to video meetings. There are two theories as to why video meetings are more tiring — the scheduling and self-presentation hypotheses. I think both are basically correct, but also incomplete. There are so many other things besides meetings that go into getting work done. Is it really just the meetings leaving us all so tired?

The Scheduling Hypothesis

The scheduling explanation for zoom fatigue rests on how virtual meetings allow us to schedule back-to-back meetings with much greater intensity. In the before times, even the busiest calendars afforded small breaks to move between meeting rooms, walk someone to the door, get the projector to connect. But now we sit in the same chair and stare at the same screen, moving between virtual meetings with just a few clicks.

Earlier this year, Microsoft put out some interesting research around the effect of back-to-back meetings. They had volunteers participate in video meetings while hooked up to EEGs that monitored their brain activity. On one day, they attended four back-to-back 30-minute meetings, while on the other they did ten minutes of app-facilitated meditation between meetings. The results showed that with the ten minutes of meditation, beta waves associated with stress did not build up over the course of the meetings. Ten-minute breaks also increased frontal alpha asymmetry, which is associated with being more engaged in meetings.

“Take breaks” is good advice, but gosh is it hard to execute. There’s such a strong temptation to squeeze one more meeting into the calendar, or to let a meeting run long because you don’t have a hard stop. Even when you succeed in keeping breaks in your calendar, it takes tremendous discipline to stop yourself from filling those ten minutes with emails or instant messages.

Rather than leave it up to individuals, I wonder if it might be better to facilitate those breaks within the meetings themselves. If I were running an hour-long internal meeting, I think I’d try to interrupt it halfway through: “Okay, everyone: mute buttons on, cameras off, and stand up and walk away from your computer and phone for the next ten minutes.” Just as it’s easier to work hard in a group fitness class, I think group breaks would be easier to stick to.

The Self Presentation Hypothesis

Self presentation is the idea that most of us want to present a positive image of ourselves to the world, but projecting this image takes significant mental energy. Video meetings heighten our self presentation efforts, partly because we can see our own image (at least by default), and partly because video meetings subject us to the eye gaze of others for much longer and at much closer range.

A recent study explored the self presentation hypothesis. Over a four-week period, workers were randomly assigned to keep their cameras off for the first two weeks and on for the second, or vice versa. Each day, study participants reported on how fatigued they felt. The researchers found that having cameras on for virtual meetings is more fatiguing, irrespective of the number or duration of the virtual meetings. The effect was more pronounced for women and for employees with less tenure in the organization.

The study also explored whether there were benefits in terms of feeling engaged or having voice in meetings. The results here were less clear-cut, but the additional fatigue from cameras on seemed to negatively affect both engagement and voice. But the researchers caution against over-generalizing these findings, since they studied the effects on the person turning the camera on or off, not the experiences of other individuals in the meetings.

For me, this research makes it clear that there are costs associated with having video meetings. But my hunch is that for some meetings, those costs are completely worth it. I’ve been participating in an online group2 whose cameras-on policy aligns with their goal of creating community, rather than just hosting a webinar series. I think it would be hard for them to achieve their community goals without cameras, and I enjoy the meetings more because the cameras are on. On the flip side, my favourite kind of pandemic interaction is a one-on-one phone chat while my interlocutor and I both walk — there is something incredibly rejuvenating in the combination of fresh air, exercise, and a voice-only conversation.

Contra the researchers in this study, I’m not sure the prescription is as simple as individual flexibility on the choice of cameras on or off. Rather, I think we need to develop a discipline of thinking carefully about the kind of interaction we’re striving for, and whether cameras will add more than they detract. I think the end result will be cameras used by all attendees for only those meetings where the video adds an important dimension. For most people, I expect that translates into video meetings a few times a week rather than a few times a day, and a corresponding reduction in fatigue.


Remote fatigue beyond virtual meetings

I think it’s a good idea to be more selective when we choose video and to create a discipline of taking short breaks between or during virtual meetings. But I have an additional theory as to why remote work feels much more exhausting, courtesy of a webinar I attended earlier this year. Dr. Emma Cohen3 runs the Social Body Lab at Oxford, and she has published some new research on what she calls the Social Brawn Hypothesis. She finds that cues of social support drive increased performance without increases in perceived effort or fatigue. Stated another way: without social support, you feel you are exerting more effort to reach the level of performance you achieved with the benefit of social support.

What really struck me is the definition of social support. It’s not just moral support (what Cohen calls “esteem support”). Rather, it’s specifically the combination of synchrony and the perception of actual resources available from your companions. To access social brawn, it’s not enough to have someone cheering you on — you need to feel like there is someone in lockstep who can jump in and provide tangible help.

I suspect that when we’re working remotely, we lose social brawn from our colleagues. The lack of synchrony in remote work is obvious, but I also think tangible help is less available. It’s still possible for you to tap your remote colleagues for support, but it’s much more difficult. Because you’re working remotely, you and your colleagues don’t have the same “mutual awareness4” of tasks or their context. It takes much more effort to communicate the required context when you’re restricted to leaner media.5 What’s really fascinating to me about Dr. Cohen’s research is that it’s not important whether you actually access that support from your colleagues. Being out of sync and knowing help from others isn’t easily available is a source of fatigue, in and of itself.

I don’t want to oversell this idea. Dr. Cohen’s empirical work is focused on exercise science, not cognitive work, and it shows large differences between individuals that require further study. But I do think it suggests that even if we eliminated video meetings completely, remote work would still be more tiring, unless we can solve for social isolation and mutual awareness of goals and contexts.


What I’m Working On

I’ve spent the summer conducting research on institutional adoption of virtual reality (VR) for some specific workplace training contexts. The research itself is focused on a pretty narrow niche, but it’s prompted me to ponder the broader question of whether VR will be mainstream in workplaces in the near future.

I see a lot of signals in the marketplace that make me think the answer is ‘yes.’ The technology is advancing by leaps and bounds, getting both faster and cheaper. That should only accelerate as large organizations like the US Department of Defense6 and Facebook invest massively in both hardware and software. Network effects are important for VR, and I see early adopter organizations making VR headsets standard issue for employees. If those are one-offs, VR might fizzle. But I’m more inclined to think that VR will ultimately break through and become a standard work tool, much like a mobile phone or laptop.

I’ve come to believe that VR’s core value proposition is distraction-free immersion. Undivided attention has become the rarest and most ephemeral commodity in the workplace, and that’s only exacerbated by digitally-dominant remote work. VR can potentially be a mechanism to regain focus in training, in workshops, in meetings, and to overcome the tyranny of notifications and open tabs7. I suspect that VR might eventually supplant video calls, because it facilitates undivided attention and lessens some of the self presentation challenges of video calls. I’m less confident that VR can help with the synchrony and social support challenges, but if the technology improves enough (especially graphics and haptics), maybe VR could deliver mutual awareness and synchrony so we can take advantage of social brawn even when working remotely. And then we just need to figure out how to be disciplined about taking breaks.


Thanks for reading the first issue of the Workomics newsletter. I’d really love feedback on what you found useful and interesting. Are there other topics you would like to read about? If you know someone else who might appreciate the newsletter, I’d be grateful if you shared it.


Wishing you all zest and ambition for the month ahead.

Yours,

S.

1

They were quiet! They sat still! They ate all their food without complaint! Then after they finished eating, they sat quietly reading books while the adults finished. I don’t even know how it happened, but I suspect witchcraft.

2

It’s called the Design Thinking Zeal and it’s absolutely lovely. If you’re at all interested in human-centred design, you should consider joining.

3

I’m very grateful for Dr. Cohen’s generosity in follow-up email correspondence with me, which provided a lot of helpful food for thought. An extended version of the webinar I attended is online.

4

Tsedal Neeley talks about the concepts of mutual awareness and lean vs. rich media as they relate to remote collaboration in her book Remote Work Revolution.

5

A recent working paper shows that time in meetings decreased 11% during lockdown, while the number of internal emails increased. I think that speaks to the additional work to create context for colleagues.

6

The Microsoft headsets are actually augmented reality (AR). In AR, virtual elements are overlaid on the real world, whereas in VR, you are fully immersed in an entirely virtual world. I think the technology advances in AR should cross-pollinate to VR, but I don’t think people or organizations will adopt AR and VR in the same ways or for the same reasons.

7

Current Open Tab Count = 42.

Coming in September

Workomics will be launching after Labour Day as a free monthly newsletter. I hope it will feel like a newsy personal update, with interesting ideas and jokey asides. Please subscribe!
