How to Define Quality of Service for Meetings

Organizations and collaboration vendors need to embrace Quality of Service for Meetings. This three-tiered framework offers an innovative approach to measuring meeting success.

Work that one does for others requires Quality of Service (QoS) metrics. As I have learned since writing Management by Design, most people don't think about designing activities considered mundane or routine. I still receive fascinated looks (now on video conferences) when I talk about designing a strategic planning process or designing the approach to internal communications.

Do you offer a Quality of Service for meetings? You should.

The biggest ah-ha moment comes from design discussions about the most common people process in any organization: the meeting. While many automated processes chug along with near-astronomical rates of throughput, the meeting remains a human-centered process, and one that usually isn't very well designed.

Over the years I have offered several posts that focus on meeting design, but recent briefings with collaboration vendors raise an important issue that I have not yet addressed.

Read other Serious Insights posts on meeting design

My concern was reinforced recently when Cisco briefed me on their latest Webex feature enhancements and pricing plans (October 2020). The theme of the briefing was “10X Better Experiences.”  Unfortunately, the tagline was more marketing than metrics. The presentation went into several interesting Webex updates but did not cover just how the company planned to measure experience improvement. I kept asking myself, 10-times better than what? And, how are they measuring better? Against what baseline?

In response, I developed a three-tiered approach to meeting measurement. Each tier becomes progressively more difficult to measure. Keep in mind that measurement requires its own processes, which themselves require monitoring and improvement, and the data takes time to gather and assess.

The following sections outline the three measurement tiers with some suggestions on the mechanisms for measurement, followed by additional commentary on the approach.

QoS for meetings tier 1: Counts

Counting or accumulating quantitative metrics is both easy and easily automated.  Quantitative performance measures appeal to collaboration vendors because they offer a way to create dashboards with cool charts. Unfortunately, this quantitative data is more about operations than execution, more about license justification than meeting performance.

Some of these count metrics include:

  • Number of meetings
  • Number of meeting minutes (total, per meeting in drill down, average)
  • Number of attendees (total, per meeting in drill down, average)
  • Device used
  • Browser used
  • Differential between meeting invitations and meeting attendance (actual names and ratios)
  • Action items
  • Decisions made

None of these measures informs managers whether the meetings were successful, or if they were necessary. All they say is that the meetings happened, people attended, and bits moved. Organizations need to consider more qualitative measures in order to understand the first order of meeting success: perceptions of success.
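As a sketch of how easily this tier automates, the counts above can be rolled up from whatever meeting log a platform exports. The record fields below are assumptions for illustration, not any vendor's actual schema.

```python
from statistics import mean

# Hypothetical meeting log records; field names are illustrative,
# not any collaboration vendor's actual export format.
meetings = [
    {"minutes": 30, "invited": 8, "attended": 6},
    {"minutes": 60, "invited": 5, "attended": 5},
    {"minutes": 45, "invited": 10, "attended": 7},
]

def count_metrics(meetings):
    """Tier-1 counts: totals, averages, and the invitation-to-attendance ratio."""
    return {
        "meetings": len(meetings),
        "total_minutes": sum(m["minutes"] for m in meetings),
        "avg_minutes": mean(m["minutes"] for m in meetings),
        "avg_attendees": mean(m["attended"] for m in meetings),
        # Differential between meeting invitations and meeting attendance.
        "attendance_ratio": sum(m["attended"] for m in meetings)
                            / sum(m["invited"] for m in meetings),
    }

print(count_metrics(meetings))
```

Note how little the output says about success: every number describes volume, not value, which is exactly the limitation of this tier.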

QoS for meetings tier 2: Perceptions

Perceptions require input from meeting participants. Surveys often follow a meeting, but because time is precious, most offer only one or two questions, and those often focus on video or sound quality. For physical meetings, most organizations gather very little data. Those who want to improve participant perceptions of meetings need to understand the thoughts and feelings of the participants. And that requires more questions.

Perceptual tier questions to assess meeting success include:

  • The meeting started on time
  • The meeting ended on time
  • The meeting followed an agenda 
  • The meeting met my expectations
  • How often did the meeting stray off-topic?
  • It was not necessary for me to be in the meeting
  • I did not feel this meeting was necessary
  • The meeting did not generate any value
  • The discussion was respectful
  • Were the decisions made in the meeting sound?
  • All participants, including remote attendees, were included
  • Were attendees prepared?
  • Did people post content related to their assignments to a shared workspace ahead of the meeting? How long before?
  • Did anyone decline the meeting after accepting it? Did anyone ignore the invitations altogether? If someone declined the meeting, either after accepting it or as an initial action, how long before the meeting did he or she click on decline?
  • The collaboration tools we used added value to the meeting (if there are a variety of features, call out the features and evaluate each one separately.)

Meeting tools don’t routinely gather meeting performance perceptions. Organizations should consider survey tools to supplement the anemic surveys in most collaboration tools.
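A supplemental survey tool would likely export responses that need aggregating per question before they inform meeting design. Here is a minimal sketch, assuming a 1-5 agreement scale and question keys paraphrased from the list above; both are illustrative choices, not a prescribed schema.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses on a 1-5 agreement scale; question keys are
# shorthand for the perception questions, invented for illustration.
responses = [
    {"started_on_time": 5, "followed_agenda": 4, "met_expectations": 3},
    {"started_on_time": 5, "followed_agenda": 2, "met_expectations": 4},
    {"started_on_time": 4, "followed_agenda": 3, "met_expectations": 2},
]

def perception_scores(responses):
    """Average each perception question across all respondents."""
    ratings = defaultdict(list)
    for response in responses:
        for question, rating in response.items():
            ratings[question].append(rating)
    return {q: round(mean(r), 2) for q, r in ratings.items()}

print(perception_scores(responses))
```

Per-question averages like these make trends visible across dozens of meetings in a way a single "how was the call quality" question never will.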

Some items on the list, such as remote attendee participation, may lend themselves to automation. Machine learning (ML) should be able to sense a meaningful stream of audio coming from a remote attendee. Eventually, ML will likely be able to identify individual participants and offer metrics about how much each person participates.

Participation, however, should never be evaluated as a standalone metric. If the person who doesn’t participate also reports that he or she didn’t think they should be in the meeting, then there would be a correlation between their silence and their feelings related to the meeting.
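Joining the automated participation signal to the survey answer makes that correlation actionable. The sketch below uses invented field names and a speaking-time threshold chosen purely for illustration; the interesting case is the attendee who was silent but did not feel out of place.

```python
# Hypothetical per-attendee records joining an automated participation
# signal (speaking minutes) with a survey response; all fields invented.
attendees = [
    {"name": "A", "speaking_minutes": 0.0,  "felt_unnecessary": True},
    {"name": "B", "speaking_minutes": 12.5, "felt_unnecessary": False},
    {"name": "C", "speaking_minutes": 0.5,  "felt_unnecessary": True},
    {"name": "D", "speaking_minutes": 0.2,  "felt_unnecessary": False},
]

def silent_but_willing(attendees, threshold=1.0):
    """Flag attendees who stayed silent yet felt they belonged.

    Silence from someone who also reported the meeting was unnecessary
    explains itself; silence from someone who wanted to be there is the
    signal worth a follow-up on meeting design.
    """
    return [a["name"] for a in attendees
            if a["speaking_minutes"] < threshold and not a["felt_unnecessary"]]

print(silent_but_willing(attendees))  # prints ['D']
```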

Likewise, content and sharing are also easily measured with current collaboration tools, should vendors or organizations choose to develop automation to examine preparation metrics. The number of files posted, accessed, annotated, and updated would represent engagement. Rather than thinking about content management only for version control and compliance, it can also offer insights into meeting success when tied to a meeting (activities from a particular group or meeting within a specific time window).

Beyond activity, intention offers important insight into success; however, defining intention requires significant thought for the metric to prove meaningful. If the intention of the meeting was to inform, then checking whether the information shared was absorbed would validate the meeting. If the meeting intent included decisions, counting the decisions doesn't actually reflect the quality of the meeting, because the act of deciding is only a count. Quality means an assessment of the goodness of the decision, its fitness for purpose. Perceptions of the goodness of a decision may be captured in a survey, but tracking the decision's impact (see tier 3) is the only way to really understand if a meeting met its intended goals.

And then there are action items. Too often action items act as a proxy for meeting success. I see action items taken in meetings almost universally as a failure unless they are truly spontaneously generated by meeting attendees.

While it is easy to count action items, their newness means no one has done anything with them yet. Whether they are the right action items remains to be seen. I suggest adopting Scrum approaches to avoid handing out action items in meetings. Existing action items or tasks, however, should be reviewed in terms of obstacles to their completion. If no obstacles exist, the system tracking the action card will simply report status. Everyone who needs to know the status already knows it, and therefore, they do not need to discuss on-track action items.

If an action item spontaneously emerges during a meeting, it should be captured on a card in the system so it becomes immediately visible, not just to the people in the meeting, but to all the project's stakeholders. In that case, I also suggest assigning a tag to the action item that links it to the meeting in which it originated (or for collaboration tools to do that automatically). That way, people can look back on the context for an action item, which will provide insight into it, along with helping make the case for whether the meeting that generated it was useful.
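The meeting tag amounts to a small piece of metadata on each card. A minimal sketch of the idea, with invented class, field, and ID conventions since no specific tool is assumed:

```python
import uuid
from dataclasses import dataclass, field

# Minimal sketch of tagging a spontaneously captured action item with
# the meeting that originated it; names and ID format are invented.
@dataclass
class ActionCard:
    title: str
    meeting_id: str  # tag linking the card back to its originating meeting
    card_id: str = field(default_factory=lambda: uuid.uuid4().hex)

def cards_for_meeting(cards, meeting_id):
    """Look back at every action item a given meeting generated."""
    return [c for c in cards if c.meeting_id == meeting_id]

cards = [
    ActionCard("Draft Q3 survey questions", meeting_id="mtg-2020-10-05"),
    ActionCard("Review vendor pricing", meeting_id="mtg-2020-10-12"),
]
print([c.title for c in cards_for_meeting(cards, "mtg-2020-10-05")])
```

With that single tag in place, answering "what did that meeting actually produce?" becomes a query rather than an archaeology project.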

Perception-related questions are easy to ask, but they require time from participants, along with a willingness to be candid, in order to provide useful data to improve meeting design. 

The hard work in understanding meeting success comes from understanding outcomes.

QoS for meetings tier 3: Outcomes

The most difficult, and I will argue, the most important element of meeting metrics comes from outcomes. What did the meeting achieve? And did it achieve it in time?

This approach requires the formalization of meeting design, including capturing meeting metadata.

For instance, meeting data should include intent, expected outcomes, and the estimated time to outcome realization. To measure meeting outcomes, agendas need to become active. I suggest that agenda items may work better as actions in a card system than as a static, published agenda. I see no reason that Scrum cannot be applied to most meetings.

Outcomes measurements may include:

  • Ideas generated that become revenue generators or cost savings
  • Introductions to people who end up building value together
  • Meaningful actions
  • Decisions that lead to meeting commitments
  • Does the meeting create any positive unintended consequences (serendipity)?

Meetings appear to be such fleeting things, but they are more anchor points than ephemeral activity. Meetings offer reflection, redirection, consolidation, perhaps gestation. Meetings often infuse perceptions of value for employees because their work gets acknowledged. And that is a fine reason for people to meet, as long as the meeting's intent was to recognize performance. The outcome of a performance recognition meeting should be improved, or at least continued, engagement from the people recognized. Does that happen? The worst outcome for a meeting to acknowledge performance? The person being honored not attending. When that happens, it is an embarrassment to the team leader and, in absentia, to the person receiving the kudos. I have attended those poorly designed meetings. Dropping that item from the agenda would have been the right move.

To get completely meta, one outcome of tracking outcomes is the practice of tracking outcomes itself. If you don't track outcomes, you cannot improve overall performance. Just doing it gets the ball rolling. Addressing perceptions helps align meeting design better with attendees, but improved perception scores don't necessarily equate with actual performance improvement. Only when people connect meetings to outcomes do they viscerally recognize the value of a meeting, even if it took place some time ago.

Quality of Service for Meetings: future tense

So why worry about a meeting being useful or not? Because patterns emerge that may detract from good management, productivity, civility, and collaboration. A manager, for instance, who regularly holds meetings that don't meet expectations, fail to produce meaningful decisions, surprise people with action items, and generally earn low scores for both engagement and willingness to attend, represents an issue that needs to be addressed. No measurement, no way to correct issues.

The inverse also holds true. Without measurement, it becomes hard to distinguish good performance from poor performance until it is too late for meaningful corrective action. Annual, or even monthly, management perception surveys may point to an issue, but with dozens of meetings a month, morale has already sunk for those subjected to poor performance, and for those who enjoy a good environment, the lessons learned take too long to propagate. Meetings take place in a moment, and those moments deserve immediate feedback so the next meeting, which may start no more than 10 minutes later, can perhaps be a little bit better.

One of the big balancing issues (balance is the first principle in Management by Design) is time. Organizations must balance the time spent understanding meeting performance with the work of attendees. Meetings should also include built-in capabilities for sharing open work and results, linking to other relevant work, strategic context, and other design elements.

If the analysis results in increased productive work time, then the meeting metric analysis likely contributed in a positive way.

Measurement comes with design

For those who design meetings, measurement comes with the design approach. It may be hard to measure some things, especially those that take time to realize. Meetings often live in The Serendipity Economy, where actions today may lead to value in the future, and in ways that are not, on the surface, related to the initial actions. Serendipity requires perseverance and good record keeping to discover value derived from a meeting that took place weeks or months ago.

Other applications of Quality of Service for meetings

These ideas also apply to shared workspaces, team rooms, channels, and other places people gather virtually to collaborate. A shared workspace is an asynchronous meeting, and it requires the same diligence, attention to detail, and design as a physical meeting, perhaps more so, because problems often get identified and corrected more slowly in asynchronous work than in real-time work, since people are not as personally engaged in the moment.

Measurement creates engagement. Rather than dashboards that help vendors justify investments, dashboards should focus on individual, team, and meeting performance, which will require vendors to think a lot more deeply about which measures they can derive from their data, how better to query customers about their perceptions of value, and how to deliver meeting insights in a meaningful way.

Commit to Quality of Service for meetings

A Quality of Service for meetings commitment offers attendees a transparent meeting framework. Depending on the QoS agreement, they can expect more co-creation, clearer objectives, and more feedback. Attendees should expect the meetings they attend will be the meetings they signed up for. Attendees should also know what is expected of them, and where they can share their content. A look at actions associated with the meeting should provide visibility into which discussions and decisions to prepare for, and what arrives as just information: tasks on track with no need for further action. Design should reflect intent, and execution should deliver the intended experience.

Meetings are an experience. Meetings are a service. Meeting attendees are customers. Your customers deserve a well-designed experience, a commitment to quality of service, and to be treated like customers by those who manage so much of their time in shared work experiences trying to get things done.

Did you enjoy reading about Quality of Service for meetings?

For more serious insights on collaboration click here.

Daniel W. Rasmus

Daniel W. Rasmus, Founder and Principal Analyst of Serious Insights, is an internationally recognized speaker on the future of work and education. He is the author of several books, including Listening to the Future and Management by Design.
