Review and testing in practice across multiple devices

In this latest extract from our ebook From Tool Selection to Measurement: 6 Steps to eLearning Authoring Success, we look at the review and testing process. With so many variables to test across user environments, user journeys, and more besides, some key areas can slip through the cracks, with problematic results. Read on to find out more.

Why do we need to review the courses that we create? The answer is surely obvious: to make sure there are no errors or omissions that affect the quality of our training. Nonetheless, though it’s rarely by design, there’s no shortage of errors that make it through review and testing. Here are some common oversights that we encounter:

Failing to Test Different End-User Environments

From spurious error messages to courses just plain failing to load, your end users’ environments can throw up unanticipated errors that result in a bad experience. It’s not just outright errors that can cause trouble, though: you also need to test whether different user environments result in different, and potentially inferior, experiences. Consider how the following could have an effect:

    • Device: In 2020, you can expect content to be viewed on all kinds of devices: desktops, laptops, tablets, iOS phones, Android phones, and beyond. Modern content creation tools will use responsive design to handle your content on all of these devices, but you still need to check what the output actually looks like. Don’t just settle for ‘it works’, either: tweak the design to ensure that you’re getting the best user experience on each device.
    • Browser: These days, people are largely settled on a favored browser, be that Chrome, Safari, Firefox, Edge, or something else. This makes it easy to miss the fact that browsers can handle even standards-compliant content in subtly different ways. You need to test your content in all popular browsers to catch these errors; don’t just assume it will work because it works in the one you’re authoring in! And remember, there are multiple browsers in use on mobile as well (see the cross-browser sketch after this list).
    • Operating System: On mobile, you should ideally test both Android and iOS devices. Meanwhile, a good chunk of business machines run Windows, but you probably only have to take a walk through your design department to find someone on macOS. And chances are good that your technical teams are running some flavor of Linux.
    • LMS: We often see people reviewing via a test LMS or in their tool’s preview, but then forgetting to test on the final LMS they’ll use to deliver the content. Just as browsers sometimes treat web standards in subtly different ways, SCORM and xAPI can be implemented slightly differently on different LMSs. For example, SCORM imposes character limits on certain fields: one LMS may give you an error that clearly flags the character issue, while another may return only a confusing general message (see the SCORM sketch after this list).
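
If your courses are delivered over the web, part of this environment matrix can be automated. Here’s a minimal sketch using Playwright’s test runner to open the same course in Chromium, Firefox, and WebKit, plus emulated Android and iOS profiles; the course URL and the selector for the course container are hypothetical placeholders, not anything Gomo-specific.

    // playwright.config.ts: one project per browser/device profile to cover
    import { defineConfig, devices } from '@playwright/test';

    export default defineConfig({
      projects: [
        { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
        { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
        { name: 'webkit', use: { ...devices['Desktop Safari'] } },
        { name: 'android', use: { ...devices['Pixel 5'] } },
        { name: 'ios', use: { ...devices['iPhone 12'] } },
      ],
    });

    // course.spec.ts: this smoke test runs once per project defined above
    import { test, expect } from '@playwright/test';

    test('course loads and renders its first screen', async ({ page }) => {
      await page.goto('https://example.com/my-course/index.html'); // hypothetical URL
      await expect(page.locator('#course-root')).toBeVisible();    // hypothetical selector
    });

Automation like this catches ‘fails to load’ problems cheaply; judging whether you’re getting the best experience on each device still needs a human reviewer.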
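
On the LMS side, a concrete example of those character limits is SCORM 1.2’s 4,096-character cap on the cmi.suspend_data field. Below is a minimal sketch of a guard that warns before hitting the cap and surfaces the LMS’s own error code (handy when an LMS only shows a generic failure message); it assumes the LMS exposes the standard SCORM 1.2 API object.

    // Guard SCORM 1.2's character limit on cmi.suspend_data and surface the
    // LMS error code rather than a vague general failure.
    declare const API: {
      LMSSetValue(element: string, value: string): string;
      LMSGetLastError(): string;
    };

    const SUSPEND_DATA_LIMIT = 4096; // cap on cmi.suspend_data in the SCORM 1.2 spec

    function saveSuspendData(state: string): void {
      if (state.length > SUSPEND_DATA_LIMIT) {
        console.warn(`suspend_data is ${state.length} chars; the limit is ${SUSPEND_DATA_LIMIT}`);
      }
      if (API.LMSSetValue('cmi.suspend_data', state) !== 'true') {
        console.error(`LMSSetValue failed with error code ${API.LMSGetLastError()}`);
      }
    }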

Creators need to be aware of how different user environments change the experience, and check whether the result still matches the intent of the content. Ask: does it still achieve the desired learning outcome? And is it more or less effective in certain formats?

What should be clear from this list is that it would be impractical to test for every permutation of device, browser, operating system, and LMS that will be used to access a course. The important thing is to make sure that you have a set of test devices that are representative of your end-user group. Use existing user data and/or survey your learners to find out what you need.
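
If you have analytics or LMS session data, even a rough tally can reveal which environments matter. Here’s a minimal sketch, assuming a hypothetical session export with device, browser, and OS fields, that keeps the smallest set of environment combinations covering roughly 95% of your traffic:

    // Tally device/browser/OS combinations and keep the most common ones
    // until the chosen set covers the target share of sessions.
    interface Session {
      device: string;
      browser: string;
      os: string;
    }

    function representativeEnvironments(sessions: Session[], coverage = 0.95): string[] {
      const counts = new Map<string, number>();
      for (const s of sessions) {
        const key = `${s.device} / ${s.browser} / ${s.os}`;
        counts.set(key, (counts.get(key) ?? 0) + 1);
      }
      const ranked = [...counts.entries()].sort((a, b) => b[1] - a[1]);
      const picked: string[] = [];
      let covered = 0;
      for (const [key, n] of ranked) {
        if (covered / sessions.length >= coverage) break;
        picked.push(key);
        covered += n;
      }
      return picked;
    }

Pair a tally like this with a learner survey to catch environments your analytics can’t see, such as locked-down corporate builds.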

Failing to Test Multiple User Journeys

A common mistake that organizations make when testing is to check each screen of content individually. Once these all look correct, they’ll congratulate themselves on a job well done. However, this doesn’t account for all the different journeys a learner could take through the content to arrive at any one screen.

If you’re using any form of question logic, you’re often sending learners down diverging paths depending on how they answer a question. It’s important to test each and every one of these paths to ensure that every learner gets the full experience you intend for them. Testing no user journeys, or only a very limited number of them, can let issues surface at very late stages of the course creation process. One way to make the paths explicit is sketched below.
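
Here’s a minimal sketch of that idea, assuming a branching course modelled as a map from each screen to the screens its answers can lead to; the data shape is hypothetical (not a Gomo API), and it assumes branches never loop back to earlier screens:

    // Enumerate every journey from the start screen to an end screen so that
    // each path gets reviewed at least once. Assumes no cycles in the graph.
    type CourseGraph = Record<string, string[]>;

    function allJourneys(graph: CourseGraph, start: string): string[][] {
      const journeys: string[][] = [];
      const walk = (screen: string, path: string[]) => {
        const next = graph[screen] ?? [];
        if (next.length === 0) {
          journeys.push([...path, screen]); // reached an end screen
          return;
        }
        for (const n of next) walk(n, [...path, screen]);
      };
      walk(start, []);
      return journeys;
    }

    // e.g. a question on 'q1' branching to remediation or straight to the summary:
    const course: CourseGraph = {
      intro: ['q1'],
      q1: ['remediation', 'summary'],
      remediation: ['summary'],
      summary: [],
    };
    console.log(allJourneys(course, 'intro').length); // 2 distinct journeys to review

Even if you never automate the walk itself, enumerating the journeys gives reviewers a checklist, so no branch goes unreviewed.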

Non-Technical Aspects That Are Surprisingly Easy to Forget

Faced with the complexity of the systems and learning designs that require review, it’s not uncommon for teams to gloss over one or more aspects of the actual content. Your robust device testing policy may catch a screen that doesn’t load properly on mobile, yet you can still miss an obvious spelling mistake in the second paragraph.

Therefore, remember to get someone knowledgeable to review all aspects of your language use, including:

  • Spelling and grammar
  • Tone of voice
  • UK and US English (and dialectal differences in other languages)
  • Company style guidelines

Company style guidelines are one area that catches teams out. Projects can be ready to launch, and then brand compliance will step in and throw a spanner in the works. In some scenarios, a project may encompass 50 or more courses, all created at the same time. If the theme used across all of these courses is not brand-compliant, every one of those courses will need to be corrected.

In some platforms (including Gomo), you would at least be able to update a master theme file to roll out the correction. However, even this depends on having built the courses to use a unified theme. Furthermore, you still need to verify that the change has been applied to every course, and that no obscure issues have crept in as a result.
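
That verification step can be partly scripted. A minimal sketch, assuming each course exposes the ID of the theme it uses (a hypothetical shape, not a Gomo API):

    // Flag courses that don't reference the master theme: they won't pick up
    // a corrected theme file automatically and need fixing individually.
    interface Course {
      id: string;
      themeId: string;
    }

    function coursesOffMasterTheme(courses: Course[], masterThemeId: string): Course[] {
      return courses.filter((course) => course.themeId !== masterThemeId);
    }

    const strays = coursesOffMasterTheme(
      [{ id: 'c1', themeId: 'brand-2020' }, { id: 'c2', themeId: 'legacy' }],
      'brand-2020',
    );
    console.log(strays.map((c) => c.id)); // ['c2'] still needs a manual correction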

Continue Reading

This blog post covers most of the first part of our review and testing chapter. In the remainder, we cover the importance of having a Subject Matter Expert (SME) check the accuracy of your material (even if it was authored by an SME), and how tool selection can streamline your QA process. We then dive into review and testing best practice, looking at the ideal size of your review group, the advantages of staging, and the ideal time to start your testing.

In the remaining chapters of the ebook, we also cover how to:

  • Define your needs and lay the groundwork before purchasing a new learning tool
  • Work to your budget and build content that secures budget increases
  • Introduce and effectively transition staff to a new tool
  • Create high-quality, effective learning content
  • Track the content you launch and use data to refine it

Download the ebook today.


Explore Gomo’s free 21-day trial and experiment with our cloud-based review tools: click the button below to get started.
