TTAC

Video Otoscopes White Paper


Selecting the “best” imaging device for a telehealth program can be a balancing act between adequate image quality, ease of use, and acceptable price.  Given that the most expensive otoscopes can cost roughly 40 times as much as the cheapest models, it is more important than ever to spend the time and effort to perform a thorough evaluation of the products on the market.  This should include an assessment of the quality of imagery the devices are capable of producing.  The following section of this toolkit focuses specifically on how to set up a testing environment to ensure a balanced, fair assessment of the otoscope market within your own organization.

A review of this toolkit’s Assessment Process section may help you consider the larger process of establishing minimum requirements, user profiles, and other relevant elements that play into the final purchasing decision.

 

The Goals of Testing

TTAC tested 11 different otoscopes in this evaluation.  After a short, initial hands-on session with the devices, it was clear that the differences in image quality between the devices – and occasionally within the same device – demanded a rigorous and controlled testing environment. Color accuracy, blooming, depth of field, field of view, focal range, sharpness, and resolution varied widely. Additionally, usability factors such as one-handed operation and ease of image capture varied considerably between devices and were included as key considerations in our review.

The evaluation consisted of three assessment phases:

  • Technical Assessment: TTAC captured images of test targets to provide imagery and measurement data useful for assessing the technical capabilities of each device and light source.
  • Clinical Assessment: ENT and Audiology clinical providers captured images of volunteer adult and pediatric subjects. These images were de-identified and reviewed for image quality and color accuracy by specialists and primary care providers.
  • Usability: Devices were evaluated and ranked for overall ease of use and functionality by telemedicine end-users and end-user trainers in a hands-on assessment.

 

Technical Image Review

During the technical evaluation, images of test targets designed to demonstrate Field of View (FOV), Resolution and Clarity (RES), Color (Color), and Depth of Field (DOF) were captured. Clinical and technical reviewers were then asked to rate each image based on a 1-5 scale (with a score of 5 representing the best possible image). If a device could support multiple light sources, sets of Resolution, Color, and Depth of Field images were captured using each light source.
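
As an illustration of how such ratings can be tabulated, the sketch below shows one possible way to average 1-5 reviewer scores per device, light source, and test category using Python and pandas. The device names, light sources, and scores are hypothetical placeholders, not data from this evaluation.

    # Minimal sketch: averaging 1-5 reviewer ratings per device, light source, and category.
    # Device names, light sources, and scores are hypothetical placeholders.
    import pandas as pd

    ratings = pd.DataFrame([
        {"device": "Device A", "light_source": "integrated", "category": "RES",   "score": 4},
        {"device": "Device A", "light_source": "integrated", "category": "Color", "score": 3},
        {"device": "Device B", "light_source": "light box",  "category": "RES",   "score": 2},
        {"device": "Device B", "light_source": "light box",  "category": "Color", "score": 4},
    ])

    # Mean score across reviewers for each device / light source / category combination.
    summary = ratings.groupby(["device", "light_source", "category"])["score"].mean()
    print(summary)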

Several of the devices tested provided good results in the resolution and color categories, but rated lower in the field of view category (particularly in images captured with pediatric specula). Other devices had a good field of view but suffered from softer, less crisp images and less accurate colors. With pediatric specula, all devices performed poorly in the field of view category, an unsurprising result as pediatric specula block a significant portion of the imaging area.

It is of note that images captured from devices with rod-lens probes generally had a wide field of view, but also exhibited barrel distortion (sometimes called “fish-eye”) that warps the image around the edges. This is primarily caused by the lenses used in these devices. Structures are generally clear in the center of the lens, but less clear at the edges. This effect is common in many otoscope devices, and is very noticeable in the technical images, but is much less noticeable in the clinical images.

  • Technical Review Images:
    • Field of View
      • Accuchart Circles Fixed Distance – captured images of the Accuchart Circles target at a 1.5 cm distance from the lens using adult and pediatric specula
      • Accuchart Circles Variable Distance – captured images of the Accuchart Circles target where the target fills the horizontal field of view, using adult and pediatric specula (a worked field-of-view example follows this list)
        • The distance between the lens tip and the target surface was measured and recorded
    • Image Clarity
      • USAF 1951 Tri-Bar – captured the two inner-most sets of lines on the Edmund Optics USAF 1951 Tri-Bar Resolution Test Target
      • Resolution Square – captured a 1 cm circle with a set of concentric squares printed inside, custom designed by Stewart Ferguson and Jay Brudzinski for the AFHCAN program’s otoscope assessment
      • Edmund Optics Resolution Chart (Lines) – captured the vertical lines on the Edmund Optics test target, with the image captured at a distance of 1.5 cm
    • Depth of Field
      • Practical Test – Mesh – captured an image of a 1 cm circle of black nylon mesh on a cloth background at 1.5 cm from the lens
      • Lens Cal Test Target – captured images using the Spyder Lenscal test target, which consists of a millimeter rule attached to a 45° surface. Images were captured from 1.5 cm above the surface to show which markings are clear above and below the focal point.***
    • Color
      • Macbeth Color Chart 2×2 – captured images from 1.5 cm above the blue, red, pink, and green color squares of the Macbeth color chart.
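
For the variable-distance field of view test noted above, the measured distance can also be converted into an approximate horizontal angular field of view using simple geometry. The sketch below assumes a target of known width that just fills the horizontal frame; it is a simplified approximation that ignores the speculum and any barrel distortion, and the numbers shown are placeholders.

    # Sketch: estimating horizontal angular field of view from the variable-distance test.
    # Assumes a target of known width just fills the horizontal frame; values are placeholders.
    import math

    target_width_cm = 1.0   # width of the target that fills the horizontal field of view
    distance_cm = 1.5       # measured distance from the lens tip to the target surface

    # Horizontal FOV angle = 2 * arctan(half the target width / distance)
    fov_degrees = 2 * math.degrees(math.atan(target_width_cm / (2 * distance_cm)))
    print(f"Approximate horizontal FOV: {fov_degrees:.1f} degrees")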

Technical Testing Considerations

There were a few practical considerations we wanted to share related to our technical imaging assessment. First, due to the small image sizes, it would have been impossible to collect consistent images while holding these devices by hand. TTAC constructed a test arm that secured the otoscope devices above the test targets at fixed distances. This enabled more stable and consistent images than could have been captured by hand.

Second, while these technical tests are of value, in practice most programs looking to do their own assessment may want to focus more on clinical evaluation images. While technical imaging produces useful feedback, clinical imaging provides a practical benchmark of a device’s image quality and may satisfy the majority of a program’s evaluation needs.

*** We have included this test set in our list of technical images. In practice, capturing consistent images with this test target proved problematic, and while interesting images were produced, they did not add significant insight to our review. We include this information in this toolkit to show that some test sets may prove infeasible, or may not provide valuable data.

 

Clinical Assessment

Reference images for this portion of the assessment were captured by ENT and Audiology providers. This allowed for more consistent images across devices, and providers were able to give feedback as they interacted with the hardware. Sets of tympanic membrane images were captured with each device from six volunteer subjects (three adult, three pediatric). From these, two adult and two pediatric image sets were selected for the review.

Images were de-identified and sent to Primary Care, ENT, and Audiology providers.  Providers rated the otoscope images on a 1-5 scale (with a score of 5 representing the best possible image). Images were rated in two categories: Image Quality and Color Accuracy. Providers were also asked to indicate what they liked and disliked about each image, and whether they felt the image was of acceptable diagnostic quality.

  • Clinical Review Images:
    • Tympanic Membrane Adult – an image of the right tympanic membrane and ear canal of adult subjects 1 and 2
    • Tympanic Membrane Pediatric – an image of the right tympanic membrane and ear canal of pediatric subjects 1 and 2

Below are two sets of images from the clinical review set for the Horus 3 and MCAM devices:

 

Usability Assessment

Usability testing was completed for two devices after feedback was collected from the Technical and Clinical image review. Usability testing requires significant time and coordination with device end-users. By narrowing down the list of candidate devices it becomes easier to do a thorough review of usability for viable devices.

For our review we captured feedback from five end users. Users were given a chance to familiarize themselves with the devices, including controls, cabling, specula, and fit and feel. Users were then asked to perform an exam with the device. Users were asked to rate the devices on a scale from 1-5 (with a score of 5 representing the best possible score) in the following categories (a sketch of one way to tabulate these ratings follows the list):

  • Fit and Feel
    • Is the device comfortable to hold and operate?
    • Does the device feel like a quality medical device?
  • Durability
    • How well do you feel this device might hold up to daily clinical use?
  • One-handed operation
    • How well do you feel this device can be operated with one gloved hand?
    • Does it balance well?
    • Are the controls accessible and easily manipulated with one hand? Including:
      • Light operation
      • Focus operation
      • Image capture
  • Cleaning
    • How easily can this device be cleaned between clinical users?
  • Specula quality
    • How would you rate the specula for this device in terms of ease of use and quality?
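
As referenced above, the sketch below shows one possible way to record these 1-5 category ratings per user and compute per-category averages. The user names, category keys, and scores are hypothetical placeholders rather than results from this review.

    # Sketch: recording usability ratings (1-5) per user and averaging by category.
    # User names, category keys, and scores are hypothetical placeholders.
    from statistics import mean

    responses = [
        {"user": "User 1", "fit_and_feel": 5, "durability": 4, "one_handed": 4,
         "cleaning": 5, "specula_quality": 4},
        {"user": "User 2", "fit_and_feel": 4, "durability": 4, "one_handed": 3,
         "cleaning": 5, "specula_quality": 5},
    ]

    categories = ["fit_and_feel", "durability", "one_handed", "cleaning", "specula_quality"]
    for category in categories:
        scores = [response[category] for response in responses]
        print(f"{category}: {mean(scores):.1f}")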

The users were also strongly encouraged to provide comments on their impressions of each device. These comments were recorded in the documentation for the review. Finally, users were asked which device they preferred overall from a usability standpoint. Below is a sample of the Usability Testing form we used for this evaluation.

Conducting the Evaluation

 

Technical Image Collection Details

Stabilizing the otoscope during image capture was important for the testing process in order to reduce the introduction of motion blur into the images. Technical images were captured with a tabletop test arm TTAC made for this evaluation. The arm has a wide base for stability, and T-screws and knobs hold it in position. Devices were attached to the arm with clamps, or with zip ties if the device shape made clamps impractical. A protective rubber layer placed between the probe assembly and the clamps was used to avoid damaging or marring the device surfaces.

We conducted two image review sessions over a video call. Adobe Lightroom proved an effective tool for organizing and presenting the images to reviewers. Ideally, this review would be done in person with all reviewers looking at exactly the same image on a large screen. We had to modify our review plan due to COVID-19 social distancing limitations and conduct a virtual review. This does introduce challenges, as screen size, quality, and data transmission issues can produce variance in the image quality being assessed by each reviewer. While conducting this review virtually was more technically challenging, we were able to gather viable feedback using this method.

The time it takes to do this sort of review should not be underestimated. It is important to review these images in the same session, so a longer single session is preferable to multiple shorter sessions. Depending on the number of images you would like to review and the number of reviewers you have, this process can take anywhere from two hours to a full day. Don’t shortchange your process and your reviewers by rushing through your review.

Clinical Image Collection Details

Clinical images were more difficult to physically stabilize due to the need to navigate in and around other parts of the subject’s body.  Mounting the otoscope to a rigid platform was not an option in these cases; a solid desk or chair was provided on which the imager could stabilize her arms.

Three people were involved in the clinical image capture process.  A physician and a technical evaluator were involved in capturing the images, while a third person documented relevant data, such as whether or not a speculum was attached to the end of the device, and other important notes.  While these tasks could have been completed by two people, the addition of a third individual helped immensely with the speed and accuracy of the work being performed.

To collect feedback from reviewing clinicians we used an online survey tool and images stored in PDF documents. This allowed us to capture clinical feedback from remote respondents asynchronously. There was variance in the types and sizes of screens that providers used to review images. While we could not standardize this variable, we noted that it represents a realistic factor in the review of actual clinical images sent for consultation.

General Image Collection Details

All tested devices connected via USB to a Dell Latitude 5480 laptop. As each device’s output functioned as a standard USB webcam, we were able to capture still images using the Microsoft Camera app.
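
Because the devices enumerate as standard USB webcams, still frames could in principle also be captured programmatically rather than through the Camera app. The sketch below uses OpenCV’s VideoCapture as one possible approach; the device index and output filename are assumptions and were not part of our workflow.

    # Sketch: grabbing a single still frame from a USB (UVC) webcam device with OpenCV.
    # The device index (0) and output filename are assumptions; adjust for your setup.
    import cv2

    capture = cv2.VideoCapture(0)    # open the first available webcam device
    ok, frame = capture.read()       # read one frame from the video stream
    if ok:
        cv2.imwrite("otoscope_still.png", frame)   # save the captured frame as a PNG
    else:
        print("Could not read a frame from the device")
    capture.release()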

All images were captured in a single pass for each device, with multiple images captured for each subject.  Images were later reviewed to select the best sample for each subject, with an emphasis on choosing the sharpest images from each set.  Images that were not used were retained for future reference, but were placed in a separate folder from the final evaluation images.
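
The selection described above was done by reviewing the images directly. For programs handling larger image sets, an objective sharpness proxy such as the variance of the Laplacian can help pre-rank candidate frames before visual review; the sketch below illustrates the idea with placeholder file names and was not part of our process.

    # Sketch: ranking candidate frames by a simple sharpness proxy (variance of the Laplacian).
    # The file names are placeholders; higher variance generally indicates a sharper image.
    import cv2

    def sharpness(path: str) -> float:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # load the image as grayscale
        return cv2.Laplacian(gray, cv2.CV_64F).var()    # variance of the Laplacian response

    candidates = ["subject1_frame1.png", "subject1_frame2.png", "subject1_frame3.png"]
    best = max(candidates, key=sharpness)
    print("Sharpest candidate:", best)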

Usability Testing Details

Ease of use was an important factor in our review, with an emphasis on how easily the devices could be used in a realistic clinical environment.  These hands-on experiences can help provide valuable information that might be missed during technical or clinical image capture.

Usability review can be a time-consuming process. It can be done in small groups, but it is important that group sizes remain small (two or three people) in order to make feedback collection manageable and to get more candid individual feedback. It is helpful to treat this as an interview, with the assessor asking questions and recording the feedback from the user. This allows the user to stay hands-on with the device, and the reviewer to ask probing questions of the user.

Post-Mortem Thoughts

Several things became clear through the course of evaluating the video otoscopes.

  • As we noted in our previous otoscope toolkit, the use of so-called “technical” images may not be universally beneficial. Otoscopes are designed to be used in a very specific imaging environment, and it is challenging to create test images that emulate that environment and provide useful data.
  • We had a desire to include images of pathologies in our review, and an IRB review was completed in order to use de-identified patient images. However, due to COVID-19, we determined it was better to capture images from volunteers.
  • Depth of field is a challenging characteristic to measure for otoscopes. The differences in how the devices focused, the types of lenses they used, and the overall variance in depth of field quality made it difficult to capture this characteristic consistently across devices.

The Results

TTAC found three primary types of devices on the market, and can share some of the general trends we discovered among these devices in our review.

Dedicated Video Otoscope – These devices are single purpose video otoscopes and do not have interchangeable lenses. Most of these devices will have integrated light sources with variable brightness control. These devices will often use proprietary specula.

In terms of device characteristics these devices tend to have lower costs and have fewer features than the other types of devices on this list. In terms of use case, these devices may be useful for general screening, or primary care, but specialty providers might find the image quality and field of view unsatisfactory for their needs.

Multi-Exam Cameras with Otoscope Lenses – These multipurpose devices have interchangeable lenses that allow them to capture various sorts of images. Common lens types include general exam, dermatology, and otoscope lenses. These devices generally have integrated light sources with brightness control. They may use either standard or proprietary specula.

These devices generally support good resolution and color accuracy, but they will often have a limited field of view compared to their endoscope-based counterparts. For organizations that need flexible devices that can serve a variety of telemedicine use cases, a multi-exam camera could provide good functionality for a variety of services, including otoscope exams.

Endoscope Adapter with Otoscope Lens – These devices use a camera attached to an otoscope lens using an adapter, commonly a c-mount adapter. These devices can support a variety of rigid and flexible scopes, and the standard specula that these scopes support. These devices can also support a variety of battery powered, light box, or USB powered light sources.

These devices are generally designed for specialty care, and are more expensive than the other devices we reviewed. The main advantage of these devices is the wide field of view produced by the probe lens. As noted previously this wide field of view may result in some barrel distortion, but being able to have the entire tympanic membrane and surrounding ear canal in frame can be an important distinction.

Evaluation Scores

TTAC is unable to share specific numerical results at the device level from this evaluation as we are vendor neutral and federally funded.  Numerical scores may be construed as a product endorsement or recommendation, and we must be very mindful that we do not actively endorse any single product.

Technical Evaluation Score Summary

 

Technical Evaluation Scores by Device Type   Field of View   Resolution   Color   Depth of Field   Average
Endoscope (N=4)                                   3.3            1.8        2.7         1.6          2.3
Multi-Exam Camera (N=3)                           1.7            3.4        3.7         2.3          2.7
Dedicated Otoscope (N=4)                          1.1            1.6        1.0         1.6          1.4
Average Score, All Devices                        2.2            2.3        2.6         1.9          2.2

 

Clinical Evaluation Score Summary

 

Clinical Evaluation Scores by Device Type   Image Quality   Color Accuracy   Average
Endoscope (N=3)                                  3.1              3.0          3.0
Multi-Exam Camera (N=2)                          2.7              3.0          2.8

 

Usability Score Summary

 

Device Usability Scores by Category   Multi-Exam Camera (N=1)   Endoscope (N=1)
Average of Fit & Feel Score                     4.6                   3.6
Average of Durability Score                     4.6                   4.0
Average of Controls Score                       4.6                   3.2
Average of One Handed Score                     4.3                   3.2
Average of Image Capture Score                  4.6                   4.0
Average of Cleaning Score                       4.8                   4.6
Average of Consumables Score                    4.8                   4.6
Average of All Categories                       4.6                   3.9

 

Summary

Otoscope selection for an organization is going to vary based on the imaging, workflow, and logistical needs of that organization. There are trade-offs that need to be considered between image quality (resolution, field of view, color quality) and usability. If your use case is primarily specialty based (ENT, Audiology), it may be more important to invest in a more robust imaging platform like an endoscope. On the other hand, if your use case is more targeted at primary care services, the ease of use and flexibility of a multi-exam camera or the low cost of a dedicated otoscope may be more important.

The TTAC can share a variety of other resources, which may be found in this toolkit.  These include product cut sheets, which allow for a comparison of many features of these otoscope cameras.  Additionally, there are sample images available; these are the same images that were acquired by the TTAC in the course of testing the equipment.  The TTAC is also open to any questions about the process, or about how video otoscopes may be used within your organization and how to match your organizational needs with an appropriate product.
