Video Otoscopes - Whitepaper

Selecting the “best” imaging device for a telehealth program is a balancing act between adequate image quality and acceptable price.  Given that the most expensive otoscopes can cost roughly 40 times as much as the cheapest models, it is more important than ever to spend the time and effort to perform a thorough evaluation of the products on the market.  This should include an assessment of the quality of imagery that the devices are capable of producing.  The following section of this toolkit focuses specifically on how to set up a testing environment to ensure a balanced, fair assessment of the otoscope market within your own organization.

A review of this toolkit's Assessment Process section may help you consider the larger process of establishing minimum requirements, user profiles, and other relevant elements that play into the final purchasing decision. 

The Goals of Testing

The TTAC tested 16 different otoscopes in this evaluation.  After a short, initial hands-on session with the devices, it was clear to us that the differences in image quality between the devices – and occasionally within the same device – demanded a rigorous and controlled testing environment. Color accuracy, blooming, depth of field, field of view, vignetting, lens distortion, focal range, sharpness, and resolution varied widely. We agreed to capture 17 test images that would provide a chance to assess each product.

The images captured were broken into two categories: clinical and technical. The utility of some of the technical images was questioned, as they pushed the devices beyond the scope of normal use.  However, these images may show alternative use cases for the otoscopes, and we decided they were worth including in the evaluation process.  The images captured included:

  • Clinical Images
    • Tympanic Membrane 1 – an image of the right tympanic membrane and ear canal of subject 1
    • Tympanic Membrane 2 – an image of the right tympanic membrane and ear canal of subject 2
    • Uvula – an image taken of subject 1’s uvula from a distance of X cm
    • Intraoral Ulcer – image taken of an intraoral ulcer on the soft palate of subject 1’s mouth, near the right molars, taken at a variable distance
    • Eye – image taken of subject 2’s eye, taken at a variable distance
  • Technical Images
    • Dime (Full) – captured with a speculum attached, with the dime filling the entire field of view
    • Dime (Partial) – captured with a speculum attached, focusing on the area from the ear to the year on the dime
    • Macbeth Color Chart (3x3) – captured a 3x3 patch section in the lower left of the Mini Macbeth color chart with the speculum removed
    • Macbeth Color Chart (3x2) – captured a 3x2 patch section in the lower left of the Mini Macbeth color chart with the speculum removed
    • Accu-Chart (Square) – captured the central large square of the Accu-Chart target, with the image captured at varying distances to fill the frame
    • Accu-Chart (Circle) – captured the lower right circle of the Accu-Chart target, with the image captured at varying distances to fill the frame
    • Edmund Optics Resolution Chart (Boxes) – captured the two innermost sets of lines on the Edmund Optics USAF 1951 Tri-Bar Resolution Test Target (the standard conversion from a target group/element to line pairs per millimeter is sketched after this list)
    • Edmund Optics Resolution Chart (Lines) – captured the vertical lines on the Edmund Optics test target, with the image captured at a distance of 1.5 cm
    • Mesh – captured a 1 cm circle of black nylon mesh on a white paper background
    • Resolution Square – captured a 1 cm circle with a set of concentric squares printed inside, custom designed by Stewart Ferguson and Jay Brudzinski for the AFHCAN program’s otoscope assessment
    • Corel Color Chart (Face) – captured an image of the face of the female subject on the Corel Color Chart at varying distances to frame the image from the top of the head to the top of the blouse
    • Corel Color Chart (Face and Bars) – captured an image of the female subject and the color bars on the Corel Color Chart
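
The Edmund Optics target follows the standard USAF 1951 layout, in which any group and element pair corresponds to a resolution in line pairs per millimeter.  The short sketch below is illustrative only and was not part of the TTAC procedure; it simply shows the standard conversion, which can be handy when noting the smallest element a device can resolve.

    # Standard USAF 1951 relationship: resolution, in line pairs per millimeter,
    # for a given group and element on the target.
    def usaf_1951_lp_per_mm(group, element):
        """Line pairs per millimeter resolved by one group/element pair."""
        return 2 ** (group + (element - 1) / 6)

    # Example: group 2, element 3 corresponds to 2^(2 + 2/6), roughly 5.04 lp/mm.
    print(round(usaf_1951_lp_per_mm(2, 3), 2), "lp/mm")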

Capturing the Images

Note that not all images were captured on the same day; the capture process extended beyond a single day, in part because one manufacturer sent us a new unit after the borrowed unit exhibited issues that needed to be resolved.

Stabilizing the otoscope during image capture was important to reduce the introduction of motion blur into the images. Technical images were captured with the devices mounted to a C-stand with both a C-clamp and a wooden hand screw clamp, with a rubber protective layer placed between the probe assembly and the clamps to avoid damaging or marring the device surfaces.  Clinical images were more difficult to physically stabilize due to the need to navigate in and around other parts of the subject’s body.  Mounting the otoscope to a rigid platform was not an option in these cases; instead, a solid desk or chair was provided on which the imager could stabilize his arms. Each image type was captured by the same imager for every otoscope evaluated.

Stabilizing the otoscope during image capture is itself a debatable testing procedure.  A valid argument against using clamps and a sturdy stand is that such supports will not be available to those using the devices for imaging in a clinical setting.  We ultimately decided that the image capture portion of our tests was meant to demonstrate the best images the otoscopes could produce.  Additional testing should be performed to evaluate ease of use and other qualitative measures.

Devices that had a video output (rather than a data output) were connected to an Imperx VCE-Pro video capture card inserted into the PCMCIA slot of a Dell Latitude D360 laptop.  The connection was made via an S-Video or composite cable, depending on the output of each otoscope.  Products with a USB output were connected to the laptop’s USB port, and the manufacturer’s software was used to capture the images.

Three people were involved in the image capturing process.  Two people – a physician and a technical evaluator – captured the images, while a third person documented relevant data, such as the distance from the subject to the lens, whether or not a speculum was attached to the end of the device, and other important notes.  These tasks could have been completed by two people, but the addition of a third individual helped immensely with the speed and accuracy of the work.

All images were captured in a single pass for each device, with multiple images captured for each subject.  Images were later reviewed to select the best sample for each subject, with an emphasis on choosing the sharpest images from each set.  Images that were not used were retained for future reference, but were placed in a separate folder from the final evaluation images.
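
In our evaluation the sharpest captures were chosen by eye.  If you would like a rough, automated aid for ranking candidate frames, a simple sharpness score such as the variance of the Laplacian can be computed for each capture; the sketch below shows one way to do this with OpenCV, and the folder path is only a hypothetical example.

    # Illustrative aid for ranking candidate captures by sharpness; this was not
    # part of the TTAC procedure.  Requires OpenCV (cv2); the path is hypothetical.
    import glob
    import cv2

    def sharpness_score(path):
        """Variance of the Laplacian; higher values generally mean a sharper image."""
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            raise ValueError("Could not read image: " + path)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    # Rank all captures for one otoscope/subject pair, sharpest first.
    candidates = sorted(glob.glob("otoscope_A/tympanic_membrane_1/*.png"),
                        key=sharpness_score, reverse=True)
    print("Sharpest capture:", candidates[0] if candidates else "none found")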

Additional Tests

The devices were also evaluated on a variety of other measures beyond color accuracy and detail.  The field of view was tested by marking the edges of the visible image on a piece of paper, with a single point placed on the paper where the lens was located.  Lines were drawn from the edge points to the lens point, and a protractor was then used to measure the resulting angle.
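
If a protractor is not handy, the same angle can be computed from the width of the visible region marked on the paper and the distance from the lens to that region.  The short sketch below shows the calculation; the example measurements are hypothetical and were not taken from our tests.

    # Cross-check for the protractor measurement: compute the full field-of-view
    # angle from the marked width and the lens-to-paper distance.
    import math

    def field_of_view_deg(visible_width_cm, lens_distance_cm):
        """Full field of view, in degrees, assuming a symmetric viewing cone."""
        half_angle = math.atan((visible_width_cm / 2) / lens_distance_cm)
        return math.degrees(2 * half_angle)

    # Hypothetical example: a 2.5 cm wide visible region marked 3 cm from the lens.
    print(round(field_of_view_deg(2.5, 3.0), 1), "degrees")  # about 45.2 degrees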

Ease of use was discussed in the course of evaluating the products, with an emphasis on how easily the devices could be used in a realistic clinical environment.  This hands-on experience can provide valuable information that may be missed when imaging with the device mounted to a series of clamps and stands.  The evaluation criteria covered how easily the device could be focused, the depth of field, how comfortable the device was to use, and issues around button placement and device construction.

Evaluating the Images

The final set of images to be evaluated was stored and labeled as required by the TTAC’s custom-made image evaluation software.  This application allows side-by-side comparison of images, with any zooming or panning action applied to every image at the same time.  This allows the evaluation team to compare the same details in each image across all otoscopes.
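
The TTAC application is custom software, but the core idea of a synchronized side-by-side view is easy to approximate.  The minimal sketch below uses matplotlib’s shared axes so that zooming or panning in one panel applies the identical view to the other; the file names are hypothetical.

    # Minimal stand-in for a synchronized side-by-side comparison; this is not the
    # TTAC application, only an illustration of the shared zoom/pan idea.
    import matplotlib.pyplot as plt
    import matplotlib.image as mpimg

    img_a = mpimg.imread("otoscope_A_tm1.png")  # hypothetical file names
    img_b = mpimg.imread("otoscope_B_tm1.png")

    # sharex/sharey ties the two panels together, so interactive zooming or
    # panning in one panel is mirrored in the other.
    fig, (ax_a, ax_b) = plt.subplots(1, 2, sharex=True, sharey=True)
    ax_a.imshow(img_a)
    ax_a.set_title("Otoscope A")
    ax_b.imshow(img_b)
    ax_b.set_title("Otoscope B")
    plt.show()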

The initial testing team, composed of a physician, a nurse, and a technical evaluator, was able to perform the assessment of the images in a couple of hours.  Typically, for an evaluation of this size, your team should plan to spend between two and four hours looking at the images.  A Likert scale of 1 to 5 was used in the assessment process (with one being the lowest score and five the highest), and each image was evaluated separately for color and detail.

The team discovered that emphasizing a single measure at a time – color or detail – provided the most expedient and consistent review.  This means assessing the images in two separate passes, but it allows the team to focus on a single element of each image and prevents an image with terrible color but fantastic detail (or vice versa) from skewing a reviewer’s judgment.
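
If you would like to keep the two-pass scores in a simple, analyzable form, a flat score sheet keyed by device, image, and reviewer is usually enough.  The sketch below is a hypothetical example; the device names and scores are invented and do not reflect any TTAC results.

    # Hypothetical score sheet for the two-pass, 1-5 Likert scoring.  All names
    # and values are invented for illustration only.
    from statistics import mean

    # (device, image, reviewer) -> {"color": score, "detail": score}
    scores = {
        ("Otoscope A", "Tympanic Membrane 1", "Physician"): {"color": 4, "detail": 5},
        ("Otoscope A", "Tympanic Membrane 1", "Nurse"):     {"color": 3, "detail": 4},
        ("Otoscope B", "Tympanic Membrane 1", "Physician"): {"color": 2, "detail": 3},
        ("Otoscope B", "Tympanic Membrane 1", "Nurse"):     {"color": 2, "detail": 4},
    }

    def device_average(device, measure):
        """Mean score for one device on one measure, across images and reviewers."""
        return mean(entry[measure] for key, entry in scores.items() if key[0] == device)

    for device in ("Otoscope A", "Otoscope B"):
        print(device,
              "color:", round(device_average(device, "color"), 2),
              "detail:", round(device_average(device, "detail"), 2))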

These evaluations were performed on a large flat-screen monitor. Ideally, the evaluation monitor should be properly calibrated for color and brightness; if that is not possible, it should at least be representative of the monitors intended for use in the telemedicine system.  These tests are ultimately rather subjective, which limits how scientific such an evaluation can be. That said, end users will apply their own subjective judgment when using the devices, and their perspective on image quality will be as relevant to your program’s success as a balanced, quantitatively measured evaluation of color and quality.

Scores for color may be low for various reasons.  Inaccurate colors are easily spotted, whether they stem from an overall color cast, over-saturation, or a contrast problem.  A properly calibrated monitor helps ensure that such problems are actual flaws in the otoscope itself rather than artifacts introduced by the viewing station.  Additionally, “blooming” – excessive highlights from light reflecting off of the subject surface – may reduce the color score, as it negatively impacts the ability to accurately assess the color of the image.
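
Blooming is usually obvious to the eye, but if you want a quick, objective flag for it, the fraction of near-white pixels in an image is a reasonable proxy.  The sketch below was not part of the TTAC scoring; the file name and threshold are hypothetical.

    # Rough proxy for blooming: the fraction of pixels that are near-white in
    # every color channel.  Requires NumPy and OpenCV; values are hypothetical.
    import cv2
    import numpy as np

    def clipped_highlight_fraction(path, threshold=250):
        """Fraction of pixels at or above the threshold in all three channels."""
        img = cv2.imread(path, cv2.IMREAD_COLOR)
        if img is None:
            raise ValueError("Could not read image: " + path)
        clipped = np.all(img >= threshold, axis=2)
        return float(clipped.mean())

    frac = clipped_highlight_fraction("otoscope_A_tm1.png")
    print("{:.1%} of pixels are near-white (possible blooming)".format(frac))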

Detail scores refer to the relative clarity and sharpness of the images.  Assessing detail often required zooming in on the images to view smaller elements that could be compared between otoscopes, and the areas of fine detail often differed from image to image.  Motion blur (camera movement making the image appear unclear), poor focusing (where the subject was not in focus during imaging), and poor depth of field (where only very small parts of the image are in focus) can all lower the detail score.

A Second Opinion

After the evaluation was completed by the initial testing team, a selection of images from the clinical image set was taken to a group of eight Ear, Nose, and Throat doctors experienced in store-and-forward telemedicine.  The ENT doctors reviewed the photos, and the resulting discussion of scores was interesting, as they had very different views of the overall color and quality of each image.  Trends appeared, with some devices receiving generally higher scores and others generally lower, but even within these broad generalizations there were outliers.  Each clinician had a different opinion of what constituted a good image, and each had their own tolerance for color or detail issues.

The ensuing discussion raised some interesting points, particularly questions about what the devices were to be used for.  Some doctors would have been happy to use some of the devices for general screening, while others were only willing to give high scores to devices that they felt provided diagnostic-quality images. Additional work could be done to develop a more standardized set of metrics for evaluating clinical images, which might add to the number of objective measurements available for assessing video otoscopes.

Post-Mortem Thoughts

Several things became clear through the course of evaluating the video otoscopes.  First, the use of so-called “technical” images was not as beneficial as initially thought.  Many of the technical images were captured outside the normal range of operation for otoscopy, which resulted in a series of images that were poorly illuminated, had poor color, or did not provide useful information about the performance of the products.  Only a handful of the technical images would likely be used again.  Future evaluations will likely include the clinical images, additional clinical images that capture specific pathologies, and technical images that fall within normal operating parameters for the devices (e.g., no images captured from many inches away).

Additionally, several areas could benefit from standardized metrics and more precise measurements, including depth of field, effective resolution, and more standardized criteria for assessing the utility of the devices as screening or diagnostic tools.  Capturing various stages and types of pathology could assist here, as a measure as simple as “can properly diagnose this condition” would provide valuable information about the performance of the otoscopes.

The Results

The TTAC is unable to share the numerical results from this evaluation because we are federally funded.  Numerical scores may be construed as a product endorsement or recommendation, and we must be very mindful not to actively endorse any single product.  The overall evaluation process is much more involved than simply looking at a table of numbers and making a decision. The result of an evaluation needs to take into consideration the end users and their experience, the goals of implementing the device (screening as opposed to diagnosis), the needs of the organization, and the existing infrastructure that the equipment needs to work with.

The TTAC can share a variety of other resources, which may be found in this toolkit.  These include product cut sheets, which allow for a comparison of the many features of the otoscopes, as well as sample images – the same images the TTAC acquired in the course of testing the equipment.  The TTAC is also open to questions about the process, about how video otoscopes may be used within your organization, and about how to match your organizational needs with an appropriate product.
