The COVID-19 pandemic has caused a shift from on-campus to remote online examinations, which are usually difficult to invigilate. Closed-ended question formats, such as true–false (TF), are particularly suited to these examination conditions because they allow automatic marking by computer software. While previous studies have reported the score characteristics of TF questions in conventional supervised examinations, this study investigates the efficacy of TF questions in online, unsupervised examinations at the undergraduate level in Biomedical Engineering. We examine the TF and other question-type scores of 57 students across three examinations held in 2020 under online, unsupervised conditions. Our analysis shows a significantly larger coefficient of variation (CV) in scores for TF questions (42.7%) than for other question types (22.3%). The high CV in TF questions may be explained by differing answering strategies among students, with 13.3 ± 17.2% of TF questions left unanswered (zero marks) and 16.4 ± 11.5% guessed incorrectly (negative marks awarded). In unsupervised, open-book examinations, where the sharing of answers among students is a potential risk, questions that induce a larger variation in responses may be desirable for differentiating among students. We also observed a significant relationship (r = 0.64, p < 0.05) between TF scores and overall subject scores, indicating that TF questions are an effective predictor of overall student performance. Our results from this initial analysis suggest that TF questions are useful for assessing biomedical-themed content in online, unsupervised examinations, and are encouraging for their ongoing use in future assessments.