- Retrospective study using 200 ThinPrep urine cytology slides.
- Ground truth: NHGUC (n=100), AUC (n=35), SHGUC (n=32), HGUC (n=33).
- Slides digitized using: (1) Mikroscan SLxCyto (Mikroscan-WSIs); (2) customized Huron scanner (Huron-WSIs).
- WSIs were analyzed using AIxURO, a disease-specific AI algorithm for bladder cancer.
- Three-arm blinded review by 1 cytopathologist and 2 cytologists, with 2-week washout periods:
- Arm 1: Conventional microscopy
- Arm 2: AIxURO with Mikroscan-WSIs
- Arm 3: AIxURO with Huron-WSIs
- Performance Metrics:
- Comparison of study diagnoses with the urine cytology ground-truth diagnosis (n=200) across the 3 arms, using the following threshold: AUC+ (AUC, SHGUC, and HGUC) cases as positive; NHGUC cases as negative (a metric sketch follows this study summary).
- Assessment of agreement between study diagnoses and surgical pathology-confirmed bladder cancer (n=72) among the three arms.
- Evaluation of bladder cancer prediction in hematuria-indicated cases (n=16) across the three arms.
- Urine cytology ground truth cases: Arm 2 and Arm 3 exhibited higher sensitivity (85.0% and 88.3%) than Arm 1 (79.3%), with comparable accuracy across all arms (Arm 1: 86.8%, Arm 2: 85.3%, Arm 3: 85.3%).
- Biopsy-confirmed bladder cancer cases: Arm 2 and Arm 3 showed improved sensitivity (92.0% and 93.2%) and accuracy (82.9% and 82.4%) compared to Arm 1 (84.6% sensitivity, 78.2% accuracy).
- Hematuria cases: Arms 2 and 3 achieved superior sensitivity (96.7% and 100.0%) and accuracy (89.6% and 91.7%) compared to Arm 1 (90.0% sensitivity, 85.4% accuracy).
AIxURO consistently enhanced diagnostic sensitivity and maintained accuracy across multiple scanners, outperforming microscopy in detecting bladder cancer. It demonstrated strong predictive performance in high-risk subgroups, supporting its utility in real-world clinical workflows and digital cytology integration.
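The arm-level comparison above rests on a binary cut of the TPS categories. A minimal sketch of the metric computation under that threshold (AUC, SHGUC, and HGUC positive; NHGUC negative) is shown below; the label strings and helper function are illustrative assumptions, not the study's code.

```python
# Illustrative sketch (not the study's code): sensitivity, specificity, and
# accuracy under the AUC+ threshold (AUC/SHGUC/HGUC = positive, NHGUC = negative).
POSITIVE = {"AUC", "SHGUC", "HGUC"}  # NHGUC is treated as the negative class

def binary_metrics(ground_truth, study_dx):
    """Return (sensitivity, specificity, accuracy) for paired diagnoses."""
    tp = fp = tn = fn = 0
    for gt, dx in zip(ground_truth, study_dx):
        gt_pos, dx_pos = gt in POSITIVE, dx in POSITIVE
        if gt_pos and dx_pos:
            tp += 1
        elif gt_pos:
            fn += 1
        elif dx_pos:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Example use, per arm against the n=200 ground-truth panel:
# sens, spec, acc = binary_metrics(gt_labels, arm2_labels)
```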
- 71 ThinPrep thyroid FNAC slides selected by consensus cytology diagnosis
- Cases included: TBS-II (n=35), TBS-IV (n=6), TBS-VI (n=30), confirmed by biopsy
- Slides digitized using 3DHistech scanner to create paired whole-slide images:
- S-WSI: single-layer WSI
- 7-WSI: seven-layer Z-stacked WSI
- AI-assisted review used an AIxTHY model to detect cancer cells and guide interpretation
- Three cytologists independently reviewed both S-WSI and 7-WSI sets
- Total of 213 reads (71 cases × 3 reviewers) analyzed
- Two diagnostic thresholds:
- Threshold 1: Positive = TBS-V/VI; Negative = TBS-II
- Threshold 2: Positive = TBS-IV/V/VI; Negative = TBS-II
- Outcomes measured:
- Binary diagnostic sensitivity and specificity (under both thresholds)
- Agreement with consensus diagnoses
- Interobserver agreement (Cohen’s κ)
- Threshold 1 (TBS-V/VI vs. TBS-II):
- Sensitivity: 7-WSI 88.9% vs. S-WSI 81.1%
- Specificity: 7-WSI 94.3% vs. S-WSI 96.2% (not significant)
- Threshold 2 (TBS-IV/V/VI vs. TBS-II):
- Sensitivity: 7-WSI 86.1% vs. S-WSI 80.6% (p < 0.05)
- Specificity: 7-WSI 88.6% vs. S-WSI 91.4%
- Consensus Agreement (%): All cases: 7-WSI 74.6% vs. S-WSI 61.5%
- TBS-VI: 65.6% vs. 36.7%
- TBS-IV: 44.4% vs. 27.8%
- TBS-II: 87.6% vs. 88.6%
- Interobserver Agreement (Cohen’s κ): Overall: 0.366 vs. 0.351
- TBS-IV: 7-WSI 0.189 vs. S-WSI 0.075
- TBS-VI: 0.420 vs. 0.319
- TBS-II: 0.549 vs. 0.604
Seven-layer Z-stacked WSIs (7-WSI) enhanced AI-assisted thyroid cancer diagnosis and improved interobserver agreement among cytologists, especially in indeterminate (TBS-IV) and malignant (TBS-VI) cases, supporting their clinical value over single-layer WSIs.
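The interobserver agreement above is reported as Cohen's κ. A minimal sketch of an unweighted κ between two reviewers' TBS category calls follows; how the three reviewers were paired or averaged is an assumption here, not stated in the summary.

```python
# Unweighted Cohen's kappa between two reviewers' category calls
# (e.g. "TBS-II", "TBS-IV", "TBS-VI"). Pairing/averaging convention is assumed.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / (n * n)
    return (observed - expected) / (1 - expected)  # undefined if expected == 1

# One common convention for three reviewers: average the pairwise kappas.
# overall = (cohens_kappa(r1, r2) + cohens_kappa(r1, r3) + cohens_kappa(r2, r3)) / 3
```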
- Develop a machine-learning-based artificial intelligence (AI) model to assist in monitoring morphologic changes in human embryonic stem cells (hESC) using color bright-field microscopy images
- Pilot Study: Train the model to estimate the degree of stem cell differentiation at the Hepatic Progenitor Cell (HPC) stage, the critical checkpoint for hepatocyte differentiation, based on cellular morphologic features
- Initial training set: Expert annotated images of 341 successful HPC differentiations and 366 failed HPC differentiations
- Cross-validation set: Images of 86 successful and 51 failed HPC results
- Test set: Images of 64 successful and 29 failed HPC results
- Failed differentiation = no differentiation or differentiation into non-hepatocyte tissue types
- Performance Metrics: Accuracy and F1 scores of test set
The AI model showed excellent performance compared with the conventional method of determining the degree of hepatocyte differentiation:
- Accuracy = 0.978
- F1 score = 0.975
AI-assisted models have the potential to improve the detection of degrees of hepatocyte differentiation, thereby improving the efficiency of a manual process that is very time-intensive.
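The reported metrics are standard binary accuracy and F1 on the held-out test set. The sketch below shows that computation for the successful-vs-failed HPC call; treating "successful" as the positive class is an assumption.

```python
# Accuracy and F1 for the binary successful-vs-failed HPC differentiation call.
# The positive-class label is an assumption for illustration.
def accuracy_and_f1(y_true, y_pred, positive="successful"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, f1
```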
- Descriptive study using deidentified urine cytology slides digitized into WSI
- Artificial intelligence model trained on slides using an “active learning” approach to improve results
- Annotated WSIs were initially used to train the computational model; experts then reviewed the results and fed corrections back to the model for further learning
- Sequence was repeated until satisfactory results were achieved
- AI deep-learning model was able to differentiate nucleus from cytoplasm to calculate N/C ratio using whole slide images (WSI)
- The model correctly provided statistical data (N/C ratio and nuclear size) on cells and successfully categorized them as atypical (NHGUC or AUC) or suspicious (SHGUC or HGUC) cells
AI-assistance for interpreting urine cytology using The Paris System for Reporting Urine Cytology has the potential to enhance abnormal cell detection and diagnostic concordance.
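The N/C-ratio computation described above can be illustrated as below, assuming the model produces per-cell nucleus and whole-cell segmentation masks; the area-based definition (nuclear area over whole-cell area) and the example cutoff are assumptions, not the model's documented behavior.

```python
# Sketch of a per-cell N/C ratio from binary segmentation masks (pixel areas).
# Some workflows divide by cytoplasm-only area instead; definitions vary.
import numpy as np

def nc_ratio(nucleus_mask: np.ndarray, cell_mask: np.ndarray) -> float:
    """Nuclear-to-cytoplasmic ratio approximated as nuclear area / whole-cell area."""
    nuclear_area = int(nucleus_mask.sum())
    cell_area = int(cell_mask.sum())
    return nuclear_area / cell_area if cell_area else float("nan")

# Illustrative flagging rule only; the cutoff is not taken from the study:
# is_suspect = nc_ratio(nucleus_mask, cell_mask) >= 0.5
```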
- Development of an automated deep-learning AI model for circulating tumor cell (CTC) analysis and enumeration
- Fluorescent microscopy CTC images (CK+/CD45-/DAPI+) collected from blood samples of non-small cell lung carcinoma patients using the CMx CTC capture platform
- AI model developed with active learning, trained on expert-annotated images from 20 slides and validated with 4 additional images
- 18 new test images studied for performance
- AI model predicted 34% more total CTCs than current methods (1775 vs. 1328)
- AI model recovered 45% more total CTC events than the original human annotation, including CTCs the annotation had missed (2507 vs. 1732 events)
- AI model produced 90% time savings over conventional methods of enumeration (< 20 min vs approximately 4 hours)
- The model correctly characterized features of circulating tumor microthrombi (CTM), including CTC clusters and CTC-associated immune cells
An AI model trained to detect and enumerate circulating tumor cells in non-small cell lung cancer patients outperformed semiautomated methods, with higher sensitivity and significantly reduced review time (less than 20 minutes) for CTC enumeration in lung cancer specimens.
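The CK+/CD45-/DAPI+ rule that defines a CTC event can be sketched as a simple channel-threshold check; the channel names and intensity cutoffs below are illustrative placeholders, since the model actually learns these calls from annotated fluorescent images.

```python
# Illustrative CTC call: cytokeratin-positive, CD45-negative, DAPI-positive.
# Intensity thresholds are placeholders, not values from the study.
from dataclasses import dataclass

@dataclass
class Event:
    ck: float    # cytokeratin channel intensity
    cd45: float  # CD45 (leukocyte marker) channel intensity
    dapi: float  # DAPI (nuclear stain) channel intensity

def is_ctc(e: Event, ck_min=100.0, cd45_max=50.0, dapi_min=80.0) -> bool:
    """CK+/CD45-/DAPI+ rule with illustrative cutoffs."""
    return e.ck >= ck_min and e.cd45 <= cd45_max and e.dapi >= dapi_min

def enumerate_ctcs(events) -> int:
    return sum(is_ctc(e) for e in events)
```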
- Development of a deep-learning based image analysis model for cell classification and enumeration in urine cytology
- De-identified whole slide images (WSI) digitized and “active learning” approach used to train the model
- 3 sub-images (3335 cells) annotated by 3 domain experts for initial training
- Cells classified into 7 categories: High grade urothelial carcinoma (HGUC), cluster HGUC, atypical neoplastic cell, atypical reactive cell, inflammatory cell, epithelial cell, and unidentified cell, with expert feedback to the model
- Pilot study after training involved 2 sub-images from 5 digital slides (10 total sub-images)
- AI model successfully learned the morphologies of all 6 cell types and was able to quantify total cell counts in each class
An artificial intelligence model that enumerates and classifies abnormal urothelial cells may improve urine cytology throughput, accuracy and reproducibility.
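The “active learning” training loop and per-class enumeration described above can be summarized schematically as below; the function names, stopping rule, and data structures are placeholders, not the study's actual pipeline.

```python
# Schematic active-learning loop with expert feedback, plus per-class counts.
from collections import Counter

CLASSES = [
    "HGUC", "cluster HGUC", "atypical neoplastic cell", "atypical reactive cell",
    "inflammatory cell", "epithelial cell", "unidentified cell",
]

def active_learning_loop(model, annotated_cells, unlabeled_images,
                         train, predict, expert_review, max_rounds=5):
    """Train, have experts correct predictions, fold corrections back in."""
    for _ in range(max_rounds):
        model = train(model, annotated_cells)           # retrain on current labels
        predictions = predict(model, unlabeled_images)  # propose new labels
        corrections = expert_review(predictions)        # expert feedback step
        if not corrections:                             # satisfactory results
            break
        annotated_cells = annotated_cells + corrections
    return model

def enumerate_by_class(cell_predictions):
    """Total cell counts per class for a sub-image or whole-slide image."""
    counts = Counter(cell_predictions)
    return {c: counts.get(c, 0) for c in CLASSES}
```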