- 242 urine cytology slides (74 AUC, 56 SHGUC, 112 HGUC) from patients with biopsy-confirmed HGUC within a 6-month window
- 162 (67%) Cytospin, 60 (25%) TP-UroCyte, 20 (8%) BD CytoRich
- 124,980 abnormal cells inferred by AI-assisted digital software system (AIxURO)
- 54% of biopsy-proven CIS or HGUC cases had a cytology interpretation of AUC or SHGUC; 46% had a cytology interpretation of HGUC
- N/C ratios and nuclear sizes were evaluated for the top 24 most abnormal cells and for the suspicious-cell vs atypical-cell categories, with statistical significance testing
- Average N/C ratios:
- Top 24 abnormal cells = 0.66 (95% CI: 0.65-0.66)
- Suspicious cell category = 0.65 (95% CI: 0.64-0.65)
- Atypical cell category = 0.57 (95% CI: 0.57-0.57) (p < 0.0001)
- Average nuclear size:
- Top 24 abnormal cells = 108.1-116.8 µm²
- Atypical cell category = 86.3-87.7 µm²
- Significantly larger nuclear sizes occurred in the top 24 abnormal and suspicious cell categories than in atypical cells category
- Nuclear size of biopsy-proven CIS cells was significantly larger than that of biopsy-proven HGUC cells for all AI categories
- Slightly more HGUC biopsy cases were cytologically interpreted as AUC or SHGUC than as HGUC
AI assistance demonstrates a predictable quantitative advantage for the assessment of nuclear size and N/C ratio in atypical and suspicious cells under The Paris System. The average N/C ratio observed (0.66) is lower than the 0.70 suggested for HGUC/SHGUC in TPS.
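As a rough illustration of the N/C-ratio metric summarized above, the sketch below computes per-cell N/C ratios from nuclear and cell areas and reports a mean with a normal-approximation 95% CI. It is a minimal sketch with invented measurements, not AIxURO's actual code.

```python
import numpy as np

def nc_ratio(nuclear_area_um2, cell_area_um2):
    """Per-cell N/C ratio: nuclear area divided by total cell area."""
    return nuclear_area_um2 / cell_area_um2

def mean_with_ci(values, z=1.96):
    """Mean with a normal-approximation 95% confidence interval."""
    m = values.mean()
    half_width = z * values.std(ddof=1) / np.sqrt(len(values))
    return m, m - half_width, m + half_width

# Hypothetical measurements for the 24 most abnormal cells on one slide.
rng = np.random.default_rng(0)
nuclear = rng.normal(110.0, 10.0, size=24)        # nuclear areas in µm² (invented)
cell = nuclear / rng.normal(0.66, 0.03, size=24)  # cell areas back-calculated (invented)

ratios = nc_ratio(nuclear, cell)
print(mean_with_ci(ratios))  # mean N/C ratio with approximate 95% CI
```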
- Retrospective cohort study
- 185 upper tract urine cytology slides (168 NHGUC, 14 AUC, 2 SHGUC, 1 HGUC), with interpretations confirmed by one expert cytopathologist (CP) and one experienced cytologist (CT); diagnostic discrepancies were resolved by multiheaded microscopy review by an expert panel
- Digitized using Aperio AT2 scanner (Leica Biosystems) at 40X and single Z-plane
- Deep-learning training
- Cases ranked by AI-driven software into low risk (N/C 0.5 to 0.7) or high risk (N/C > 0.7)
- 37 discrepant results after AI analysis (AIxURO)
- Discrepancies (AIxURO vs conventional):
- Cytopathologist:
- Overcalled 1 NHGUC as SHGUC
- Undercalled 2 AUC as NHGUC
- Cytologist:
- Overcalled 3 NHGUC as AUC and 2 AUC as SHGUC
- Undercalled 9 AUC as NHGUC, and 1 SHGUC
- NHGUC (20 of 168; 11.9% discrepancy rate)
Diagnostic accuracy was at least 85.7% for atypical and suspicious cells in the AUC-and-above categories, with AUC showing the lowest concordance (21.4% accuracy). The use of AI assistance markedly reduced the miscall rate for the CP, but not the CT, compared with reported misdiagnosis rates as high as 27.6% (from a series of 57 million cases in China).
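The low-risk/high-risk banding by N/C ratio quoted in the case-ranking bullet above (N/C 0.5 to 0.7 low risk, N/C > 0.7 high risk) amounts to a simple threshold rule. The sketch below is illustrative only; the function name and return labels are hypothetical and this is not the AIxURO ranking logic.

```python
def risk_band(nc_ratio):
    """Band a cell/case by N/C ratio: > 0.7 high risk, 0.5-0.7 low risk."""
    if nc_ratio > 0.7:
        return "high risk"
    if nc_ratio >= 0.5:
        return "low risk"
    return "below atypia threshold"

print([risk_band(r) for r in (0.45, 0.62, 0.74)])
# -> ['below atypia threshold', 'low risk', 'high risk']
```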
- 116 urine cytology slides (76 cytospin; 40 CytoRich) reviewed in 3 arms (microscopy, corresponding WSI, and AI-assisted digital review with AIxURO) by 1 experienced cytopathologist and 2 cytologists
- Performance metrics calculated for each arm included binary (negative vs positive) diagnosis, inter- and intra-observer agreement, and screening time
- Atypical Urothelial Cells (AUC): AIxURO improved diagnostic sensitivity (from 25-30.6% to 63.9%), PPV (from 21.6-24.3% to 31.1%), and NPV (from 91.3-91.6% to 95.3%)
- Suspicious for High-Grade Urothelial Carcinoma (SHGUC): AIxURO improved sensitivity (from 15.2-27.3% to 33.3%), PPV (from 31.3-47.4% to 61.1%), and NPV (from 91.6%-92.7% to 93.3%).
- Binary Diagnosis (Negative vs Positive [AUC, SHGUC, or HGUC]): AIxURO improved sensitivity (from 77.8-82.2% to 90.0%) and NPV (from 91.7-93.4% to 95.8%)
- Interobserver agreement: Moderate agreement across all methods of evaluation (κ = 0.57-0.61); the cytopathologist showed the highest intraobserver agreement (κ = 0.75-0.88)
- Screening time: AIxURO significantly reduced screening time compared to conventional microscopy for all observers (by 52.3% to 83.2%)
The most significant finding is the marked reduction in screening time for AI-enhancement (AIxURO) compared with conventional microscopy (up to 83% less time required). Implementation of AI enhancement (AIxURO) for urine cytology interpretation improves diagnostic sensitivity, PPV and NPV for AUC and SHGUC, but not HGUC.
AIxURO improves the sensitivity and NPV of a binary interpretation of negative or positive. The interobserver agreement across all methods of review (microscopy, WSI and AIxURO) is moderate (κ = 0.57-0.61), with the cytopathologist showing the highest intraobserver agreement.
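For reference, the sensitivity, specificity, PPV, NPV, and kappa figures reported above follow from standard definitions. The sketch below recomputes them from a hypothetical slide-level 2x2 comparison against the ground truth (positive = AUC/SHGUC/HGUC, negative = NHGUC) and from invented paired reviewer calls; all counts and calls are illustrative, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical slide-level confusion counts against the ground truth (invented).
tp, fp, fn, tn = 27, 6, 3, 80

sensitivity = tp / (tp + fn)   # true positives / all disease-positive slides
specificity = tn / (tn + fp)   # true negatives / all disease-negative slides
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value
print(f"sens={sensitivity:.3f} spec={specificity:.3f} ppv={ppv:.3f} npv={npv:.3f}")

# Interobserver agreement between two reviewers' categorical calls (Cohen's kappa);
# the paired calls below are invented for illustration.
reviewer_a = ["NHGUC", "AUC", "SHGUC", "NHGUC", "HGUC", "AUC"]
reviewer_b = ["NHGUC", "NHGUC", "SHGUC", "NHGUC", "HGUC", "AUC"]
print("kappa =", cohen_kappa_score(reviewer_a, reviewer_b))
```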
- 200 urine cytology slides (100 positive, 100 negative) were scanned to create whole slide images (WSI) that were analyzed by an artificial intelligence (AI)-assisted software program (AIxURO) to detect and quantify characteristics of abnormal urothelial cells
- Three study arms, each performed by 3 reviewers (1 cytopathologist, 2 cytologists) rendering the Paris System (TPS) 2.0 interpretation (2-week washout period between each arm):
- Arm 1: Glass slide microscopic interpretation
- Arm 2: Whole slide image interpretation
- Arm 3: WSI with AI-assisted interpretation (AIxURO)
- Performance Metrics: Total screening/reporting time, sensitivity and specificity compared to the ground truth diagnosis
- Average screening and reporting time was significantly reduced by 25.8%-58.7% (p < 0.05)
- Microscopy only and AI-assisted (AIxURO) outperformed WSI-only review in both sensitivity and specificity
- AIxURO was slightly less sensitive than microscopy (66.0-87.0% vs. 86.0-89.0%) but more specific (89.0-95.0% vs. 81.0-88.0%).
- Use of AIxURO reclassified some ground-truth diagnoses from HGUC or SHGUC to AUC or NHGUC.
The use of an AI-assisted software platform (AIxURO) for detection of bladder carcinoma improves overall specificity in comparison with microscopic glass slide or whole slide imaging review, while significantly reducing screening and reporting time.
- 1856 urine cytology cases (1466 negative and 390 positive)- AI training set
- 169 urine cytology cases (88 negative, 81 positive)- Validation set
- AIxURO classifies abnormal urothelial cells into 2 categories based on The Paris System 2.0: "Suspicious" (SHGUC or HGUC) and "Atypical" (AUC), with the final interpretation deferred to a pathologist
- Logistic regression was performed to predict the presence of cancer, with variables including the total number of suspicious cells and the total number of atypical cells; predictive accuracy was assessed using sensitivity and specificity
- The optimal cut-off in the training set (based on the total number of atypical cells) was 10 cells (cytospin) and 49 cells (CytoRich)
- Training Set Sensitivity and Specificity: 75.9% and 73.0%
- Validation Set Sensitivity and Specificity: 75.3% and 87.5%
The logistic model supports the optimal cut-off values of at least 10 cells (cytospin) and 49 cells (CytoRich) for the number of atypical cells required for a high concordance with bladder cancer as the final outcome.
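A minimal sketch of the kind of logistic-regression and cut-off analysis described above, using simulated case-level cell counts. The variable names, the simulated data, and the Youden-index rule for choosing the atypical-cell cut-off are assumptions for illustration, not the study's actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

# Simulated case-level data: AI-derived atypical and suspicious cell counts
# and a binary cancer outcome loosely driven by those counts (all invented).
rng = np.random.default_rng(1)
n = 400
atypical = rng.poisson(8, n) + rng.poisson(12, n) * rng.integers(0, 2, n)
suspicious = rng.poisson(1, n) + rng.poisson(4, n) * rng.integers(0, 2, n)
logit = -3.0 + 0.12 * atypical + 0.4 * suspicious
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Logistic regression on the two counts, as described in the summary above.
X = np.column_stack([atypical, suspicious])
model = LogisticRegression().fit(X, y)
print("coefficients (atypical, suspicious):", model.coef_[0])

# One reasonable way to pick an atypical-cell cut-off: scan sensitivity/specificity
# trade-offs along the count and take the Youden-optimal threshold.
fpr, tpr, thresholds = roc_curve(y, atypical)
best = thresholds[np.argmax(tpr - fpr)]
print("illustrative atypical-cell cut-off:", best)
```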
- 52 urine cytology slides (cytospin, ThinPrep, and CytoRich) scanned with 21 Z-planes and a heuristic scan simulation method to generate whole slide images (WSI) using a Leica Aperio AT2 scanner
- An AI algorithm performed inference on the WSI from the 21 Z-planes to quantify the total number of cells classified as suspicious for high-grade urothelial carcinoma (SHGUC) or high-grade urothelial carcinoma (HGUC), collectively [SHGUC+]
- The heuristic scan simulation calculated the total number of SHGUC+ using the 21 Z-plane scan data
- Performance metrics evaluated were SHGUC+ cell coverage rates, scanning times, file size, and AI-aided interpretation of WSI compared to the original cytology diagnosis for the 21 Z-plane scan and the heuristic scan
- SHGUC+ Coverage Rates: Heuristic scanning coverage rates were similar to 5 Z-plane scans for all 3 preparation types (0.78 to 0.91 vs 0.75 to 0.88; p = 0.451 to 0.578)
- Scanning Time: Heuristic scanning significantly reduced scanning time (137.2 to 635.0 seconds vs 332.6 to 1,278.8 seconds; p < 0.05)
- Image File Size: Heuristic scanning significantly reduced image file size (0.51 to 2.10 GB vs. 1.16 to 3.10 GB; p < 0.05)
- AI-aided Interpretation: Heuristic scanning had higher rates of accurate interpretation compared to single Z-plane scanning (62.5% vs. 37.5%)
Heuristic scanning showed improved scanning times and AI-aided cytologic interpretation while reducing overall image file size and maintaining similar scanning coverage for SHGUC+ cells for 3 urine cytology preparation types.
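The SHGUC+ coverage rate compared above can be thought of as the fraction of cells recovered by a chosen subset of Z-planes relative to the full 21-plane stack. The sketch below illustrates that calculation with hypothetical per-plane detections; it does not reproduce the heuristic scan simulation itself.

```python
def coverage_rate(detections_by_plane, chosen_planes):
    """Fraction of all SHGUC+ cells that appear in at least one chosen Z-plane.

    detections_by_plane maps Z-plane index -> set of SHGUC+ cell IDs seen there.
    """
    all_cells = set().union(*detections_by_plane.values())
    covered = set().union(*(detections_by_plane[z] for z in chosen_planes))
    return len(covered) / len(all_cells) if all_cells else 1.0

# Hypothetical detections for a 21-plane scan (cell IDs per plane are invented).
planes = {z: {f"cell{(z + k) % 12}" for k in range(3)} for z in range(21)}
print(coverage_rate(planes, chosen_planes=[8, 9, 10, 11, 12]))  # e.g. a 5-plane subset
```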
- 116 urine cytology slides (76 Cytospin, 40 CytoRich)
- Consensus diagnosis by two senior cytopathologists for ground truth, using TPS2.0, resulting in
- 30 positive slides (AUC/SHGUC/HGUC)
- 86 negative slides (NHGUC)
- 3-arm study with 1 cytopathologist (CP) and 2 cytologist (CT) reviewers and a 2-week washout period between arms; results were also analyzed with the cytopathologist paired with each cytologist (CP+CTA, CP+CTB) to mimic the clinical practice of cytologist screening with referral to a cytopathologist
- Arm 1: Microscopy
- Arm 2: Digital whole slide image review
- Arm 3: Digital image review using artificial intelligence software (AIxURO)
- Performances Metrics: Sensitivity, specificity, PPV, NPV, accuracy and total diagnostic (review) time
- Sensitivity: Improved with AI-assistance for CP+CTA (90% vs 76.7%) and unchanged for CP+CTB (76.7% vs 76.7%) compared with microscopy (but not with WSI review alone)
- Negative Predictive Value: Improved with AI-assistance for CP+CTA (96.4% vs 92.2%) and essentially unchanged for CP+CTB (92% vs 92.3%) compared with microscopy (but not with WSI review alone)
- Specificity: Decreased with AI-assistance for CP+CTA (93% vs 96.5%) and CP+CTB (92% vs 92.3%) compared to microscopy alone
- Positive Predictive Value: Decreased with AI-assistance for CP+CTA (81.8% vs 88.5%) and CP+CTB (79.3% vs 92%) compared to microscopy alone
- Overall, Arm 2 (WSI review) showed no improvement compared to microscopy for either CP+CTA or CP+CTB
- Overall Review Time: AI-assisted review decreased the total review time (72.2 min and 110.4 min) compared with microscopy (210.2 min and 244.7 min), whereas WSI review took as long as or longer (227.1 min and 243.8 min) than microscopy
AI-assisted urine cytology review markedly reduces review time while increasing sensitivity and NPV using TPS2.0 compared to conventional microscopy. However, specificity, PPV and accuracy are slightly diminished. Pairing a cytopathologist (CP) with different cytologists (CT) also influences the diagnostic outcomes and metrics.
- 116 urine cytology slides with consensus diagnoses (ground truth) assigned by a panel of experts (86 NHGUC, 12 AUC, 11 SHGUC, and 7 HGUC), scanned with a Leica Aperio AT2 to create whole slide images (WSI)
- 1 cytopathologist and 2 cytologists reviewed all slides/images in each arm independently, recording the time required to reach a diagnosis and The Paris System (TPS) cytologic diagnosis, with a 2-week washout period between reviews
- Arm 1: Microscopic review of the cytology slide
- Arm 2: Review of the scanned whole slide image (WSI)
- Arm 3: Review of images with AI-assistance software (AIxURO)
- Metrics: TPS diagnostic category compared with the expert panel, and total time spent on review
- NHGUC, SHGUC and HGUC: AIxURO showed higher specificity than microscopy, but lower sensitivity
- AUC: AIxURO showed higher sensitivity but lower specificity than microscopy
- The performance of Arm 2 (using the whole slide digital image only for analysis) was poorest overall, compared to the other 2 arms
- There were performance inconsistencies between reviewers. For example, sensitivity for SHGUC was decreased for the pathologist and one cytologist, whereas the other cytologist noted an increase in sensitivity. One cytologist had a decrease in sensitivity for HGUC compared to the other 2 reviewers.
- AIxURO showed the largest reduction in time spent on review (32-45% less for the pathologist and 10-62% for the cytologists). The cytopathologist took the longest time to review AUC and the shortest for HGUC, whereas the cytologists took the most time to review SHGUC and the least on NHGUC.
AI-assisted AIxURO outperformed microscopy in diagnostic accuracy for AUC while maintaining comparable accuracy across other TPS categories and significantly reducing total review time for all reviewers. WSI review alone did not improve diagnostic accuracy or efficiency.
- 14 urine cytology specimens, equally divided and prepared as CytoSpin and ThinPrep specimens, were scanned for WSI by a Leica Aperio AT2 and a Hamamatsu NanoZoomer S360 in 3 focus modes (default, semiautomatic, and manual) for single-layer scanning and in manual focus mode for 21 Z-layer scanning
- Performance metrics evaluated included scanning success rate, AI-algorithm-inferred atypical cell numbers and coverage rates (atypical cells in single or multiple Z-layers divided by total atypical cells), scanning time, and image file size
Scanning Success Rates
- Default Mode Scanning: 85.7% (Cytospin, Leica) and 92.9% (ThinPrep, Leica; Cytospin and ThinPrep, Hamamatsu) success rates
- Semi-Auto Mode Scanning: 92.9% (Cytospin, Leica; Cytospin, Hamamatsu and ThinPrep, Hamamatsu) or 100% (ThinPrep, Leica) success
- Manual Scanning: 100% success (all preps with all scanners)
Scanning Times (median)
- Cytospin, Leica: 1010 seconds
- Cytospin, Hamamatsu: 275 seconds
- ThinPrep, Leica: 5357 seconds
- ThinPrep, Hamamatsu: 1429 seconds
WSI Image File Sizes (manual focus mode, 21 Z-layer scan settings)
- Cytospin, Leica: 2.0 GB
- Cytospin, Hamamatsu: 4.4 GB
- ThinPrep, Leica: 13.1 GB
- ThinPrep, Hamamatsu: 25.5 GB
Semi-automatic and manual focus modes provide the most successful slide scanning with rates up to 100%. A minimum of 9-layer Z-stacking at 1 µm intervals is necessary to cover 80% of atypical cells. Cytospin preparations take less time to scan and result in smaller file sizes than do ThinPrep slides, regardless of the scanner. Z-stacking enhances AI inferred quality and coverage rates of atypical cells but with longer scanning times and larger image files.
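The "minimum of 9 Z-layers at 1 µm intervals for 80% coverage" figure above suggests a simple search: grow a symmetric stack around the central focal plane until a target coverage is reached. The sketch below illustrates that idea with invented per-layer detections; it is not the study's analysis code.

```python
def min_layers_for_coverage(detections_by_plane, center, target=0.80):
    """Grow a symmetric Z-stack around the central plane until the target
    fraction of all atypical cells is covered; return the number of layers."""
    all_cells = set().union(*detections_by_plane.values())
    for half in range(max(detections_by_plane) + 1):
        layers = [z for z in detections_by_plane if abs(z - center) <= half]
        covered = set().union(*(detections_by_plane[z] for z in layers))
        if len(covered) / len(all_cells) >= target:
            return len(layers)
    return len(detections_by_plane)

# Hypothetical per-layer detections for a 21-layer, 1 µm-spaced stack (invented).
planes = {z: {f"cell{(z * 3 + k) % 40}" for k in range(5)} for z in range(21)}
print(min_layers_for_coverage(planes, center=10), "layers needed (illustrative)")
```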
- 20 urine cytology slides, 5 types of preparation (cytospin, ThinPrep nonGYN, ThinPrep UroCyte, and BD CytoRich), were digitized using 3 digital scanners (Roche DP200, Roche DP600, and Hamamatsu NanoZoomer S360)
- Images were evaluated for quality, focus and color from the default, manual and advanced scanning modes by a senior cytologist
- Roche DP200 and DP600 scanners achieved good quality WSI (30-50% in default mode and 40-65% in manual mode)
- Hamamatsu achieved 90% good quality imaging only in the manual mode, whereas the default quality dropped to 15%. It also had a lower coefficient of variation for average number of atypical cells across all preparation types
Overall, the Hamamatsu scanner showed better image quality in manual mode than the Roche scanners, which performed better in default mode. The Hamamatsu scanner also showed improved detection of atypical urothelial cells among triplicate images from the same samples compared with the Roche scanners.