DDSM: Deep-Learning Deep-Ultraviolet Scanning Microscope
Funded by: NIH Grant 5R01EB033806
License: GPL-3.0
Data Details: https://pmc.ncbi.nlm.nih.gov/articles/PMC9484420/pdf/boe-13-9-5015.pdf
Updated Classifier(s): https://github.com/Yatagarasu50469/RANDS
Original Deep Learning Classifier: https://github.com/tyrellto/breast-cancer-research/tree/main
Description: Breast tissue imaging datasets, captured using a DDSM, or Deep-Learning Deep-Ultraviolet Scanning Microscope.
Datasets
Dataset_1-4x_mixed_80Perc
Date Created: 13 January 2025
Internal Ref: ARCHIVE/Dataset_1-breast-corrected-unadjusted.zip
Note(s):
- This set was generated to be a more consistent, direct replacement for Dataset_0-4x_mixed_80Perc
- Errors from stitching regularly observed (e.g. 16_3, 40_2)
- At least a couple of samples appear to have used raw, rather than corrected, blocks (e.g. 7_2, 8_1, 19_2).
Description:
- Dataset containing (benign/malignant) annotated 400x400 patches, whole-slide images (WSI) from the same samples (2-70), and WSI from samples 72-116, acquired at 4x resolution.
- These files were retrieved from Tyrell To's files in August 2024; it cannot be wholly guaranteed that these are the exact image files used in the associated publication(s).
- This is intended to be purely a historical/archive copy and not for active research use.
- 80% threshold for including patches (a patch is kept if more than 80% of its pixels have a red-channel value >= 5)
Notes:
It should be recorded that, for WSI classification only, the 'Decision Fusion' mechanism split each WSI into patches and classified them, since patch locations were not originally recorded. Ground-truth labels for the WSI were determined either from the annotated patches, using the same rule as the predicted labels (a WSI is malignant if more than a fraction q of its patches are malignant), or from a patient-level cancer diagnosis. (The original results were replicated using the former approach, with q values between 0.12 and 0.19.) In the published research, the samples in this dataset were split into folds (excluding samples with mixed patch labels) as follows:
Fold 1: ['2_1', '9_3', '11_3', '16_3', '34_1', '36_2', '40_2', '54_2', '57_2', '60_1', '62_1']
Fold 2: ['17_5', '20_3', '23_3', '24_2', '28_2', '30_2', '33_3', '51_2', '52_2', '59_2', '63_3', '66_2']
Fold 3: ['12_1', '14_2', '22_3', '26_3', '35_4', '44_1', '45_1', '47_2', '49_1', '53_2', '56_2', '68_1']
Fold 4: ['4_4', '5_3', '8_1', '10_3', '25_3', '27_1', '29_2', '37_1', '42_3', '48_3', '50_1', '69_1']
Fold 5: ['7_2', '15_4', '19_2', '31_1', '43_1', '46_2', '55_2', '58_2', '61_1', '64_1', '65_1', '67_1', '70_1']
The patches for any given sample were not necessarily extracted from the exact WSI image included (some WSI have undergone additional color/brightness adjustment/correction and/or file compression). Records of exactly how the patches were extracted are incomplete, though the process generally used some or all of the following: Sobel thresholding of the red channel, keeping 70% of the 1% brightest pixels, requiring >=80% of red-channel values > 5, empirical/multiple rectangular crops, and different raster-style crops to obtain 400x400 patches. No single processing script (at least without referencing the final patch images themselves) has proven sufficient to independently reproduce patches at the exact, originally annotated locations; depending on method/parameter selections, some files produce exact matches, but never all.
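The WSI ground-truth rule described above can be sketched as follows; the function name and the 0/1 (benign/malignant) label encoding are illustrative assumptions, not part of the original code:

```python
def wsi_label_from_patches(patch_labels, q=0.15):
    """Label a WSI malignant (1) if the fraction of its patches labeled
    malignant exceeds q, else benign (0).

    Hypothetical helper mirroring the ground-truth rule described above;
    q in the range 0.12-0.19 replicated the published results.
    """
    if not patch_labels:
        raise ValueError("no patches for this WSI")
    malignant_fraction = sum(patch_labels) / len(patch_labels)
    return 1 if malignant_fraction > q else 0
```

For example, a WSI with 1 malignant patch out of 10 (fraction 0.10) would be labeled benign at q=0.15, while 2 out of 10 (fraction 0.20) would be labeled malignant.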
Dataset_0-4x_mixed_80Perc
Date Created: 13 January 2025
Internal Ref: ARCHIVE/Dataset_0-breast-mixed-Tyrell-Tongtong(-labeled).zip
Note(s):
- Errors from stitching regularly observed (e.g. 16, 40)
- At least a couple of samples appear to have used raw, rather than corrected, blocks (e.g. 7, 8, 19).
Description:
- Dataset containing (benign/malignant) annotated 400x400 patches, whole-slide images (WSI) from the same samples (2-70), and WSI from samples 72-116, acquired at 4x resolution.
- These files were retrieved from Tyrell To's files in August 2024; it cannot be wholly guaranteed that these are the exact image files used in the associated publication(s).
- This is intended to be purely a historical/archive copy and not for active research use.
- 80% threshold for including patches (a patch is kept if more than 80% of its pixels have a red-channel value >= 5)
Notes:
It should be recorded that, for WSI classification only, the 'Decision Fusion' mechanism split each WSI into patches and classified them, since patch locations were not originally recorded. Ground-truth labels for the WSI were determined either from the annotated patches, using the same rule as the predicted labels (a WSI is malignant if more than a fraction q of its patches are malignant), or from a patient-level cancer diagnosis. (The original results were replicated using the former approach, with q values between 0.12 and 0.19.) In the published research, the samples in this dataset were split into folds (excluding samples with mixed patch labels) as follows:
Fold 1: ['2', '9', '11', '16', '34', '36', '40', '54', '57', '60', '62']
Fold 2: ['17', '20', '23', '24', '28', '30', '33', '51', '52', '59', '63', '66']
Fold 3: ['12', '14', '22', '26', '35', '44', '45', '47', '49', '53', '56', '68']
Fold 4: ['4', '5', '8', '10', '25', '27', '29', '37', '42', '48', '50', '69']
Fold 5: ['7', '15', '19', '31', '43', '46', '55', '58', '61', '64', '65', '67', '70']
The patches for any given sample were not necessarily extracted from the exact WSI image included (some WSI have undergone additional color/brightness adjustment/correction and/or file compression). Records of exactly how the patches were extracted are incomplete, though the process generally used some or all of the following: Sobel thresholding of the red channel, keeping 70% of the 1% brightest pixels, requiring >=80% of red-channel values > 5, empirical/multiple rectangular crops, and different raster-style crops to obtain 400x400 patches. No single processing script (at least without referencing the final patch images themselves) has proven sufficient to independently reproduce patches at the exact, originally annotated locations; depending on method/parameter selections, some files produce exact matches, but never all.
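The '80Perc' background threshold described above can be sketched as a simple per-patch filter. The function name is hypothetical, and the exact comparison operators are uncertain, since the surviving records mix '>= 5' and '> 5':

```python
def keep_patch(red_channel, threshold=0.80, min_red=5):
    """Return True if the patch passes the background filter described
    above: at least `threshold` fraction of pixels have a red-channel
    value >= min_red.

    `red_channel` is a 2D list of red-channel pixel values for one patch.
    Sketch of the '80Perc' rule only; not the original extraction script.
    """
    pixels = [v for row in red_channel for v in row]
    foreground = sum(1 for v in pixels if v >= min_red)
    return foreground / len(pixels) >= threshold
```

So a 400x400 patch that is mostly dark background (fewer than 80% of pixels with red >= 5) would be excluded from the dataset.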
Download Directions
Assuming Python is already available on your system, open a command prompt or terminal and navigate to the location where you want the download stored.
#Install the needed Python library and hf_xet for improved performance
pip install huggingface_hub
pip install hf_xet
#Start a Python interpreter
python
#If the repository is still listed as private, use the following lines to log in
#You will need to replace yourTokenGoesHere with your own account token; see: huggingface.co/docs/hub/en/security-tokens
from huggingface_hub import login
login("yourTokenGoesHere")
#Run the following lines to perform the download; change dataset as desired
from huggingface_hub import snapshot_download
dataset = "Dataset_0-4x_mixed_80Perc"
snapshot_download(repo_id="BLISS-MU/DDSM", repo_type="dataset", local_dir='./', allow_patterns=dataset+'*')
Dataset Structure
Dataset_[#]-[Zoom]x_[PostProcessingLevel]_[BackgroundThreshold]Perc
|----->INPUT_PATCHES/
| |----->metadata_patches.csv
| |----->S*/
| | |----->PS*_*.jpg/.tif
|----->INPUT_WSI/
| |----->*.jpg/.tif
Note: Dataset_0-4x_mixed_80Perc/INPUT_PATCHES uses 'Patch_list.xlsx' instead of 'metadata_patches.csv' and also includes 'Annotation_labels_explaination.txt'
Citations
XGB and ViT Classification
Dataset: Dataset_1-4x_mixed_80Perc
Citation(s): P. Afshin et al., 'Breast Cancer Classification in Deep Ultraviolet Fluorescence Images Using a Patch-Level Vision Transformer Framework', arXiv preprint arXiv:2505.07654, 2025.
Available: https://arxiv.org/pdf/2505.07654
XGB Classification
Dataset: Dataset_0-4x_mixed_80Perc
Repository: https://github.com/tyrellto/breast-cancer-research/tree/main
Note: Original XGB network; entirely rewritten/updated in the RANDS code repository.
XGB and Diffusion Classification
Dataset: Dataset_0-4x_mixed_80Perc
Citation(s): G. S. Salem, T. To, J. Jorns, T. Yen, B. Yu, and D. H. Ye, "Deep learning for automated detection of breast cancer in deep ultraviolet fluorescence images with diffusion probabilistic model," 2024 IEEE International Symposium on Biomedical Imaging (ISBI), 2024, doi: 10.1109/ISBI56570.2024.10635349.
Available: https://pubmed.ncbi.nlm.nih.gov/40313564/
XGB Classification
Dataset: Dataset_0-4x_mixed_80Perc
Citation(s): T. To et al., "Deep learning classification of deep ultraviolet fluorescence images toward intra-operative margin assessment in breast cancer," Frontiers in Oncology, vol. 13, Jun. 2023, doi: 10.3389/fonc.2023.1179025.
Available: https://pmc.ncbi.nlm.nih.gov/articles/PMC10313133/
XGB Classification
Dataset: Dataset_0-4x_mixed_80Perc
Citation(s): Lu T, Jorns JM, Ye DH, Patton M, Gilat-Schmidt T, Yen T, Yu B. Analysis of Deep Ultraviolet Fluorescence Images for Intraoperative Breast Tumor Margin Assessment. Proc SPIE Int Soc Opt Eng. 2023 Jan-Feb;12368:1236806. doi: 10.1117/12.2649552. Epub 2023 Mar 6. PMID: 37292087; PMCID: PMC10249647.
Available: https://pmc.ncbi.nlm.nih.gov/articles/PMC10249647/
XGB Classification
Dataset: Dataset_0-4x_mixed_80Perc
Citation(s): T. To, “Deep Learning Classification of Deep Ultraviolet Fluorescence Images for Margin Assessment During Breast Cancer Surgery,” Master’s Thesis, Marquette University, 2023.
Available: https://epublications.marquette.edu/theses_open/768
Texture Analysis Classification
Dataset: Dataset_0-4x_mixed_80Perc
Citation(s): Lu T, Jorns JM, Ye DH, Patton M, Fisher R, Emmrich A, Schmidt TG, Yen T, Yu B. Automated assessment of breast margins in deep ultraviolet fluorescence images using texture analysis. Biomed Opt Express. 2022 Aug 30;13(9):5015-5034. doi: 10.1364/BOE.464547. PMID: 36187258; PMCID: PMC9484420.
Available: https://pmc.ncbi.nlm.nih.gov/articles/PMC9484420/
XGB Classification
Dataset: Dataset_0-4x_mixed_80Perc
Citation(s): T. To, S. H. Gheshlaghi and D. H. Ye, "Deep Learning for Breast Cancer Classification of Deep Ultraviolet Fluorescence Images toward Intra-Operative Margin Assessment," 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, Scotland, United Kingdom, 2022, pp. 1891-1894, doi: 10.1109/EMBC48229.2022.9871819
Available: https://doi.org/10.1109/EMBC48229.2022.9871819
Experimental DUV-FSM Platform (1st Generation)
Dataset: Dataset_0-4x_mixed_80Perc
Citation(s): Lu T, Jorns JM, Patton M, Fisher R, Emmrich A, Doehring T, Schmidt TG, Ye DH, Yen T, Yu B. Rapid assessment of breast tumor margins using deep ultraviolet fluorescence scanning microscopy. J Biomed Opt. 2020 Nov;25(12):126501. doi: 10.1117/1.JBO.25.12.126501. PMID: 33241673; PMCID: PMC7688317.
Available: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7688317/