In recent times, breast cancer has become the most common cancer affecting women worldwide, accounting for 25% of all cancer cases and affecting 3.5 million people in 2017-18. Early diagnosis significantly increases the chances of survival. The key challenge in cancer detection is classifying tumours as malignant or benign. Research indicates that experienced physicians can diagnose cancer with 79% accuracy, while artificial-intelligence-based diagnosis can achieve 91% accuracy.

NASSCOM CoE-IoT & AI and Eqounix Tech Lab organised a hands-on session on breast cancer detection on 28th September 2019, from 10 am onwards, at the CoE Gurugram centre. This was a paid session with over 50 attendees, ranging from beginner to advanced developers from enterprises such as Sopra Steria, United Health Group, TCS, HCL, Inventum Technologies, Publicis Sapient and Globallogic, and from startups such as Attentive AI, Empass, Anasakta Labs, NEbulARC, SirionLabs and Vision Networkz. Students from ICGEB/JMI, Indira Gandhi Delhi Technical University and other institutions also participated in the session.

The first half of the session focussed on insights from the Wisconsin Diagnostic Breast Cancer (WDBC) and Invasive Ductal Carcinoma (IDC) datasets: visualising the data, selecting features, why those features were chosen, and how physical parameters are translated into a dataset. The second half focussed on feature selection and on CNN- and random-forest-based classification of tumours as malignant or benign, followed by an optimised deployment strategy and cost estimation.


The samples consist of visually assessed nuclear features of Fine Needle Aspirates (FNAs) taken from patients. Attributes 3 to 11 form a 9-dimensional feature vector used to train a neural network that discriminates between benign and malignant samples. Cross-validation was used to estimate the accuracy of the diagnostic algorithm.

Field   Attribute
1       Sample code number
2       Class: 2 for benign, 4 for malignant
3       Clump Thickness
4       Uniformity of Cell Size
5       Uniformity of Cell Shape
6       Marginal Adhesion
7       Single Epithelial Cell Size
8       Bare Nuclei
9       Bland Chromatin
10      Normal Nucleoli
11      Mitoses
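To illustrate the cross-validation step described above, here is a minimal sketch using scikit-learn. The feature matrix below is synthetic stand-in data (the real WDBC values are integers scored 1-10); the classifier choice and fold count are assumptions for illustration, not the session's actual code.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Hypothetical stand-in for the WDBC feature matrix: 9 nuclear features
# (fields 3-11) per sample, with a binary benign/malignant label.
rng = np.random.default_rng(42)
X = rng.integers(1, 11, size=(200, 9)).astype(float)  # features scored 1-10
y = (X.mean(axis=1) > 5.5).astype(int)                # synthetic labels

# Small neural network, accuracy estimated via 5-fold cross-validation.
clf = MLPClassifier(hidden_layer_sizes=(9,), max_iter=2000, random_state=42)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```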


The Invasive Ductal Carcinoma dataset originally consisted of 162 whole-slide images, scanned at 40x. From these, 277,524 patches of 50×50 pixels (resized to 32×32 pixels to fit the model architecture) were extracted: 198,738 IDC-negative examples and 78,786 IDC-positive examples. Each image in the dataset is labelled based on the following parameters:

  • Patient ID: 10253_idx5
  • x-coordinate of the crop: 1,351
  • y-coordinate of the crop: 1,101
  • Class label: 0 (0 indicates no IDC while 1 indicates IDC)
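The parameters above are encoded directly in each patch's filename. A short sketch of how they can be recovered, assuming the common IDC naming convention (e.g. "10253_idx5_x1351_y1101_class0.png"):

```python
import re

# Assumed filename convention: <patient>_x<col>_y<row>_class<label>.png
PATTERN = re.compile(
    r"(?P<patient>\d+_idx\d+)_x(?P<x>\d+)_y(?P<y>\d+)_class(?P<label>[01])\.png"
)

def parse_patch_name(name):
    """Extract (patient_id, x, y, label) from an IDC patch filename."""
    m = PATTERN.fullmatch(name)
    if m is None:
        raise ValueError(f"unrecognised filename: {name}")
    return m["patient"], int(m["x"]), int(m["y"]), int(m["label"])

print(parse_patch_name("10253_idx5_x1351_y1101_class0.png"))
# → ('10253_idx5', 1351, 1101, 0)
```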

CNN Architecture used:

  1. Used exclusively 3×3 CONV filters, similar to VGGNet
  2. Stacked multiple 3×3 CONV filters on top of each other prior to performing max-pooling (again, similar to VGGNet)
  3. But unlike VGGNet, used depthwise separable convolution rather than standard convolution layers
  4. Keras Sequential API is used to build CancerNet
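The four points above can be sketched as a Keras Sequential model. This is an illustrative reconstruction, not the session's exact network: the filter counts, block depths, and dense-layer size are assumptions; only the 3×3 separable convolutions, VGG-style stacking before pooling, and Sequential API follow the description.

```python
from tensorflow.keras import layers, models

def build_cancernet(width=32, height=32, depth=3, classes=2):
    """CancerNet-style CNN: stacked 3x3 depthwise-separable convs, then pool."""
    model = models.Sequential([
        layers.Input(shape=(height, width, depth)),
        # Block 1: one separable 3x3 conv, then max-pool
        layers.SeparableConv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        # Block 2: two stacked separable 3x3 convs before pooling (VGG-style)
        layers.SeparableConv2D(64, (3, 3), padding="same", activation="relu"),
        layers.SeparableConv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        # Block 3: three stacked separable 3x3 convs before pooling
        layers.SeparableConv2D(128, (3, 3), padding="same", activation="relu"),
        layers.SeparableConv2D(128, (3, 3), padding="same", activation="relu"),
        layers.SeparableConv2D(128, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D((2, 2)),
        # Classifier head
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(classes, activation="softmax"),
    ])
    return model

model = build_cancernet()
model.summary()
```

Depthwise separable convolutions factor each 3×3 convolution into a per-channel spatial filter followed by a 1×1 pointwise mix, which cuts parameters and computation relative to standard convolutions.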

The model achieved 86% classification accuracy, 85% sensitivity, and 85% specificity.
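For reference, sensitivity and specificity are computed from the binary confusion matrix. A minimal sketch (the toy labels below are illustrative, not from the session's evaluation):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(sens, spec)  # both ≈ 0.667 on this toy example
```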


Enter git clone and the repository URL at your command line:

git clone

The repository contains three scripts:

  • : Builds the dataset by splitting images into training, validation and testing sets.
  • : Contains the CancerNet breast cancer classification CNN.
  • : Responsible for training and evaluating the Keras classification model.
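The dataset-build step described in the first bullet can be sketched as below. The 80/10/10 split ratios, function name, and file list are illustrative assumptions, not the session's exact configuration:

```python
import random

def split_dataset(paths, train=0.8, val=0.1, seed=42):
    """Shuffle patch paths and split into train/validation/test lists."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle
    n_train = int(len(paths) * train)
    n_val = int(len(paths) * val)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

# Hypothetical file list standing in for the extracted IDC patches.
train_set, val_set, test_set = split_dataset(
    [f"patch_{i}.png" for i in range(1000)])
print(len(train_set), len(val_set), len(test_set))  # → 800 100 100
```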



The next session will focus on the prognostic system. Recently put into clinical practice, this method predicts when cancer is likely to recur in patients who have had their cancers excised. This gives the physician and the patient better information with which to plan treatment and may eliminate the need for a prognostic surgical procedure. The novel feature of the predictive approach is its ability to handle both cases in which cancer has not recurred and cases in which cancer has recurred at a specific time.
