Liver cirrhosis demands accurate, interpretable MRI analysis to support timely diagnosis and staging, yet existing methods often
decouple segmentation from classification and lack the explainability required for clinical use. This work introduces 3UResNet,
an ensemble of U-Net, TransUNet++, and Attention U-Net with a ResNet50 encoder, designed for robust multiclass cirrhosis segmentation
on 2D T2-weighted MRI from the CirrMRI600+ dataset. A total of 5,364 training, 674 validation, and 664 test samples across mild, moderate,
and severe classes were preprocessed with augmentation, normalization, and mask binarization.
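The normalization and mask-binarization steps above can be sketched as follows; the min-max scaling and the 0.5 threshold are illustrative assumptions, as the exact preprocessing parameters are not stated here.

```python
import numpy as np

def preprocess(image: np.ndarray, mask: np.ndarray, threshold: float = 0.5):
    """Min-max normalize an MRI slice to [0, 1] and binarize its mask.

    The threshold value is an illustrative choice, not taken from the paper.
    """
    img = image.astype(np.float32)
    rng = img.max() - img.min()
    # Guard against constant slices to avoid division by zero
    img = (img - img.min()) / rng if rng > 0 else np.zeros_like(img)
    # Any label above the threshold becomes foreground
    bin_mask = (mask > threshold).astype(np.uint8)
    return img, bin_mask
```

Augmentation (flips, rotations, intensity jitter) would typically be applied on top of this per-slice pipeline during training only.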
Three Vision Transformer–based segmentation models, SwinUnet, CCTUnet, and EaNetUnet, were also designed as comparison baselines.
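Comparing the ensemble against these transformer models relies on the standard Dice and Intersection-over-Union overlap metrics; a minimal numpy sketch of both, assuming per-class binary masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient for binary masks: 2|P∩T| / (|P| + |T|)."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union for binary masks: |P∩T| / |P∪T|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

def mean_iou(preds, targets) -> float:
    """Mean IoU across classes, one (pred, target) mask pair per class."""
    return float(np.mean([iou_score(p, t) for p, t in zip(preds, targets)]))
```

The small `eps` term keeps both scores defined when a class is absent from prediction and ground truth alike.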
On the test set, 3UResNet achieved a Dice score of 0.9516 and a mean Intersection over Union (mIoU) of 0.9077 across all classes, outperforming the transformer
baselines. After segmentation, the predicted masks were used to extract shape features, and convolutional neural networks (CNNs) were applied to extract
visual features for classification. DenseNet121 performed best for mild cirrhosis (95%), while CoAnNet achieved higher accuracy for moderate (80%) and severe
cirrhosis (94%). A WeightedRandomSampler and focal loss were employed to mitigate class imbalance during training, so that results reflect real-world
clinical distributions. Model interpretability was strengthened with LIME and Grad-CAM, which highlight clinically meaningful liver regions and align attention
with cirrhotic anatomy across severity levels. This study demonstrates a practical end-to-end framework uniting high-fidelity segmentation with severity staging
and transparent rationale, paving the way for clinical adoption.
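The class-imbalance handling described above, inverse-frequency sample weighting for a weighted random sampler combined with focal loss, can be sketched as follows; the `gamma` and `alpha` values are the common defaults and are assumed here, not taken from the paper.

```python
import numpy as np

def focal_loss(p: np.ndarray, y: np.ndarray,
               gamma: float = 2.0, alpha: float = 0.25) -> float:
    """Binary focal loss, -alpha * (1 - p_t)^gamma * log(p_t), averaged.

    Down-weights well-classified examples so training focuses on hard,
    often minority-class, samples. gamma/alpha are illustrative defaults.
    """
    p_t = np.where(y == 1, p, 1.0 - p)  # probability of the true class
    loss = -alpha * (1.0 - p_t) ** gamma * np.log(np.clip(p_t, 1e-7, 1.0))
    return float(loss.mean())

def sample_weights(labels: np.ndarray) -> np.ndarray:
    """Per-sample weights inversely proportional to class frequency,
    the usual input to a weighted random sampler for rebalancing batches."""
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes.tolist(), counts.tolist()))
    return np.array([1.0 / freq[int(l)] for l in labels])
```

With these weights, minority-class samples are drawn more often per epoch, while focal loss further suppresses the gradient contribution of easy majority-class examples.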