Existing tissue-adaptive, deep-learning-based ultrasound image reconstruction models are computationally intensive, making high frame rates difficult to achieve, particularly for B-mode imaging on resource-constrained devices. Building on a traditional firm-guarantee real-time model, i.e., the (m,k)-firm model, we propose a methodology that exploits the similarity among feature maps generated by consecutive frames to eliminate redundant computation and thereby significantly improve throughput and energy efficiency. A comprehensive evaluation on in-vitro and in-vivo data using U-Net- and GoogLeNet-based beamformers demonstrated over a 50% reduction in computational complexity while preserving the original image quality.
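To make the idea concrete, below is a minimal sketch (not the authors' implementation) of how an (m,k)-firm constraint can gate per-frame computation: at least m of any k consecutive frames receive a full network pass, while the remaining frames may reuse cached results when consecutive frames are sufficiently similar. The class name `MKFirmFrameController`, the mean-absolute-difference similarity test, and the `full_model`/`cheap_update` callables are all illustrative assumptions standing in for the paper's actual network and feature-reuse mechanism.

```python
# Hypothetical sketch: an (m,k)-firm controller that decides, per frame,
# whether to run the full reconstruction network or reuse cached results.
# The similarity metric and callables are placeholders, not the paper's method.
from collections import deque

import numpy as np


class MKFirmFrameController:
    """Guarantee that at least m of any k consecutive frames are fully
    computed; the rest may skip the full pass and reuse cached output."""

    def __init__(self, m: int, k: int, similarity_threshold: float):
        self.m = m
        self.tau = similarity_threshold
        # Track the last k-1 decisions (True = full pass). Pre-fill with
        # True so the startup frames conservatively satisfy the constraint.
        self.history = deque([True] * (k - 1), maxlen=k - 1)
        self.prev_frame = None
        self.cached_output = None

    def process(self, frame, full_model, cheap_update):
        # Frames are "similar" if their mean absolute difference is small
        # (an assumed placeholder for a feature-map similarity measure).
        similar = (
            self.prev_frame is not None
            and np.mean(np.abs(frame - self.prev_frame)) < self.tau
        )
        # Skipping is allowed only if the last k-1 decisions already
        # contain at least m full passes; otherwise any window of k
        # consecutive frames ending here could fall below m.
        may_skip = sum(self.history) >= self.m and self.cached_output is not None
        compute = not (similar and may_skip)
        if compute:
            self.cached_output = full_model(frame)      # full network pass
        else:
            self.cached_output = cheap_update(frame, self.cached_output)
        self.history.append(compute)
        self.prev_frame = frame
        return self.cached_output


# Usage: at least 2 of any 5 consecutive frames get a full pass.
ctrl = MKFirmFrameController(m=2, k=5, similarity_threshold=0.01)
frames = [np.random.rand(64, 64) for _ in range(10)]
outputs = [
    ctrl.process(f, full_model=lambda x: x * 2.0,
                 cheap_update=lambda x, cached: cached)
    for f in frames
]
```

The design choice here mirrors the abstract's reasoning: the (m,k)-firm window bounds how much quality degradation skipping can introduce, while the similarity test ensures computation is only elided when consecutive frames are redundant.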