
Massachusetts General Hospital and Harvard Medical School

A Prompt-driven Universal Model for View-Agnostic Echocardiography Analysis

Sekeun Kim, Hui Ren, Peng Guo, Abder-Rahman Ali, Patrick Zhang, Kyungsang Kim, Quanzheng Li, Xiang Li
Abstract

Echocardiography segmentation for cardiac analysis is time-consuming and resource-intensive due to the variability in image quality and the necessity to process scans from various standard views. While current automated segmentation methods in echocardiography show promising performance, they are typically trained on a specific scan view and can only analyze data from that view, so the number of required models grows with the number of standard views. To address this, we present a prompt-driven universal method for view-agnostic echocardiography analysis. Considering the domain shift between standard views, we first introduce prompt matching, which learns view-specific prompts by matching prompts to queried input embeddings from a pre-trained vision model. We then utilize a pre-trained medical language model to align textual information with pixel data for accurate segmentation. Extensive experiments on three standard views show that our approach significantly outperforms state-of-the-art universal methods and achieves comparable or even better performance than segmentation models trained and tested on the same view.

Keywords:
Universal model · Prompt learning · Visual-language · Echocardiography

1 Introduction

Echocardiography is the most frequently used imaging modality in cardiology; it supports the assessment of cardiac function by examining the heart from multiple standard scan views. Given the complexity of image analysis and the workload of sonographers, there is growing interest in developing automated segmentation methods for echocardiography [8, 9, 11]. Existing methods accurately delineate anatomical structures within a specific view when trained on the corresponding dataset. This process, however, requires identifying the desired views in a patient study before analysis, adding an extra step of selecting the appropriate files among the scan files [3, 6]. A general model capable of performing echocardiography segmentation across multiple standard views has not yet been explored.
Currently, the common solution is to train N models on N standard views. This solution is limited because the number of models increases with the number of standard views. A naive alternative for a universal model is to train a single network on data from all standard views, but this can degrade performance because each standard view has distinct visual characteristics [9, 15]. Echocardiography poses further challenges, including domain shift among scan views and sparse annotation across frames. While no existing solution targets these problems exactly, related universal models have been developed [23, 2, 13, 22]. DoDNet [23] introduces a dynamic head within an encoder-decoder architecture, where task information is encoded as a one-hot vector that drives a task-specific controller. The CLIP-driven universal model [13] extends this idea by utilizing a pre-trained text model and conditioning the segmentation heads on semantically embedded class features. Although this CLIP-based strategy has demonstrated success in CT organ segmentation, it is limited in the medical setting due to the disparity between natural and medical text. UniSeg [22] uses learnable prompts to address segmentation tasks in CT, MR, and PET. However, it builds its universal model on three anatomically similar datasets, where the images differ mainly in texture while the underlying anatomy remains the same. Consequently, this approach is less suitable for handling view shifts in echocardiography, leading to suboptimal performance as shown in Table 2.
To address these problems, we propose a prompt-driven universal model that delivers state-of-the-art segmentation of cardiac structures. Our model combines prompt learning through a prompt pool with the knowledge of a pre-trained language model via pixel-text alignment. First, the prompt pool-based prompt learning enables a single universal model to handle data from various scan views by dynamically adapting to diverse inputs. Second, score maps provide pixel-text alignment, allowing the model to fully leverage language information for medical segmentation. Together, these components let an input video select its view-specific prompt and focus on the relevant semantic features guided by the language model. To the best of our knowledge, this is the first work on unified segmentation in echocardiography. Our method simplifies cardiac analysis by removing the need for a view identification step to retrieve the desired view from a patient's DICOM scans. We evaluate our method on three standard views from three different datasets and show promising performance compared with other universal methods.
Our contributions can be summarized as follows:
• We present a prompt-driven universal model, comprising a prompt pool to accommodate different standard views, and leveraging pixel-text alignment with the prior knowledge of a pre-trained text model for view-agnostic echocardiography segmentation.
• The proposed method streamlines cardiac analysis by minimizing the requirement for a view identification step during the retrieval of the desired view from patient scans.
• We demonstrate, through extensive experiments on multiple datasets, that our model achieves state-of-the-art performance on cardiac segmentation tasks compared with previous universal approaches.

Refer to caption
Figure 1: Overall framework of our proposed universal model for view-agnostic segmentation. The pre-trained model and query model remain frozen, while the other modules are trainable.

2 Method

As illustrated in Figure 1, our proposed approach comprises the following components: a text encoder, a video encoder, a prompt pool of trainable keys and values, an MLP layer, and a video decoder. We utilize ClinicalBert [1] for enhanced extraction of medical text representations. Our goal is to segment objects in all frames across different scan views. To achieve this, we introduce two key components: 1) a pixel-text dense alignment mechanism that bridges the gap between a pre-trained language model and pixel-level representations for dense prediction, and 2) a prompt matching technique that leverages a prompt pool to adaptively select the optimal view-specific prompt for each input.

2.1 Problem Definition

Given $N$ datasets $D=\{D_{1},D_{2},\ldots,D_{N}\}$, each dataset $D_{i}=\{X_{ij},Y_{ij}\}_{j=1}^{n_{i}}$ contains $n_{i}$ samples, where $X_{ij}$ is a video with $F$ frames and $Y_{ij}$ is the corresponding ground truth. Each video $X_{ij}$ belongs to a view $V_{k}$, with $K$ standard views in total, $V\in\{V_{1},V_{2},\ldots,V_{K}\}$. If all $F$ frames of $Y_{ij}$ are annotated, $D_{i}$ is a fully labeled dataset; otherwise $D_{i}$ is a partially labeled dataset. The objective is to train a single model $F(\cdot)$ on the partially labeled datasets $\{D_{1},D_{2},\ldots,D_{N}\}$ that produces dense predictions for all classes across all $F$ frames.

2.2 Pixel-text dense alignment

In computer vision, a series of works on vision-language models (VLMs) has emerged, and in the medical domain previous studies have successfully adapted CLIP embeddings for medical applications [18, 13]. However, because CLIP is trained on natural image-text pairs, it weakens the semantic embedding of medical prompts, as shown in Table 3. To fully leverage the knowledge encoded in a pre-trained medical language model, we utilize ClinicalBert [1] for dense prediction. We convert the $N$ classes into text prompts using the template "An echocardiography of [Class]." and encode them into text embeddings $\mathcal{F}(c)\in\mathbb{R}^{N\times D}$. An input video is encoded by the backbone video encoder into intermediate local video embeddings $\mathcal{G}(x)\in\mathbb{R}^{T_{i}H_{i}W_{i}\times D}$, $i=1,\ldots,L$, where $T_{i}$, $H_{i}$, and $W_{i}$ are the frame count, height, and width of the local embeddings from the $i$-th layer and $D$ is the embedding dimension. We then compute the score maps via pixel-text alignment between the text embedding and the vision embedding:

$\mathcal{S}=\overline{\mathcal{G}(x)}\,\overline{\mathcal{F}(c)}^{T}$

where the overbar denotes normalization along the channel dimension and $T$ denotes the transpose operation. The score map $\mathcal{S}$ can be employed for auxiliary segmentation at a lower resolution, supervised by the pixel-text loss. We concatenate the pixel-text score map $\mathcal{S}$ with the local embeddings $f$ to incorporate text priors. We use the chamber class for the text encoder without incorporating any view information.
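To make the alignment concrete, the following is a minimal PyTorch sketch of the score-map computation, assuming the video features and class-prompt embeddings have already been projected to a shared dimension D; tensor names and shapes are illustrative rather than taken from the authors' code.

```python
import torch
import torch.nn.functional as F

def pixel_text_score_map(video_feat: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """video_feat: (B, T*H*W, D) local video embeddings from one encoder layer.
    text_emb:   (N, D) class-prompt embeddings ("An echocardiography of [Class].").
    Returns score maps of shape (B, T*H*W, N)."""
    v = F.normalize(video_feat, dim=-1)   # channel-wise normalization (the overbar in the equation)
    t = F.normalize(text_emb, dim=-1)
    scores = v @ t.t()                    # cosine similarity between every pixel and every class
    return scores

# The score map supervises an auxiliary low-resolution segmentation (pixel-text loss)
# and is concatenated to the local embeddings to inject text priors.
```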

2.3 Prompt matching and text-driven parameter generation

Given a 2D-t input $x\in\mathbb{R}^{T\times H\times W\times C}$ and a pre-trained Vision Transformer (ViT) $\mathfrak{Q}$ from the Segment Anything Model [10], the first frame of the video is divided into patches and embedded as patch embeddings $\mathfrak{Q}:\mathbb{R}^{L\times(S^{2}\times C)}\rightarrow\mathbb{R}^{L\times D}$, where $S$ is the patch size, $C$ the number of input channels, and $D$ the embedding dimension. The prompt pool consists of $M$ learnable key-value pairs $\{(k_{1},P_{1}),(k_{2},P_{2}),\ldots,(k_{M},P_{M})\}$, where each key $k_{i}\in\mathbb{R}^{D}$ and each value $P_{i}\in\mathbb{R}^{L\times D}$. In our setting, the pool size $M$ equals the number of views multiplied by a pre-defined number of prompts per view, which is set to 3. During training, the queried input embeddings and the prompt keys of the corresponding view are drawn toward each other by maximizing their cosine similarity, denoted $\mathcal{L}_{pr}$. We apply a global average pooling (GAP) layer to the last encoder features to obtain a global representation of the current video input. We then use the text embeddings together with the prompt values and the global embedding to generate parameters $\theta_{N}$ for the chamber segmentation heads. These parameters are used in the video decoder heads to generate binary predictions for the $N$ classes [21]. This design enables view-agnostic prompting while preserving view information at test time.
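The sketch below illustrates one plausible PyTorch implementation of the prompt pool and key matching; the class and argument names (PromptPool, prompts_per_view, prompt_len) are our own assumptions, and only the mechanism, cosine matching between a frozen-query embedding and learnable keys followed by selection of the corresponding prompt values, follows the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptPool(nn.Module):
    def __init__(self, num_views: int, prompts_per_view: int = 3, prompt_len: int = 8, dim: int = 256):
        super().__init__()
        M = num_views * prompts_per_view                              # total pool size M
        self.keys = nn.Parameter(torch.randn(M, dim))                 # learnable keys k_i in R^D
        self.values = nn.Parameter(torch.randn(M, prompt_len, dim))   # learnable values P_i in R^{L x D}

    def forward(self, query: torch.Tensor, top_k: int = 3):
        """query: (B, D) embedding of the first frame from the frozen SAM ViT query model."""
        sim = F.normalize(query, dim=-1) @ F.normalize(self.keys, dim=-1).t()  # (B, M) cosine similarity
        top_sim, idx = sim.topk(top_k, dim=-1)
        prompts = self.values[idx]          # (B, top_k, L, D) selected view-specific prompt values
        # Prompt-matching term: pull the queried embedding and matched keys together
        # (here computed on the matched keys; the paper assigns keys by view type during training).
        l_pr = top_sim.mean()
        return prompts, l_pr
```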

Table 1: Details of the publicly available datasets for training and evaluation.
Dataset | Scan View | Annotation | #Total Scans (train/test)
CAMUS [12] | A2C | LVendo, LVepi | 500 (450/50)
CAMUS [12] | A4C | LVendo, LVepi | 500 (450/50)
EchoNet-Pediatric [19] | A4C | LVendo | 3284 (2580/704)
EchoNet-Pediatric [19] | PSAX | LVendo | 4526 (3559/967)
EchoNet-Dynamic [16] | A4C | LVendo | 10036 (8753/1277)

2.4 Loss Function

2.4.1 Video Masked Back-propagation

In our problem, the labels are distributed extremely sparsely across frames, which differs from previous work [13]. We therefore design video masked back-propagation to address the partial labeling issue. Specifically, we mask frames that have no class label and back-propagate the loss only from labeled frames to update the network parameters. In this way, we can exploit sparsely labeled data and perform accurate segmentation in videos from partially labeled datasets.
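A minimal sketch of such a masked loss, assuming per-frame annotation availability is given as a boolean mask; the function and argument names are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def masked_video_bce(pred: torch.Tensor, target: torch.Tensor, frame_mask: torch.Tensor) -> torch.Tensor:
    """pred, target: (B, T, C, H, W); frame_mask: (B, T), True for annotated frames."""
    loss = F.binary_cross_entropy_with_logits(pred, target, reduction="none")   # per-pixel loss
    mask = frame_mask[:, :, None, None, None].float().expand_as(loss)           # zero out unlabeled frames
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)                      # average over labeled pixels only
```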

2.4.2 Total loss

Our objective is to achieve segmentation by minimizing two terms, a prompt matching loss and a segmentation loss with masked back-propagation, through optimization of the following loss function:

$\mathcal{L}_{\text{seg}}=\lambda_{1}\mathcal{L}_{\text{pixel-text}}+\lambda_{2}\mathcal{L}_{\text{BCE}},\qquad\mathcal{L}_{\text{pr}}=\langle\mathfrak{Q}(X_{i0}),P_{key}\rangle$
$\mathcal{L}_{\text{total}}=(1-\lambda(t))\,\mathcal{L}_{\text{seg}}-\lambda(t)\,\mathcal{L}_{\text{pr}}$

where $\mathcal{L}_{\text{seg}}$ denotes the segmentation loss combining two terms: $\mathcal{L}_{\text{pixel-text}}$, a cross-entropy loss on the score maps, and $\mathcal{L}_{\text{BCE}}$, the binary cross-entropy loss. Throughout the experiments, $\lambda_{1}$ and $\lambda_{2}$ are set equal. $\mathcal{L}_{\text{pr}}$ denotes the cosine similarity between the queried input and the prompt keys assigned by view type during training. $\lambda$ is scheduled by the time-dependent Gaussian ramp-up $\lambda(t)=\exp(-5(1-t/t_{max})^{2})$, where $t$ is the current iteration and $t_{max}$ the maximum iteration. Since the prompt keys converge in the early stage, we keep the weight on the prompt matching loss low during this initial phase.
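The schedule and loss combination can be summarized in a few lines of Python; the snippet below is a sketch assuming the three individual loss terms have already been computed as scalars.

```python
import math

def ramp_up(t: int, t_max: int) -> float:
    # Gaussian ramp-up: ~0.007 at t=0, rising to 1.0 at t=t_max.
    return math.exp(-5.0 * (1.0 - t / t_max) ** 2)

def total_loss(l_pixel_text, l_bce, l_pr, t, t_max, lam1=1.0, lam2=1.0):
    l_seg = lam1 * l_pixel_text + lam2 * l_bce
    lam = ramp_up(t, t_max)
    # Minimizing this objective maximizes the key-query cosine similarity l_pr.
    return (1.0 - lam) * l_seg - lam * l_pr
```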

Table 2: Quantitative comparison across all datasets. The Dice scores [%] are presented, with the best results highlighted in bold.
Method | A2C LVendo | A2C LVepi | A4C LVendo | A4C LVepi | PSAX LVendo | Individual A4C perf.: CAMUS | Pediatric | Dynamic
View-specific
SwinUNETR [5] | 90.5 | 86.7 | 85.9 | 86.7 | 87.7 | 90.4 | 83.1 | 84.2
U-Transformer [17] | 93.3 | 88.1 | 88.3 | 88.5 | 88.1 | 92.8 | 86.4 | 85.9
View-integrated
SwinUNETR [5] | 88.8 | 83.1 | 82.9 | 84.7 | 85.1 | 87.3 | 80.4 | 81.1
U-Transformer [17] | 89.3 | 83.6 | 85.7 | 84.1 | 85.4 | 87.5 | 84.8 | 84.9
DoDNet [23] | 90.8 | 87.1 | 84.9 | 85.3 | 87.2 | 87.9 | 85.3 | 81.5
CLIP-driven [13] | 91.1 | 87.6 | 85.1 | 86.4 | 88.5 | 89.1 | 82.4 | 83.9
UniSeg [22] | 92.3 | 86.5 | 87.7 | 86.3 | 88.3 | 93.1 | 84.9 | 85.1
UniverSeg [2] | 83.3 | 80.7 | 82.1 | 81.8 | 81.0 | 83.2 | 80.3 | 82.9
Ours | 93.2 | 88.8 | 88.5 | 88.3 | 89.4 | 93.7 | 85.3 | 86.6
Refer to caption
Figure 2: Qualitative visualization of segmentation results from our method and state-of-the-art universal methods on representative images. Red and blue represent LVendo and LVepi, respectively.

3 Experiments and Results

Materials. We evaluated the proposed method using three publicly available datasets [12][19][16]. These datasets consist of 2D B-mode scans annotated with different cardiac chambers at end-diastole (ED) and end-systole (ES). The annotations include the left ventricle endocardium (LVendo) and the left ventricle epicardium (LVepi) in apical two-chamber (A2C), apical four-chamber (A4C), and parasternal short-axis (PSAX) views. We followed the predefined splits shown in Table 1.
Implementation and Evaluation Metric. To guarantee a fair comparison, we standardized the training and testing settings across all experiments. The experiments were conducted in PyTorch with a batch size of 5 over 100 epochs on an Nvidia A100 GPU. We utilized the U-Net architecture [20] as the backbone into which our key components are incorporated. For optimization, we employed the MADGRAD optimizer [4] with a learning rate of 1e-4. Images are resized to 224×224 pixels, sampled as 16-frame clips, and normalized to zero mean and unit variance. To enhance robustness, we apply various augmentation techniques, including random flips, rotations within -30 to +30 degrees, and shearing along the x-y dimensions. We use the Dice Similarity Coefficient (DSC) to evaluate model performance. We compared our method across three scan views, A4C, A2C, and PSAX, selected based on the currently available echocardiography datasets. Performance was evaluated on the ED and ES cardiac phases, where annotations are available.
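For reference, the following is a hedged sketch of this training setup using the MADGRAD package and torchvision transforms; the backbone is replaced by a stand-in module, and the exact shear range is an assumption since only shearing along the x-y dimensions is stated.

```python
import torch
import torch.nn as nn
from madgrad import MADGRAD          # pip install madgrad
from torchvision import transforms

# Frame-level augmentations: random flips, rotations in [-30, +30] degrees, and
# x-y shearing (the +-10 degree shear range is an assumption, not from the paper).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomAffine(degrees=30, shear=(-10.0, 10.0, -10.0, 10.0)),
])

model = nn.Conv3d(3, 2, kernel_size=3, padding=1)   # stand-in for the U-Net backbone
optimizer = MADGRAD(model.parameters(), lr=1e-4)    # MADGRAD optimizer, lr = 1e-4

clip = torch.randn(5, 3, 16, 224, 224)              # batch of 5 clips: 16 frames at 224x224
target = torch.randint(0, 2, (5, 2, 16, 224, 224)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(clip), target)
loss.backward()
optimizer.step()
```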
Comparison study. We present the performance of the proposed method for cardiac segmentation across different scan views. To the best of our knowledge, our method is the first capable of universal cardiac segmentation from view-agnostic input. We compared our method under two settings: 1) trained and tested on the same view (view-specific models), and 2) trained and tested on all views (view-integrated models). We chose SwinUNETR [5] and U-Transformer [17] as baseline segmentation models based on a previous study [7]. We also compare against universal models, including DoDNet [23], the CLIP-driven universal model [13], UniSeg [22], and UniverSeg [2], on the three datasets. The CLIP-driven universal model [13] substitutes the one-hot embeddings of [23] with CLIP text embeddings. UniSeg [22] employs a learnable prompt to generate task-specific embeddings. Additionally, our comparison includes UniverSeg [2], a few-shot universal segmentation model.
As shown in Table 2, our method generates excellent segmentation results even with view-agnostic input, as illustrated in Fig. 2. Compared with the view-integrated baselines, our model performs better across the board, and it matches the view-specific U-Transformer except for LVendo in A2C (93.2 vs. 93.3) and LVepi in A4C (88.3 vs. 88.5). Moreover, by incorporating prompts that adaptively integrate input view information, our model outperforms all universal models in delineating the regions of interest (ROI) across all views: adapting the prompt to the input view type yields enhanced segmentation. Furthermore, our method improves mean segmentation performance over the few-shot segmentation method (89.64 vs. 81.7). These findings demonstrate that our approach effectively leverages adaptable prompts to produce superior segmentation results.
Ablation study. To evaluate the effectiveness of each component, we conducted an ablation study quantifying the impact of different elements on segmentation performance. First, we evaluated performance without the text-encoder path, and then with the text-encoder path using various designs: one-hot encoding, and prior knowledge from language models (CLIP and ClinicalBert). Without the text-encoder path, which removes the pixel-text alignment used for the auxiliary loss, performance degrades from 89.6 to 85.6, showing that pixel-text alignment is crucial for our model. We also compared text encoders trained on natural versus medical text; the CLIP language model is less effective at representing medical text than ClinicalBert (88.8 vs. 89.6). Second, we assessed view classification based on the selected prompt keys, assigning each video a view by majority voting, $\arg\max_{G\in\{A,B,C\}}\sum_{i=1}^{3}\mathbb{I}(x_{i}\in G)$. A t-SNE visualization [14] is presented in Figure 3. The accuracy for distinguishing apical from parasternal views was 0.96, whereas the accuracy for A2C and A4C was only 0.54 and 0.6, respectively. There is also variability between view classes annotated by human readers due to ambiguous scan angles between A2C and A4C; in practice, the manually manipulated probe often fails to capture the exact A2C and A4C angles when rotated from a single position. This is reflected in Table 4, where providing explicit view information instead of selecting keys from the prompt pool yields 89.4, lower than the 89.6 achieved without view information.
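As a concrete illustration of this majority vote, the snippet below maps three selected prompt-key indices to a view label; the key-to-view mapping is hypothetical (three keys per view, as in our prompt-pool setting).

```python
from collections import Counter

def majority_view(selected_keys, key_to_view):
    """selected_keys: indices of the 3 matched prompt keys; key_to_view: key index -> view label."""
    votes = Counter(key_to_view[k] for k in selected_keys)
    return votes.most_common(1)[0][0]

# Example with a hypothetical mapping of 9 keys to 3 views (3 keys per view):
key_to_view = {i: ["A2C", "A4C", "PSAX"][i // 3] for i in range(9)}
print(majority_view([0, 1, 4], key_to_view))   # -> "A2C"
```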

Encoder design | Dice [%]
w/o text encoder | 85.6
w/ one-hot | 88.6
w/ CLIP | 88.8
w/ ClinicalBert | 89.6
Table 3: Comparison of mean model performance across encoder designs.
View info | Dice [%]
w/ | 89.4
w/o | 89.6
Table 4: Comparison of model performance with and without explicit view information.
Refer to caption
Figure 3: t-SNE visualization of prompt keys.

4 Conclusion

In this study, we introduced a prompt-driven universal echocardiography segmentation model capable of learning cardiac segmentation across different standard views from partially labeled data. The model incorporates the knowledge of a pre-trained language model by aligning text representations with visual pixel data, and uses a prompt matching technique with a prompt pool to achieve view-agnostic segmentation. Our experiments on three standard views demonstrate the feasibility of the proposed model and highlight its potential to be extended to additional standard views toward a universal model for echocardiography. The method simplifies the analysis pipeline by eliminating the separate view identification step, thereby reducing the human variability introduced when selecting views for analysis in a patient study. Extensive experiments on echocardiography segmentation benchmarks across various scan views show that our approach consistently outperforms existing universal methods.

References

  • [1] Alsentzer, E., Murphy, J.R., Boag, W., Weng, W.H., Jin, D., Naumann, T., McDermott, M.: Publicly available clinical bert embeddings. arXiv preprint arXiv:1904.03323 (2019)
  • [2] Butoi, V.I., Ortiz, J.J.G., Ma, T., Sabuncu, M.R., Guttag, J., Dalca, A.V.: Universeg: Universal medical image segmentation. arXiv preprint arXiv:2304.06131 (2023)
  • [3] Charton, J., Ren, H., Kim, S., Gonzalez, C.M., Khambhati, J., Cheng, J., DeFrancesco, J., Waheed, A., Marciniak, S., Moura, F., et al.: Multi-task learning for hierarchically-structured images: Study on echocardiogram view classification. In: International Workshop on Advances in Simplifying Medical Ultrasound. pp. 185–194. Springer (2023)
  • [4] Defazio, A., Jelassi, S.: Adaptivity without compromise: a momentumized, adaptive, dual averaged gradient method for stochastic optimization. The Journal of Machine Learning Research 23(1), 6429–6462 (2022)
  • [5] Hatamizadeh, A., Nath, V., Tang, Y., Yang, D., Roth, H.R., Xu, D.: Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images. In: International MICCAI Brainlesion Workshop. pp. 272–284. Springer (2021)
  • [6] Jeon, J., Ha, S., Jang, Y., Yoon, Y.E., Kim, J., Jeong, H., Jeong, D., Hong, Y., Chang, S.A.L.H.J.: Improving out-of-distribution detection in echocardiographic view classification through enhancing semantic features (2023)
  • [7] Kim, S., Kim, K., Hu, J., Chen, C., Lyu, Z., Hui, R., Kim, S., Liu, Z., Zhong, A., Li, X., et al.: Medivista-sam: Zero-shot medical video analysis with spatio-temporal sam adaptation. arXiv preprint arXiv:2309.13539 (2023)
  • [8] Kim, S., Park, H.B., Jeon, J., Arsanjani, R., Heo, R., Lee, S.E., Moon, I., Yoo, S.K., Chang, H.J.: Fully automated quantification of cardiac chamber and function assessment in 2-d echocardiography: clinical feasibility of deep learning-based algorithms. The International Journal of Cardiovascular Imaging 38(5), 1047–1059 (2022)
  • [9] Kim, T., Hedayat, M., Vaitkus, V.V., Belohlavek, M., Krishnamurthy, V., Borazjani, I.: Automatic segmentation of the left ventricle in echocardiographic images using convolutional neural networks. Quantitative Imaging in Medicine and Surgery 11(5),  1763 (2021)
  • [10] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.Y., et al.: Segment anything. arXiv preprint arXiv:2304.02643 (2023)
  • [11] Leclerc, S., Smistad, E., Østvik, A., Cervenansky, F., Espinosa, F., Espeland, T., Berg, E.A.R., Belhamissi, M., Israilov, S., Grenier, T., et al.: Lu-net: a multistage attention network to improve the robustness of segmentation of left ventricular structures in 2-d echocardiography. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 67(12), 2519–2530 (2020)
  • [12] Leclerc, S., Smistad, E., Pedrosa, J., Østvik, A., Cervenansky, F., Espinosa, F., Espeland, T., Berg, E.A.R., Jodoin, P.M., Grenier, T., et al.: Deep learning for segmentation using an open large-scale dataset in 2d echocardiography. IEEE transactions on medical imaging 38(9), 2198–2210 (2019)
  • [13] Liu, J., Zhang, Y., Chen, J.N., Xiao, J., Lu, Y., Landman, B.A., Yuan, Y., Yuille, A., Tang, Y., Zhou, Z.: Clip-driven universal model for organ segmentation and tumor detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 21152–21164 (2023)
  • [14] Van der Maaten, L., Hinton, G.: Visualizing data using t-sne. Journal of machine learning research 9(11) (2008)
  • [15] Mitchell, C., Rahko, P.S., Blauwet, L.A., Canaday, B., Finstuen, J.A., Foster, M.C., Horton, K., Ogunyankin, K.O., Palma, R.A., Velazquez, E.J.: Guidelines for performing a comprehensive transthoracic echocardiographic examination in adults: recommendations from the american society of echocardiography. Journal of the American Society of Echocardiography 32(1), 1–64 (2019)
  • [16] Ouyang, D., He, B., Ghorbani, A., Yuan, N., Ebinger, J., Langlotz, C.P., Heidenreich, P.A., Harrington, R.A., Liang, D.H., Ashley, E.A., et al.: Video-based ai for beat-to-beat assessment of cardiac function. Nature 580(7802), 252–256 (2020)
  • [17] Petit, O., Thome, N., Rambour, C., Themyr, L., Collins, T., Soler, L.: U-net transformer: Self and cross attention for medical image segmentation. In: Machine Learning in Medical Imaging: 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings 12. pp. 267–276. Springer (2021)
  • [18] Qin, Z., Yi, H., Lao, Q., Li, K.: Medical image understanding with pretrained vision language models: A comprehensive study. arXiv preprint arXiv:2209.15517 (2022)
  • [19] Reddy, C.D., Lopez, L., Ouyang, D., Zou, J.Y., He, B.: Video-based deep learning for automated assessment of left ventricular ejection fraction in pediatric patients. Journal of the American Society of Echocardiography 36(5), 482–489 (2023)
  • [20] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18. pp. 234–241. Springer (2015)
  • [21] Tian, Z., Shen, C., Chen, H.: Conditional convolutions for instance segmentation. In: Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16. pp. 282–298. Springer (2020)
  • [22] Ye, Y., Xie, Y., Zhang, J., Chen, Z., Xia, Y.: Uniseg: A prompt-driven universal segmentation model as well as a strong representation learner. arXiv preprint arXiv:2304.03493 (2023)
  • [23] Zhang, J., Xie, Y., Xia, Y., Shen, C.: Dodnet: Learning to segment multi-organ and tumors from multiple partially labeled datasets. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 1195–1204 (2021)