CycleGAN-Enabled Robust Bubble Sizing and Shape Reconstruction in Dense Plumes

Abstract

The application of machine learning techniques has significantly advanced particle and bubble detection and segmentation in fluid mechanics. However, most existing deep-learning-based segmentation methods rely on large amounts of high-quality pixel-level annotations, which are costly and time-consuming to obtain, especially for complex bubbly flows. In this work, we propose a fully unsupervised bubble segmentation framework based on Cycle-Consistent Generative Adversarial Networks (CycleGAN), which learns a direct mapping from bubble images to segmentation masks using unpaired training data. Specifically, the model is trained on two independent sets: experimental or synthetic bubbly flow images, and bubble masks generated from the BubGAN dataset, without requiring any paired image–mask annotations. By enforcing cycle-consistency and adversarial constraints, the proposed Bubble CycleGAN effectively captures the geometric characteristics of bubbles and produces accurate segmentation masks from input images, even under varying experimental conditions. The performance of the method is evaluated on both BubGAN data and labeled experimental datasets. The proposed approach substantially reduces the need for manual labeling and provides a practical solution for large-scale bubble segmentation. Furthermore, the framework is not limited to bubbly flows and can be readily extended to other particle-laden flow systems.

Methodology

To achieve bubble image segmentation without manual annotations, we adopt a Cycle-Consistent Generative Adversarial Network (CycleGAN) framework. CycleGAN enables image-to-image translation between two different domains using unpaired data, making it particularly suitable for scenarios where pixel-level labels are difficult or expensive to obtain. In this work, two domains are defined: the bubble image domain and the bubble mask domain. Two generators are trained simultaneously to learn bidirectional mappings between these domains: one maps bubble images to segmentation masks, while the other maps masks back to images. Corresponding discriminators are used to enforce adversarial learning, ensuring that the generated outputs are indistinguishable from real samples in each domain. To preserve geometric consistency, a cycle-consistency constraint is imposed, which encourages the reconstructed images or masks to remain close to the original inputs after a forward–backward translation. By jointly optimizing adversarial loss and cycle-consistency loss, the proposed framework learns to generate accurate bubble segmentation masks directly from input images without requiring paired training data.
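The joint objective described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the tiny convolutional networks, the generator names `G` (image to mask) and `F` (mask to image), and the weight `lambda_cyc` are all placeholder assumptions; a least-squares GAN loss is used for the adversarial term, which is one common CycleGAN choice.

```python
import torch
import torch.nn as nn

# Placeholder networks standing in for the real generators/discriminators.
def tiny_net(in_ch=1, out_ch=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, out_ch, 3, padding=1), nn.Tanh(),
    )

G = tiny_net()  # maps bubble images to segmentation masks
F = tiny_net()  # maps masks back to bubble images
D_mask = nn.Sequential(                       # PatchGAN-style mask critic
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)

mse = nn.MSELoss()  # least-squares adversarial loss (an assumption)
l1 = nn.L1Loss()

def generator_loss(image, mask, lambda_cyc=10.0):
    fake_mask = G(image)
    # Adversarial term: generated masks should fool the mask discriminator.
    score = D_mask(fake_mask)
    adv = mse(score, torch.ones_like(score))
    # Cycle-consistency term: forward-backward translation should
    # reconstruct the original input in each domain.
    cyc = l1(F(fake_mask), image) + l1(G(F(mask)), mask)
    return adv + lambda_cyc * cyc

# Unpaired samples from the two domains (random stand-ins here).
img = torch.rand(2, 1, 32, 32)
msk = torch.rand(2, 1, 32, 32)
loss = generator_loss(img, msk)
loss.backward()  # gradients reach both generators jointly
```

In a full training loop the discriminators are updated in alternation with the generators, and a second discriminator judges reconstructed images in the image domain; both are omitted here for brevity.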

Figure 1. Overview of the Bub-CycleGAN framework, illustrating the computation of (a) the adversarial loss and (b) the cycle-consistency loss.

BubGAN data validation

To ensure the reliability of the training data, the proposed framework is built upon the BubGAN dataset, which provides physically consistent synthetic bubble images with well-controlled geometric properties. Bubble images are directly obtained from the BubGAN dataset, while the corresponding bubble masks are generated through post-processing based on the known geometric information. In the mask images, each bubble is represented by a white interior with a red boundary, and the background is set to black; this representation enables accurate extraction of bubble size and related geometric features. As shown in Fig. 5, the masks generated by the trained model closely match the reference masks.
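The mask rendering step can be sketched as follows. This is a simplified, hypothetical version of the post-processing: it assumes each bubble is described by an ellipse `(cx, cy, a, b)` in pixel coordinates (the actual layout of BubGAN's geometric information may differ) and rasterizes the white interior, red boundary, and black background described above.

```python
import numpy as np

def draw_bubble_mask(shape, bubbles, border=2):
    """Render a mask image: white bubble interiors, red boundaries,
    black background. `bubbles` is a list of (cx, cy, a, b) ellipse
    parameters in pixels -- a hypothetical encoding of the geometry."""
    h, w = shape
    mask = np.zeros((h, w, 3), dtype=np.uint8)  # black background (RGB)
    yy, xx = np.mgrid[0:h, 0:w]
    for cx, cy, a, b in bubbles:
        # Full ellipse footprint, painted red first.
        inside = ((xx - cx) / a) ** 2 + ((yy - cy) / b) ** 2 <= 1.0
        # Interior shrunk by `border` pixels per axis, painted white on top,
        # leaving a red ring of roughly `border` pixels at the boundary.
        core = (((xx - cx) / max(a - border, 1)) ** 2
                + ((yy - cy) / max(b - border, 1)) ** 2) <= 1.0
        mask[inside] = (255, 0, 0)
        mask[core] = (255, 255, 255)
    return mask

m = draw_bubble_mask((64, 64), [(32, 32, 10, 6)])
```

For overlapping bubbles a real pipeline would also need a rule for which boundary wins; the last-drawn bubble simply overwrites earlier ones in this sketch.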

Figure 2. Examples of paired images and corresponding masks at various void fractions (0.02, 0.04, 0.06, 0.08, and 0.10).

Figure 3. The pipeline for generating single-bubble masks.

Figure 4. The pipeline for generating multiple-bubble masks.
Figure 5. Training results for multiple bubbles at different void fractions.