

Technical sciences

UDC 621.396.7

Romaniuk Ihor

Master of Telecommunications and Radio Engineering

National Technical University of Ukraine

“Igor Sikorsky Kyiv Polytechnic Institute”

https://www.doi.org/10.25313/2524-2695-2023-4-14-19

MULTI-LAYER OPTIMIZATION OF GSM/UMTS/LTE NETWORKS: CLUSTER-LEVEL AND CELL-LEVEL INTEGRATED APPROACH

Summary. The prolonged coexistence of legacy and modern radio technologies has transformed contemporary cellular infrastructure into a layered communication environment where heterogeneous systems such as GSM, UMTS, and LTE operate within a shared spectral and interference space. Under these circumstances, traditional optimization techniques that treat individual base stations or isolated network segments as independent control units increasingly reveal their limitations. Fragmented parameter tuning frequently leads to local improvements that inadvertently propagate inefficiencies elsewhere in the network topology. Addressing this challenge requires analytical frameworks capable of perceiving the radio access network as an interconnected organism whose performance emerges from the collective behavior of clusters of cells rather than from the configuration of a single node.

This study proposes an integrated multi-layer optimization approach that synchronizes cluster-level network analysis with cell-level parameter adjustment within a unified feedback-driven control structure. The suggested method introduces a mathematical model that simultaneously evaluates key performance indicators, including throughput, latency, resource utilization, and user-perceived service quality, while dynamically adapting operational parameters such as transmission power, antenna configuration, and load distribution across neighboring cells. Unlike conventional optimization strategies that operate within a single analytical scale, the proposed framework interweaves macro-level traffic equilibrium with micro-level radio control decisions.

To investigate the practical viability of the model, a dedicated analytical software environment was developed using Python and applied to realistic signal measurements from the Cellular Network Analysis Dataset. The experimental evaluation demonstrates that the integrated optimization process enables a more balanced redistribution of network load, improves the stability of key performance indicators, and enhances overall service quality across heterogeneous radio layers. The results indicate that coupling cluster-oriented analytics with adaptive cell-level control creates a synergistic optimization effect that cannot be achieved through traditional single-layer methods.

The proposed approach contributes to the ongoing development of intelligent radio access network management by offering a structured pathway toward automated optimization of heterogeneous cellular infrastructures, particularly during transitional phases where legacy technologies coexist with newer broadband systems.

Key words: Cellular network optimization, multi-layer radio networks, GSM/UMTS/LTE integration, cluster-level optimization, cell-level parameter adaptation, radio access network management, QoE-driven optimization, heterogeneous mobile networks.

Introduction. Modern cellular infrastructures form a layered technological environment in which GSM, UMTS, and LTE systems operate simultaneously, creating a complex radio ecosystem shaped by interference, uneven traffic distribution, and dynamic mobility patterns. Within such conditions, network optimization can no longer be interpreted as a simple adjustment of isolated radio parameters; rather, it becomes a coordination problem involving interacting cells and clusters whose behavior collectively determines the perceived service quality.

In many operational scenarios, optimization procedures still examine cluster traffic conditions and cell parameters separately, which often leads to fragmented improvements and hidden performance imbalances across the network. This circumstance highlights the need for analytical approaches capable of synchronizing macro-level traffic organization with micro-level radio control. The present study therefore explores an integrated optimization perspective in which cluster-scale network behavior and cell-level parameter adaptation are treated as mutually dependent elements of a unified control process.

Recent research analysis. The body of prior research reveals a field that is rich in partial answers yet still short of a genuinely integrated optimization doctrine for heterogeneous cellular networks. Zaid and co-authors in [1] examined machine-learning-based handover decision mechanisms for cellular-connected drones and reviewed methods centered on supervised learning, deep learning, reinforcement learning, and hybrid schemes, with particular attention to high-mobility three-dimensional environments. The practical upshot of that review is not merely taxonomic: the authors show that deep reinforcement learning, especially combinations such as dueling double deep Q-network variants, is repeatedly reported as a promising route for improving handover reliability under fast signal fluctuation and dense small-cell conditions. The study matters because it demonstrates that mobility decisions can no longer be treated as static threshold logic once the network becomes spatially dynamic and multi-layered. At the same time, its unresolved gap is plain as daylight: the handover problem is treated chiefly as an aerial mobility problem rather than as one component of a broader multi-layer radio resource optimization framework. This limitation persists for objective reasons, namely the scarcity of operational datasets for aerial mobility and the extra complexity introduced by three-dimensional trajectories, and for subjective reasons, because much of the literature has been captivated by the novelty of drone use cases rather than by the harder, less glamorous task of joint optimization across legacy and modern terrestrial layers [1].

Mahamod and colleagues in [2] surveyed handover control parameter optimization for self-organizing 6G networks and concentrated on the tuning of Handover Margin and Time-to-Trigger within Mobility Robustness Optimization. Their analysis is methodologically grounded in comparing non-AI and AI-based MRO approaches, and the paper is especially useful because it links handover failures to measurable outcomes such as radio link failure, handover ping-pong, early handover, late handover, and wrong-cell handover. The authors also note concrete control ranges for TTT, from 0 ms to 5120 ms, and argue that AI-based solutions are more adaptive than classical algorithms under heterogeneous, ultra-dense, and mmWave-heavy deployments. The contribution is highly relevant because it confirms that parameter self-tuning is indispensable when network states shift too quickly for manual optimization. Yet the survey also concedes that no existing algorithm delivers an optimal solution across deployment scenarios. The unresolved issue therefore lies in the absence of a robust, adaptive mechanism capable of balancing multiple KPIs across different mobility profiles and radio layers. This remains unsolved because the optimization landscape is intrinsically multi-objective and non-stationary, while many published solutions still optimize a narrow slice of the problem, often using scenario-specific assumptions that do not travel well from one topology to another [2].

Rago and an extended research team in [3] shifted the discussion toward multi-layer terrestrial and non-terrestrial architectures for 6G, proposing integrated combinations of drones, HAPs, satellites, and terrestrial access in 3D single-connectivity and 3D multi-connectivity forms. Their method is architectural and standards-aware: they analyze 3GPP-compatible options, compare direct and indirect access modes, and frame the problem in terms of functional KPIs, service requirements, protocol stacks, and feasible integration pathways. A key measurable outcome of the study is the identification of concrete service-quality dimensions – coverage, capacity, resilience, latency sensitivity, and continuity of communications – together with the observation that multi-connectivity can improve reliability and backup capability, while simultaneously increasing management complexity and latency asymmetry. This work is valuable because it enlarges the very meaning of “multi-layer” beyond mere intra-RAN layering and shows that future optimization must span multiple spatial tiers. However, the study remains architecture-heavy and algorithm-light: it does not provide a unified operational method for translating these architectures into cluster-level and cell-level closed-loop optimization decisions. That lacuna survives because standards work naturally prioritizes feasibility and interoperability before optimization intelligence, and because cross-layer orchestration between terrestrial and non-terrestrial systems still faces unresolved governance, latency, and operator-model frictions [3].

Barakabitze and a collaborating researcher in [4] explored the evolution of next-generation communication infrastructures through the prism of QoE-oriented network softwarization, with particular attention to the roles of SDN, NFV, MEC environments, distributed cloud–edge ecosystems, and intelligent orchestration mechanisms designed to support multimedia traffic delivery. Although the work is formulated as a survey, it possesses considerable practical relevance because it consolidates several architectural paradigms for monitoring, managing, and allocating resources with explicit consideration of user-perceived service quality. The authors also summarize a set of anticipated performance indicators for future 6G environments, including peak transmission capacities approaching or exceeding 1 Tb/s, experienced user throughput in dense hotspot scenarios up to 10 Gb/s, ultra-low end-to-end latency within the range of 10-100 microseconds, support for mobility velocities near 1000 km/h, and service reliability targets close to 10⁻⁹. The importance of this investigation lies in the conceptual redirection it implicitly proposes: the effectiveness of network optimization should not be evaluated solely through conventional radio counters but also through the experiential dimension of service delivery perceived by end users. Nevertheless, a substantial research gap remains visible. While QoE-centric orchestration frameworks are discussed extensively at the level of service management and virtualized infrastructure, the concrete mechanisms by which radio-layer parameters should be dynamically adjusted remain largely undefined. Put differently, the study convincingly explains why intelligent orchestration frameworks are required, yet it provides limited guidance on how operational challenges such as cluster-level congestion, excessive cell coverage overlap, cross-layer load redistribution, or coordinated spectrum refarming might be optimized simultaneously within heterogeneous GSM/UMTS/LTE deployments. This unresolved aspect likely persists because communities focusing on service orchestration and those specializing in radio-access optimization frequently develop their solutions in parallel research streams rather than through tightly integrated methodological frameworks [4].

Ahmed and collaborators in [5] offered one of the most concrete predictive studies among the selected sources by proposing multivariate BTS power-failure prediction using CNN, LSTM, and hybrid CNN-LSTM models. The methodology is based on time-series learning over five months of data sampled every five minutes, transformed into multivariate tensors for deep predictive modeling. Their measurable results are strikingly explicit: the LSTM achieved MSE = 0.001 and RMSE = 0.037 with the best overall predictive quality, the CNN-LSTM also reached MSE = 0.001 with RMSE = 0.038, while the CNN lagged behind at MSE = 0.0223 and RMSE = 0.472; the CNN trained in under one hour, the LSTM in roughly three and a half hours, and the hybrid model in about two hours. The value of this study lies in showing that infrastructure-aware prediction can materially improve resilience, maintenance timing, and service continuity. Still, the unresolved issue is transferability: the authors themselves note the need for broader datasets, real-time O&M data, robustness testing across sites, and cost-effectiveness analysis. The reason this remains unresolved is both objective and mundane – telecom operational data are fragmented, noisy, and commercially sensitive – and methodological, since predictive success on one BTS subsystem does not automatically translate into holistic radio-network optimization [5].

Qadir and fellow researchers in [6] painted a broad 6G-IoT canvas, comparing generational capabilities and spelling out future requirements. Their method is an analytical survey of enabling technologies, use cases, and performance requirements for smart cities, AI-enabled IoT, edge computing, and advanced radio systems. The measurable outcomes they compile are useful benchmarks: 5G is framed by 20 Gbps peak data rate, 1 ms latency, mobility up to 500 km/h, and one million devices per square kilometer, whereas 6G is projected toward approximately 1 Tbps data rate, 1 ms or even microsecond-level latency, 100 bps/Hz spectral efficiency, mobility up to 1000 km/h, and full satellite integration. The study is relevant because it demonstrates that future optimization targets will be more unforgiving, more densely coupled, and more automation-hungry than those of previous generations. Yet its unresolved issue is that the paper remains intentionally panoramic; it does not descend into the gritty mechanics of KPI-driven control loops for legacy-to-LTE coexistence or for inter-layer refarming in live operator networks. The gap persists because visionary surveys are designed to set direction, not to solve parameterized optimization problems under operational constraints [6].

Damsgaard and associates in [7] surveyed approximate computing in B5G and 6G systems, focusing on cases where constrained quality degradation can yield computational or energy savings. The method consists in mapping algorithmic and system-level approximation techniques to network problems such as resource allocation and intelligent reflective surfaces. Their notable result is the recognition that many such optimization problems are NP-hard and therefore frequently addressed via heuristics, Successive Convex Approximation, or Reinforcement Learning. The significance of this study is subtle but important: it reminds us that future network optimization cannot be separated from computational tractability and energy efficiency. Even so, the unresolved question is how approximation strategies should be embedded into operational radio optimization without jeopardizing service assurance in mixed-technology networks. This remains unanswered because approximate computing literature is still more mature in algorithmic abstraction than in vendor-grade RAN control deployment, and because operators are understandably cautious about trading deterministic behavior for computational thrift [7].

Gallego-Madrid and co-workers in [8] reviewed machine-learning-based zero-touch network and service management and dissected the emergence of ZSM reference architectures, international standardization efforts, and ML-enabled orchestration functions. Their methods include architectural comparison and classification of ML applications across multi-tenancy management, traffic monitoring, and architecture coordination. The paper’s key result is not a single benchmark number but a systems-level finding: the ZSM paradigm is undergoing rapid growth, standardization bodies have already produced reference architectures, and ML is becoming a central mechanism for autonomous network control. This study is meaningful because it furnishes the governance and automation backdrop against which any serious multi-layer optimization framework must operate. Yet the unresolved issue is that ZSM remains broad-brush; it says much about orchestration logic and comparatively less about how radio KPIs should trigger concrete per-cell actions such as tilt retuning, power adaptation, neighbor cleanup, or refarming decisions. That gap survives because end-to-end autonomy is easier to sketch as an architecture than to implement as a stable closed-loop controller across heterogeneous radio domains [8].

Nassef and collaborators in [9] examined distributed machine learning for 5G and beyond, focusing on communication optimization, computation optimization, resource distribution, privacy, and security. The study is methodologically comparative, reviewing federated and split learning styles as well as compute frameworks and privacy-preserving mechanisms. One of its measurable conclusions is that future 5G-and-beyond systems will demand delays in the order of microseconds for near-real-time applications, while any viable distributed ML design must balance communication cost, computational overhead, privacy, and model accuracy. The importance of this contribution lies in its insistence that intelligence for future networks must itself be network-aware; the learning system cannot be divorced from the communication substrate it depends on. Yet unresolved issues abound: cross-optimization of heterogeneous resources remains open, privacy and security inject heavy overhead, and the trade-off between model performance and deployment efficiency has not been settled. These questions remain open because distributed intelligence introduces a second optimization problem on top of the radio problem itself [9].

Oughton and several co-authors in [10] reviewed wireless broadband technologies in what they call the peak smartphone era, comparing 6G against Wi-Fi 7 and Wi-Fi 8 and drawing policy consequences from slowing traffic growth. Their method is comparative techno-economic analysis, linking engineering capabilities to spectrum policy and lifecycle assessment. The paper reports a cluster of measurable indicators: emerging 6G KPIs are expected to include peak rates near 1 Tbps, latency of 0.1-1 ms, mobility of 500-1000 km/h, and reliability between 10⁻⁵ and 10⁻⁷, while Wi-Fi 7 is described as exceeding 40 Gbps theoretical peak throughput and relying on features such as 320 MHz channels and multi-link operation. The study is germane because it broadens the optimization discussion from internal RAN mechanics to the wider ecosystem of complementary wireless technologies and spectrum governance. Still, the unresolved issue is that the paper identifies the need for new usage datasets and revised spectrum assumptions rather than solving operator-level multi-layer optimization. This incompleteness is not a flaw so much as a boundary condition: policy-oriented reviews illuminate the stage, but they do not direct the play [10].

Ray in [11] addressed 6G for space-air-ground integrated networks and proposed a layered vision in which satellites, aerial platforms, and terrestrial elements cooperate under a heterogeneous service model. The study uses review-based synthesis of enabling technologies, layered architectural principles, and open challenges. It provides several measurable technological reference points: envisaged 6G-SAGIN characteristics include peak data rates above 1 Tbps, mobility support beyond 1200 km/h, and reliability of 99.99999%; the cited space access network can deliver roughly 506 Mbps over 100-6000 km, medium-earth-orbit links can reach about 1 Gbps, deep-space optical communication is described as improving communication performance by up to 100 times with an average data rate of 292 kbps, and optical payload approaches are associated with about 50 Mbps. The study matters because it articulates the layered logic of future integrated networks and explicitly calls for SON-style self-optimization under heterogeneous conditions. Yet the unresolved issue is that the proposed vision still lacks a grounded operational bridge between layer-aware architecture and KPI-driven radio control in mixed terrestrial cellular systems. This persists because SAGIN research is still in a formative stage, where architecture, coverage, and interoperability dominate the agenda more than day-to-day optimization routines [11].

Taken together, these studies leave a discernible and rather stubborn residue of unsolved questions. What remains unresolved is not whether AI, SON, distributed learning, QoE orchestration, non-terrestrial integration, or predictive maintenance are useful; the literature has already made that abundantly clear. The unresolved problem is that these strands have not yet been woven into a single operational framework that jointly optimizes heterogeneous mobile networks across layers, timescales, and control granularity. Existing works either optimize mobility without full resource integration, or discuss architecture without executable control logic, or model service quality without tying it to radio actuation, or solve narrow prediction tasks without embedding them into end-to-end network optimization. The reasons are both objective and subjective: objective, because real operator data are difficult to obtain, heterogeneous networks are intrinsically non-convex and multi-objective, and standards evolve more slowly than academic prototypes; subjective, because many studies favor fashionable subproblems – drones, 6G visioning, zero-touch slogans, NTN enthusiasm – over the less theatrical but more consequential task of building integrated optimization loops for legacy and modern radio layers alike.

Hence, the general unsolved problem may be formulated as follows: despite substantial progress in handover intelligence, autonomous orchestration, QoE-aware control, predictive analytics, distributed learning, and multi-tier architecture design, there is still no sufficiently formalized and experimentally grounded approach for integrated multi-layer optimization of heterogeneous cellular networks that would couple cluster-level KPI assessment with cell-level parameter adaptation across interacting radio technologies and deployment layers. From this unresolved problem, the research purpose follows in a straight line: to develop an integrated optimization method for multi-layer GSM/UMTS/LTE networks that unifies macro-level cluster analysis and micro-level cell tuning within one feedback-driven framework, so that improvements in throughput, latency, handover stability, resource utilization, and service continuity are achieved not in isolation, but as parts of one coordinated optimization process.

Presentation of the main research results. Within the conceptual framework of the proposed study, the cellular radio network is interpreted not merely as a set of isolated base stations but as a hierarchically coupled system consisting of interacting clusters of cells and individual radio cells, each influencing the global quality-of-service landscape through shared spectrum, interference propagation, and load redistribution. In contrast to conventional optimization procedures that examine either cluster-level traffic conditions or cell-level radio parameters in isolation, the developed mathematical formulation introduces an integrated optimization loop in which macro-scale and micro-scale control variables co-evolve through iterative feedback. The novelty of the approach resides in the simultaneous consideration of cluster congestion dynamics and individual cell parameter adaptation within a single objective framework, thereby allowing the optimization mechanism to respond to emergent network imbalances that would otherwise remain concealed within traditional single-layer control strategies.

Let $C_k$ denote the set of cells belonging to a cluster $k$, while $\mathbf{x}_i$ represents the vector of performance indicators observed for cell $i$, including throughput $T_i$, latency $L_i$, packet loss $P_i$, and resource utilization $U_i$. The integrated network performance function is formulated as a weighted aggregate reflecting the collective service state of the cluster:

$$F_k = \sum_{i \in C_k} \bigl( \alpha T_i - \beta L_i - \gamma P_i - \delta U_i \bigr),$$

where coefficients $\alpha, \beta, \gamma, \delta \geq 0$ regulate the relative influence of throughput maximization and degradation penalties. This expression represents the cluster-level performance potential, which serves as the supervisory signal guiding global optimization.

At the finer granularity of individual cells, a vector of adaptive control parameters $\theta_i$ describes the operational state of the radio node, incorporating antenna tilt, transmit power, neighbor relations, and load balancing coefficients. The adjustment of these parameters is governed by a feedback-driven rule designed to gradually steer the system toward improved cluster performance:

$$\theta_i^{(t+1)} = \theta_i^{(t)} + \mu \,\frac{\partial F_k}{\partial \theta_i},$$

where $\mu$ denotes the adaptation coefficient and $\partial F_k / \partial \theta_i$ expresses the sensitivity of the cluster objective with respect to local parameter variation. This formulation effectively binds local cell behavior to the macroscopic condition of the surrounding cluster, thereby preventing optimization actions that would locally improve a single cell while inadvertently degrading neighboring performance.

To quantify the final user-perceived outcome, the model further introduces an aggregated network quality indicator defined as

$$Q_k = \frac{1}{|C_k|} \sum_{i \in C_k} \frac{T_i}{L_i + \varepsilon},$$

where $\varepsilon > 0$ is a small stabilizing constant preventing singularities in low-latency regimes. Unlike many earlier formulations that treat radio indicators independently, this representation captures the intertwined nature of throughput and delay, providing a compact yet expressive measure of network service experience.
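For illustration, the three expressions above admit a direct numerical transcription. The following Python sketch (using numpy) implements them; the coefficient values and the five-cell state are arbitrary demonstration numbers, not values prescribed by the model or taken from the experimental configuration.

```python
import numpy as np

def cluster_performance(T, L, P, U, alpha=1.0, beta=0.5, gamma=0.5, delta=0.3):
    """Cluster-level potential F_k: weighted throughput reward minus
    weighted degradation penalties, summed over the cells of the cluster."""
    return float(np.sum(alpha * T - beta * L - gamma * P - delta * U))

def update_parameters(theta, sensitivity, mu=0.05):
    """Feedback rule: theta(t+1) = theta(t) + mu * dF_k/dtheta."""
    return theta + mu * sensitivity

def quality_indicator(T, L, eps=1e-3):
    """Aggregated quality Q_k: mean throughput-to-latency ratio,
    stabilized by eps in low-latency regimes."""
    return float(np.mean(T / (L + eps)))

# Illustrative five-cell cluster (arbitrary numbers, not from the dataset)
T = np.array([12.0, 8.5, 20.1, 5.2, 15.7])    # throughput, Mbit/s
L = np.array([45.0, 60.0, 30.0, 80.0, 40.0])  # latency, ms
P = np.array([0.02, 0.05, 0.01, 0.08, 0.03])  # packet loss ratio
U = np.array([0.70, 0.85, 0.55, 0.95, 0.60])  # resource utilization
print(cluster_performance(T, L, P, U), quality_indicator(T, L))
```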

The scientific originality of the proposed model lies in the introduction of a bi-level optimization structure that intertwines cluster-scale traffic equilibrium with cell-scale parameter self-adjustment through a unified objective function and feedback gradient mechanism. By doing so, the model transforms what is traditionally a sequence of loosely connected optimization tasks into a cohesive dynamical system capable of continuously redistributing load, mitigating congestion, and enhancing overall network responsiveness in heterogeneous GSM/UMTS/LTE environments. In essence, the network ceases to behave as a collection of separately tuned radio nodes and instead begins to resemble a coordinated adaptive organism whose internal parameters evolve in response to the collective operational state of the entire cluster.

In order to substantiate and empirically probe the validity of the proposed mathematical formulation, a dedicated experimental software instrument was developed. The program constitutes a specialized analytical environment designed to reproduce the operational behavior of heterogeneous cellular radio networks and to emulate the iterative optimization procedure described by the mathematical model. In essence, the developed software system acts as a computational testbed in which cluster-level traffic dynamics and cell-level radio parameter adjustments can be examined in a tightly coupled feedback loop. The conceptual structure of the implemented solution is illustrated in Fig. 1, which schematically depicts the architecture of the developed analytical framework.

Fig. 1. Architecture of the software framework for integrated cluster-level and cell-level optimization of GSM/UMTS/LTE networks

Source: developed by the author

The implemented program was written in the Python programming language and executed within the Visual Studio Code development environment. The architecture of the software system follows a modular analytical pipeline that sequentially processes network measurements, derives cluster-level performance indicators, and subsequently performs parameter adjustment at the level of individual radio cells. From a structural perspective, the system consists of several logically interrelated modules responsible for data ingestion, preprocessing, KPI extraction, cluster analysis, optimization control, and visualization of analytical outputs. The initial stage of program execution involves the loading of measurement data and the transformation of raw signal observations into structured performance indicators representing the operational state of the network. These indicators include throughput, latency, radio signal strength, and additional metrics that collectively describe the functional behavior of individual radio cells.
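A condensed sketch of this pipeline, reduced to its stage boundaries, might look as follows; the function names and the column labels (cell_id, throughput, latency, signal_strength) are illustrative assumptions rather than the exact interfaces of the implemented program.

```python
import pandas as pd

def ingest(path: str) -> pd.DataFrame:
    """Stage 1: load raw signal measurements from disk."""
    return pd.read_csv(path)

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Stage 2: discard incomplete records before KPI extraction."""
    return df.dropna().reset_index(drop=True)

def extract_kpis(df: pd.DataFrame, key: str = "cell_id") -> pd.DataFrame:
    """Stage 3: collapse raw samples into one KPI row per cell."""
    return (df.groupby(key)
              .agg(throughput=("throughput", "mean"),
                   latency=("latency", "mean"),
                   signal=("signal_strength", "mean"))
              .reset_index())

def run_pipeline(path: str) -> pd.DataFrame:
    """Chain stages 1-3; cluster analysis, optimization control,
    and visualization operate on the returned per-cell KPI table."""
    return extract_kpis(preprocess(ingest(path)))
```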

Following the preprocessing phase, the system performs cluster-level analytical evaluation. Within this stage, cells are grouped according to spatial or performance similarity, forming operational clusters that represent macro-scale fragments of the network topology. The cluster analysis module evaluates aggregated performance indicators and identifies regions of potential congestion, spectral imbalance, or uneven traffic distribution. Instead of treating each cell as an isolated optimization target, the program interprets clusters as emergent network subsystems whose internal behavior influences the overall quality of service. This interpretation allows the optimization mechanism to capture phenomena such as load spillover, interference propagation, and resource competition between neighboring cells.
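One plausible realization of such grouping, sketched here under the assumption that per-cell coordinates and KPI summaries are available as a pandas DataFrame (the feature names are placeholders), is k-means clustering over standardized spatial and performance features:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def assign_clusters(cells: pd.DataFrame, n_clusters: int = 8) -> pd.DataFrame:
    """Group cells into operational clusters by spatial and performance
    similarity (k-means over standardized features)."""
    features = cells[["latitude", "longitude", "throughput", "latency"]]
    scaled = StandardScaler().fit_transform(features)
    labeled = cells.copy()
    labeled["cluster"] = KMeans(n_clusters=n_clusters, n_init=10,
                                random_state=0).fit_predict(scaled)
    return labeled
```

Standardization matters here because geographic coordinates and radio KPIs live on incompatible scales; without it the clustering would be dominated by whichever feature happens to have the largest numeric range.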

The subsequent stage of the program involves cell-level adaptive adjustment. Here the algorithm iteratively modifies the operational parameters associated with each radio node, including transmission power coefficients, load balancing factors, and logical neighbor relations. The adjustment process is not performed arbitrarily but is guided by the global performance function derived from the mathematical model. In practical terms, the program evaluates the sensitivity of cluster performance to parameter changes at the level of individual cells and gradually updates the corresponding variables. Through this iterative process the system attempts to converge toward a configuration in which cluster-wide performance indicators demonstrate improved equilibrium while preventing the deterioration of neighboring cells.
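Because an analytical gradient of the cluster objective is generally unavailable in measurement-driven settings, the sensitivity term can be approximated numerically. The sketch below uses central finite differences; the objective callable, step size, adaptation coefficient, and iteration count are illustrative assumptions, not the program's actual settings.

```python
import numpy as np

def estimate_sensitivity(objective, theta, h=1e-2):
    """Central finite-difference estimate of dF_k/dtheta per parameter."""
    grad = np.zeros_like(theta)
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = h
        grad[j] = (objective(theta + step) - objective(theta - step)) / (2 * h)
    return grad

def adapt_cell(objective, theta, mu=0.05, iterations=50):
    """Iteratively steer one cell's control vector (tilt, power,
    load-balancing weights) toward higher cluster performance."""
    for _ in range(iterations):
        theta = theta + mu * estimate_sensitivity(objective, theta)
    return theta

# Toy usage: a concave objective maximized at theta = (1, 2)
theta_opt = adapt_cell(lambda t: -np.sum((t - np.array([1.0, 2.0]))**2),
                       np.zeros(2))
print(theta_opt)  # approaches [1., 2.]
```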

A distinctive aspect of the developed software framework lies in the bidirectional information exchange between the macro-scale and micro-scale components of the system. Cluster analysis informs the adjustment strategy applied to individual cells, while updated cell parameters immediately influence cluster-level metrics during the next computational iteration. This cyclic interaction forms a closed analytical loop that approximates the behavior of a self-adaptive radio network. In other words, the program operationalizes the conceptual premise that cellular networks should be optimized not through isolated parameter tuning but through coordinated adjustments that consider the broader network environment.

The final stage of the program execution involves analytical interpretation and visualization of intermediate computational states. During the simulation process the program generates multiple graphical artefacts, including correlation matrices, KPI distribution diagrams, cluster priority maps, convergence curves, and comparative scenario charts. These visualizations serve as analytical instruments enabling the observation of network dynamics as the optimization process unfolds. Importantly, the visualization subsystem does not merely present static statistics but provides a chronological representation of how network indicators evolve during successive optimization iterations.
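A minimal sketch of the convergence visualization might look as follows, assuming the optimization loop records per-iteration averages in a simple dictionary (the history structure is a hypothetical convention, not the program's actual interface):

```python
import matplotlib.pyplot as plt

def plot_convergence(history):
    """Plot per-iteration traces; 'history' is assumed to be a dict of
    lists, e.g. {'qoe': [...], 'throughput': [...]}."""
    fig, ax1 = plt.subplots()
    ax1.plot(history["qoe"], color="tab:blue", label="Average QoE")
    ax1.set_xlabel("Iteration")
    ax1.set_ylabel("QoE score")
    ax2 = ax1.twinx()  # second axis so both traces share one figure
    ax2.plot(history["throughput"], color="tab:orange",
             label="Average throughput")
    ax2.set_ylabel("Throughput, Mbit/s")
    fig.legend(loc="lower right")
    plt.show()
```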

An essential element of the experimental environment is the dataset used to emulate realistic network behavior. For the purposes of the computational experiment the Cellular Network Analysis Dataset available on the Kaggle platform was employed [12]. This dataset contains signal measurements representing realistic operating conditions of contemporary mobile networks, including 3G, 4G, 5G, and LTE communication technologies. The measurements were collected using a heterogeneous experimental setup that combines DragonOS running on the Steam Deck platform with Spike LTE analysis software, a BB60C spectrum analyzer, and software-defined radio infrastructure based on the bladeRF xA9 device. In addition, the srsRAN platform was used to emulate base station functionality, thereby allowing the capture of detailed radio measurements under controlled experimental conditions. The dataset includes representative signal metrics recorded across multiple locations in Bihar, India, providing a diverse set of observations that reflect different propagation environments and network states. Such measurements constitute a valuable empirical basis for evaluating the behavior of optimization algorithms because they incorporate realistic fluctuations in signal quality, interference levels, and traffic patterns. Consequently, the dataset enables the developed program to approximate real-world network conditions rather than relying on artificially generated traffic patterns [12].
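Accessing the dataset for such an experiment reduces to standard pandas calls; the local file name below is an assumption about how the downloaded Kaggle archive is stored, and the schema should be inspected before mapping columns onto KPI names.

```python
import pandas as pd

# Assumed local file name after downloading the Kaggle dataset;
# adjust the path to the actual archive contents.
df = pd.read_csv("cellular_network_analysis.csv")

# Inspect the schema and value ranges before KPI mapping.
print(df.columns.tolist())
print(df.describe(include="all").T.head(10))
```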

Taken together, the developed software environment forms a computational laboratory in which the theoretical assumptions of the proposed optimization model can be explored in practice. By integrating cluster-level analytical reasoning with cell-level parameter adaptation and by processing realistic signal measurements, the program establishes a reproducible experimental framework for investigating the behavior of multi-layer cellular networks under dynamic optimization conditions.

The experimental evaluation of the proposed multi-layer optimization framework reveals several nontrivial behavioral patterns in the modeled radio network, particularly when cluster-scale traffic dynamics and cell-level adjustments interact through the feedback mechanism described earlier. The initial state of the network environment is illustrated in fig. 2, which depicts the statistical distribution of observed throughput across different radio technologies.

Fig. 2. Observed throughput distribution by radio technology

Source: generated using the author-developed software

As can be observed in fig. 2, the throughput characteristics of the heterogeneous radio environment display a markedly uneven structure. Legacy layers such as 3G exhibit relatively low mean throughput values, while 4G demonstrates a moderate capacity level, and 5G manifests the highest potential bandwidth. The spread between mean, median, and maximum throughput values indicates a pronounced variability of radio conditions, which is consistent with the heterogeneous propagation environments captured in the dataset. Such disparity is not merely a descriptive artifact; it establishes the operational context in which optimization mechanisms must operate. In other words, the algorithm is compelled to reconcile layers with drastically different performance envelopes, rather than simply equalizing homogeneous resources.

The spatial heterogeneity of the network is further illustrated in fig. 3, where the geographical distribution of cells and their baseline quality-of-experience indicators are presented.

Fig. 3. Inferred cell layer distribution and spatial QoE heterogeneity

Source: generated using the author-developed software

The visualization reveals that QoE values vary considerably across the geographical domain, forming localized “islands” of higher and lower service quality. These irregular patterns underscore the intrinsic complexity of radio resource management: network performance is not uniformly distributed but instead exhibits spatial discontinuities influenced by interference, load concentration, and technology-layer coexistence. The existence of such heterogeneity strongly motivates the use of optimization approaches capable of addressing both cluster-scale imbalance and cell-level anomalies simultaneously.

A deeper examination of cluster conditions is presented in fig. 4, where clusters are ranked according to their calculated stress or risk indicators before and after optimization.

Fig. 4. Cluster stress ranking before and after optimization

Source: generated using the author-developed software

The comparison suggests a systematic reduction in cluster stress levels following the application of the integrated optimization framework. The baseline configuration demonstrates relatively elevated risk values across all clusters, indicating uneven load distribution and localized congestion tendencies. After optimization, however, these values converge toward a narrower and lower interval. This observation implies that the algorithm effectively redistributes network pressure across clusters, thereby mitigating structural imbalance. Importantly, the reduction is not confined to a single cluster but occurs across the entire network topology, which hints at the presence of a coordinated global adjustment rather than isolated local improvements.

A more holistic comparison between optimization strategies is depicted in fig. 5, which evaluates baseline performance alongside traditional cluster-level heuristics, traditional cell-level heuristics, and the proposed integrated multi-layer method.

Fig. 5. Comparative performance of optimization strategies

Source: generated using the author-developed software

The results displayed in fig. 5 reveal a gradual but unmistakable progression of performance improvements across the examined strategies. Traditional heuristics focusing exclusively on cluster-level adjustments yield moderate gains, primarily by redistributing traffic loads. Cell-level heuristics demonstrate slightly stronger improvements due to their ability to refine local radio parameters. Nevertheless, the most substantial enhancement emerges when both levels of optimization operate in concert. The integrated method produces the highest throughput values, the most favorable QoE scores, and the strongest reliability proxy indicators. This outcome indicates that the interaction between macro-scale traffic balancing and micro-scale parameter adaptation generates a synergistic effect that neither method can achieve independently.

The temporal behavior of the optimization procedure is illustrated in fig. 6, which shows the convergence trajectory of the algorithm during successive iterations.

Fig. 6. Integrated optimization convergence

Source: generated using the author-developed software

The convergence curve demonstrates a steady and monotonic improvement of both the average QoE score and the average throughput indicator. Notably, the improvement trajectory does not exhibit oscillatory or unstable behavior, which implies that the feedback mechanism embedded in the optimization algorithm operates within a stable control regime. From a methodological perspective, this stability is particularly important because optimization strategies in radio networks can easily lead to undesirable oscillations when local adjustments propagate across neighboring cells. The gradual ascent of the performance curves suggests that the algorithm successfully balances adaptation speed with system stability.

The structural relationships between key performance indicators are summarized in fig. 7, which presents the correlation matrix calculated from the baseline network state.

Fig. 7. Baseline KPI correlation matrix

Source: generated using the author-developed software

Several correlations stand out in this matrix. Throughput demonstrates a strong positive relationship with QoE, which is intuitively expected, while latency and QoE exhibit a pronounced negative correlation. Interestingly, the matrix also reveals a significant inverse relationship between handover success rate and several congestion-related metrics, indicating that mobility management and resource utilization are tightly intertwined. These correlations justify the use of a composite optimization objective rather than isolated KPI adjustments, since modifying one indicator inevitably influences others.
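A correlation matrix of this kind is straightforward to reproduce from a per-cell KPI table; in the sketch below the KPI column names (throughput, latency, qoe, handover_success, utilization) are assumed placeholders rather than the dataset's actual field names.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

def plot_kpi_correlations(kpis: pd.DataFrame) -> None:
    """Pearson correlation heatmap over the baseline KPI columns."""
    corr = kpis.corr(method="pearson")
    sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm",
                vmin=-1.0, vmax=1.0, square=True)
    plt.title("Baseline KPI correlation matrix")
    plt.tight_layout()
    plt.show()
```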

The technological composition of clusters is illustrated in fig. 8, which shows the distribution of radio technologies within each cluster.

Fig. 8. Cluster-layer technology composition matrix

Source: generated using the author-developed software

This matrix highlights the multi-layer nature of the analyzed network environment. Some clusters are dominated by a single technology layer, while others contain a mixture of 3G, 4G, and 5G cells. Such heterogeneous composition inevitably complicates optimization, because each technology layer possesses distinct spectral characteristics, capacity limits, and interference behaviors. Consequently, optimization strategies that operate exclusively within one technological layer cannot fully exploit the available radio resources. The proposed multi-layer framework explicitly addresses this issue by considering interactions between layers during the optimization process.

Finally, the cumulative distribution of QoE values under different optimization scenarios is depicted in fig. 9.

Fig. 9. QoE distribution shift after optimization

Source: generated using the author-developed software

The curves clearly demonstrate a rightward shift of the QoE distribution when the integrated multi-layer optimization is applied. In practical terms, this means that a larger fraction of users experiences higher quality-of-service levels after the optimization procedure. The improvement is not restricted to the upper tail of the distribution but is observable across the entire range of QoE values. This phenomenon indicates that the algorithm does not merely improve already favorable conditions but also elevates lower-performing segments of the network.
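The distribution shift itself can be verified with a plain empirical CDF comparison; the sketch below assumes two arrays holding per-user (or per-cell) QoE scores before and after optimization.

```python
import numpy as np
import matplotlib.pyplot as plt

def ecdf(values):
    """Empirical CDF: sorted values versus cumulative fraction."""
    x = np.sort(values)
    y = np.arange(1, x.size + 1) / x.size
    return x, y

def compare_qoe(before, after):
    """Overlay baseline and optimized QoE distributions; a rightward
    shift of the optimized curve indicates broad QoE improvement."""
    for label, scores in (("baseline", before), ("integrated", after)):
        x, y = ecdf(np.asarray(scores, dtype=float))
        plt.step(x, y, where="post", label=label)
    plt.xlabel("QoE score")
    plt.ylabel("Cumulative fraction")
    plt.legend()
    plt.show()
```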

Taken together, the experimental observations suggest that the proposed optimization model operates as a coordinated bi-level control mechanism capable of harmonizing cluster-level traffic equilibrium with cell-level parameter refinement. Traditional optimization strategies often address these two dimensions separately, which inevitably leaves certain inefficiencies unresolved. By contrast, the integrated framework allows macro-scale congestion indicators to guide micro-scale adjustments, while local improvements feed back into the cluster-level performance evaluation. The result is a dynamically balanced network configuration in which improvements in throughput, latency, reliability, and user-perceived quality emerge simultaneously rather than sequentially.

From an analytical standpoint, this behavior strongly supports the conclusion that the proposed multi-layer optimization model constitutes a more effective solution for heterogeneous GSM/UMTS/LTE networks than conventional single-layer heuristics. The model does not merely adjust individual parameters; instead, it orchestrates a coordinated reconfiguration of the network state, thereby enabling the system to evolve toward a more balanced and resilient operational regime.

Conclusions and future research. The study demonstrates that treating heterogeneous cellular infrastructure as a hierarchically coordinated system rather than as a loose collection of independently tuned radio nodes opens a viable pathway toward more balanced network behavior. The proposed multi-layer optimization framework reveals that cluster-level traffic equilibrium and cell-level parameter adaptation can operate as mutually reinforcing mechanisms when embedded within a single feedback-driven analytical loop. Such a configuration allows the network to gradually gravitate toward a more stable operational regime in which improvements in throughput, service continuity, and user-perceived quality arise not through isolated parameter corrections but through systemic reconfiguration of the radio environment.

From a forward-looking perspective, several research avenues naturally unfold from the present work. One promising direction involves extending the proposed framework toward real-time network control environments and integrating predictive intelligence capable of anticipating congestion patterns before they materialize in observable KPI degradation. Another fertile trajectory lies in broadening the optimization domain to include emerging heterogeneous infrastructures that combine terrestrial cellular layers with non-terrestrial or edge-assisted communication elements. Pursuing these directions may ultimately transform multi-layer optimization from a predominantly analytical exercise into a practical foundation for self-adaptive radio access networks capable of autonomously orchestrating complex heterogeneous deployments.

References

  1. Zaid, M., Kadir, M. K. A., Shayea, I., & Mansor, Z. (2024). Machine learning-based approaches for handover decision of cellular-connected drones in future networks: A comprehensive review. Engineering Science and Technology, an International Journal, 55, 101732. https://doi.org/10.1016/j.jestch.2024.101732
  2. Mahamod, U., Mohamad, H., Shayea, I., Othman, M., & Asuhaimi, F. A. (2023). Handover parameter for self-optimisation in 6G mobile networks: A survey. Alexandria Engineering Journal, 78, 104–119. https://doi.org/10.1016/j.aej.2023.07.015
  3. Rago, A., Guidotti, A., Piro, G., Cianca, E., Vanelli-Coralli, A., Morosi, S., Virone, G., Brasca, F., Troscia, M., Settembre, M., Pierucci, L., Matera, F., De Sanctis, M., Pizzi, S., & Grieco, L. A. (2024). Multi-layer NTN architectures toward 6G: The ITA-NTN view. Computer Networks, 254, 110725. https://doi.org/10.1016/j.comnet.2024.110725
  4. Barakabitze, A. A., & Walshe, R. (2022). SDN and NFV for QoE-driven multimedia services delivery: The road towards 6G and beyond networks. Computer Networks, 214, 109133. https://doi.org/10.1016/j.comnet.2022.109133
  5. Ahmed, S., Terefe, T., & Hailemariam, D. (2024). Machine learning for base transceiver stations power failure prediction: A multivariate approach. E-Prime – Advances in Electrical Engineering, Electronics and Energy, 10, 100814. https://doi.org/10.1016/j.prime.2024.100814
  6. Qadir, Z., Le, K. N., Saeed, N., & Munawar, H. S. (2023). Towards 6G Internet of Things: Recent advances, use cases, and open challenges. ICT Express, 9(3), 296–312. https://doi.org/10.1016/j.icte.2022.06.006
  7. Damsgaard, H. J., Ometov, A., Mowla, M. M., Flizikowski, A., & Nurmi, J. (2023). Approximate computing in B5G and 6G wireless systems: A survey and future outlook. Computer Networks, 233, 109872. https://doi.org/10.1016/j.comnet.2023.109872
  8. Gallego-Madrid, J., Sanchez-Iborra, R., Ruiz, P. M., & Skarmeta, A. F. (2022). Machine learning-based zero-touch network and service management: a survey. Digital Communications and Networks, 8(2), 105–123. https://doi.org/10.1016/j.dcan.2021.09.001
  9. Nassef, O., Sun, W., Purmehdi, H., Tatipamula, M., & Mahmoodi, T. (2022). A survey: Distributed Machine Learning for 5G and beyond. Computer Networks, 207, 108820. https://doi.org/10.1016/j.comnet.2022.108820
  10. Oughton, E., Geraci, G., Polese, M., Shah, V., Bubley, D., & Blue, S. (2024). Reviewing wireless broadband technologies in the peak smartphone era: 6G versus Wi-Fi 7 and 8. Telecommunications Policy, 48(6), 102766. https://doi.org/10.1016/j.telpol.2024.102766
  11. Ray, P. P. (2022). A review on 6G for space-air-ground integrated network: Key enablers, open challenges, and future direction. Journal of King Saud University – Computer and Information Sciences, 34(9), 6949–6976. https://doi.org/10.1016/j.jksuci.2021.08.014
  12. Cellular Network Analysis Dataset. Kaggle. https://www.kaggle.com/datasets/suraj520/cellular-network-analysis-dataset
