Authors

Mozolevskyi Dmytro


Author's articles

Self-Healing System Design: Architectural Patterns for Autonomous Recovery in Cloud-Native Applications

Author: Mozolevskyi Dmytro

Bibliographic description of the article for citation:

Mozolevskyi D. Self-Healing System Design: Architectural Patterns for Autonomous Recovery in Cloud-Native Applications // Наука онлайн: Международный научный электронный журнал. - 2023. - №9. - https://nauka-online.com/ru/publications/information-technology/2023/9/05-27/

Abstract: This article analyzes architectural patterns that enable autonomous recovery in cloud-native systems, which are essential for maintaining high availability and performance. Three primary patterns are examined: Redundancy & Replication, Proactive Recovery, and Auto-Scaling. The study evaluates their effectiveness using real-world data and provides a comparative assessment based on metrics such as cost reduction and performance improvement. The analysis underscores the necessity of these patterns for managing the operational complexity of modern distributed systems. Recommendations are provided for implementing these strategies to enhance the reliability and cost-efficiency of cloud applications.

Understanding the causes of hallucinations in large language models

Author: Mozolevskyi Dmytro

Bibliographic description of the article for citation:

Mozolevskyi D. Understanding the causes of hallucinations in large language models // Наука онлайн: Международный научный электронный журнал. - 2024. - №9. - https://nauka-online.com/ru/publications/information-technology/2024/9/03-35/

Abstract: Hallucinations in large language models (LLMs) are a systemic problem that manifests when models generate information that does not correspond to the ground truth or the input data. This phenomenon significantly limits the application of LLMs in mission-critical domains such as medicine, law, research, and journalism, where the accuracy and reliability of information are of utmost importance. This paper provides a comprehensive analysis of three key factors that contribute to hallucinations: issues related to the quality and structure of training data; architectural features of transformer models that predispose them to error accumulation; and the lack of built-in fact-checking mechanisms, which forces models to rely solely on statistical regularities. Each of these factors is discussed in detail with reference to relevant research, and potential solutions are proposed. The paper includes three dedicated graphs that visualize the relationship between various model parameters and the occurrence of hallucinations. The results of the study indicate the need for a comprehensive approach to improving LLMs, combining better data preprocessing methods, modifications to the model architecture, and the introduction of additional verification mechanisms.
