Regret Function Minimization Algorithms

Bibliographic Details
Published in Kìbernetika ta komp'ûternì tehnologìï (Online), no. 3, pp. 53-58
Main Authors Baractari, Anatolie; Chumakov, Borys; Godonoaga, Anatol
Format Journal Article
Language English
Published V.M. Glushkov Institute of Cybernetics, 29.09.2025
Subjects decision-making; uncertainty; regret minimization; Savage function; subgradient method; convex optimization
Online Access Get full text
ISSN 2707-4501
EISSN 2707-451X
DOI 10.34229/2707-451X.25.3.4


Abstract
Introduction. The article addresses the problem of making optimal decisions under uncertainty by minimizing the Savage regret function. This function, which evaluates the difference between the actual outcome and the best possible outcome across all states of nature, is particularly useful in contexts where decision-makers are risk-averse and sensitive to potential losses. The authors propose two algorithms grounded in the subgradient projection method, adapted to handle a large number of possible states and constraints in a computationally efficient manner. Unlike classical approaches, in which all states of nature and constraints are evaluated at each iteration, the developed algorithms operate on reduced, representative subsets. This significantly lowers the computational load without sacrificing convergence to the optimal solution. The first algorithm deals with problems that include uncertainty but are unconstrained or lightly constrained; the second extends the approach to more general constraint sets. In both cases, convergence guarantees are established under specific conditions on the step sizes and the selection sequences. The paper highlights how the algorithms avoid computing exact values of the Savage function and its subgradients by using approximations that converge to the true values. This enhances the applicability of the methods to real-world scenarios where exact modeling of every possible state of nature is infeasible. Future research will focus on implementing and testing these methods on practical problems from economics and operations research, where decision-making under uncertainty is frequent and the number of constraints or states is high. These algorithms offer a practical balance between computational feasibility and decision quality.
Purpose. The purpose of the paper is to develop and justify effective algorithms for regret function minimization in decision-making under uncertainty, ensuring convergence with reduced computational effort by operating on selected subsets of constraints and states.
Results. Two algorithms are proposed, each using a modified subgradient projection technique. The first addresses unconstrained or lightly constrained decision problems, while the second handles more general constraints. In both cases, convergence to the optimal regret-minimizing solution is proven theoretically. The computational complexity is significantly reduced without loss of accuracy, thanks to the iterative use of partial subsets at each step.
Conclusions. The proposed algorithms provide a reliable and efficient framework for solving regret-based decision problems. They are particularly suitable for large-scale applications where traditional methods are computationally intensive or infeasible. The techniques allow approximation of regret values and subgradients with sufficient accuracy to guarantee convergence.
Keywords: decision-making, uncertainty, regret minimization, Savage function, subgradient method, convex optimization.
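For orientation, the minimax-regret (Savage) criterion referred to in the abstract can be written in generic notation; the symbols below are illustrative and are not taken from the paper. If f(x, s) denotes the outcome of a decision x from a feasible set X under a state of nature s from a set S, the regret of x in state s and the regret function to be minimized are

    r(x, s) = \max_{y \in X} f(y, s) - f(x, s), \qquad
    R(x) = \max_{s \in S} r(x, s), \qquad
    x^{*} \in \arg\min_{x \in X} R(x).

When f(\cdot, s) is concave in x for every state s, R is convex, which is what makes subgradient-type methods applicable.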
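The subset-based subgradient projection idea described in the abstract can be illustrated with a minimal sketch. The Python/NumPy code below is an illustrative toy under assumed data (a random linear payoff matrix A, a mixed-strategy simplex as the decision set, a uniformly sampled batch of states, and a 1/sqrt(k) step size), not a reproduction of the authors' two algorithms; it only shows the pattern of evaluating the regret and a subgradient on a small random subset of states at each iteration and projecting back onto the feasible set.

import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex (sort-based method).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

rng = np.random.default_rng(0)
n_actions, n_states = 5, 2000                 # illustrative sizes
A = rng.normal(size=(n_states, n_actions))    # A[s, j]: payoff of pure action j in state s
best = A.max(axis=1)                          # best achievable payoff in each state

x = np.full(n_actions, 1.0 / n_actions)       # start from the uniform mixed decision
batch = 64                                    # states sampled per iteration
for k in range(1, 2001):
    idx = rng.choice(n_states, size=batch, replace=False)
    r = best[idx] - A[idx] @ x                # regrets on the sampled subset only
    s_star = idx[np.argmax(r)]                # worst sampled state
    g = -A[s_star]                            # subgradient of the sampled max-regret at x
    x = project_simplex(x - (0.5 / np.sqrt(k)) * g)   # diminishing step + projection

print("max regret over all states:", float((best - A @ x).max()))

Because the regret here is piecewise linear and convex in x, the max regret of the iterates typically decreases even though each step inspects only 64 of the 2000 states, mirroring at toy scale the computational saving the paper aims at.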
Author Chumakov, Borys
Godonoaga, Anatol
Baractari, Anatolie
Author_xml – sequence: 1
  givenname: Anatolie
  orcidid: 0009-0005-7827-4946
  surname: Baractari
  fullname: Baractari, Anatolie
  organization: Academy of Economic Studies of Moldova, Chisinau
– sequence: 2
  givenname: Borys
  orcidid: 0009-0005-8606-4746
  surname: Chumakov
  fullname: Chumakov, Borys
  organization: V.M. Glushkov Institute of Cybernetics of the NAS of Ukraine, Kyiv
– sequence: 3
  givenname: Anatol
  orcidid: 0000-0001-7459-9536
  surname: Godonoaga
  fullname: Godonoaga, Anatol
  organization: Academy of Economic Studies of Moldova, Chisinau
Cites_doi 10.1007/978-1-4757-6015-6
10.34229/2707-451X.24.1.2
10.1080/01621459.1951.10500768
ContentType Journal Article
DOI 10.34229/2707-451X.25.3.4
DatabaseName CrossRef
Unpaywall for CDI: Periodical Content
Unpaywall
DOAJ Directory of Open Access Journals
DatabaseTitle CrossRef
DatabaseTitleList CrossRef

Discipline Sciences (General)
EISSN 2707-451X
EndPage 58
ExternalDocumentID oai_doaj_org_article_9c82a75f016448e783763ed77bf36a71
10.34229/2707-451x.25.3.4
10_34229_2707_451X_25_3_4
ISSN 2707-4501
2707-451X
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 3
Language English
License https://creativecommons.org/licenses/by-nc-sa/4.0
cc-by-nc
LinkModel DirectLink
ORCID 0000-0001-7459-9536
0009-0005-8606-4746
0009-0005-7827-4946
OpenAccessLink http://cctech.org.ua/images/docs/Articles/2025/paper_25_3_4.pdf
PageCount 6
ParticipantIDs doaj_primary_oai_doaj_org_article_9c82a75f016448e783763ed77bf36a71
unpaywall_primary_10_34229_2707_451x_25_3_4
crossref_primary_10_34229_2707_451X_25_3_4
PublicationCentury 2000
PublicationDate 2025-9-29
PublicationDateYYYYMMDD 2025-09-29
PublicationDate_xml – month: 09
  year: 2025
  text: 2025-9-29
  day: 29
PublicationDecade 2020
PublicationTitle Kìbernetika ta komp'ûternì tehnologìï (Online)
PublicationYear 2025
Publisher V.M. Glushkov Institute of Cybernetics
Publisher_xml – name: V.M. Glushkov Institute of Cybernetics
SourceID doaj
unpaywall
crossref
SourceType Open Website
Open Access Repository
Index Database
StartPage 53
SubjectTerms convex optimization
decision-making
regret minimization
Savage function
subgradient method
uncertainty
Title Regret Function Minimization Algorithms
URI http://cctech.org.ua/images/docs/Articles/2025/paper_25_3_4.pdf
https://doaj.org/article/9c82a75f016448e783763ed77bf36a71
UnpaywallVersion publishedVersion
hasFullText 1
inHoldings 1