Janusz Kacprzyk

  • Fellow of IEEE, IET, IFSA, EurAI, IFIP, SMIA
  • Full Member, Polish Academy of Sciences
  • Member, Academia Europaea
  • Member, European Academy of Sciences and Arts
  • Member, European Academy of Sciences
  • Foreign Member: Finnish Society of Sciences and Letters
  • Foreign Member: Bulgarian Academy of Sciences
  • Foreign Member: Royal Flemish Academy of Belgium for Sciences and the Arts
  • Foreign Member: Spanish Royal Academy of Economic and Financial Sciences (RACEF)

Systems Research Institute, Polish Academy of Sciences

ul. Newelska 6, 01-447 Warsaw, Poland

Email: kacprzyk@ibspan.waw.pl


We are concerned with broadly perceived decision making and optimization models, possibly solved with computer aid or support, for complex human-centric situations. The solutions obtained should be trustworthy to the human agents involved, and hence more easily accepted and implemented. We postulate that a good approach in this respect is to reflect in the models considered some human-specific characteristics, notably cognitive biases, i.e. deviations, often exhibited by humans, from what traditional models, usually based on utility maximization, postulate.

First, we briefly review the main classes of cognitive biases, to mention just a few: (1) decision making, belief and behavioral biases (e.g. the bandwagon effect, i.e. a tendency to do what the majority does), (2) social biases (e.g. the status quo bias, i.e. a tendency to defend and bolster the status quo and avoid changes), (3) memory errors and biases (e.g. the consistency bias, i.e. a tendency to perceive the present as resembling the past), etc.

We concentrate on the status quo bias, i.e. an aversion to larger changes, in an optimization-based regional sustainable agricultural development planning model under imprecise (fuzzy) information. Moreover, we show how to reflect an equity and fairness orientation. Finally, we mention some possible approaches to debiasing, i.e. limiting the effects of cognitive biases.
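To make the idea concrete, the status quo bias can be modeled as a penalty on the distance between a candidate plan and the current one, added to an objective evaluated under a fuzzy (here, triangular) feasibility constraint. The following is a minimal sketch under purely illustrative assumptions: the crop data (`YIELDS`, `WATER`), the fuzzy water-use target, the penalty weight `lam`, and the names involved are all hypothetical, not the author's actual planning model.

```python
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy number (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative data: two crops, benefit and water use per hectare.
YIELDS = [3.0, 5.0]
WATER = [0.5, 1.5]

def biased_score(plan, current_plan, lam):
    """Benefit of a plan, weighted by how well it satisfies a fuzzy
    water-use constraint and penalized by its distance from the
    status quo (the current plan); lam controls the bias strength."""
    benefit = sum(y * area for y, area in zip(YIELDS, plan))
    water = sum(w * area for w, area in zip(WATER, plan))
    feasibility = triangular(water, 0, 80, 120)  # fuzzy "about 80 units"
    change = sum(abs(a - b) for a, b in zip(plan, current_plan))
    return feasibility * benefit - lam * change

# Candidate allocations of 100 hectares between the two crops.
candidates = [(a, 100 - a) for a in range(0, 101, 10)]
current = (90, 10)  # the status quo allocation

# Without the bias term the optimizer moves away from the status quo;
# with a strong enough bias it stays put.
unbiased = max(candidates, key=lambda p: biased_score(p, current, lam=0.0))
biased = max(candidates, key=lambda p: biased_score(p, current, lam=5.0))
```

With these numbers, the unbiased optimum reallocates land, while the biased one retains the current plan, which is the qualitative effect the status quo bias is meant to capture.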