Andraszewicz, S., & Rieskamp, J. (2017). Response to "A note on the standardized covariance".
Journal of Mathematical Psychology, 77, 185-186.
In a recent paper, Andraszewicz and Rieskamp (2014) proposed the standardized covariance
as a measure of association, similarity and co-riskiness. Budescu and Bo (in press) wrote
a comment on the proposed measure, in which they interpret the standardized covariance as
a measure of additive association, or a "measure of disparity between the ranges of outcomes
offered by two lotteries" (Budescu and Bo, in press). In the reply to this comment, we
point out that the statistical interpretation of the standardized covariance provided by
Budescu and Bo (in press) is strongly linked with its cognitive interpretation. We also
give a cognitive interpretation of Budescu and Bo's (in press) analytical
findings on the similarity measure for statistically independent gambles proposed by
Andraszewicz and Rieskamp (2014).
Murphy, R.O., Andraszewicz, S. & Knaus, S.D. (2016). Real options in the laboratory: An experimental
study of sequential investment decisions. Journal of Behavioral
and Experimental Finance, 12, 23-39.
Many real-life risky decisions in finance and management are dynamic and decision policies can
be adapted as uncertainty is reduced by the arrival of new information. In this type of situation,
called a real options problem, a decision maker must choose how much of his finite resources
to invest in a dynamic risky environment. In two laboratory experiments, we test a well-defined
decision problem with the central characteristics of a real options framework and do so in such
a way that it is amenable to formal modeling. We find that people choose differently than
the expected value maximizing policy, consistent with risk aversion and non-linear probability
weighting. We conclude that although real options analysis is useful as a normative valuation
method, its recommendations are sometimes contrary to people's innate tendencies when making
risky choices and this counterintuitiveness should be considered when implementing real options
analysis in training and practice.
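The expected-value-maximizing policy that serves as the benchmark in such studies can be computed by backward induction. Below is a minimal sketch of a toy two-stage investment problem with hypothetical payoffs and probabilities; it is not the experiments' actual task, only an illustration of how an adaptive (real options) policy is valued:

```python
# Toy sequential investment problem (hypothetical numbers, not the paper's task).
# At each of two stages the decision maker may invest one unit, which costs 1
# and returns 2 with probability p (else nothing). The stage-1 outcome is
# observed before stage 2, so the later policy can adapt to new information --
# the defining feature of a real options problem.

P_SUCCESS = 0.6  # assumed success probability of an investment

def stage2_value(wealth):
    """EV-maximizing stage-2 decision: invest iff the expected gain is positive."""
    ev_invest = wealth - 1 + P_SUCCESS * 2   # pay 1, receive 2 with prob p
    ev_pass = wealth
    return max(ev_invest, ev_pass)

def stage1_value(wealth):
    """Backward induction: fold the optimal stage-2 value into stage-1 EVs."""
    ev_invest = (P_SUCCESS * stage2_value(wealth - 1 + 2)
                 + (1 - P_SUCCESS) * stage2_value(wealth - 1))
    ev_pass = stage2_value(wealth)
    return max(ev_invest, ev_pass)

print(stage1_value(10.0))  # value of following the EV-maximizing policy
```

A risk-averse decision maker would evaluate the same tree with a concave utility over wealth, which can yield a different (e.g., more cautious) policy than this EV benchmark.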
Andraszewicz, S., Scheibehenne, B., Rieskamp, J., Grasman, R.,
Verhagen, J., & Wagenmakers, E-J. (2015). An introduction to Bayesian
hypothesis testing for management research. Journal of Management, 41(2), 521-543.
Special Issue: Bayesian Probability and Statistics in Management Research.
In management research, empirical data are often analyzed using p-value null hypothesis
significance testing (pNHST). Here we outline the conceptual and practical advantages of
an alternative analysis method: Bayesian hypothesis testing and model selection using the
Bayes factor. In contrast to pNHST, Bayes factors allow researchers to quantify evidence
in favor of the null hypothesis. Also, Bayes factors do not require adjustment for the
intention with which the data were collected. The use of Bayes factors is demonstrated
through an extended example for hierarchical regression based on the design of an experiment
recently published in the Journal of Management. This example also highlights the fact that
p values overestimate the evidence against the null hypothesis, misleading researchers
into believing that their findings are more reliable than is warranted by the data.
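The key contrast with pNHST, quantifying evidence for the null, can be shown with a textbook example. The sketch below computes the Bayes factor BF01 for a binomial rate, testing H0: θ = 0.5 against H1: θ ~ Beta(1, 1); this is a standard analytic example, not the hierarchical regression from the paper:

```python
from math import gamma

def beta_fn(a, b):
    """Beta function B(a, b) = Γ(a)·Γ(b) / Γ(a + b)."""
    return gamma(a) * gamma(b) / gamma(a + b)

def bf01_binomial(k, n):
    """Bayes factor BF01 for H0: θ = 0.5 vs H1: θ ~ Beta(1, 1),
    given k successes in n trials (the binomial coefficient cancels)."""
    m0 = 0.5 ** n                      # marginal likelihood under H0
    m1 = beta_fn(k + 1, n - k + 1)     # ∫ θ^k (1-θ)^(n-k) dθ, uniform prior
    return m0 / m1

# 10 successes in 20 trials: BF01 ≈ 3.7, modest evidence FOR the null --
# a statement pNHST cannot make (p would simply be non-significant).
print(round(bf01_binomial(10, 20), 2))
```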
Andraszewicz, S., Rieskamp, J., & Scheibehenne, B. (2015). How outcome dependencies affect decisions
under risk. Decision, 2(2), 127-144.
Many economic theories of decision making assume that people evaluate options independently
of other available options. However, recent cognitive theories such as decision field theory
suggest that people's evaluations rely on a relative comparison of the options' potential
consequences such that the subjective value of an option critically depends on the context
in which it is presented. To test this prediction, we examined pairwise choices between monetary
gambles and varied the degree to which the gambles' outcomes covaried with one another.
When people evaluate options by comparing their outcomes, a high covariance between these
outcomes should make a decision easier, as suggested by decision field theory. In line with
this prediction, the observed choice proportions in 2 experiments (N = 39 and 24, respectively)
depended on the magnitude of the covariance. We call this effect the covariance effect.
Our findings are in line with the theoretic predictions and show that the discriminability
ratio in decision field theory can reflect the choice difficulty. These results confirm
that interdependent evaluations of options play an important role in human decision making
under risk and show that covariance is an important aspect of the choice context.
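The covariance manipulation described above can be sketched with ordinary sample statistics. In the example below, two gambles keep the same marginal outcomes while the pairing of outcomes across states is varied; the payoffs are illustrative, not the experiments' actual stimuli:

```python
# Two gambles, A and B, with identical marginal outcomes but different outcome
# dependencies (illustrative payoffs). Each tuple is the pair of payoffs
# realised together in one equally likely state of the world.

def cov(pairs):
    """Population covariance of jointly realised outcomes."""
    n = len(pairs)
    mean_a = sum(a for a, _ in pairs) / n
    mean_b = sum(b for _, b in pairs) / n
    return sum((a - mean_a) * (b - mean_b) for a, b in pairs) / n

aligned = [(10, 12), (20, 22), (30, 32)]   # high covariance: outcomes move together
opposed = [(10, 32), (20, 22), (30, 12)]   # same marginals, negative covariance

print(cov(aligned), cov(opposed))
```

Under a comparison-based account such as decision field theory, the `aligned` pair should be the easier choice: whichever state occurs, B beats A by the same margin, so outcome comparisons point consistently in one direction.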
Andraszewicz, S., & Rieskamp, J. (2014). Standardized covariance - a measure of association, similarity and
co-riskiness between choice options. Journal of Mathematical
Psychology, 61, 25-37.
Predictions of prominent theories of decision making, such as decision field theory and
regret theory, strongly depend on the association between outcomes of choice options. In
the present work, we show that these associations reflect the similarity of two choice
options and riskiness of one option with respect to the other. We propose a measure
labeled standardized covariance that can capture the strength of the association, similarity
and co-riskiness between two choice options. We describe the properties and interpretation
of this measure and show its similarities to and differences from the correlation measure.
Finally, we show how the predictions of different models of decision making vary depending
on the value of the standardized covariance, which can have implications for research on
decision making under risk.
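One of the differences from correlation mentioned above is scale behavior: correlation is invariant to rescaling one option's payoffs, whereas raw covariance is not. The sketch below illustrates this general statistical point with made-up payoffs; it uses ordinary covariance and correlation, not the paper's standardized covariance, whose exact normalization is given in the article:

```python
# Rescaling one gamble's payoffs (e.g., stating them in cents instead of
# dollars) leaves the correlation unchanged but multiplies the covariance.

def cov(xs, ys):
    """Population covariance of two equally likely outcome sequences."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def corr(xs, ys):
    """Pearson correlation."""
    return cov(xs, ys) / (cov(xs, xs) * cov(ys, ys)) ** 0.5

a = [10, 20, 30]               # illustrative payoffs of gamble A
b = [12, 22, 32]               # illustrative payoffs of gamble B
b_scaled = [10 * y for y in b] # gamble B restated in smaller currency units

print(corr(a, b), corr(a, b_scaled))   # identical
print(cov(a, b), cov(a, b_scaled))     # differ by the factor 10
```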
Andraszewicz, S., Yamagishi, J., & King, S. (2011). Vocal attractiveness of statistical speech synthesisers.
In ICASSP (pp. 5368-5371). Prague, Czech Republic: IEEE.
Our previous analysis of speaker-adaptive HMM-based speech synthesis methods suggested
that there are two possible reasons why average voices can obtain higher subjective scores
than any individual adapted voice: 1) model adaptation degrades speech quality proportionally
to the distance "moved" by the transforms, and 2) psychoacoustic effects relating to the
attractiveness of the voice. This paper is a follow-on from that analysis and aims to separate
these effects out. Our latest perceptual experiments focus on attractiveness, using average
voices and speaker-dependent voices without model transformation, and show that using
several speakers to create a voice improves smoothness (measured by Harmonics-to-Noise Ratio),
reduces distance from the average voice in the log F0-F1 space of the final voice and
hence makes it more attractive at the segmental level. However, this is weakened or overridden
at supra-segmental or sentence levels.
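The segmental-level distance referred to above can be pictured as a Euclidean distance in the log F0-F1 plane. The sketch below uses hypothetical frequency values, not the paper's measurements:

```python
from math import hypot, log

def log_f0f1_distance(voice, average):
    """Euclidean distance between two voices in the (log F0, log F1) plane.
    Each voice is an (F0, F1) pair in Hz; the values below are illustrative."""
    return hypot(log(voice[0]) - log(average[0]),
                 log(voice[1]) - log(average[1]))

average_voice = (170.0, 550.0)   # hypothetical average-voice F0 and F1
adapted_voice = (210.0, 600.0)   # hypothetical individual voice

# A voice closer to the average in this space is, per the findings above,
# smoother and more attractive at the segmental level.
print(log_f0f1_distance(adapted_voice, average_voice))
```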