Operational Research – Tessa Wilkie
PhD student in Statistics and Operational Research at STOR-i CDT

Algorithm Aversion
Wed, 22 Apr 2020

So, you’ve created a brilliant solution to an operational research problem. But — not everyone is using it. What’s going on? Read on to find out.

Operational researchers spend their time trying to come up with solutions to problems businesses face, such as: how much stock a business should order each week; the most efficient route a delivery driver can take; the most profitable combination of products to sell.

But on the other side are the businesses that are going to use these solutions. Researchers’ solutions might well be rigorous and elegant (and they should be), but these solutions are going to be used by people.

And those people can choose whether or not to use them.

It turns out they might take some convincing.

In the last 10 years, several papers have come out exploring what researchers can do to encourage organisations to use OR solutions — when they are better than human judgement alone. 

Not everyone, it seems, has absolute faith in the power of mathematical or algorithmic solutions to problems like forecasting.

My dog Markus (pictured below), for example, will almost certainly prefer to use his nose, plus a certain amount of running about in random directions, to search for snacks, over an optimised search strategy.

Markus: and a very fine nose it is too.

On a more serious note, some studies have shown that people are less likely to use an algorithm for prediction if they have seen that it can get things wrong. This is known as Algorithm Aversion [1]. If they know the algorithm is not perfect, they are put off from using it.

Anecdotally, I see this with — for example — political polling. A lot of people seem to write off polls as nonsense, because they don’t always get things 100% right. Either they are perfect and worth following, or they contain error and are rubbish.

Back to Algorithm Aversion: one way to overcome this [2] is to allow people to adjust the output of the algorithm, in a controlled manner.

Markus photographed moments after I tried to explain an optimised search strategy to him.

Dietvorst, Simmons and Massey (2018) found that if people were allowed to adjust an algorithm’s forecast, they were happier with it. Restricting the amount by which users could adjust forecasts did not make much difference to their satisfaction.
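The "controlled adjustment" idea can be sketched in a few lines: the user may nudge the algorithm's forecast, but only within a fixed band. The function name and the size of the band below are my own invention, purely for illustration.

```python
def adjusted_forecast(algo_forecast: float, user_adjustment: float,
                      max_adjust: float = 10.0) -> float:
    """Apply a user's adjustment to an algorithmic forecast,
    clamped to the band [-max_adjust, +max_adjust]."""
    clamped = max(-max_adjust, min(max_adjust, user_adjustment))
    return algo_forecast + clamped

print(adjusted_forecast(100.0, 4.0))   # small nudge passes through: 104.0
print(adjusted_forecast(100.0, 35.0))  # large nudge is capped: 110.0
```

The user keeps a feeling of control, but the algorithm's forecast still dominates the final number.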

Of course, in a real life situation, it may make a lot of sense for someone in a business to adjust a forecast produced by an algorithm: if they know something the algorithm doesn’t[3].

For example, if the business is about to launch a big advertising campaign or slash prices — or if a close competitor has just opened a shop right opposite yours.

This is a new area of research and so far relies on some limited field experiments, with sometimes seemingly contradictory results.

Beware our own expertise

A paper published last year[4] suggested that people were likely to choose an algorithm’s advice over that of other people.

However, they were a little bit less likely to pick an algorithm’s opinion over their own.

The paper also found that people they determined to be experts were much less likely to take algorithmic advice over their own opinion, and that this hurt the accuracy of their predictions.


[1] Dietvorst, B.J., Simmons, J.P. and Massey, C., (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1):114.

[2] Dietvorst, B.J., Simmons, J.P. and Massey, C., (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3):1155-1170.

[3] Fildes, R., Goodwin, P., Lawrence, M. and Nikolopoulos, K., (2009). Effective forecasting and judgmental adjustments: an empirical evaluation and strategies for improvement in supply-chain planning. International Journal of Forecasting, 25(1):3-23.

[4] Logg, J.M., Minson, J.A. and Moore, D.A., (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151:90-103.

Censored demand
Mon, 23 Mar 2020

Censored demand happens when a shop — or any other type of retailer — runs out of stock. How do they know how much more they could have sold? Having a good handle on this is important for retailers, particularly those that stock perishable goods. This post will explore ways in which mathematical models can help them to do that.

As part of our assessments here at STOR-i we have to write a short report on a research topic of our choice (as well as a long one, which I’m working on now). For my short report I wrote about retail analytics, and particularly the issue of censored demand. You can see the link to the report at the bottom of this post.

Forecasting demand for retailers is a thorny problem. They need to estimate how much they are going to sell to decide how much stock to order.

But, unless they have very high levels of stock, they are probably going to have days when they sell out. So then how do they decide how much demand is actually out there?

This is a particularly big problem for those that stock perishable goods. If a retailer were to order in a mountain-sized pile of grapes, for example, they would have to throw away what they didn’t sell after just a few days.

Waste is something retailers need to avoid, not just for the sake of their profits: there are also global environmental reasons why we should all be trying to cut down on waste.

On the other hand, if a retailer regularly runs out of stock, they could find that customers decide to shop elsewhere.

Mathematical models can help us to overcome some of this uncertainty.

In my report I focused on parametric models. This means we assume that the demand broadly conforms to an underlying mathematical distribution.

This is helpful, because if we can observe a bit of the distribution, we can gain insight into what the bits we cannot see might be like.

More formally, we can use the observed demand (the demand recorded before the retailer ran out of stock) to make inference about the unobserved demand (the demand that the retailer didn’t fulfil after they ran out of stock).

I look closely at two methods in my report: one to deal with normally distributed data (Nahmias’ method) [1], and one to deal with demand that corresponds to a Poisson distribution (Conrad’s method) [2].
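I won't reproduce Nahmias' estimator exactly here, but the maximum-likelihood idea behind this kind of censored inference can be sketched for the normal case. Fully observed days contribute a density term; sold-out days only tell us that demand was at least the stock level, so they contribute a survival (tail) term. The stock level, sample size and crude grid search below are my own choices for illustration, not values from the papers.

```python
import math
import random

random.seed(1)

# Simulated world: daily demand ~ Normal(100, 15), stock level 110.
# On sold-out days we only learn that demand was at least 110.
STOCK = 110.0
demand = [random.gauss(100, 15) for _ in range(2000)]
observed = [d for d in demand if d < STOCK]   # fully observed days
n_censored = len(demand) - len(observed)      # sold-out days

def loglik(mu: float, sigma: float) -> float:
    """Censored-normal log-likelihood: a pdf term for each observed day,
    plus one survival term per sold-out day (all censored at STOCK)."""
    ll = sum(-0.5 * ((x - mu) / sigma) ** 2
             - math.log(sigma * math.sqrt(2 * math.pi))
             for x in observed)
    z = (STOCK - mu) / sigma
    ll += n_censored * math.log(0.5 * math.erfc(z / math.sqrt(2)))
    return ll

# A crude grid search stands in for a proper optimiser.
mu_hat, sigma_hat = max(
    ((m, s) for m in range(80, 121) for s in range(5, 31)),
    key=lambda p: loglik(p[0], p[1]),
)
print(mu_hat, sigma_hat)  # should land close to the true (100, 15)
```

Even though roughly a quarter of the days are sold out, the survival terms let the fit recover the full demand distribution, including the part we never saw.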

In my report I show that these methods work nicely, as long as we have picked the right distribution.

But not all retail demand behaves nicely and conforms to the distribution we assume (or sometimes any distribution at all).

What can go wrong?

What follows is an illustration of what can happen if we use a good method but we assume the wrong underlying distribution.

In the picture below (nabbed from my report), I’ve simulated data from a bimodal distribution.

In this case it is data from two normal distributions, with different means and variances. In the picture, I’ve plotted a histogram of the simulated data.

I create a right-censored data set by removing any data points with a value higher than 120. The removed data is represented by the dark blue columns.

I then look at what happens if I assume (mistakenly) that my data is normally distributed. So I then use Nahmias’ method to estimate the distribution based on the only data I can now use (the light blue columns).

Estimating censored demand: I’ve mistakenly assumed my data is normally distributed

And so I’ve got things very wrong. The red line represents what I think the true demand looks like. You can see I totally miss the second (unobserved) peak.

Nahmias’ method is really good on censored data that comes from a normal distribution, but I’m deliberately tripping it up by giving it a nasty (but plausible) underlying distribution.

This is a simulation, so I know I’m getting it wrong. Bear in mind that if this were a real-world situation I would only be able to see the light blue columns. Based on that, assuming normally distributed data maybe isn’t great, but it would not be completely silly either.

If you want to read more about this subject, below are some links to papers I mention in this post (as well as my report).


[1] Nahmias, S. (1994).  Demand estimation in lost sales inventory systems. Naval Research Logistics (NRL), 41(6):739–757.

[2] Conrad, S. (1976).  Sales data and the estimation of demand. Journal of the Operational Research Society, 27(1):123–127.
