Ed Mellor – Edward Mellor
PhD Student at STOR-i CDT, Lancaster University

The Patrol Problem
Sun, 26 Apr 2020

In my previous post, I talked about the statistics research project that I did as part of the STOR-i programme. Today I will discuss the Operational Research project I worked on with Kevin Glazebrook, about Optimal Patrolling.

Consider an art gallery with several rooms. Some of these rooms are connected directly by doorways, but for some pairs of rooms it may be necessary to pass through one or more intermediary rooms in order to travel between them. Each room in the gallery contains various valuable pieces of artwork. At night, when the gallery is closed, a single guard must patrol the area to prevent thievery or vandalism from intruders (attackers). The Patrol Problem is to find a patrol route that minimizes the expected cost of any damage caused by attackers.

To approach this problem we must first create a model and make some modelling assumptions.

We can use the ideas from my post on The seven bridges of Königsberg to represent the rooms of the gallery as nodes on a graph as shown in the example below:

We assume that the total value of the artwork in each room is known to both the patroller and any potential attacker. We also assume that the length of time taken to carry out an attack in any given room is random but is sampled from a known distribution.

Our patrol model assumes that the attackers arrive according to a Poisson process with a known rate and then decide which room to attack in one of the following two ways:

  1. The target of the attack is chosen at random with known probabilities.
  2. The target of each attack is chosen strategically with the presence of a patroller in mind and the aim to maximize the total expected cost of the attacks.

The patroller is assumed to move between rooms in discrete time-steps. If the patroller interrupts an attack in progress, we assume that no damage is caused.

We need a way to tell the patroller which is the best route to take.

If the attackers choose where to attack using the randomised method we have the following:

While visiting a location the patroller either determines that no attacks are underway or apprehends the attacker. Thus, we know that immediately after a visit to a location, no attackers are present there. It therefore makes sense to characterize the system by a vector containing, for each room, the number of time-steps since the patroller last visited it. We call this the state of the model.

If we assume that the time it takes to carry out an attack has some maximum, we can ensure the number of states is finite. This is because once we have neglected a room long enough, increasing the time since the last visit will not change the probability that an attack is ongoing.

The current room can be determined from the state as it will correspond to the entry with the lowest value. A patrol policy then tells the patroller what to do in any given state: either stay where you are or move to an adjacent room.

Since there are a finite number of states and a finite number of rooms, we have a finite number of policies. An optimal policy can be found using linear programming.
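To make the model concrete, here is a toy simulation of it. The gallery layout, artwork values, attack rate and the greedy rule below are all my own invented examples, not the model from the project; the sketch simply estimates the long-run damage per time-step of one simple heuristic patrol policy.

```python
import random

# Hypothetical gallery layout: rooms as nodes, doorways as edges (assumption).
GALLERY = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
VALUES = {0: 5.0, 1: 1.0, 2: 3.0, 3: 2.0}  # artwork value per room (assumption)
B = 4          # maximum attack duration in time-steps
RATE = 0.1     # chance that a new attack starts somewhere each step

def greedy_policy(state, room):
    """Toy heuristic: stay or move to the adjacent room with the
    largest value-weighted time since the patroller's last visit."""
    options = [room] + GALLERY[room]
    return max(options, key=lambda r: VALUES[r] * state[r])

def simulate(policy, steps=10_000, seed=0):
    rng = random.Random(seed)
    state = {r: B for r in GALLERY}   # time since last visit, capped at B
    room, cost = 0, 0.0
    attacks = {}                      # room -> remaining attack duration
    for _ in range(steps):
        if rng.random() < RATE:       # a new attack begins in a random room
            target = rng.choice(list(GALLERY))
            attacks.setdefault(target, rng.randint(1, B))
        room = policy(state, room)
        attacks.pop(room, None)       # interrupted attacks cause no damage
        for r in list(attacks):
            attacks[r] -= 1
            if attacks[r] == 0:       # attack completed: damage incurred
                cost += VALUES[r]
                del attacks[r]
        state = {r: min(B, state[r] + 1) for r in GALLERY}
        state[room] = 0
    return cost / steps

# Estimated expected damage per time-step under the toy heuristic:
print(round(simulate(greedy_policy), 3))
```

An optimal policy would instead come from the linear program over the full state space; a simulation like this is only useful for comparing heuristics against each other.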

If the attackers choose where to attack strategically, we can create a two-person zero-sum game as discussed in my post on game theory.

In either case the optimal solution is very computationally expensive to calculate and so approximate methods are often preferred.

Introduction to Extreme Value Theory
Fri, 17 Apr 2020

In my last post I promised an overview of my two research topics. We were encouraged to choose one topic from Statistics and the other from Operational Research. Today we will focus on the more statistical topic, which I was introduced to by Emma Eastoe.

In statistics we are often interested in determining the most likely behaviour of a system. The usual way to do this is to fit a model to the observations from the system. This can be done by finding a family of distributions that approximately describes the shape of the data. This family of distributions (or model) will have certain parameters. The observations can then be used to estimate the values of these parameters which maximise the probability of that set of observations occurring. In some situations, however, the normal behaviour of a system is of less concern to us and we are instead interested in the maximum (or minimum) outcome that we would expect to observe over an extended period of time. For example, if a local council is considering investment in flood defences, they are not interested in the average height of the river but only in the events where the volume of water would exceed the river's maximum capacity and cause flooding.
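As a minimal illustration of this fitting procedure, the sketch below generates some made-up "river level" observations and recovers the parameters of a normal family by maximum likelihood (the data and the choice of a normal model are purely illustrative assumptions):

```python
import numpy as np
from scipy import stats

# Illustrative only: daily river levels drawn from a hypothetical model.
rng = np.random.default_rng(0)
levels = rng.normal(loc=2.0, scale=0.5, size=1000)

# Maximum-likelihood fit of a normal family to the observations:
# the estimated parameters are those that make the data most probable.
mu, sigma = stats.norm.fit(levels)
print(round(mu, 1), round(sigma, 1))
```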

The problem here is that we are considering very unusual events, which a distribution fitted to the entire set of observations would be unable to estimate reliably. We therefore require models that can be fitted to just the extreme events. There are two main approaches to consider: the Block Maxima Model and the Threshold Excess Model. Each approach is characterised by its different way of classifying an event as extreme.

  • Block Maxima Model: Here we partition the data into equal sections and then take the maximum data-point in each block to be an extreme event. The distribution of these maxima belongs to a specific family of distributions called the Generalised Extreme Value Family.
  • Threshold Excess Model: This approach considers all events that are above a certain threshold to be extreme. It can be shown that for a sufficiently high threshold these values will follow a Generalised Pareto Distribution.

In both models we have an important decision to make. For the Block Maxima Model we must choose a block length, and in the Threshold Excess Model we must set a threshold. These decisions play a very similar role in that they determine the number of points we have to fit our model to. If the block size is set too large or the threshold too high, we will not have enough points to fit our distribution, which will result in greater variance in the result. On the other hand, if the block size is too small or the threshold too low, the resulting points will not be well approximated by the Extreme Value or Pareto distribution respectively.
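Both approaches can be sketched in a few lines. The example below uses simulated data, and the block length of one "year" and the 99% threshold are arbitrary choices for illustration, not recommendations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.gumbel(size=365 * 50)          # 50 "years" of daily observations

# Block maxima: one maximum per year, fitted with the GEV family.
maxima = data.reshape(50, 365).max(axis=1)
shape, loc, scale = stats.genextreme.fit(maxima)

# Threshold excesses: everything above a high quantile, fitted with the
# Generalised Pareto Distribution (location pinned at the threshold).
u = np.quantile(data, 0.99)
excesses = data[data > u] - u
gpd_shape, _, gpd_scale = stats.genpareto.fit(excesses, floc=0)

print(round(shape, 2), round(gpd_shape, 2))
```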

Sometimes the data we are looking at is multidimensional. For example, if we want to describe extreme storm events for applications in shipping we may have data for wind and rain. These different variables may depend on each other or could be completely independent. Having more than one dimension imposes another difficulty – what do we want to consider as an extreme event? Do we need extreme values for both wind and rain, or is just one of the variables being extreme enough for an event to be considered extreme? Both the Block Maxima and Threshold Excess approaches can be extended to consider higher dimensions.

In my next post I will talk about my Operational Research topic: Optimal Patrolling.

The seven bridges of Königsberg
Wed, 08 Apr 2020

During the spring term at STOR-i we were given the opportunity to work on two independent projects with the guidance of an academic supervisor. My first research topic was Extreme Value Theory with Emma Eastoe and my second was Optimal Patrolling with Kevin Glazebrook. I plan to briefly discuss both of these projects in future blog posts.

To be able to talk about Optimal Patrolling we will need a basic understanding of graph theory. In this context, the word graph does not mean a plot with two axes and a line showing the relationship between two variables. Instead, a graph is a way of representing and visualizing connections. This is best explained by example, and so in this post I will talk about the problem which first introduced me to this type of graph.

The Pregel River divides the city of Königsberg into four landmasses, as shown below. Seven bridges (shown in black) connect these landmasses. The problem is to find a route around the city visiting each of the four landmasses by crossing each of these bridges exactly once. No other means of crossing the river is permitted.

In 1736 a mathematician called Leonhard Euler proved that the problem has no solution. A detailed explanation of the proof is given by the Mathematical Association of America.

The general idea is as follows:

  • From inspection we can see that each of the landmasses has an odd number of bridges leading to/from it.
  • If we start our route at a landmass with an odd number of bridges and use each of its bridges exactly once, our final crossing from it must take us away, so we cannot end our route there.
  • Equivalently, at any landmass where we do not start, the final use of its bridges must bring us in, so we must end our route there.
  • Since this implies that we must finish at three of the four landmasses simultaneously, no such route can exist.
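Euler's degree argument is easy to check by machine. In the short sketch below the landmass labels A–D are my own; the multigraph of bridges matches the Königsberg layout:

```python
from collections import Counter

# The seven bridges as edges between the four landmasses A, B, C, D.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

# Count the number of bridges (the degree) at each landmass.
degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

# A route using every bridge exactly once exists only if at most
# two landmasses have an odd number of bridges.
odd = sorted(node for node, d in degree.items() if d % 2 == 1)
print(odd)   # → ['A', 'B', 'C', 'D']: all four are odd, so no route exists
```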

We can generalize this problem by thinking of each of the landmasses as a point. The bridges can then be added by drawing a line between the two points associated with the landmasses on either side of the bridge. This is shown for the Königsberg problem below:

We call this a graph. The points representing the landmasses are called nodes (or vertices) and the lines representing the bridges are called edges (or arcs).

Graphs like this can be used to represent connections in a huge variety of real life situations. For example, we might want to represent the rooms in a museum and the doorways that connect them. We could also consider more abstract connections like friendships between school children. The study of such graphs is called Graph Theory and it was problems like the seven bridges of Königsberg that originally motivated its development. This definition of a graph can be extended to include things like:

  • Weighted edges, which might represent some maximum capacity of a road connecting two cities.
  • Directed edges that suggest the connection is only permitted one way. This may be useful if we are using a graph to represent roads which could include a one-way system.
  • Hyper-edges that can be used to show a connection between more than two nodes.

In the Optimal Patrolling problem we will use this formulation to represent different areas as nodes and the routes between them as edges.

Game Theory feat. Sherlock Holmes
Tue, 31 Mar 2020

Game theorists develop strategies for competitive situations where multiple players make decisions, each of which affects the outcome. Each player has some utility function that they are trying to maximise. Usually the best option for a given player depends on what the other players choose. Such situations are referred to as games.

In this blog we will discuss a particular type of game called a two-person zero-sum game. This is a two player game where the utility function of one player is exactly the negative of the utility function of the other. It is therefore sufficient to only consider the utility function of one player.

We will consider an example from Alan R. Washburn's book, in which each of the players has only two options. Since the players have a finite number of options, all possible outcomes can be written in a matrix, and so this is called a matrix game.

As promised, this example features Sherlock Holmes and his nemesis Professor James Moriarty, and is inspired by Arthur Conan Doyle's story The Final Problem.

In the book, Holmes boards a train from London to Dover in an attempt to escape from Moriarty. As the train pulls away from the station the pair see each other. Holmes is aware that Moriarty has the necessary resources to overtake the train and be waiting for him in Dover. Holmes now has two options: to take the train all the way to Dover or to get off early at the only intermediate station – Canterbury. Moriarty is aware of these options and so must choose to wait for Holmes in either Canterbury or Dover.

If both choose to go to the same place, Holmes has no chance of escape. If Holmes chooses Dover while Moriarty chooses Canterbury, Holmes can safely escape to mainland Europe. Finally, if Holmes chooses Canterbury and Moriarty chooses Dover, there is still a 50% chance that Holmes will be captured before he escapes the country. Thus, we have the following matrix M of escape probabilities, where the rows represent Holmes' choice and the columns represent Moriarty's choice:

                 Canterbury   Dover
    Canterbury       0         1/2
    Dover            1          0

Sherlock wants to maximise his chance of escape so what is his optimal strategy?

There are two ‘pure strategies’ available: to go to Canterbury or to go to Dover. Alternatively, Holmes can create a ‘mixed strategy’ by going to Canterbury with probability p and Dover otherwise. We can recover the two pure strategies by letting p=0 or p=1.

Assume Moriarty chooses to go to Canterbury with probability q; then the probability of escape is given by (p + 2q – 3pq)/2. We can see that if Moriarty chooses q = 1/3 then this probability is 1/3 regardless of the choice of p. Additionally, if Holmes chooses p = 2/3 then he can also ensure the escape probability is 1/3 regardless of the choice of q. If either player deviates from these strategies, the other can exploit the change to improve their own chance of escape or capture.
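These claims are quick to verify numerically. The sketch below evaluates the escape probability from the matrix M of escape probabilities described above, using exact fractions to avoid rounding:

```python
from fractions import Fraction as F

# Escape-probability matrix: rows are Holmes's choice, columns Moriarty's
# (Canterbury first, then Dover), as described in the post.
M = [[F(0), F(1, 2)],
     [F(1), F(0)]]

def escape_prob(p, q):
    """Holmes plays Canterbury with probability p, Moriarty with probability q."""
    return (p * q * M[0][0] + p * (1 - q) * M[0][1]
            + (1 - p) * q * M[1][0] + (1 - p) * (1 - q) * M[1][1])

# With q = 1/3 the escape probability is 1/3 whatever Holmes does...
print([escape_prob(x, F(1, 3)) for x in (F(0), F(1, 2), F(1))])
# ...and with p = 2/3 it is 1/3 whatever Moriarty does.
print([escape_prob(F(2, 3), y) for y in (F(0), F(1, 2), F(1))])
```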

We see that the highest guaranteed utility for one player corresponds exactly to the lowest guaranteed utility for the other. Von Neumann's minimax theorem, which Washburn goes on to prove in his book, tells us that this will always be true when both players have an optimal policy.

Optimal strategy for ‘Guess Who?’?
Wed, 26 Feb 2020

As I was growing up, one of my favourite games was Guess Who?. In this two-player game, each player is allocated one of 24 possible characters from the table of names below. The players then take turns to ask yes/no questions to work out the other person's character.

Alex, Alfred, Anita, Anne, Bernard, Bill, Charles, Claire,
David, Eric, Frank, George, Herman, Joe, Maria, Max,
Paul, Peter, Philip, Richard, Robert, Sam, Susan, Tom

The player who eliminates all but one of the possible candidates in the fewest moves is the winner. If both players correctly identify their opponent's character in the same number of moves, it's a draw.

I recently saw a video online by Mark Rober claiming to have the best Guess Who? strategy, and so, being a mathematician, I wanted to check the numbers. In this blog we will consider what constitutes a good question and whether Rober really does have an optimal strategy.

There is an unlimited number of questions that could be asked, but many of these are equivalent. For example, the answers to “Is your person male?”, “Is your person female?” and “Is your person called Anita, Anne, Claire, Maria or Susan?” will all rule out the same candidates.

Any question will split the remaining candidates into two groups. Since each candidate that has yet to be ruled out is equally likely, we can consider any two questions equivalent if the smaller of the groups they each identify contains the same number of candidates.

Suppose we have n candidates remaining. Since we are asking a question, we must have that n is greater than 1 and since we can automatically rule out our own character we also know that n can be at most 23.

We can now define a question by the size of the smaller of the two groups that it splits the candidates into. We will call this number m. If m=0 the question gives us no new information and therefore does not help us. We will therefore assume m is at least 1. Since m is the size of the smaller group, it is at most n/2, or (n-1)/2 if n is odd. Note that a question with any such m always exists: order the remaining candidates alphabetically and ask whether the character's name is among the first m.

Thus in our first turn we effectively have (23-1)/2=11 choices for what to do.

The best choice becomes clear if we consider the extreme cases. If m=1 we could get the correct solution straight away, but this will only happen 1 in every 23 tries. The rest of the time we will only rule out one candidate. If we proceed to guess one candidate at a time, it will take us about 12 guesses on average. On the other hand, if we take m to be as big as possible we will rule out at least 11 candidates each turn, so it will take at most 5 guesses.
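We can check these two figures directly. Counting that the final remaining candidate needs no further question, guessing one candidate at a time takes just under 12 guesses on average, while halving the field each turn needs at most ceil(log2(23)) = 5 questions:

```python
import math

n = 23  # candidates we could still face (24 minus our own character)

# Guessing one candidate per turn: the character is equally likely to be
# in any position, and once n-1 are ruled out the last needs no question.
expected_one_at_a_time = sum(min(k, n - 1) for k in range(1, n + 1)) / n

# Halving the field each turn isolates the character in ceil(log2(n)) turns.
worst_case_halving = math.ceil(math.log2(n))

print(round(expected_one_at_a_time, 2), worst_case_halving)  # → 11.96 5
```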

This idea is explained in more detail in the following video by Mark Rober:

The strategy laid out in Rober’s video is therefore to always try to ask a question that divides the candidates into two groups that are as even as possible. (i.e. choose m to be as big as possible)

But is this always the best approach?

If our opponent guesses correctly first time our only chance to draw is to also make a guess.

Similarly if we suppose our opponent went first and their first question rules out all but two candidates, they are then guaranteed to win on their next question. With our current strategy it is impossible for us to win in two moves and so in this situation it might be better to play more aggressively by asking a question with a smaller m.

Our choice of move could also depend on how we weight draws versus wins. If we want to win outright then our only option is to make a guess. If we would rather win but are happy with a draw, we need to make some calculations. Again we will only consider the two extreme cases:

Option 1: Make 2 random guesses

  • This gives us a 1/23 chance to win and a (22/23)(1/22) = 1/23 chance to draw

Option 2: Divide the group as evenly as possible then make a guess

  • This has zero chance of winning and a (11/23)(1/11) + (12/23)(1/12) = 2/23 chance to draw
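The probabilities for the two options can be verified with exact arithmetic:

```python
from fractions import Fraction as F

n = 23  # candidates our opponent's character could be

# Option 1: make two random guesses.
win1  = F(1, n)                    # right first time
draw1 = F(n - 1, n) * F(1, n - 1)  # wrong, then right on the second guess

# Option 2: split the field 11/12 with a question, then make one guess.
draw2 = F(11, n) * F(1, 11) + F(12, n) * F(1, 12)

print(win1, draw1, draw2)  # prints: 1/23 1/23 2/23
```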

Therefore, it is always better to make guesses in this situation. This highlights that although the Guess Who? strategy described by Rober is very good in most situations, it is not optimal once we take the other player's actions into account.

Functional Data: Making height prediction less of a tall order
Wed, 05 Feb 2020

As part of the second term of the STOR-i MRes programme we receive talks on a variety of potential research topics. One such talk, by Dr Park of Lancaster University, discussed the use of functional data. In this blog, I will explore one of the examples used in her presentation.

At 193cm I am the tallest member of the STOR-i programme — including both my fellow MRes students and all the PhDs! I was also taller than the vast majority of my peers during my undergraduate studies at the University of Exeter, and one of the tallest people in my sixth form.

My parents recorded my height every year as I was growing up, so would it have been possible for my parents to use this information to predict how tall I would be as an adult?

The first thing they might have considered is their own heights. My dad is 182cm and my mum is 168cm. At ten years old I was 149cm so since they were both shorter at that age than I was, they might have (correctly) guessed that I would grow to be taller than both of them. To guess exactly how much taller is where things get more difficult.

If my parents had height data for other children as they grew into adulthood, they could have made a prediction about my future height by looking at the adult heights of different people that were a similar height at ten years of age. However, children grow at different rates, and often two people who are exactly the same height as children may be very different heights as adults. In particular, girls and boys tend to grow at different ages. Often girls are taller than boys at about eleven or twelve but do not tend to grow as much during their teenage years.

Instead of just considering a child’s height at a fixed time (for example at ten years old) we can instead look at their height each year up to adulthood. Note that although we only have a fixed number of observations we can fit a smooth line through these points to make a continuous function. We can therefore think of a child’s height as a function of time. So, for my height function we have f(10)=149cm and f(23)=193cm.
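Turning yearly records into a smooth function is a standard interpolation task. In the sketch below the yearly heights are invented apart from f(10) = 149cm; a shape-preserving spline gives a height function that can be evaluated, and differentiated, at any age:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical yearly height records (cm); only f(10) = 149 is from the post.
ages    = np.array([2, 4, 6, 8, 10, 12, 14, 16, 18])
heights = np.array([88, 103, 116, 133, 149, 160, 172, 185, 191])

# A shape-preserving spline turns the discrete records into a smooth,
# monotone height function defined between the observation ages.
f = PchipInterpolator(ages, heights)
velocity = f.derivative()   # rate of growth in cm per year

print(float(f(10)), round(float(velocity(13.0)), 1))
```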

The figure below, kindly provided by Dr Park, shows the height functions for several individuals:

Since these functions are smooth, we can differentiate them to get a curve for the rate of growth. This is shown in our second figure below (also provided by Dr Park):

Here the child's rate of growth is shown as a velocity in centimetres per year.

Although these functions are all different, we can notice some similarities. In each case, the child grows very quickly when they are very young and then growth gradually slows until they are about six. It then spikes again during puberty which happens once, usually between the age of eleven and seventeen. After this the rate of growth gradually drops to zero. Many of the curves also have several smaller peaks and troughs in various positions.

This leads us to the question: what does a normal growth curve look like?

If we are just looking at height at a particular age we can simply calculate the mean, and can even produce a confidence interval for that estimate. But how do we find the mean of a function? A naïve approach would be to calculate a function such that, for any age, the value of the mean function is just the mean of all the individual functions evaluated at that age. But since the peaks caused by puberty occur at different ages for different people, averaging in such a way would produce a much wider peak over multiple years that isn't representative of a realistic growth rate.

What we can do instead is to first find the mean age for puberty (we will call this the structural mean) and scale each of the curves to fit this mean. Dr Park produced a graph that illustrates this step:

A function that is defined by taking the mean at each point in time of these new curves will now produce a much more realistic mean height function.
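The effect of registering the curves before averaging can be demonstrated with synthetic growth-spurt curves; the curve shape, spurt timings and noise level below are all invented for illustration:

```python
import numpy as np

age = np.linspace(8, 20, 241)
rng = np.random.default_rng(0)

def spurt_curve(peak_age):
    """Toy pubertal growth-spurt curve (cm/year) centred at peak_age."""
    return (6 * np.exp(-((age - peak_age) ** 2) / 2)
            + 0.05 * rng.standard_normal(age.size))

peaks = rng.uniform(11, 17, size=30)           # puberty timing varies
curves = np.array([spurt_curve(p) for p in peaks])

naive_mean = curves.mean(axis=0)               # pointwise average

# Register: shift each curve so its peak sits at the structural mean age,
# then average the aligned curves.
step = age[1] - age[0]
peak_ages = age[curves.argmax(axis=1)]
shifts = np.round((peak_ages.mean() - peak_ages) / step).astype(int)
registered = np.array([np.roll(c, s) for c, s in zip(curves, shifts)])
structural_mean = registered.mean(axis=0)

# Averaging after alignment keeps the spurt sharp; the naive mean smears it.
print(round(naive_mean.max(), 1), round(structural_mean.max(), 1))
```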

So how could my parents have used this to estimate my future height?

As a tall person it is unlikely that my growth curve would be particularly close to the average. This is where we need to consider my recorded heights up to the age of ten as well. Ideally we want the people to whom we are comparing my height to be as similar to me as possible, as these people are likely to have a more similarly shaped height function. For example, only considering the average height of boys makes sense. Ideally we would also want to only consider boys whose parents were a similar height to mine and who were also a similar height to me at every age up to ten, although this may not be possible unless we have access to a lot of data.

Since we only have my height function up to age ten, we can scale this average function to match my data as closely as possible and then extend it beyond age ten to find an estimate of my adult height.

Annual STOR-i conference 2020
Thu, 23 Jan 2020

As promised in my previous blog post, I will be talking today about my first experience of an academic conference.

This year STOR-i hosted its ninth annual conference, with talks from a wide variety of speakers from the UK — including some of its own PhD students and alumni — and from overseas. We listened to 12 presentations, so for the sake of brevity I will mention all of them but only go into more detail for a few.

We were welcomed to the conference by Prof. Kevin Glazebrook, who spoke a bit about STOR-i's new round of funding and introduced the first speaker.

Jacquillat spoke about analytics in air transportation. In particular, he discussed how air traffic flow management can absorb delays upstream by holding planes on the runway, so they do not get held up in the air, expending extra fuel as they wait for their turn to land. He also spoke about the benefits of adjusting existing integer programs for scheduling so that they are optimised to minimise passenger delays, giving greater priority to larger flights and ensuring connecting flights arrive on time.

The second talk was by third year STOR-i PhD student, Henry Moss, who introduced us to a Bayesian optimisation method called MUMBO, which he has been developing for his thesis.

The next speaker talked about her work with the Mallows ranking model, as well as some applications and recent advances in the area.

After lunch we were given two more talks, the first by an invited speaker and the second by Georgia Souli, a third year STOR-i PhD student, before another break for refreshments.

We came back to a presentation titled “The Use of Shape Constraints for Modelling Time Series of Counts” by a speaker from Columbia University.

Tom Flowerdew, a STOR-i alumnus, then talked to us about his work using machine learning to detect fraud. One of the major problems here is that machine learning requires data to learn from, but as the nature of fraud changes the algorithm must be able to adapt. The difficulty is that since all the obvious fraud attempts are blocked, future iterations will have no experience of them and so will have difficulty detecting them. Flowerdew suggested that allowing suspected fraudulent transactions to be completed with some small probability, and then proportionally increasing the weight of these outcomes in the learning stage, would allow the algorithm to learn more effectively and therefore prevent more fraud in the long run.

Tom Flowerdew at the STOR-i Conference

The final presentation of the day was “Making random things better: Optimisation of Stochastic Systems”.

We reconvened in the evening to look at posters made by the PhD students about each of their projects. This was a really good opportunity for them to develop their presentation skills by explaining their findings to knowledgeable academics in closely related fields. It was also an opportunity for us MRes students to learn a bit more about the research going on at the university and the sort of projects we might be interested in.

The following day we kicked off with a presentation focused on using the network of transactions between small and medium-sized businesses to improve credit risk models. Since transaction network data is difficult to get hold of, the speaker also discussed what approaches one can use without access to this data.

The next speaker spoke about the balance between accuracy and interpretability in data science models and how it can be achieved.

Another STOR-i alumna, Ciara Pike-Burke, then talked about her recent work with multi-armed bandits. A multi-armed bandit can be thought of as a slot machine where pulling each arm gives a reward from some unknown distribution. The usual problem is balancing exploration, to learn more about the different reward distributions for each arm, against maximising the total reward by exploiting the arm that is performing best. The reward distributions are usually constant, but Pike-Burke considered the case where the rewards depend on the previous actions of the player. For example, a company can suggest different products to a customer on their website, and the reward depends on whether the customer follows that link. If the customer has just bought a bed they are probably less likely to buy another bed. However, that same customer might be more likely to buy new pillowcases.

Finally, the last speaker presented his talk on “Model Based Clustering with Sparse Covariance Matrices”.

Every STOR-i has a beginning
Mon, 20 Jan 2020

Hello world!

Welcome to my blog! I have just started my second term here in Lancaster so in my very first post I wanted to talk a bit about my STOR-i experience so far: both with regards to academic life but also the extra-curricular experiences STOR-i has provided.

I was officially inducted into the MRes programme in late September as part of a welcome day. The main purpose of this day was for us to meet the rest of our cohort as well as the rest of the STOR-i family. After a tour of the facilities we were each allocated a locker, a shiny new laptop and a first year PhD student as a mentor. We were also given a talk explaining some of the changes to the programme since last year which involved restructuring several of the modules which we would be starting in the following week.

Before starting these modules however we were taken on a two day team building trip to the Lake District along with the first year PhDs. During this trip we enjoyed a variety of different activities ranging from creating golf courses from upcycled materials to yacht sailing. For each of these activities we were split into different groups and so by the end of our time in the Lake District we had been able to get to know everyone pretty well.

The programme then proceeded with five weeks of lectures. Our four taught modules were:

  • Probability and Stochastic Processes
  • Inference and Modelling
  • Stochastic Simulation
  • Deterministic Optimisation

These were quite fast paced and provided a solid foundation of knowledge that we could build on throughout the rest of the term. Each of us had our own strengths and weaknesses but were able to pool our collective experiences to support each other through the process.

The next four weeks of term were taken up by a series of contemporary topic sprints. At the beginning of each week we were given a lecture introducing us to a new area within statistics or operational research. We then divided into groups and spent the next few days delving deeper into that area, with the goal of reporting back at the end of the week in the form of group presentations. The four topic areas were Decision Theory, Changepoint Detection, Markov chain Monte Carlo and Stochastic Optimisation. In the final week of term we were tasked with producing an individual report on one of these topic areas. My report focused on Markov chain Monte Carlo, and in particular a method which uses approximation to reduce the computational cost of an existing algorithm.

In addition to our assessed modules, we also had the opportunity to learn about what the PhD students were doing. This was done both informally, by talking to them during breaks, lunchtimes and outside of working hours, and more formally in weekly Forums where the PhD students took turns to present their research. These talks were usually followed either by tea, coffee and biscuits or, in the build-up to Christmas, the STOR-i Bake-off.

To celebrate the end of term we were all invited out for a meal and drinks with the PhD students and members of staff.

Since arriving back for my second term I have attended the annual STOR-i conference which I will talk about in my next post.
