Monday, December 30, 2019
Put the following in order
The YouTube channel College Humor has a game show series called 'Um, Actually' - a game of nerdy corrections and general nerd trivia.
One of the 'shiny questions' involves putting things in order - for example, putting spaceships or fictional creatures in order by size.
The way the scoring works (as far as I can tell) is that you get one point for each item you put in the right position. So if you guess (1,3,2,4), you would get 2 points for getting 1 and 4 in the correct positions.
The problem was, in the creatures game, there was this one creature which looked like a single-celled organism, but was actually the size of a galaxy (or something), making it the largest.
So suppose you got everything else in the right order, but fell for the trap and put this surprisingly massive organism as the smallest, e.g. (2,3,4,1).
In that case, everything is in the wrong position, so no points. But I would argue that, since all but one are in the right order, you should only lose one point.
The FineBros channel does a similar game - e.g. put the top 10 most liked videos of 2019 in order. Under their scoring system, you get 2 points if an entry is in the right position, and 1 point if it's off by one. So (1,3,2,4) would be worth 6 points and (2,3,4,1) would be worth 3 points.
This is slightly better, but you can still lose significant points for getting just one or two entries out of place. For example, if you guessed (3,4,5,6,7,1,2), you would get 0 points, even though 5 of the 7 are in the right order.
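Just to make the comparison concrete, here's a rough sketch of both scoring rules in Python (my own reading of them, not either show's actual code), with a guess written as a list of correct ranks:

def exact_score(guess):
    # One point per item in exactly the right position (the 'Um, Actually' rule).
    return sum(1 for position, rank in enumerate(guess, start=1) if rank == position)

def off_by_one_score(guess):
    # Two points for the right position, one point for off-by-one (the FineBros rule).
    score = 0
    for position, rank in enumerate(guess, start=1):
        if rank == position:
            score += 2
        elif abs(rank - position) == 1:
            score += 1
    return score

print(exact_score([1, 3, 2, 4]), exact_score([2, 3, 4, 1]))            # 2 0
print(off_by_one_score([1, 3, 2, 4]), off_by_one_score([2, 3, 4, 1]))  # 6 3
print(off_by_one_score([3, 4, 5, 6, 7, 1, 2]))                         # 0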
So, can we come up with a better scoring system?
Levenshtein Distance
The first thing that comes to mind is Levenshtein, or 'minimum edit', distance.
This measures the distance between two strings (words) as the minimum number of edits needed to get from one to the other. In this context, an edit is a single character addition (cats -> chats), deletion (cats -> cat), or substitution (cats -> bats).
Now, this doesn't seem quite relevant to our problem; we're not adding, deleting, or substituting, we're swapping. For Levenshtein, a swap would be counted as a deletion + an addition (or 2 substitutions), e.g. cats -> cas -> cast.
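For reference, here's a minimal sketch of the standard Levenshtein calculation - it works on any two sequences, not just words:

def levenshtein(a, b):
    # Minimum number of single-element additions, deletions and substitutions
    # needed to turn sequence a into sequence b.
    previous = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        current = [i]
        for j, y in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,             # delete x
                current[j - 1] + 1,          # add y
                previous[j - 1] + (x != y),  # substitute x for y
            ))
        previous = current
    return previous[-1]

print(levenshtein('cats', 'cast'))              # 2 (delete + add, or 2 substitutions)
print(levenshtein([1, 3, 2, 4], [1, 2, 3, 4]))  # 2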
We could calculate the Levenshtein distance and just divide it by 2 - effectively treating delete+add (2 edits) as equivalent to a swap (1 edit). Or else, we can just take the general principle and find the minimum number of swaps required to get from the guessed order to the correct order.
As for the actual score, we can take the number of entries in the list (N) minus the minimum number of swaps.
So for our original examples, (1,3,2,4) is worth 3, and (2,3,4,1) is also worth 3.
In the latter case, 1 wasn't 'swapped' with an actual element, but you could think of it as a swap with an implied 'null' element - (..,null,1,2,3,4,null,..) -> (..,null,null,2,3,4,1,null,...)
Dynamic Programming
Dynamic programming is an approach to solving a certain class of problems which would take a ridiculous amount of time to solve by brute force. With dynamic programming, these problems can be solved in a more reasonable amount of time by breaking them down into smaller sub-problems and solving those recursively, reusing the answers as you go.
One of the classic problems in dynamic programming is: find the longest increasing sub-sequence in a list of numbers.
For example, in (2,3,1,7,4,9,5,8) the longest increasing sub-sequence would be (2,3,4,5,8). So in this case, we might say the score is 5/8.
Going back to our other examples, (1,3,2,4) would have sub-sequence (1,2,4) or (1,3,4), worth 3 in either case. And (2,3,4,1) would be (2,3,4), which is also worth 3.
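Here's a sketch of the classic O(n^2) dynamic programming solution, which also recovers one of the longest sub-sequences (there can be ties, as in the (1,3,2,4) example):

def longest_increasing_subsequence(sequence):
    # best[i] = length of the longest increasing sub-sequence ending at index i;
    # prev[i] = index of the element before i in that sub-sequence.
    if not sequence:
        return []
    best = [1] * len(sequence)
    prev = [None] * len(sequence)
    for i, x in enumerate(sequence):
        for j in range(i):
            if sequence[j] < x and best[j] + 1 > best[i]:
                best[i] = best[j] + 1
                prev[i] = j
    # Walk back from the end of the best chain to recover the sub-sequence.
    i = max(range(len(sequence)), key=lambda k: best[k])
    result = []
    while i is not None:
        result.append(sequence[i])
        i = prev[i]
    return result[::-1]

print(longest_increasing_subsequence([2, 3, 1, 7, 4, 9, 5, 8]))  # [2, 3, 4, 5, 8]
print(longest_increasing_subsequence([1, 3, 2, 4]))              # [1, 3, 4] (could equally be [1, 2, 4])
print(longest_increasing_subsequence([2, 3, 4, 1]))              # [2, 3, 4]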
Weighted Error
This is all well and good, but doesn't it feel like (2,3,4,1) is 'more wrong' than (1,3,2,4)? Each has one element out of order, but in the former that one element is further from where it's 'supposed' to be.
One correction might be to calculate the error as sum[abs(x - x')], where x is the position of an element and x' is where it's supposed to be. So (1,3,2,4) would be abs(1-1) + abs(3-2) + abs(2-3) + abs(4-4) = 2.
However, this is flawed. For the (2,3,4,1) example, the error comes out at 6. Once again, we're being penalised for all the elements that are in the right order but off by one.
So a slight modification would be to only calculate the error for those elements which are out of place - that is, find the longest sub-sequence, and then calculate the error for all elements which don't belong to that sub-sequence.
For (1,3,2,4) the error would be 1, and for (2,3,4,1) it would be 3.
But how do we get from the error to the actual score?
We can start by calculating the maximum error, then subtracting the calculated error from it. So what's the maximum error?
The most wrong you can be would be to get everything in the wrong order - i.e. all the elements reversed. So if we have N elements, the maximum error would be
abs(1 - N) + abs(2 - (N-1)) + ... + abs(N - 1) = (N-1) + (N-3) + ... + (N-3) + (N-1), which works out as N^2/2 (rounded down when N is odd). For N = 4 that gives a maximum error of 8.
Actually, there's a subtle flaw in this reasoning - even when all the elements are reversed, there is technically a longest sub-sequence of length 1, so one of the entries shouldn't count towards the maximum error. Which element we choose as the one 'correct' entry affects the maximum error: if we choose the first element in the sequence, the max error is reduced by N-1, whereas if we choose one from the middle it's only reduced by 1 or 0 (for an even or odd number of elements, respectively).
For simplicity, we'll just assume the middle entry and say the max error adjustment is 0.
So going back once more to our original examples, (1,3,2,4) has a score of 8 - 1 = 7, and (2,3,4,1) has a score of 8 - 3 = 5, making the latter 'more wrong' as desired.
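To put it all together, here's a sketch of the full scoring scheme under the assumptions above. It reuses the longest_increasing_subsequence() function from the earlier snippet, and if there are several longest sub-sequences it just uses whichever one that function happens to return:

def weighted_score(guess):
    # Score a guess written as a list of correct ranks 1..N.
    n = len(guess)
    keep = longest_increasing_subsequence(guess)
    # Sum abs(guessed position - correct position) over the out-of-place elements only.
    error = sum(abs(position - rank)
                for position, rank in enumerate(guess, start=1)
                if rank not in keep)
    max_error = n * n // 2  # the reversed-order error, floor(N^2 / 2)
    return max_error - error

print(weighted_score([1, 3, 2, 4]))  # 8 - 1 = 7
print(weighted_score([2, 3, 4, 1]))  # 8 - 3 = 5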
Conclusion
I'm not sure if there was a point to all this. I don't think this is useful outside of scoring this particular kind of game.
It might be interesting to run a neural net or genetic algorithm or similar using this as the score function. I'd be interested to see how a neural net performed at sorting, in terms of performance and correctness.
But anyway,
Chris.
Labels:
algorithm,
everyday maths,
games,
maths,
over-thinking,
problem solving,
puzzles,
random,
scoring,
sorting
Thursday, January 01, 2015
Lights Out (Game)
I was watching this year's RI Christmas Lectures, and it reminded me of this game I had when I was much younger. It was, I think, a Christmas present from my grandparents, back in '96. It was called Lights Out, and it looked something like this.
Basically, the console had a 5x5 grid of these rubbery, translucent buttons with little red lights underneath them. When you press a button, that button and the four adjacent buttons switch state - light on to light off, or off to on. So, in the example below (where blue squares are lights), if you press the centre square, the centre square turns off and the four adjacent squares turn on.
In the game, you're given a pattern of lights, such as the one above, and the task is to press buttons until all the lights are out.
Interestingly, in the pattern above (on the left), you can clear the board by just pressing the squares that are on at the start (in any order).
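For illustration, here's the toggle rule as a few lines of Python (just the grid logic - a sketch, not my actual Pygame code):

def press(board, r, c):
    # Toggle (r, c) and its in-bounds orthogonal neighbours on a grid of 0s and 1s.
    size = len(board)
    for dr, dc in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < size and 0 <= cc < size:
            board[rr][cc] ^= 1

# Pressing the centre of an all-off 5x5 board turns on a plus-shaped pattern.
board = [[0] * 5 for _ in range(5)]
press(board, 2, 2)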
Anyway, I was reminded of that. Meanwhile, I'd been learning some Pygame for this other thing (that I'll talk about in a later blog). Now, Pygame is a Python library for making games (amongst other things). So you can probably see where this is going...
Coding a 'Lights Out' clone was straightforward enough - the code is short and not too complicated. The above images are actually screenshots. You can look at my code/download it here. You'll need Python and Pygame to run it.
This is a very threadbare version of the game. You can load a level as a text file of five lines of ones and zeros, like here. Or else you can just play from a blank board - make patterns then try to get rid of them again (not necessarily as easy as it sounds).
If you want something more fancy, or if you don't want to install Python and all that, there are probably loads of playable clones online.
Now, at this point you're probably thinking "Yeah, that's great. But what about the maths of Lights Out?". And you'd be damn right to ask. If you're interested, there's a good summary, as well as links to articles/papers, on the Wikipedia page. I don't want to just repeat what's written there. Basically, it all comes down to linear algebra.
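But just to give a flavour of it, here's one way the linear algebra can look in practice - a sketch (mine, not lifted from the Wikipedia page) that solves a board by Gaussian elimination over GF(2), with each row of the matrix packed into a bitmask:

N = 5

def press_mask(r, c):
    # Bitmask of the lights toggled by pressing button (r, c).
    mask = 0
    for dr, dc in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < N and 0 <= cc < N:
            mask |= 1 << (rr * N + cc)
    return mask

def solve(board_mask):
    # Solve A.x = b over GF(2), where column j of A is the press mask of button j
    # and b is the starting pattern. Returns a set of buttons to press (as r*5 + c
    # indices), or None if the pattern can't be cleared.
    size = N * N
    # rows[i] packs row i of the augmented matrix [A | b]: bit j says whether
    # pressing button j toggles light i; bit 'size' is light i's starting state.
    rows = []
    for i in range(size):
        row = sum(1 << j for j in range(size) if press_mask(j // N, j % N) >> i & 1)
        row |= (board_mask >> i & 1) << size
        rows.append(row)
    # Gaussian elimination, down to reduced row echelon form.
    pivot_row, pivot_cols = 0, []
    for col in range(size):
        pivot = next((r for r in range(pivot_row, size) if rows[r] >> col & 1), None)
        if pivot is None:
            continue
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        for r in range(size):
            if r != pivot_row and rows[r] >> col & 1:
                rows[r] ^= rows[pivot_row]
        pivot_cols.append(col)
        pivot_row += 1
    # A zero row with a non-zero target bit means the pattern is unsolvable.
    if any(rows[r] >> size & 1 for r in range(pivot_row, size)):
        return None
    # Read off one solution (free variables are left as 'don't press').
    return {col for r, col in enumerate(pivot_cols) if rows[r] >> size & 1}

# Example: the plus-shaped pattern made by a single centre press can be cleared.
board = press_mask(2, 2)
presses = solve(board)
check = board
for p in presses:
    check ^= press_mask(p // N, p % N)
print(sorted(presses), check == 0)

Incidentally, the 5x5 board's matrix isn't full rank - its null space is two-dimensional - which is why some patterns can't be cleared at all, and why the solvable ones have more than one solution.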
Anyway, have fun with that.
Oatzy.
[Call this a late Christmas present.]
Monday, August 11, 2014
How Long Will a Prime Number Be?
File this under things I think about when I'm trying to get to sleep.
The Question
The idea is this - pick a random integer between 0 and 9. If it's prime, stop. If it's not prime, pick another random integer and stick it on the end of the first number. If this new number is prime, stop. If it isn't prime, pick another random integer, and so on.
So, for example:
1 -> not prime
2 -> 12 -> not prime
7 -> 127 -> prime -> stop.
The question is this - on average, how long would we expect the number to be when we get a prime?
Sometimes Programming Just Won't Cut It
Normally, this is where I'd bust out Python and write a little program to run through the procedure a few times and see what happens. The problem is, testing whether a number is prime tends to be really slooooow.
The 'easiest' algorithm, trial division - literally checking if the number is divisible by any of the integers less than it (or, a bit less naively, up to its square root) - has (worst case) speed O(sqrt(n)). What this means is, each time we add a digit to our number, it will take ~sqrt(10) = 3.16 times longer to check if it's prime. And that time soon adds up. There are quicker algorithms, for example AKS, but they're too complicated to bother with.
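For what it's worth, here's roughly the sort of simulation I would have written - a sketch with plain trial division. It's fine while the numbers stay short, but push the digit cap much higher and it starts to grind:

import random

def is_prime(n):
    # Trial division up to sqrt(n) - O(sqrt(n)) in the worst case.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def length_when_prime(max_digits=12):
    # Append random digits until the number is prime (or we give up at the cap).
    number = random.randint(0, 9)
    length = 1
    while not is_prime(number) and length < max_digits:
        number = number * 10 + random.randint(0, 9)
        length += 1
    return length

print([length_when_prime() for _ in range(10)])

That cap is doing a lot of the work - without it, one unlucky run that wanders onto a long composite with no small factors will sit there churning through trial divisions more or less indefinitely.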
Instead, we can figure out the answer with some good old-fashioned maths.
Some Good Old-Fashioned Maths
Let's restate the problem - what we want to know is the expected length of the prime number. Here we use the standard definition for the expectation value:
E[length] = sum over all N of N * p(N)
So now the problem is to find p(N), which in this case is the probability that a number of length N is prime. Now, as any romantic mathematician will tell you, prime numbers are mysterious things, which are scattered about the number line seemingly at random. However, what we do have is the 'Prime Number Theorem', which estimates the number of prime numbers smaller than n as
pi(n) ~ n / ln(n)
where ln(n) is the natural logarithm. Here's a Numberphile video explaining the theorem.
So, for one-digit numbers (numbers less than 10) there are ~10/ln(10) = 4.3 prime numbers (well, technically 4, but this is an estimate). So the probability that a one-digit number is prime is estimated as 0.43. The estimate gets better as the number of digits gets bigger.
Now, for numbers of two or more digits, the number of primes that are N digits long is (approximately) the number of primes less than 10^N minus the number of primes less than 10^(N-1). So then, the probability of an N-digit number being prime is the number of N-digit primes divided by the total number of N-digit numbers (of which there are 9 * 10^(N-1)). It can be shown (homework) that this probability works out as roughly
p(N) ~ (10/N - 1/(N-1)) / (9 * ln(10))
Plotting this function, we see that the probability goes to zero as N goes to infinity
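Evaluating that estimate for the first few lengths gives a feel for how quickly it tails off (a quick sketch using the formula above):

from math import log

def p(N):
    # log() with a single argument is the natural logarithm.
    if N == 1:
        return 1 / log(10)
    return (10 / N - 1 / (N - 1)) / (9 * log(10))

print([round(p(N), 3) for N in range(1, 8)])
# [0.434, 0.193, 0.137, 0.105, 0.084, 0.071, 0.061]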
In other words, the longer the number gets the less likely it is to be prime. In this case, you can probably guess what the expected length of our prime will be.
So we have
N * p(N) ~ (10 - N/(N-1)) / (9 * ln(10))
And as N->infinity, N p(N) -> 1/ln(10). What this means is the sum doesn't converge - it goes to infinity. So on average we expect our prime to be infinitely long.
So Basically
To cut a long story short, the procedure is most likely to stop when the number is only one digit long. Or else, the longer the number gets, the less likely the procedure is to stop. This is another reason why the programming solution was a non-starter.
Oatzy.
[Now stop doing maths and go to sleep.]
Labels:
mathematical methods,
maths,
probability,
problem solving,
puzzles
Friday, February 24, 2012
A Problem With Puzzles
Here's a puzzle, tweeted by Aetherling
Have a go at it yourself. Solution below.
You see these sorts of puzzle every now and again. And there are usually common 'tricks' to them.
My first instinct was to add the digits. Nothing. Modulus? Nope. Sum and modulus, multiply digits, square, divide? Nope.
So, what?
For some reason, I decided I was going to try some algebra - ignore the fact that the left-hand sides of the equations are numbers, and treat them as variables with some unknown values. And then, add up whatever those values are to get the numbers on the right of the equations.
i.e. 9313 -> '9' + 2x'3' + '1' = 1; 1111 -> 4x'1' = 0; etc.
So from 1111=2222=3333=5555=7777=0 we get that 1,2,3,5,7 = 0
From 0000=6666=9999=4 we get that 0,6,9 = 1
And from 6855=3, 1+'8'=3 => 8=2
4 is undefined. And that makes the answer to the puzzle 2581=2.
It took me about half an hour to come up with that answer...
"If all you have is a hammer, everything looks like a nail"
That is the right answer. But the puzzle suggests that a preschooler would get the correct answer with ease. And I doubt a preschooler would do so with algebra.
So, what would a preschooler do? And why does it take 'higher educated' people so much longer?
What's the 'correct' answer? Take a look at the below
Basically, it's the number of loops in the digits on the left-hand side. This had to be pointed out to me by Aerliss.
But here's the interesting thing - take a look at the values I assigned to the numbers.
I got to the same answer, and effectively 'derived' the number of loops in the digits, without realising that that's what I was doing.
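Just to check the rule does what I think it does, here's the loop-counting version as a tiny script. I've left '4' out of the table since, as noted, its value is undefined (it depends on whether you write it with a closed top):

LOOPS = {'0': 1, '1': 0, '2': 0, '3': 0, '5': 0, '6': 1, '7': 0, '8': 2, '9': 1}

def loop_count(number):
    # Total number of closed loops in the digits of the number.
    return sum(LOOPS[d] for d in str(number))

print(loop_count(9313), loop_count(1111), loop_count(6855), loop_count(2581))
# 1 0 3 2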
Source on the subheading quote - Maslow's Hammer.
It's not a perfect analogy, but the idea is this - the more mathematical training you have, the more techniques you'll be trying to use to solve the problem; possibly causing you to miss the 'obvious' answer. As, indeed, I did.
In this analogy, maths is the hammer.
And, strictly speaking, this puzzle isn't a maths problem - even though it's presented as if it were. And even though it can be solved using maths.
Another Problem
Here's another puzzle, recently posted on Reddit
The problem here is that the correct answer isn't known.
The 'obvious' answer (in as much as the one that most commenters came up with) was to add the four cross numbers, divide by 10, and round to the nearest whole number. In that case, the missing number is (23+20+12+3)/10 = 5.8 ~ 6 (B).
But that doesn't explain the set up - why the crosses? Are the positions of the numbers in the cross significant? And rounding is a little messy.
The most compelling proposed solution, once again, doesn't actually involve any maths:
"As far as I can tell, the answer is C: 7 [...] The formula for the number in the center is the number of unique letters shared by the top and bottom numbers as written in English plus the number of unique letters shared by the right and left numbers.."
Finite Rule Paradox
At any rate, what the comment thread shows is, in the absence of a 'true' answer, any answer is potentially correct with some valid justification. Case in point, I've just presented two different possible answers with two valid justifications.
This is an example of Wittgenstein's Finite Rule Paradox.
Stated explicitly - "This was our paradox: no course of action could be determined by a rule, because any course of action can be made out to accord with the rule"
Basically, what I described above...
There's a clearer explanation here. The topic is also a plot point in the novel (and subsequent film adaptation) The Oxford Murders; recommended if you're into maths-fueled murder mysteries.
The paradox is more apparent in 'find the next term in the sequence' puzzles.
For example, if you're given 2, 4, 8, 16 and asked what the next number is, then the obvious answer is 32 - twice the previous number/powers of two.
However, 31 is an equally valid continuation, as Marcus du Sautoy explains:
"[L]et me explain why 31 can be a perfectly legitimate way to continue the sequence 2, 4, 8, 16 ... Draw three dots on a circle and join the dots with lines. The circle gets divided into four pieces. If you now take four dots on the circle and draw all the lines between the dots then you cut the circle into eight pieces. Five dots leads to 16 pieces. But if you draw all the lines between six dots you will only get 31 pieces rather than the 32 you'd expect."
In fact, you can define a function, f(n), which satisfies the first 4 terms of the above sequence, and will give any number you like for the fifth.
The Easy Path
How is that possible? Lagrange Interpolation.
[Discussed in Professor Stewart's Hoard of Mathematical Treasures]
Lagrange interpolation applies to any (numerical) sequence of any (finite) length; so, there are infinitely many ways to continue any given sequence - and, necessarily, one cannot say with absolute certainty that a given sequence will continue in one particular way, only.
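To see it in action, here's a small sketch that builds the Lagrange polynomial through 2, 4, 8, 16 and whatever fifth term you fancy (the lagrange() helper is my own, not from the book):

def lagrange(points):
    # Return (as a function) the polynomial passing through the given (x, y) points.
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

# Force the 'next term' of 2, 4, 8, 16 to be 31 - or 42, or anything else you like.
f = lagrange([(1, 2), (2, 4), (3, 8), (4, 16), (5, 31)])
print([round(f(n)) for n in range(1, 6)])  # [2, 4, 8, 16, 31]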
Unless the person who set the puzzle says otherwise. Because if it's their puzzle, they know what answer they're looking for. And if you try to score easy marks on an exam by citing Wittgenstein, you're probably going to fail.
Of course, Lagrange polynomials - although interesting that they exist - are likely to be the least interesting way to continue a sequence. Certainly, the equivalent Lagrange polynomial isn't as meaningful or interesting as the circle division sequence; even though they both give the same first 5 terms.
And a Lagrange polynomial will probably never be the most obvious/aesthetically pleasing solution.
"[H]e conjectured that, though in principle all answers were equally probable, there might be something engraved on the human psyche [...] which guided most people to the same place, to the answer that seemed the simplest, clearest or most satisfying. He was definitely thinking that some kind of aesthetic principle was operating a priori which only let through a few possible answers for the final choice." - The Oxford Murders, Chptr 9Though, as seen in the Reddit puzzle, the simplest answer isn't necessarily the most correct one.
There Must Be Another Way
The important thing is, the existence of Lagrange polynomials 'proves' the Finite Rule Paradox for numerical sequences.
By the same logic, there must exist some functions, f(a,b,c,d), which satisfy the first 20 equations in the first puzzle, but give some value other than 2 for the last. Or, multiple functions which do give 2, but by different methods to the correct one.
Of course, such functions aren't likely to be easy to find. Especially when there are 20 'initial rules' to satisfy. But the fact that there are finitely many rules means it's possible.
Not that counting loops, or manipulating letters, really counts as a mathematical function. But the point still stands.
At any rate, the important thing to remember is not to get too obsessed with the consequences of the Finite Rule Paradox. Or you may end up trying to lobotomise yourself to prove a theory.
Oatzy.
[What comes next - 3, 3, 5, 4, 4, 3, 5, 5, 4, ?]
Labels:
games,
hand-waving,
maths,
nerd,
over-thinking,
philosophy,
problem solving,
puzzles,
random