Say you made a thing. You put a lot of time and effort into your thing, and you're so proud of it, you want to share it with the world. But how much should you charge for it?
If you price it too high, no-one will buy it. If you price it too low, you won't make a profit. And you spent far too much time and money on your thing to not turn a profit.
So you go to a good friend - who just so happens to work in marketing - and ask him to do a little market research for you. This friend is a pretty cool guy, so he goes out onto the streets, shows people your product, and asks them how much they'd be willing to pay for one. But being an expert, he does it in such a way as to get honest and unbiased answers.
After an afternoon of efficient (and pro bono) work, he comes back to you with a good sample of 750 responses. After discarding 250 who weren't interested in your product, he analyses the remaining 500 responses.
And by some pleasing miracle, he finds that the prices these people are willing to pay approximate a normal distribution, with a mean of £10 and a standard deviation of £2.50.
So what do you charge? £10?
The people who said they would only pay a price less than £10 won't buy it, because they're cheapskates, and who needs their business anyway. But on the plus side, half the people surveyed - 250 people - said they'd pay £10 or more. So you would expect to make about £2,500 from the sample group.
Which isn't too bad. But can you do better?
You decide, because you're a bit of a smart-arse, to work out a function for your expected return if you were to charge £x.
So what you do first is integrate your normal distribution function from x to infinity. This gives you the area under the curve from x upwards - the proportion of your sample willing to pay at least £x.
Still with me?
Your resulting function looks like this,

f(x) = 250x · erfc((x - 10) / (2.5√2))

erfc is the complementary error function, but you needn't worry about what that is exactly, because that's what Wolfram Alpha is for. So, proud of yourself for working that out (somehow), you plot a graph of this function.
With Wolfram Alpha's help again, it's a piece of cake to find that the peak is at x = £7.73.
This is your best price - a couple of quid less than the average your sample was willing to pay. If you were to charge this amount, 409 people from your sample (500 × 0.818, the proportion of the curve above £7.73) would be willing to buy your thing - and that would make you a respectable ~£3,162.
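If you'd rather not lean on Wolfram Alpha, the same working fits in a few lines of Python. A minimal sketch, assuming the numbers above (£10 mean, £2.50 standard deviation, 500 responses); the 1p-resolution grid search is my own stand-in for the calculus:

import math

MEAN, SD, SAMPLE = 10.0, 2.5, 500

def expected_return(price):
    # Tail of the normal distribution: the proportion of the sample
    # willing to pay at least `price`, written via erfc.
    tail = 0.5 * math.erfc((price - MEAN) / (SD * math.sqrt(2)))
    return price * SAMPLE * tail

# Crude grid search over prices from £0.01 to £20.00, in 1p steps.
best_return, best_price = max(
    (expected_return(p / 100), p / 100) for p in range(1, 2001)
)
print("best price: £%.2f, expected return: ~£%.0f" % (best_price, best_return))

Run it and you get back the same £7.73 and ~£3,162 as above.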
And now you sit back in your chair and laugh, because a little maths just made you an extra 660-odd quid. Which isn't bad going.
As it turns out, if you ask people what they'd be willing to pay (and if their responses approximate a normal distribution) then the price that maximises profits - the one that balances per-unit profit and expected sales numbers - is ALWAYS less than the average of what people are willing to pay.
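A rough justification, for the sceptical: your expected return is R(x) = x·P(pay ≥ x). Differentiating, R'(x) = P(pay ≥ x) - x·f(x), where f is the bell-curve density. At the average, x = μ, the tail probability is exactly 1/2 and f(μ) = 1/(σ√(2π)), so R'(μ) = 1/2 - μ/(σ√(2π)). That's negative whenever μ > σ√(2π)/2 ≈ 1.25σ - that is, whenever the average sits more than about one-and-a-quarter standard deviations above zero, which any sensible price distribution will manage - so your return is already falling at the average, and the peak must sit somewhere below it.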
And cinemas - whose escalating prices are discouraging movie-goers and leading to declining profits - could perhaps learn something from this. But probably won't.
Spherical Cow in a Vacuum
The world, as you may have noticed, is not an ideal place. Life is never so simple.
A normal distribution will often suffice as an approximation (the central limit theorem is usually invoked here, for a large enough sample population), but it's not necessarily going to be the best fit. Or it might be that the results from your sample don't scale to the general public.
But much worse than that is people. People aren't rational, people don't necessarily know what they want, people don't know what things are worth, and people are surprisingly easy to manipulate - to an extent, you can effectively tell people what they want to pay, as anyone in marketing will proudly tell you while grinning maniacally and eyeing up your wallet.
So in that vein, I leave you with these two TED talks -
Dan Gilbert on our mistaken expectations
Rory Sutherland: Life lessons from an ad man
[There are lots of other TED Talks on a huge range of subjects. Most worth watching. Some of them are particularly fantastic. Go explore!]
Wednesday, April 20, 2011
Have cookie. All is a little intrusive So close to be Everyone is we haven't blogged in mall bathroom -!

If you've ever taken the time to read spam - and you really should - you might have come across something like this: text that almost seems human, but on closer inspection just doesn't quite make sense.
I've mentioned Markov Chains before, for example, in the Snakes and Ladders post. Here's the formal definition from Wikipedia,
A Markov chain, named for Andrey Markov, is a mathematical system that undergoes transitions from one state to another (from a finite or countable number of possible states) in a chain-like manner. It is a random process endowed with the Markov property: the next state depends only on the current state and not on the past.

In the context of snakes and ladders, this basically says that the square you'll land on next turn depends only on the square you're currently on and the roll of the dice.
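As a toy illustration of that - a made-up 10-square board, not anything from the snakes and ladders post - the whole game collapses to a step function of the current square alone:

import random

# Toy 10-square board: ladders and snakes are just jumps between squares.
JUMPS = {3: 7, 9: 4}   # square 3 has a ladder up to 7; square 9 a snake down to 4

def step(square):
    """One turn: where you go depends only on where you are (plus the die)."""
    square = min(square + random.randint(1, 6), 10)
    return JUMPS.get(square, square)

# A random walk through the chain, from the start to the final square.
square, turns = 0, 0
while square < 10:
    square, turns = step(square), turns + 1
print("finished in", turns, "turns")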
What the Dickens?
So what does this have to do with generating text?
Here's a visual example of how it works: the first few lines from A Tale of Two Cities as a flowchart.
So for text generation, your 'current state' will be some word - say, 'of' in the above. The Markov text generator then picks a word to follow it. In the above, there are five words that could follow 'of', each with an equal chance of being chosen - except for 'times', which is twice as likely as the others, since it appears twice in the source.
It should be noted that it isn't just shuffling the words completely at random - words are chosen based on what they appear together with in the source.
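To make that concrete, here's a minimal word-wise generator in Python - a sketch of the idea only, not the post's own quick-and-dirty implementation (mentioned further down, but not reproduced here):

import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the source."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=20):
    """Walk the chain: each step depends only on the current word."""
    word, output = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: the word is never followed in the source
            break
        # Duplicates in the list give proportional odds - 'times' appears
        # twice after 'of' in the Dickens example, so it's twice as likely.
        word = random.choice(followers)
        output.append(word)
    return ' '.join(output)

source = ("it was the best of times it was the worst of times "
          "it was the age of wisdom it was the age of foolishness")
print(generate(build_chain(source), 'it'))

Feed it the Dickens opening and start from 'it', and you get plausible-but-wobbly output along the lines of "it was the age of times it was the best of wisdom ...".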
Tweets Go In, Gibberish Comes Out
By feeding different source texts into a generator, you can get all manner of different results, to some extent matching the style of the source writer. This is what can make the output seem so human, while being so beautifully nonsensical.
In fact, the quote at the very top of the post was generated by "That Can Be My Next Tweet".
The only explanation the site gives as to how it works is,
This page generates your future tweets based on the DNA of your existing messages.

But it seems safe to assume, from that and what it outputs, that it's using Markov text generation with your - or in this case my - previous tweets as the source text.
What if I don't really is
He opened the door and got into the car engine shuddered into life and the vehicle lurched down the driveway.

That bit of text is actually an extract from "How to write badly well: Forget what you're doing halfway through a sentence", and is human-written. But it demonstrates the point well.
In the above text, you have two sentence fragments either side of 'the car'; each makes sense individually, but not when they're put together.
And unfortunately, these sudden changes of direction are a major trip-up point for Markov text - especially since this sort of thing can happen multiple times in a single sentence.
Letters and Words
You don't have to chop up the source text word-wise.
You could, for example, run a Markov chain letter-wise (or even by groups of letters) - this is what the Word-O-Matic generator linked below does. Feed it a list of US state names and you get such gems as Floridaho, Oklabama, and Flork.
The drawback to this approach is that it's no good at sentences, since what you'll get is likely to be a nonsensical collection of made-up words.
Alternatively, you could work with pairs of words, or indeed n-grams of any size. This has the benefit of creating more readable text, but at the cost of variation.
Similarly, a smaller source text can produce lots of sentence fragments that never vary, while a larger source will give greater randomness.
It all ultimately comes down to getting a desirable balance between variability and comprehensibility.
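Both variations fall out of the same code, for what it's worth, if you parameterise the state. Here's a hypothetical order-n version of the sketch above - pass in a list of words for word pairs, or a list of letters for letter-wise generation:

import random
from collections import defaultdict

def generate_ngram(units, n=2, length=60):
    """Order-n Markov generation: the next unit depends on the last n units."""
    chain = defaultdict(list)
    for i in range(len(units) - n):
        chain[tuple(units[i:i + n])].append(units[i + n])

    state = tuple(units[:n])        # start from the opening n-gram
    output = list(state)
    for _ in range(length - n):
        followers = chain.get(state)
        if not followers:           # this n-gram only appears at the very end
            break
        nxt = random.choice(followers)
        output.append(nxt)
        state = state[1:] + (nxt,)  # slide the window one unit along
    return output

text = "it was the best of times it was the worst of times it was the age of wisdom"
print(' '.join(generate_ngram(text.split(), n=2, length=15)))  # pairs of words
print(''.join(generate_ngram(list(text), n=4)))                # groups of letters

Larger n keeps more context, so the output reads better but hews closer to the source - the same trade-off again.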
Stuff to Play With
I threw together this quick and dirty implementation in python - mostly just to show that I could. I ran it on the first few paragraphs of this post (pre-editing), and got this as an example output,
almost seems human but while no-one is going on some spam was text that has anyone really should you might have postulated that has anyone really

That Can Be My Next Tweet, mentioned above.
On TweetCloud, you can enter a word and get a word cloud of words that commonly follow that word in tweets. Word.
Word-O-Matic, also mentioned above, creates words based on any source text you give it. See also: the associated Reddit thread, with lots of example results.
Markov Text Synthesizer, a general online generator you can play with.
There are various Twitter-bot attempts here.
Markov Shakespearean Sonnet (uses a slightly more complex generation method)
Someone who explains it better than me.
Oh, and you can also use Markov Chains to generate music.
[Spam is weirdly hard to come by these days.]