Turning seemingly innocuous comments into sexual innuendo by adding the words “That’s what she said” (TWSS) has become a (chiefly American, occasionally annoying) cultural phenomenon. Unfortunately, identifying humour and double entendre in software is hard. This interests me from a research perspective: I work in the wider area of knowledge representation and reasoning, particularly declarative problem-solving, and it is hard to perform sentiment analysis and infer meaning from human-language statements with non-standard structures, particularly on large-scale datasets (think Twitter and the like).
For many years, artificial intelligence researchers have been trying to solve the natural language processing (NLP) problem. This field bridges computer science and linguistics and aims to build software that can analyse, understand and generate the languages humans use naturally, so that eventually you will be able to address your computer as you would another person. Natural language understanding is sometimes called an AI-complete problem, because it requires extensive world knowledge and the ability to manipulate it; calling a problem AI-complete reflects the view that it cannot be solved by a single specialised algorithm. In NLP, the meaning of a sentence often varies with the context in which it is presented (text can carry information at many different granularities), and this is difficult to represent in software. Add humour, puns and double entendre, and it gets substantially harder.
But maybe the first steps have been made: a recent paper (That’s What She Said: Double Entendre Identification) by Chloe Kiddon and Yuriy Brun, computer scientists from the University of Washington, presents DEviaNT (“Double Entendre via Noun Transfer”), a software program capable of understanding a specific type of humour: the TWSS problem.
Kiddon and Brun’s approach uses three functions to score words, based on sample sentences drawn either from an erotica corpus or from the Brown corpus, a standard reference corpus in the field. This was the part that caught my geek attention: the noun sexiness function, NS(n), rates nouns on their relative frequencies in the two corpora and on whether they are euphemisms for sexually explicit nouns. Words with high NS(n) scores include “rod” and “meat”. The other two functions are the adjective sexiness function, AS(a), which picks up adjectives such as “hot” and “wet”, and the verb sexiness function, VS(v).
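To make the idea concrete, here is a minimal sketch of a frequency-based “noun sexiness” score. The tiny corpora and the exact ratio formula are my own invention for illustration; the paper’s actual NS(n) is defined over large real corpora and also accounts for euphemism status.

```python
from collections import Counter

# Toy corpora standing in for the erotica corpus and the Brown corpus;
# these sentences are invented for the sake of the example.
EROTIC_SENTENCES = [
    "she grabbed the rod firmly",
    "the meat was hot and wet",
    "he held the rod with both hands",
]
NEUTRAL_SENTENCES = [
    "the fishing rod leaned against the wall",
    "the report was finished on time",
    "she bought meat at the market",
]

def word_freqs(sentences):
    """Relative frequency of each word within a corpus."""
    counts = Counter(w for s in sentences for w in s.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def noun_sexiness(noun, erotic_freqs, neutral_freqs, floor=1e-6):
    """A stand-in for NS(n): how much more often a noun appears
    in the erotic corpus than in the neutral one."""
    return erotic_freqs.get(noun, floor) / neutral_freqs.get(noun, floor)

erotic = word_freqs(EROTIC_SENTENCES)
neutral = word_freqs(NEUTRAL_SENTENCES)
# "rod" should score far higher than a mundane noun like "report".
print(noun_sexiness("rod", erotic, neutral))
```

With real corpora the same ratio intuition holds: nouns that are disproportionately frequent in erotica (relative to general text) earn high scores.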
These three functions are used to score sentences for noun euphemisms, i.e. whether a test sentence contains a word likely to appear in an erotic sentence. Other features include the presence of adjective and verb combinations more likely to occur in erotic literature, plus structural information such as the number of punctuation and non-punctuation tokens in a sentence. These features were used to train a classifier with the WEKA toolkit, an open-source collection of machine-learning algorithms for data-mining tasks. On their test set the system identified sentences suitable for TWSS-style jokes with around 72% accuracy, while keeping false positives to a minimum: the authors note that making the joke when the sentence is not appropriate is much worse than missing the joke when it is.
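The pipeline shape (feature vectors in, trained classifier out) can be sketched in a few lines. Everything here is hypothetical: the word lexicon is invented, the features are a cut-down caricature of the paper’s, and a nearest-centroid rule stands in for the classifier the authors actually train with WEKA.

```python
import string

# A hypothetical lexicon; in the paper these weights come from the
# NS/AS/VS functions computed over real corpora.
SEXY_WORDS = {"rod", "meat", "hot", "wet", "hard", "grip"}

def features(sentence):
    """Cut-down feature vector: euphemism count plus surface counts,
    loosely echoing the kinds of features the paper describes."""
    words = sentence.lower().split()
    euphemisms = sum(1 for w in words if w.strip(string.punctuation) in SEXY_WORDS)
    punct = sum(1 for ch in sentence if ch in string.punctuation)
    return [euphemisms, punct, len(words)]

def train_centroids(labelled):
    """Average feature vector per class - a toy stand-in for training
    a real classifier in WEKA."""
    sums, counts = {}, {}
    for sentence, label in labelled:
        f = features(sentence)
        acc = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def classify(sentence, centroids):
    """Assign the label whose centroid is nearest in feature space."""
    f = features(sentence)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(f, centroids[lbl]))

training = [
    ("it is so hot and wet in here", "twss"),
    ("hold the rod with a firm grip", "twss"),
    ("the meeting starts at nine", "other"),
    ("please file the quarterly report", "other"),
]
centroids = train_centroids(training)
print(classify("that meat is really hard today", centroids))  # → twss
```

The asymmetric cost the authors mention (a false positive is worse than a miss) would, in a real system, be handled by biasing the classifier’s decision threshold towards “other” rather than by a symmetric rule like this one.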
While this is preliminary work (the authors will present it at the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies in June), the technique of metaphorical mapping may generalise to other types of double entendre and other forms of humour.
Or maybe it’s just far too big to get a grip on…