(read the full NYT review here)
If you enjoy mathematics as well as the BBC quiz series Pointless, hosted by Alexander Armstrong and Richard Osman, then you will most definitely enjoy the following blog post by Mathistopheles:
This article is based on a gloriously irrelevant mathematical sequence that is derived (rather appropriately) from the episodes of the television show Pointless. It is the sort of idea that has me scribbling calculations on the back of envelopes for hours on end, despite there being absolutely no hope of an outcome that could in any way justify this investment of effort. In this first part, I introduce the sequence and explain how it is related to some well-known mathematical objects called Markov chains. In the vain hope that I might convey my enthusiasm for this topic to others, I have tried to write this piece in a fairly accessible way. Almost no mathematical knowledge is assumed, beyond a rough idea of what probability is.
It provides a clear and accessible analysis of how the quiz show works by using directed graphs, matrices and Markov chains: read the full post here.
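To give a flavour of the machinery involved (my own minimal example, not taken from the post): a Markov chain is just a set of states with fixed transition probabilities, and the evolution of the chain is a matrix-vector product.

```python
# A two-state Markov chain (say, "contestant survives the round" vs
# "contestant is eliminated"); the states and probabilities here are
# made up purely for illustration.
transition = [
    [0.7, 0.3],  # from state 0: stay with p=0.7, move to state 1 with p=0.3
    [0.0, 1.0],  # state 1 is absorbing: once eliminated, always eliminated
]

def step(distribution, matrix):
    """One step of the chain: multiply the row vector by the matrix."""
    n = len(matrix)
    return [sum(distribution[i] * matrix[i][j] for i in range(n))
            for j in range(n)]

dist = [1.0, 0.0]  # start in state 0 with certainty
for _ in range(3):
    dist = step(dist, transition)
print(dist)  # after three rounds: [0.343, 0.657]
```

Iterating the matrix like this is exactly how long-run behaviour (e.g. the probability of eventually reaching an absorbing state) is computed.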
Somehow, I don’t think solving this problem will do much for the couple…
(HT Thanks, Textbooks)
Inspired by a recent discussion on the wonders of TeX, I started thinking about how easy it would be to generate prime numbers in TeX. Well, unsurprisingly, it was presented as an example by Knuth using trial division in The TeXbook (download) in 1984:
\documentclass{article}
\newif\ifprime \newif\ifunknown % boolean variables
\newcount\n \newcount\p \newcount\d \newcount\a % integer variables
\def\primes#1{2,~3% assume that #1 is at least 3
  \n=#1 \advance\n by-2 % n more to go
  \p=5 % odd primes starting with p
  \loop\ifnum\n>0 \printifprime\advance\p by2 \repeat}
\def\printp{, % we will invoke \printp if p is prime
  \ifnum\n=1 and~\fi % ‘and’ precedes the last value
  \number\p \advance\n by -1 }
\def\printifprime{\testprimality \ifprime\printp\fi}
\def\testprimality{{\d=3 \global\primetrue
  \loop\trialdivision \ifunknown\advance\d by2 \repeat}}
\def\trialdivision{\a=\p \divide\a by\d
  \ifnum\a>\d \unknowntrue\else\unknownfalse\fi
  \multiply\a by\d
  \ifnum\a=\p \global\primefalse\unknownfalse\fi}
\begin{document} % usage
The first 100 prime numbers are:~\primes{100}
\end{document}
You can also do it by sieving; check out the examples in my GitHub repo.
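For comparison, here is what sieving looks like in a few lines of Python (a minimal sketch of the sieve of Eratosthenes, not the version in the repo):

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # strike out multiples of p, starting from p*p (smaller
            # multiples already have a smaller prime factor)
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Unlike Knuth's trial division above, the sieve does no divisions at all: it only crosses off multiples.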
Found Turing’s plaque today near King’s College, Cambridge (his alma mater).
There has been much discussion online of yesterday’s CiF article by Simon Jenkins (For Britain’s pupils, maths is even more pointless than Latin). Click-bait aside, he has been here before; ignoring the derivation of the now-pervasive “x is the new Latin” meme, as well as overlooking the majority of the straw men and other logical fallacies, the main thrust of the article presents a false dichotomy. It reiterates an antiquated Two Cultures-style divide between mathematics and “creativity and social and emotional capacities” (which also frequently crops up in discussions on programming and computer science education). Furthermore, it implies the drive to reform mathematics education in the UK is ultimately misguided, with few jobs requiring advanced mathematical skills (STEM agenda? No thank you!), and that we would be better served by focusing on numeracy as well as encouraging “key industries”:
If British schools are to be slaves to Gove’s economic dogma, they should be turning out accountants, lawyers, administrators and salespeople. That is where the money is. Britain needs literate and presentable young people, sensitive to culture and the world around them, skilled in health, entertainment, finance, the law and citizenship. The truth is that Gove, like most of Cameron’s ministers, is an old socialist planner at heart.
Now, this is not to say that there are no issues with mathematics education in the UK; ACME has been arguing for a mathematics curriculum fit for the 21st century, supported by Ofsted and reports highlighting the importance of mathematics in the other sciences. Conrad Wolfram has long maintained we have the wrong focus in how we teach mathematics — in a similar way for computer science, contexts and problems must come first. I have long maintained it is socially acceptable to be bad at mathematics — it is rare for people to publicly admit they are unable to read or write, but happily proclaim a lifelong inability to perform basic calculations.
Jenkins has thus thrown together a ragbag of prejudices (a love of the arts, a dislike of international education markers, a sympathy for progressive education) with personal anecdote and concocted an argument completely detached from reality. As epitomised by this quote:
I learned maths. I found it tough and enjoyable. Algebra, trigonometry, differential calculus, logarithms and primes held no mystery, but they were even more pointless than Latin and Greek. Only a handful of my contemporaries went on to use maths afterwards.
…which reminds me of this xkcd comic:
From an article by Edward Frenkel in today’s New York Times:
Many mathematicians, when pressed, admit to being Platonists. The great logician Kurt Gödel argued that mathematical concepts and ideas “form an objective reality of their own, which we cannot create or change, but only perceive and describe”. But if this is true, how do humans manage to access this hidden reality?
We don’t know. But one fanciful possibility is that we live in a computer simulation based on the laws of mathematics — not in what we commonly take to be the real world. According to this theory, some highly advanced computer programmer of the future has devised this simulation, and we are unknowingly part of it. Thus when we discover a mathematical truth, we are simply discovering aspects of the code that the programmer used.
This hypothesis is by no means new; in Are You Living in a Computer Simulation?, Nick Bostrom argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilisation is extremely unlikely to run a significant number of simulations of its evolutionary history; (3) we are almost certainly living in a computer simulation.
Also see: Constraints on the Universe as a Numerical Simulation.
In 1990, Spanish philosopher Jon Perez Laraudogoitia submitted an article to the journal Mind entitled “This Article Should Not Be Rejected by Mind”. In it, he argued:
1. If statement 1 in this argument is trivially true, then this article should be accepted.
2. If statement 1 were false, then its antecedent (“statement 1 in this argument is trivially true”) would be true, which means that statement 1 itself would be true, a contradiction. So statement 1 must be true.
3. But that seems wrong, since Mind is a serious journal and shouldn’t publish trivial truths.
4. That means statement 1 must be either false or a non-trivial truth. We know it can’t be false (#2), so it must be a non-trivial truth, and its antecedent (“statement 1 in this argument is trivially true”) is false.
5. What then is the truth value of its consequent, “this article should be accepted”? If this were false then Mind shouldn’t publish the article; that can’t be right, since the article consists of a non-trivial truth and its justification.
6. So the consequent must be true, and Mind should publish the article.
They published it. “This is, I believe, the first article in the whole history of philosophy the content of which is concerned exclusively with its own self, or, in other words, which is totally self-referential”, Laraudogoitia wrote. “The reason why it is published is because in it there is a proof that it should not be rejected and that is all”.
(reblogged from Futility Closet)
Catch The Joy of Logic by Dave Cliff on iPlayer before it disappears! Programme blurb:
A sharp, witty, mind-expanding and exuberant foray into the world of logic with computer scientist Professor Dave Cliff. Following in the footsteps of the award-winning ‘The Joy of Stats’ and its sequel, ‘Tails You Win — The Science of Chance’, this film takes viewers on a new rollercoaster ride through philosophy, maths, science and technology — all of which, under the bonnet, run on logic. Wielding the same wit and wisdom, animation and gleeful nerdery as its predecessors, this film journeys from Aristotle to Alice in Wonderland, sci-fi to supercomputers to tell the fascinating story of the quest for certainty and the fundamentals of sound reasoning itself.
Dave Cliff, professor of computer science and engineering at Bristol University, is no abstract theoretician. 15 years ago he combined logic and a bit of maths to write one of the first computer programs to outperform humans at trading stocks and shares. Giving away the software for free, he says, was not his most logical move…
With the help of 25 seven-year-olds, Professor Cliff creates, for the first time ever, a computer made entirely of children, running on nothing but logic. We also meet the world’s brainiest whizz-kids, competing at the International Olympiad of Informatics in Brisbane, Australia.
‘The Joy of Logic’ also hails logic’s all-time heroes: George Boole, who moved logic beyond philosophy to mathematics; Bertrand Russell, who took 360+ pages but heroically proved that 1 + 1 = 2; Kurt Gödel, who brought logic to its knees by demonstrating that some truths are unprovable; and Alan Turing, who, with what Cliff calls an ‘almost exquisite paradox’, was inspired by this huge setback to logic to conceive the computer.
Ultimately, the film asks, can humans really stay ahead? Could today’s generation of logical computing machines be smarter than us? What does that tell us about our own brains, and just how ‘logical’ we really are…?
(you might also like this In Our Time programme on the history of logic from 2010 or this BBC Science Café programme on logic I was a guest on in 2011)
Everyone likes algorithms, especially novel sorting algorithms. So the basis for this “sleep sort” is simple: take the first element of the array (of positive integers), fork a new process which sleeps for that many seconds and then displays that number. Repeat for the next element.
#!/bin/bash
function f() {
    sleep "$1"
    echo "$1"
}
while [ -n "$1" ]
do
    f "$1" &
    shift
done
wait
Calculation of the average (and worst-case) complexity of this algorithm is left as an exercise to the reader; you might also enjoy bogosort and stooge sort (as well as some dancing).
A useful video (start from c.2mins) for explaining Diffie-Hellman(-Merkle) key exchange: a method to allow two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure communications channel. This key can then be used to encrypt subsequent communications using a symmetric key cipher.
It also highlights the cryptographic value of a one-way function and the difficulty of finding the discrete logarithm.
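The exchange described in the video can be sketched in a few lines of Python (with deliberately tiny, insecure parameters; real deployments use groups of at least 2048 bits and a vetted cryptographic library):

```python
# Toy Diffie-Hellman key exchange with deliberately tiny parameters.
p = 23   # public prime modulus
g = 5    # public generator

a = 6    # Alice's private key (kept secret)
b = 15   # Bob's private key (kept secret)

A = pow(g, a, p)  # Alice sends g^a mod p over the insecure channel
B = pow(g, b, p)  # Bob sends g^b mod p

# Each side combines the other's public value with its own secret:
shared_alice = pow(B, a, p)  # (g^b)^a mod p
shared_bob = pow(A, b, p)    # (g^a)^b mod p

assert shared_alice == shared_bob  # both arrive at g^(ab) mod p
print(shared_alice)  # 2
```

An eavesdropper sees p, g, A and B, but recovering a from A = g^a mod p is exactly the discrete logarithm problem mentioned above, which is what makes modular exponentiation a useful one-way function.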
(N.B. There is also some relevant content — albeit for a younger audience — in Chris Bishop‘s 2008 Royal Institution Christmas Lectures)
The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic. It has engaged the industry and wisdom of ancient and modern geometers to such an extent that it would be superfluous to discuss the problem at length…Further, the dignity of the science itself seems to require that every possible means be explored for the solution of a problem so elegant and so celebrated.
Disquisitiones Arithmeticae (1801)
Carl Friedrich Gauss (1777-1855)
Big whorls have little whorls
That feed on their velocity,
And little whorls have lesser whorls
And so on to viscosity.
The real trouble with this world of ours is not that it is an unreasonable world, nor even that it is a reasonable one. The commonest kind of trouble is that it is nearly reasonable, but not quite.
Life is not an illogicality; yet it is a trap for logicians. It looks just a little more mathematical and regular than it is; its exactitude is obvious, but its inexactitude is hidden; its wildness lies in wait.
Chapter VI: The Paradoxes of Christianity, Orthodoxy (1908)
G. K. Chesterton (1874-1936)
This started out as a list of top Computer Science blogs, but it more closely resembles a set: the order is irrelevant and there are no duplicate elements; membership of this set of blogs satisfies all of the following conditions:
N.B. I have deliberately excluded blogs primarily focusing on computer science education (for another time).
John’s blog cuts across using computing, programming and mathematics to solve real-world problems, pulling in his wide expertise as a mathematics professor, programmer, consultant, manager and statistician. Some great posts across the technical and socio-technical spectrum. Also runs a number of useful Twitter tip accounts, including @CompSciFact, @UnixToolTip, @RegexTip and @TeXtip.
Anthony is Dean of the Faculty of Engineering Sciences at UCL, having previously been the Head of the UCL Computer Science. His regular blog posts are an insightful and thought-provoking journey across computer science, engineering, research and academia.
Since 2002, the first major theoretical computer science blog; computational complexity and other fun stuff in mathematics and computer science.
Daniel Lemire is a professor in the Cognitive Computer Science research group at LICEF in Canada, with his popular blog covering topics across his research areas (databases, data warehousing, information retrieval and recommender systems), as well as programming, education, economics and open science.
This is a blog on P versus NP and other questions in the theory of computing, named after the famous letter that Gödel wrote to von Neumann which essentially stated the question decades before Cook and Karp. Defined by the authors as a personal view of the theory of computation, it talks about the “who” as much as the “what”.
Moshe Vardi, a distinguished and award-winning theoretical computer scientist, has served as Editor-in-Chief of Communications of the ACM since 2008, discussing a wide range of topics across computer science, research and technology. Certainly worth following on Twitter too.
Alan is the Hewlett-Packard Professor of Electronic Engineering at UWE and his blog is mostly, but not exclusively, about robots. It also touches upon artificial intelligence, artificial culture, ethics and biology, highlighting his definition of robotics as both engineering and experimental philosophy.
This site deals with issues directly related to programming languages and programming language research, as well as forays to bordering issues such as programmability and language in general. This is a community, but not for specific programming problems in some language; unfounded generalisations about programming languages are usually frowned on.
The Communications site publishes two types of blogs: the on-site BLOG@CACM expert blogs, as well as a blogroll of syndicated blogs, essentially covering the spectrum of computer science, research, education and technology. Something for everyone!
The latest news on Google research, focusing on some of their key areas of interest: e-commerce, algorithms, HCI, information retrieval, machine learning, data mining, NLP, multimedia, computer vision, statistics, security and privacy.
Clearly this set is incomplete — please post your computer science research blog recommendations in the comments below; I’d be particularly interested in blogs covering compilers, concurrency and computer architectures.
11:15, restate my assumptions:
1. Mathematics is the language of nature.
2. Everything around us can be represented and understood through numbers.
3. If you graph these numbers, patterns emerge. Therefore: there are patterns everywhere in nature.
Maximillian Cohen, π
To those who do not know mathematics, it is difficult to get across a real feeling as to the beauty, the deepest beauty, of nature…If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in.
The Character of Physical Law (1965)
Richard Feynman
This term I have been teaching a new first year undergraduate module, Mathematics for Computing, in which I have been trying to impart a little bit of love for mathematics. While we have covered a number of underpinning topics relevant to computer science, such as propositional logic, set theory and number theory, I have also tried to show that there are a multitude of clever little tricks that can make arithmetic and performing seemingly complex calculations that little bit easier. And in doing so, I was reminded of the mathematical prowess of Richard Feynman as well as Hans Bethe, Nobel laureate in physics and Feynman’s mentor during the Manhattan Project. Bethe is one of the few scientists who can make the claim of publishing a major paper in his field every decade of his career, which spanned nearly 70 years; Freeman Dyson called Bethe the “supreme problem solver of the 20th century.”
An example of Bethe’s mastery of mental arithmetic was the squares-near-fifty trick (taken from Genius: The Life and Science of Richard Feynman by James Gleick):
When Bethe and Feynman went up against each other in games of calculating, they competed with special pleasure. Onlookers were often surprised, and not because the upstart Feynman bested his famous elder. On the contrary, more often the slow-speaking Bethe tended to outcompute Feynman. Early in the project they were working together on a formula that required the square of 48. Feynman reached across his desk for the Marchant mechanical calculator.
Bethe said, “It’s twenty-three hundred.”
Feynman started to punch the keys anyway. “You want to know exactly?” Bethe said. “It’s twenty-three hundred and four. Don’t you know how to take squares of numbers near fifty?” He explained the trick. Fifty squared is 2,500 (no thinking needed). For numbers a few more or less than 50, the approximate square is that many hundreds more or less than 2,500. Because 48 is 2 less than 50, 48 squared is 200 less than 2,500 — thus 2,300. To make a final tiny correction to the precise answer, just take that difference again — 2 — and square it. Thus 2,304.
Bethe’s trick is based on the following identity (where x may be positive or negative):

(50 + x)² = 2500 + 100x + x²
For a more intuitive geometric proof of this formula, imagine a square patch of land that measures 50 + x on each side:
Its area is (50 + x)², which is the value we are looking for. As you can see in the diagram above, this area consists of a 50 by 50 square (which contributes the 2,500 to the formula), two rectangles of dimensions 50 by x (each contributing an area of 50x, for a combined total of 100x), and finally the small x by x square, which gives an area of x², the final term in Bethe’s formula.
While Feynman had internalised an apparatus for handling far more difficult calculations (for which he became famous at Los Alamos, such as summing the terms of infinite series or inventing a new and general method for solving third-order differential equations), Bethe impressed him with a mastery of mental arithmetic that showed he had built up a huge repertoire of these easy tricks, enough to cover the whole landscape of small numbers. Bethe knew instinctively that the difference between two successive squares is always an odd number (the sum of the numbers being squared); that fact, and the fact that 50 is half of 100, gave rise to the squares-near-fifty trick.
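As a quick sanity check, the trick can be written out in a few lines of Python (the function name is mine, not Bethe’s); note that because the identity is exact, the “correction” step always yields the exact square:

```python
def square_near_fifty(n):
    """Square a number near 50 using Bethe's trick.

    (50 + x)^2 = 2500 + 100x + x^2: the square is x hundreds away
    from 2500, plus a small correction of x squared.
    """
    x = n - 50
    return 2500 + 100 * x + x * x

# Bethe's example: 48 is 2 below 50, so 2500 - 200 + 4 = 2304
print(square_near_fifty(48))  # 2304
print(square_near_fifty(53))  # 2809
```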
Unfortunately, the skill of mental arithmetic that did so much to establish Bethe’s (as well as Feynman’s) legend was doomed to a quick obsolescence in the age of machine computation — it is very much a dead skill today.