Looking up (and back)
"Of all discoveries and opinions, none may have exerted a greater effect on the human spirit than the doctrine of Copernicus. The world had scarcely become known as round and complete in itself when it was asked to waive the tremendous privilege of being the centre of the universe. Never, perhaps, was a greater demand made on mankind – for by this admission, so many things vanished in mist and smoke! What became of Eden, our world of innocence, piety and poetry; the testimony of the senses, the conviction of a poetic-religious faith? No wonder his contemporaries did not wish to let all this go and offered every possible resistance to a doctrine which in its converts authorised and demanded a freedom of view and greatness of thought so far unknown, indeed not even dreamed of"[1].
Johann Wolfgang von Goethe (1749-1832)
The origins of modern science are to be found in a period of a little over a hundred years between 11 November 1572, when the Danish astronomer Tycho Brahe espied a new star or nova in the heavens, and 1704, when Isaac Newton published his Opticks, which demonstrated that white light is made up of all the colours of the rainbow, that it can be split into its component colours with a prism, and that colour inheres in light, not in objects[2].
This is the viewpoint of David Wootton in his epic tome The Invention of Science – A New History of the Scientific Revolution[3]. For Wootton, the real revolution in science in general and astronomy in particular began with Brahe’s nova, which suddenly appeared as the brightest object in the heavens (apart from the sun and the moon), brighter even than Venus, and then slowly faded away over 16 months. This flew in the face of the standard Aristotelian philosophy that there could be no change in the heavens.
As an event, this was more significant than the publication in 1543 of Copernicus’ De Revolutionibus (‘On the Revolutions’), which propounded the revolutionary idea that the sun, not the Earth, was the centre of the universe, and that the Earth, like all the other planets, revolved around it. Very few sixteenth-century astronomers actually accepted Copernicus’ claim that the Earth revolves around the sun instead of standing still at the centre of the universe, with the result that the so-called Copernican Revolution was in fact delayed until the seventeenth century[4].
Delayed though it may have been, no one could doubt its significance when it in fact did arrive. The centrepiece of Aristotle’s and Ptolemy’s world view was that the sun and the moon performed circular orbits around a stationary earth[5]. The Aristotelian universe was divided into two distinct regions. The sub-lunar region was the inner region, extending from the central earth to just inside the moon’s orbit. The super-lunar region was the remainder of the finite universe, extending from the moon’s orbit to the sphere of the stars, which marked the outer boundary of the universe. Nothing existed beyond this outer sphere, not even space, so unfilled space was an impossibility in the Aristotelian system, and all celestial objects in the super-lunar region were made of an incorruptible element called Ether. Ether possessed a natural propensity to move around the centre of the universe in perfect circles.
This basic idea was modified and extended in Ptolemy’s astronomy. Since observations of planetary positions at various times could not be reconciled with earth-centred orbits, Ptolemy introduced further circles, called epicycles, into the system. Planets moved in small circles, or epicycles, the centres of which in turn moved in larger circles around the earth. These orbits could be further refined by adding epicycles to epicycles and so on, in such a way that the resulting system was compatible with the observations of planetary positions and capable of predicting future planetary positions. This “truth” of a stationary earth around which the sun and the moon and the stars revolved was accepted by Western Christendom for well over a thousand years because it still left room in the universe behind the fixed stars to accommodate the Christian concepts of heaven and hell[6].
As mentioned, Copernicus’ dismantling of this well entrenched view of the cosmic status quo was actually accepted by very few at the time, because it seemed an affront to common sense. If the earth moved, you would feel the wind in your face. If you dropped an object from a tall tower, it would fall towards the west.[7] Opposition to Copernicus’ revolutionary idea came not only from the Church. By 1583, it is said, only three competent astronomers in the whole of Europe accepted what Copernicus had to say.[8]
However, in 1608 the telescope was invented in the Netherlands by Hans Lippershey, and the following year Galileo Galilei, an Italian astronomer, physicist, engineer, philosopher and mathematician, worked out how to make a 30-power instrument of his own and pointed it at the heavens. He saw that the moon cast shadows, which he correctly interpreted as evidence of mountains. In January 1610 he saw what he first thought were fixed stars near Jupiter, but after maintaining his observations over several nights he realised that he was in fact seeing moons orbiting Jupiter; later that year he observed that Venus showed phases like the moon’s. This was significant because until that point in time virtually everyone thought that all heavenly bodies revolved around the earth.
It might be thought that Copernicus had already established the contrary, but even in Galileo’s time, no more than a dozen serious astronomers had given up on belief in an unmoving earth, and for a long time afterwards, astronomers still had compelling scientific reasons to doubt Copernicus[9].
The principal reason was the issue of star size. When we look at a star in the sky, it appears to have a small fixed width. Knowing this width and the distance to the star, simple geometry reveals how big the star is. In geocentric models of the universe, the stars lie just beyond the planets, implying that star sizes are comparable to that of the sun. But Copernicus’ heliocentric theory demands that the stars be extremely far away. This in turn implied that they should be extremely large, in fact hundreds of times bigger than the sun. Copernicans could not explain this anomaly without appeals to divine intervention. In reality, the stars are far away, but their apparent width is an illusion, an artefact of the way light behaves when it enters a pupil or telescope, behaviour that scientists would not understand for another 200 years.
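The arithmetic behind this objection is easy to reproduce. The sketch below uses modern units and illustrative figures chosen for the sake of the example – a 2-arcminute apparent stellar width and a 1-arcminute parallax detection limit, roughly in the spirit of the naked-eye instruments of Tycho Brahe’s day – rather than any historical measurement:

```python
import math

# Illustrative assumptions (not historical measurements): a naked-eye
# star seems about 2 arcminutes wide, and an annual parallax smaller
# than about 1 arcminute would have gone undetected.
apparent_width_rad = math.radians(2 / 60)   # 2 arcmin in radians
parallax_limit_rad = math.radians(1 / 60)   # 1 arcmin in radians

AU_KM = 1.496e8           # astronomical unit in km
SUN_DIAMETER_KM = 1.39e6  # sun's diameter in km

# No parallax was observed, so under Copernicus the stars must lie at
# least this far away (small angles: distance = baseline / angle).
min_distance_km = AU_KM / parallax_limit_rad

# Simple geometry: physical size = distance x angular size.
implied_diameter_km = min_distance_km * apparent_width_rad

print(f"Minimum distance: {min_distance_km / AU_KM:.0f} AU")
print(f"Implied diameter: {implied_diameter_km / SUN_DIAMETER_KM:.0f} suns")
```

With these assumed numbers the stars come out at over two hundred times the sun’s diameter – exactly the kind of monstrous implication that troubled Copernicus’ critics.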
By 1674, a growing majority of scientists accepted Copernicanism, although they still did so in the face of unresolved scientific difficulties. No one recorded an annual stellar parallax until Friedrich Bessel did so in 1838. Around that time, George Airy produced the first full theoretical explanation of why stars appear wider than they actually are, and Ferdinand Reich first successfully detected the deflection of falling bodies induced by Earth’s rotation. Isaac Newton’s physics completed the picture[10].
So back in Galileo’s day, those opposed to Copernican theory had some quite respectable, coherent, observationally based science on their side. They were eventually proved wrong, but that did not make them bad scientists[11]. However, within a few years of Galileo’s telescopic discoveries, Galileo metamorphosed into “the Columbus of astronomy” and no one disputed that the moon had mountains, Jupiter had moons, Venus had phases and the sun had spots[12]. Galileo also conducted experiments and published on the parabolic path of projectiles (1592) and the laws of accelerated motion governing falling bodies (1604)[13]. In 1633, after his publications were perceived as offending the Pope, he was tried by the Inquisition, found "vehemently suspect of heresy", forced to recant, and spent the rest of his life under house arrest.
Meanwhile, a contemporary of Galileo’s, Johannes Kepler (1571-1630), published his three laws of planetary motion, which came to form the foundation of modern astronomy. The first was that the orbits of the planets around the sun were not circular, as everyone had thought, but elliptical. His second law was his so-called Law of Areas: that a planet moves round the sun so that the line from the sun to the planet 'sweeps out' equal areas in equal periods of time. His third law was his so-called 'Law of Harmonies': that the squares of the periods (or years) of any two planets are proportional to the cubes of their average distances from the sun[13A]. However, Kepler was unable to explain why the planets moved as they did.
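Kepler’s third law is easy to verify against modern orbital data. In units of Earth years and astronomical units, the ratio T²/a³ should come out as the same constant (here 1) for every planet – a minimal check:

```python
# Kepler's third law: T^2 is proportional to a^3. With each planet's
# period in Earth years and mean distance in astronomical units, the
# ratio T^2 / a^3 should be the same (= 1 in these units) for all.
planets = {
    "Mercury": (0.241, 0.387),
    "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.862, 5.203),
    "Saturn":  (29.447, 9.537),
}

for name, (period_yr, distance_au) in planets.items():
    ratio = period_yr**2 / distance_au**3
    print(f"{name:8s} T^2/a^3 = {ratio:.3f}")
```

All six ratios agree to within a fraction of a percent, which is exactly the regularity Kepler spotted in Tycho Brahe’s observations without being able to say why it held.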
It was Newton who shed light on this by applying his own law of centrifugal force to Kepler’s third law of planetary motion (the law of harmonies). Whereas Galileo had shown that objects on earth were “pulled” towards its centre, Newton was able to prove that this same force, gravity, governed the orbits of the planets. Book 1 of his Principia Mathematica (1687) described his laws of motion, the first of which is that every body perseveres in its state of resting or uniformly moving in a straight line, unless it is impelled to change that state by forces impressed upon it. In Book 3, by applying the laws of motion to the physical world, he propounded his law of universal gravitation: that all matter is mutually attracted with a force directly proportional to the product of the masses and inversely proportional to the square of the distance between them. “Newton, by a single set of laws, had united the earth with all that could be seen in the skies”[14].
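Newton’s unification can be illustrated numerically: the inverse-square attraction of the Earth supplies almost exactly the centripetal acceleration that the moon’s orbit requires. The figures below are modern values, used purely for illustration:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24       # mass of the Earth, kg
R_ORBIT = 3.844e8        # mean earth-moon distance, m
T_ORBIT = 27.32 * 86400  # sidereal month, s

# Acceleration demanded by circular motion: a = v^2 / r.
v = 2 * math.pi * R_ORBIT / T_ORBIT
a_centripetal = v**2 / R_ORBIT

# Acceleration supplied by an inverse-square force: a = G M / r^2.
a_gravity = G * M_EARTH / R_ORBIT**2

print(f"centripetal: {a_centripetal:.5f} m/s^2")
print(f"gravity:     {a_gravity:.5f} m/s^2")
```

The two figures agree to within about one percent: the same pull that drops an apple holds the moon in its orbit.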
The Scientific Revolution endows science with a new methodology[15]
The scientific revolution brought with it a change in the methodology of science. Traditionally philosophy had concerned itself with scientia (true knowledge). Aristotelian philosophers searched backwards assuming that Aristotle had known everything that needed to be known. They looked for certainties and were preoccupied with the question of whether science was ‘true’.
Columbus’ discovery of America in 1492 was a seminal event in the sense that it was a precondition for the invention of the new science, because before that, there was no clear-cut and well-established idea of the concept of ‘discovery’, notes Wootton[16]. The discovery of America was crucial in legitimising innovation, because no one disputed that it was an unprecedented event and it could not be ignored. Before Columbus, the primary objective of Renaissance intellectuals was to recover the lost culture of the past, not to establish new knowledge of their own. The arguments of the ancients needed to be interpreted, not challenged. That world changed forever post-Columbus, and discovery and innovation became the signposts of the new science[17].
Since the Scientific Revolution, modern science, conscious of its own frailties, has concerned itself with concepts that enable the successful prediction of the outcome of experimental procedures and the identification of processes occurring in the natural world. For many years, mathematicians practising astronomy had been content with mathematical models: hypotheses, theories, which might or might not correspond to reality but which fitted more or less exactly with the phenomena they were observing. The new theory that Boyle announced relating to the pressure of gases in 1662 (that the pressure and volume of a gas have an inverse relationship at a constant temperature) and Newton’s new theory of light (1672) were not explanations – they did not answer the question why. They were concepts that enabled one successfully to predict the outcome of experimental procedures and to identify processes in the natural world.
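Boyle’s relationship is simple enough to state as a two-line calculation: at constant temperature, pressure times volume stays fixed, so halving the volume doubles the pressure. The numbers below are illustrative only:

```python
# Boyle's law as stated: at constant temperature, P x V is constant.
p1, v1 = 100.0, 2.0      # starting pressure (kPa) and volume (litres)
k = p1 * v1              # the constant P*V for this sample of gas

v2 = 1.0                 # compress to half the volume...
p2 = k / v2              # ...and the predicted pressure doubles
print(p2)                # prints 200.0
```

As the essay notes, this tells us nothing about *why* gases behave this way; it simply lets anyone predict the outcome of the next compression experiment, which is precisely what the new science prized.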
The early practitioners of the new science sought an escape from the old notion of true knowledge (scientia) and its replacement with the concept of viable theory. Where the old philosophy laid claim to indisputable certainties, the new one modelled itself on astronomy and the law, disciplines in which facts and evidence had long been marshalled in order to generate reliable, even incontrovertible, hypotheses and theories.
Modern science laid its foundation on a set of intellectual tools encapsulating new ways of thinking, the key foundation stones of which were ‘facts’, ‘observations’ and ‘experiments’, giving rise to concepts or conceptual schemes known as ‘theories’, leading in turn to new rounds of experiments and observations. Science became an interactive process between ‘theory’ on the one hand and ‘observations’ (experience) on the other. The concepts involved are contingent, fallible, and imperfect, yet they make possible reliable and robust knowledge[18]. What mattered for a Galileo, or a Pascal, or a Boyle or a Newton was being able to make successful predictions where such predictions had previously been impossible. From the 1640s, with the triumph of experiment, the sort of evidence that had been good enough for lawyers for centuries – evidence of clues, of facts, of circumstantial evidence and probability – began to be good enough for mathematicians such as Pascal when doing physics. A posteriori reasoning from experience supplanted a priori reasoning independent of particular experiences. The very reason why the new science made progress and the old philosophy did not is that it was conscious of being imperfect and incomplete.
The new science in a nutshell[19]
The net result of all this is that modern science works by a process involving the observation of phenomena (‘facts’), the advancement of a theory or hypothesis which appears to fit the observed phenomena, the making of predictions concerning that hypothesis, and the testing of those predictions on a wider plane by experiment and modelling to see whether the hypothesis involved has wider ramifications. If the predictions are not borne out, the hypothesis is promptly discarded. When a hypothesis has stood for sufficient length of time without being disproved it becomes part of orthodox scientific thought, but ever remains subject to review as new facts, evidence and observations come to light. Considered in this way, all scientific theories are tentative and provisional.
As David Wootton says in his concluding words: “It is now difficult to think our way back to a world where people did not speak of facts, hypotheses and theories, where knowledge was not grounded in evidence, where nature did not have laws. The Scientific Revolution has become almost invisible simply because it has been so astonishingly successful”[20].
[1] Citation from Stephen Hawking, On the Shoulders of Giants – The Great Works of Physics and Astronomy, Penguin, London, 2002, p 6.
[2] Newton formulated his views on the spectrum of light in 1672, soon after he was appointed Lucasian Professor of Mathematics at the University of Cambridge in 1669, but he refused to publish them until after the death of Robert Hooke, Curator of Experiments for the Royal Society. Hooke held a contrary view – that light travelled in waves – and asked Newton to justify his findings further. Newton thereafter resolved to humiliate Hooke in every way and only published Opticks after Hooke’s death in 1703: Hawking, op cit, 728.
[3] Harper Collins, 2015, p 1. The material below, especially the section on the new methodology of science, owes much to Wootton's analysis, except where otherwise indicated.
[4] Ibid, 55.
[5] Aristotle (384-322 BC), Ptolemy (87-150 AD). This is a précis of the Aristotelian and Ptolemaic world view as described in A.F. Chalmers, What is this thing called Science?, 3rd ed, Hackett, Cambridge, 1999.
[6] Stephen Hawking, On the Shoulders of Giants – The Great Works of Physics and Astronomy, Penguin, London, 2002, 2.
[7] Wootton, op cit, 145.
[8] In Germany, Christoph Rothmann, in Italy, Giovanni Benedetti and in England Thomas Digges: Wootton, p 145.
[9] Dennis Danielson and Christopher M Graney, “The case against Copernicus”, Scientific American, January 2014, 62-67, esp 66-67.
[10] Ibid.
[11] Ibid.
[12] Wootton, op cit, 197. The significance of Columbus’ discovery of America for the new science is referred to below.
[13] Ibid, 199.
[13A] Carlos I. Calle, Einstein for Dummies, Kindle Edition, Loc 1129.
[14] Stephen Hawking, On the Shoulders of Giants, op cit, 731. Newton’s view of gravity presupposed an instantaneous attraction between objects. That view stood until Einstein debunked it in his Theory of General Relativity over 200 years later.
[15] The following is drawn from Wootton’s The Invention of Science, op cit, Chapter 10, esp pp 394, 396, 397 and 415, unless otherwise indicated.
[16] Op cit, 57-60.
[17] The noun ‘discovery’ first appears in its new sense in English in 1554, the verb ‘to discover’ in 1553, while the phrase ‘voyage of discovery’ was being used by 1554: ibid, 82.
[18] Ibid, p 565.
[19] The formulation below is based upon the succinct résumé of Michael O'Callaghan, BSc (Hons), MSc, PhD (Maths), in SAM (Sydney University’s Alumni Magazine), July 2010, 3.
[20] Wootton, op cit, p 571.