Important science and technology findings in 2018

  1. The Gompertz-Makeham law predicts statistically the mortality rate of human beings. The key takeaway is that it is an exponential function: every few years, the mortality rate of a human being doubles. This is not unique to human beings: most mammals and many other animals have an exponentially rising mortality rate over time. It does not affect all animals, however. Lobsters do not appear to age like we do. Many trees age in reverse, meaning that their mortality rate diminishes over time. In 2018, a scientist who had studied naked mole rats for decades published an analysis of over 3,000 of them and found that their mortality rate remains constant throughout their lives. We do not know why naked mole rats age differently from most other mammals.
  2. Type 1 diabetes is when your pancreas is unable to supply insulin to your cells. Though it can be treated with expensive and inconvenient insulin shots, there is no cure. In 2018, we found that a heart-disease drug can partially reverse type 1 diabetes. This could make some diabetics less dependent on insulin.
  3. Though artificial intelligence has been making a lot of progress on tasks such as image classification and game playing, we are still a long way from being able to intelligently animate human-like body parts such as hands. Simple tasks like folding laundry or turning a door knob are still a massive challenge. In 2018, researchers from OpenAI trained a human-like robot hand to manipulate objects like we would.
  4. Older people tend to have a less efficient immune system. In 2018, we learned that we can at least partially reverse age-related immune-system decline using drugs, substantially reducing infections in older people.
  5. The human genome project set out in 1990 to map the human chromosomes. We thought at the time that human beings would have 100,000 genes; they have only about 25,000. The map was completed in 2003. Yet applications of the human genome project have been scarce. In 2018, the first gene-silencing drug was approved in the USA.
  6. Electrocardiograms (ECG) have been used since the 19th century to monitor human hearts. The first commercially available ECG machines were produced at the beginning of the 20th century, but they have remained specialized devices mostly used within hospitals. They are also somewhat invasive. In 2018, Apple released a watch with government-approved ECG (heart monitoring) capabilities.
  7. CRISPR/Cas9 is a technique developed in 2012 to edit the genes of living organisms. It is unclear whether it is safe to use on human beings… Using this technique, a Chinese researcher helped produce the first genetically modified babies, who may be immune to HIV.

Science and Technology links (December 29th, 2018)

  1. Low-dose radiation from A-bombs elongated lifespan and reduced cancer mortality relative to un-irradiated individuals (Sutou, 2018):

    The US National Academy of Sciences (NAS) presented the linear no-threshold hypothesis (LNT) in 1956, which indicates that the lowest doses of ionizing radiation are hazardous in proportion to the dose. This spurious hypothesis was not based on solid data. (…) A-bomb survivors (…) showed longer than average lifespans. Average solid cancer death ratios of (…) A-bomb survivors (…) were lower than the average for Japanese people (…), essentially invalidating the LNT model. Nevertheless, LNT has served as the basis of radiation regulation policy. If it were not for LNT, tremendous human, social, and economic losses would not have occurred in the aftermath of the Fukushima Daiichi nuclear plant accident. For many reasons, LNT must be revised or abolished, with changes based not on policy but on science.

    This is important work. I am surprised at how few people know about hormesis. Many people assume that if you avoid stress, toxins and challenges, you will maximize your health and longevity. That is just flat out wrong.

  2. In climate talks, we use the year 1850 as a reference: the implicit goal is to maintain Earth’s global temperature close to the global temperature of 1850 (say, within 1.5 degrees). To my knowledge, nothing makes the year 1850 special. In fact, in the absence of both ancient and recent carbon emissions from agriculture and industrialization, current global average temperatures would likely have been about 1.3 degrees lower than they were around 1850 (Vavrus et al., Nature 2018). We were headed toward another glaciation and, instead, due to human beings, we are headed toward warmer and warmer temperatures. Neither direction, left unchecked, is desirable. As argued by Deutsch in The Beginning of Infinity, we have no choice but to accept that there is no such thing as “sustainability” (which assumes an ideal steady state) and that we must learn to engineer our climate.
  3. Human beings do not originate from a single region in Africa: the story of our origin is complicated, involving a mix of different populations and cultures.
  4. McGuff and Little (2009) write:

    there is no additional physiological advantage afforded to one’s body, including endurance or cardio benefits, by training that lasts more than six to nine minutes a week.

  5. The videogame Fortnite led to 3 billion dollars in profit for its creators. It has 200 million players.
  6. In older mammals, the skin loses fat. Zhang et al. restored the ability of skin in older mice to store fat, thus making the skin more resistant to some infections.
  7. Students who make friends and study with them tend to do better. This is painfully obvious to anyone who has given serious thought to how schooling works.

Science and Technology links (December 22nd 2018)

  1. For equity reasons, many people advocate double-blind peer review, meaning that the author does not know who the reviewer is, nor does the reviewer know who the author is. It is believed (despite little hard positive evidence and some contrary evidence) that this is beneficial to female authors and minorities. Cox and Montgomerie find that:

    These analyses suggest that double-blind review does not currently increase the incidence of female authorship in the journals studied here. We conclude, at least for these journals, that double-blind review does not benefit female authors and may, in the long run, be detrimental.

    Why do they say it may be detrimental?

    Firstly, making everything anonymous is hard work, and resources are finite: we are already struggling to find time and people to review manuscripts… adding to the burden has a cost. Thus we must ensure that there are comparable benefits. You shouldn’t think for a minute that making science harder and more expensive is necessarily going to benefit women and minorities. Raising the costs usually works against inclusion.

    Secondly, they observe empirically that publications by women are less likely to appear in double-blind-review journals than in conventional journals. Why is that? If double-blind reviews were obviously beneficial to women, women would flock to the double-blind journals… but they do not. Either women are misguided or else, more likely, double-blind reviews do not favor women.

    And, finally, there may be substantial benefits to the authors and the community when reviewers know who the authors are. It is simply a fact that the identity of the authors is an important factor when assessing a piece of scientific work. Ultimately, we tend to reward sustained high-quality work with more credibility. You want authors to have skin in the game: if their work is bad, then they should pay a price (more scrutiny of their work in the future) and when their work is consistently good, they should be rewarded (given more implicit trust).

  2. The price of lithium-ion batteries fell by 73% between 2010 and 2016. (Source: Bloomberg) However, this does not mean that the batteries are getting better at a similar rate: even though the price is falling, their physical quality remains similar.
  3. Scientists have created viable human hair follicles from cultured human cells. (Source: Nature)
  4. There is interest in NAD supplements for anti-aging purposes. Xie et al. (2018) show cognitive benefits due to NAD supplements in old mice.
  5. Bernstein et al. (2018) point out that continuous and sustained social interactions reduce individual exploration. In other words, to be highly original, you do need to close your door and stop answering emails and phone calls for a time.

Fast Bounded Random Numbers on GPUs

We often use random numbers in software in applications such as simulations or machine learning. Fast random number generators tend to produce integers in [0, 2^32) or [0, 2^64). Yet many applications require integers in a given interval, say the interval {0,1,2,3,4,5,6,7}. Thus standard libraries frequently provide a way to request an integer within a range.

How does it work underneath? Typically, a 64-bit or 32-bit integer is produced, and then it is converted into an integer within the desired range. Unfortunately, as pointed out in detail by Melissa O’Neill, doing so without introducing undue biases is slow. That is, it can take more time to convert the integer to the range than to produce the random integer in the first place!
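For concreteness, the classical unbiased conversion can be sketched in a few lines of Python (my own illustration, emulating a 64-bit generator with random.getrandbits):

```python
import random

def classic_bounded(s):
    # Classical unbiased conversion: values falling in the "overhang" above
    # the largest multiple of s are rejected, so that x % s is exactly
    # uniform. It costs at least one division (the %) per call.
    limit = (1 << 64) - ((1 << 64) % s)  # largest multiple of s that fits
    x = random.getrandbits(64)
    while x >= limit:  # rare unless s is huge
        x = random.getrandbits(64)
    return x % s
```

The division is what makes this approach expensive relative to the cost of generating the random word itself.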

In Fast Random Integer Generation in an Interval, we show how to drastically accelerate this computation compared to the standard libraries. The main trick is to avoid, as much as possible, division instructions (since they are slow). Bernardt Duvenhage has a great talk on the application of this technique in Python. There is an illustrative benchmark on GitHub.

In the paper, there is a precautionary note about the applicability of this technique to GPUs. Indeed, there are substantial differences between general-purpose 64-bit processors and common (32-bit) GPUs. A reader, Norbert Juffa, reached out to me to point out that the note might be unwarranted. Juffa wrote a benchmark using the NVIDIA API (CUDA) to support his claim.

The fast function that avoids divisions as much as possible can be expressed using a few lines of C.

// returns a random value in [0,s)
uint64_t nearlydivisionless(uint64_t s) {
    uint64_t x = random64();
    // compute the 128-bit product x * s; h is (x * s) >> 64
    uint64_t h = __umul64hi(x, s);
    uint64_t l = x * s; // low 64 bits of the product
    if (l < s) { // rare: we may have to reject to avoid bias
        uint64_t t = -s % s; // 2^64 mod s
        while (l < t) {
            x = random64();
            h = __umul64hi(x, s);
            l = x * s;
        }
    }
    return h;
}
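For readers without access to CUDA, here is my own rough translation of the same technique into Python, using Python's arbitrary-precision integers to stand in for the 128-bit product (the speed benefit, of course, only materializes in native code):

```python
import random

MASK64 = (1 << 64) - 1

def nearly_divisionless(s):
    # Multiply a 64-bit random word by s: the high 64 bits of the 128-bit
    # product are an almost-uniform value in [0, s). A division is needed
    # only in the rare case where the low bits signal a possible bias.
    x = random.getrandbits(64)
    m = x * s
    l = m & MASK64  # low 64 bits of the product
    if l < s:
        t = (1 << 64) % s  # threshold below which we must reject
        while l < t:
            x = random.getrandbits(64)
            m = x * s
            l = m & MASK64
    return m >> 64
```

Note how the division (the `%`) is only computed when `l < s`, which happens with probability s/2^64.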

Juffa generates 10,000,000 integers in the interval [0,500,000] from Marsaglia’s KISS64 random number generator. On a Quadro P2000 Nvidia card, he shows that the approach that minimizes the use of divisions is much faster.

OpenBSD-like 5 ms
Java-like 2.9 ms
Our approach 1.4 ms

I make Juffa’s code available.

Sorting strings properly is stupidly hard

Programming languages make it hard to sort arrays properly. Look at how JavaScript sorts arrays of integers:

> v = [1,3,2,10]
[ 1, 3, 2, 10 ]
> v.sort()
[ 1, 10, 2, 3 ]

You need a magical incantation to get the right result:

> v.sort((a,b)=>a-b)
[ 1, 2, 3, 10 ]

Though this bad default can create bugs, it is probably not the source of too many frustrations. However, sorting strings alphabetically is a real problem. Let us see how various languages do it.

JavaScript:

> var v = ["e", "a", "é","f"]
> v.sort()
[ 'a', 'e', 'f', 'é' ]
> v = ["a","b","A","B"]
> v.sort()
[ 'A', 'B', 'a', 'b' ]

Python:

>>> x=["e","a","é","f"]
>>> x.sort()
>>> x
['a', 'e', 'f', 'é']
>>> x=["a","A","b","B"]
>>> x.sort()
>>> x
['A', 'B', 'a', 'b']

Swift:

  1> var x = ["e","a","é","f"] 
  2> x.sorted()
$R0: [String] = 4 values {
  [0] = "a"
  [1] = "e"
  [2] = "f"
  [3] = "é"
}
  1> var x = ["a","b","A","B"]
  2> x.sorted() 
$R1: [String] = 4 values {
  [0] = "A"
  [1] = "B"
  [2] = "a"
  [3] = "b"
}

As far as I can tell, by default, these languages apply crude code-point sorting. Human beings understand that the characters e, é, E, É, è, ê, and so forth, should be considered the same letter (e) with accents. There are exceptions to this rule, but the default of sorting accented characters after the letter ‘z’ is just not reasonable. The way case is handled is patently stupid. You might prefer A to come before a, or vice versa, but no human being would ever sort the letters as A,B,a,b or a,b,A,B.

There are standards for sorting strings, such as the Unicode Collation Algorithm.

To get sensible behavior, programming languages force you to use complicated code. In JavaScript, it is burdensome but easy enough…

> v.sort(Intl.Collator().compare)
[ 'a', 'e', 'é', 'f' ]

However, I am not sure what the equivalent is in Python and Swift. It does not jump out at me in the documentation of the respective standard libraries. I did not even look at other popular programming languages like Go, C++, and so forth.
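For what it is worth, Python’s standard locale module (locale.strxfrm as a sort key) is one possible route, though it depends on the locales installed on the machine. A locale-free approximation, which is my own sketch and not true Unicode collation, strips accents and case using the standard unicodedata module:

```python
import unicodedata

def collation_key(s):
    # Decompose accented characters, drop the combining marks, and fold
    # case, so that 'é' sorts next to 'e'; the original string breaks
    # ties to keep the order deterministic.
    base = ''.join(c for c in unicodedata.normalize('NFD', s)
                   if not unicodedata.combining(c))
    return (base.casefold(), s)

print(sorted(["e", "a", "é", "f"], key=collation_key))  # ['a', 'e', 'é', 'f']
print(sorted(["a", "b", "A", "B"], key=collation_key))  # ['A', 'a', 'B', 'b']
```

This is far from a full implementation of the Unicode Collation Algorithm, but it already avoids the worst defaults.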

It is unacceptably difficult to do the “right thing”. The net result is that many programmers do not sort strings properly. If you use a natural language with non-English characters, you see the effect in many applications. It looks bad.

Thankfully, most major software products get it right: Microsoft Office, Google Docs, Apple apps… all do the right thing. The problem crops up in small-budget applications. I have to use one in my daily life as an employee of a public university, and it annoys me.

We should do better.

Further reading: International Components for Unicode

Science and Technology links (December 15th 2018)

  1. Academic excellence is not a strong predictor of career excellence. There is weak correlation between grades and job performance. Grant reviews the evidence in detail in his New York Times piece. When recruiting research assistants, I look at grades as the last indicator. I find that imagination, ambition, initiative, curiosity and drive are far better predictors of someone who will do useful work with me. Of course, these characteristics are themselves correlated with high grades, but there is something to be said about a student who decides that a given course is a waste of time and works on a side project instead. Breakthroughs don’t happen in regularly scheduled classes; they happen in side projects. We want people who complete the work they were assigned, but we also need people who can reflect critically on what is genuinely important. I don’t have any need for a smart automaton: I already have many computers. I have applied the same principle with my two sons: I do not overly stress the importance of good grades, encouraging them instead to pursue their own interests and to go beyond their classes.
  2. Our hearts do not regenerate, so a viable strategy might be to transplant brand new hearts from pigs. This is much harder than it appears, but progress is being made. Researchers are now able to keep baboons alive for months with transplanted pig hearts. To achieve this result, the scientists had to use an immunosuppressant drug to prevent unwanted growth in the pig’s heart. With some luck, some of us could benefit from transplanted pig hearts in the near future.
  3. Cataract is the most common cause of blindness. It can be “cured” by removing your natural lens and replacing it with an artificial lens called an IOL (intraocular lens). This therapy was invented in the 1940s, but it took 40 years before it became widespread in wealthy countries, and it is still out of reach in many countries. Yet an intraocular lens costs less than $10 and the procedure is inexpensive (it costs less than $25 in total in some countries). Even today, in many rich countries, access to this therapy is restricted. Finally, in 2017, a government agency in the UK recommended that we stop rationing access to cataract surgery.
  4. Physically fit middle-age women are much less likely to develop dementia (e.g., Alzheimer’s).
  5. You might expect that research results published in more prestigious venues would also be more reliable. Brembs (2018) suggests it works the other way around:

    an accumulating body of evidence suggests the inverse: methodological quality and, consequently, reliability of published research works in several fields may be decreasing with increasing journal rank

    My own recommendation to colleagues and students has been that if peer-reviewed publications are warranted, then it is fine to target serious well-managed venues, irrespective of their “prestige”.

    It is hard enough to do solid research; if you also have to tune it so that it outcompetes other proposals in a competition for prestige, I fear that you may discourage good research practices. Scientists care too little about modesty; it is their downfall.

  6. Lomborg, a renowned economist, writes about climate change:

    Using the best individual and collectively peer-reviewed economic models, the total cost of Paris – through slower GDP growth from higher energy costs – will reach $1-2 trillion every year from 2030. (…) It’s so expensive because green energy isn’t ready to replace fossil fuels at scale. Nations are using expensive subsidies and other policies to force immature green technologies on consumers and businesses. We need to change course. The smart option, backed by economic science, is to adopt a technology-led policy. This means investing far more into green energy research and development. Rather than forcing the rollout of immature energy sources, we need to ensure that green energy can out-compete fossil fuels.

    I really like the term “technology-led policy”. If you want to change the world for the better, then making the good things cheap using technology and science is the golden path.

  7. About 60% of all scientists never lead a research project of their own, which indicates that they always play a supporting role. In fields like astronomy, ecology and robotics, half of all researchers leave the field every five years, a consequence of the fact that there are many more aspiring scientists than there are good jobs. Though this sounds bad, one must consider that the number of scientists doubles every 15 years. Thus, even though the job prospects for scientists look poor in relative terms, we have never had so many gainfully employed scientists.
  8. The state of Louisiana is adopting digital driver’s licenses. Meanwhile, in Montreal, I still can’t take the subway without constantly recharging a stupid card.
  9. Lack of copper might lead to heart disease. Copper is found in shiitake mushrooms, oysters, dark chocolate, sesame seeds, cashew nuts, raw kale, beans and avocados.
  10. The diabetes drug Metformin is under study as an anti-aging drug. It is believed to be very safe, yet Konopka et al. suggest that it may lower the benefits of exercise.
  11. Over time, our bodies accumulate a small fraction of “senescent cells”. It is believed that these dysfunctional cells contribute to the diseases of old age. For the last few years, researchers have been looking for senolytics, drugs that can kill senescent cells. It turns out that two antibiotics approved for medical use are potent senolytics.
  12. The first autonomous vehicle (the ancestor of the self-driving car) was built in 1961.
  13. Billions of dollars have been spent on clinical trials to try to cure Alzheimer’s, all in vain. Golde et al. propose that the problem might have to do with poor timing: we need to apply the therapy at the right time. Wadman suggests that Alzheimer’s might spread like an infection.
  14. China is introducing far reaching penalties for researchers who commit scientific fraud:

    Chinese leaders have been increasingly focused on scientific misconduct, following ongoing reports of researchers there using fraudulent data, falsifying CVs and faking peer reviews. In May, the government announced sweeping reforms to improve research integrity. One of those was the creation of a national database of misconduct cases. Inclusion on the list could disqualify researchers from future funding or research positions, and might affect their ability to get jobs outside academia. (Source: Nature)

    We need to recognize that the scientific enterprise is fundamentally on an honor-based system. It is trivial to cheat in science. You can work hard to collect data, or make it up as you go. Except for the most extreme cases, the penalty for cheating is small because there is almost always plausible deniability.

Science and Technology links (December 8th 2018)

  1. The energy density of lithium-ion batteries doubled between 1995 and 2005, but only increased by about 15% between 2005 and 2015. It is estimated that relatively little further gain in energy density is possible with lithium-ion batteries. However, our mobile devices typically consume far less power than they did only a few years ago while offering faster processing.
  2. In China, 78% of all research institutes focus on science and engineering, and only 12% focus on the humanities. A quarter of the top universities have a science and engineering focus.
  3. In the US, if I know your zip code, your gender and your birthdate, I can nearly uniquely identify you.
  4. In wealthy countries, happier people are more likely to have children.
  5. It is sometimes stated that beyond physical differences, the brains of men and women are identical. Rosenblatt (2016) disagrees: “Brains are indeed typically male or typically female.” Falk and Hermle (2018) further observe that the more that women have equal opportunities, the more they differ from men in their preferences. Zhang et al. (2018) have a related finding:

    On average, women show stronger preferences for mates with good earning capacity than men do, while men show stronger preferences for physically attractive mates than women do (…) we found little evidence that these sex differences were smaller in countries with greater gender equality.

  6. It seems that very large mammals co-existed with the dinosaurs.

Asking the right question is more important than getting the right answer

Schools train us to provide the right answers to predefined questions. Yet anyone with experience from the real world knows that, more often than not, the difficult part is to find the right question.

To make a remarkable contribution, you need to start by asking the right question. I will go further than this: the questions you are asking might define who you are.

What is a good question?

  • The great questions are tractable and fruitful. They lead you on a path of discovery. It is easy to ask how to cure cancer, but that’s not a good question because it does not help anyone do medical research.
  • Secret questions are the best: if you are the only one with this question in mind, then you may be holding a gold mine. Questions that everyone is having are proportionally worthless. (E.g., see Zero to One by Peter Thiel)

You may think that by studying hard, by learning all the answers, you will get better at asking great questions. I am not sure it works.

In fact, knowing too much can harm you. I would take a B student who has fresh questions as a Ph.D. student over a typical overeager A+ student who frets about getting everything right. It is a poorly held secret that some of the very best researchers and innovators were average students.

Do the following experiment. Pick a scholarly field, any field, then spend two weeks reading everything about it that you can. Next, write down 5 questions. I can almost guarantee you that these 5 questions will be already covered by sources you read. They will be “known” questions.

So to find good questions, you have to maintain some distance from the material. This should be uncontroversial if you consider that I define “good questions” to be “secret” or “highly original”.

Our minds tend to frame everything in terms of the patterns we have learned. Spend two years studying Marxism and every single problem will feel like a Marxist problem to you. It becomes difficult for you to come up with new questions outside of the frame.

Don’t get me wrong: smart people who know more tend to be more creative, everything else being equal… but there is a difference between being knowledgeable and having been locked into a frame of mind.

Yet here is how many researchers work. They survey the best papers from the last major conference or journal issue in their field. Importantly, they read what everyone else is reading and adopt the frame of mind of the best people. They make sure that they can repeat the most popular questions and answers. They look at the papers, look for holes or possibilities for improvement, and work from there. What this ensures is that there are a few leaders (people writing about genuinely novel ideas) followed by a long and nearly endless stream of “me too” papers that offer minor and inconsequential variations.

It is easier to judge these things in retrospect. In computer science, we had the XML craze at the turn of the century. Dozens of XML papers appeared each year at each of the top database conferences. I wrote about the untold story of the death of this idea. How could so many people get so excited at the same time by what was a dead-end?

I believe that people are happy to be handed questions and will often rush to provide highly sophisticated, thorough answers… whether or not the question is the right one.

My claim is that the people leading are not unnaturally smart, knowledgeable or creative. The people who answer other people’s questions are not dumb or unimaginative. The main difference is one of focus. You either focus on asking good questions or you focus on providing good answers.

The world would be better if we had more people asking better questions.

How might we ask better questions?

  • Pay attention to what is around you and violates your worldview. How did Fleming discover penicillin? He noticed that some mold that had invaded his dirty lab appeared to kill bacteria. He asked the right question at that time.
  • Be patient. Reportedly, Einstein once stated, “It’s not that I’m so smart, it’s just that I stay with problems longer.” The longer you work on a problem, the more likely you are to find interesting questions. (See Forthmann et al. 2018) The easiest way to miss the great questions is to dismiss the problems as uninteresting and move on too quickly.
  • Be physically active, go for a walk. Chaining yourself to a desk is likely counterproductive. I used to think that being an all-out intellectual was the best route, but I now believe that I was grossly mistaken. I personally take a walk outside almost every morning on weekdays. (See Oppezzo and Schwartz, 2014).
  • Don’t be too social. Social pressure toward conformity triggers intense instinctive reactions. It is simply hard to go against the herd. Thus you are better off not knowing too much about where the herd is. In concrete terms, spend entire days by yourself. Bernstein et al. (2018) recommend intermittent social interactions, as opposed to continuous interactions, to avoid a reduction in individual exploration.
  • Ask a lot of questions. If you want to become good at providing the right answers, train yourself to answer lots of questions. If you want to become good at asking questions, ask a lot of them.
  • Always question your own thoughts and work.

The scientific mind does not so much provide the right answers as ask the right questions. (attributed to Lévi-Strauss)

Science and Technology links (December 1st 2018)

  1. Autism affects about 1% of the population and four times as many males as females.
  2. In older, highly educated people, drinking 2 cups of coffee a day is associated with a 22% reduction in mortality. (This does not mean that drinking coffee makes you less likely to die, but it might.)
  3. Amazon, the e-commerce giant, is entering the chip-making business with its AWS Graviton processors, designed for cloud servers and based on an ARM architecture (like the processor in your phone). The initial reports are somewhat negative.
  4. Student assignments are graded automatically at the University of California, Berkeley, and the result is that

    ratings for teaching effectiveness have reached their highest level ever in recent semesters.

    (Credit: S. Downes)

  5. A Chinese professor helped produce two genetically edited babies. The intention is that they be immune to HIV. Harvard professor George Church is supportive of this bold move.

Quickly sampling from two arrays (C++ edition)

Suppose that you are given two arrays. Maybe you have a list of cities from the USA and a list of cities from Europe. You want to generate a new list which mixes the two lists, taking a sample from one array (say 50%), and a sample from the other array (say 50%). So if you have 50 cities from the USA and 50 cities from Europe, you want a new array that contains, in random order, 25 cities from the USA and 25 cities from Europe.

We need this kind of mixed sampling all the time in machine learning and data science. This summer, I was running simulations and the bulk of the time was spent mixing arrays. I needed to pick, say, 25% of all elements from one array and combine them with, say, 75% of all elements from another array.

There are many bad ways to solve this problem. But here is a reasonable one. First you pick a sample from the first array using reservoir sampling; then you pick a sample from the other array (again using reservoir sampling), and you finally apply a random shuffle to the result.

Reservoir sampling is an efficient way to sample N values from an array:

  for (i = 0; i < N; i++) {
    output[i] = source[i];
  }
  for (; i < length; i++) {
    r = random_bounded(i + 1); // value in [0,i]
    if (r < N) {
      output[r] = source[i];
    }
  }

Knuth shuffling is an efficient way to randomly shuffle the elements in an array:

  for (i = size; i > 1; i--) {
    r = random_bounded(i);// value in [0,i)
    swap(array[i-1], array[r]);
  }

With these two algorithms in place, I can sample from two source arrays using three function calls:

reservoirsampling(output, N1, source1, length1);
reservoirsampling(output + N1, N2, source2, length2);
shuffle(output, N1 + N2);
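Putting it together, here is a minimal self-contained sketch of the same pipeline in Python (the function names are mine; it mirrors the C++ structure above):

```python
import random

def reservoir_sample(source, n):
    # Algorithm R: keep the first n elements, then let each later element
    # i replace a random slot with probability n/(i+1), so every element
    # is equally likely to end up in the sample.
    output = list(source[:n])
    for i in range(n, len(source)):
        r = random.randrange(i + 1)  # value in [0, i]
        if r < n:
            output[r] = source[i]
    return output

def sample_two_arrays(source1, n1, source2, n2):
    # Sample from each array, then shuffle the concatenation.
    output = reservoir_sample(source1, n1) + reservoir_sample(source2, n2)
    random.shuffle(output)  # a Knuth shuffle under the hood
    return output

mixed = sample_two_arrays(range(1000), 250, range(1000, 2000), 750)
```

The resulting list contains exactly 250 elements from the first source and 750 from the second, in random order.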

So how efficient is it? Suppose that I have two arrays made of a million elements each and I want to sample half a million elements from each. On my iMac, I use a bit over 12 CPU cycles per input element (so about 24 million cycles in total). You probably can go even faster, but this approach has the benefit of being both simple and efficient.

My source code is available.

Science and Technology links (November 24th 2018)

  1. There is no association between birth order and personality traits:

    The results of both within- and between-family research designs revealed no consistent evidence of a link between birth order and the personality traits of extraversion, neuroticism, agreeableness, conscientiousness, and openness.

  2. Most modern cultures use numbers based on a decimal system (base 10). However, in many European languages (e.g., French and Danish) the number 20 is used as a base (eighty in French is four-twenty). We call such a system vigesimal. Vigesimal systems are common in Africa, and the Maya counted in base 20. I am told that the Gauls used a vigesimal system, but I could not find a credible supporting source (the Gauls also used Greek and Latin).
  3. Can you build an airplane with no moving parts? It turns out that you can. Researchers built a model airplane that propels air using an electric field. (credit: degski)
  4. Many of our everyday plastic items (like plastic bottles) contain a chemical called BPA. Our bodies can ingest it, but it is evacuated within hours. There has been an intense lobby to ban it in the spirit of the precautionary principle; it does affect mice (causing genetic mutations in offspring) but there is no proof that it harms human beings. Should you buy goods that are said to be BPA-free? They are made with alternative chemicals, so the question is whether these alternatives are safer. Horan et al. provide evidence that the alternatives can be harmful.
  5. It is expected that Japan will grow more dependent on coal in the coming decades; it currently generates a third of its electricity from coal. Germany produces 40% of its electricity from coal. Coal’s popularity can be explained in large part by how nuclear power is failing us.
  6. We subsidize electric cars because we assume that they are more environmentally friendly. They certainly lack an exhaust pipe, which is great for people around the car. Electric cars make it easy to “export” pollution: you can keep dense cities or even entire countries cleaner… But the batteries and their toxic chemicals must still end up somewhere. What is the larger picture? If you care only about climate change, then electric cars are slightly beneficial, as long as you are not producing your electricity from coal…

    When powered by average European electricity, electric vehicles are found to reduce global warming potential by 20% to 24% compared to gasoline and by 10% to 14% relative to diesel under the base case assumption of a 150,000 km vehicle lifetime. Electric vehicles powered by coal electricity are expected to cause an increase in global warming potential of 17% to 27% compared with diesel and gasoline ICEVs. Hawkins et al., 2012

    Yet if you care about other types of environmental impacts, electric cars may be less beneficial…

    the acidification, eutrophication, human toxicity, and particulate matter formation caused by electric vehicles are higher than those caused by internal combustion engines.
    (Bicer and Dincer, 2018)

  7. Professors get to spend only a tiny minority of their time on their own research. The more experienced the professor, the less time they have for their own research. PhD students have a lot more time for research. I believe that it is a form of institutional aging.
  8. Older people (age 75 and older) are often out of shape, but it gets worse during hospitalization. Staying in a hospital bed for a long time is bad for you. It seems that the ill effects of hospitalization can be reversed with an exercise routine.
  9. There is no scientific evidence that depression is due to a chemical imbalance:

    Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a chemical imbalance of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public (…) the evidence for the chemical imbalance model is at best slim (…) There is no known optimal level of neurotransmitters in the brain, so it is unclear what would constitute an imbalance. Nor is there evidence for an optimal ratio among different neurotransmitter levels. Moreover, although serotonin reuptake inhibitors, such as fluoxetine (Prozac) and sertraline (Zoloft), appear to alleviate the symptoms of severe depression, there is evidence that at least one serotonin reuptake enhancer, namely tianeptine (Stablon), is also efficacious for depression (Akiki, 2014). The fact that two efficacious classes of medications exert opposing effects on serotonin levels raises questions concerning a simplistic chemical imbalance model.

  10. Some Intel processors can run at up to 5 GHz. That is, they run through 5 cycles per nanosecond. We are not likely to see much faster speeds in the near or medium term.
  11. After rising from about US$300 in mid-2016 to over US$20,000 in early 2018, the price of Bitcoin is currently (as I write this) US$4,337, and falling.

Science and Technology links (November 18th 2018)

  1. It seems that reducing your carbohydrate (sugar) intake might be a good way to lose weight:

    lowering dietary carbohydrate increased energy expenditure during weight loss maintenance. This metabolic effect may improve the success of obesity treatment, especially among those with high insulin secretion.

    I should warn that this study refers to “lowering sugar”, not getting rid of it entirely.

  2. 85% of the more than $100bn a year spent on medical research globally is avoidably wasted.
  3. Collison and Nielsen write:

    science has slowed enormously per dollar or hour spent. That evidence demands a large-scale institutional response. It should be a major subject in public policy, and at grant agencies and universities

    While I accept their demonstration, it is not clear what (if anything in particular) is causing this lack of productivity.

    Collison and Nielsen fall short of offering a solution. Maybe we ought to reinvent discovery?

  4. A man is going to court so that he can be considered 20 years younger than what his birth date indicates.

Simple table size estimates and 128-bit numbers (Java Edition)

Suppose that you are given a table. You know the number of rows, as well as how many distinct values each column has. For example, you know that there are two genders (in this particular table). Maybe there are 73 distinct age values. For a concrete example, take the standard Adult data set which is made of 48842 rows.

How many distinct entries do you expect the table to have? That is, if you remove all duplicate rows, what is the number of rows left?

There is a standard formula for this problem: Cardenas’ formula. It uses a simplistic model in which there is no relationship between the distinct columns. In practice, it tends to overestimate the number of distinct rows. However, despite its simplicity, it often works really well.

Let p be the product of all column cardinalities, and let n be the number of rows; then the Cardenas estimate is p * (1 - (1 - 1/p)^n). Simple, right?

You can implement it in Java easily enough…

double cardenas64(long[] cards, int n) {
    double product = 1;
    for (int k = 0; k < cards.length; k++) {
        product *= cards[k];
    }
    return product
        * (1 - Math.pow(1 - 1.0 / product, n));
}

So let us put in the numbers… my column cardinalities are 16,16,15,5,2,94,21648,92,42,7,9,2,6,73,119; and I have 48842 rows. So what is Cardenas’ prediction?

Zero.

At least, that’s what the Java function returns.

Why is that? The first problem is that 1 – 1/p is 1 when p is that large. And even if you could compute 1 – 1/p accurately enough, taking it to the power of 48842 is a problem.
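We can verify the first problem directly. This small sketch (the class wrapper is mine) multiplies out the column cardinalities listed above and checks that 1 - 1/p rounds to exactly 1 in double precision:

```java
public class PrecisionCheck {
    public static void main(String[] args) {
        // the column cardinalities of the Adult data set, as listed above
        long[] cards = {16, 16, 15, 5, 2, 94, 21648, 92, 42, 7, 9, 2, 6, 73, 119};
        double product = 1;
        for (long c : cards) product *= c; // about 2e21
        // 1/product is about 5e-22, far below the double-precision epsilon (~2.2e-16),
        // so subtracting it from 1 changes nothing
        System.out.println(1.0 - 1.0 / product == 1.0); // prints true
    }
}
```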

So what do you do?

You can switch to something more accurate than double precision, that is quadruple precision (also called binary128). There are no native 128-bit floats in Java, but you can emulate them using the BigDecimal class. The code gets much uglier. Elegance aside, I assumed it would be a walk in the park, but I found that the implementation of the power function was numerically unstable, so I had to roll my own (from multiplications).

The core function looks like this…

double cardenas128(long[] cards, int n) {
    BigDecimal product = product(cards);
    BigDecimal oneover = BigDecimal.ONE.divide(product,
        MathContext.DECIMAL128);
    BigDecimal proba = BigDecimal.ONE.subtract(oneover,
        MathContext.DECIMAL128);
    proba = lemirepower(proba, n);
    return product.subtract(
        product.multiply(proba, MathContext.DECIMAL128),
        MathContext.DECIMAL128).doubleValue();
}

It scales up to billions of rows and up to products of cardinalities that do not fit in any of Java’s native types. Though the computation involves fancy data types, it is probably more than fast enough for most applications.

My source code is available.

Update: You can avoid 128-bit numbers by using the log1p(x) and expm1(x) functions; they compute log(x + 1) and exp(x) - 1 in a numerically stable manner. The updated code is as follows:

double cardenas64(long[] cards, int n) {
    double product = 1;
    for (int k = 0; k < cards.length; k++) {
        product *= cards[k];
    }
    return product *
        -Math.expm1(Math.log1p(-1.0 / product) * n);
}

(Credit: Harold Aptroot)
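As a sanity check on the stable version (a sketch; the class wrapper is mine), we can feed it the Adult cardinalities from above. Since the product of cardinalities (about 2 × 10^21) dwarfs n = 48842, nearly every row is expected to be distinct, so the estimate should come out just a hair under 48842, rather than zero:

```java
public class CardenasDemo {
    // Cardenas' estimate, computed stably with log1p/expm1
    static double cardenas64(long[] cards, int n) {
        double product = 1;
        for (int k = 0; k < cards.length; k++) {
            product *= cards[k];
        }
        // expm1/log1p remain accurate even when 1/product is below the double epsilon
        return product * -Math.expm1(Math.log1p(-1.0 / product) * n);
    }

    public static void main(String[] args) {
        long[] cards = {16, 16, 15, 5, 2, 94, 21648, 92, 42, 7, 9, 2, 6, 73, 119};
        int n = 48842;
        System.out.println(cardenas64(cards, n)); // close to 48842: nearly all rows distinct
    }
}
```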

Memory-level parallelism: Intel Skylake versus Apple A12/A12X

Modern processors execute instructions in parallel in many different ways: multi-core parallelism is just one of them. In particular, processor cores can have several outstanding memory access requests “in flight”. This is often described as “memory-level parallelism”. You can measure the level of memory-level parallelism your processor has by traversing an array randomly, either by following one path or by following several different “lanes”. We find that recent Intel processors have about “10 lanes” of memory-level parallelism.

It has been reported that Apple’s mobile processors are competitive (in raw power) with Intel processors. So a natural question is to ask whether Apple’s processors have more or less memory-level parallelism.

The kind of memory-level parallelism I am interested in has to do with out-of-cache memory accesses. Thus I use a 256MB block of memory. This is large enough not to fit into a processor cache. However, because it is so large, we are likely to suffer from a virtual-memory-related fault. This can significantly limit memory-level parallelism if the page sizes are too small. By default on the Linux distributions I use, the pages span 4kB (whether on 64-bit ARM or x64). Empirically, that is too small. Thankfully, it is easy to reconfigure the pages so that they span 2MB or more (“huge pages”). On Apple’s devices, whether it be an iPhone or an iPad Pro, I believe that the pages always span 16kB and that this cannot be easily reconfigured.

Before I continue, let me present the absolute timings (in seconds) using a single lane (thus no memory-level parallelism). Apple makes two versions of its most recent processor: the A12 (in the iPhone) and the A12X (in the iPad Pro).

Intel skylake (4kB pages) 0.73 s
Intel skylake (2MB pages) 0.61 s
Apple A12 (16kB pages) 0.96 s
Apple A12X (16kB pages) 0.97 s
Apple A10X (16kB pages) 1.15 s

According to these numbers, the Intel server has the edge over the Apple mobile devices. But that’s only part of the story. What happens as you increase the number of lanes (while keeping the code single-threaded) is interesting. As you increase the number of lanes, Apple processors start to beat the Intel Skylake in absolute, raw speed.

Another way to look at the problem is to measure the “speedup” due to the memory-level parallelism: we divide the time it takes to traverse the array using 1 lane by the time it takes to do so using X lanes. We see that the Intel Skylake processor is limited to about a 10x or 11x speedup whereas the Apple processors go much higher.

Thoughts:

  1. I’d be very interested in knowing how Qualcomm and Samsung processors compare.
  2. It goes without saying that my server-class Skylake machine uses a lot more power than the iPhone.
  3. If I could increase the page size on iOS, we would get even better numbers for the Apple devices.
  4. The fact that the A12 has higher timings when using a single lane suggests that its memory subsystem has higher latency than a Skylake-based PC. Why is that? Could Apple just crank up the frequency of the DRAM memory and beat Intel throughout?
  5. Why is Intel limited to 10x memory-level parallelism? Why can’t they do what Apple does?

Credit: I owe much of the design of the experiment and C++ code to Travis Downs, with help from Nathan Kurz. The initial mobile app for Apple devices was provided by Benoît Maison, you can find it on GitHub along with the raw results and a “console” version that runs under macOS and Linux. I owe the A12X numbers to Stuart Carnie and the A12 numbers to Victor Stewart.

Further reading: Memory Latency Components

Science and Technology links (November 10th, 2018)

  1. It already takes more energy to operate Bitcoin than to mine actual gold. Cryptocurrencies are responsible for millions of tons of CO2 emissions. (Source: Nature)
  2. Half of countries have fertility rates below the replacement level, so if nothing happens the populations will decline in those countries (source: BBC)
  3. According to Dickenson et al., 8.6% of us (7.0% of women and 10.3% of men) have difficulty controlling sexual urges and behaviors.
  4. A frequently prescribed drug family (statins) can increase your risk of suffering from ALS by a factor of 10 or 100.
  5. The countries where people are expected to live longest in 2040 are Spain, Japan, Singapore, Switzerland, Portugal, Italy, Israel, France, Luxembourg, and Australia. Not included in this list is the USA.
  6. Smart mirrors could monitor your mood, fitness, anxiety levels, heart rate, skin condition, and so forth.
  7. When you are trying to determine whether a drug is effective, it is tempting to look at published papers and see whether they all agree on the efficacy of the drug. This may be quite wrong: Turner et al. show a strong bias whereby negative results often go unpublished.

    Studies viewed by the FDA as having negative or questionable results were, with 3 exceptions, either not published (22 studies) or published in a way that, in our opinion, conveyed a positive outcome (11 studies). According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive. Separate meta-analyses of the FDA and journal data sets showed that the increase in effect size ranged from 11 to 69% for individual drugs and was 32% overall.

    Simply put, it is far easier and more profitable to publish positive results, so that’s what you get.

    This means that, by default, you should always downgrade the optimism of the literature.

    Simply put: don’t be too quick to believe what you read, even if it comes in the form of a large set of peer-reviewed research papers.

  8. Richard Jones writes “Motivations for some of the most significant innovations weren’t economic“.
  9. Cable and satellite TV is going away.
  10. “What if what students really want is not to be learners, but alumni?” People will prefer an academically useless program from Harvard to a complete graduate program from a lowly school because they badly want to say that they went to Harvard.
  11. Drinking coffee abundantly protects from neurodegenerative diseases.

Measuring the memory-level parallelism of a system using a small C++ program?

Our processors can issue several memory requests at the same time. In a multicore processor, each core has an upper limit on the number of outstanding memory requests, which is reported to be 10 on recent Intel processors. In this sense, we would like to say that the level of memory-level parallelism of an Intel processor is 10.

To my knowledge, there is no portable tool to measure memory-level parallelism, so I took fifteen minutes to throw together a C++ program. The idea is simple: we visit N random locations in a big array. We make sure that the processor cannot tell which location we will visit next before the previous location has been visited. There is a data dependency between memory accesses. We can break this memory dependency by dividing up the task between different “lanes”. Each lane is independent (a bit like a thread). The total number of data accesses is fixed. Up to some point, having more lanes should speed things up due to memory-level parallelism. I used the term “lane” so that there is no confusion with “threads” and multicore processing: my code is entirely single-threaded.

  size_t howmanyhits_perlane 
         = howmanyhits / howmanylanes;
  for (size_t counter = 0; 
      counter < howmanyhits_perlane; counter++) {
    for (size_t i = 0; i < howmanylanes; i++) {
      size_t laneindexes = hash(lanesums[i] + i);
      lanesums[i] += bigarray[laneindexes];
    }
  }

Methodologically, I increase the number of lanes until adding one more benefits the overall speed by less than 5%. Why 5%? No particular reason: I needed a threshold of some kind. I suspect that I slightly underestimate the maximal amount of memory-level parallelism: it would take a finer analysis to make a more precise measure.

I run the test three times and check that it gives three times the same integer value. Here are my (preliminary) results:

Intel Haswell 7
Intel Skylake 9
ARM Cortex A57 5

My code is available.
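If you would rather experiment without the C++ harness, here is a rough single-threaded Java transliteration of the lane idea. The hash function here is a generic 64-bit mixer (in the spirit of the Murmur3 finalizer) standing in for the hash() of the C++ fragment, and the array is kept small so the example runs quickly; to actually measure memory-level parallelism you would want a buffer well beyond your last-level cache (say 256MB), as in the original code.

```java
public class Lanes {
    // a generic 64-bit mixing function standing in for the hash() of the C++ code
    static long hash(long x) {
        x ^= x >>> 33; x *= 0xff51afd7ed558ccdL;
        x ^= x >>> 33; x *= 0xc4ceb9fe1a85ec53L;
        x ^= x >>> 33;
        return x;
    }

    // visit 'howmanyhits' pseudo-random locations using the given number of lanes;
    // within a lane each access depends on the previous one, but lanes are independent
    static long traverse(long[] bigarray, int howmanylanes, int howmanyhits) {
        long[] lanesums = new long[howmanylanes];
        for (int i = 0; i < howmanylanes; i++) {
            lanesums[i] = i + 1; // seed each lane; avoids the hash(0) == 0 fixed point
        }
        int perlane = howmanyhits / howmanylanes;
        for (int counter = 0; counter < perlane; counter++) {
            for (int i = 0; i < howmanylanes; i++) {
                int index = (int) (hash(lanesums[i] + i) & (bigarray.length - 1));
                lanesums[i] += bigarray[index];
            }
        }
        long total = 0;
        for (long s : lanesums) total += s;
        return total;
    }

    public static void main(String[] args) {
        long[] bigarray = new long[1 << 20]; // length must be a power of two for the mask
        for (int i = 0; i < bigarray.length; i++) bigarray[i] = i;
        for (int lanes = 1; lanes <= 16; lanes *= 2) {
            long start = System.nanoTime();
            long checksum = traverse(bigarray, lanes, 1 << 23);
            double ms = (System.nanoTime() - start) / 1e6;
            System.out.println(lanes + " lane(s): " + ms + " ms (checksum " + checksum + ")");
        }
    }
}
```

With an out-of-cache array, the time per pass should shrink as you add lanes, up to the processor's memory-level-parallelism limit.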

On multicore systems, there is more memory-level parallelism, so a multithreaded version of this test could deliver higher numbers.

Credit: The general idea was inspired by an email from Travis Downs, though I take all of the blame for how crude the implementation is.

Science and Technology links (November 3rd, 2018)

  1. Bitcoin, the cryptocurrency, could greatly accelerate climate change, should it succeed beyond its current speculative state.
  2. Crows can solve novel problems very quickly with tools they have never seen before.
  3. The new video game Red Dead Redemption 2 made $725 million in three days.
  4. Tesla, the electric car company, is outselling Mercedes Benz and BMW while making a profit.
  5. Three paralyzed men are able to walk again thanks to spinal implants (source: New York Times). There are nice pictures.
  6. Human beings live longer today than ever. In the developed world, between 1960 and 2010, life expectancy at birth went up by nearly 20 years. It consistently goes up by about 0.12 years per year. However, it is not yet clear how aging and death have evolved over time. Some believe that there is a “compression” effect: more and more of us reach a maximum, and then we suddenly all die at around the same age. This would be consistent with a hard limit on human lifespan and I think it is the scenario most biologists would expect. There is also the opposite model: while most of us die at around the same age, some lucky ones survive much longer. According to Zuo et al. (PNAS) both models are incorrect statistically. Instead, the curve is advancing as a wave front. This means that as far as death is concerned, being 68 today is much like being 65 a generation ago. This is surprising.

    (…) we find no support for an approaching limit to human lifespan. Nor do our results suggest that endowments, biological or other, are a principal determinant of old-age survival.

    Assuming that Zuo et al. are correct, I do not think we have a biological model at the ready to explain this statistical phenomenon.

  7. Suppose that you gave a cocktail of drugs approved for human consumption to worms. By how much do you think you could extend their lifespan? The answer is at least by a factor of two. They tried their best cocktails on fruit flies and showed benefits there as well. It is much harder to manipulate the lifespan of large mammals like human beings, but these results support the theory that drug cocktails could increase human lifespans. They may already be doing so.
  8. Amazon is hiring fewer workers, maybe because it is getting better at automation. (speculative) It seems that Amazon is mostly denying the story, hinting that they are still creating more and more jobs.
  9. No primate, except for human beings, undergoes menopause. Very few animals have menopause: primarily some whales and human beings. I don’t think we know why menopause evolved.
  10. Total direct greenhouse gas emissions from U.S. livestock have declined 11.3 percent since 1961, while production of livestock meat has more than doubled.
  11. Male and female animals respond very differently to anti-aging strategies and they age very differently:

    One particularly odd thing in humans is that though women live longer, they are nonetheless more prone to miserable but non-deadly ailments such as arthritis (…) Lethal illnesses such as heart disease and cancer strike men more often. Although Alzheimer’s strikes women more than men, for unknown reasons.

    We do not know why there is such a sharp difference between males and females regarding health and longevity. However, some believe that the current historical fact that women live many years longer than men is due to the fact that antibiotics disproportionately helped the health of women.

  12. Vegans more frequently suffer from bone fractures.
  13. Teaching by presenting worked examples seems to be most efficient. Students get the best grades with the least work. This appears self-evident to me. It is curious that worked examples are not more prevalent in teaching.
  14. A company called Grifols claims to have a drug that can measurably slow down the progression of Alzheimer’s. For context, we currently have no therapy to slow or reverse Alzheimer’s, so even a small positive effect would be a tremendous breakthrough. However, there have been many, many false reports regarding Alzheimer’s, and this report appears quite preliminary.

Science and Technology links (October 28th, 2018)

  1. If you take kids born in the 1980s, who do you think did better, the rich kids or the poor kids? The answer might surprise you:

    The children from the poorest families ended up twice as well-off as their parents when they became adults. The children from the poorest families had the largest absolute gains as well. Children raised in the top quintile did no better or worse than their parents once those children became adults.

  2. Some of our cells become senescent: they are dysfunctional and create trouble. We believe that this contributes to age-related diseases. Fisetin is a drug (available as a supplement) that kills senescent cells and extends (median and maximal) lifespan in mice. I do not recommend taking fisetin at this time, unless you are a mouse.
  3. Vegetarians report lower self-esteem, lower psychological adjustment, less meaning in life, and more negative moods. I have no idea what to make of this, apparently robust, finding. I was a vegetarian in my 20s and I was also subject to depression. I would never think that I was depressive because I ate no meat.
  4. The sea rises at a rate of 3 mm per year. It has been rising for thousands of years. Taking into account the acceleration that we anticipate due to climate change, we can expect the sea to have risen by 65 cm in 2100. Does that mean that islands will go under? Maybe not: in a study, only 14% of islands exhibited a reduction in area whereas 43% increased in size.
  5. Most processors today, outside the tiny embedded ones, use a 64-bit architecture, which means that they can process data in chunks of 64 bits very quickly. This has all sorts of benefits. A 32-bit processor, for example, has trouble counting to 5 billion. It is difficult, if not impossible, for a 32-bit software application to use more than 4GB of memory. Microsoft still publishes Windows in two editions, the 32-bit edition and the 64-bit edition. The purpose of the 32-bit edition is to support legacy applications. The two major graphics card makers (AMD and NVIDIA) have now stopped producing drivers for 32-bit operating systems. Thus, at least as far as gaming is concerned, 32-bit Windows is dying. Microsoft has promoted a 64-bit Windows by default on new computers since at least 2009.
  6. It seems that 70% of the American soldiers are “overweight”. I find it hard to believe that 60% of all American marines are overweight. Because this was determined using the body-mass-index approach, it is also possible that American soldiers are simply very muscular. Yet another statistics tells us that nearly 40% of all soldiers have a chronic medical condition and 8.6% take sleeping pills. So maybe American soldiers are not as fit as I would expect.
  7. It is often believed that men who have more testosterone have an easier time building muscle mass. It turns out that this is false: the amount of testosterone is not relevant in healthy young men.
  8. In the USA, health care costs are predicted to continue to grow at a rate of over 4% a year. The economy as a whole is predicted to grow at a rate between 1.4% and 2% a year in the long term. The net result is a gap of about 2% a year. If sustained over many decades, this gap would lead to the bulk of the American economy being invested in health spending. People who are 65 years old or older account for a third of all health spending, while young females (19 to 44) spend twice as much as their male counterparts.
  9. Cheese and yogurt are correlated with fewer cardiovascular diseases.
  10. The Haruhi Problem seeks the smallest string containing all permutations of a set of n elements. The first known solution to this problem was published anonymously on an anime posting board. A formal analysis is being written up.
  11. Cardiorespiratory fitness is associated with longevity:

    In this cohort study of 122,007 consecutive patients undergoing exercise treadmill testing, cardiorespiratory fitness was inversely associated with all-cause mortality without an observed upper limit of benefit. Extreme cardiorespiratory fitness (≥2 SDs above the mean for age and sex) was associated with the lowest risk-adjusted all-cause mortality compared with all other performance groups.

Is WebAssembly faster than JavaScript?

Most programs running on web sites are written in JavaScript. There are still a few Java applets and other plugins hanging around, but they are considered obsolete at this point.

While JavaScript is superbly fast, some people feel that we ought to do better. That’s where WebAssembly comes in. It is a binary (“pre-compiled”) format that is made to load quickly. It still needs to get compiled or interpreted, but, at least, you do not need to parse JavaScript source code.

The general idea is that you write your code in C, C++ or Rust, then you compile it to WebAssembly. In this manner, you can port existing C or C++ programs so that they run on Web pages. That’s obviously useful if you already have the C and C++ code, but less appealing if you are starting a new project from scratch. It is far easier to find JavaScript front-end developers in almost any industry, except maybe gaming.

You should not expect WebAssembly to have native performance. That is, WebAssembly is, at this time, no match for a good old C program.

I think it is almost surely going to be more labor intensive to program web applications using WebAssembly.

In any case, I like speed, so I asked a student of mine (M. Fall) to work on the problem. We picked small problems with hand-crafted code in C and JavaScript.

Here are the preliminary conclusions:

  1. In all cases we considered, the total WebAssembly files were larger than the corresponding JavaScript source code, even without taking into account that the JavaScript source code can be served in compressed form. This means that if you are on a slow network connection, JavaScript programs will start faster. The story may change if you build large projects. Moreover, we compared against human-written JavaScript, and not automatically generated JavaScript.
  2. Once the WebAssembly files are in the cache of the browser, they load faster than the corresponding JavaScript source code, but the difference is small. Thus if you are frequently using the same application, or if the web application resides on your machine, WebAssembly will start faster. However, the gain is small. One reason why the gain is small is that JavaScript loads and starts very quickly.
  3. WebAssembly (compiled with full optimization) is not always faster than JavaScript during execution, and when WebAssembly is faster, the gain can be small. Browser support is also problematic: while Firefox and Chrome have relatively fast WebAssembly execution (with Firefox being better), we found Microsoft Edge to be quite terrible. WebAssembly on Edge is really slow. Our preliminary results contradict several reports, so you should take them with a grain of salt. However, benchmarking is ridiculously hard, especially when a language like JavaScript is involved. Thus anyone reporting systematically better results with WebAssembly should look into how well optimized the JavaScript really is.

While WebAssembly might be a compelling platform if you have a C++ game you need to port to the Web, I would bet good money that WebAssembly is not about to replace JavaScript for most tasks. Simply put, JavaScript is fast and convenient. It is going to be quite difficult to do better in the short run.

It is still deserving of attention since the uptake of WebAssembly has been fantastic. For online games, it surely has a bright future.

More content: WebAssembly and the Death of JavaScript (video) by Colin Eberhardt

Further reading: Egorov’s Maybe you don’t need Rust and WASM to speed up your JS; Haas et al., Bringing the Web up to Speed with WebAssembly; Herrera et al., WebAssembly and JavaScript Challenge: Numerical program performance using modern browser technologies and devices.

Science and Technology links (October 20th, 2018)

  1. Should we stop eating meat to combat climate change? Maybe not. White and Hall worked out what happened if the US stopped using farm animals:

    The modeled system without animals only reduced total US greenhouse gas emissions by 2.6 percentage units. Compared with systems with animals, diets formulated for the US population in the plants-only systems resulted in a greater number of deficiencies in essential nutrients. (source: PNAS)

    Of concern when considering farm animals are methane emissions. Methane is a potent greenhouse gas, with the caveat that it is short-lived in the atmosphere unlike CO2. Should we be worried about methane despite its short life? According to the American EPA (Environmental Protection Agency), total methane emissions have been falling consistently for the last 20 years. That should not surprise us: greenhouse gas emissions in most developed countries (including the US) have peaked some time ago. Not emissions per capita, but total emissions.

    So beef, at least in the US, is not a major contributor to climate change. But we could do even better. Several studies like Stanley et al. report that well managed grazing can lead to carbon sequestration in the grassland. Farming in general could be more environmentally effective.

    Of course, if people consume less they will have a smaller environmental footprint, but going vegan does not imply that one consumes less. If you save in meat but reinvest in exotic fruits and trips to foreign locations, you could keep your environmental footprint the same.

    There are certainly countries where animal grazing is an environmental disaster. Many industries throughout the world are a disaster and we should definitely put pressure on the guilty parties. But, in case you were wondering, if you live in a country like Canada, McDonald’s not only serves locally produced beef, but also requires that it be produced in a sustainable manner.

    In any case, there are good reasons to stop eating meat, but in the developed countries like the US and Canada, climate change seems like a bogus one.

    There are also good reasons to keep farm animals. For example, it is difficult to raise an infant without cow milk, and in most countries it is illegal to sell human milk. Several parents have effectively killed their children by trying to raise them vegan (1, 2). It is relatively easy to match protein and calories with a vegan diet, but meat and milk are nutrient-dense foods: it requires some expertise to do away with them.

    Further reading: No, giving up burgers won’t actually save the planet (New York Post).

    (Special thanks to professor Leroy for providing many useful pointers.)

  2. News agencies reported this week that climate change could bring back the plague and the black death that wiped out Europe. The widely reported prediction was made by Professor Peter Frankopan while at the Cheltenham Literary Festival. Frankopan is a history professor at Oxford.
  3. There is an inverse correlation between funding and scientific output, meaning that beyond a certain point, you start getting less science for your dollars.

    prestigious institutions had on average 65% higher grant application success rates and 50% larger award sizes, whereas less-prestigious institutions produced 65% more publications and had a 35% higher citation impact per dollar of funding. These findings suggest that implicit biases and social prestige mechanisms (…) have a powerful impact on where (…) grant dollars go and the net return on taxpayers investments.

    It is well documented that there are diminishing returns in research funding. Concentrating your research dollars in too few individuals is wasteful. My own explanation for this phenomenon is that, Elon Musk aside, we all have cognitive bottlenecks. One researcher might fruitfully carry two or three major projects at the same time, but once they supervise too many students and assistants, they become a “negative manager”, meaning that they make other researchers no more productive and often less productive. They spend less and less time optimizing the tools and instruments.

    If you talk with graduate students who work in lavishly funded laboratories, you will often hear (when the door is closed) about how poorly managed the projects are. People are forced into stupid directions, they do boring and useless work to satisfy project objectives that no longer make sense. Currently, “success” is often defined by how quickly you can acquire and spend money.

    But how do you optimally distribute research dollars? It is tricky because, almost by definition, almost all research is worthless. You are mining for rare events. So it is akin to venture capital investing. You want to invest in many start-ups that have high potential.

  4. A Nature column tries to define what makes a good PhD student:

    the key attributes needed to produce a worthy PhD thesis are a readiness to accept failure; resilience; persistence; the ability to troubleshoot; dedication; independence; and a willingness to commit to very hard work — together with curiosity and a passion for research. The two most common causes of hardship in PhD students are an inability to accept failure and choosing this career path for the prestige, rather than out of any real interest in research.