Iterating over set bits quickly

A common problem in my line of work is to iterate over the set bits (bits having value 1) in a large array.

My standard approach involves a “counting trailing zeroes” function. Given an integer, this function counts how many consecutive bits are zero, starting from the least significant bit. Any odd integer has no “trailing zero”. Any even integer has at least one “trailing zero”, and so forth. Many compilers such as LLVM’s clang and GNU GCC have an intrinsic called __builtin_ctzl for this purpose. There are equivalent standard functions in Java (numberOfTrailingZeros), Go and so forth. There is a whole Wikipedia page dedicated to these functions, but recent x64 processors have a fast dedicated instruction, so the implementation is taken care of at the processor level.
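
As a quick illustration (a minimal sketch of my own, not taken from the post), here is what the intrinsic returns for a few small values:

#include <stdint.h>
#include <stdio.h>

int main() {
  // __builtin_ctzl counts the zero bits below the lowest set bit;
  // the result is undefined when the argument is zero.
  // (On platforms where unsigned long is 32-bit, use __builtin_ctzll instead.)
  uint64_t examples[] = {1, 2, 12, 96}; // 0b1, 0b10, 0b1100, 0b1100000
  for (int i = 0; i < 4; i++) {
    printf("%llu has %d trailing zeros\n",
           (unsigned long long)examples[i], __builtin_ctzl(examples[i]));
  }
  return 0;
}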

The following function will call the function “callback” with the index of each set bit:

uint64_t bitset;
for (size_t k = 0; k < bitmapsize; ++k) {
    bitset = bitmap[k];               // bitmap is an array of bitmapsize 64-bit words
    while (bitset != 0) {
      uint64_t t = bitset & -bitset;  // isolate the least significant set bit
      int r = __builtin_ctzl(bitset); // index of that bit within the word
      callback(k * 64 + r);           // global index of the set bit
      bitset ^= t;                    // clear the bit we just visited
    }
}

The trick is that bitset & -bitset returns an integer having just the least significant bit of bitset turned on; all other bits are off. With this observation, you should be able to figure out why the routine works.
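
To make the trick concrete, here is a small worked example (a sketch with values of my own choosing):

#include <stdint.h>
#include <stdio.h>

int main() {
  uint64_t bitset = 0x68; // 0b01101000
  // In two's complement, -bitset is ~bitset + 1, i.e. ...10011000,
  // so ANDing the two keeps only the lowest set bit: 0b00001000.
  uint64_t lowest = bitset & -bitset;
  printf("0x%llx\n", (unsigned long long)lowest); // prints 0x8
  return 0;
}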

Note that your compiler can probably optimize bitset & -bitset to a single instruction on x64 processors. Java has an equivalent function called lowestOneBit.

If you are in a rush, that’s probably not how you’d program it. You would probably iterate through all bits, in this manner:

uint64_t bitset;
for (size_t k = 0; k < bitmapsize; ++k) {
    bitset = bitmap[k];
    size_t p = k * 64;
    while (bitset != 0) {
      if (bitset & 0x1) {
        callback(p);
      }
      bitset >>= 1;
      p += 1;
    }
}

Which is faster?

Obviously, you have to make sure that your code compiles down to the fast x64 trailing-zero instruction. If you do, then the trailing-zero approach is much faster.

I designed a benchmark where the callback function just adds the indexes together. The speed per decoded index will depend on the density (fraction of set bits). I ran my benchmark on a Skylake processor:

density    trailing-zero          naive
0.125      ~5 cycles per int      40 cycles per int
0.25       ~3.5 cycles per int    30 cycles per int
0.5        ~2.6 cycles per int    23 cycles per int

My code is available.

Thus using a fast trailing-zero function is about ten times faster.

Credit: This post was inspired by Wojciech Muła.

Science and Technology links (February 16th, 2018)

  1. In all countries, in all years–without exception–girls did better than boys in academic performance (PISA) tests.
  2. Vinod Khosla said:

    There are, perhaps, a few hundred sensors in the typical car and none in the body. A single ad shown to you on Facebook has way more computing power applied to it than a $10,000 medical decision you have to make.

  3. The gender of an unknown user can be identified with an accuracy of over 95% using the way a user is typing.
  4. Citation counts work better than a random baseline (by a margin of 10%) in distinguishing important seminal research papers.
  5. By consuming vitamin B3 (or niacin), you can increase your body’s production of Nicotinamide Adenine Dinucleotide (NAD+ for short). It turns out that NAD+ supplementation normalizes key Alzheimer’s features (in mice). If I suffered from Alzheimer’s, I would not hesitate to take niacin supplements.
  6. The U.S. has never produced so much oil.
  7. According to Nature, birds living in large groups are smarter.
  8. A few years ago, we were told that the Pacific nation of Tuvalu would soon disappear due to climate change. In fact, for now, it is growing in size.

Science and Technology links (February 9th, 2018)

  1. We shed 50 million skin cells every day.
  2. A mutant crayfish reproduces by cloning. To my knowledge, this might be the largest animal to reproduce by cloning.

    Before about 25 years ago, the species simply did not exist (…) it has spread across much of Europe and gained a toehold on other continents. In Madagascar, where it arrived about 2007, it now numbers in the millions (…)

    I note two interesting aspects to this story. The first one is that it shows that, contrary to a common belief, new species are created even today. The second one is that it brings us back to an interesting puzzle. Cloning is a lot more efficient than sex, for procreation. So why do most large animals use sex? See The Red Queen: Sex and the Evolution of Human Nature.

  3. Some evidence that moderate (but not heavy) alcohol consumption might be good for your brain. I still would not recommend you start drinking if you aren’t drinking right now.
  4. While the average is 106 boys born to every 100 girls, for vegetarian mothers the ratio is just 85 boys to 100 girls. In other words, being a vegetarian makes it much more likely that you will give birth to girls.
  5. Researchers can simulate a worm’s brain with a few artificial neurons.
  6. Elon Musk’s SpaceX company launched the most powerful rocket in the world:

    The Falcon Heavy is the world’s 4th highest capacity rocket ever to be built (…) Falcon Heavy was designed from the outset to carry humans into space, including the Moon and Mars (…) The Falcon Heavy was developed with private capital with Musk stating that the cost was more than $500 million. No government financing was provided for its development.

    The Verge has a nice documentary on YouTube.

  7. Mitochondria are the “power stations” of our cells. As we age, we tend to accumulate malfunctioning mitochondria, which might lead to various medical conditions. Researchers have found that a drug targeting mitochondria could improve cognition in old mice.
  8. Graphics processors are in high demand. Some of the best ones are made by NVIDIA. Year-over-year, NVIDIA’s full-year revenue increased 41% to finish at $9.71 billion in 2017.
  9. Using lasers, we found whole new Mayan settlements:

    The data reveals that the area was three or four times more densely populated than originally thought. “I mean, we’re talking about millions of people, conservatively,” says Garrison. “Probably more than 10 million people.”

  10. According to a recent research article, vitamin D-3 has the potential to significantly reverse the damage that high blood pressure, diabetes, atherosclerosis, and other diseases inflict on the cardiovascular system.
  11. A vaccine entirely cleared mice of cancer.

Don’t underestimate the nerds

I’m a little nerdy. According to my wife, I even look like a nerd. I am not very big. I have a long resume posted online, and I’ll proudly post my follower count, but if you meet me in person, I am unlikely to come across as “impressive”. I don’t talk using “big words”. I have been told that I lack “vision”. Given a choice between spending time with powerful people getting their attention, and reading a science article… I will always go for the latter.

I’m not at all modest, but I am humble. I get most things wrong, and I will gladly advertise my failures.

I’m lucky in that I have a few like-minded colleagues. I have a colleague, let us call her “Hass”. She gave us a talk about power laws. (The mathematical kind.) Who spends their lunchtime talking about power laws and probabilistic distributions?

We do.

However, if you have been deep down in the bowels of academia, you will find another animal. You have “political professors” whose main game is to achieve high status in the most visible manner. Academia rewards this kind of behavior. If you can convince others that you are important, well regarded and that you do great work for humanity, you will receive lavish support. It makes sense given the business that schools are into: delivering prestige.

If you visit a campus, you might be surprised at how often computing labs are empty, no professor to be found. Because of who I am, I would never ask for space unless I really needed it. But, see, that’s not how political animals think… to them, having space is a matter of status.

Nerds are, at best, part-time political animals. It would seem that nerds are weak. Are they?

My view is that nerds are almost a different species. Or, at least, a subspecies. They do signal strength, but not by having a luxurious car, a big house, a big office, a big title.

I remember meeting with the CEO of a company that was doing well. The CEO kept signaling to me. He talked endlessly about his prestigious new car. He was sharply dressed in what was obviously a very expensive suit. He kept telling me about how many millions they were making. Yet we were in my small office, in a state university. He kept on signaling… and you know how I felt in the end? Maybe he expected me to feel inferior to him. Yet I lost interest in anything he had to tell me. He wanted me to review some technology for them, but I discouraged him.

Big titles, displays of money… those do not impress me. If you signal strength through money alone, I’m more likely to pity you.

If Linus Torvalds were to meet Bill Gates, you think that Linus would be under Bill in the nerdom hierarchy? I doubt it. I have no idea how much money Linus has, and the fact that nobody cares should be a clue.

What did my colleague Hass do? She came and presented a kick-ass nerdy presentation. The kind of stuff you cannot make up if you don’t know what you are talking about. She displayed strength, strength that I recognize. I think everyone in the room saw it. Yet she did not wear expensive clothes and she did not advertise big titles.

My wife recently taught me how to recognize signaling between cats. You could live all your life with cats and never realize how they broadcast signals and strength.

It is a mistake to think that the introverted nerds are weak. This is a very common mistake. I once bought a well-rated book on introverts, written by an extrovert. The whole book was about how introverts should face their fears. The author clearly thought that we were weak, in need of help somehow.

You are making a mistake if you think that my colleague Hass is weak. She could kick your nerd ass anytime.

Science and Technology links (February 2nd, 2018)

  1. Most mammals, including human beings, age according to a Gompertz curve. It is a fancy way of saying that your risk of death goes up exponentially with age. Naked mole rats are mammals that do not age, in the following sense:

    unlike all other mammals studied to date, and regardless of sex or breeding-status, the age-specific hazard of mortality did not increase with age, even at ages 25-fold past their time to reproductive maturity

  2. It seems that the brain of male infants differs from that of female infants:

    We show that brain volumes undergo age-related changes during the first month of life, with the corresponding patterns of regional asymmetry and sexual dimorphism. Specifically, males have larger total brain volume and volumes differ by sex in regionally specific brain regions, after correcting for total brain volume.

  3. The American National Institutes of Health are launching a major research program in genome editing ($190 million over six years).
  4. It appears that many of us are deficient in magnesium, and that’s an important driver for cardiovascular diseases. Most of us will die of a cardiovascular disease (given current medical knowledge).

Picking distinct numbers at random: benchmarking a brilliant algorithm (JavaScript edition)

Suppose you want to choose m distinct integers at random in the interval [0,n). How would you do it quickly?

I have a blog post on this topic dating back to 2013. This week I came across Adrian Colyer’s article where he presents a very elegant algorithm to solve this problem, attributed to Floyd by Bentley. The algorithm was presented in an article entitled “A sample of brilliance” in 1987.

Adrian benchmarks the brilliant algorithm and finds it to be very fast. I decided to revisit Adrian’s work. Like Adrian, I used JavaScript.

The simplest piece of code to solve this problem is a single loop…

// randInt(x) is assumed to return a uniform random integer in [0, x)
let s = new Set();
while (s.size < m) {
      s.add(randInt(n));
}

The algorithm is “non-deterministic” in the sense that you will generally loop more than m times to select m distinct integers.
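
As a back-of-the-envelope estimate (my own, not from the original post): once k distinct values have been selected, a fresh draw is new with probability (n − k)/n, so the expected number of iterations is

\sum_{k=0}^{m-1} \frac{n}{n-k} \;=\; n\left(H_n - H_{n-m}\right) \;\approx\; n \ln\frac{n}{n-m} \quad (m < n),

where H_j is the j-th harmonic number. This stays close to m when m is much smaller than n, but grows to roughly n ln n when m = n, which is consistent with the collapse of the naive method in the last row of the table below.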

The brilliant algorithm is slightly more complicated, but it always loops exactly m times:

let s = new Set();
for (let j = n - m; j < n; j++) {
        // Floyd's method draws t from the inclusive range [0, j],
        // hence randInt(j + 1) with the exclusive randInt used above
        const t = randInt(j + 1);
        s.add(s.has(t) ? j : t);
}

It may seem mysterious, but it is actually an intuitive algorithm, as Adrian explains in his original article.

It seems like the second algorithm is much better and should be faster. But how much better is it?

Before I present my results, let me port my 2013 algorithm over to JavaScript. Firstly, we introduce a function that can generate the answer using a bitset instead of a generic JavaScript Set.

function sampleBitmap(m, n) {
   var s = new FastBitSet();
   var cardinality = 0
   while(cardinality < m) {
      // checkedAdd returns 1 when the value was actually added, 0 if it was already present
      cardinality += s.checkedAdd(randInt(n));
   }
   return s
}

Bitsets can be much faster than generic sets; see my post JavaScript and fast data structures.

Secondly, consider the fact that when you need to generate more than n/2 integers in the range [0,n) (that is, when m > n/2), you can, instead, generate n - m integers, and then negate the result:

// returns the complement of the set s within [0, n)
function negate(s, n) {
  var news = new FastBitSet();
  let i = 0;
  s.forEach(j => {
    while (i < j) { // add every index below the next member of s
      news.add(i);
      i++;
    }
    i = j + 1; // skip the member itself
  });
  while (i < n) { // add everything after the last member of s
    news.add(i);
    i++;
  }
  return news;
}

My complete algorithm is as follows:

function fastsampleS(m, n) {
    if (m > n / 2) {
      // more than half the range: sample the complement instead
      let negatedanswer = fastsampleS(n - m, n)
      return negate(negatedanswer, n)
    }
    if (m * 1024 > n) {
      // dense case: a bitset beats a generic Set
      return sampleBitmap(m, n)
    }
    // sparse case: fall back to the generic Set-based sampling shown above
    return sampleS(m, n)
}

So we have three algorithms, a naive algorithm, a brilliant algorithm, and my own (fast) version. How do they compare?

m          n          naive           brilliant       my algo
10,000     1,000,000  1,200 ops/sec   1,000 ops/sec   4,000 ops/sec
100,000    1,000,000  96 ops/sec      80 ops/sec      700 ops/sec
500,000    1,000,000  14 ops/sec      14 ops/sec      120 ops/sec
750,000    1,000,000  6 ops/sec       8 ops/sec       80 ops/sec
1,000,000  1,000,000  0.4 ops/sec     5 ops/sec       200 ops/sec

So the brilliant algorithm does not fare better than the naive algorithm (in my tests), except when you need to select more than half of the values in the interval. However, in that case, you should probably optimize the problem by selecting the values you do not want to pick.

My fast bitset-based algorithm is about an order of magnitude faster. It relies on the FastBitSet.js library.

My complete source code is available.

Science and Technology links (January 26th, 2018)

  1. We have reached “peak coal” meaning that coal usage is going to diminish in the coming years.
  2. McGill professor Ishiang Shih has been accused by the US government of leaking chip designs to the Chinese government. The professor runs a business called JYS Technologies. This sounds impressive and mysterious until you check out the company web site and its headquarters. If professor Ishiang Shih is a spy, he is clearly not in the James-Bond league.

    Anyhow, I thought it was interesting that the US would worry about China having access to chip designs. China is the world’s largest consumer of computer chips, but it still produces few of them, relying on imports instead. Obviously, the Chinese government would like to start making its own chips. Soon.

  3. Though we can increase networking bandwidth, we are unlikely to improve network latency in the future, because nothing can go faster than the speed of light. This means that we are hitting the physical limits of how quickly web sites can acknowledge your requests.

    (…) current network (Internet) latencies are here to stay, because they are already within a fairly small factor of what is possible under known physics, and getting much closer to that limit – say, another 2x gain – requires heroics of civil and network engineering as well as massive capital expenditures that are very unlikely to be used for general internet links in the foreseeable future.

    This would have impressed Einstein, I think.

  4. Men differ from women in that they are much more diverse:

    Human studies of intrasex variability have shown that males are intellectually more variable. Here we have performed retrospective statistical analysis of human intrasex variability in several different properties and performances that are unrelated or indirectly related to intelligence: (a) birth weights of nearly 48,000 babies (Medical Birth Registry of Norway); (b) adult weight, height, body mass index and blood parameters of more than 2,700 adults aged 18–90 (NORIP); (c) physical performance in the 60 meter dash event of 575 junior high school students; and (d) psychological performance reflected by the results of more than 222,000 undergraduate university examination grades (LIST). For all characteristics, the data were analyzed using cumulative distribution functions and the resultant intrasex variability for males was compared with that for females. The principal finding is that human intrasex variability is significantly higher in males, and consequently constitutes a fundamental sex difference.

    If you take this result to its logical conclusion, you realize that whether you look at top performers or worst performers, you will find more men than women, assuming that the average performance is the same. Biology takes more risks with men than with women.

  5. Scholars who believe nurture trumps nature also tend to doubt the scientific method. I am not sure what to make of this observation.
  6. Curcumin is a yellow-ish chemical found in the turmeric spice (commonly used in Indian cuisine). It has long been reported to have anti-inflammatory properties. It seems to be good against arthritis (according to the guy who renovated my kitchen) and there are reports that people who eat turmeric-rich Indian food have fewer cancers. To my knowledge, the evidence for the benefits of curcumin remains somewhat anecdotal, which suggests that the beneficial effect, if any, is small. To make matters worse, curcumin is not very bio-available, meaning that you’d need to eat truckloads of turmeric to get a lot of curcumin in your cells. Some clever folks have commercialized more bio-available (and typically much more expensive) forms of curcumin. You can buy some on Amazon (it is not cheap: it will cost you about $2 a day). We can hope that they would have greater effects. A paper in the American Journal of Geriatric Psychiatry reports that taking bio-available curcumin improves cognition in adults. Presumably, it reduces brain inflammation. (credit: David Nadeau)

    I expect that the effect, if real, is small. Still, it is also probably safe enough.

  7. Chinese scientists have successfully cloned two monkeys through somatic cell nuclear transfer. When I asked my wife why it was such a big deal, she pointed out that it suggested human cloning. Indeed, if you can clone monkeys, why not clone human beings?
  8. China has overtaken the United States in terms of the total number of science publications, according to statistics compiled by the US National Science Foundation (NSF). If you find this interesting, you might want to read my post China is catching up to the USA, while Japan is being left behind.
  9. Intel (and AMD) processors have instructions to compute the sine and the cosine. They are surprisingly inaccurate:

    The worst-case error for the fsin instruction for small inputs is actually about 1.37 quintillion units in the last place, leaving fewer than four bits correct. For huge inputs it can be much worse, but I’m going to ignore that. (…) It is surprising that fsin is so inaccurate. I could perhaps forgive it for being inaccurate for extremely large inputs (which it is) but it is hard to forgive it for being so inaccurate on pi which is, ultimately, a very ‘normal’ input to the sin() function.

Initializing arrays quickly in Swift: be wary of Sadun’s initializers

Swift is Apple’s go-to programming language. It is the new default to build applications for iPhones. It also runs well on Linux.

It is not as low-level as C or C++, but it has characteristics that I like. For example, it does not use a “fancy” garbage collector, relying instead on deterministic reference counting. It is also a compiled language, and it benefits from a clean syntax.

Suppose you want to initialize an array in Swift with the values 0, 100, 200… Let us pick a sizeable array (containing 1000 elements). The fastest way to initialize the array in my tests is as follows:

Array((0..<1000).lazy.map { 100 * $0 })

The “lazy” call is important for performance… I suspect that without it, some kind of container is created with the desired values, and then it gets copied back to the Array.

One of the worst approaches, from a performance point of view, is to repeatedly append elements to the Array:

var b = [Int]()
// here maxval = 100_000 and skip = 100, to build the same 1000 values as above
for i in stride(from: 0, to: maxval, by: skip) {
     b.append(i)
}

It is more than 5 times slower! Something similar is true with vectors in C++. In effect, constructing an array by repeatedly adding elements to it is not ideal performance-wise.
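
For reference, here is what the analogous situation looks like with C++ vectors (a minimal sketch of my own; actual timings depend on the compiler and standard library):

#include <cstddef>
#include <vector>

// build by repeated push_back: the vector may reallocate and copy
// its contents several times as it grows
std::vector<int> build_by_append(std::size_t count) {
  std::vector<int> v;
  for (std::size_t i = 0; i < count; i++) {
    v.push_back(static_cast<int>(100 * i));
  }
  return v;
}

// build with the final size known up front: one allocation, then fill
std::vector<int> build_up_front(std::size_t count) {
  std::vector<int> v(count);
  for (std::size_t i = 0; i < count; i++) {
    v[i] = static_cast<int>(100 * i);
  }
  return v;
}

Sizing the container up front avoids the repeated reallocations, which is roughly what the lazy-map initializer achieves for Swift arrays.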

One nice thing about Swift is that it is extensible. So while Arrays can be initialized by sequences, as in my code example… they cannot be initialized by “generators” by default (a generator is a function with a state that you can call repeatedly)… we can fix that in a few lines of code.

Erica Sadun proposed to extend Swift arrays so that they can be initialized by generators… Her code is elegant:

public extension Array {
  public init(count: Int, generator: @escaping() -> Element) {
    precondition(count >= 0, "arrays must have non-negative size")
    self.init(AnyIterator(generator).prefix(count))
  }
}

I can use Erica Sadun’s initializer to solve my little problem:

var runningTotal : Int = 0
let b = Array(count: 1000) {() -> Int in
           runningTotal += 100
           return runningTotal
}

How fast is Erica’s initializer?

$ swift build    --configuration release && ./.build/release/arrayinit
append                           6.091  ns
lazymap                          1.097  ns
Erica Sadun                      167.311  ns

So over 100 times slower. Not good.

Performance-wise, Swift is a rather opinionated language: it really wants you to initialize arrays from sequences.

We can fix Erica’s implementation to get back to the best performance:

public extension Array {
  public init(count: Int, generator: @escaping () -> Element) {
    precondition(count >= 0, "arrays must have non-negative size")
    self.init((0..<count).lazy.map { _ in generator() })
  }
}

My source code is available.

Microbenchmarking is hard: virtual machine edition

To better understand software performance, we often use small controlled experiments called microbenchmarks. In an earlier post, I remarked that it is hard to reason from a Java benchmark. This brought me some criticism from Aleksey Shipilëv who is one of the top experts on Java benchmarking. I still stand by my belief and simply promised Aleksey to, one day, argue with him over a beer.

In a follow-up post, I insisted that microbenchmarks should rely on very tightly controlled conditions, and I recommended avoiding just-in-time compilers if possible (as is standard in Java). Indeed, you want your microbenchmarks to be as deterministic as possible (they should always behave the same way), yet just-in-time compilers are, almost by definition, non-deterministic. There is no reason to believe that your Java code will always be executed in the same manner from run to run. I also advocate avoiding memory allocation (and certainly garbage collection).

I am basing my opinion on practice. When developing software, I have often found it frustratingly difficult to determine whether a change would impact performance positively or negatively when using a language like Java or JavaScript, but much easier when using a more deterministic language like Go, Swift, C or C++.

Laurence Tratt shared with me his paper “Virtual Machine Warmup Blows Hot and Cold” (presented at OOPSLA last year). I believe that it is a remarkable paper, very well written. Tratt’s paper is concerned with microbenchmarks written for languages with a virtual machine, like Java, JavaScript, Python (PyPy), Ruby, Scala and so forth. Note that they use machines configured for testing and not any random laptop.

Here are some interesting quotes from the paper:

in almost half of cases, the same benchmark on the same VM on the same machine has more than one performance characteristic

However, in many cases (…) neither JIT compilation, nor garbage collection, are sufficient to explain odd behaviours (…) we have very little idea of a likely cause of the unexpected behaviour.

It is clear that many benchmarks take considerable time to reach a steady state; that different process executions of the same benchmark reach a steady state at different points; and that some process executions do not ever reach a steady state.

What should we do if P contains a no steady state? (…) no meaningful comparison can be made.

We suggest that in many cases a reasonable compromise might be to use smaller numbers (e.g. 500) of in-process iterations most of the time, while occasionally using larger numbers (e.g. 1500) to see if longer-term stability has been affected.

My thoughts on their excellent work:

  1. Their observation that many benchmarks never reach a steady state is troubling. The implicit assumption in many benchmarks is that there is some true performance, plus noise. Many times, it is assumed that the noise is normally distributed. So, for example, you may rarely hit a performance that is much higher or much lower than the true (average) performance. That’s, of course, not how it works. If you plot timings, you rarely find a normal distribution. But Tratt’s paper puts into question the concept of a performance distribution itself… it says that performance may evolve, and keep on evolving. Furthermore, it hints at the fact that it might be difficult to determine whether your benchmark has reached a true steady state.
  2. They recommend running more benchmarks, meaning that quantity has a quality of its own. I agree with them. The counterpart to this, which they do not fully address, is that benchmarking has to be easy if it is to be plentiful. It is not easy to write a microbenchmark in Java (despite Aleksey’s excellent work). Languages like Go make it much easier.
  3. They argue for long-running benchmarks on the basis that a single event (e.g., a context switch) will have a larger relative effect on a short benchmark than on a long benchmark. My view is that, as far as microbenchmarks are concerned, you want to idealize away outlier events (like a context switch), that is, you do not want them to enter into your reported numbers at all, and that’s difficult to do with a long-running benchmark if you are reporting an aggregate like an average.

    Moreover, if you have a really idealized setup, the minimum running time should be a constant: it is the fastest your processor can do. If you cannot measure that, you are either working on a problem that is hard to benchmark (e.g., involving random memory accesses, involving hard-to-predict branches, and so forth), or you have a non-ideal scenario.

    Of course, if you have a more complicated (non-ideal) setup, as is maybe unavoidable in a language like Java, then it is a different game. I would argue that you should be headed toward “system benchmarks” where you try to benchmark a whole system for engineering purposes. The downside is that it is going to be harder to reason about the performance with confidence.

    Thus, when I really want to understand something difficult, even if it arose from Java or JavaScript, I try to reproduce it with a low-level language like C where things are more deterministic. Even that can be ridiculously difficult at times, but it is at least easier.

I would conclude that benchmarking is definitively not a science. But I’m not sure paranoia is the answer; I think we need better, easier tools. We need more visualization. We need more metrics. And, no, we don’t want to wait longer while sipping coffee. That won’t make us any smarter.

Science and Technology links (January 19th, 2018)

  1. The Raspberry Pi 3, a $15 computer that I use for various fun projects, is 15 times more powerful than the Cray-1 supercomputer, but it is 130,000 times lighter. The Cray-1 was worth $9 million in 1977. (Source: Joe Armstrong)
  2. Stem cells can be used to replace or modify the behavior of our own cells. It is likely that many breakthrough therapies will involve stem cells. But the production of stem cells is expensive. To solve the cost issue, the Mayo clinic will be producing stem cells in an automated manner for clinical trials.
  3. As we age, we tend to lose hair, and it does not just come back on its own. And if it did, at an old age, you would expect the hair to be white. But it looks like wounds can regrow hair at any age:

    We reported an 80-year-old patient with a large wound on the scalp (…) The patient’s wound healed very well aesthetically. Interestingly, on approximate post wound day 180, a hair was observed to be growing towards the surface and eventually erupted in the center of the wound. The hair remained black at 42-month follow-up. This case demonstrated that neogenesis of hair is possible even in a geriatric patient. (Source)

  4. The Alibaba corporation has developed an artificial intelligence model that scored better than humans in a Stanford University reading and comprehension test. I have not looked into it, but as far as I know, we don’t know how to build computers that can read like human beings do. I mean that we don’t even know how to do it in principle.
  5. Some chameleons have fluorescent bones.
  6. In the novel Rainbows End, sci-fi author Vernor Vinge describes a hero who is recovering from Alzheimer’s. The novel is set in 2025. We are in 2018, and we still have no clue how to halt, let alone cure, Alzheimer’s. If we were to cure Alzheimer’s, would the individual be able to recover normal use of his memory? Many people doubt it: they think that the synapses are being destroyed. Research from McGill University suggests that Rainbows End’s vision might be correct. The synapses are still present, even in advanced stages of Alzheimer’s; they are just unable to function. If correct, this means that we might, one day, reverse Alzheimer’s.
  7. As I suspected all throughout 2017, not all is well in Hollywood:

    While the average price of a movie ticket in the U.S. rose to $8.97 in 2017, an increase of 3.69 percent, total domestic box office in North America dropped by 2.55 percent to $11.091 billion, according to information released Wednesday by the National Association of Theatre Owners. Despite the increase in ticket prices, the overall decline in ticket revenue was caused by a drop in overall admissions, which fell by 6.03 percent to 1.236 billion.

  8. Birth order within a family seems to matter quite a bit:

    (…) we found strong birth order effects on IQ that are present when we look within families. Later-born children have lower IQs, on average, and these differences are quite large. For example, the difference between firstborn and second-born average IQ is on the order of one-fifth of a standard deviation

    The difference in educational attainment between the first child and the fifth child in a five-child family is roughly equal to the difference between the educational attainment of blacks and whites calculated from the 2000 Census.

    Firstborn children are significantly more likely to be employed and to work as top managers (…) firstborn children are more likely to be in occupations requiring sociability, leadership ability, conscientiousness, agreeableness, emotional stability, extraversion, and openness.

    later-borns are less likely to consider themselves to be in good health, and measures of mental health generally decline with birth order