Swift versus Java: the bitset performance test

I claimed online that the performance of Apple’s Swift was not yet on par with Java. People asked me to back my claim with numbers.

I decided to construct one test based on bitsets. A bitset is a fast data structure to implement sets of integers.
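To make the idea concrete, here is a minimal sketch of how a bitset works, written here in Swift. It is an illustration only, not the actual SwiftBitset or java.util.BitSet code, and it uses the bit-counting helpers (nonzeroBitCount, trailingZeroBitCount) available in recent Swift versions: integers are stored as bits inside an array of 64-bit words, so adding, counting and iterating reduce to cheap bit operations.

struct TinyBitset {
    var words = [UInt64]()

    // add the integer i to the set: set bit (i mod 64) of word (i div 64)
    mutating func add(_ i: Int) {
        let w = i / 64
        if w >= words.count {
            words += [UInt64](repeating: 0, count: w - words.count + 1)
        }
        words[w] |= (1 as UInt64) << (i % 64)
    }

    // how many integers are present: one population count per word
    func count() -> Int {
        return words.reduce(0) { $0 + $1.nonzeroBitCount }
    }

    // visit every stored integer in increasing order
    func forEach(_ body: (Int) -> Void) {
        for w in 0..<words.count {
            var word = words[w]
            while word != 0 {
                body(w * 64 + word.trailingZeroBitCount)
                word &= word - 1  // clear the lowest set bit
            }
        }
    }
}

var b = TinyBitset()
for i in stride(from: 0, to: 1_000_000, by: 10) { b.add(i) }
print(b.count())            // 100000
var total = 0
b.forEach { total += $0 }   // visits 0, 10, 20, …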

Java comes with its own bitset class called java.util.BitSet. I wrote three tests for it: the time it takes to add a million integers in sequence to a bitset, the time it takes to count how many integers are present in the bitset, and the time it takes to iterate through the integers.

Here are the results in Java:

git clone https://github.com/lemire/microbenchmarks.git
cd microbenchmarks
mvn clean install
java -cp target/microbenchmarks-0.0.1-jar-with-dependencies.jar me.lemire.microbenchmarks.bitset.Bitset

Benchmark Mode Samples Score Error Units
m.l.m.b.Bitset.construct avgt 5 0.008 ± 0.002 s/op
m.l.m.b.Bitset.count avgt 5 0.001 ± 0.000 s/op
m.l.m.b.Bitset.iterate avgt 5 0.005 ± 0.001 s/op

So all tests take at most a few milliseconds. Good.

Next I did my best to reproduce the same tests in Swift:

git clone https://github.com/lemire/SwiftBitset.git
cd SwiftBitset

swift test -Xswiftc -O -Xswiftc -Ounchecked -s BitsetTests.BitsetTests/testAddPerformance
Test Case '-[BitsetTests.BitsetTests testAddPerformance]' measured [Time, seconds] average: 0.019

swift test -Xswiftc -O -Xswiftc -Ounchecked -s BitsetTests.BitsetTests/testCountPerformance
Test Case '-[BitsetTests.BitsetTests testCountPerformance]' measured [Time, seconds] average: 0.004

swift test -Xswiftc -O -Xswiftc -Ounchecked  -s BitsetTests.BitsetTests/testIteratorPerformance
Test Case '-[BitsetTests.BitsetTests testIteratorPerformance]' measured [Time, seconds] average: 0.010

These tests are rough. I got these numbers on my laptop, without even trying to keep various noise factors in check. Notice, however, that I disabled bounds checking in Swift (but not in Java), thus giving something of an unfair advantage to Swift.

But as is evident, SwiftBitset can be twice as slow as Java’s BitSet. That is not a small difference.

SwiftBitset is brand new, whereas Java’s BitSet has undergone thorough review over the years. It is likely that SwiftBitset leaves at least some performance on the table. So we could probably close some of the performance gap with better code.

Nevertheless, it does not make me hopeful regarding Swift’s performance compared to Java, at least for the type of work I care about. But Swift hopefully uses less memory, which might be important on mobile devices.

Some people seem to claim that Swift gives iOS an advantage over Android and its use of Java. I’d like to see the numbers.

Update. I decided to add a Go benchmark and a C benchmark. Here are my results:

language         create    count    iterate
Java’s BitSet    8 ms      1 ms     5 ms
C’s cbitset      11 ms     0 ms     4 ms
SwiftBitset      19 ms     4 ms     10 ms
Go’s bitset      18 ms     3 ms     13 ms

It might seem surprising to some, but Java can be a fast language. (The C implementation suffers from my Mac’s poor “malloc” performance. Results would be vastly different under Linux.)

My thoughts on Swift

Swift is a new programming language produced by Apple for its iOS devices (primarily the iPhone). It first appeared two years ago and it has been gaining popularity quickly.

Before Swift, Apple programmers were “stuck” with Objective-C. Objective-C is old and hardly ever used outside the Apple ecosystem.

Swift, at least the core language, is fully available on Linux. There are rumours that it should soon become available for Windows.

If you want to build mobile apps, then learning Swift is probably wise. But setting this context aside, how does Swift stand on its own?

To find out, I wrote and published a small Bitset library in Swift 3.0.

  • Like most recent languages (e.g., Rust, Go), Swift 3.0 comes with standard and universal tools to test, build and manage dependencies. In contrast, languages like C, C++ or Java depend on additional tools that are not integrated into the language per se. There is no reason, in 2016, not to include unit testing, benchmarking and dependency management as part of the programming language itself. Swift shines in this respect.
  • Swift feels a lot like Java. It should be easy for Java programmers to learn the language. Swift passes classes by reference and everything else by value, though there is an inout parameter annotation to override the default. The ability to turn what would have been a pass-by-reference class in Java into a pass-by-value struct opens up optimization opportunities in Swift (see the sketch after this list).
  • In Java, all strings are immutable. In Swift, strings can be either immutable or mutable. I suspect that this may give Swift a performance advantage in some cases.
  • Swift supports automatic type inference, which is meant to give the syntax a “high-level” look compared to Java. I am not entirely convinced that it is actual progress in practice.
  • Swift uses automatic reference counting instead of a more Java-like garbage collection. Presumably, this means fewer long pauses which might be advantageous when latency is a problem (as is the case in some user-facing applications). Hopefully, it should also translate into lower memory usage in most cases. For the programmer, it appears to be more or less transparent.
  • Swift has operator overloading, like C++. It might even be more powerful than C++ in the sense that you can create your own operators on the fly, as illustrated in the sketch below.
  • By default, Swift “crashes” when an operation overflows (like casting, multiplication, addition…); wrapping alternatives such as &+ are available (see the sketch below). The intention is noble, but I am not sure crashing applications in production is a good default, especially if it comes with a performance penalty. Swift also “crashes” when you try to allocate too much memory, with apparently no way for the application to recover sanely. Again, I am not sure it is a good default, though maybe it is.
  • It looks like it is easy to link against C libraries. Unlike with Go, I suspect that the performance of calling into C will be good.
  • Available benchmarks so far indicate that Swift is slower than languages like Go and Java, which are themselves slower than C. So Swift might not be a wise choice for high-performance programming.
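To make a few of these points concrete, here is a small sketch. The type and operator names are invented for the example; it is not SwiftBitset code. It shows value semantics with struct and inout, a custom operator, and the difference between the default trapping arithmetic and the wrapping &+ operator.

// Structs are passed by value, classes by reference.
struct PointValue { var x = 0 }
final class PointRef { var x = 0 }

func bumpCopy(_ p: PointValue) -> PointValue {
    var copy = p       // the parameter is a copy; mutate a local
    copy.x += 1
    return copy
}

func bumpInPlace(_ p: inout PointValue) {
    p.x += 1           // inout lets the function mutate the caller's value
}

var v = PointValue()
_ = bumpCopy(v)        // v.x is still 0: the struct was copied
bumpInPlace(&v)        // v.x is now 1

let r = PointRef()
let alias = r
alias.x = 42           // r.x is also 42: class instances are shared

// A custom operator, declared on the fly.
infix operator +++ : AdditionPrecedence
func +++ (lhs: Set<Int>, rhs: Set<Int>) -> Set<Int> {
    return lhs.union(rhs)
}
print(Set([1, 2]) +++ Set([2, 3]))   // the union [1, 2, 3], in some order

// Plain arithmetic traps ("crashes") on overflow; the &-prefixed
// operators wrap around instead.
let big = Int32.max
// let boom = big + 1   // runtime trap
let wrapped = big &+ 1  // wraps to Int32.min
print(wrapped)          // -2147483648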

My verdict? Swift compares favourably with Java. I’d be happy to program in Swift if I needed to build an iOS app.

Would I use Swift for other tasks? There is a lot of talk about using Swift to build web applications. I am not sure.

The rise of dark circuits

The latest iPhone 7 from Apple has more peak computing power than most laptops. Apple pulled this off using a technology called ARM big.LITTLE, where half of the processor is used only when high performance is needed; otherwise, it remains idle.

That’s hardly the sole example of a processor with parts that remain idle most of the time. For example, all recent desktop Intel processors come with “Intel processor graphics” that can process video, replace a graphics card and so forth. It uses roughly half the silicon of your processor, but in many PCs, where there is either no display or a discrete graphics card, most of this silicon is unused most of the time.

If you stop to think about it, it is somewhat remarkable. Silicon processors have gotten so cheap that we can afford to leave much of the silicon unused.

In contrast, much of the progress in computing has to do with miniaturization. Smaller transistors use less power, are cheaper to mass-produce and can enable processors running at a higher frequency. Yet transistors in the CPU of your computer are already only dozens of atoms in diameter. Intel has thousands of smart engineers, but none of them can make a silicon-based transistor out of less than one atom. So we are about to hit a wall… a physical wall. Some would argue that this wall is already upon us. We can create wider processors, processors with fancier instructions, processors with more cores… specialized processors… but we have a really hard time squeezing more conventional performance out of single cores.

You can expect companies like Intel to provide us with more efficient processors in the conventional manner (by miniaturizing silicon transistors) up till 2020, and maybe at the extreme limit up till 2025… but then it is game over. We may buy a few extra years by going beyond silicon… but nobody is talking yet about subatomic computing.

I should caution you against excessive pessimism. Currently, for $15, you can buy a Raspberry Pi 3 computer which is probably closer than you imagine to the power of your laptop. In five years, the successor of the Raspberry Pi might still sell for $15 but be just as fast as the iPhone 7… and be faster than most laptops sold today. This means that a $30 light bulb might have the computational power of a small server in today’s terms. So we are not about to run out of computing power… not yet…

Still… where is the next frontier?

We can build 3D processors, to squeeze more transistors into a smaller area… But this only helps you so much if each transistor still uses the same power. We can’t pump more and more power into processors.

You might argue that we can cool chips better or use more powerful batteries… but none of this helps us if we have to grow the energy usage exponentially. Granted, we might be able to heat our homes with computers, at least those of us living in cold regions… but who wants an iPhone that burns through your skin?

How does our brain work despite these limitations? Our neurons are large and we have many of them… many more than we have transistors in any computer. The total computing power of our brain far exceeds that of the most powerful silicon processor ever made… How do we not burst into flame? The secret is that our neurons are not all firing at the same time billions of times per second.

You might have heard that we only use 10% of our brain. Then you have been told that this is a myth. There is even a Wikipedia page about this “myth”. But it is not a myth. At any one time, you are probably using less than 1% of your brain:

The cost of a single spike is high, and this severely limits, possibly to fewer than 1%, the number of neurons that can be substantially active concurrently. The high cost of spikes requires the brain not only to use representational codes that rely on very few active neurons, but also to allocate its energy resources flexibly among cortical regions according to task demand. (Lennie, 2003)

So, the truth is that you are not even using 10% of your brain… more like 1%… Your brain is in constant power-saving mode.

This, I should add, can make us optimistic about intelligence enhancement technologies. It seems entirely possible to force the brain into a higher level of activity, with the trade-off that it might use more energy and generate more heat. For our ancestors, energy was scarce and the weather could be torrid. We can afford to control our temperature, and we overeat.

But, even so, there is no way you could get half of your neurons firing simultaneously. Our biology could not sustain it. We would go into shock.

It stands to reason that our computers must follow the same pattern. We can build ever larger chips, with densely packed transistors… but most of these circuits must remain inactive most of the time… that’s what they call “dark silicon”. “Dark silicon” assumes that our technology has to be silicon-based, clearly something that may change in the near future, so let us use the term “dark circuits” instead.

Pause to consider: it means that in the near future, you will buy a computer made of circuits that remain mostly inactive most of the time. In fact, we might imagine a law of the sort…

The percentage of dark circuits will double every two years in commodity computers.

That sounds a bit crazy. It means that one day, we might use only 1% of the circuits in our processors at any one time, not unlike our brain. Though it sounds crazy, we will see the first effect of this “law” with the rise of non-volatile memory. Your current computer relies on volatile memory made of transistors that must be constantly “charged” to remain active. As transistors stop shrinking, the energy usage of RAM per byte will plateau. Hence, the energy usage due to memory will start growing exponentially, assuming that the amount of memory in systems keeps growing exponentially. Exponentially growing energy usage is not good. So we will switch, in part or in full, to non-volatile memory, and that’s an example of “dark circuits”. It is often called “dark memory”.

You may assume that memory systems in a computer do not use much energy, but by several accounts, they often account for half of the energy usage because moving data is expensive. If we are to have computers with gigantic memory capacities, we cannot keep moving most of the data most of the time.

In this hypothetical future, what might programming look like? You have lots and lots of fast memory. You have lots and lots of efficient circuits capable of various computations. But we must increasingly “budget” our memory transfers and accesses. Moving data takes energy and creates heat. Moreover, though you might have gigantic computational power, you cannot afford to keep it on for long, because you will either run out of energy or overheat your systems.

Programming might start to sound a lot like biology.

Credit: This blog post benefited from an email exchange with Nathan Kurz.

The memory usage of STL containers can be surprising

C++ remains one of the most popular languages today. One of the benefits of C++ is the built-in STL containers offering the standard data structures like vector, list, map, set. They are clean, well tested and well documented.

If all you do is program in C++ all day, you might take STL for granted, but more recent languages like Go or JavaScript do not have anything close to STL built-in. The fact that every C++ compiler comes with a decent set of STL containers is just very convenient.

STL containers have a few downsides. Getting the very best performance out of them can be tricky in part because they introduce so much abstraction.

Another potential problem with them is their memory usage. This is a “silent” problem that will only affect you some of the time, but when it does, it may come as a complete surprise.

I was giving a talk once to a group of developers, and we were joking about the memory usage of modern data structures. I was telling the audience that using close to 32 bits per 32-bit value stored in a container was pretty good sometimes. The organizer joked that one should not be surprised to use 32 bytes per 32-bit integer in Java. Actually, I don’t think the organizer was joking… he was being serious… but the audience thought he was joking.

I wrote a blog post showing that each “Integer” in Java stored in an array uses 20 bytes and that each entry in an Integer-Integer map could use 80 bytes.

Java is sometimes ridiculous in its memory usage. C++ is better, thankfully. But it is still not nearly as economical as you might expect.

I wrote a small test. Results will vary depending on your compiler, standard library, the size of the container, and so forth… You should run your own tests… Still, here are some numbers I got on my Mac:

Storage cost in bytes per 32-bit entry
STL container         Storage
std::vector           4
std::deque            8
std::list             24
std::set              32
std::unordered_set    36

(My Linux box gives slightly different numbers but the conclusion is the same.)

So there is no surprise regarding std::vector. It uses 4 bytes to store each 4-byte element. It is very efficient.

However, both std::set and std::unordered_set use nearly an order of magnitude more memory than would be strictly necessary.

The problem with the level of abstraction offered by C++ is that you can be completely unaware of how much memory you are using.

The new C standards are worth it

The C language is one of the oldest among the popular languages in use today. C is a conservative language.

The good news is that the language is aging well and it has been rejuvenated by the latest standards. The C99 and C11 standards bring many niceties…

  • Fixed-length types such as uint32_t for unsigned 32-bit integers.
  • A Boolean type called bool.
  • Actual inline functions.
  • Support for the restrict keyword.
  • Built-in support for memory alignment (stdalign.h).
  • Full support for Unicode strings.
  • Mixing variable declarations and code.
  • Designated initializers (e.g., Point p = { .x = 0, .y = 0};).
  • Compound literals (e.g., int y[] = (int []) {1, 2, 3, 4};).
  • Multi-thread support (via threads.h).

These changes make the code more portable, easier to maintain and more readable.

The new C standards are also widely supported (clang, gcc, Intel). This year, Microsoft made it possible to use the clang compiler within Visual Studio, which enables compiling C11-compliant code there.

Starting high school in 2016

My oldest boy started high school this year. He goes to an accessible private school nearby. We went to a parents’ meeting last night.

  • Personal electronics are banned from the school. So no smartphone. No portable game console. Last year, the principal hinted that the ban was unenforceable. This year we got a message asking for our help in enforcing the ban.

    This seems awfully hypocritical. All the parents show up to school with smartphones in their hands. They check their emails every five minutes. Some don’t even have the decency to turn their phones off, so you get ringing in the middle of a talk.

    The school is organizing a symposium on “the digital” where actual university professors are going to talk about technology and its impact. I wonder whether they will talk about the fact that half the parents are smartphone addicts. Probably not. If there is a problem, it must be with the kids.

  • The French teacher (it is a French school) encourages the use of the dictionary. It seems like a big deal to him. A parent asked whether it was ok if the kid used a tablet to look up words. The teacher said it was… he admitted to using a tablet himself (at home)… but he added that he wanted to promote the feel of paper.

    I don’t know about you but I simply never look up anything in a paper dictionary these days. It is hypocritical to ask teenagers to do so. None of them will ever use a paper dictionary in the real world.

    He says he wants to promote reading. But, of course, no ebook is in sight. What is meant by “reading” is “read an actual paper book”.

    Newsflash: we have never written and read more than we do today… but very little of it is on paper.

  • Though some teachers hint at some discomfort, all the information (grades, assignments) ends up on the school’s web portal. Teachers can send assignments through the portal and students can reply with their completed assignments.

    One of the middle-aged parents, no doubt someone working for a large organization, asked whether her 12-year-old would get training in the use of the web portal. The teacher routed around the question until he ended up telling the parent that “yes, we make sure students can use the portal”. The uncomfortable truth is that kids don’t need training to use a web portal in 2016, and only a minority of middle-aged folks do.

Summary. Though my son’s school is probably ahead of most… it is still presenting a backward picture of the world. It is a place where smartphones are still in the future. It is a place where you use paper dictionaries.

Function signature: how do you order parameters?

Most programming languages force you to order your function parameters. Getting them wrong might break your code.

What is the most natural way to order the parameters?

A language should aim to be generally consistent to minimize surprises. For example, in most languages, to copy the value of the variable source into the variable destination, you’d write:

destination = source;

So it makes sense that, in C, we have the following copy function:

void *memcpy(void *destination, const void *source, size_t n);

Go, the new language designed by some of the early C programmers, follows the same tradition:

func copy(destination, source []T) int

Meanwhile, Java is arguably backward with the arraycopy function:

void arraycopy(Object source,
               int sourcePos,
               Object destination,
               int destinationPos,
               int length);

The justification for Java’s order is that, in English, you say that you copy from the source to the destination. Nobody says “I copied to x the value y”. But then why don’t we write a value copy from y to x as follows?

y->x;

Sadly, Java is not even consistent in being backward. For example, to copy data from the ByteBuffer source to the ByteBuffer destination, you have to write:

destination.put(source);

This is, in my opinion, the correct order, but if you are used to having the source come first, you might be confused by that particular ordering.

Ok. So what about working with a data structure, like a hash table? A hash table is not very different from an array conceptually, and we set array values in this manner:

array[index] = value

So I would argue that the proper function signature for the insertion of a key-value in a hash table should be something of the sort:

insert(hashtable_type hashtable, 
        const key_type key, 
        const value_type value);
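For what it is worth, here is a hypothetical Swift rendering of the same convention (the function and names are made up for illustration): the container being modified comes first, marked inout since the function mutates it.

// The data structure being modified comes first.
func insert(_ table: inout [String: Int], _ key: String, _ value: Int) {
    table[key] = value
}

var counts = [String: Int]()
insert(&counts, "apples", 3)   // reads much like counts[key] = value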

In Go, this is how you add an element to a Heap:

func Push(h Interface, x interface{})

More generally, when acting on a data structure, I would argue that the data structure being modified (if any) should be the first parameter.

What if you want to implement an in-place operation? For example, maybe you want to compute the bitwise AND of x and y and put the result in x:

x &= y

And this is how you’d implement it in C++, putting the x before the y:

type operator&(type & x, const type& y) {
   return x &= y;
}

I wonder whether it would be possible to produce a tool that detects confusing or inconsistent parameter ordering.

Are there too many people?

Without immigration, most developed countries would face massive depopulation. In fact, half the population of the Earth lives in countries with sub-replacement fertility.

The threshold for a sustained population is a fertility rate of 2.1. Taiwan, South Korea and Singapore are at 1.2; Japan and Germany are at 1.5; the whole European Union and Canada are at 1.6; Australia, the United Kingdom and the USA are at 1.8.

Yet Earth’s population is still growing. Where is the population growth? Niger, Sudan, Somalia, Mozambique, Uganda, Congo, Afghanistan… Or, to put it another way, in the poor countries.

If the current trends are maintained, by 2050, Japan will count about 100 million inhabitants, over 25 million fewer than the current count. By then, there will be many more Japanese in their seventies than in any other 10-year age group.

In Europe, Germany is currently the largest country, with 80 million inhabitants. If the current trend continues, by 2050, it should count no more than 75 million people. Germany is going to need millions of people over the next two decades just to sustain its population.

What about China? They stand at 1.4 billion people. By 2050, they should have fallen to 1.3 billion people… It is no wonder that China recently dropped its one-child policy.

Of course, countries like the USA, Canada and France are still growing… but that’s largely because of immigration, often from countries where people reproduce more readily.

Won’t better health and longevity lead to renewed population growth in the rich countries? No. Excluding immigration, population growth is almost entirely determined by fertility. That is, what matters is how soon and how many children women have. Even if Japan’s bet on regenerative medicine delivers exceptional benefits, it won’t make much of a dent on the population curve. If you could, somehow, multiply your lifespan without having any more children, you’d only add “1” to the population count. And improved health and medicine decrease your effective fertility. Healthier women who receive great medical care for themselves and their children tend to have children later, if at all, and tend to have fewer of them.

You’d think that a small population in a rich country is not a major problem. But depopulation means closing down rural towns. It means fewer scientists, fewer nurses… And because we do not yet have a handle on age-related diseases, it means more retirees needing help, with fewer working individuals to support them. Worried about imminent depopulation, Italy recently launched an ad campaign reminding ladies to hurry up and have babies. In Denmark, they teach pupils about the need to have more children.

It is true that a child is a mouth to feed. But a child might grow up to build new technology, to care for the sick, and so forth.

At least as far as the richest third of humanity is concerned, there are not too many people, and there might be too few.

The rate-of-living theory is wrong

The rate-of-living theory is popular on the Internet. The intuition is that all animals are born with some “budget” that they burn out over time according to their rate of energy expenditure. A proxy for how much energy you spend is your heart rate.

Anyhow. You are a candle. The harder you live, the shorter you will live. You have a fixed budget, spend it wisely. Live slow, live long; live fast, live short.

The famous science-fiction writer David Brin penned an essay entitled Want to Live Forever? to support the theory:

Elephants live much longer than mice, but their hearts also beat far slower, so the total allotment stays remarkably similar.

Brin’s essay basically says that evolution tried to maximize our lifespan, but that we are hitting the limits set by the rate-of-living theory.

This sounds good and intuitive.

But is it true? Notice how Brin refers only to mammals. What about birds? Well, parrots live to be 75 yet their hearts can beat up to 600 times per minute. And some animals (like the jellyfish and the hydra) are considered “immortal”.

So maybe there is something special about mammals that makes us burn like candles with each heart beat.

Maybe. But we know that large dogs live much shorter lives than small dogs. The difference is large. A Yorkshire Terrier can live 17 to 20 years. A Great Dane is expected to live only 6 to 8 years. Yet there is no correlation between body size and heart rate in dogs. So big dogs get far fewer heartbeats than small dogs.

So maybe it is not heart rates that matter but your energy expenditure.

Well. Athletes certainly do “live fast”, building up a large muscle mass and burning lots of energy even while at rest. Do athletes live shorter lives? We have no evidence of such an effect.

What about child rearing? For a woman, giving birth and raising kids is certainly a major drain. Except that women who have children live longer.

Smart and rich people live longer, but people who retire earlier do not.

The question was investigated thoroughly 25 years ago by Austad and Fischer, who demonstrated that the rate-of-living theory is wrong:

Our results fail to support a rate of living theory of aging in either a weak or strong form. The strong form suggests that mass-specific energy expenditure per life span would be approximately equivalent across species. For 164 mammalian species, energy expenditure per lifetime ranges from 39 to 1102 kcal/g/life span. Within bats, there is nearly a fourfold range of lifetime energy expenditure while among marsupials the range is nearly tenfold. Thus even within mammalian orders, this generalization seems not to hold.

A weaker form of the same idea holds that lifespan should be generally inversely related to metabolic rate. This notion predicts that: (a) marsupials should be longer-lived than similar sized eutherians; (b) heterothermic species should be longer-lived than similar-sized homeotherms, and (c) homeothermic bats should have life spans approximately equal to similar-sized nonflying eutherians. Our data fail to support any of these predictions.

These results suggest that at least between species, there is an uncertain relationship between basal metabolic rate and aging. Therefore the finding that a proven life-extending treatment such as dietary restriction in laboratory rats and mice does not routinely reduce metabolic rate is perhaps less surprising than it otherwise might be.

(…)

It is clear from the preceding that there is no simple relationship between mammalian longevity and metabolic rate. Species with low or high metabolic rates may have evolved short or long life spans, depending upon their ecology.

At this point, one might be tempted to look for interventions that “slow your life” and also extend your lifespan. But such examples would not rescue the theory. Indeed, you might have a theory that all animals living in the ocean are fish. If I point out that your theory is wrong because there are whales, you can’t argue back by giving me examples of fish that live in the ocean.

The lesson is that animals are not candles. You can work and play hard, and yet live long. You have no excuse to be boring and slow.

Further reading: Mitteldorf’s Cracking the Aging Code, Fossel’s The Telomerase Revolution and Wood’s The Abolition of Aging.

Innovation as a Fringe Activity

What do these people have in common: Marconi, Alexander Graham Bell, and the Steves Wozniak and Jobs? At least one commonality is that approximately nobody listened to them or cared about what they were doing until there was simply no way to ignore them anymore. In effect, they were living and working on the fringes of society and the only people paying them heed were the relevant subcultures that sprung up around them. Without them, we might still have had the various communications revolutions that came with telephones, radios (and their offspring, cell phones), computers and their offspring, smartphones. Even so, it is unfathomable that the world we know today would exist without innovators working on the fringes of society. From where we stand today, it’s easy to forget that almost every advance was not an obvious and perhaps necessary step. In fact, everything from automobiles to smartphones arose first in subcultures that were initially dismissed and often ridiculed.

Everybody wants innovation. Well, maybe not everybody, but I’m guessing that most people, and certainly most businesses, want new and better things and new and better ways of doing things. I think it’s safe to say that there are some people and businesses who are interested in going beyond the creation of the merely new or improved product or service to the creation of new markets. Yet, oddly, most people and businesses and governments continue to dismiss, ridicule, or legislate against the activities of innovative subcultures. The most recent such examples come from digital media. The music lovers in the computing subculture were quietly sharing music among themselves until Napster came along. Napster was proof positive that there was a large market for digital music distribution. How did the music industry respond? Not with their own services and low prices and easy to use systems, but with legal challenges, lawsuits, and lobbying for new legislation and enforcement. There was literally nothing stopping the recording industry associations and companies from creating an iTunes-like product that would have made Napster (and maybe Apple!) irrelevant.

Did the movie and television industries learn anything from what happened with music? Of course not. We are still in the throes of a mighty battle to keep programming out of the hands of the consumer. The consumer is winning. The people in charge still haven’t figured out that the longer they fight the changes, the easier it will get for the consumer to get what they want for free. All they need to do is make the content available at a reasonable price. No geographical restrictions based on old distribution models. No anti-piracy messages or ads to tempt people into the pirate market for pure content. A variety of qualities available so that each consumer can get what’s right for their system without having to go to the pirate market for what they need. All of those things are possible today and have been possible for years, but the industry keeps fighting a battle that all before have lost.

The Internet has made it possible for subcultures to pop up almost overnight in response to some cool new thing (i.e. innovation). The Internet has made it possible for subcultures to go mainstream almost as quickly as they appear. Those are key innovations that are rarely recognized as such. Anyone who does not acknowledge those innovations and the associated rise in subcultures is destined for the dust heap. That applies to individuals who fail to adjust their skills, companies who fail to adopt new ways of doing things, and it will also apply to governments who fail to follow their constituents into the future.

What is innovation and why should we care? Depending on your objectives or how you are affected, innovation could be something as simple as letting people pump their own gas or as complex as designing and building a personal computer. We should care because all progress, all advance, all improvement comes from doing something new or at least doing something old in a new way. If you are a business owner, you need to be at least paying attention to innovative products, services, and processes so that you can judge how your business will be affected. If you are not running a business, you are probably working for one, and being unaware of innovations in your field means that you risk becoming obsolete along with the skills you now have. Between 1970 and 2010, 40 years, computers, especially home computers, went from deep in fringedom to as common as televisions. Despite this unprecedented shift, home and small business computing were still fringe activities and generally ridiculed for about 20 years.

Here is my favorite story of how innovation has affected an extremely large market segment: mail order shopping. In Canada, Eaton’s was once the market leader of mail order shopping. The catalogue at the beginning of the 20th Century described products from socks and wedding dresses to tools and houses. Yes, houses. You could order a house which would then be delivered as a kind of kit containing all the necessary lumber, doors, windows, etc. along with the plans. I don’t know how many actually did the building themselves, but that’s not really the point. Think about it: you got the Eaton’s Catalogue delivered by mail, you picked out a house and maybe some socks, you sent that order to Eaton’s by mail, and some time later your socks and house were delivered.

Then Sears stepped in and added a distribution network. Roads and trucks were both much improved over the decades. In addition to the same kind of catalogue sales that Eaton’s was doing (maybe not houses), they set up little depots in almost every town and a great many villages. A depot might have been something as simple as a corner in an existing business, but there were catalogues, you could place your order without the cost of an envelope and stamp, and shipping was either free or greatly reduced in price. On top of that, there would be a few items on display so that you could actually inspect what you were ordering. As if that wasn’t enough, returns were as easy as if the depot was a ‘real’ store, making it possible to order a few items of clothing, try them on at home, and return what you didn’t want to the depot. Some depots even had fitting rooms to save the trip home. Needless to say, Sears eventually pushed Eaton’s out of the mail order business and Eaton’s ultimately closed their doors. The Sears depot fed by a fleet of trucks was not just an innovation; it was a disruptive innovation that ultimately forced their main competitor in the field out of business. Sears did not invent the road or the truck, but they did notice that the roads were getting better, road networks were expanding, and trucking was becoming a booming business.

That story is not over yet. The Internet came along. At least initially, farmers and villagers had the same level of Internet access as city dwellers because everyone was just using acoustic modems. Note that those farmers and villagers were very important, because they were the heart of the mail order business. Along with the rise of the Internet, people started conducting business online in what can only be called an updated mail order system where the catalogue and orders moved across wires instead of through the mail.

What Sears did next was, in my opinion, nothing short of amazing. They ignored the Internet and not just out of ignorance. There were any number of fringe players suggesting that they set up Internet-based catalogue shopping, and I’m willing to bet that some of those people were Sears employees. It was a deliberate dismissal of the technology and potential. They could have had computers in their little depots to accommodate those not quite ready to join the Internet revolution. They already had the distribution network in place, something that Amazon is only just getting around to dealing with and which anyone smaller than Amazon can’t even dream of. I can see how Amazon may have still managed to take over books, but I can’t imagine that they would be the retail powerhouse they are now if Sears had bothered to look into what the Internet had to offer. I’m convinced that we’d all be shopping at Sears instead of Amazon and shipping costs would not be such a major factor in purchase decisions.

Contrast that with Glen-L and Lee Valley. They are both relatively small businesses that did a lot of mail order business. In the case of Glen-L, that was their only business. If you wanted to buy plans to build your own boat, Glen-L was one of the major players and you had no choice but to deal with them through mail or phone orders. Lee Valley had storefronts, but mostly as a way for people to handle the tools they were interested in and for people to get advice from relevant professionals. Both were quick to embrace the Internet and both are still counted among the leaders in their markets.

Step back a bit and think about that. Mail order used to be something that virtually everyone needed to do in order to get products unavailable locally. Then it was something used only by people outside major centres, and even that was dwindling along with the rise in cheap, fast personal transportation. Then along comes online shopping and now the modern equivalent to mail order threatens the very existence of regular stores.

Amazingly, businesses still generally ignore, often willfully, the innovations that are happening around them and suffer accordingly. From those stories, we should have learned that we don’t actually need to be innovators in the sense of creating completely new things like the Internet in order to benefit from innovation. However, if we pay attention, we might be able to completely disrupt our markets while not even really changing much about how we do business. If, like Lee Valley, Sears had embraced the Internet as an alternative way to put out their catalogue and take orders, almost nothing about their actual business would have changed, but it’s likely that Amazon would not be the powerhouse they are today and Sears would not be closing stores, trying to reposition themselves in the market, and just generally sliding off into irrelevance.

In short, and this is my real thesis, you don’t need to be the Steves doing whatever they were doing that led to Apple being the company we see today. No, you don’t need to be one of the Steves, you just need to pay attention to what the Steves are up to and see how that might affect you. Better, you should be talking to them to see if they have any idea how what they’re doing might affect you.

IBM had already transformed themselves from an office equipment manufacturer and supplier to one of the leading technology companies of the era, so they knew more than a little bit about innovation and how to foster it. They employed people like theoretical mathematician Benoit Mandelbrot, he of the Mandelbrot Set and formaliser of fractal geometry, to do whatever it was he was doing. It’s a safe bet that approximately nobody at IBM including C-level executives understood his work, but you can be sure that they were aware of what he was doing and trying to find ways to profit from it. They knew from experience that if they hired enough smart people and gave them the freedom to do strange and unusual things, enough of it would eventually more than pay for itself. When IBM saw what Apple and other upstarts were doing, they recognised an important new market. In very short order, they had their engineers on the case and created the IBM-PC. PC stood for personal computer and it was marketed to homes and small businesses. They didn’t invent the personal computer, they just paid attention to those who did.

Some large businesses already had computerised systems, probably just in accounting, but a few people were actually crazy enough to bring in their own PCs or spend their own money on PCs for their departments or their jobs. Most of that was shut down quickly and forcefully, but a few companies allowed it as long as the job still got done and a rare few actually supported those initiatives by allowing the use of department funds for the purchase of PCs.

Let me repeat that.

Some people were actually crazy enough to bring in their own PCs or spend their own money on PCs for their departments or their jobs. Most of that was shut down quickly and forcefully, but a few companies allowed it as long as the job still got done and a rare few actually supported those initiatives by allowing the use of department funds for the purchase of PCs. If that is not a fringe activity, nothing is. Those are probably the two most important sentences in this essay. Remember them, because I’ll be coming back to them.

Smartphones. A lot of people have them. What is the one place where smartphones are frowned upon and frequently prohibited? Work. Yes, some industries and some companies have been making limited use of them, but in general, bringing a smartphone to work is like bringing a football. It’s not really a problem, but it better stay in your locker during work hours. One of the most common complaints I hear from employers and managers is how difficult it is to find a young worker who isn’t glued to their phone.

Remember these sentences?

Some people were actually crazy enough to bring in their own PCs or spend their own money on PCs for their departments or their jobs. Most of that was shut down quickly and forcefully, but a few companies allowed it as long as the job still got done and a rare few actually supported those initiatives by allowing the use of department funds for the purchase of PCs. Instead of reading “personal computer” everywhere you see “PC”, try reading “pocket computer”. To a very good approximation, everyone under 30 is carrying an Internet-connected pocket computer everywhere they go and most businesses so severely curtail their use that they might as well be left at home. How about this instead: everybody with a pocket computer—okay, smartphone—is allowed to use it as they see fit as long as two conditions are met:

  • The employee submits to productivity monitoring of some kind. Most successful businesses are already pretty good at monitoring productivity, so ‘submitting’ to it is already a given if you want a job at all.
  • The employee has to sit down once a month to explain how they are using the smartphone to improve efficiency, effectiveness, or just simply make the job more pleasant.

It’s time for a couple of thought experiments. First, imagine that you own a restaurant and that you are choosing to allow unrestricted use of smartphones under those conditions. If you are having trouble imagining the outcome, try answering the following questions:

How long will it take for the wait-staff to use their smartphones instead of the little order pads and pens so that they can text the orders to the kitchen? How will that affect costs? How will that affect efficiency, given that the only time they go to the kitchen is to check on and pick up orders?

How long until the kitchen staff starts texting the wait-staff for clarification and orders ready to be picked up? Again, how will that affect efficiency and effectiveness? How long until regular customers start texting their favourite wait-staff to book a reservation? How long until wait-staff start texting regulars when a favourite meal is selected as the daily special?

How long until the customers start placing their own orders instead of waiting for someone to come to their table? How does that merger of take-out and in-house orders affect your business? Does that even work at all? If it just causes pandemonium, can you find a way through the mess? How long until one of the staff or a friend creates a simple app to streamline the system that evolves?

What happens if you are the only restaurant in town doing that? Will you drive customers away? Will you attract new customers? Assuming it attracts more customers than it drives away, is it more profitable to keep it to yourself or to market the system to other restaurants, including competitors? What happens if everyone starts doing it? Can you create or maintain a successful restaurant business by catering to those who value the personal service of a waiter or waitress?

San Francisco was basically ground-zero for Google Glass. The general public either ridiculed or feared the technology. People wearing Google Glasses were called ‘glassholes’. They were routinely banned from restaurants, bars, and other private places where the public mingled. Critically, they were also banned by most employers. Imagine your restaurant in San Francisco. What would have happened to business if you had welcomed that subculture? What would have happened to business if you had gone beyond merely welcoming them, but actually did things that took advantage of the fact that your customers were wearing Google Glasses? What would happen if you gave your staff Google Glasses and then had them spend time with those early adopters learning how to apply the technology to their jobs?

Now imagine that you have a manufacturing plant with an assembly line. How long until the welders are sending the drafting department photos of a design problem? Or letting inventory know of a shortage? Or a scheduled maintenance notice pops up in Google Glass when someone happened to look at a machine that was due for service? Or…

Everything I’ve said comes down to one simple thing. New things come from doing new things and those doing new things are usually part of one subculture or another. They are, in some way, on the fringes. As long as you keep doing things the same way, you will get the same results. You might think that’s just fine and the way things should be, but don’t be surprised if something comes along to destroy it. If you find a way to let people find new ways of doing things at any level, you will have a much better chance of surviving sweeping changes in whatever industry you are in.

For example, if you are a truck driver or own a trucking company, you better be thinking about what happens when the truck can drive itself from the outskirts of one city to the outskirts of another. It will look like a slow start because it will be limited to restricted access divided highways in places with no snow, but once it starts, it will transform the industry in a decade or two. Think about where computers were in 1984. Basically governments, very large companies and a few basements and schools had computers and they were used only in limited ways for specific purposes. Importantly, those with a computer at home were routinely ridiculed or at least dismissed. A decade later, almost every shipping and receiving department had a computer, as did virtually every trucking company, big and small. The Internet was just starting to make itself known to the general public, again soliciting ridicule and dismissal. How about adding another decade? By 2004 even grandma was at least thinking about using email and it was basically the end of the road for any employee unwilling to learn how to use a computer. As with computers, truckers will definitely not be the only ones affected by self-driving vehicles, so who in your company is thinking about those things and what are you doing to support them in helping your business survive?

If you’re a manufacturer, there is a pretty good chance that you have at least one employee who is part of a ‘maker’ club. Can you offer space or equipment? Can you pay the membership fees as a benefit to any employee who is interested in joining? If you got stopped at wondering what a maker is, then I recommend a quick Internet search. In a nutshell, a maker is to manufacturing and robotics what hotrodders were to automobile repair and design. That’s where most of the real talent was. Whether you were an engineer with Ford, a mechanic at a local dealership, or just an accountant, if you had a passion for cars you were souping them up and making them over in ways that ultimately informed the whole industry. Just as automakers eventually learned to pay at least a little attention to what the hotrodders were up to, so should every manufacturer be paying attention to what the makers are up to.

What is the one thing that ties together all of these successes? Most of the innovation comes from the fringes. If you don’t know who Elon Musk is, you better learn, because if there ever was a fringe actor, he is, and he has transformed banking (PayPal), auctions and shopping (eBay), and is about to transform personal transportation (Tesla). The most important thing to realize about Elon Musk is that he is tapping into subcultures as a way to change the world. The electric car subculture is at least 30 years old, consisting primarily of people converting cars to electric and the few small businesses that they created to serve the other members. Musk is taking their success, refining it, and marketing it to other subcultures (environmentalists, performance driving enthusiasts, luxury driving enthusiasts, etc.) and generally forcing everyone to sit up and take notice. It’s fair to say that, along with Google, he is one of the leaders in self-driving technology. Unlike Google, who is primarily doing research, Musk is putting systems into place in cars that people can buy today.

So go out there and find the people who are doing things you just don’t get. Talk to them. Find ways to support them. The current craze is Pokemon Go. Is there a way to use that to attract new business? I don’t know, but I bet the avid players know. What does the technology itself (augmented reality) tell you about what the future holds for your business? Given that it’s basically a database used to overlay relevant entries (in this case a cartoon character) onto the camera display and a way to interact with that data (character), I would guess there are a number of possibilities. Maybe pointing the camera at a machine in your shop will display the service history on that machine. Maybe someone will ultimately transform the factory floor by building the cameras and displays into glasses. (Oh, wait, someone already tried that with Google Glass, but the fringe group that experimented with them were just ‘glassholes’.)

Anyway, I’m not the person to ask. The right person to ask is the employee whose job performance is suffering because they’re so engrossed in the game. Yes, you need to address the performance issue, but maybe it’s by getting them to think about what that technology could mean to their job and, by extension, to your company instead of just threatening them with dismissal or imposing an outright ban.

(Credit: This is a guest post by Ron Porter)