Food printing may eventually replicate nature's diversity - but only after we've lost it

by Nicolas Gülzow

When I decided to write my next blog post about the future of food, the first thing that came to my mind was synthesised food for astronauts. Once we understand how any flavour can be replicated, we’ll just squeeze a daily nutrition gel into a plastic bottle, add a fancy flavour like “duck breast in a raspberry-honey sauce” and a cool neon colour, and then consume it every day because we’re too busy to sit down and eat. Sounds convenient, right? Not really. And that’s one of the reasons why this is very unlikely to happen. People don’t just eat to keep their bodies alive. They eat for pleasure, identity and community. That explains our very conservative attitude when it comes to food.

Most of us will only enjoy, or even try, food that is very similar to what we’ve eaten before. In addition, our requirements of food are much more complex than you might think. We still haven’t learnt enough about nutrition to synthesise a meal that not only fuels our bodies but also keeps us healthy and vital in the long run. Take baby formula, for example. Although we’ve been working on it for almost two centuries, the recipe still isn’t close to mother’s milk. There is a substantial amount of work ahead.

But after we’ve mastered the task of making sure synthetic foods give us the right kind of nutrition, couldn’t 3D food printers be used to make Soylent-style foods look or feel more appetising? Yes, probably. Many experts believe that 3D food printing will revolutionise food production by boosting culinary creativity and nutritional customisability while minimising food waste. Liz von Hasseln, a Director at 3D Systems, recently predicted that food printers “will become a part of the culinary fabric” and that creative experiments, such as the sugar-based creation of garnishes, are just the beginning. Some manufacturers are already dreaming of the customisation of nutrition, allowing fitness junkies and dieticians to individualise the amounts of protein and carbohydrates. But these people expect too much from the near future. Even a simple food item like a tomato would still require millions of different ingredient cartridges to replicate.

Of course, increasing culinary creativity and nutritional customisability is both interesting and beneficial. However, it doesn’t solve the biggest problems that matter today. In fact, modern food production is a major source of the challenges we face. In the US, livestock alone is responsible for half of our antibiotic use, one third of the nitrogen and phosphorus entering freshwater resources, and 55% of erosion and sediment. Our planet is being harmed in ways that damage people’s health and threaten our food security. To put it another way, our food system is both the agent and the victim of environmental damage, and it needs to change as soon as possible. Our society has to focus on rescuing biodiversity rather than on technologies that might be able to replicate what we will otherwise lose.

Our world population is unlikely to stabilise and will probably hit 11 billion by the end of the century. Conventional methods of increasing food production can only meet that extra demand by generating even more social and environmental harm, as well as greenhouse gases, further undermining our capability to grow food. According to Helen Browning, the head of the Soil Association, global soils have already been degraded by about 40%. "If we don't take care of our soils, we won't be able to feed people in 50 years." We have to work out a new type of agriculture that is capable of feeding more people in an ethical and socially responsible way, while at the same time protecting the ecosystem upon which we ultimately depend.

Sustainable intensification is one way to do it. It aims to increase food production from existing farmland in a way that can be continued indefinitely into the future. The term describes what needs to be achieved based on the existing environmental, social and cultural context, rather than recommending specific methods. Let me give you two reasons why many experts think this is the only way forward. First, we need to increase future output through higher yields rather than through increases in the area of farmland. Although additional land that could be used for agriculture exists, it consists mainly of forests, wetlands and grasslands, whose conversion would greatly harm ecosystem services and biodiversity. Secondly, food security requires as much attention to sustainability as to productivity. Business as usual will get us nowhere. We need to radically rethink food production to achieve significant reductions in environmental and social impact.

A lot can be done with well-known methods and technologies, but there is also a great need for more research and innovation. Solum, a California-based startup that builds software tools to improve agriculture through data analysis, recently raised $17 million in funding from the venture capital firm Andreessen Horowitz. Their platform enables farmers to correlate nutrient measurements and fertiliser application with actual yields, in a constantly improving feedback loop. Over time, the result for farmers should be a cycle of increasing crop yields driven by ever-smarter and environmentally sustainable use of fertiliser, water and other resources. Such high-tech approaches should be seen not as a substitute for, but as a complement to, existing methods of sustainable agriculture such as permaculture and organic farming. No single method on its own is able to achieve sustainability and security in the food system. But combined under the common goal of sustainable intensification, they might enable us to turn the tables before our planet is damaged beyond recovery.
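
To make that feedback loop concrete, here is a minimal sketch of the general approach: correlate soil measurements and fertiliser application with yield, then use the fitted relationship to guide next season's decisions. The numbers and the model are invented for illustration; this is not Solum's actual software, data or algorithm.

```python
# Illustrative sketch only: made-up field data and a toy model, not Solum's platform.
import numpy as np

# One row per field: soil nitrogen (kg/ha), fertiliser applied (kg/ha), yield (t/ha)
soil_n     = np.array([40.0, 55.0, 60.0, 35.0, 50.0, 45.0])
fertiliser = np.array([120.0, 90.0, 80.0, 140.0, 100.0, 110.0])
yields     = np.array([7.1, 7.4, 7.3, 6.9, 7.5, 7.2])

# Fit a simple linear model: yield ~ intercept + soil nitrogen + fertiliser
X = np.column_stack([np.ones_like(soil_n), soil_n, fertiliser])
(intercept, b_soil, b_fert), *_ = np.linalg.lstsq(X, yields, rcond=None)

# Feedback step: estimate the marginal yield gain from an extra 10 kg/ha of
# fertiliser, so next season it is only applied where the gain justifies the cost.
print(f"Predicted gain from +10 kg/ha of fertiliser: {b_fert * 10:.2f} t/ha")
```

Each harvest adds new rows, the model is refit, and the recommendations improve - which is the constantly improving feedback loop described above.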

Big data - separating hype from reality: an interview with Matt Kuperholz

Angus Hervey

One of the things I like most about being a part of Future Crunch is that I get to meet a lot of very smart people. There are plenty of them around in our hometown of Melbourne. Its unique mix of creative communities, economic prosperity and liveability makes it a pretty attractive destination for people who want to work on things that are new and interesting. One of those people is Matt Kuperholz, who I was introduced to through my housemate towards the end of last year. It didn't take long for the two of us to start talking about digital culture; at the time I was reading Jaron Lanier's Who Owns the Future, and was doing a lot of thinking about open-source movements and digital currencies. I was immediately impressed by Matt's ability to get to the heart of these issues. He's someone with the rare ability to talk about the inner workings of the internet in a way that's accessible to a non-technical person like myself, without dumbing it down or losing the complexity.

At Future Crunch our aim is to provide clear, critical and intelligent analysis of big future trends. We like to get behind the buzzwords and try to really understand what they're all about. Yes, we're optimistic, but we're not naive (hence the 'critical' part). One of the biggest buzzwords out there right now is big data, and when I thought about doing an exposé, Matt was the first person who came to mind. His area of specialisation is the application of machine learning technologies to data analysis for business, and he's been doing this for top tier companies in Australia for more than 20 years. He's also one of the nicest people you'll meet, with a passion for conscious, human experiences that's inspiring. When I asked him how he'd feel about an interview he happily agreed. We sat down for a cup of coffee a few weeks ago and here's what went down…

I wonder if you could tell me a little more about your background?

I was born in 1972 in South Africa, and have been into technology for as long as I can remember. I was one of those children who took a toaster to bed instead of a teddy bear. I've been into computers from an early age, so my decision to work in the field was a deliberate choice. Computers weren't everywhere in the 1970s like they are now, but I knew they were going to matter. I got a scholarship to become an actuary (because that's what you did back then if you were good at maths) and studied at the University of Melbourne. But computer science was my passion, and it didn't take me long to see that all the actuarial stuff I was doing could be made better with a higher degree of integration with technology.

I think if actuaries had seen the writing on the wall they would have become the pre-eminent data mining cohort. But they didn't; the industry was too established, complacent and resistant to change. That's why I jumped into an Artificial Intelligence software startup in the late 1990s. The pressures of that startup meant that I had to build data mining processes. That's an important point to remember - thanks to advances in UX and usability, most consumers now just think that you can throw in data and get results. But no software works like that. You don't just throw in data; it needs to be massaged. I ended up inventing a process around the use of software that was distinguished by its ability to model thousands of dimensions/co-variables. This was really different from conventional processes, which utilised at most a few dozen variables.

So you were in the startup world - how did you end up working for larger businesses?  

The beauty of being in a startup was that I was exposed to lots of different industries. Right from day one we were talking to potential clients about the value of an asset they all had - data. And of course the value of that asset changes depending on what you do with it, how you run it, how you twist it. I took those lessons and launched my own consulting business to teach companies how to manage their data. I did that from 2001 to 2011, and my main client was a top tier global consulting firm. They were ahead of most competitors because they took a bet on data early, and I helped them build a data mining practice that was the first of its kind in Australia and then went on to be a global leader. Right now I'm working with PwC, helping them do something similar, but better.

OK so what is big data? 

A working definition is that it's more information than you can handle with traditional approaches, which means you have to do something different. It's not necessarily new though. For example, I helped a large retailer look at every advertisement in their catalogues back in the 1990s and align them all with their advertising revenues, and then with every single line item of every single sale over time. Does that mean we were doing big data 20 years ago? Another example - what about airlines in the 1970s and 1980s? They had to take data from 40 disparate systems to talk about the entire customer journey, from their marketing to ticket purchases, to check-in and then the flights themselves. That counts as big data by almost any definition you care to use. This stuff is not new.

What is new is that thanks to Moore’s Law we are on an exponential curve. That curve is now so steep on measures not only of data volume but also data velocity that it’s on everyone's radar. That's why it's a buzzword. But it's been around for a long time. The other thing that's going on is that we're getting a much greater variety of data thanks to the incredible rise in our ability to store data on multiple devices. And then of course there's connectedness. We're in a world where about three billion people are online, and two billion of them are connecting via smartphones. Those are the first three Vs of big data (by another common working definition):  volume, velocity and variety. And if you think the growth curve is steep now wait until the Internet of Things (IoT) is in full swing. 

There are two other Vs as well. The first is veracity - how good is your data quality? Researchers know this: it's not enough just to take measurements, you need to make sure those measurements are being taken properly. Admittedly this is contentious; there's a branch of data analytics that suggests you can overcome these problems even with poor quality data. But of course, the better quality your data is, the easier it is to manipulate. And finally, there's value. In the early days you had to convince people of analytics. That's no longer the case. The world has changed. Today your decision making has to be evidence based. In the old days that was only really for scientists and geeks. Today it's everyone; even the marketers, who are now pointing to big data to drum up sales.

Does that mean there's a gap between what people say big data is and what it actually is? 

Big data is mostly a marketer's term. I'd love it if people dropped the term big data and just used analytics. People are drowning in data and aren't sure what to do with it, and this has been the case for most of this century. For example, let's say you're a big manufacturing company with thousands of sensors installed in your factories that are taking readings. That's easy these days because sensors are cheap. But even if you crunch all the data and use it to produce reports, you're still not using it to extract maximum value - you're just looking backwards. One of data's big selling points is that you can use it to produce real-time information. But you have to ask yourself "how many organisations actually need that?" Sure, it's useful for border patrol security, or air traffic controllers, or Amazon. But most ASX500 companies don't need that much real-time information. They just need enough information to make decisions about what they're going to do in a week or a month's time.

That's why big data is so over-hyped. Everyone says they're doing it but very few are. And those that are doing it aren't doing it very well. Information brokers or companies at the front of the information economy are probably the best at it - Google, Amazon, Facebook, Uber, Alibaba. They're doing it properly because it's built right into their business model. Everyone else has room to improve. Banks, telcos and airlines are comparatively behind pure information economy companies. Sure, they've got all the parts, such as the hardware and the people to do it properly. But often they're looking at a bowl full of quality ingredients and saying "where's my awesome cake?" I don't think that's going to get sorted out until the need to get it right is embedded into the actual structure of a company. That's probably going to take another generation, and requires a very different mindset.

It's not just companies that are doing big data though right?

Once you get out of the private sector it changes a little. Some parts of government are actually pretty good at big data because they need to be. The USA and Israel for example have a huge security apparatus that depends on getting their data analytics right. In some of the more classified areas it's really bleeding edge stuff - things like real time facial recognition, telephone conversation scanning or other intelligence flows. The reality is that private sector companies don't need to track millions of entities across a wide variety of sectors like the government does. They just need to analyse one sector. And right now they don't need to be perfect, they just need to be one step ahead of their competitors. Governments have a duty to protect and serve their citizens whereas the private sector just needs to make sure it's profitable.

The reason we all get hyped about big data is because we get glimpses of possibilities that are way out on a curve. The gap between the hype and the reality isn't because we can't do it technically, but because there just isn't a need to do it yet. For example, take the big banks in Australia. They're making solid profits, and really leading edge data analytics isn't necessary for their survival yet. That's different to something like microtrading on the stock market. When it started a few years ago there were only a few people doing it, but it gave them such a competitive edge that almost immediately everyone had to follow. It was a question of survival. Big data isn't there yet for the majority of real world business problems. That's why when you hear someone hyping big data solutions keep in mind that it needs to start with a real business problem. That's the first and most important step. Until that problem exists (and it needs to be compelling) then business won't do big data for data's sake. 

Where do you see big data (sorry, analytics) headed in the future? 

The IoT is going to be massive. Connected devices are going to create an even steeper curve. And they might force us into federated data models whereby you don’t own the data, you just have access to it. In my mind that's really interesting for the greater good of society. Right now data is regarded as an asset, so if you're in a commercial environment you don't share it. The concept of federated data is that it's semi private; so all owners go into a common pool where everyone gets to use it. A nice example of this is SENSE T in Tasmania which is being used to monitor everything from oysters to air quality.

In the US they're a little further down the road than we are; government has partially adopted federated data because they allow it. There's a difference between this and the open data movement because you still recognise that it's yours but you allow people to use it. Once you’ve got the IoT and you’ve got all the data out there as well as the cloud, it means individuals or organisations don't need to ever invest in hardware, you can just have access on demand. That opens up a world where people can use data analytics to really start pushing the envelope. It's not just cancer, weather or infrastructure. We're talking about societal change at large. It's possibly the next really big leap in our technological evolution, where we end up creating something that didn't exist before.

We've done this before, remember - think about what we did with copper cabling. Our wires hit their theoretical speed limits a long time before we actually moved on. We pushed the physical limits far beyond what was thought possible using mathematical optimisation, which is what happens when you take your current model of how something interacts with the real world (which is messy) and minimise the negative externalities. And greater efficiency has societal impacts, and offers the holy grail of 'win-win', of creating something from nothing. For example, when UPS and FedEx used data analytics to do route optimisation, not only did it save them money, but it helped citizens because it meant fewer carbon emissions and better customer service. Right now the planet is faced with huge inefficiencies. We're wasting food, destroying the environment, and creating unnecessary pollution. I think we should be saying "I want to do what I do better." Data analytics allows us to do that.

What's the flipside? 

As we're now all too aware, we might end up throwing civil liberties out the window. Facial recognition is a technology that's going to be ubiquitous in the future. We're going to end up in a world where you really cannot hide. Also, are we aware of what all this electromagnetic radiation is doing? Will optimisation increase the disparity between rich and poor? What about warfare? History says we invent stuff and then make weapons out of it. And what about machines becoming self-aware? Like anything, this new technology has the power to bring people together but also to drive people apart. What if data is the currency that's going to drive a wedge between us all?

So why should people care? 

Any discussion of the future of big data reminds me of Hofstadter's Law: everything always takes longer than you think, even when you factor in Hofstadter's Law. In the very long term I think data is key to our evolution. Remember, we've already transcended space and time. We can create physical objects at a scale of nanometres; we can't see anything at that level with the naked eye, and yet there are things down there that we've created with intentional design. And thanks to our machines we can now think in gigahertz. We're able to process thoughts or questions far faster than was physically possible just a few generations ago. Our brains operate in a space from millimetres to kilometres, and in time from a fraction of a second to a lifetime. But our machines create things at the atomic level, and operate in microseconds.

For the first time in our history we are evolving with a purpose that goes beyond the Darwinian scale. After all, if you break it down a person is a program encapsulated in DNA, which runs in place and accumulates materials from its surrounding environment to eat, walk, procreate and think. Data is a key not only to our evolution but also our survival. You can't get off the planet without it, and it can overcome the biggest problems facing us. We're getting close. The future of analytics and data is so bright we need to wear augmented-reality shades. But it's not here yet. 

A report back on our talk at Link Festival 2015

Dr Angus Hervey

Last week we presented the opening keynote at Link Festival, a conference in Melbourne exploring the nexus of design, technology and social change. We had a great time; and the audience were super engaged. Here's how it all went down (through the eyes of social media)...

Earthrise

Earthrise, December 24th, 1968. This photo was taken on the Apollo 8 mission, which was the first manned space voyage to leave the gravitational influence of the Earth and orbit another celestial body.

Humanity's curiosity about the heavens has been universal and enduring. Humans are compelled to explore the unknown, discover new worlds, extend the boundaries of our scientific and technical limits, and then push further. Curiosity, while unkind to our feline friends, is vital to the human spirit, and the desire to explore and challenge the boundaries of what we know and where we have been has benefited our society for centuries.

Space exploration helps to address fundamental questions about humanity’s place in the universe. Through confronting the challenges related to human space exploration we develop technology, create new industries, and help to foster a peaceful connection with other nations.

The science and technology required to transport and sustain explorers will drive innovation and encourage creative ways to address challenges. As previous cosmic endeavors have demonstrated, the resulting ingenuity and technologies will have long lasting benefits and applications.

However, measuring the progress gained from space exploration in dollars and patents is grossly inadequate. In this moving video (below), Neil deGrasse Tyson argues passionately that we can’t truly understand our planet unless we leave it. He explains that following the famous 1968 Earthrise photo, taken during the Apollo 8 mission, humanity’s view of the world was transformed. Seeing the world as a whole, a home without man-made borders, ignited a newfound respect and passion for the ecosystem that we all share. In the years immediately following Earthrise, the Environmental Protection Agency, Earth Day, the Clean Air Act, the Clean Water Act, the Endangered Species Act, and Doctors Without Borders were all formed. The impact of seeing Earth as a unified whole cannot be overstated.

Come join us on November 13th at 6:30 for our Space Event to discover where we are in our cosmic quests and to celebrate Earth as a unified whole, a view that can only be observed from space.

Future Crunch does Tassie

Future Crunch has just got back from a whirlwind trip to Tasmania. We had two great events and met some wonderful people. Thank you so much to everyone who came to support us, and a big shout out to Tasman Quartermasters and Stuart Addison for taking such good care of us.

We got a nice write up in Tasmania's Mercury newspaper over here.

We also managed to do a radio interview with ABC Hobart, below.

The genie's out the bottle: why the future is determined by technology laws

Dr Angus Hervey

21st August 2014

If you've been rummaging around the more geeky corners of the internet for any amount of time you'll no doubt have heard of something called Moore's Law. It owes its name to a guy called Gordon Moore who, back in 1965, published a famous graph showing that the number of 'components per integrated function' on a silicon chip (a measure of computing power) seemed to be doubling every year and a half. In practice this means that computing power doubles roughly once every 18 months. For many people Moore's Law is the backbone of any discussion around technological progress. Although it was originally based on only five data points, it turned out to be an astonishingly accurate prediction, having now held true for almost 50 years. Today it's settled into an almost iron rule of innovation, and as the global economy moves into an era dominated by things that get done on computers, its implications are becoming ever more profound.

The reason people get excited about this stuff is that it means technological progress is exponential in nature. In 1969 the United States put two men on the moon. That mission required more than 3,500 IBM employees and the most sophisticated computer programs ever written. Today, the HTC Nexus One smartphone holds more computing power than all of that technology combined. They're literally using these phones to launch satellites. And as we are fond of repeating at Future Crunch, our brains just aren't wired to think exponentially. We look at our laptop and think that in ten years' time it's going to be ten times better. But it won't be – it's going to double its computing power seven times in that period, so it's actually going to be 128 times better. That means that in 2024 not only will your device be way more powerful, it will also be cheaper, use less energy, and probably won't be a laptop anymore.
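
The arithmetic behind that claim is simple compound doubling. Here's a quick sketch using the popular 18-month doubling period quoted above:

```python
# Growth of a quantity that doubles every `period` years.
def growth_factor(years, period=1.5):
    return 2 ** (years / period)

print(growth_factor(10))  # ~101x: ten years is about 6.7 doublings
print(2 ** 7)             # 128x: round up to seven full doublings, the figure quoted above
```

Either way, the contrast with linear thinking is the point: not "ten times better", but closer to a hundredfold.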

Image courtesy Trickvilla

Ray Kurzweil (arguably the world's best known futurist) views Moore's Law as part of a longer and larger process of technological progress, stretching back a long way through time. Kurzweil claims that before the integrated circuit even existed, four previous technologies - electromechanical, relay, vacuum tube and transistor - had all improved along the very same trajectory. He formulated this as the Law of Accelerating Returns, and his belief is that it will continue beyond the use of integrated circuits into new technologies that will lead to something called the technological singularity. If you're keen to go down the rabbit hole on this you won't find any shortage of material out there. Just be prepared to kiss goodbye to many hours of your life staring at the internet. It's also worth pointing out the common (but mistaken) belief that Moore's Law makes predictions regarding all forms of technology, when it was originally intended to apply only to semiconductor circuits. A lot of people now use Moore's Law in place of some of the broader ideas put forth by Kurzweil. This is a mistake - Moore's Law is not a scientific imperative, but rather a very accurate predictive model. Moore himself says that his predictions as applied to integrated circuits will no longer be applicable after about 2020, when integrated circuit geometry will be about one atom thick. That's the point at which Kurzweil says other technologies, such as biochips and nanotechnology, will come to the forefront to move digital progress inexorably forward.

That's not what this blog post is about though. We're more interested in future crunch - the kinds of things you start to see happen when multiple technology laws begin to take hold at the same time. I use the term 'laws' loosely, since most of them are actually predictions or observations of patterns. And what's interesting is just how many of them are out there. In fact, it turns out that Moore's Law is just the tip of the iceberg. Koomey's Law, for example, states that the energy required for a given amount of computation is halved every year and a half. This less well known trend in electrical efficiency has been remarkably stable since the 1950s - long before the microprocessor was invented. Moreover, it's actually faster than Moore's Law, since the number of computations you can perform per unit of energy has been doubling approximately every 1.57 years (as opposed to the number of components on a silicon chip, which has been tracking closer to 2 years).

Image courtesy Lorenzo Todi

Kryder's Law is the storage equivalent of Moore's: it states that the number of bits we can cram onto a hard drive of a given size is also doubling roughly once every 18 months. In 1980, Seagate introduced the world's first 5.25-inch hard disk drive (remember floppy disks?), which could store up to 5 MB of data at a price tag of US$1,500. Today, 34 years later, you can buy a 6,000 GB drive from the same company for $600. That represents a million-fold increase in capacity, combined with a seven-fold decrease in price (accounting for inflation). Not even Moore's silicon chips can boast that kind of progress.
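
If you want to check those numbers, the back-of-the-envelope arithmetic looks like this. The inflation multiplier is my own assumption, roughly in line with US consumer prices between 1980 and 2014:

```python
# Rough check of the Kryder's Law figures quoted above.
old_capacity_mb, old_price = 5, 1500            # 1980: 5 MB for US$1,500
new_capacity_mb, new_price = 6000 * 1000, 600   # 2014: 6,000 GB for US$600

print(f"Capacity increase: ~{new_capacity_mb / old_capacity_mb:,.0f}x")  # ~1.2 million-fold

inflation_1980_to_2014 = 2.9                    # assumed CPI multiplier, for illustration
adjusted_old_price = old_price * inflation_1980_to_2014
print(f"Inflation-adjusted price drop: ~{adjusted_old_price / new_price:.0f}x")  # ~7x
```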

In the field of biotechnology, advances are also outpacing Moore's. In 1990 the US government set out to complete one of the most ambitious scientific projects ever undertaken - to map the human genome. They committed more than $3 billion, and gave themselves 15 years to complete it. Seven years in, they had only completed 1% and were through more than half their funding. The government and sponsors started panicking. Yet by 2003, the Human Genome Project was finished ahead of schedule and $500 million under budget. This was made possible by exponential improvements in genome sequencing technology; past a certain point they even started outpacing Moore's Law. This kind of progress is astonishing when you think about it. It cost around 3 billion dollars and took 13 years to sequence the first human genome. Today it takes less than a day, and by the end of this year it's predicted to cost less than $1,000.

Image courtesy IEEE

Communications technologies are also progressing exponentially. For example, if you look at the number of possible simultaneous “conversations” (voice or data) that can theoretically be conducted over a given area in all of the useful radio spectrum, it turns out these have doubled every 30 months for the past 104 years. This observation was made by a guy named Marty Cooper, probably the most influential man nobody has ever heard of. He's the father of the mobile phone; the modern-day equivalent of Alexander Graham Bell. While working for Motorola in the 1970s he looked at the cellular technology used in car phones and decided that it ought to be small enough to be portable. Not only did he conceive of the mobile phone (citing Star Trek as his inspiration), he subsequently led the team that developed it and brought it to market in 1983. He's also the first person in history to make a handheld cellular phone call in public.

Image courtesy Daily Beast

'Cooper's Law' is even more remarkable than Moore's Law since it's held true since the first ever radio transmission by Marconi in 1895. Radio technology a century ago meant that only about 50 separate conversations could be accommodated on the surface of the earth. The effectiveness of personal communications has improved by over a trillion times since then. And unlike Moore's Law, there's no physical limit since there's no limitation on the re-use of radio spectrum. These networks can be expanded indefinitely by merely running more lines, more bandwidth, to more terminals. 
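
A quick sanity check on that "trillion times" figure: 104 years at one doubling every 30 months works out to roughly 42 doublings, which does indeed land in the trillions.

```python
# Cooper's Law: wireless capacity doubles every 30 months, from Marconi's 1895 transmission onwards.
years = 104
doublings = years * 12 / 30   # ~41.6 doublings
print(f"{doublings:.1f} doublings -> ~{2 ** doublings:.1e}x improvement")  # ~3e+12
```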

Running side by side with this theoretically unlimited increase in wireless capacity is an exponential increase in the amount of data we can transmit through optical fibre cables. Butters' Law says that the amount of data coming out of an optical fibre doubles every nine months, meaning that the cost of transmitting a bit of data over an optical network halves during the same period. Unfortunately, that rate of progress doesn't filter down to us as consumers - Nielsen's Law states that the bandwidth available to average home users only doubles once every 21 months. Still, it's an exponential function, and it's the reason why telecoms companies have been able to make so much money while still bringing down the cost of data traffic. Anyone remember dial-up modems? Imagine trying to stream today's HD videos on something that still sounds like this.

So what happens when you get increased connectivity through improved data transmission and falling costs? Well, you get larger networks; and according to something known as Reed's Law, the utility of large networks (particularly social networks) increases exponentially with the number of participants. The precise size of that increase is a topic of debate - Metcalfe's Law, for example, states that the value of a telecommunications network is proportional to the square of the number of connected users of the system. In other words, if your network is 10 people, then its value is 100. But if that network then doubles to 20 people, its value goes up by four times, to 400.

This idea was formulated in the 1980s and was based on connected devices such as computers, fax machines or telephones. Today though, critics point out that this is too simplistic, since it measures only the potential number of contacts (the technological side), whereas the social utility of a network like the modern day internet depends upon the actual number of nodes in contact (the social side). Since the same user can be a member of multiple social networks, it's not clear what the total value will end up being. Research around Dunbar's Number also suggests that there's a limit to the number of connections our brains can manage - most people don't want or need networks larger than 150 or 200 other people. Very large networks pose a further problem, since size introduces friction and complicates connectivity, discovery, identity management and trust, making the effect smaller.
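
For what it's worth, the competing models are easy to put side by side. This toy comparison simply plugs a few network sizes into Metcalfe's n-squared and Reed's 2-to-the-n, ignoring all the social caveats above:

```python
# Two models of network value as a function of the number of users n.
def metcalfe(n):
    return n ** 2   # value proportional to the number of pairwise connections

def reed(n):
    return 2 ** n   # value proportional to the number of possible sub-groups

for n in (10, 20, 40):
    print(n, metcalfe(n), reed(n))

# Doubling the network from 10 to 20 users quadruples the Metcalfe value
# (100 -> 400), exactly as in the example above; the Reed value grows far faster.
```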

Image courtesy TrialPay

However, I'd argue that the exact proportion doesn't matter. All that matters is that the network effect is superlinear: a doubling of a network's size more than doubles its value. And when we take the effects of this into consideration along with Cooper's, Butters' and Nielsen's Laws, we start seeing exponential increases in the ease with which people can connect to and use such technologies. For example, there are now more mobile phones used by Africans than by Europeans or North Americans, despite those two continents having more than 20 times Africa's per capita income.

This means that the so-called digital divide may disappear far more quickly than most people realise; it's why organisations like Quartz get so excited about the mobile web and their Next Billion events. And it's why tech giants like Google are pushing hard on things like Project Loon - the sooner they can get everyone connected, the more value they derive from the exponential increase in the size of the global internet. Even more interestingly, it may be that it's the tail wagging the dog - Silicon Valley investor Steve Jurvetson thinks that the shape of these laws only makes sense once you substitute 'ideas' for participants. In other words, technology is driving its own progress by steadily expanding its own capacity to bring ideas together. The implication is that the genie's already out the bottle; short of arresting half the planet's people, we couldn't stop the march of increased connectivity even if we wanted to. 

What about other forms of technology that don't involve computer chips and phones? Well, if you've been following renewable energy trends for any time you'll be familiar with Swanson's Law. This states that the cost of the photovoltaic cells needed to generate solar power falls by 20% with each doubling of global manufacturing capacity. It's represented below, in a now-famous graph showing the fall of solar costs over the last forty years. Today the panels used to make solar-power plants cost US$0.74 per watt of capacity - a 60% decline since 2008. Power-station construction costs add about $4 to that, but these, too, are falling as builders work out how to do the job better. And running a solar power station is cheap because the fuel is free. Obama's former Energy Secretary Steven Chu says that solar becomes price-competitive with fossil fuels at a cost of around $0.50 per watt. Swanson's Law means we're pretty much already up and over the tipping point when it comes to deciding whether to build new power stations using renewable technology rather than fossil fuels.
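
Swanson's Law is a classic learning curve (or "experience curve"): each doubling of cumulative manufacturing capacity cuts module cost by a fixed fraction. Here's a small sketch of the arithmetic; the implied 2008 starting price of about $1.85 per watt is back-calculated from the 60% decline quoted above, not a sourced figure:

```python
import math

LEARNING_RATE = 0.20   # Swanson's Law: ~20% cost reduction per doubling of cumulative capacity

def cost_after_doublings(start_cost, k):
    return start_cost * (1 - LEARNING_RATE) ** k

# How many capacity doublings does a 60% price decline (to 40% of the original) imply?
k = math.log(0.4) / math.log(1 - LEARNING_RATE)
print(f"~{k:.1f} doublings of cumulative capacity since 2008")  # ~4.1

# Projecting forward from the implied 2008 price reproduces today's figure.
print(f"${cost_after_doublings(1.85, k):.2f} per watt today")   # ~$0.74
```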

Image courtesy The Economist

Then there's Haitz's Law, which states that every decade the amount of light generated per LED increases by a factor of 20, while the cost per lumen (unit of useful light emitted) falls by a factor of 10 for a given wavelength (colour) of light. In other words, we are seeing exponential improvements in LED technology, which means it's becoming the dominant way we produce light. At home we get brighter, more efficient lighting at lower cost, while commercially it means that LED lighting can now be used for more specialised applications, such as in large stadiums and amphitheatres. The result is lower electricity consumption, a reduction in overall carbon emissions, and less use of the toxins found in older lighting technology, such as mercury. It's no surprise that these kinds of technologies are improving so rapidly - they all run on semiconductors, the same foundation found in our computers and communications networks.

So what does it all mean? Well, one of the difficulties in making predictions about the future is that it's almost impossible to know what people will actually do with new technologies. It's easy to predict a fall in the cost of computing power; it's far more difficult to predict that this will lead to things like the sharing economy or the rise of mobile personal technologies. That said, there are a couple of things that it is possible to see on the horizon as a result of these converging laws. However, you'll have to wait until the next blog post to hear what those are (I know, I know. But one thing at a time, right?). For now, perhaps it's time to start keeping an eye out for other interesting technology laws. For better or for worse, science and technology have unleashed patterns and forces over which we have no control. Far better to understand and talk about them, in order to harness their power, than to stand by and let them wash over us, like islands in the stream.



Future Crunch event tonight - The Autonomous Corporation

We've been following the work of a guy called Steve Sammartino for a while now. A website which does the marketing for his speaking gigs describes him as "a modern day renaissance man who has worked in marketing for the world’s largest companies, founded and sold his own start-ups, is a business journalist, and thought leader in the start-up & technology arena." That doesn't really do him justice though - a better description might be a 'digital polymath.' We love his work, especially his recently published book, The Great Fragmentation which is as good a summary of the digital age as you'll find out there right now. 

We've managed to convince him to do an event for Future Crunch, which is great! It's happening tonight, at the Library at the Dock in Melbourne's Docklands. The topic is fascinating - he's looking at something called the "Autonomous Corporation." As far as we understand it, the idea is that we've probably reached 'peak corporation.' Nobody wants corporations anymore - they've gone from something originally built to outsource risk and take advantage of economies of scale, to something that exerts an outsize influence on our lives and damages our environment, a result of the hyper-capitalism of the last 20-30 years.

However, we're now at the point where a combination of digital tools (e.g. cloud-based software platforms, crowdfunding, mobile marketplaces) and technological advances (the outsourcing of traditional 'left brain' jobs to machines) offer the promise of getting rid of the corporation as we know it. This is provocative stuff. An economy without giant corporations is almost unimaginable right? Well, to a certain extent yes. But it wasn't always that way. It's only really in the last 50 to 100 years that the modern corporation as we know it has been the dominant unit of the global economy. Before that, most people were 'self-employed' in the sense that they had a specific set of skills and experiences that they could use either to survive or which they could trade in return for other skills or goods. 

Sound familiar? It should... it's remarkably like the descriptions we now hear of the gig and sharing economy. If you zoom out far enough in history, it may be that the age of the corporation was largely an anomaly - and that in twenty years' time the services corporations provide (e.g. legal risk management and economies of scale) won't necessarily require something with a fixed geographical location or even a physical form. We'll leave it at that. If you want to hear more you should definitely get down to the event. It's free! There will be tea, beer and mulled wine (yes, the weather outside is grey but inside we'll be warm).

Also, here are some of the links we've been bouncing back and forth with Steve while he's been thinking about this talk. If you're planning on attending this event, and you're looking for more info on the topic either before or afterwards, this is your "More Reading" section.

Vox - Silicon Valley's Latest Management Craze: Explained

The Atlantic - The Freelance Surge is the Industrial Revolution of our Times

Bitcoin Magazine - Bootstrapping a Decentralized Autonomous Corporation

Fast Company - How to Find Skilled Workers in the Gig Economy

Alternet - Cut-Throat Capitalism: Welcome To the Gig Economy

Aircraft technologies in 2040

Had to post this one up. Just because you're doing futures thinking, doesn't mean you're doing it right. Exhibit A: a series of videos from BAE Systems looking at possible future fighter aircraft technologies. Apparently, they got a whole lot of experts from their R&D team and put them together with 'the UK's leading aviation thinkers from universities, government, and a whole range of companies' to predict and explore how aircraft engineering might evolve in the next 20-30 years.

Sounds impressive, right? It's not. Instead, it's pretty much one long adolescent fantasy for oversexed military tech heads. The ideas they've come up with are laughable. My seven-year-old self says "holy crap, energy weapons woohoo!" My grown-up self is going, "so hold on, you're telling me that of all the possible ways you could use technology, this group of people have decided they want to kill more people more efficiently?"

What's worse though is that they've failed rule number one of futures thinking - nothing happens in isolation. The nature of the military will change radically by 2040. The notion of using huge, bulky aircraft to conduct long range search and rescue missions or to shoot other bulky aircraft out of the sky will seem laughable by that stage - it's like the oft-quoted example of city planners at the end of the 19th century who worried about the amount of manure that horses would produce in the 20th century. You can't just extrapolate forward based on current technology - you need to think about how the entire system will change. I'm no expert in military futures, but even I can see that the way things are heading is towards smarter, small scale engagement, cyber-warfare and swarmbots and other forms of robotics that will one day make these videos a laughing stock. 

Also... is this really the best they could do with the graphics? It's like watching a bad version of Tron (the original). This is one of the most advanced aerospace companies in the world, with sales of around $30 billion in 2013 and profits of almost $1 billion. You'd think they could hire a half-decent animator. Check out the list below for their proposed "aircraft technologies in 2040":

3D printers so advanced they could print UAVs during a mission

Aircraft parts that can heal themselves in minutes

A new type of long range aircraft which divides into a number of smaller aircraft when it reaches its destination

A directed energy weapon that could engage missiles at the speed of light, destroy them and protect the people below

Hat tip to Kurzweil AI for the original blog post.

Virtual reality applications - about to hit the mainstream

Been picking up some really interesting developments in the last few days in the virtual reality field. It's amazing what Facebook's purchase of Oculus did - almost overnight we saw an experimental technology move into the mainstream, first in gaming circles and now into the general technology media. Within gaming, the advances have been rapid. I noticed this report, for example, from the E3 Conference by a journalist who'd just experienced a demo game using the new Oculus Dev Kit 2.0 (DK2). Despite being an experienced user of the first Dev Kit (DK1) he's almost breathless as he describes the improvements in motion blur, tracking and user interfaces. 

"A huge smile crosses my face. Not only have I not experienced anything outside of a first person experience on the Rift, but I wasn’t sure a 3rd person experience could be done well. Forgetting completely that DK2 has solved positional tracking, the natural inclination to lean in and check out the environment kicks in. I lean closer to the cute squirrel character and my mind is blown for what feels like the hundredth time since I first experienced the Rift. For the first time instead of being reminded that I’m not there in that cartoony world, my view leans with me. I start looking around the rest of the world pushing my face closer to objects, having them feel like they’re centimeters from my face."

Image courtesy NZ Gamer

After a pretty fun description of the game he's playing, called Lucky's Tale (involving what sounds like a successor to Sonic the Hedgehog) he concludes with the following words, "by the end of the demo I’ve shaken my head numerous times in disbelief and the muscles in my face are starting to hurt from the smile that’s determined to reach each ear. Not only has this one tech demo proven that without a doubt there is more to VR than first person experiences, but the numerous demos on the show floor seem to be wowing everyone and proving that people really are ready for Virtual Reality."

...

The other development I thought was really interesting was this report by the always excellent MIT Technology Review, in which they talk about the rise of what's called virtual documentary making. The article focuses on the work of a documentary maker called Nonny de la Peña, a former classmate of Oculus Rift creator Palmer Luckey. They studied together in 2012 at the Mixed Reality Lab at the University of Southern California Institute for Creative Technologies. She's spent the last seven years trying to prove that VR will change journalism, since she believes it will offer a novel and compelling way to communicate and inform. Well, um, yes! A doco on Syrian refugees is going to be a whole lot more visceral if you're actually walking through the footage itself.

Nonny de la Peña with VR headset

Image courtesy MIT Technology Review

Apparently, she's managed to get to the point where it's now possible for viewers with virtual-reality goggles to have a wide field of view and freely walk around the environment of the documentary, which is rendered in 3-D. They are free to choose where they look and move, but are unable to affect the linear nature of the nonfiction narrative. As de la Peña explains, "I start with eyewitness video, audio, and photographs and then carefully reconstruct an event with high-end animations, environment models, and spatial soundscapes to create a first-person experience of the events.”

What both these articles suggest is that virtual reality environments are coming really fast - they are going to hit the mainstream far quicker than anyone expects. Inside niche tech circles this is now common knowledge, but for the vast majority of media and entertainment companies it's still a novelty. When you think about it, console gaming has been around in its current form since the 80s and early 90s. The biggest advancement in the last twenty years was the Nintendo Wii, but it didn't turn out to be quite the killer app that so many expected. And in the larger media world, cinema is still in a form pioneered a century ago (I still think 3-D is a bit of a gimmick, and doesn't fundamentally change the cinema experience). 

Oculus (and virtual reality technology more generally) disrupts both of those industries completely. And that's just the start. As I've said a number of times before about this technology... watch this space.

Self-driving trucks (and Jean-Claude Van Damme)

Image courtesy Engadget

Every time I see pictures of trucks these days I can't help think of the Volvo ad with Jean-Claude Van Damme doing splits in mid air. On the face of it this is a good thing, since it makes trucks suddenly seem a whole lot more interesting. On the flipside it means every time a 16 wheeler barrels past me on the highway I'm left with the unfortunate after-image of JCVD's make-up caked face leering at me in my imagination. 

Anyway... I digress. Spotted this really interesting piece of news about Mercedes' plans to develop a semi-autonomous truck. It seems that Google and Volvo's efforts in the area of autonomous vehicles are starting to filter into the wider motor industry. Apparently, they've just performed a demonstration of what they're calling Future Truck 2025, in which a combination of radar and stereo cameras keeps the machine on the right course once it's up to speed, freeing the driver to check up on the family or get work done. It can optionally talk to other vehicles to anticipate upcoming construction or traffic jams, and it's smart enough to get out of the way if an ambulance comes speeding by. According to the press release, "Autonomous driving with long-distance trucks will be a reality in ten years time."

In the meantime... thank god we have Jean-Claude.