Articles

Bias-debasing Bayes

By Jørgen Tresse

 

“Prediction is very difficult, especially if it’s about the future.” – Niels Bohr.

We, as individuals and as a species, make predictions about future events all the time. Yet we keep getting many of them wrong, and it often seems like we’re unable to improve our predictive abilities. I present here a roadmap to uncertainty, risk and the failure of predictions, hoping to leave us all a bit wiser regarding this everyday activity.

Cognitive biases

First, a word on the difference between risk and uncertainty. A risk is something you take when you know the probability of the different outcomes. Uncertainty is what you have when you don’t know the odds. Walking home from work today, you take a chance of dying in a traffic-related accident. However, you know that the risk of this happening is low, so you deem it an acceptable risk for walking home. Compare this to the fear many have of flying, or of terrorism. These risks are certainly much lower, but there are so many uncertainties involved that the perceived risk is higher. This leads us to adopt anti-terrorism measures much more quickly than traffic safety measures. This in turn says something about how we perceive the probability of future events happening – in other words, our predictions about the future.

In addition to perceived risk, we have other cognitive biases – failures, if you will – that affect our ability to clearly and accurately predict outcomes of events. Take for example availability bias. Many studies suggest that in uncertain situations we have an easier time remembering and drawing upon things that we are more exposed to. This makes sense, but it skews our predictions. Survivorship bias is another: a sampling bias that comes from looking only at the survivors of an event. During World War II, Abraham Wald famously helped the US military build sturdier airplanes. Before Wald, they were reinforcing planes based on where returning planes had been hit, not realizing that they were reinforcing them where a plane could sustain damage and still survive the trip. Wald recognized that their samples consisted only of survivors, and that they had hence failed to consider where the fallen planes had been hit. Reinforcing the planes instead where the survivors had not been hit drastically reduced the number of fatalities.

Furthermore, we have the gambler’s fallacy – the belief that just because something has had the same outcome many times in a row, the outcome is bound to change soon. The chance of a coin toss coming up heads is 50%, regardless of how many times you have thrown heads in a row. When people are asked to simulate coin tosses – that is, to write down what they consider a plausible sequence of tosses without actually tossing a coin – they have been found to write down streaks that are too short. After maybe four or five heads, they feel that they have to switch to tails, even though with real coin tosses you can easily get eight or more heads before tails show up. Humans are very good at finding patterns in data and reacting accordingly, which gives us many evolutionary advantages. As many a gambler will have experienced, however, being good at finding signal in the noise does not always work to our advantage.
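To get a feel for how long such streaks naturally run, one can simply simulate fair coins. The sketch below is my own minimal Python illustration (not taken from any of the studies mentioned above): it tosses a fair coin 20 times, repeats the experiment many times, and records the longest streak of identical outcomes in each sequence.

import random

def longest_run(tosses):
    """Length of the longest streak of identical outcomes."""
    best = current = 1
    for prev, nxt in zip(tosses, tosses[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

random.seed(1)
trials = 100_000
runs = [longest_run([random.choice("HT") for _ in range(20)]) for _ in range(trials)]
print("average longest streak in 20 tosses:", sum(runs) / trials)
print("share of sequences with a streak of 5 or more:", sum(r >= 5 for r in runs) / trials)

In a typical run, close to half of the 20-toss sequences contain a streak of five or more identical outcomes – longer than most of us would dare to write down.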

The unknown unknowns

All these biases and pitfalls can be summed up as snap judgements – cognitive heuristics, or shortcuts, that allow us to make decisions quickly, without having to stop and consciously process all the information we are bombarded with every day. The Nobel laureate Daniel Kahneman, along with his collaborator Amos Tversky, is one of the best-known scientists within the field of heuristics and biases. In his 2011 book, Thinking, Fast and Slow, he lays out the differences between two “systems” we all have – system 1, which makes all the snap decisions based on heuristics like the ones we have already discussed, and system 2, which consciously processes information before acting on it. When it comes to making predictions about future events, we could all benefit from slowing down, recognizing our blind spots, and putting system 2 in charge.

Of course, recognizing our blind spots requires us to be aware of them in the first place – so-called known unknowns. Former US Secretary of Defense Donald Rumsfeld also gave us two other “knowns”: known knowns – which is simply what we are aware that we know – and unknown unknowns, which is the real kicker. You can’t correct a prediction for unknown unknowns, because, well, you don’t know what to correct for. Yet being aware of the fact that there are unknown unknowns is a giant leap towards making more accurate predictions. A famous study by Philip Tetlock found that experts were often no better than amateurs at predicting future events, even though they stated their predictions with great confidence. This is partly because of the wisdom of the crowd, which experts almost by definition try to stand out from, and partly because experts like to make predictions in fields other than their own (catchily named ultracrepidarianism). This leads many experts to disregard common sense, inadvertently creating their own blind spots without even realizing it. Just being aware of our blind spots, so to speak, allows us to counteract some of their effect. My proposition? Think probabilistically.

Predicting the unpredictable

One of the great proponents of increasing the accuracy of predictions is Nate Silver. He started the political website FiveThirtyEight, and correctly predicted the results in 49 out of 50 states in the 2008 US presidential election. This improved to 50 out of 50 in the 2012 election, solidifying Silver and his team as top forecasters in the game. In statistics, the reigning paradigm for more than half a century has been testing a null hypothesis (which posits that there is no relation between the variables you are examining), and rejecting it if a certain value passes a critical threshold, thereby strengthening your belief that there is a relationship between said variables. (All statistics nerds, please disregard my oversimplification of the method.) While a mathematically sound way of finding correlations, it has a few shortcomings.
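Before turning to those shortcomings, here is what the procedure looks like in miniature. The snippet below is my own toy Python illustration (nothing to do with Silver or FiveThirtyEight): it tests the null hypothesis that a coin is fair after observing 60 heads in 100 tosses, and rejects it only if the two-sided p-value falls below the chosen threshold.

from math import comb

def two_sided_p(heads, n):
    """Exact two-sided p-value under H0: the coin is fair (p = 0.5)."""
    observed_diff = abs(heads - n / 2)
    # Add up the probabilities of all outcomes at least as extreme as the observed one.
    return sum(comb(n, k) * 0.5 ** n
               for k in range(n + 1)
               if abs(k - n / 2) >= observed_diff)

p = two_sided_p(heads=60, n=100)
print(f"p-value: {p:.3f}")                           # about 0.057
print("reject H0 at the 5% threshold?", p < 0.05)    # False
print("reject H0 at the 10% threshold?", p < 0.10)   # True

With the conventional 5% threshold the null hypothesis survives; with a 10% threshold it does not – which is exactly the kind of arbitrariness the first point below is about.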

Firstly, the critical value may in some cases seem arbitrary, and indeed it is. The critical value simply represents how accepting you are of being wrong. Secondly, your results really only tell you something about your sample of the population. If we could do the impossible task of testing every individual in the population, there would be no need for the prediction in the first place, so you are always operating with a part of the whole. Thirdly, as the more savvy of you will have noticed, I use the word “correlation” instead of “causation” for a reason. A correlation does not ensure a causation, and that leads me to the final point: statistical generalization is good for saying something about the past, but it is not always a good predictor of the future. For predictions we’re better off using another tool, which has gained a lot of traction lately: Bayesian statistics.

Bayesian statistics, named for Thomas Bayes, encourages looking at the world through Bayesian probabilities, which is simply the act of assessing the probability of an event by weighing the evidence at hand against your prior expectation of said event occurring. If said event occurs, you update your prior to fit the new data. Sounds intuitive? Bayes reportedly thought so little of his findings that he didn’t even consider them worth publishing. Imagine you test positive for a rare disease. Your doctor tells you that the test is correct 99% of the time. So how likely do you think it is that you have the disease? This depends on your prior, which is how likely you thought it was that you had the disease in the first place. If the disease is sufficiently rare, even a 1% error margin will amount to many people getting a false positive, so maybe you don’t need to be so worried. Of course, if you have a second, independent test that also turns up positive, you can be quite sure that you indeed have the disease. Your prior in this case is not the base rate of the rare disease (say 1/1000), but the chance of having the disease given that you have already tested positive once.
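To put numbers on the disease example, here is a back-of-the-envelope calculation, written as a minimal Python sketch of my own. It assumes the 1-in-1000 base rate from the paragraph above and takes “correct 99% of the time” to mean that both sick and healthy people are classified correctly 99% of the time.

def posterior(prior, sensitivity=0.99, specificity=0.99):
    """P(disease | positive test), by Bayes' theorem."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

after_one = posterior(prior=1 / 1000)
after_two = posterior(prior=after_one)  # yesterday's posterior becomes today's prior
print(f"after one positive test:  {after_one:.1%}")   # roughly 9%
print(f"after two positive tests: {after_two:.1%}")   # roughly 91%

A single positive result leaves the probability of actually having the disease at roughly nine percent; it is the second, independent positive – with the first posterior as the new prior – that pushes it above ninety percent.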

Updating your probabilities for future events after an event happens seems like a no-brainer. As they say: once bitten, twice shy. However, this may lead us to premature conclusions. The most difficult part of prediction is figuring out your priors, which is hard to do post-event. This leads us back to our cognitive blind spots. After being hit by lightning you might never go out during a thunderstorm again, even though the chance of being hit is small. Your experience trumps your prior, and suddenly a one-off event defines how you interact with the world. If we keep in mind that events have a probability of happening, and that an event happening should only make us update our belief in that probability, we might all make more accurate forecasts and keep the door open for a discussion about events, the future, and the truth.

So how did Silver’s FiveThirtyEight do in the 2016 election? Well, they missed the fact that Trump would win, but they gave him a much larger chance of winning than most other forecasters. As election day rolled around, they had Trump at around a 30% chance of winning. That would mean him winning roughly three out of every ten such elections, which is far from an assured loss. Because they had a grasp of probabilities and error margins, the team at FiveThirtyEight was not surprised by his victory. Understanding probabilities like these is something we do every day, even if we’re unaware of it. If you predicted that three out of every ten times you went outside you would get hit by a car, you would probably start staying indoors.

But hey, what do I know? I’m no expert. And perhaps that’s for the best.

The death of the American Dream

By Sondre Jahr Nygaard

Wealth tends to accumulate. Growing income inequality is threatening the order of society and is one of the biggest failures of our time. The mechanisms contributing to this increasing inequality can be seen on all levels: from inequality between countries to disparities between neighbourhoods in the same city. The idea of the American Dream is that every person has the potential to realize his or her aspirations through hard work. The American Dream is not limited to the United States; it has become a widespread mode of thought in Western European countries as well. Yet despite the fact that these values are widely embraced, the map is very different from the terrain.

Some years ago, the book “Capital in the Twenty-First Century” by French economist Thomas Piketty created a big stir among policymakers and researchers alike. Piketty showed that the income gap between the rich and the poor is ever widening. The richest 1% are accumulating wealth much faster than the rest can through labour alone. The result is a cycle where those who are already very rich have the means to accumulate even more wealth at the expense of others.

An important aspect of the American Dream is upward social mobility – making a better life for yourself and your family. Using statistical analysis, the economist Raj Chetty argues that mobility between classes is not just a matter of personal skills, but also a matter of geography. Chetty says that the chances of moving between social classes are highly dependent on which neighbourhood you live in – in other words, on the environment around you. Neighbourhoods in different social strata vary in the quality of their schools, the education levels of their residents and quality of life in general. Can one therefore say that people from lower- and higher-class backgrounds are equal and have the same opportunities to succeed? Higher education is a good indicator of upward social mobility. With higher education, you have access to a wider range of jobs with better pay, and to increased cultural capital. Research shows that the more you are exposed to books from an early age, the more likely you are to undertake higher education. University entry, though, depends in part on results from earlier education. Higher education can also be expensive, as in the UK or the US, which may create barriers for potential students from low-income families. Evidence from Norway, where the government provides free education, however suggests that parental income is less important for choice of education.

While Chetty’s studies are based in the US, similar trends can be seen in Norway. If you were born and raised in Finnmark, for example, the likelihood that you will undertake higher education and move to a social class higher than your parents’ is much lower than for someone born and raised in Oslo. The proportion of people with more than four years of higher education in Oslo is about 19.3 percent, the highest in the country. In Finnmark, the figure is 5.9 percent. Combine this with the flow of people moving from Finnmark to urban areas further south, and the future of Finnmark might look grim – as it does for any other sparsely populated place in Norway.

But what if we break these numbers down to a local level? Are you destined to have a future in society’s elite if you are born and raised in Oslo? Well, that depends largely on which part of the city you are born and raised in. Though the data are a bit rough, the trend is still clear: education levels vary considerably among neighbourhoods in the city. In Stovner, the least educated part of the city, the share of the population with higher education is approximately 23 percent. At the opposite end of the scale, St. Hanshaugen’s share is almost three times as large, with over 60 percent of residents having higher education.

Ever-increasing inequality in our society is not just a problem in and of itself; it also leads to more distrust between people, and between people and the government. Distrust may lead people to abstain from paying taxes, as we have seen in Southern European countries, or to civil unrest and subsequent police brutality, as we have seen give rise to the Black Lives Matter campaign in the US. Trust is among the critical pillars for a democracy to function well. Norway enjoys a high degree of trust, but this trust does not come from nowhere. It comes in part from giving people chances and opportunities to make use of their skills and potential.

To solve the problem of inequality, we cannot sit back idly and hope for a resolution to come. We need to enact policies that work to minimize extreme income inequality. The mechanisms that strengthen this inequality exist on global, national and local levels, which calls for a holistic perspective on the problem. Achieving the American Dream may prove extremely hard, if not impossible, if you aren’t born in the right place at the right time.

Identity crisis

By Emilie Skogvang

Earlier this year, Innovation Norway and the UN agency UN Women signed an innovation agreement that will promote gender equality by exploring the possibilities in new technologies. The purpose of the partnership is to connect the challenges identified by UN Women to Norwegian solutions that can help empower women all over the world.

Double jeopardy

UN data shows that female refugees are less likely to survive than male refugees. Women who are unaccompanied, pregnant or elderly are especially at risk. Girls and women are more likely to be left behind in humanitarian crises, and they represent a large part of the total number of people who do not have access to identity papers. When women cannot document their identity, it becomes impossible for them to acquire basic health and financial services. They will not get a residence or work permit, access to education or health services, and will basically be unable to take part in society or get help in any way. UN data also shows that there are 1.5 billion people in the world who are excluded from the financial infrastructure because they do not have access to identity papers. If they could prove their identity, women would be empowered to take charge of their own lives. Today, women are left in double jeopardy: not only are they refugees, they are also women, with fewer possibilities and rights than male refugees.

As a result of the innovation agreement between Innovation Norway and UN Women, some of the greatest Norwegian and international tech minds, creatives and social forces will gather in Oslo in May 2017 for a so-called hackathon – a 36-hour event with the aim to “crack the code” and come up with a solution to the issue of identity through blockchain technology.

The blockchain revolution

What is blockchain technology? You have probably heard about the virtual cryptocurrency Bitcoin. And no, it is not just money for nerds. Bitcoin is a digital currency based on blockchain technology, which blockchain enthusiasts claim will revolutionize the economy. A blockchain is a decentralized, distributed database with built-in validation. The blockchain is a ledger of records arranged in data packages, or blocks, that use cryptographic validation to link themselves together. The blocks form a chain, and each block has a timestamp and a link to the previous block.

What is so interesting, and some would say revolutionary, about blockchain is that it eliminates the third-party middleman from all transactions. In the case of Bitcoin, this means that banks do not have to validate the transactions, because the blocks are self-validating and totally secure. The blockchain ledger is not stored in one location like a bank’s; it is distributed and public. The block validation system ensures that nobody can tamper with the records. Old transactions are preserved in the ledger, and new ones are added irreversibly. Anyone in the blockchain network can check the ledger and see the exact same transaction history as everyone else. However, blockchain is a technology that is not limited to financial transactions. It can also be applied to a wide range of areas, including digital proof of identity.
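To make the linking concrete, here is a toy Python sketch (my own illustration, not Bitcoin’s actual data structures; the names and record contents are invented): each block stores a timestamp, some data and the hash of the previous block, so tampering with an old record breaks every link that follows it.

import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block whose hash covers its own contents and the previous block's hash."""
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# A tiny chain: each block points back to the one before it.
genesis = make_block("genesis", previous_hash="0" * 64)
record = make_block({"name": "Amina", "document": "birth certificate"}, genesis["hash"])

# Tampering with the first block invalidates the link stored in the second.
genesis["data"] = "forged"
recomputed = hashlib.sha256(json.dumps(
    {k: genesis[k] for k in ("timestamp", "data", "previous_hash")},
    sort_keys=True).encode()).hexdigest()
print("chain still valid?", recomputed == record["previous_hash"])  # False

In a real network, many independent copies of such a ledger are checked against each other, which is what makes quiet tampering so difficult.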

Taking refuge in blockchain

In the humanitarian context, blockchain technology can be used to build up a digital proof of identity. The blocks in the chain can validate each other without intervention from intermediaries. In cultures where men control women’s finances, this can liberate women in many ways. Blockchain makes it easy and safe for anyone to build and maintain immutable, secure personal records and to transfer digital assets directly, without intermediaries or additional costs. In this way, girls and women in humanitarian crises will be able to keep safe records of the important documents that are needed to rebuild their lives and participate in economic activities after a crisis. If this project succeeds, it can empower refugees all over the world to take charge of their own lives and get the medical and financial aid they need.

“We believe that technological innovations – so-called blockchain technology – can contribute to giving women the help they need in a crisis. Many refugees fail to prove their identity, and by giving women the chance to identify themselves, they can be included in the financial infrastructure. It will make it easier for the people giving and receiving help, and also for women’s work and financial possibilities after a disaster,” explains Ingvild von Krogh Strand, responsible for the blockchain project at Innovation Norway.

Author’s note: This article was written in April, about three weeks before the hackathon took place.

CSI: Maleficio


By Joar Kvamsås

Serial, The Jinx, Making a Murderer. The last few years have seen an upsurge of documentary serials that deal with the complexities of figuring out what crimes were committed and who perpetrated them – and with how easily eyewitness evidence can be forged, falsified, twisted and, in general, unreliable. Oftentimes, the question of guilt comes down to the opposing stories of a witness and the accused. What do we do in cases where the crime is too serious to ignore, but the truth about it is hidden from us?

The modern-day response is to hope that forensic science can give us the answer. Prosecutors have already for a few years lamented the supposed “CSI effect”, claiming that shows such as CSI: Crime Scene Investigation have shaped the public imagination and made jurors expect widely available and technically advanced forensic evidence in order to convict. With its fingerprints, ballistics identification and DNA evidence, the sterile and futuristic forensics lab can apparently provide the objectivity and accuracy that eyewitness testimony lacks.

The problem of whether to trust testimony is not a new one, and neither is the impulse to seek the truth through some impersonal and objective force. Perhaps the most infamous example was the medieval practice of trial by ordeal, in which the accused were put through some kind of painful and wounding treatment, such as walking across red-hot ploughshares, or walking a number of steps while holding a red-hot iron. Interestingly, the ability of the accused to withstand this torturous treatment was rarely the factor deciding guilt or innocence. Rather, these trials functioned more like empirical experiments; one common practice was to bandage the burn wounds, and guilt was determined by inspecting them three days later to see whether they had healed healthily or become infected.

In his book Strange Histories, the British historian Darren Oldridge emphasises how medieval belief systems were not, contrary to common perception, dominated by irrationality and hysteria. Rather, he claims, people of the medieval period were both moral and reasonable – the difference lies in the baseline axioms that made their world view so different from ours. The first axiom held that truth could be found in the authority of ancient texts, the most prominent being the scriptures of the Bible. Secondly, it was assumed that the world was inhabited by a mass of invisible forces, including spirits, demons, witchcraft and curses. In a magic-infused universe ruled by an omniscient and interventionist God, trial by ordeal was not an unreasonable practice.

Trial by ordeal was usually a last resort for cases where no testimony or evidence was available other than that of the accused or the accuser, such as accusations of sexual infidelity or heresy. It was not uncommon for the accused themselves to request trial by ordeal, in order to prove their piety and innocence. While trial by ordeal is nowadays most often associated with witch trials, it was actually rarely used in these cases. Instead, confessions were usually obtained through torture. Ironically, those accused of witchcraft would probably have had better chances of acquittal had they undergone trial by ordeal instead. Interestingly, the practice of trial by ordeal was not ended because of a changing understanding of truth and reality in the medieval period. Rather, its outlawing was based on the very axioms that inspired it: the holy scriptures did not actually sanction judicial ordeal, and the church declared that no court could demand a miraculous result from God.

What, then, about the axioms that lead us to rely increasingly on forensic science in modern courtrooms? The chemistry sets and apparatuses of the forensics lab hold the promise of being everything that human testimony is not: objective, unbiased, not open to interpretation. In recent years, however, many of the most used forensic techniques have come under scientific scrutiny, including fingerprint matching, the practice of matching bullets to firearms, and the interpretation of burn patterns in fire investigations. A 2009 report by the National Academy of Sciences in the US claimed that among the most common techniques in forensic science, the only methods that have passed the standards of modern scientific scrutiny are those that have been developed recently, such as DNA testing and drug screenings.

Much like the medieval condemnation of trial by ordeal, the modern criticism of forensic techniques has not been brought on by a paradigm shift in how we think about truth and judgment. Like the medieval Christians, we do not reject our source of truth and its nature – rather, we acknowledge that our power to reveal the truth by manipulating the world is a lot more limited than we would like.

 

Moving out of the ivory tower

By Henrik Andersen

Science is in crisis. Despite an enormous increase in productivity, and millions of articles published every year, science is failing one of its core principles: objective and verifiable truths. “The free play of intellects” is not what pushes science forward; rather, it is a threat to science. For science to be saved, it must open up to the general public and be humble about its limits and its accountability to society. At least, that is the message of Daniel Sarewitz’s widely read “Saving Science” (2016).

The success of science

For the past few centuries, science has provided society with huge technological progress, from vaccines and medicines to ever more efficient means of transportation. Science is today so entangled in society that it is hard to talk about science and society as two separate entities. Almost every aspect of human life is affected by science – from birth control, to parental leave, to youth psychology, to cancer therapies – science is with us from before we are conceived to our very last breath.

As a result of these grand advances, science has been given ever-growing responsibilities and respect. We talk about the knowledge society, where everything is or will be solved or explained by science. The privileged role of science is, however, under threat. The competition for scientific funding is tightening, and the merit of all scientific activity is increasingly judged by the needs of society. This has resulted in more funding for engineering and the physical sciences, and in questions about the real benefit of the humanities and social sciences. It has also resulted in research that exaggerates findings: articles with supposedly groundbreaking findings tend to get cited, while articles that check those findings rarely do.

Problem solving and program research

In 1945, Vannevar Bush published Science, the Endless Frontier, in which he stated the importance of the “free play of intellects”. Leaving curious scientists to solve problems of their own interest would result in major scientific and technological advances, according to Bush. His message was welcomed and resulted in an enormous increase in funding for basic science. One of the major enablers of this was the science behind the atomic bomb, which showed how basic science can result in huge technological leaps. This was also taken as the prediction for the future of science.

Contrary to Bush’s predictions, scientific and technological advances have largely been the result of programmatic research, funded by institutions like DARPA. The Cold War brought major funding for military program research that gave us the technology underlying GPS and almost every component of the iPhone. Certainly, basic research has resulted in many important scientific and technological contributions, but the major technological leaps have almost always been connected to some real-world problem in need of a solution. As an example, the technology underlying incubators for premature babies is a direct application of the technology of space suits. Would we even have this technology without the massive funding for space travel during the Cold War?

Replication problems

Science faces many challenges today, despite its historical success. The very norms internal to science are under threat: science is supposed to provide us with objective, verifiable and independent truths. Recent replicability projects have shown that much of our scientific knowledge does not live up to this ideal.

The last few years have seen a growing concern about the reliability of peer-reviewed science. One of science’s problems is that it is in part judged by citations, a system that is not always reliable. In some cases, articles that contain major flaws have been cited and used by other authors, whose articles have in turn been cited numerous times. Unreliable knowledge then solidifies and confusion arises. Furthermore, a biotechnology company recently found that only six out of fifty-three “landmark” studies it sought to validate could be replicated. Lastly, a compilation of nearly one hundred and fifty clinical trials of therapies to block the human inflammatory response showed that even though the therapies had supposedly been validated using mouse model experiments, every one of the trials failed in humans. Science is in a state of rapid evolution, but it seems to lack sufficient means to make sure that the science is good, reliable and replicable.

Prescription: opening up to society

Daniel Sarewitz points to several challenges facing science; not all of them can be mentioned here. Instead, we should address how to solve this crisis. Sarewitz says that science needs to open up to society and be humble about its responsibility to the general public, and also about what science can and cannot do. One of the reasons why science should “open up” is that it fails to communicate to the general public what it is actually doing. Many areas of science are today so advanced that even undergraduate students in the discipline have trouble understanding the research. Science is tricky, as it should be. But this does not exempt scientists from explaining what the purpose of their work is. Better communication can help legitimize research and open up more funding, as well as involve the public in the scientific enterprise.

Science with and for society

Since 2012, European science policy has focused on these problems and formulated a vision for “Science with and for society”. Out of this comes the work on RRI – responsible research and innovation. Responsible research and innovation is an attempt at involving stakeholders in the scientific enterprise and in technological development. RRI has become a way of incorporating societal concerns, “grand challenges” and ethics into research and business development. By doing so, science is held accountable for its impact on society. It also reflects science’s need for public funding: it must therefore be clear what the purpose of scientific endeavours is, and how they contribute back to society.

These trends might be a way for science and society to get more in tune with each other, and at the same time solve some of the problems integral to science.

 

Not exactly rocket science

By Joar Kvamsås

When a rocket belonging to Elon Musk’s SpaceX programme exploded on the launch pad in September of last year, it was a PR disaster. Not only did it destroy a $200 million satellite financed by Mark Zuckerberg that was meant to expand internet services in Africa, but Musk himself called it the “toughest puzzle” the SpaceX programme has had to solve – a programme whose ambition is to eventually develop affordable interplanetary travel between Earth and Mars.

Today, the idea of a rocket misfiring and exploding for unknown reasons appears deeply unsettling. NASA has given us the impression that launching rockets is an exact science, the purview of only the brightest minds in physics and engineering. In the public imagination, rocket science is seen as the ultimate achievement of modern physics: the practical application of an exact mathematical understanding of the laws of nature, put to use to explore the outer frontiers of our universe.

However, this conception of rocket science is a surprisingly recent one. By most accounts, modern rocket science originated with a small group of graduate students known as the ‘Rocket Boys’ working at Caltech in the 1930s. At the time, rocket science – or ‘rocketry’, as it was commonly known – was not regarded as a respectable scientific area of study. Toying around with explosive substances in the hope of launching objects into the air was an idea reserved for science fiction writers and lunatics. The Caltech group soon earned the nickname the ‘Suicide Club’, which became particularly popular once they were booted off campus after blowing up part of a university building in one of their experiments.

Getting thrown off the premises did not deter the group, however. Taken under the wing of the renowned aeronautics professor Theodore von Kármán, the group was soon allowed back on campus, before (after more explosive accidents) they were given their own premises consisting of tarpaper shacks, far from any valuable buildings. The Rocket Boys were given a space to fail, and to fail spectacularly and loudly. And fail they did; through a series of mislaunches and explosions, the group used trial and error to explore the principles of rocket science that would one day be used to land people on the moon. By 1944, the group had attracted both interest and funding from the US Armed Forces, and had taken on the more respectable name of the Jet Propulsion Laboratory.

In accordance with the popular conception at the time, the Suicide Club was populated by its fair share of eccentric characters. Among these was the larger-than-life personality Jack Parsons, a high school drop-out and self-taught chemist who became a central figure in early rocket fuel research. He was recognised for his talents as an amateur rocket enthusiast and for his expertise on powder explosives, the latter of which he honed by visiting industrial accidents to determine the causes of explosions. Parsons was also an avid occultist, and was in 1944 discharged from the Jet Propulsion Laboratory for his involvement in the Thelemite group Ordo Templi Orientis. Having been ousted from the rocketry community after being accused of espionage during 1950s McCarthyism, he died at age 37 from an explosion in his home laboratory. While his importance to the US rocketry programme was long downplayed, it has in later years been recognised in a series of biographies with titles such as Sex and Rockets and Strange Angel: The Otherworldly Life of Jack Parsons.

The story of Parsons and the Caltech Rocket Boys gives us some perspective on the most recent developments in modern-day space travel. While Elon Musk’s dream of a colony on Mars seems like the stuff of science fiction now, the idea is nowhere near as outlandish as the idea of rocket-propelled travel was in the 1930s. And while it might seem far-fetched that interplanetary travel should spring from the dreams of an internet billionaire, Musk would certainly not be the most eccentric character to have a defining impact on human space travel. Lastly, failed attempts and unforeseen setbacks are an integral part of research and invention. And when you are researching rocketry, you should not be surprised to see those mistakes come in the form of great big explosions.

The Hidden Ninja – Augmented Reality


By Martin Sandtrøen

Augmented reality is everywhere. Not just in glasses produced by Google. Computers have it. Televisions have it. So do mobile phones. Yet we do not notice it. Follow me on an augmented journey and learn why this phenomenon is important.

What is augmented reality?

What we see exists. At least, that is what we think. However, our mind is not always able to discern between what is real and what is not. That is the reason illusions trick us, making us believe in what is not there. Augmented reality is real. Partly. It is the combination of what is real and what is artificial. However, what does that actually mean? It means that we look at a picture of the “real” world, but with slightly modified “colors”. Instead of just seeing a normal, gray building, we see new colors and vibrant elements added, with perhaps some text describing its inhabitants for extra flavor. A map drawn by human hands that changes in accord with our position, with a voice telling us how to interpret the many lines written down on it. This is augmented reality. Augmented reality might seem innocent. Sure enough, only slightly “enhancing” reality is not necessarily bad. Imagine your dreams coming to life. Extinct animals roaming freely through nature when viewed with your mobile phone’s camera. Catching Pokémon that are flying in the air, playing in the sandbox or running across the road.

Not only a sweet story

Augmented reality is not only innocent. It brings new life into places where it does not belong. Imagine seeing the extinct animals and Pokémon replaced by your ancestors. These ancestors would not just roam around, but converse with you, trying to influence your opinions. Imagine these ancestors not limited by a mobile phone camera, but right there next to you, so close that you could almost touch them. Does that sound unlikely? Well, it is already happening. In 2012, Tupac performed live at Coachella. The thing is: Tupac died in 1996. The version of Tupac performing was only computer-generated imagery projected onto the stage. While it was obvious that the projection was not Tupac, it does pose valid questions.

What will we do when we cannot tell reality from the augmented one? In the 1970s, brilliant engineers developed a display capable of immersion. It was the first system capable of augmented reality. This headset could only show simple structures such as boxes, circles and lines, so its effect would hardly pass as realistic. During the 1990s, augmented reality became increasingly common and blended more easily into our everyday life. What was hardly realistic before suddenly became evidence taken at face value.

You might have noticed the weather report on television: the person in a suit pointing to a background picture dominated by suns, clouds, rain and wind. Names following the football players around the field, making it easy to interpret who is doing what. Times and dates printed directly on videos. Breaking-news bars when something important has happened in the world. We believe in the authenticity of these augmentations. We believe that this is “real”. However, it is just pixels on a screen. These pixels have made us dependent on the information they provide. We would not go to the beach without checking the weather forecast; we would not question the breaking news on TV; we would not drive without our trustworthy GPS.

Augmented reality is not necessarily confined to screens either. Tupac’s projection shows that augmented reality can escape the boundaries of a screen and blend further into our reality. The graphics have evolved from the headset of the ’70s into highly advanced standards. Augmented reality is no longer made up of simple projections onto the world, but of highly sophisticated concepts able to trick people. What if it had been Tupac performing live? What if it were our ancestors talking to us? If we were unable to separate the real from the augmented, how would we know what to trust?

The future will undoubtedly bring more augmented reality into our lives. The question is how we should deal with it. We are transitioning into a society where the difference between the real and the artificial is increasingly blurred. We are growing more dependent on augmented reality. But what if the information presented through augmented reality were lying? Are we becoming too dependent on the information it provides? What if augmented reality is like a hidden ninja, secretly waiting until we are at our most vulnerable?

The future of how we see the future

By Frans Joakim Titulaer

The phenomenon of the tech convention is pulling an ever wider range of industries into its swirl. Tech is no longer simply the computer industry; rather, it claims to be the driver of the next industrial revolution. I visited Slush, the largest tech convention in Northern Europe, and was astounded. What I saw was something that currently lacks a name or a description. It was simply called Slush.

Arriving in Helsinki, I was surprised to find that my cheap and rather lousy hostel was fully booked. Every hostel, Airbnb and hotel in town was packed. Slush gathers 17,000 techies from around the world in the Finnish capital each year, and is now also establishing itself in Tokyo and Beijing. I had won my ticket at a so-called ‘hackathon’. Before attending the hackathon, I didn’t know what Slush or a hackathon was. In their attempts to stimulate a new high-tech economy, Innovation Norway and its equivalents across Scandinavia are investing heavily in teaching potential entrepreneurs about new ways of building businesses. One of the strategies for engaging and teaching young entrepreneurs is the hackathon, a pitching competition in which one has about 24 hours to prepare a demonstration of a business idea that can convince judges or investors of its viability.

After having undergone a day of pitch coaching at Slush, we arrived at the convention hall, passing under a sea of neon-glowing dream catchers. A hanging mist across the hall and lasers moving overhead recreated the aura of a bass-pumping rave festival, only without the bass. The entire thing was designed to attract attention. It was like a reality show in which the idea of reality had been replaced with a strange idea of the future. The viability of the ideas on display was not in question, because, as opposed to the small hackathon I had entered in Norway, these companies had already established themselves in the market and were here seeking investors and partners.

There were eight stages, the largest of which was surrounded by a fire-spewing moat, accompanied by lights and visual effects that could compete with any music festival. And like at a music festival, there were stands selling t-shirts to young people from all over the world, who were lining up to document the experience. The walls were clad in black, and all the decorations were custom made. Food stall tables were covered with sheets of old Nokia dial boards and lamps filled with electric cables. In an effort to stay hot and provocative, Helsinki has left itself looking like a Nokia graveyard. And it does it well. Slush is itself only possible because of the information technologies that Nokia once made, which today make it possible for people to travel with a sense of ease. This technology allows hundreds of thousands of ‘kids’ to gather at festivals each year. These are logistical wonders.

At my hostel, my new friends told me that being successful at Slush is all about leveraging your contacts on social media to expand your network. From your own office or living room, you figure out who’s who and how to approach them at the right moment. Instead of building local businesses, one seeks a network of professionals to ‘be’ global, not to go global. One conversation about the future of venture capitalism described this trend well. It is clear, it was claimed, that corporations no longer innovate. Since every firm must compete to incorporate the best ‘venture mechanisms’ and to test out startup products in their corporate sale ventures, capitalists are becoming their own media houses and competing independently to reach wider audiences.

This year the Crown Prince of Norway was one of the Norwegian representatives at the conference. The Crown Prince, and the countless government institutions present, were actively promoting a new kind of ethics. One independent speaker presented it as an ethics in which data should be open simply because it can be. Further, it is an ethics in which a service should be extended because products in the digital age can be reproduced at zero marginal cost. Like venture capitalists, governments are beginning to show their presence on the stage of new media, and not only in the world of politics.

On a systemic level, governments are fighting to stay attractive in the hyper-capitalistic sphere of tech. For sure, government has always played a major part in economic and socio-technical development, but new technological solutions under the banner of Big Data and Artificial Intelligence are bringing solutions down from the population level to the individual level. Systems that help make management decisions about human relations (HR), that analyse agile team performance, and that adapt knowledge management systems to the learning behavior of individuals will transform not only business but also organizational work and learning.

These are technologies of mass education at a completely different level, and a festival like Slush represents a new form of regulated freedom. The technologies that create this hybrid between a festival and a market seem to be the same as those sought by government and large business. These skills and resources are what make Slush the newest and hottest event in the world of tech, and each and every one of us contributes to it with our minds, our networks and our conviction in these technologies.