The Foundations of Digital Ethics

The most important part of any ethical system is its foundational layer. To make effective, ethically informed decisions in the digital realm, we first need to explore the principles guiding our idea of right and wrong. In this Insight, provided by our partners at the ILO Institute, we explore the challenges of establishing a consistent set of digital ethics.

We continue to work on a range of digital-ethics issues with a few of our members—clearly, the future of innovation is going to involve a lot of work on how we build ethical decision-making into all of our systems. Here's part of a larger project to help make more of these challenges visible.

We often make the mistake of believing that being “ethical” is always good—that the problem with bad actions is that they are without ethics. In fact, being ethical means following a set of principles and expectations consistently, and if the base values at the first layer of an ethics stack are not truly right and good, the whole stack becomes a machine that creates bad outcomes.

The Bay of Bengal, 1943

In 1943, the entire region of Bengal—from the southern beaches to the hundreds of miles of thinly inhabited villages stretching north toward China—suffered through an extreme moment of privation. Famine swept through the province, killing three million people out of a total population of 60 million. Virtually every family, every town and every neighborhood was touched.

World War Two was part of the story. England was still the colonial occupier of India, significant fighting was underway in nearby Burma, and the Japanese occupation of China had ripple effects through Asia.

It’s easy to categorize as “unethical” the decisions made to prioritize the feeding of British troops over the interests of the farmers and traders who produced and delivered the food. It’s easy to condemn the practice of brutal colonial occupation as “unethical.” Yet in both cases, the actions of the occupying colonial powers were largely ethical by their own standards of right and wrong. Many would fully condemn the principle that “advanced” societies have an obligation to colonize more “primitive” nations—but that was a common view at the time. It was wrong. It deserved and deserves condemnation for moral reasons. But ethics is about consistency with underlying principles. In digital ethics, and in organizational ethics, we spend too much time looking for and talking about process errors when we should instead be talking more about underlying principles and values.

Here’s how economist Amartya Sen emerged from that time and place in the Bay of Bengal and went on to win the Nobel Prize for explaining how this idea of underlying principles should change the way we think about mass hunger.

Food production fell as farmers took up arms and went off to war, though the worst effects were felt by the families of traders and artisans, whose customers fell away as households devoted all their money to rapidly inflating food costs.

A debate continues today about whether the three million lives lost to starvation in Bengal in 1943 were the result of a “real” famine—a dramatic reduction in the amount of food available across the whole population of the area—or a failure of social and market forces to make stockpiled food available to those who needed it.

Certainly the failure of a rice crop in the winter of 1942, caused by monsoon rains, cyclones and tidal waves in shoreline agricultural areas, set the stage. Also in 1942, the Japanese invaded Burma and cut off the Burmese rice crop from Bengali traders. Waves of refugees from Burma added to the food demand as well. By early 1943, the famine was evident in the streets of Calcutta and the villages spreading south toward the coast.

A boy of nine witnessed the famine, and eventually became its great chronicler and explainer. Later this boy, Amartya Sen, would win the Nobel Prize in Economics for related work. Sen studied the famine and determined that, in fact, there was more than enough food to feed the millions who perished in Bengal in 1943. The problem lay with competing demands: from local traders and clan leaders who were inclined to hoard food as war came near, and from the imperial British government, which valued supplying its troops—with money and food that would otherwise have sustained the starving Bengalis—over the lives of its colonial subjects.

Sen’s extensive research documented that today—with very few exceptions—when populations starve, the failure is fundamentally political and not agricultural. He acknowledges that crop failures, extreme weather and bad planning often result in food shortages, but even when food is scarce in a region, enough goodwill and surplus food exist elsewhere that in almost all cases money and food are available to prevent large-scale starvation. To oversimplify a bit, in the world we live in today, when populations starve, what’s missing is not so much the food but the will of institutions—political and economic—to help them. That’s the big idea that Sen’s careful documentation established.

The question is whether governments and other social leaders value the lives of the people who are hungry more than they value the food and money they might keep for themselves. Once we can isolate this question about valuing the building blocks of wealth more than the lives of poor people as a first-layer question, we can begin to see the root of the disaster in Bengal in 1943 as a direct result of a well-functioning ethics stack built on a terrible first layer.

This matters enormously for digital ethics because in an era of breakthrough technology growth, we can too easily overlook the fact that political problems require political solutions, and technology solutions to political problems just don’t work.

Ideas and values matter—not just technology. Thinking about social values and the effectiveness of our institutions is vital if the value of technology is to emerge and actually help us live better lives. Ideas and values surround our technologies; they support and extend them, and they determine whether culture’s greatest creations will reflect the values that set them in motion, or not.

Deeply experienced and very well-intentioned technology leaders often hit this kind of wall when great technical innovations encounter ideas or values that we might fairly call broken.

“I Am Insulted”

Consider this related scenario:

A technology visionary meets with the minister of education in a low-income country and demonstrates a low-cost laptop computer newly engineered to cost so little that, with the aid of a philanthropist, the tech visionary is offering to deliver 100,000 of these machines free of charge for use by children in the country’s poorest schools.

The government minister—himself not poor, a graduate of an elite European university and very technology-savvy—turns on the small laptop and immediately notices that the operating system is a free, open-source platform. “Why not Microsoft?” he asks. “Because we’d need to pay a licensing fee for the name-brand OS,” the visionary replies. “With the open-source system, we get solid performance and no cost at all.”

“Ah,” says the minister, “so you are saying that white people in the West can have Microsoft but brown people here in my country must make do with free software! I am deeply insulted, on behalf of all of the people in my country. The answer is no. Now please leave.”

So what’s just happened here? A highly educated decision-maker has valued his personal sense of prestige and status more than the experience of the people he is, we would assume, in office to serve. Better that they should have nothing than something they would likely find very valuable but which would embarrass the man in charge because of its “off-brand” operating system.

While we can recognize that there’s something wrong with this government minister’s decision here, just as we would recognize that there is something wrong with the decision of a provincial government minister guarding storehouses of food while people all around him starve, it’s vital that we avoid the trap of judging every ethical challenge based on how it makes us feel in the moment. Not only does that kind of decision-making tend to be inconsistent, but it hurts our ability to make decisions in ambiguous or unplanned circumstances. U.S. Supreme Court Justice Potter Stewart actually planted a flag for this kind of poor ethical decision-making when he wrote in a landmark 1964 decision that while he could not define pornography, “I know it when I see it.”

Far better to craft principles that reflect our values and apply them not only to explain what’s wrong but also to guide our future behavior, especially when our “guts” are confused or unreliable. For both the storehouse decision-maker choosing to hoard food that could and should save lives, and for the government minister rejecting the gift of 100,000 laptops because of his wounded sense of pride and desire for prestige, we can point to a useful ethical guideline that comes from the philosopher Jeremy Bentham. We might want to put it this way: “People with responsibilities for others need to make sure that they seek to do the greatest good for the greatest number of those they serve.”

If we believe this, we can not only explain why we think these two leaders failed the people they served, but also apply the same principle when we make decisions about new situations and new technologies.
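
To make that concrete in a digital context, here is a minimal, purely illustrative sketch, in Python, of how such a first-layer principle might be encoded when a system has to choose among options. The Option class, the choose_by_greatest_good function, and the numbers in the example are all hypothetical, invented for this sketch; the scoring rule (total benefit to those served, with the number of groups helped as a tie-breaker) is just one possible reading of Bentham’s guideline, not a definitive implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Option:
        """A possible decision, with the estimated benefit it delivers to each group served."""
        name: str
        benefits: dict = field(default_factory=dict)  # group name -> estimated benefit

    def choose_by_greatest_good(options):
        """Pick the option with the greatest total benefit to those served,
        breaking ties by the number of groups that benefit at all."""
        def score(option):
            total_benefit = sum(option.benefits.values())
            groups_helped = sum(1 for b in option.benefits.values() if b > 0)
            return (total_benefit, groups_helped)
        return max(options, key=score)

    # The laptop scenario, reduced to two options (the numbers are invented for illustration).
    options = [
        Option("accept_donated_laptops", {"students": 100_000, "minister_prestige": -1}),
        Option("reject_to_protect_prestige", {"students": 0, "minister_prestige": 1}),
    ]
    print(choose_by_greatest_good(options).name)  # -> accept_donated_laptops

Reducing ethics to a scoring function hides the hard questions about whose benefit counts and how it is measured; the point of the sketch is only that the first layer, the thing the function is told to maximize, determines everything the rest of the stack produces.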

Launched in 2005, ILO is a membership organization for large companies, government agencies and not-for-profits, bringing senior executives leading innovation together for knowledge sharing and community building. ILO has completed more than 300 best-practice research reports, focusing on emerging challenges and opportunities. To learn more about ILO, membership benefits, and how to join, visit www.iloinstitute.net.
