
AI ethics is all about power

This article is part of a VB special issue. Read the full series here: Power in AI.

At the Common Good in the Digital Age tech conference recently held in Vatican City, Pope Francis urged Facebook executives, venture capitalists, and government regulators to be wary of the impact of AI and other technologies. “If mankind’s so-called technological progress were to become an enemy of the common good, this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest,” he said.

In a separate but related conversation, Joy Buolamwini testified before Congress this summer, in an exchange with Rep. Alexandria Ocasio-Cortez (D-NY), that multiple audits found facial recognition technology generally works best on white men and worst on women of color.

What these two events have in common is their relationship to power dynamics in the AI ethics debate.

Arguments about AI ethics can be waged without mention of the word “power,” but it’s often there just under the surface. In fact, it’s rarely the direct focus, but it needs to be. Power in AI is like gravity, an invisible force that influences every consideration of ethics in artificial intelligence.


Power provides the means to influence which use cases are relevant; which problems are priorities; and who the tools, products, and services are made to serve.

It underlies debates about how corporations and countries create policy governing use of the technology.

It’s there in AI conversations about democratization, fairness, and responsible AI. It’s there when Google CEO Sundar Pichai moves AI researchers into his office and top machine learning practitioners are treated like modern day philosopher kings.

It’s there when people like Elon Musk expound on the horrors that future AI technologies may wreak on humanity in decades to come, even though facial recognition technology is already being used today to track and detain China’s Uighur Muslim population on a massive scale.

And it’s there when a consumer feels data protection is hopeless or an engineer knows something is ethically questionable but sees no avenue for recourse.

Broadly speaking, startups may regard ethics as a nice addition but not a must-have. Engineers racing to be first to market and meet product release deadlines may scoff at the notion that precious time should be set aside to consider ethics. CEOs and politicians may pay lip service to ethics but end up only sending sympathetic signals or engaging in ethics washing.

But AI ethics isn’t just a feel-good add-on — a want but not a need. AI has been called one of the great human rights challenges of the 21st century. And it’s not just about doing the right thing or making the best AI systems possible; it’s about who wields power and how AI affects the balance of power in everything it touches.

These power dynamics are set to define business, society, government, the lives of individuals around the world, the future of privacy, and even our right to a future. As virtually every AI product manager likes to say, things are just getting started, but failure to address uneven power dynamics in the age of AI can have perilous consequences.

The labor market and the new Gilded Age

A confluence of trends led to AI’s present-day reemergence at a precarious time in history. Deep learning, cloud computing, and processors like GPUs that supply the compute needed to train neural networks quickly — technology that’s become a cornerstone of major tech companies — fuel today’s revival.

The fourth industrial revolution is happening alongside historic income inequality and the new Gilded Age. Like the railroad barons who took advantage of farmers anxious to get their crop to market in the 1800s, tech companies with proprietary data sets use AI to further entrench their market position and monopolies.

When data is more valuable than oil, the companies holding valuable data have the advantage and are best positioned to consolidate wealth and cement their status as industry leaders. This applies of course to big-name companies like Apple, Facebook, Google, IBM, and Microsoft, but it’s also true of legacy businesses.

At the same time, mergers and acquisitions continue to accelerate and further consolidate power, a trend that reinforces itself as research and development comes to belong almost entirely to large businesses. A 2018 SSTI analysis found that companies with 250 or more employees account for 88.5% of R&D spending, and companies with 5,000 or more employees account for nearly two-thirds of it.

The growing proliferation of AI could lead to great imbalance in society, according to a recent report from the Stanford Institute for Human-Centered AI (HAI).

“The potential financial advantages of AI are so great, and the chasm between AI haves and have-nots so deep, that the global economic balance as we know it could be rocked by a series of catastrophic tectonic shifts,” reads a proposal from HAI that calls for the U.S. government to invest $120 billion in education, research, and entrepreneurship over the next decade.

The proposal’s coauthor is former Google Cloud chief AI scientist Dr. Fei-Fei Li. “If guided properly, the age of AI could usher in an era of productivity and prosperity for all,” she said. “PwC estimates AI will deliver $15.7 trillion to the global economy by 2030. However, if we don’t harness it responsibly and share the gains equitably, it will lead to greater concentrations of wealth and power for the elite few who usher in this new age — and poverty, powerlessness, and a lost sense of purpose for the global majority.”

Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy, studies the impact of AI on the future of work and spoke recently at a Stanford AI ethics symposium. Regarding the number of jobs whose tasks are suitable for machine learning and therefore likely to be replaced in the years ahead, Brynjolfsson said, “If you look at the economy overall, there’s a tidal wave coming. It’s barely hit yet.”

Machine intelligence can be used to redesign and augment workplace tasks, but it is most often used to replace jobs, Brynjolfsson said.

Automation’s impact on job loss is predicted to differ city to city and state to state, according to both Brookings Institution analysis and research by Brynjolfsson and Tom Mitchell of Carnegie Mellon University. Fresno, California is expected to get hit harder than Minneapolis, for example, but job instability or loss is expected to disproportionately impact low income households and people of color. A recent McKinsey report says that African-American men are expected to see the greatest job loss as a result of automation.

This follows a trend of median income in the United States stagnating since 2000 even as productivity has continued to rise, a divergence Brynjolfsson calls “the great decoupling.”

“For most of the 20th century, those roles in tandem — more production, more wealth, more productivity — went hand in hand with the typical person being better off, but recently those lines have diverged,” he said. “Well, the pie is getting bigger, we’re creating more wealth, but it’s going to a smaller and smaller subset of people.”

Brynjolfsson believes community challenges like the DARPA autonomous vehicle challenge and ImageNet for computer vision have driven great leaps forward in state-of-the-art AI, but he argues that businesses and the AI community should begin to turn their attention toward shared prosperity.

“It’s possible for many people to be left behind and indeed, many people have. And that’s why I think the challenge that is most urgent now is not simply more better technology, though I’m all for that, but creating shared prosperity,” he said.

Tech giants and access to power

Another major trend underway as AI spreads is that, for the first time in U.S. history, the majority of new entrants to the workforce are people of color. By 2030, most U.S. cities will no longer have a single racial majority, and in time neither will the nation as a whole, according to U.S. Census projections.

These demographic shifts make lack of diversity within AI companies all the more glaring. Critically, there’s a lack of race and gender diversity in the creation of decision-making systems — what AI Now Institute director Kate Crawford calls AI’s “white guy problem.”

Above: Google 2019 gender and race stats for technical workforce representation (Image Credit: Google 2019 Diversity Report)

Only 18% of research published at major AI conferences is authored by women, and at Facebook and Google, women make up only 15% and 10% of research staff, respectively, according to a 2018 analysis by Wired and Element AI. Google and Facebook do not release diversity numbers for their AI research staff, spokespeople from both companies said.

A report released in April by the AI Now Institute details a “stark cultural divide between the engineering cohort responsible for technical research and the vastly diverse populations where AI systems are deployed.” The group refers to this as “the AI accountability gap.”

The report also recognizes the human labor hidden within AI systems, like the tens of thousands of moderators necessary for Facebook or YouTube content, or the telepresence drivers in Colombia who are remotely driving Kiwibot delivery robots near UC Berkeley in the San Francisco Bay Area.

Above: Facebook technical workforce by ethnicity (Image Credit: Facebook 2019 diversity data)

“The gap between those who develop and profit from AI and those most likely to suffer the consequences of its negative effects is growing larger, not smaller,” the report reads, citing a lack of government regulation in an AI industry where power is concentrated among a few companies.

Dr. Safiya Noble and Sarah Roberts chronicled the impact of the tech industry’s lack of diversity in a paper UCLA published in August. The coauthors argue that we’re now witnessing the “rise of a digital technocracy” that masks itself as post-racial and merit-driven but is actually a power system that hoards resources and is likely to judge a person’s value based on racial identity, gender, or class.

“American corporations were not able to ‘self-regulate’ and ‘innovate’ an end to racial discrimination — even under federal law. Among modern digital technology elites, myths of meritocracy and intellectual superiority are used as racial and gender signifiers that disproportionately consolidate resources away from people of color, particularly African-Americans, Latinx, and Native Americans,” reads the report. “Investments in meritocratic myths suppress interrogations of racism and discrimination even as the products of digital elites are infused with racial, class, and gender markers.”

Despite talk about how to solve tech’s diversity problem, much of the tech industry has made only incremental progress, and funding for startups with Latinx or black founders still lags far behind funding for startups with white founders. To address the tech industry’s general lack of progress on diversity and inclusion initiatives, a pair of Data & Society fellows suggested that tech and AI companies embrace racial literacy.

One of those fellows, Mutale Nkonde, is coauthor of the Algorithmic Accountability Act, legislation introduced in both houses of Congress earlier this year that charges the Federal Trade Commission (FTC) with assessment of algorithmic bias and allows the agency to issue fines based on company size.

She’s also executive director of AI for the People and a fellow at the Berkman Klein Center for Internet & Society at Harvard University. She’s now working to assess how artificial intelligence and misinformation may be used to target African-Americans during the 2020 election. A Senate Intelligence Committee investigation released in October found that efforts to interfere in the 2016 election singled out African-Americans on Facebook, Twitter, and Instagram.

Before that, she and a small team worked on advancing the idea of racial literacy.

Nkonde and coauthors posit that things like implicit bias training and diversity initiatives — championed by tech giants that release annual diversity reports — have failed to move the needle on creating a tech workforce that looks like its users. In order to make meaningful progress going forward, businesses should put aside vague aspirations and begin taking practical steps toward educating people in racial literacy.

“The real goal of building capacity for racial literacy in tech is to imagine a different world, one where we can break free from old patterns,” reads a paper explaining the racial literacy framework. “Without a deliberate effort to address race in technology, it’s inevitable that new tech will recreate old divisions. But it doesn’t have to be that way.”

Coauthors want racial literacy to become part of the curriculum for computer science students and training for employees at tech companies. Their approach draws on Howard Stevenson’s racial literacy training for schools and includes implicit association tests to determine the stereotypes people hold.

Racial literacy aims to equip people with the training and emotional intelligence to resolve racially stressful situations in the workplace. It could give computer scientists, designers, and machine learning engineers the tools to speak openly about how a product or service can perpetuate structural racism or lead to adverse effects for a diverse group of users.

The objective is to allow people to speak in an open and non-confrontational way about what can go wrong with a product or service. In interviews with employees from mid-sized and large tech companies, the researchers found that in many tech firms, confronting issues associated with race was taboo.

“Many of the barriers that came up in the interviews, and even anecdotally in our lives, is that people don’t want to acknowledge race. They want to pretend that it doesn’t matter and that everybody is the same, and what that actually does is reinforce racist patterns and behavior,” Nkonde said. “It would mean companies have to be clear about their values, instead of trying to be all things to all people by avoiding an articulation of their values.”

Racial literacy will be increasingly important, Nkonde believes, as companies like Alphabet create products that are of critical importance to people’s lives, such as healthcare services or facial recognition software sold to governments.

The other intended result of racial literacy training is to create a culture within companies that sees value in a diverse workforce. A Boston Consulting Group study released last year found higher revenue and more innovation in organizations with greater diversity. But if hiring and retention numbers are any indication, that message doesn’t seem to have reached Silicon Valley tech giants.

LinkedIn senior software engineer Guillaume Saint-Jacques thinks AI ethics isn’t just the right thing to do; it also makes sound business sense. One of the people behind the Fairness Project, launched this summer, Saint-Jacques says bias can get in the way of profit.

“If you’re very biased, you might only cater to one population, and eventually that limits the growth of your user base, so from a business perspective you actually want to have everyone come on board … it’s actually a good business decision in the long run,” he said.

Individual autonomy and automation

Powerful companies may wield their might in different ways, but their business plans have consequences for individuals.

Perhaps the best summary of this new power dynamic comes from The Age of Surveillance Capitalism by retired Harvard Business School professor Shoshana Zuboff. The book details the creation of a new form of capitalism that combines sensors like cameras, smart home devices, and smartphones to gather data that feeds into AI systems to make predictions about our lives — like how we will behave as consumers — in order to “know and shape our behavior at scale.”

“Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data [points] are applied to product or service improvement, the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as ‘machine intelligence’ and fabricated into prediction products that anticipate what you will do now, soon, and later,” wrote Zuboff.

She argues that this economic order was created in Silicon Valley by Google but has since been adopted by Amazon, Facebook, and Microsoft, as well as counterparts in China like Baidu and Tencent.

Zuboff describes surveillance capitalism as an unprecedented form of power that few fully understand yet and says no effective means of collective or political action currently exists to confront it.

She questions the havoc that surveillance capitalism might wreak on human nature, when the market “transforms into a project of total certainty.” Zuboff says that left unchecked, this relatively new market force can overthrow people’s sovereignty and become a threat to both Western liberal democracies and the very notion of being able to “imagine, intend, promise, and construct a future.”

These large companies “accumulate vast domains of new knowledge from us, but not for us,” she wrote. “They predict our futures for the sake of others’ gain, not ours. As long as surveillance capitalism and its behavioral futures markets are allowed to thrive, ownership of the new means of behavioral modification eclipses ownership of the means of production as the fountainhead of capitalist wealth and power in the 21st century.”

A key byproduct of surveillance capitalism, Zuboff argues, is an overwhelming sense of helplessness. That’s what you see when people shrug their shoulders and say there’s no stopping big tech companies with their massive resources and wealth.

Whistleblower Edward Snowden seems to agree with Zuboff’s assessment.

Metadata collection is increasingly being used by businesses and governments for decision-making that impacts human lives, from tracking your movement with a mobile device to China’s “social credit scores.” The purpose of that data, in aggregate, is to take away personal agency, Snowden said in a recent NBC News interview when he was asked why people who do not commit crimes should care about surveillance technology.

“These activity records are being created and shared and collected and intercepted constantly by companies and governments. And ultimately, it means as they sell these, as they trade these, as they make their businesses on the backs of these records, what they are selling is not information. What they’re selling is us. They’re selling our future. They’re selling our past. They’re selling our history, our identity, and ultimately, they are stealing our power and making our stories work for them,” he said.

Princeton University associate professor Ruha Benjamin, author of the book Race After Technology, is also concerned about agency, because whether people envision AI bringing about Armageddon or utopia, both visions involve ceding power to machines.

“Whether it’s this idea that technology will save us or slay us, these are both giving up power,” Benjamin said at Deep Learning Indaba, held at Kenyatta University in Nairobi, Kenya. “They are both giving up our agency in terms of our own role in the design of these technical systems — that we don’t just submit to the default settings, the things that others have created, but we have actually tried to embed our own values and imagination into these structures, so we want to move beyond a techno-determinism.”

A very different embodiment of individual power has emerged inside major companies. Roughly a year ago, for example, more than 20,000 Google employees around the world walked out over multiple ethical issues. According to organizers, these included Android founder Andy Rubin’s $90 million payout following sexual harassment allegations, demands for an end to forced arbitration, and Google’s participation in the Pentagon’s Project Maven.

Months after thousands of Google employees signed a letter protesting the company’s participation in the Pentagon’s drone object detection project, Google pledged to end its Maven contract in 2019 and released a set of AI principles, which include a pledge not to create autonomous weapons.

Similarly, Facebook workers have called on CEO Mark Zuckerberg to fact-check or ban political advertising, while employees at Microsoft and GitHub have called for an end to contracts with ICE, the Department of Homeland Security agency that carries out the Trump administration’s deportation and detention policies.

It takes courage and organization to challenge big tech companies — especially for those employed by them — but what these protests demonstrate is that individuals can regain some agency, even in the face of behemoths.

Government and society

With AI’s present-day resurgence, Elon Musk has become a modern-day Paul Revere, sounding the alarm about killer robots and artificial general intelligence (AGI). When Vladimir Putin famously said that the nation that controls AI will control the world, Musk responded by saying he thinks the AI arms race will lead to World War III.

Musk joined more than 4,500 AI and robotics researchers in signing a Future of Life Institute open letter opposing autonomous weapons that operate without human intervention. If or when nations field autonomous killer robots with the power to choose whether people live or die, they may indeed become the ultimate expression of power.

But while figures like Musk lavish attention on hypotheticals, facial recognition is already being used in Hong Kong — likely a main reason the government enacted a ban on masks — and the Detroit Police Department is already testing real-time facial recognition. Meanwhile, algorithms already in use are thought to negatively impact the lives of millions of African-Americans, and facial recognition systems perform poorly on people of color and people with non-binary gender identities.

AGI scenarios like The Terminator’s Skynet may not be here yet, but militaries are already considering the ethical application of AI.

The proliferation of AI among national militaries is a genuine concern as China and Russia charge forward with investments and the Pentagon considers its next steps.

The National Security Commission on Artificial Intelligence was created by Congress a year ago and last week delivered a draft report on the impact of AI on warfare. What’s at stake is nothing less than how the “development of AI will shape the future of power” and how the U.S. can maintain economic and military supremacy.

A week earlier, a board of tech executives and AI experts, including former Google CEO Eric Schmidt, MIT CSAIL director Daniela Rus, and LinkedIn cofounder Reid Hoffman, recommended a set of AI ethics principles for the Department of Defense, which is now considering them.

Deepfakes are a particular source of concern heading into 2020, a U.S. presidential election year, researchers and lawmakers say. And bot networks are being used on Twitter to amplify national agendas, like the bots championing the Saudi Arabian government after the murder of Washington Post journalist Jamal Khashoggi or the Russian bots that worked to get Donald Trump elected president in 2016. Bots were also found to be part of a Russian influence campaign carried out in South Africa.

AI, power, and civil society

While battles are being fought over the use of AI to control political speech online, new issues keep cropping up, like bias, which has led advocacy organizations to ask tech giants to ban the use of algorithms that replace judges in pretrial bail assessments.

Created by AI researchers at companies including Apple, Facebook, and Google, the Partnership on AI brings groups like Amnesty International and Human Rights Watch together with the biggest AI companies in the world. In an interview with VentureBeat this summer, executive director Terah Lyons said that power is central to the AI ethics debate between the nongovernmental organizations and tech giants considering how AI will impact society.

“I think it’s central and crucial, and we think in terms of power a lot in our work, because it’s an impoverished conversation unless you’re talking about it,” she said. “Power is one of, if not the, central question.”

She sees power at play in the lack of diversity in the AI industry and corresponding lack of influence over the way systems and tools are built and deployed, as well as in the power and influence of individuals within tech companies and institutions.

It’s also important to keep those dynamics in mind when civil society and nonprofit agencies with modest resources work with resource-rich tech giants. Groups like the Partnership on AI and Stanford’s Institute for Human-Centered AI champion this multi-stakeholder approach to creating representative processes, but staff from nonprofits often can’t afford to hop on a plane for a Partnership on AI stakeholders meeting, for example, the way an employee of a tech giant can.

“By comparison to these large and especially well-resourced tech companies, there’s just a difference in power and resourcing there in many ways, and so empowering them more effectively I think is a big piece of how you start to level the playing field for effective collaboration,” she said.

The same travel limitations can affect AI researchers hoping to attend international conferences. Roughly half of those who applied to attend the Black in AI workshop at NeurIPS in Montreal, Canada last year had their visa applications rejected by immigration officials, and applicants report the same situation this year. Incidents like these have led the Partnership on AI to urge nations to create special visas for AI research conference travel, like those available to medical professionals, athletes, and entrepreneurs in some parts of the world.

Power relations between nations and tech giants

Casper Klynge is Denmark’s ambassador to Silicon Valley. Some countries maintain business and innovation centers in the Bay Area, but Klynge is the first ambassador dispatched to Silicon Valley to specifically represent the diplomatic interests of a nation.

The Danish government sent him to engage with companies like Apple, Amazon, Google, and Facebook — which have amassed much of the world’s AI talent — as if they were global superpowers. Klynge thinks more small countries should do the same so they can work together on common goals. In his two years in the position, Klynge said he’s learned that multilateral coalition building — banding together with other small nations — is part of his job.

Monopolistic businesses aren’t a novelty to governments, but Klynge says AI-driven technologies like autonomous driving and search have changed the game, making tech companies more important to national interests than many countries are and sparking a need for what Denmark calls techplomacy.

In an interview with VentureBeat last month, Klynge argued that tech giants are skewing the nature of international relations and creating a new reality in which countries must treat tech giants like global superpower nations.

“We cannot look at them anymore as being neutral platforms that are just neutral purveyors of whatever people want to do. I think we have to treat them in a more mature and responsible way, which also means that we are less naive, we are more balanced, and also making demands, holding them accountable,” he said. “My job is just a symptom of something more systematic that we are trying to do to get a more balanced, realistic view on technology companies and technology per se.”

What next?

Power is everywhere in the AI ethics debate, but that doesn’t mean we must remain powerless. As the racial literacy project illustrates, there’s another way.

It’s something Ruha Benjamin calls for when she says tech needs “socially just imaginaries,” and something Cathy O’Neil references in her book Weapons of Math Destruction.

“Big data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide,” she writes.

A skewed power dynamic has rendered words like “democratization” virtually meaningless, but putting AI in the hands of more people solving big problems could significantly change perceptions of AI and grow its positive influence on society.

There are already many examples of AI being used to improve human lives, of course. MIT researchers created an algorithm that gets kids to school more quickly, saving Boston’s school district $5 million a year in transportation costs. The FDNY and NYU are using AI to improve emergency response times by identifying the most efficient route to a scene — one of dozens of Google.org projects taking a data-driven approach to creating AI for the benefit of humanity. AI is also being used to build more efficient greenhouses and increase crop yields, advances that could help avoid starvation in the decades ahead and feed the world as the global population swells to 10 billion. On and on the examples go.

But when we’re talking about technology that can predict the future, upend the economic and social order, keep people in prison, or make decisions about our health, right underneath the surface of an impressive technological advance will inevitably lie a power struggle.

The power dynamic is there in AI systems that return poorer performance for people of color in bail assessments, healthcare affecting millions, homeless services, and facial recognition.

It’s also there when EU AI experts urge nations to avoid mass surveillance and instead use AI to empower and augment people. It’s there in initiatives from Samsung and the United Nations to use AI to achieve sustainable development goals.

It’s there in open education projects like fast.ai and a Finnish initiative to educate a portion of the country’s population about the basics of AI.

It’s there in the vision that aggressive climate change goals can help recruit AI talent, as Digital Hub Denmark CEO Camilla Rygaard-Hjalsted recently suggested in an interview, or the vision that machine learning applied to climate change could be AI’s great moonshot.

It’s there in fledgling conversational AI projects to protect the children of military service members, detect when a gang shooting may take place, or field sexual health questions for teenage girls in Pakistan.

And it’s there in open source projects like Masakhane, which is working to create machine translation for the more than 2,000 languages spoken in Africa. The project now counts 60 contributors from all corners of the continent working to build AI that can preserve and translate these languages. Africa has the youngest population of any continent on Earth and will account for more than half of global population growth between now and 2050, according to United Nations estimates. Machine translation in Africa could be important to powering conversational AI, communication, and commerce online and in the real world.

For the past three years, Kathleen Siminyu has led the Women in Machine Learning and Data Science chapter in Nairobi, Kenya. “I see language as a barrier which, if eliminated, allows a lot of Africans to just be able to engage in the digital economy and eventually in the AI economy. So yeah, as people who are sitting here building for local languages, I feel like it’s our responsibility to just bring the people who are not in a digital age into the age of AI,” she told VentureBeat in a phone interview.

If you’re following only part of the AI ethics debate, it could be easy to conclude that making AI ethics part of engineering and design processes is some sort of politically correct, corporate social responsibility demand that can get in the way of genuine progress.

It’s not. AI ethics means making models the best way possible, with humans in mind and in the loop, and it’s indispensable to the future of the technology and the systems people choose to run the world.

These power dynamics seem most daunting when we have no other vision of the future, no alternative possibility than a jobless planet in a global surveillance state marching toward World War III.

In charting a path to a better world, it is imperative to recognize the power dynamics in play, because just as AI itself can be a tool or a weapon, it can empower or disadvantage individuals and society as a whole. It’s incumbent on startups, tech giants, and communities who want a better world to dream of what’s possible and share those aspirations.

AI is transforming society, and it can’t just be the privileged few who decide how that will happen or frame what that world looks like.
