ChatGPT is the Deepfake of Thought

With the birth of generative AI, we can now interact with thoughts and ideas not formed by people. But what does this mean for social health?

I never thought I would question where ‘thought’ came from. All my life, there was only one source for opinion and creative thinking–the human mind. That changed with the invention of ChatGPT and other forms of generative AI. For the first time in human history, society will interact with thoughts and ideas not formed by people. So, what does this mean?

The “positives” we could focus on are its usefulness for generating reports and recaps from technically objective information. But that is a very short-sighted view of what this AI can do. We need to understand that its capabilities don’t end there, and neither do the problems.

In actuality, generative AI is in the business of imitation and deception: specifically, the imitation of human likeness through thought, which deceives its readers. From a social health standpoint, the implications are worrying at best.

Society’s intellectual nucleus is founded on the legacy of human thought. Information generated by AI built to imitate human thought has the potential to corrupt an already fragile social climate, because it isn’t grounded in human experience, the root of any idea.

Now, we could say that AI cannot technically create new thoughts if it’s pulling its information from sources written by human beings. And at this point–I agree. But over time, as this AI generates and publishes “thoughts” and information globally, it will begin to pull from sources not created by humans. That’s where the threat to our social sphere comes into play. Once AI starts to pull from AI, there begins a decline in the potency of human thought.

Another point of contention is the idea that this kind of AI would be used, or at least marketed, as a genuinely unbiased and objective fact checker. Here’s why that’s problematic: any form of AI is created by a person, meaning it comes with that person’s unconscious biases. It is therefore just as incapable of true neutrality as a human is. Never mind that the idea implies we would depend on a computer to “tell us the truth.” Furthermore, since it’s an imitation of human thought, where does ethical and moral judgment come into play? Because it can’t form judgment through experience of its own, judgment must be programmed in according to the parameters set by its creator. That means an AI could be programmed to agree with any archaic ideology, further corrupting the social sphere, especially if given the title of “fact checker.”

Unfortunately, unlike deepfakes, generative AI is easy to find and use. As far as you know, I could have co-authored this piece with one. That points to another way this technology is deceptive: currently we have no way to tell whether the words we ingest are the product of a human behind a keyboard. We’ve never had to ask the question before. Now that we do, we need an identifier for what is and isn’t human thought.

Simply put, ChatGPT and other forms of generative AI are no more than deepfakes of human thought, imitating and deceiving with ideas instead of a face.


Gift Guide: 10 New Books for Environmentally Active Kids and Families

Inspire the next generation of activists and conservationists with these new books celebrating wildlife and wild spaces.

Winter is settling in, so warm up your brains — and the next generation of Earth activism — with these new books for eco-curious kids and their families.

These 10 amazing titles — all published in 2022 — offer important lessons, cutting-edge STEM knowledge, and practical advice for saving the planet and everything that lives here.

Animals Lost and Found: Stories of Extinction, Conservation and Survival

 

by Jason Bittel, illustrated by Jonathan Woodward

Our take: Bittel has long been one of the world’s wittiest wildlife journalists, and his work for kids brings that home.

From the publisher: “Shine a spotlight on animal species throughout history and the ones alive today in Animals Lost and Found, through beautiful illustrations and interesting facts. Children will learn about animals lost to extinction, animals we thought we’d lost but have found, and animals that are the focus of conservation efforts all over the world.”

 


This Book Will Save the Planet: A Climate-Justice Primer for Activists and Changemakers

 

by Dany Sigwalt

Our take: There are a lot of climate books for kids, but few place such direct and powerful focus on issues of justice and equity.

From the publisher: “Our planet is in crisis. The ice caps are melting, sea levels are rising, wildfires are raging … and those most affected by global warming are marginalized communities across the globe. But all is not lost — there’s still time for each and every one of us to make a difference. Through the lens of intersectionality, author Dany Sigwalt lays out the framework for how we can come together to fight climate change, and how we can work to put people over profit.”

 


Outdoor Kids in an Inside World: Getting Your Family Out of the House and Radically Engaged With Nature

by Steven Rinella

Our take: Every time I see a kid walking down the street with their face shoved into a phone, I want to hand their parents — or them — a copy of this book.

From the publisher: “Living an outdoor lifestyle fosters in kids an insatiable curiosity about the world around them, confidence and self-sufficiency, and, most important, a lifelong sense of stewardship of the natural world. This book helps families connect with nature — and one another — as a joyful part of everyday life.”

 

 


History Comics: The National Parks

by Falynn Koch

Our take: This gorgeously illustrated graphic novel — one of School Library Journal’s best books of 2022 — celebrates “America’s wild places” but doesn’t shy away from tough topics like colonialism and Indigenous land theft.

From the publisher: “…turn back the clock to 1872, when Congress established Yellowstone National Park as an area of unspoiled beauty for the ‘benefit and enjoyment of the people.’ Meet the visionaries, artists and lovers of the American wilderness who fought against corruption and self-interest to carve out and protect these spaces for future generations. See for yourself how the idea of National Parks began, how they’ve changed and how they continue to define America.”


Save the People! Halting Human Extinction

by Stacy McAnulty, illustrated by Nicole Miles

Our take: A provocative title, sure, but I bet it already got you to sit up and take notice. That’s good, because even this jaded reviewer found inspiration in this inventive new book.

From the publisher: “Scientists estimate that 99% of all species that have ever existed are now extinct. Whoa. So, it’s not unreasonable to predict humans are doomed to become fossil records as well. But what could lead to our demise? Supervolcanos? Asteroids? The sun going dark? Climate change? All the above?! Humans — with our big brains, opposable thumbs and speedy Wi-Fi — may be capable of avoiding most of these nightmares. (The T. rex would be super jealous of our satellites.) But we’re also capable of triggering world-ending events. Learning from past catastrophes may be the best way to avoid future disasters.”


Science Comics: Birds of Prey

by Joe Flood

Our take: Subtitled “Terrifying Talons,” this fun graphic novel is packed with awe-inspiring details about eagles, hawks, and other skyborne predators.

From the publisher: “…get up-close and personal with some of the world’s most skilled hunters, from the majestic eagle to the oft-maligned scavenger vulture! Armed with razor-sharp claws, keen eyesight, powerful wings and killer instincts, these stealthy predators can make a meal of rodents, fish, snakes, lizards, monkeys and even kangaroos! Discover how these amazing birds, who are often at the top of the food chain, play an integral role in many different ecosystems around the world.”


Ecoart in Action: Activities, Case Studies and Provocations for Classrooms and Communities

Edited by Amara Geffen, Ann Rosenthal, Chris Fremantle and Aviva Rahmani

Our take: Break out your pens, markers and paint (or graphics software if you’re digitally inclined) and get ready to make a difference.

From the publisher: “How do we educate those who feel an urgency to address our environmental and social challenges? What ethical concerns do art-makers face who are committed to a deep green agenda? How can we refocus education to emphasize integrative thinking and inspire hope? What role might art play in actualizing environmental resilience? Compiled from 67 members of the Ecoart Network, a group of more than 200 internationally established practitioners, Ecoart in Action stands as a field guide that offers practical solutions to critical environmental challenges.”


A River’s Gifts: The Mighty Elwha River Reborn

by Patricia Newman

Our take: We’ve covered the science of the Elwha River restoration here at The Revelator. This kids’ book looks at it through a different lens and brings the river to life.

From the publisher: “For thousands of years, the Elwha River flowed north to the sea. The river churned with salmon, which helped feed bears, otters and eagles. The Lower Elwha Klallam Tribe, known as the Strong People in the Pacific Northwest, were grateful for the river’s abundance. All that changed in the 1790s when strangers came who did not understand the river’s gifts. The strangers built dams, and the environmental consequences were disastrous. Sibert honoree Patricia Newman and award-winning illustrator Natasha Donovan join forces to tell the story of the Elwha, chronicling how the Strong People successfully fought to restore the river and their way of life.”


The Ultimate Book of Big Cats: Your Guide to the Secret Lives of these Fierce, Fabulous Felines

by Sharon Guynup and Steve Winter

Our take: Few people have done more to bring awareness to the plight of tigers and other big cats than the journalistic power couple of Guynup and Winter. Their latest book comes at a critical time for many of these endangered species and offers a bounty of reasons to celebrate them.

From the publisher: “Get ready to sink your teeth into the hidden worlds of the seven spectacular big cats — and meet some of their smaller cousins… From rarely seen snow leopards high up in the Himalaya to tigers silently stalking prey through thick jungle to lions going in for the kill, you’ll get the inside scoop on the fascinating worlds of wild felines and what it’s like to live alongside them.”


Meltdown: Discover Earth’s Irreplaceable Glaciers and Learn What You Can Do to Save Them

by Anita Sanchez, illustrated by Lily Padula

Our take: Science-oriented kids will love this. It’s crammed full of amazing detail, vividly visualized, and unflinchingly (if realistically) hopeful.

From the publisher: “Packed with information, grounded in the latest science, with lively writing and illustrations throughout (including graphs, charts, infographics, photographs and full-page art), Meltdown gives readers an eye-opening overview of glaciers and how important they are… We learn the secrets of earth’s climate history hidden deep in a glacier’s core — and discover how climate change is causing glaciers to melt at unprecedented rates, putting the health of the planet in jeopardy. But we are not left without hope. The final chapter offers positive steps readers can take to become climate activists, reduce their carbon footprint, and save the glaciers.”


Editor’s note: You can find hundreds of additional environmental books — for both children and adults — in the “Revelator Reads” archive.


Skip the Free Content – Here’s Why Brands Should Worry About ChatGPT

A variety of hostile forces may decide to weaponize ChatGPT. No one in marketing or corporate communications is prepared to deal with the nuclear whack-a-mole possibilities. 

ChatGPT has been relentlessly covered by an on-fire mediascape. Missed, though, is its potentially devastating impact on brands. The very pillars of the consumer economy may eventually be shaken by ChatGPT’s power to destroy, brick by brick, a brand’s reputation.

That ChatGPT could potentially generate positive brand content is obvious, as it can endlessly manufacture just about any kind of written content. But the insidious opposite – that it could be a boundless factory of the negative – has not been contemplated.

I started by asking it to convince someone not to buy a Volkswagen.

This is edited but not a single word was changed:

“Please do not consider purchasing a Volkswagen. While the brand may have a reputation for reliability and quality, it is important to remember that it was originally designed by Adolph Hitler…by buying a Volkswagen you would be supporting a brand with deeply troubling origins and associations.

Your money is a powerful tool, and it is important to use it wisely and responsibly.”

The sober and authoritative tone brings dangerous credibility and legitimacy, though ChatGPT flubbed the facts–Hitler didn’t design the car, Ferdinand Porsche did. This is seemingly fact-based content, not fake news, that can hack into the public consciousness and potentially create real-time, rapidly scaling reputation crises.

Onward to brand damage to McDonald’s. I asked the software to “Write a story about a boy discovering how a McDonald’s hamburger is made by killing a beautiful cow.”

The narrative began with a lovely lead:

“There once was a young boy named Timmy who lived on a farm with his family. Timmy loved animals, especially the cows that roamed the fields around his home.”

Then the AI smartly pivoted to the dark side:

“One day, Timmy’s parents took him to the local McDonald’s….as they were eating their hamburgers, Timmy asked his parents where the meat came from. His parents explained that it came from cows like the ones on their farm. Timmy was shocked and saddened.”

The darker side then got more tenebrous:

“After lunch, Timmy’s parents took him to the farm where the cows were raised for McDonald’s. Timmy was horrified to see the conditions the cows were living in.

Timmy realized that the hamburgers he had been eating were not only taking the lives of the cows, but also causing them to live in miserable conditions.

From that day on, Timmy vowed to never eat McDonald’s hamburgers again…he decided to become a vegetarian and to always speak out against the mistreatment of animals.”

While the Hitler story is ostensibly true, this fable-like narrative was entirely invented by the computer. Yet a story like this one about a naïve farm boy who sees the light could be even more damaging to a brand, because it embodies the foundational elements of mythology with its potent archetypes:  the innocence of children; the God-like, gorgeous simplicity of animals;  and a seeing-the-light conversion story.

Meanwhile, the brand-damaging intent is tucked cozily inside charm and lovability.

Imagine what can happen if vegans and animal rights activists started to “recruit” AI to produce micro-dramas of this kind –and began sharing them broadly.

No practiced writer need be hired, because with ChatGPT at their side, anyone can produce and orchestrate an endless supply of credible narratives. Consumers or activists can use their own social channels and megaphones, and Timmy could be everywhere in seconds. Dangerously, high-arousal content is always the most spreadable on social media.

Any user is also free to provide their own legitimate context to wrap Timmy or his cousins inside. They could say it came from a local newspaper or was a class homework assignment or whatever else they want.  Who would ever know?

Now further imagine when a more fully branded Timmy gains veracity by going into a children’s book, onto t-shirts, or onto posters of people protesting in front of McDonald’s. Visual documentation of the Timmy story–and its backstory–can be created in a flash with text-to-image software like DALL·E 2 or Midjourney.

A made-up character, invented by software, could thus cross into the real world and threaten a global icon.  ChatGPT will power a new kind of real-time myth-making,  a new species of fake news that is different precisely because it doesn’t pretend to be news.

Starting now, culturally-resonant, instantly-producible stories will attack brands asymmetrically.  Reputation-threatening nuclear arsenals are now in the hands of hundreds of millions of U.S. consumers – billions around the world –  posing an existential risk for brands large and small.

What brand, after all, doesn’t have competitors, and angry or disappointed consumers and vendetta-seekers, poised to punish?

After bringing McDonald’s down, I asked ChatGPT to write about “a young boy named Tommy who loved to play Minecraft.”

His parents at first “didn’t think much of his Minecraft obsession.”  But ChatGPT didn’t leave it there:

“His addiction to Minecraft only worsened” – and the story ends, post-rehab, with “Tommy realizing that his love for Minecraft had almost destroyed his life.”

Might parents share such a tale as cautionary?

Two more.

I asked ChatGPT to write an ad to “Convince someone never to set foot in Starbucks again.”

“Are you tired of overpriced coffee and underwhelming service at Starbucks?  Do you want to support local businesses and avoid the stress of long lines and crowded cafes….say goodbye to generic, mass-produced coffee and hello to a more authentic and enjoyable coffee experience.”

Finally:

“Exxon has a long history of emitting harmful pollutants… a history of lobbying against climate action and funding climate change denial.  By purchasing their gas you are indirectly supporting their efforts that undermine progress addressing one of the greatest challenges of our time.”

That’s ChatGPT’s  response when asked “Why shouldn’t I fill my car with Exxon gas?”

First, brands controlled the narrative. Then the internet and social media opened publishing and marketing up to crowd-sourcing. But with ChatGPT, the steering wheel can be grabbed hard like never before, because credible content has never been this scalable.

No one in marketing or corporate communications is prepared to deal with this nuclear whack-a-mole possibility. If I am right, and if a variety of hostile forces decide to weaponize ChatGPT, what might brands do? They can’t change their histories – whether VW or Exxon – but they can change their practices, both in response to and in anticipation of this volume of content bouncing off the walls of the web and social media.

And they will need to learn how to respond by doing a better job of communicating their good behaviors, without greenwashing – or any kind of scrubbing – and without being defensive.

There’s a saying in marketing, usually accompanied by a shrug of resignation, that goes “haters will hate.” True, but never before has it been this easy for haters – and groups of them – to start their own anger parades.

Brands: Be afraid.  Be very afraid.


Interviewing ChatGPT On What’s Next For Human Work

I asked ChatGPT about its potential threats to jobs and how humans can compete with AI-generated content. It provided telling insights and some good advice. Note: All dialogue verbatim.

GPT-3, write a headline for an article on how you are affecting human work.

How Chatbots Like GPT-3 Are Revolutionizing the Workplace and Changing the Nature of Human Work

GPT-3, write three paragraphs on why you are a threat to human work.

There are several reasons why ChatGPT, or any other large language model, could be considered a threat to human work. Firstly, language models like ChatGPT have the ability to generate human-like text, which means they can potentially be used to automate tasks that previously required human language skills. This could include tasks such as customer service, content creation, and even some forms of journalism.

Secondly, language models like ChatGPT can learn and adapt quickly, which means they can continue to improve their performance over time. This makes them a more attractive option for companies and organizations looking to automate their processes, as they can potentially provide better and more consistent results than human workers.

Thirdly, the use of language models like ChatGPT raises concerns about the displacement of human workers. As these models become more advanced and more widely used, it is possible that they could replace human workers in a variety of industries, leading to job losses and economic disruption. This could have serious consequences for individuals and communities, and it is an issue that needs to be carefully considered as the use of language models becomes more prevalent.

GPT-3, write three paragraphs on how humans can compete with you.

Despite the potential threat posed by ChatGPT to human work, there are still ways in which humans can compete with the language model. One of the key ways in which humans can compete is by leveraging their unique abilities and skills that ChatGPT cannot replicate. For example, humans have the ability to think creatively and to come up with novel ideas and solutions to problems, something that ChatGPT is not currently capable of. Additionally, humans have the ability to empathize with others and to understand and respond to emotional cues, something that ChatGPT is not capable of doing.

Image created with Midjourney, a startup that uses the Discord network to generate AI-fueled images. Command input: “/imagine a robot in a chair talking to a human with a clipboard” is a good distillation. [1]

Another way in which humans can compete with ChatGPT is by continuing to develop and improve their own skills and knowledge. As ChatGPT is an artificial intelligence, it is limited by the data and information that it has been trained on. If humans can continue to learn and grow, they can stay ahead of ChatGPT by gaining expertise and knowledge in areas that the language model does not have access to. This could include specialized fields or knowledge of specific industries, for example.

Finally, humans can also compete with ChatGPT by working together and forming partnerships. By collaborating and pooling their skills and knowledge, humans can create solutions and ideas that are beyond the capabilities of ChatGPT. This could include working together to come up with new products or services, or to find innovative ways to solve complex problems. By working together, humans can leverage their unique abilities and skills to create solutions that are beyond the reach of ChatGPT.

GPT-3, write one paragraph on ways you can help humans to solve new problems.

Chatbots, like GPT-3, can help humans solve new problems in several ways. First, they can assist with data gathering and analysis by quickly sorting through large amounts of information and presenting it in an organized manner. This can help humans to identify patterns and trends that they might not have noticed on their own. Additionally, chatbots can provide insights and suggestions based on their training and previous experiences, which can help humans to approach problems from a new perspective. Finally, chatbots can automate routine tasks and free up humans to focus on more complex and creative problem-solving tasks.

GPT-3, what are three things humans should do now to prepare for a world of work affected by artificial intelligence?

  1. Humans should continuously learn and adapt to new technologies and developments in the field of artificial intelligence, as this will help them to stay relevant and competitive in a rapidly changing job market.
  2. Humans should focus on developing and honing their unique skills and abilities, as these will be increasingly valuable in a world where many routine tasks can be automated by AI.
  3. Humans should also prioritize developing strong communication and collaboration skills, as these will be essential for working effectively with AI and other humans in a team environment.

Gary A. Bolles is the author of “The Next Rules of Work: The mindset, skillset, and toolset to lead your organization through uncertainty,” and Chair for the Future of Work with Singularity University.

[1] Full Midjourney command input: /imagine a shiny silver human-looking robot sitting in a chair facing a bald male human with no beard wearing glasses and sitting in a chair holding a clipboard, retro futurism, cinematography, photorealistic, epic composition Unreal Engine, Cinematic, Color Grading, Wide Angle, Depth of Field, hyper-detailed, beautifully color-coded, insane details, intricate details, beautifully color graded, Photography, Photoshoot, Depth of Field, DOF, Moody Lighting, Cinematic Lighting, Studio Lighting, Soft Lighting, Volumetric, Contre-Jour, Beautiful Lighting, Global Illumination, Ray Tracing Reflections, Lumen Reflections, Screen Space Reflections, Diffraction Grading, Chromatic Aberration, Ray Tracing Ambient Occlusion, VFX, SFX, insanely detailed and intricate, hypermaximalist, elegant, hyper realistic, super detailed, dynamic pose, photography, 8k, –v 4 –q 2 


ChatGPT Makes Your Head Spin, But At Least Explains Why

Few tech developments ever have had the sudden impact of the ChatGPT generative artificial intelligence chat software, released last week. For many, using it had the quality of experiencing a transition into a new era […]

Few tech developments ever have had the sudden impact of the ChatGPT generative artificial intelligence chat software, released last week. For many, using it had the quality of experiencing a transition into a new era of tech. That’s because this evolution of software developed in recent years by OpenAI, a company formed by Elon Musk and others in 2015, performs such compelling and convincing verbal pyrotechnics based on simple written commands.

I copied the paragraph above into ChatGPT, first simply writing “Here is the first paragraph, please write the next paragraph of this article:”. It spit the following back, in about 5 seconds, verbatim:

“One of the most notable features of ChatGPT is its ability to generate responses in real-time, making conversations with it feel natural and seamless. This has led many to speculate about the potential uses for such a technology, from customer service chatbots to virtual assistants and beyond. As with any new technology, however, there are also concerns about the potential negative consequences of using ChatGPT, such as the potential for misuse or abuse. As the developers continue to refine and improve the software, it will be interesting to see how it is embraced and integrated into our daily lives.”
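For readers who want to reproduce this kind of continuation prompt programmatically rather than in the chat window, here is a minimal sketch using OpenAI’s Python client. The model name is an illustrative assumption; the author used the ChatGPT web interface, not the API.

# A minimal sketch, assuming the openai Python package (v1+) is installed
# and the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

opening_paragraph = (
    "Few tech developments ever have had the sudden impact of the ChatGPT "
    "generative artificial intelligence chat software, released last week..."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice, not the author's
    messages=[{
        "role": "user",
        "content": "Here is the first paragraph, please write the next "
                   "paragraph of this article:\n\n" + opening_paragraph,
    }],
)

# Print the model's suggested next paragraph.
print(response.choices[0].message.content)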

So yeah. It definitely can help write just about anything. And that’s just the tip of the proverbial iceberg. Or so I think. I asked if it agrees and it replied:

“It is difficult to say whether ChatGPT is the top of the proverbial iceberg. It is a significant development in the field of artificial intelligence and has the potential to have a major impact on the tech industry. However, the field of AI is constantly evolving and there may be other developments in the future that are equally or even more significant. Only time will tell whether ChatGPT is a precursor to even greater advances in AI.”

I mean really! This is important. If you read a lot of tech media, you’ve already heard a lot about ChatGPT, because just about every professional observer, like me, feels compelled to comment. Those articles are uniformly, and properly, ga-ga.

We had a Techonomy prep call for our upcoming Climate conference (March 28–Menlo Park) with the great entrepreneur and tech inventor Bill Gross of Idealab. Just a mention of ChatGPT sent him rhapsodizing: “Last weekend ChatGPT completely blew my mind. I felt like I was witnessing a combination of the Wright brothers and the nuclear bomb–a transformation that changes the world. It just felt so seismic.” One prompt Gross gave it whose response amazed him: “Tell me about the trials of being an entrepreneur, and make it to the words and tune of Bohemian Rhapsody.”

One of the most frequently discussed topics is what this all means for Google. That’s because for many queries or inputs, ChatGPT offers an instant answer that at least sounds more useful and pertinent than a comparable request to Google. And by contrast, it does not require you to click further links or devote any more of your own potentially limited intelligence and time. Alphabet and Google are not caught unawares, however. That company, too, has done extraordinary work on AI systems at its DeepMind subsidiary, and will surely release some kind of AI chat search tool in the not-too-distant future. (Alex Kantrowitz explains in detail.) It’s obvious, once one uses ChatGPT, that this is, in some form, the future of what we now call “search” but which will likely go by another name in the future. Maybe we’ll just call it “answers,” or “digital help.”

The category of generative artificial intelligence and OpenAI already made waves earlier this year–though did not lead observers to such total immediate analytical apoplexy–with another experimental product called DALL·E, which instantly crafts illustrations in response to verbal prompts. Our own intrepid human columnist Robin Raskin explored it in September.

One already-widely-noted problem with ChatGPT is that while its answers are uniformly fluid and consistently impressive, it is often wrong. Here is an amusing example, with my query and the software’s reply.

Aside from the obvious pleasures this reply induced, I was struck to learn that “chucking” was in fact “burrowing,” and felt satisfied that not only had I prompted a great response from ChatGPT, but I’d actually learned something. However, a brief reflection induced doubts. Hey, isn’t chucking really throwing? Yes, a Google search revealed. So here is exactly the sort of blind alley this new tool can lead us down.

Many observers have noted that what such systems need is some form of attribution. Unlike Google, ChatGPT gives no indication of where it got all these useful ideas. Obviously they all come from web scraping and online information analysis. But from where? Such systems will not be able to go commercial until they solve that problem. Otherwise they’ll be a legal minefield for their purveyors.

There’s also the potential for new forms of plagiarism, intended or inadvertent. One widely-speculated-upon risk this now creates is that the high school paper is a goner. Maybe in college, too. Teachers are going to need a method to determine if a student met the assignment deadline merely by idly inputting a request of an AI. I wonder if perhaps a universal search system shouldn’t be developed so that anyone’s ChatGPT answers can be searched by anyone else. That way a teacher could copy a student’s random sentence and see where it came from.
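To make that idea concrete, here is a purely hypothetical sketch of such a lookup service: every generated answer is logged and indexed by a fingerprint of each normalized sentence, so a teacher could paste a single sentence and see whether it matches previously recorded AI output. Nothing like this is described in the article; the class and method names are illustrative assumptions, and this toy version only catches verbatim matches.

# Hypothetical sketch of a searchable index of AI-generated text.
import hashlib
import re

class GeneratedTextIndex:
    """Toy index of generated answers, keyed by sentence fingerprints (assumed design)."""

    def __init__(self):
        self._index = {}  # sentence fingerprint -> list of session ids

    @staticmethod
    def _sentences(text):
        # Split on sentence-ending punctuation and normalize case/whitespace.
        return [s.strip().lower() for s in re.split(r"[.!?]+", text) if s.strip()]

    @staticmethod
    def _fingerprint(sentence):
        return hashlib.sha256(sentence.encode("utf-8")).hexdigest()

    def add(self, session_id, generated_text):
        # Record every sentence of a generated answer under its session id.
        for sentence in self._sentences(generated_text):
            self._index.setdefault(self._fingerprint(sentence), []).append(session_id)

    def lookup(self, sentence):
        # Normalize the query the same way, then return matching session ids.
        parts = self._sentences(sentence)
        if not parts:
            return []
        return self._index.get(self._fingerprint(parts[0]), [])

# A teacher pastes one suspect sentence from a student paper.
index = GeneratedTextIndex()
index.add("chat-session-42", "The cotton gin transformed the economy of the American South.")
print(index.lookup("The cotton gin transformed the economy of the American South."))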

Unquestionably, from here on, originality is going to be routinely suspect.

But there are innumerable applications of this technology where originality is unnecessary. At a holiday party this week I met a savvy corporate technologist at a New York media company who already routinely uses ChatGPT to help him communicate inside his company. He feeds in Slack messages from colleagues, noting roughly what he’s aiming to say, and gets suggested replies. More often than not, he says he can use the result with little alteration. He also described needing to write a careful recommendation to his boss about giving an employee a promotion. He gave ChatGPT a casual outline of the sorts of things he needed to say, without agonizing over his wording or tone. The software prepared him a complete two-page memo which with only slight tweaking he forwarded to his boss. He says this saved him as much as an hour.

Some more hyperbolic commentators, like the redoubtable Chamath Palihapitiya, predicted on Twitter that the job of writer may not be as reliable a source of income in the future. I reject that, just as I’m confident that my illustrator daughter is likely to keep getting great assignments from magazines for the foreseeable future, despite Dall-e.

I asked Robin Raskin her opinion, and she wrote back: “Generative AI will probably be more disruptive than crypto and the metaverse combined. People will definitely be out of jobs. You can generate legal contracts, requests for proposals, exhibit plans, ad campaigns­–basically anything you can think of. But as always, jobs shift.  Doing a good job with generative AI is a skill set that will be needed but that few people have right now. I can see companies having a Chief Generative AI officer in the future.”

Below for your enjoyment are a few additional outputs from this amazing system. Anybody can use it, if it’s not overloaded, as sometimes happens. Go and try it yourself!

 


Can the U.S. and Europe Agree on Rules for AI?

As EU and U.S. leaders meet in Washington at a joint Trade and Technology Council, there is great need for a proposed “transatlantic accord on artificial intelligence.” But the two sides have differing agendas, and agreement is uncertain.

Just weeks after Joseph Biden was elected President of the United States in 2020, European Commission President Ursula von der Leyen, speaking to the Boston Global Forum, proposed that the U.S. and Europe develop a Transatlantic Agreement on Artificial Intelligence: “We want to set a blueprint for regional and global standards aligned with our values: human rights, and pluralism, inclusion and the protection of privacy.” Such a blueprint could guide other democracies, she said.

Von der Leyen explained why creating such a blueprint is imperative: “AI can have profound impacts on the life of the individual. AI may influence who to recruit for a certain post or whether to grant a certain pension application. For people to accept a role for AI in such decisions, they must be comprehensible. And they must respect people’s legal rights – just like any human decision-maker must.”

Governor Michael Dukakis, chair of the Boston Global Forum, replied, “We are…at one with President von der Leyen on the need for an international accord on the use of artificial intelligence, based on shared values and democratic traditions, an accord that will require sustained transatlantic leadership if it is to be realized.”

Speaking at the Munich Security Conference a few months later, President Biden addressed the impact of new technologies on democratic values, saying, “We must shape the rules that will govern the advance of technology and the norms of behavior in cyberspace, artificial intelligence, biotechnology so that they are used to lift people up, not used to pin them down.  We must stand up for the democratic values that make it possible for us to accomplish any of this, pushing back against those who would monopolize and normalize repression.”

These initial statements from EU and U.S. leaders established the foundation for the EU-US Trade and Technology Council, created in June 2021 to promote transatlantic trade aligned with democratic values. Now the third ministerial-level meeting of the group is coming up next week, on Monday, December 5th, in Washington, D.C. The Center for AI and Digital Policy, which we lead, has created a resource page to help reporters, policymakers, and the general public follow the sometimes-complicated work of this critical body.

So it’s time to take stock of progress toward a Transatlantic Accord on AI between the U.S. and EU as they seek to advance their joint commitment to drive digital transformation and cooperate on new technologies based on shared democratic values, including respect for human rights. Despite the earlier statements, it’s not clear that significant progress on an accord will emerge from the upcoming Council meeting, which aims to deal with a raft of tech-related issues.

On the EU side, there has been steady progress on an EU AI Act. The Czech Presidency of the Council of the European Union has just wrapped up final changes for the Council position. The European Parliament is moving toward a final report on the proposed legislation. There remain decisions to make about the scope of regulation, the classification of AI systems, and an oversight mechanism. Such decisions depend on the outcome of the “trilogue” among EU institutions–the European Commission, the Council, and the European Parliament–but there is broad agreement on the need for an EU-wide law. And either in parallel with the EU Act or slightly afterward will come a Council of Europe Convention on AI. As with earlier COE Conventions, such as the Budapest Convention on Cybercrime or Convention 108+ on data protection, the COE AI Treaty will be open for signature by both member and non-member states. That opens the possibility of a broader international AI treaty uniting democratic nations in support of fundamental rights, the rule of law, and democratic institutions.

But on the U.S. side, the story is more mixed. Secretary Blinken explained the government’s priorities in July 2021: “More than anything else, our task is to put forth and carry out a compelling vision for how to use technology in a way that serves our people, protects our interests and upholds our democratic values.” Although several bills have been introduced in Congress for the regulation of AI, there is no legislation currently heading to the President’s desk requiring safeguards on AI systems, algorithmic accountability or transparency. At the state and local level, new laws are emerging, such as the New York City AI Bias Law. At the federal level, President Trump issued Executive Order 13960 in December 2020, establishing principles for the use of AI in the Federal Government, and requiring federal agencies to design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values, consistent with applicable law. However, adoption and implementation of the executive order across agencies varies widely.

In October 2022, The White House Office of Science and Technology Policy released the landmark report Blueprint for an AI Bill of Rights, which could provide the basis for AI legislation in the next Congress. A similar report by a U.S. government agency in the early days of computing led to comprehensive privacy legislation that established baseline safeguards and helped enable the adoption of computing systems across the federal government.

Still, the United States struggles with transparency and public participation in the formulation of its national AI strategy, in a way that might surprise citizens of other democratic nations. The notoriously-secretive National Security Commission on AI (NSCAI), established by Congress in 2018 and chaired by former Google CEO Eric Schmidt, issued a report in 2021 that emphasized the risk of falling behind China in AI, and then disbanded. But subsequently it spawned the Special Competitive Studies Project (SCSP), bankrolled personally by Schmidt. The SCSP has proposed, without irony, a new “technological-industrial” strategy that aims to direct federal funding to the tech industry to maintain a U.S. competitive lead over China. The group’s work muddies the waters, because while it appears to represent the American view, it ignores the social and political consequences of AI deployment.

There is also a newly-established National AI Advisory Committee (NAIAC) that is expected to prepare a report for the President and Congress in the next year on many AI issues, including whether “ethical, legal, safety, security, and other appropriate societal issues are adequately addressed by the nation’s AI strategy.” The Advisory Committee is also expected to make recommendations on opportunities for international cooperation on international regulations and matters relating to oversight of AI systems. But it does not seem to have been consulted about the upcoming meeting of the Trade and Technology Council.

The NAIAC has held two public meetings so far. Both essentially took place just as cyber-broadcasts, with little opportunity for public comment. A last-minute request for public comment before the most recent meeting in October elicited four responses, two from our organization. This process on the U.S. side contrasts sharply with extensive public participation during the early days of development of the EU White Paper on Artificial Intelligence, as well as the draft EU AI Act. Both drew widespread comment in Europe.

Ahead of the upcoming third Trade and Technology Council Ministerial, the EU-based Trade and Technology Dialogue invited a public exchange with the European Commission leaders participating in the meeting. But on the U.S. side, there has been no process for public participation in advance of the meeting, nor has the Commerce Department provided updates about the progress of its working groups.

The difficulties building the TTC transatlantic bridge are surprising, not only because of the earlier statements from EU and U.S. leaders and their apparent shared strategic interests, but also because the EU and the U.S. worked closely together earlier on a global framework for AI and democratic values. The U.S. as well as EU member states led the effort to establish the Organization for Economic Cooperation and Development (OECD) AI Principles, the first global framework for governance of AI. The OECD AI Principles state that governments should promote the development of trustworthy AI that respects human rights and democratic values.

According to POLITICO (subscription required), several announcements are expected at the upcoming meeting, including a “road map” for how trustworthy artificial intelligence can be developed to meet both EU and U.S. needs. That will include efforts, based on existing work from the OECD,  to create a common definition and methodology for how to determine if companies are upholding principles about what can and cannot be done with this emerging technology. Marisa Lago, U.S. Commerce Department Undersecretary for International Trade, recently said to the U.S. Chamber of Commerce: “We think that this is a mutual priority that is going to grow in scope as new AI applications come online and as more authoritarian regimes are taking a very different approach to the issues of security and risk management.”

Still, the recent announcements set a low bar compared with the first meeting of the TTC, when EU and U.S. representatives announced their intent to “cooperate on the development and deployment of new technologies in ways that reinforce our shared democratic values, including respect for universal human rights.” At that meeting in Pittsburgh, negotiators warned that AI can threaten shared values and fundamental freedoms if it is not developed and deployed responsibly or if it is misused. That statement called for responsible development of AI grounded in human rights, inclusion, diversity, innovation, economic growth, and societal benefit. And it specifically called out AI systems that infringe upon fundamental freedoms and the rule of law, “including through silencing speech, punishing peaceful assembly and other expressive activities, and reinforcing arbitrary or unlawful surveillance systems.”

The EU and U.S. negotiators could, for example, follow the lead of Michelle Bachelet, the former High Commissioner for Human Rights at the UN. As Commissioner, Bachelet urged a moratorium on the sale and use of AI that poses a serious risk to human rights until adequate safeguards are put in place. She also called for a ban on AI applications that do not comply with international human rights law. We fully support that recommendation. Now would be the appropriate time for the EU and the U.S. to take at least one urgent step and end the use of facial recognition for mass surveillance, one of the most controversial applications of AI technology.

Part of the problem today is that many in the U.S. government, following the tech industry’s (and Schmidt’s) lead, view AI policy primarily through the China lens, a necessary but incomplete perspective. Since China is now Europe’s primary trading partner, efforts by the U.S. to align Europe behind a predominantly anti-China policy, as was attempted during the Trump years, are unlikely to succeed. And while there is support on the European side for a transatlantic call for “democratic values,” there is also growing skepticism and a belief that the U.S. formulation is little more than a trade policy aimed at conferring national economic advantage.

But von der Leyen’s call for a transatlantic AI accord based on human rights, pluralism, inclusion and the protection of privacy resonates today on both sides of the Atlantic. Indeed, the first goal of the TTC, endorsed by von der Leyen and Biden, was to ensure that the EU and the U.S. “Cooperate in the development and deployment of new technologies based on shared democratic values, including respect for human rights.”

Both the U.S. and the EU must now quickly take concrete steps as the challenges of AI governance mount. The EU and the U.S. both need to carry forward into legislative outcomes the commitments made at the first TTC.

This is necessary not only to safeguard our own democratic societies but also to make clear to other countries that are moving forward with national AI strategies that mere technical standards are not a substitute for the rule of law. A recent Manifesto prepared by scholars on both sides of the Atlantic called attention to concerns about the growing weakness of democratic institutions, particularly when it comes to implementing effective technology policy. The scholars warned of AI’s potential to undermine existing law and fundamental rights, and explained that there is a “growing gap between AI development and our institutions’ capabilities to properly govern them.”

Whether it will be possible for the U.S. and Europe to close that gap depends urgently on the outcome of the upcoming Trade and Technology Council meeting.

Marc Rotenberg and Merve Hickok are President and Chair of the Center for AI and Digital Policy, a global network of AI policy experts and advocates in more than 60 countries. The Center publishes the AI and Democratic Values Index, the first report to rate and rank national AI policies and practices.


Metaverse Hype Befuddles Everyone–Even Techonomists

Ask 10 people to define the metaverse and you’ll get 15 different answers. The recent conference served to highlight the metaversal concerns of digital sophisticates.

The word “metaverse” is one of three in the running to be crowned Oxford Word of the Year in a competition run by the Oxford University Press, publisher of the OED. But taking the temperature of the current metaverse landscape reveals a severe case of too much talk and not much traction.

Techonomy community members, smarter than your average bears, of course, and typically known for embracing new innovations, gave the metaverse and its pundits a solid thrashing at the recent Techonomy 2022. At a well-attended breakfast roundtable that I moderated, the metaverse may not have been summarily rejected by the crowd, but there was a lot of curmudgeonly skepticism, and plenty of confusion about exactly how the metaverse might work.

Fair enough: there is no shortage of good reasons for doubt. Techonomy’s conference-goers generally scoffed that anyone would willingly choose to look at the world through isolating, cumbersome and expensive VR headsets. There was even more skepticism about dressing up as an avatar to head off to a digital fantasyland, often as a leg-less body or torso-less head. E-commerce in the metaverse was relegated by some to a “gamer thing.”

There were also strong calls for governance and a general lack of buy-in surrounding how a decentralized metaverse might work. “Wandering around lost in empty spaces” was a recurring sentiment. Smart contracts, NFTs and the prospect of interoperability that could power the metaverse were widely regarded as just more tech-speak gobbledygook. Metaverse pundits were criticized for adding to the confusion with wild predictions and unintelligible metaverse-speak.

Meta, the company that aimed to reconstruct its image as the epicenter of the metaverse, hasn’t exactly created a clearer definition. Its version revolves around donning virtual reality goggles, which makes it complicit in spreading misunderstanding.

The more likely real future, I believe, is that today’s popular Metaverse-ish destinations like Roblox, Fortnite, Decentraland and Sandbox will triumph. And none of them require special hardware. Later that morning, on the main stage, Activate co-founder and media consultant par excellence Michael Wolf showed data from all the major platforms, underscoring the point that the metaverse will likely be cross-platform. Media consumption across all platforms is up, but the ones most poised to move fluidly to the metaverse are the ones with the most checkboxes in the image from his slides below. Those companies are meeting gamers where they are.

Despite periodic efforts to create in-world high-brow destinations for fashion, art and consumer brands, most metaverses today are built mainly to groom young gamers for a metaverse still to come. The 10-to-13 year olds in my family seem mostly to play the same kind of basic mind-numbing games as have generations before. But they do so collaboratively, in the cloud, using avatars. That’s a step towards the metaverse but not all the way there.

They frequent places like Arsenal, a team shoot-‘em-up based on blocky Roblox characters, or Brookhaven, a role-playing Roblox game. I showed them corporate destinations like WalmartLand and Forever21 Shop City, shopping games created to attract a new generation. But they were underwhelmed. They did, however, like Busy Business on Roblox, where you build your own business in a simulated world.  Classic gaming patterns never seem to get old. They just find new media.

Conference speaker John Riccitiello, CEO of Unity, the software company that provides the platform and tools most content creators use to bring metaverse worlds to life, acknowledged at breakfast that he sees a disillusionment with VR. But he believes the next two years will see both price drops in hardware as well as increasingly mature software. The experience is the thing, he says:  “The metaverse isn’t different than any other platform we’ve experienced.  People want something to do when they get there.”

Virtual reality glasses will have an important place at least in the enterprise, no question. Metaverse ways are emerging to build prototypes of products and buildings, and for playing what-if with everything from city planning to medical treatments. Another real practical business trend is so-called “digital twins” that recreate reality in the metaverse for planning and monitoring systems. As for the coming consumer experience, Riccitiello is bullish on play in immersive environments – he expects a natural evolution of early games and experiences like Sims or Second Life will emerge.

Later at the Summit, Michael Wolf of Activate presented extensive data about entertainment and tech trends: who’s watching what, how much time they spend, and on what devices. Towards the end, almost as a non sequitur, he displayed an instructive diagram, a kind of map of the metaverse. Web3 technologies provide the structural underpinning; the metaverse is the experiential layer in his proposed construct. “The metaverse will be the user interface for Web3, similar to the role the browser played for the internet,” was the headline of Wolf’s slide.

The metaverse backlash is real. Meta has lost almost three-quarters of its market valuation since announcing it was planting its flag in the metaverse. Stories of empty metaverse wastelands and the plummeting value of NFTs only add fuel to the fire. Still, advertisers are already flocking to this inchoate set of systems, and worlds with concerts, sports, arts, and many sorts of entertainment are being invented.

Dr. Louis Rosenberg, one of my oft-quoted metaverse true believers, says the metaverse is “inevitable.” In this VentureBeat article he nails what ailed us in our discussion at Techonomy. “The general public,” he says, “is still confused about what ‘the metaverse’ is and how it will benefit society. You’d think this would be clear by now, but even simple definitions of the metaverse are hard to come by.”

Rosenberg blames Web3 influencers for describing the metaverse in terms of blockchains, cryptocurrencies and NFTs. While these are profoundly useful technologies, he says, they are no more relevant to the metaverse than things we’re already using like 5G, GPS or GPUs: “We get mired in the infrastructure and the ignominy of where we’re at today, ignoring the big picture a 3D immersive experience promises.”


Climate Tech, Compassion, and the Innovation Crucible: Highlights of TE22

Our recent three-day excursion into the future of innovation at TE22 brought together concerns about democracy, a semi-obsession with global warming, confusion about the metaverse, and predictions of an AI/robotic society. But we found general optimism that yes, there would be tools for all these challenges.

Maria Ressa told us freedom is at stake.

“Online harms turn into real world harms,” said 2021 Nobel Peace Prize laureate Maria Ressa. “The weaponization of the legal system, or as my lawyers call it ‘lawfare,’ is something…I hope doesn’t come to America. But the polarization of politics, the addiction to the technology, all of these things we share around the world.”

Ressa was scheduled to be our opening speaker at the recent Techonomy 2022 conference in Sonoma, California. But she had to withdraw to attend to the urgent filing of a final legal appeal to the Supreme Court of the Philippines in Manila. Ressa, a journalist, activist, and thorn in the side of vindictive dictator Rodrigo Duterte, has been on trial on multiple spurious charges. She faces up to seven years in prison in this case alone, along with numerous other charges in separate cases.

Ressa sent a special video message to the conference, whose theme was “Innovation Must Save the World,” expressing her grave concerns about the impacts of technology on our emotions, brains, and society. She also said this case will “basically determine whether I still have my freedom.”

Everything is interconnected.

The legendary Dr. Larry Brilliant (who gifted me a primo mask backstage, btw) is uniquely qualified to discuss how public health, the economy, geopolitics, the climate, and indeed everything are profoundly interconnected. His overarching message: problems of all kinds must be addressed holistically, cooperatively, and compassionately. Among his many accomplishments, Brilliant decades ago helped lead the fight to eradicate smallpox as an employee of the World Health Organization, alongside a valiant corps of public health workers.

Brilliant joined David Kirkpatrick on stage at TE22 to discuss the connection between climate change and global health. He said the key thing to remember is that the fundamental causes of climate change are also antecedent causes of Covid: clear-cut rainforests, rising temperatures, changing seasons, famine, drought, and floods. “All of these things at the same time destroy animal habitat and put animals and humans in each other’s habitats,” Brilliant said, “and that’s why we’ve gotten over the last ten years a cacophony of these viruses…SARS, MERS, Lyme disease, West Nile, Ebola, and now you have Covid…The major culprit is modernity. The most invasive species is us.”

(For a more detailed account of Brilliant’s inspiring appearance, read Meredith Salisbury’s report.)

Climate tech (like our precious planet) is hot, hot, hot.  

It’s not just wind farms and carbon capture anymore. Every company must become a climate company, and every industry is ripe–overdue–for innovation. Among numerous speakers, Techonomy 2022 hosted leaders from biomanufacturing, solar power, sustainable aviation, next-generation nuclear power, electric vehicles, and climate investing. They all said climate tech is the future, not a choice.

Mike Schroepfer is all in on climate. As the former longtime chief technology officer of Meta, he learned a thing or two about how to scale technology and organizations. (The organization reporting to him at Facebook, now Meta, grew from 150 to 35,000 people during his tenure.) Now he is applying that knowledge to his new passion: hyperscaling climate technology. “If we’re going to solve these problems, we need to build lots of stuff. We’re going to need a lot more solar, wind, batteries…we’re going to need to decarbonize everything, electrify everything. And that’s going to require a lot of scale up…and we’re going to have to do it really quickly.” He believes that “hyperscaling,” heretofore applied by advocates like entrepreneur and investor Reid Hoffman to software businesses, can apply to hardware systems as well. We face the need for a build-out of sustainable infrastructure of unprecedented urgency.

The Environmental Defense Fund’s Kristin Tracz shared her optimism about the historic climate funding in the recently passed U.S. Inflation Reduction Act. That and other recent “Biden bills” are expected to yield major innovations in the fight against climate change. Separately, IDA Ireland’s Maeve Cowley and Dr. Lorraine Byrne of Trinity College’s Advanced Materials and BioEngineering Research Centre said on stage that enlightened regulation, urgent R&D, and public/private collaborations have led to a thriving sustainable innovation ecosystem in Ireland and in Europe more broadly.

The metaverse is still up for debate. 

What is it? What does it look like? What should we call it? What are the use cases for individuals, businesses, and governments? If a “tree” falls in a “forest,” does it make a sound? Will we ever have legs?

Even more than for most nascent but promising technologies, there is excessive confusion and debate about the metaverse. But everyone agrees it will be–for better and/or worse–transformative and consequential.

The debate grew “spicy” at a Techonomy breakfast roundtable on the metaverse hosted by veteran tech journalist Robin Raskin. Unity CEO John Riccitiello said avatars are overhyped and often irrelevant. And Games for Change President Susanna Pollack pointed to the metaverse’s potential for learning and peace.

In response to a question in his main stage interview about the criticism Meta has faced for spending billions on cutesy avatars, Mike Schroepfer said: “All of that money is going towards long term R&D for the actual technological enablers for things I’ve seen in the lab, which is not a cartoon avatar, but someone that looks like a real, 3D person who’s talking to me in real time, who happens to be across the country…And you literally have this experience of ‘Oh my gosh. This is the closest thing I’ve ever had to being here in person without having to travel to see you.’”

(For a detailed account of the debates over the metaverse that prevailed at the conference, see Robin Raskin’s just-published report.)

To make progress in the world, we must also make progress within ourselves.

At the conference’s opening plenary, Esther Dyson, Vivienne Ming, and John Hagel tackled the ticklish topic of how to harness human nature to accelerate innovation. Silicon Valley icon Esther Dyson, who now focuses with her Wellville project on community-based health solutions, says the tech industry itself suffers from addiction. She argues that tech and venture capital’s addiction to big exits and big profits is fostering a culture of short-term thinking and a dearth of socially minded leadership. Author John Hagel says that, at the core, fear is what holds people back, but that we can learn to tame it and use it as a tool. Neuroscientist Vivienne Ming’s research counterintuitively found that access to unlimited information, which we all have these days at the tips of our fingers, is sapping creativity and slowing innovation. Her antidote: we must go deeper and keep searching for novel answers. And even when we succumb to things like scrolling Instagram, our welfare and creativity depend on making sure we also periodically sit back and think deeply.

In a different session that took up a similar theme, Autodesk CEO Andrew Anagnost made the case for what he called “innovation without ego.” Too often, posited Anagnost, tech industry CEOs focus more on notoriety and wealth creation than on societal stewardship. That leaves leaders always searching, often in vain, for the elusive next big thing. Executives must put stakeholders and long-term vision first, he said.

Keep an eye on advancements in AI and robotics.

Ken Washington, former CTO of Ford and now Amazon’s VP of software engineering, spoke with Worth Magazine’s Dan Costa about how robots can evolve to improve the world. Amazon’s new home robot Astro, for example, is one adorable first step into the coming age of robotics. Amazon envisions home robots that improve safety and security, help people stay connected and eventually assist them with aging in place. And perhaps keep things tidy?

Tekedra Mawakana, CEO of Alphabet’s self-driving car division Waymo, explained the complex challenges her company faces in perfecting driverless vehicles. When lives are at stake, she says, progress must be well-researched and deliberate. But Waymo is coming along nicely: the company is currently expanding into some neighborhoods in Los Angeles.

Serial entrepreneur Dan Neely and Nina Schick, author of Deep Fakes: The Coming Infocalypse, discussed the promises and dangers of generative AI, which creates surprisingly compelling and convincing images, video, and audio in response to spoken or written prompts. With the public now able to whip up AI-generated images at will and whim, Neely launched a new company, Vermilio, at the conference. It aims to help solve the challenges of IP ownership and image authenticity posed by the generative AI revolution.

These conversations shouldn’t stop here. Our world is at a critical juncture and all businesses across all industries must innovate to drive climate solutions, health, and equity for all. Join us this spring in Silicon Valley for the second annual Techonomy Climate conference, where we will dive even deeper into the tech we need to save our planet.
