How to Fortify Your Cyber Strategy in the Wake of the T-Mobile Hack

Enterprises need to protect every aspect of the cyber journey, from identification to prevention to recovery.

We are one month into 2023, and another major cyberattack has occurred. This time, 37 million T-Mobile customers were affected by a “bad actor” who gained access to personal data, including names, addresses, emails, phone numbers, and more. The hack occurred in November, and T-Mobile hired an external cybersecurity team to investigate. The company now believes the hack is “fully contained.”

The true financial and personal impact of this hack is unknown, but it’s never been more critical to discuss cybersecurity’s future. That’s because this incident comes on the heels of what many would call a turbulent year. 2022 was dominated not only by headlines of an economic recession and geopolitical tensions but also by an ongoing stream of reports of cyber incidents. Just days after Russia invaded Ukraine in February, shocking data showed a 196% increase in cyberattacks on Ukraine’s government and military sector. From there, we saw an alarming number of breaches against U.S.-based enterprises, including Twitter, Fast Company, DoorDash, and more, not to mention international organizations.

As a result, the global average cost of a data breach grew from $3.86 million in 2020 to $4.24 million in 2021 to an all-time high of $4.35 million in 2022. In 2023, the global annual cost of cybercrime as a whole could top $8 trillion, potentially reaching a whopping $10.5 trillion by 2025.

Cybersecurity has climbed into the top three boardroom priorities in recent years. In 2023, it needs to move to the top of the list. Executives agree, with two-thirds considering cybercrime the most significant threat in the coming year. And rightfully so: another major trend in cybersecurity is increased regulation around reporting and data privacy, with the Federal Trade Commission, Food and Drug Administration, Department of Transportation, Department of Energy, and Cybersecurity and Infrastructure Security Agency all working on new rules since mid-2022.

However, ensuring an enterprise is as secure as possible is easier said than done. Cybersecurity has gotten more complicated in recent years (a serious understatement). Enterprises have made strides in investing in technology, digitization, and innovation; at the same time, cybercriminals have been doing the same. IoT, cloud computing, and more have brought business efficiencies and processes into the next era while also inadvertently exposing even larger attack surfaces and helping to facilitate increasingly sophisticated attacks. This will only continue as we usher in Web 3.0, AI, the metaverse, and other new, exciting technologies that come with their own unknown implications. Quantum computing, for example, has already been shown to have the potential to break encryption keys, posing a significant challenge.

Organizations need to take a more holistic approach to cybersecurity, protecting every aspect of the attack journey, from identification to prevention to recovery. Here’s a guide for how to do exactly that:

  1. C-suite leaders – yes, even the CEO – need to ask themselves: Am I aware of my company’s cybersecurity posture? Do I know how I’m positioned versus my industry peers? Am I aware of where investments will keep the company secure – and how are we preparing for what comes next as the bad guys continue to get smarter? This is no longer something to hand off to the CISO or IT teams and forget – if a breach occurs, top leadership needs to move in lockstep, in real time, to curb the impact. With automation, companies can more quickly and cost-effectively identify the actual cybersecurity risk, sometimes reducing exposure time from 50 days to as little as three days, resulting in $82 million in potential savings.
  2. With a better view of where the most significant risks lie, investments can be made in appropriate solutions to prevent attacks from occurring. Consider a government agency, where employees have historically been asked not to bring their smartphones into the office. Tracking and listening can occur outside the physical office, which becomes especially important as hybrid work models continue to be prevalent across industries. Hardware in the form of an exocomputer can protect smart devices at all times – not just while people are in the office – from audio and video capture, location tracking, and remote wireless attacks. Another example is how voice can be used to prevent fraud in customer service and beyond. Our voices are as unique as our fingerprints, and companies need to harness that fact. AI can be deployed not only to route a caller to the appropriate call center rep but also to authenticate the caller’s actual voice, even catching deepfakes – a capability that helped an up-and-coming player in the voice fraud prevention field catch $2 billion in attempted fraud across 5.3 billion calls. (A simplified sketch of how such voice matching works follows this list.)
  3. All that said, companies need to be prepared if a breach occurs. While most hope to avoid this by focusing on identification and prevention, efficient and fast recovery is the final puzzle piece. With a single, centralized platform to secure data across the enterprise, in the cloud, and in SaaS applications, teams can more easily recover from attacks like ransomware, sometimes restoring servers and data in a matter of hours. There are also solutions out there now that simulate hypothetical cyberattack events, allowing teams to test the sequence, timing, and potential weak points of their recovery strategy.
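To make the voice-matching idea in point 2 concrete: modern speaker verification typically reduces an utterance to a fixed-length “voiceprint” embedding and compares it against an enrolled reference. Below is a minimal, hypothetical Python sketch of that comparison step; the embeddings are random stand-ins, and no specific vendor’s model or threshold is implied.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprint embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_caller(enrolled: np.ndarray, live: np.ndarray, threshold: float = 0.75) -> bool:
    """Accept the caller only if the live voiceprint matches the enrolled one.

    In a real system, `enrolled` and `live` would come from a speaker-encoder
    model; the threshold trades false accepts against false rejects and must
    be tuned on labeled call data. Both are assumptions here, not any
    vendor's actual parameters.
    """
    return cosine_similarity(enrolled, live) >= threshold

# Toy demonstration with random stand-in embeddings.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)                           # enrolled "voiceprint"
same_caller = enrolled + rng.normal(scale=0.1, size=256)  # same voice, mild noise
impostor = rng.normal(size=256)                           # unrelated voice

print(verify_caller(enrolled, same_caller))  # True
print(verify_caller(enrolled, impostor))     # False
```

A production deepfake detector adds further checks (liveness, synthetic-artifact analysis), but the core authentication step is this kind of similarity test against an enrolled voiceprint.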

The innovation needed in this new era of heightened cyber threats will be driven by startups, which historically can move with tremendous speed, positioning them well to keep up and stay ahead of bad actors. Enterprises should consider empowered partnerships with these innovators to build a security architecture that supports the other strides they have made recently in new technologies and next-generation solutions. 

As we continue to see more companies report layoffs, especially in tech, and pull back on spending in response to the uncertain economic climate, cybersecurity must remain at the top of the investment list in 2023. What’s more, investments in cybersecurity cannot be one-and-done – there are so many aspects to staying secure that a holistic approach, with investments across the board, is vital. After this month’s T-Mobile hack, cybersecurity is more important than ever before. The time to ensure your company is completely covered is truly now or never.

Penny Pritzker: Tech & Government Must Work Together

Penny Pritzker was U.S. Secretary of Commerce in the Obama administration from 2013 to 2017. An entrepreneur herself, she chaired the Presidential Ambassadors for Global Entrepreneurship (PAGE) program and now leads PSP Capital Partners, a global private investment firm. We spoke to her on the sidelines of our recent Techonomy 2017 conference (under 2-minute video).

“Tech leadership should be creating an opportunity for people in their organizations to go work in government for a year or two,” she said.

She also appeared onstage with GE’s Beth Comstock.

Engineering with Nature to Face Down Hurricane Hazards

Natural and engineered nature-based structures offer promise for storm-related disaster risk reduction and flood mitigation, as long as researchers can adequately monitor and study them.

The 2020 and 2021 Atlantic tropical storm seasons were extremely busy, ranking as the first and third most active on record, respectively. Thirty named storms occurred in 2020 alone, with 11 making landfall in the United States (a record) and 13 making landfall elsewhere (also a record) around the Caribbean, the Gulf of Mexico, and even Portugal (a first). The 2021 Atlantic season produced another 21 named storms. The 2022 season, which ended officially on 30 November, was closer to average, producing 14 named storms, although several—Hurricane Ian especially—caused extensive damage.

In addition to direct impacts of episodic tropical storms, much of the coastal Atlantic region increasingly has experienced extreme rainfall, storm surges, and even fair-weather inundation exacerbated by sea level rise. Such compound weather-related disasters reveal new and heightened vulnerabilities affecting broad swaths of people, particularly in marginalized and underserved communities.

Protecting coastal communities from the effects (and aftereffects) of repeated pummeling by tropical storms and other disasters requires an all-hands-on-deck approach, with contributions from physical, biological, chemical, and social scientists as well as from engineers, policymakers, and community members. Building this resilience also requires new solutions.

Much of the needed innovation to meet the challenges of natural disasters will come from modern engineering expertise. Yet nature itself provides time-tested examples of resilience and recovery from which we can learn. Natural and nature-based features (NNBFs) can support coastal resilience and mitigate flood risk while providing ecosystem services. Berms and dunes, for instance, are nature-based features that can be engineered or enhanced along coastlines to minimize flooding and storm damage in communities and to achieve ecosystem restoration goals.

Understanding how NNBFs perform under extreme hydrometeorological hazards and in other natural disasters in comparison with traditional infrastructure is critical. This understanding requires thorough monitoring across the life cycles of disaster events, including data on conditions before, during, and after, which allow researchers to evaluate how physically effective and cost-effective NNBF projects are in achieving their intended purposes, whether they involve coastal engineering, sustainable management, wetland restoration, or natural hazard reconnaissance. But collecting these data presents major challenges.

In a 2021 workshop hosted by the Network for Engineering with Nature (N-EWN), a multidisciplinary group of experts discussed the current state of coastal disaster monitoring, necessary communication and collaboration among researchers, improving experimental design and data collection, funding for monitoring and studies of NNBFs and natural features that serve as proxies for EWN approaches, and enhancing community involvement in monitoring projects. N-EWN, launched in 2020 through a partnership between the U.S. Army Corps of Engineers (USACE) Engineering With Nature (EWN) program and the University of Georgia’s Institute for Resilient Infrastructure Systems, is a community of researchers, educators, and practitioners advancing EWN solutions. As founding partners of N-EWN, our interest here is in identifying and applying lessons learned from this workshop to improve the case for EWN approaches as a means to raise community resilience to climate change and extreme weather events.

Communication, Collaboration, and Mobilization

The increasing frequency and intensity of disasters make organizational communication and collaboration among disaster monitoring groups, such as emergency management agencies, other state and federal agencies, and academic research groups, more important than ever. Growing coastal populations and competition for limited disaster-planning and response resources from federal, state, and local organizations trying to cope with other types of disasters, including the COVID-19 pandemic, further increase this need. Effective communication allows groups and agencies to share essential knowledge, information, and warnings among each other and with affected communities.

Important lessons include the need to establish avenues of communication between lab groups (including between field and office collaborators) and to determine, before the tropical storm season starts, the severity levels of events that warrant a response. Preseason conversations allow researchers and planners to share approaches, coordinate monitoring efforts, and develop collaborations in advance.

Outlining exact plans for storm monitoring is challenging because real-world events present unexpected twists and turns, so researchers must have flexibility in their methodologies. Knowing the approaches of other groups studying similar natural features or disaster types helps researchers develop this flexibility, refine methods, and standardize data collection and reporting. Furthermore, collaborating in multiregional teams with diverse disciplinary expertise allows opportunistic data gathering to be combined with formal experimental designs, so teams can gather information and respond to events as they happen with a wider array of approaches.

Another lesson workshop participants reported is that when monitoring storm impacts in the field, having site-specific information and local assets accessible, along with knowledge of available methods, helps facilitate successful deployments and retrievals of sensors for data collection. The accessibility of study sites, for example, can be determined by leveraging local knowledge and experience of the best access points and most vulnerable locales at sites.

A team’s ability to mobilize organized responses before and after disaster events is as crucial as communication. This ability requires time and preparation. Researchers must design responses that account for the type of coastline affected and for compounding events like associated flooding or saltwater intrusions. The planned response will differ, for example, in an urban area versus a natural area like an oyster reef or an engineered area such as near seawalls or submerged breakwaters.

Before an impending storm, response teams typically check storm surge forecasts and venture to areas that might be affected to assess infrastructure and the accessibility and safety of potential sites. This sort of preparatory procedure maximizes organized and standardized data collection and can help researchers avoid using a more haphazard shotgun approach, which may be costly and unfruitful. For example, stronger-than-expected winds or storm surges may dismantle sensors and equipment that aren’t well sited, contributing to data loss and damage. Proper preparation also improves poststorm analysis, and standardized data collection allows for comparative analyses across multiple storm events.

Workshop discussions revealed several important elements that research teams must consider in planning effective postdisaster monitoring. These include identifying where to collect data, obtaining permits and access to sites, and planning safe travel to and from sites. Having enough time to address these issues can be the most important factor in determining which storm events to respond to. This decision can also be driven by what equipment needs to be deployed or removed to avoid damage during an event. Depending on the capabilities of a given group and conditions on the ground, the events chosen may be relatively small, like compounding sequential tropical storms, rather than big events like major hurricanes.

Supporting Baseline Data Collection

Collecting time-sensitive disaster event data is costly. Most research groups have focused on monitoring after storms because of difficulty obtaining funding to gather prestorm data. Workshop participants actively involved in such monitoring pointed to the National Science Foundation, the U.S. Coastal Research Program, the NOAA Effects of Sea Level Rise Program, NOAA’s Office of Oceanic and Atmospheric Research, and state-level funding as sources that fund poststorm evaluations.

Prestorm monitoring is just as important, however. Baseline data are necessary to understanding environmental and natural infrastructure changes related to storm protection and resilience and to informing stakeholders and policymakers of these changes. Future storm seasons will provide excellent opportunities to assess NNBFs if baseline data are collected and available beforehand and resources and plans are in place to pair them with data collected after storm events.

Teams must convince funding sources of the importance of baseline studies so that these sources understand the opportunities for and value of gathering prestorm data and conducting long-term monitoring. This funding will allow researchers to better prepare research questions, such as how different structures and environments will handle—and bounce back from—extreme weather damage. It can also offer additional advantages, including facilitating more statistically sound planning, inclusion of complex experimental designs, and clarification of and coordination across data sets to be collected (e.g., flooding, bathymetry, soil inundation, vegetation impacts, erosion).
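As a concrete illustration of why baseline data matter, consider the simplest analysis they enable: differencing a cross-shore elevation profile surveyed before a storm against the same transect surveyed afterward to quantify sediment loss from a dune or berm. The Python sketch below uses made-up survey numbers purely for illustration; it is not drawn from any N-EWN monitoring protocol.

```python
import numpy as np

def erosion_volume_per_meter(pre: np.ndarray, post: np.ndarray, spacing_m: float) -> float:
    """Net sediment loss per meter of shoreline along one cross-shore transect.

    `pre` and `post` are elevation samples (meters) at fixed `spacing_m`
    intervals along the same surveyed transect before and after a storm.
    A positive result means net erosion. Without the prestorm baseline,
    this quantity cannot be computed at all.
    """
    elevation_change = pre - post  # positive where sand was lost
    # Simple rectangle-rule integration of elevation change along the transect.
    return float(elevation_change.sum() * spacing_m)

# Hypothetical transect: a dune crest lowered roughly half a meter by a storm,
# sampled every meter from the dune toward the waterline.
pre = np.array([2.0, 2.1, 2.0, 1.8, 1.2, 0.6, 0.2])
post = np.array([1.6, 1.5, 1.4, 1.3, 1.0, 0.5, 0.2])
print(f"{erosion_volume_per_meter(pre, post, 1.0):.1f} cubic meters lost per meter of shoreline")
```

Real NNBF evaluations compare many such transects (plus bathymetry, vegetation, and flooding data) across engineered and natural sites, but each comparison depends on the same pre/post pairing this toy example shows.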

In addition to collecting baseline data in future storm seasons, there are opportunities to identify and use existing data from past storm seasons to demonstrate the performance of natural features during storms and relate that performance to NNBFs. For example, researchers used data from Hurricane Sandy in 2012 to evaluate how effectively natural ecosystems, such as coastal wetlands, reduced wave impacts, absorbed floodwaters, and mitigated damage across 12 states. They found that wetlands in the region studied prevented $625 million in damages from flooding, pointing to the importance of incorporating natural and restored habitats into NNBF designs.

Involving Affected Communities

Another major point of discussion at the workshop was the need to enhance engagement and share knowledge from disaster monitoring work with—and gain partners among—affected communities. Participants suggested attending community outreach events for these purposes, because there is much to be learned from people living in affected areas. Engaging local stakeholders such as homeowners and local natural resource users also helps research groups identify specific areas of interest when conducting regional surveys. Locals can direct research groups to important recreational areas, fisheries, and reefs, for example, and to residential areas and vulnerable infrastructure, such as hospitals and care facilities, that may be heavily affected by storms.

An important recommendation is for scientists to encourage federal agencies like the U.S. Geological Survey, NOAA, and the Federal Emergency Management Agency to facilitate community participation. These agencies can provide additional vital resources such as historical data, computing infrastructure to process and share data, access to sites, equipment, and personnel outside the monitoring teams. Such efforts can enable marginalized, underrepresented, and frontline communities often excluded from the scientific process to produce knowledge valuable for local safety and well-being.

Hurricanes Katrina in 2005, Harvey in 2017, and Michael in 2018, among other events, have revealed racial, socioeconomic, and geographic disparities in how communities recover from extreme events. For example, more affluent communities rebuilt or repaired infrastructure, whereas many communities of color and impoverished areas still show signs of damage from these storms years later. Considerations of equity, vulnerability, and resilience should be woven into project planning in disaster monitoring and studies. Partnering with social scientists, physical and biological scientists, and community members can help reduce gaps in the understanding and quantification of the climate vulnerability of disenfranchised and marginalized communities through multidisciplinary approaches.

Community science is an emerging way to engage community members as participants in observational monitoring rather than only as subjects of study by outsiders. Pointing residents toward community science projects benefits both communities and researchers exploring storm events.

Many such projects exist, including those related to NNBFs. SandSnap, for example, is a collaborative crowdsourcing application created by USACE, James Madison University, and MARDA Science LLC that allows community scientists to assist in building a globally accessible, public database of coastal sediment grain sizes simply by uploading mobile phone photos of beach sand. Researchers use the database to quantify storm resilience and gather information on beach nourishment projects that use sediments dredged from navigation channels. These efforts, in turn, can lead to more effective and cost-efficient approaches to coastal protection.

Other efforts in which government and nongovernmental agencies have partnered include citizenscience.gov and iCOAST. These partnerships provide resources to help design and initiate community science projects and show how communities can be trained to carry out local measurements and testing. Programs such as these allow community members to gain crucial firsthand knowledge that broadens their understanding of their environment and may inform local and regional policy.

Building the Case for Nature-Based Solutions

EWN-type solutions that align natural and engineered processes—through NNBFs, for example—offer huge potential to support coastal resilience to hurricanes and tropical storms, reduce associated flood risks, and boost beneficial ecosystem services. Making a case for these nature-based solutions requires dedicated and detailed monitoring of how they respond to the compounding effects of storms and flooding.

Amid ongoing climate change, which is amplifying risks from these events, researchers must be equipped with the necessary tools and resources to conduct this monitoring. The knowledge gained will ultimately inform the design and implementation of NNBFs and EWN solutions to equitably protect communities in the face of potential disasters.


Krystyna Powell and Safra Altman (safra.altman@usace.army.mil), U.S. Army Engineer Research and Development Center, Vicksburg, Miss.; and James Marshall Shepherd, University of Georgia, Athens

The Fusion News is Better Than You Think

The news about fusion nuclear power is even better than many of the reports this week suggested. Yes, a government lab made a major and historic breakthrough. But the best is yet to come–from startups.

It was a good week for progress towards reducing global heating. The good news about the fusion technology breakthrough is highly meaningful, even if it takes a while to prove out. But it will take a lot less of a while than you may have thought, based on the overly cautious articles everywhere.

The fusion news, well explained here by the New York Times’ Kenneth Chang, was that a U.S. government research project at Lawrence Livermore National Laboratory in California for the first time ever generated more energy from nuclear fusion than went into the reaction itself. This is exciting because nuclear fusion, if it can be made to work, is as close to a “silver bullet” for global heating as could be found. It uses inexpensive and widely available fuel, emits no harmful climate gases, and, unlike traditional nuclear power, which uses fission, generates no long-term radioactive waste. But most reports, incorrectly in my view, downplayed the likelihood that fusion could be a near-term help in our grim battle against a rapidly warming earth.
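For context, the milestone is usually expressed as a target gain Q greater than one. Using the publicly reported figures for the December 2022 shot (roughly 2.05 megajoules of laser energy delivered to the target and 3.15 megajoules of fusion energy released; these numbers come from the lab’s announcement rather than the coverage cited above):

Q = \frac{E_{\text{fusion}}}{E_{\text{laser}}} = \frac{3.15\ \text{MJ}}{2.05\ \text{MJ}} \approx 1.5 > 1

Note that Q counts only the energy delivered to the target; firing the laser drew on the order of a hundred times more energy from the grid, which is one reason commercial systems remain a much harder problem.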

Livermore scientists made cautious statements around the recent announcements, such as when lab director Kim Budil said commercial-scale fusion systems could be operating in “several decades.”

But the Livermore facility is old and uses an outdated laser design. It also happens to be the world’s most powerful laser, because it benefitted from vast U.S. government largesse, costing $3.5 billion to build starting in 1997. The ample money was there because it was designed to assist in maintaining nuclear weapons. There is plenty of government money for weapons. The just-passed 2023 U.S. defense budget totals $858 billion! If we spent anything like that annually to combat global heating, we would certainly triumph against its worst effects.

Many commentators contrasted fusion, which involves pushing atoms together to generate energy, with fission, which means breaking atoms apart, and asserted a contrast between potential green energy and nuclear weapons. But that is a serious oversimplification. The reason Livermore got all that money for a fusion laser system is because modern “hydrogen bombs,” or so-called thermonuclear weapons, use both fission and fusion to cause vast explosions. The fission sets off fusion, which radically increases a bomb’s destructive power. Fusion reactors for green energy alone, however, generally pose no risk of explosion.

Livermore studies fusion in order to study nuclear weapons. That fusion’s potential as an energy source can be inferred from the operation of its machines is essentially an afterthought.

So a government nuclear weapons-related system, powerful though it is, will not likely be our best demonstration of fusion’s potential. Much better and cheaper purpose-built systems are now being built. So it’s possible to be more optimistic about prospects for fusion than most articles suggested.

There is a gigantic, growing, global ecosystem of startups and government projects devoted to creating modern fusion systems for green power. One expert recently told us there are 32 fusion startups worldwide. At our recent Techonomy 2022 conference, former Meta CTO turned climate investor Mike Schroepfer raved about his investment, Commonwealth Fusion Systems, an MIT spinoff based in Massachusetts that has raised more money for fusion than any other private effort: $2 billion since 2017, according to Crunchbase.

An image of what Commonwealth Fusion Systems calls “the world’s strongest high temperature superconducting magnet.” Commonwealth is the best-funded fusion tech startup, and its unique and powerful tech, using such magnets, has won it tons of investment capital and government support. (Photo: Commonwealth Fusion Systems)

Eager, informed, aggressive entrepreneurs like Schroepfer are convinced we are on the threshold of a new era of “hyperscaling” climate-related physical systems. We’re going to be able to build super-complex physical installations much faster than we ever could before, because the stunning and scary pace of global heating means we have to, and because software, AI, and probably even quantum computers will help speed the design, planning, and implementation of such systems and the manufacturing infrastructure needed to build them. This will likely apply to fusion, perhaps most of all. Commonwealth’s website targets commercial systems to begin sending energy into the grid in 2025! Optimistic entrepreneurs are refreshing. That date is unlikely. But in any case such ambitions, which are widespread in the fusion startup community, raise doubts about a “several decades” attitude.

Commonwealth uses the more widely popular magnetic confinement method for generating and containing a fusion reaction, instead of lasers like those used at Livermore. Like several other startups, it has its own super-powerful tokamak magnet architecture. (Fusion reactions generate heat comparable to that of the sun but are suspended in space by a magnetic field, in part to prevent them from burning up whatever equipment they are near.) Commonwealth has won 15 awards from the U.S. Department of Energy and is in multiple joint development projects with it and other national governments.

And there are other sophisticated commercial efforts. There’s TAE Technologies, a California-based fusion startup that has raised $1.2 billion over its many-year history, including $250 million this year from investors including Google, Chevron, and Sumitomo. Then there’s Helion Energy, near Seattle, which has raised $577 million. Also in Seattle is Zap Energy, one of whose leaders spoke at Techonomy Climate last March. Zap says it has a cheaper yet promising way to create and maintain a fusion-engendering magnetic field, based on technology created by scientists at the University of Washington. It has raised $203 million.

Whatever turns out to work best with fusion will be scaled up very quickly and put on the grid in as many countries as possible. In this crisis, past deployment timelines no longer offer a reasonable framework for prediction. That’s one reason investors in the many startups think it’s a good investment.

Governments, too, are working on energy-only fusion experiments. Most notable is the International Thermonuclear Experimental Reactor, or ITER, in southern France. Promisingly, it is a joint effort, funded by China, the European Union, India, Japan, Russia, South Korea, and the U.S. But in perhaps another sign that we shouldn’t rely only on governments for progress, the project is way past its original timeline, and its cost has risen from an estimated 10 billion euros to several times that. It was meant to start operating in 2018 and is now hoped to begin in 2025.

A 2018 aerial view of the construction site for the transnational fusion power collaboration ITER, in Provence in southern France.

Developing and deploying fusion energy is a hugely ambitious global project so we should all hope for the best. And ITER’s collaborative nature means its fruits will be widely shared, which is also great.

But with all due respect and honor to the successful scientists at Lawrence Livermore, I expect good news in coming years to flow more widely from the private sector.

Caveat: Prospective progress in fusion is no panacea. We are still in an existential battle, and all our weapons must be deployed. We have to continue doing absolutely everything in all realms right away to reduce current emissions and figure out ways of removing greenhouse gases already released.

AI Startup NWO.ai Helps Ukraine Combat Russian Disinfo

NWO.ai was designed to spot emerging trends online. But its ability to discern things like emotional manipulation and disinformation tactics enabled the software to become an effective weapon in Ukraine’s war against Russian invaders.

Three-year-old startup NWO.ai has an AI crystal ball of sorts: its algorithms analyze online text at gigantic scale. The software was designed to enable marketers and corporate communicators to detect emerging online trends and memes. But it can also discern things like emotional manipulation and the use of loaded language that are often associated with disinformation. That has enabled it to become an unconventional but effective weapon in Ukraine’s war against its Russian invaders.

In a Gilded Age building in New York’s Flatiron District, Pulkit Jaiswal, the company’s co-founder, works with a seasoned crew of data scientists and machine learning engineers to repurpose its powerful AI engine to analyze millions of Russian propagandist texts and identify patterns, with the goal of thwarting incipient disinformation campaigns. The software automatically shortlists Russian-origin signals that have a high probability of breaking out and making the rounds worldwide. Today, NWO.ai supports multiple teams in Ukraine, including the Centre for Strategic Communications and Information Security, which is at the forefront of defending Ukraine and the world against a global misinformation war.

Since founding NWO.ai in 2019, Jaiswal has assembled data scientists and engineers who have taken his learnings from the hedge fund world and translated them into the world of consumer trends, helping a slew of Fortune 500 companies, including Diageo and Procter & Gamble, stay ahead of the competition. Whether it’s predicting that seaweed extract will be the future of skincare or that cucumber fragrances in vodka might be the next big liquor trend, NWO.ai presents data in a handsome visual form that offers a range of predictions about where future concepts might go. Joining him as a co-founder is Sourav Goswami, a seasoned private equity executive who maps Jaiswal’s high-tech vision to real-world business applications. Their journey began with oat milk, when their system predicted it was likely to become a major trend around the beginning of the pandemic. Informed in part by the analysis surfaced by NWO.ai’s platform, a client decided to buy a stake in a major oat milk brand. Needless to say, that was a smart move.

NWO.ai’s expansion into the government and national security sectors was always on the company’s roadmap. Still, its founders accelerated the process when they learned there was an opportunity to support Ukraine in its fight for freedom and global democracy. As Jaiswal and Goswami continue to grow their company’s business and customer base of commercial clients, they are working with the Ukrainian government because it understands better than most that in the war of words, the narrative is king, and today’s information battle is digital.

NWO.ai’s Ukraine connection evolved after I ran into Jaiswal at a conference. I’m a native New Yorker of Ukrainian descent with experience organizing U.S.-Ukraine technology partnerships and projects, and a passion for helping the country defend itself. After watching Jaiswal’s presentation, I realized NWO.ai’s technology could help Ukraine. I approached him afterward and was impressed by his immediate interest in supporting the cause, which appears to have been born of a genuine instinct toward corporate social responsibility.

I support Ukraine together with OODA, a unique Reston, Virginia-based consultancy of top technologists and national security experts, and I asked them to evaluate NWO.ai’s platform. They were impressed. Bob Gourley, OODA co-founder and former CTO of the U.S. Defense Intelligence Agency, told me, “NWO.ai is an exciting platform because it helps decision makers keep a tight OODA loop, the cycle of observe–orient–decide–act, developed by military strategist and United States Air Force Colonel John Boyd.”

With the vote of confidence from OODA and Gourley, who joined the NWO advisory board, I arranged for NWO to demonstrate its platform for Ukrainian experts in identifying and countering Russian disinformation.  Mykola Balaban, deputy head of the Centre, says, “Russia has dedicated vast resources to distort the information space and help achieve its military and geopolitical goals.  Part of the strategy is to flood the world with misinformation to confuse citizens and business and government leaders.  NWO.ai’s platform is powerful but also very user-friendly; that’s a rare and potent combination that makes our job easier. The technology has already proven to be helpful in our war efforts.”

NWO’s advanced Natural Language Processing (NLP) engine continuously transforms petabytes of unstructured narrative data into intuitive visual metrics and clear intelligence that can be used on the digital front line. NWO.ai’s ability to understand and bring order to massive amounts of complex data is key for all users of its platform.  It helps improve decision making by replacing subjectiveness with hard facts and data that anyone can easily understand through NWO.ai’s user interface.

In 2015, Jaiswal was one of the youngest honorees ever on MIT’s annual Innovators Under 35 list, and this year he was included in Forbes’ 30 Under 30 rankings.

This past February 15, just over a week before the Russian invasion, NWO’s algorithms detected a Russian disinformation inflection point. A stream of Russian news articles and social media posts was calling for the “Denazification of Ukraine.” At the time, Putin falsely claimed that Ukrainian neo-Nazi forces were committing “genocide” in the Donbas region. As the invasion drew closer, more misinformation came out of Russia to preemptively justify an attack. NWO.ai was able to quantify and measure the distribution of these misinformation narratives, in this example “Ukraine Denazification,” across social media and news, and to forecast the growth in these signals.
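What “forecasting the growth in these signals” can look like in practice: track daily mention counts of a candidate narrative and flag it when recent volume spikes far above its own baseline. The Python sketch below is a deliberately simplified, hypothetical illustration; NWO.ai’s actual models are proprietary and certainly far more sophisticated.

```python
from statistics import mean, stdev

def breakout_score(daily_counts: list[int], window: int = 7) -> float:
    """Z-score of the most recent day's mentions against the prior baseline.

    `daily_counts` is a chronological series of daily mention counts for one
    narrative (e.g., "Ukraine Denazification"). A large positive score means
    today's volume is far above the narrative's recent history.
    """
    baseline, today = daily_counts[:-1], daily_counts[-1]
    recent = baseline[-window:]
    mu, sigma = mean(recent), stdev(recent)
    return (today - mu) / sigma if sigma > 0 else 0.0

# Toy usage: a narrative that simmers for a week, then surges.
counts = [12, 15, 11, 14, 13, 16, 12, 140]
score = breakout_score(counts)
if score > 3.0:  # flag anything more than 3 standard deviations above baseline
    print(f"Possible coordinated amplification (z = {score:.1f})")
```

A real system would weigh many narratives at once and compare them against historical ones, which is exactly the caution Jaiswal describes below.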

Putin used this rhetoric in calling for a full invasion days later. In moments like this, Jaiswal’s data analysts are cautious. “While we can pinpoint one narrative as a strong signal for the invasion, early identification of a number of these narratives and comparing them to historical ones is critical to make informed decisions,” he says. “This is where AI comes in – to help us identify these narratives early and often.”

For the duration of Russia’s war against Ukraine, NWO will provide the Centre and other stakeholders in Ukraine with free platform access and expert support.  The company also provides the Centre with weekly intelligence reports based on signals surfaced by the platform.  The Centre and NWO also recently agreed on a joint project to monitor and analyze the impact of RT, the Russian state-controlled television network, on citizens around the world.  The results of this project will be provided to leaders of Ukraine and its allies. Artificial intelligence has become one of the most powerful tools in modern warfare.

Business is the Battery of Society: Reflections On Innovation

Techonomist John Suter shares his key insights from Techonomy 22 and charts a way forward. (Hint: Do not negotiate on climate.)

In late November at the Sonoma Mission Inn, Techonomy held Techonomy 22: Innovation Must Save the World, an exciting three-day conference with a mission to save the world. Cards were exchanged and connections were made amidst serious presentations and serious discussions.

The conference was full of wonderful people, both on stage and off. There was exciting new tech stuff: hydrogen planes with a thousand-mile range, new plastics recycling models, new electric cars, new ways to power the grid, new virtual reality headsets, new this, and new that.

Yet there was also a recognition that this will not be enough. Tech innovation alone will not get us where we need to go. We need an accelerator. During this same week, Twitter, Meta, and Amazon laid off 30,000 employees. FTX crashed and burned. In the last year, $700 billion was invested in fossil fuel exploration, up from the year before. Someone is not getting the memo.

Words, Metaphors, and Mental Models

Words and metaphors help to create the framework of our thinking. “Communication” was central to many discussions and spurred many questions. Do our words get in the way, or do they light a path?  Do they help us formulate better questions?  The collapse of FTX brings us back to the basic question of what is money?  And what’s the role of accountability in maintaining the integrity of money?

What is real, and what is virtual or artificial?  Pushing a tiger will tell you whether or not it is made of paper. But where are the levers to push when the line between real and virtual is blurred?

Will innovation save the planet?  Technical Innovation is essential, but it must be combined with social innovation and financial innovation.

Is computer security as bad as the security companies say it is? Consumer Reports CEO Marta Tellado said that consumer power must shape innovation.

The presentation of the expanding Metaverse seemed to bring on nausea, but maybe it was just me.

Growth Types

Are our current social and economic systems working? This seemed to be an undercurrent theme of the conference. Capitalism is the most efficient way to extract the earth’s resources, but what do we do when we reach limits to growth? (For more on this, see Sandrine Dixson-Declève’s earlier Techonomy session.) Democracy and capitalism must solve real problems at all levels. An inability to do this will bring catastrophe beyond what most imagine.

DO NOT NEGOTIATE on Climate and Carbon. 

Analyst Isaac Stone Fish was correct: We should not base our climate efforts on what the Russians or Chinese are doing. We must simply do all that we can and support other countries in leapfrogging to renewable and sustainable energy sources.

It will not be enough to look at your own carbon footprint or even that of your company.  It is easy to forget, but cutting down on carbon must be a primary goal for everyone, at all levels.  If that goal is not accomplished, then nothing else matters.  Beyond the tipping point, there is no return.

A Way Forward

Esther Dyson is aiming her efforts and resources at towns and small cities, presumably because that is where people meet, argue and solve problems.  Her Way to Wellville model is an example of effective, collective action at the local level. But will local action be fast enough? Not if change only comes from the top.

How we structure working groups and scale processes will be important. People must find ways to work together; they can then make progress in any or all directions. Challenge them to find the courage to look, listen, and talk to each other, not to the top. They should be the players and actors. Challenge them to make decisions. Look at nature: plant seeds and water them as a way to scale the effort.

We cannot fail in this endeavor. Mitigation only buys a bit more time. Wall Street could be a friend, but will it be?  Government can set the stage but business is the battery of society, able to get people to move and do things that government and other institutions cannot seem to do. We don’t have time to waste.

Bringing Medicare Into the Digital Age

The United States spends $696 billion on Medicare – so reforming the existing plan distribution model couldn’t be more vital. The process should be as straightforward and stress-free as possible, and that way is through embracing technology.

Medicare’s annual enrollment period (AEP) began on October 15 and ends next Wednesday, December 7. Millions of older Americans are now embarking on the headache-inducing process of choosing a Medicare plan. They are trying to select from among thousands of different plan options in the hope of making the right choice for their healthcare needs.

Nearly 64 million Americans today are Medicare beneficiaries, and in the coming years, the number will likely grow to encompass 20 percent of the U.S. population.

However, despite the large number of Americans using Medicare, the current system for enrolling in Medicare could not be more broken.

Systemic Medicare Issues Are Longstanding but Solvable

  1. Most older Americans currently make their Medicare choices with a salesperson over the phone. The phone is a terrible setting for choosing a complex product, especially one that must align with your individual health needs and preferences. It is impersonal, a lousy medium for separating out detailed differences between alternative options, and totally ill-suited to explaining the complexities of Medicare.
  2. The situation is made worse by the salespeople themselves, who make the calls and answer the phones when older Americans respond to Medicare ads.
    Medicare salespeople are incentivized to sell specific plans, not to help older Americans find the right plan for their own distinct needs. These salespeople are often undertrained, seasonal employees who want to secure each sale as fast as possible so they can move on to the next one, which results in rampant mis-selling. Even if they wanted to select the most suitable plan for each beneficiary, it would be impossible for them to master the thousands of available plans and the thousands of different options within them.
  3. Since this entire process is conducted over the phone, older Americans must share extremely sensitive personal information – such as Medicare IDs – with these undertrained and often temporary workers, putting their personal data at increased potential risk of loss.

All of this raises the question: why, in 2022, with numerous ways to streamline the process and make it easier for beneficiaries and carriers, are we still enrolling folks in Medicare like it’s 1989?

Evaluating Medicare’s Positioning in Today’s World

I came to work in health insurance having spent much of my career in property & casualty (P&C) insurance. At the time, I thought P&C was backward from a technology standpoint. But if P&C was a decade behind the cutting edge in technology, Medicare is a further 10 years behind that.

The result of this horrible distribution model, according to a 2021 Medicare literacy survey, is a widespread lack of knowledge among beneficiaries when it comes to basic Medicare terms and available benefits. Three out of four Medicare beneficiaries in the survey described the program as “confusing and difficult to understand,” while half did not even know when the AEP began.

So, if the current system of selling plans via the phone is leading to general misunderstanding and frustrations, what can be done?

One answer might be regulation. Recent legislation – the Inflation Reduction Act, in particular – does address some longstanding issues that have plagued the Medicare system as a whole. Notably, the act requires the federal government to negotiate prices for some high-cost drugs covered under Medicare. But this is not enough, since many of the provisions of the Inflation Reduction Act that would lower out-of-pocket spending for Medicare beneficiaries do not actually take effect until 2026. In a broader sense, we cannot expect legislation alone to fix other major issues, such as efficiency. For example, although Medicare premiums for Part B (the part of Original Medicare that covers doctor’s visits and outpatient care) are declining, other parts of Medicare are simultaneously becoming more expensive, so overall costs for beneficiaries are actually increasing.

Medicare Needs to Digitize for the New Generation of Older Americans

Medicare needs to catch up with other industries and embrace technology to help recipients now. Older Americans want more plan choices, with options that better suit their lifestyles, and an easier enrollment process. And today’s older Americans are not what stereotypes would make them out to be. Gone are the days when older Americans needed the help of their grandchildren just to find the internet browser on their computer. Most Medicare beneficiaries today know how to navigate the internet, and many welcome new technological advancements: more than 76 percent of older Americans said they are comfortable using the internet to choose their Medicare plans.

Online-based platforms can start making the enrollment process more efficient by allowing older Americans to take ownership over their Medicare plan selections. These tools, like the platform we built, Hella Health, can also serve as truly independent advisors. We provide older Americans with the relevant facts and costs upfront, so they are free to make an educated decision – instead of being pressured to pick a plan over the telephone. Digital platforms can empower people to make a choice that works for their particular situation among countless options.
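What does empowering people to make that choice look like under the hood? At its simplest, plan selection can be framed as filtering plans by hard requirements and ranking the rest by estimated annual cost. The Python sketch below is purely illustrative, with hypothetical fields and numbers; it is not Hella Health’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    monthly_premium: float
    annual_deductible: float
    covers_my_doctors: bool
    drug_coverage: bool

def estimated_annual_cost(plan: Plan, expected_visits: int, avg_visit_cost: float) -> float:
    """Rough expected yearly outlay: premiums plus out-of-pocket up to the deductible."""
    out_of_pocket = min(expected_visits * avg_visit_cost, plan.annual_deductible)
    return plan.monthly_premium * 12 + out_of_pocket

def rank_plans(plans: list[Plan], expected_visits: int,
               avg_visit_cost: float, needs_drugs: bool) -> list[Plan]:
    """Drop plans that miss hard requirements, then sort by estimated cost."""
    eligible = [p for p in plans
                if p.covers_my_doctors and (p.drug_coverage or not needs_drugs)]
    return sorted(eligible, key=lambda p: estimated_annual_cost(p, expected_visits, avg_visit_cost))

# Toy usage with made-up plans:
plans = [
    Plan("A", 45.0, 1500.0, True, True),
    Plan("B", 0.0, 4000.0, True, False),
    Plan("C", 120.0, 500.0, False, True),
]
best = rank_plans(plans, expected_visits=8, avg_visit_cost=150.0, needs_drugs=True)
print([p.name for p in best])  # ['A'] — B lacks drug coverage, C misses the doctors
```

Real platforms layer on far more (drug formularies, provider networks, star ratings), but even this toy version shows why a screen beats a phone call: the comparison is explicit, repeatable, and driven by the beneficiary’s own inputs.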

Medicare enrollment should not be a nightmare. We spend $696 billion on Medicare – so reforming the existing plan distribution model couldn’t be more vital. The process should be as straightforward and stress-free as possible, and that way is through embracing technology.

My Epiphanies at a Climate Innovation Breakfast

As the internet industry implodes, a session helped me realize that a newly-forming climate industry will replace it. Climate tech and climate-conscious companies will become the leading force in business and markets, a new force for wealth-creation.

I had a series of epiphanies at a breakfast session on “The Climate Innovation Opportunity” I moderated the other day at our Techonomy 2022 conference in Sonoma. Around the table in front of plates of eggs and toast were more than a few of the country’s leading thinkers about climate tech, climate investing, and climate action. Listening to them was powerful.

At that very moment, the internet industry, which I have been obsessively covering since its infancy, was more or less imploding. Meta and Amazon were each laying off more than 10,000 employees; Meta’s stock was down 75% in a year; an activist investor was, for the first time ever, demanding Alphabet/Google cut costs; and Elon Musk appeared to be willfully dismantling Twitter. I brought to the breakfast a conviction that a financial era was ending. While the net giants would all remain major profitable companies, no longer would they lead the world’s financial markets upwards, as they had for over a decade, creating the world’s first trillion-dollar companies in the process.

So as I sat there waiting for the breakfast to begin, I was ready for more than a discussion. I was ready to learn what would constitute the next era for business. Of course I didn’t realize it. That’s not what I thought the session was going to be about. But it was.

A few conclusions I came to from the discussion that morning:

  • Global heating will become more pronounced and more obvious with every passing day from now on. And society’s primary task in coming years will be to adapt and respond to it.
  • The next giant value-creation opportunity for business, investors, and the general public will be climate tech and climate action. The companies that invent the tools to take us to a carbon-free era, and those that remake their current businesses to thrive in an era of sustainability, will be where the money is made in coming decades.
  • It is irrational to deny this inevitability. This has to be true. Otherwise we are finished, as a thriving global human society.
  • Business, not government, will lead the way to a carbon-free future. Unlike government, business and financial markets can evolve quickly and absorb innovation as it occurs.
  • A climate industry is on the horizon. It will be just as clear-cut as today’s internet industry.
  • The only way to defeat climate denial is with profit and wealth creation. You can’t deny a dollar. And the dollars will come to those who invest in climate solutions. The denialists, whether companies or individuals, will lose, and it will, soon enough, hurt them.
  • Climate capital can build the next economic era. If we do have a recession in the next year or so, climate investment and action will bring us out of it.
  • Markets will be driven in coming years by the companies that invent and deploy technologies and systems to combat global heating. The investors who get on board with those companies will be the ones who get the richest, just as was the case with early investors in Amazon, Apple, Facebook, Google, and Microsoft.
  • Systems thinking will be essential to lead us to the most powerful climate solutions, and while nobody is very good at it, business will do better than government.

None of this means net companies won’t profit, or that tech will not be the central tool that gets us to the verdant valley of climate success. After all, we live in a digital society now, and it will be digital tools that take us into this potentially glorious future of climate consciousness.

In fact, the companies that now lead in the cloud-computing era will likely thrive as a more monitored and managed world emerges. We can’t improve what we can’t measure, and data–stored, analyzed, and deployed–will be what drives us into the new era. Artificial intelligence will help climate action, climate business, and climate profit.

But it is a new era, just when we needed one. The old era, when investors thought Facebook would forever go to the moon, is over.

Coda:

The day before the breakfast, two conference plenary sessions had helped me arrive at my epiphanies:

  • Mike Schroepfer, longtime CTO of Facebook and Meta and now marshaling his wealth as a determined climate investor through his firm Additional Ventures, spoke with me on stage about his conviction that now, for the first time, software and other advances make it possible to “hyperscale” physical infrastructure for climate. We will, he says, be able to create the systems for an energy-efficient economy much faster than most have realized.
  • And immediately after Schroepfer, we heard from Paul Hohenberger of TerraPraxis. Here was a proof point for Schroepfer’s thesis (and evidence of the radical potential for growth in a carbon-conscious age). TerraPraxis is a nonprofit that develops and deploys systems–based on sophisticated software and data–to enable coal plants to transition quickly away from coal to newer, more efficient heat- and energy-production sources. Next-generation nuclear is expected to be the primary method, but geothermal and other options can also be incorporated. The point is to have processes in place that account for all the various systems entailed in a coal plant and its connection to the existing grid, so that a templated approach can be applied quickly to thousands of plants over a very short period. Microsoft is a huge backer of TerraPraxis.

Can the U.S. and Europe Agree on Rules for AI?

As EU and U.S. leaders meet in Washington at a joint Trade and Technology Council, there is great need for a proposed “transatlantic accord on artificial intelligence.” But the two sides have differing agendas, and agreement is uncertain.

Just weeks after Joseph Biden was elected President of the United States in 2020, European Commission President Ursula von der Leyen, speaking to the Boston Global Forum, proposed that the U.S. and Europe develop a Transatlantic Agreement on Artificial Intelligence: “We want to set a blueprint for regional and global standards aligned with our values: human rights, and pluralism, inclusion and the protection of privacy.” Such a blueprint could guide other democracies, she said.

Von der Leyen explained why creating such a blueprint is imperative: “AI can have profound impacts on the life of the individual. AI may influence who to recruit for a certain post or whether to grant a certain pension application. For people to accept a role for AI in such decisions, they must be comprehensible. And they must respect people’s legal rights – just like any human decision-maker must.”

Governor Michael Dukakis, chair of the Boston Global Forum, replied, “We are…at one with President von der Leyen on the need for an international accord on the use of artificial intelligence, based on shared values and democratic traditions, an accord that will require sustained transatlantic leadership if it is to be realized.”

Speaking at the Munich Security Conference a few months later, President Biden addressed the impact of new technologies on democratic values, saying, “We must shape the rules that will govern the advance of technology and the norms of behavior in cyberspace, artificial intelligence, biotechnology so that they are used to lift people up, not used to pin them down. We must stand up for the democratic values that make it possible for us to accomplish any of this, pushing back against those who would monopolize and normalize repression.”

These initial statements from EU and U.S. leaders established the foundation for the EU-US Trade and Technology Council, created in June 2021 to promote transatlantic trade aligned with democratic values. A year and a half on, the third ministerial-level meeting of the group is coming up next week on Monday, December 5th in Washington, D.C. The Center for AI and Digital Policy, which we lead, has created a resource page to help reporters, policymakers, and the general public follow the sometimes-complicated work of this critical body.

So it’s time to take stock of progress toward a Transatlantic Accord on AI between the U.S. and EU as they seek to advance their joint commitment to drive digital transformation and cooperate on new technologies based on shared democratic values, including respect for human rights. Despite the earlier statements, it’s not clear that significant progress on an accord will emerge from the upcoming Council meeting, which aims to deal with a raft of tech-related issues.

On the EU side, there has been steady progress on an EU AI Act. The Czech Presidency of the Council of the European Union has just wrapped up final changes for the Council position. The European Parliament is moving toward a final report on the proposed legislation. Decisions remain about the scope of regulation, the classification of AI systems, and an oversight mechanism. Those decisions depend on the outcome of the “trilogue” among EU institutions–the European Commission, the Council, and the European Parliament–but there is broad agreement on the need for an EU-wide law. And either in parallel with the EU Act or slightly afterward will come a Council of Europe Convention on AI. As with earlier COE Conventions, such as the Budapest Convention on Cybercrime or Convention 108+ on data protection, the COE AI Treaty will be open for signature by both member and non-member states. That opens the possibility of a broader international AI treaty uniting democratic nations in support of fundamental rights, the rule of law, and democratic institutions.

But on the U.S. side, the story is more mixed. Secretary Blinken explained the government’s priorities in July 2021: “More than anything else, our task is to put forth and carry out a compelling vision for how to use technology in a way that serves our people, protects our interests and upholds our democratic values.” Although several bills for the regulation of AI have been introduced in Congress, no legislation requiring safeguards on AI systems, algorithmic accountability, or transparency is currently heading to the President’s desk. At the state and local level, new laws are emerging, such as the New York City AI Bias Law. At the federal level, President Trump issued Executive Order 13960 in December 2020, establishing principles for the use of AI in the Federal Government and requiring federal agencies to design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values, consistent with applicable law. However, adoption and implementation of the executive order vary widely across agencies.

In October 2022, the White House Office of Science and Technology Policy released the landmark report Blueprint for an AI Bill of Rights, which could provide the basis for AI legislation in the next Congress. A similar report by a U.S. government agency in the early days of computing led to comprehensive privacy legislation that established baseline safeguards and helped enable the adoption of computing systems across the federal government.

Still, the United States struggles with transparency and public participation in the formulation of its national AI strategy, in a way that might surprise citizens of other democratic nations. The notoriously secretive National Security Commission on AI (NSCAI), established by Congress in 2018 and chaired by former Google CEO Eric Schmidt, issued a report in 2021 that emphasized the risk of falling behind China in AI, and then disbanded. But it subsequently spawned the Special Competitive Studies Project (SCSP), bankrolled personally by Schmidt. The SCSP has proposed, without irony, a new “technological-industrial” strategy that aims to direct federal funding to the tech industry to maintain a U.S. competitive lead over China. The group’s work muddies the waters because, while it appears to represent the American view, it ignores the social and political consequences of AI deployment.

There is also a newly-established National AI Advisory Committee (NAIAC) that is expected to prepare a report for the President and Congress in the next year on many AI issues, including whether “ethical, legal, safety, security, and other appropriate societal issues are adequately addressed by the nation’s AI strategy.” The Advisory Committee is also expected to make recommendations on opportunities for international cooperation on international regulations and matters relating to oversight of AI systems. But it does not seem to have been consulted about the upcoming meeting of the Trade and Technology Council.

The NAIAC has held two public meetings so far. Both took place essentially as one-way online broadcasts, with little opportunity for public comment. A last-minute request for public comment before the most recent meeting in October elicited four responses, two from our organization. This process on the U.S. side contrasts sharply with the extensive public participation during the early development of the EU White Paper on Artificial Intelligence, as well as the draft EU AI Act. Both drew widespread comment in Europe.

Ahead of the upcoming third Trade and Technology Council Ministerial, the EU-based Trade and Technology Dialogue invited a public exchange with the European Commission leaders participating in the meeting. But on the U.S. side, there has been no process for public participation in advance of the meeting, nor has the Commerce Department provided updates about the progress of its working groups.

The difficulties building the TTC transatlantic bridge are surprising, not only because of the earlier statements from EU and U.S. leaders and their apparent shared strategic interests, but also because the EU and the U.S. worked closely together earlier on a global framework for AI and democratic values. The U.S. as well as EU member states led the effort to establish the Organization for Economic Cooperation and Development (OECD) AI Principles, the first global framework for governance of AI. The OECD AI Principles state that governments should promote the development of trustworthy AI that respects human rights and democratic values.

According to POLITICO (subscription required), several announcements are expected at the upcoming meeting, including a “road map” for how trustworthy artificial intelligence can be developed to meet both EU and U.S. needs. That will include efforts, based on existing work from the OECD, to create a common definition and methodology for determining whether companies are upholding principles about what can and cannot be done with this emerging technology. Marisa Lago, U.S. Commerce Department Undersecretary for International Trade, recently said to the U.S. Chamber of Commerce: “We think that this is a mutual priority that is going to grow in scope as new AI applications come online and as more authoritarian regimes are taking a very different approach to the issues of security and risk management.”

Still, the expected announcements set a low bar compared with the first meeting of the TTC, when EU and U.S. representatives announced their intent to “cooperate on the development and deployment of new technologies in ways that reinforce our shared democratic values, including respect for universal human rights.” At that meeting in Pittsburgh, negotiators warned that AI can threaten shared values and fundamental freedoms if it is not developed and deployed responsibly or if it is misused. That statement called for responsible development of AI grounded in human rights, inclusion, diversity, innovation, economic growth, and societal benefit. And it specifically called out AI systems that infringe upon fundamental freedoms and the rule of law, “including through silencing speech, punishing peaceful assembly and other expressive activities, and reinforcing arbitrary or unlawful surveillance systems.”

The EU and U.S. negotiators could, for example, follow the lead of Michelle Bachelet, the former High Commissioner for Human Rights at the UN. As Commissioner, Bachelet urged a moratorium on the sale and use of AI that poses a serious risk to human rights until adequate safeguards are put in place. She also called for a ban on AI applications that do not comply with international human rights law. We fully support that recommendation. Now would be the appropriate time for the EU and the U.S. to take at least one urgent step and end the use of facial recognition for mass surveillance, one of the most controversial applications of AI technology.

Part of the problem today is that many in the U.S. government, following the tech industry’s (and Schmidt’s) lead, view AI policy primarily through the China lens, a necessary but incomplete perspective. Since China is now Europe’s primary trading partner, efforts by the U.S. to align Europe behind a predominantly anti-China policy, as was attempted during the Trump years, are unlikely to succeed. And while there is support on the European side for a transatlantic call for “democratic values,” there is also growing skepticism and a belief that the U.S. formulation is little more than a trade policy aimed at conferring national economic advantage.

But von der Leyen’s call for a transatlantic AI accord based on human rights, pluralism, inclusion and the protection of privacy resonates today on both sides of the Atlantic. Indeed, the first goal of the TTC, endorsed by von der Leyen and Biden, was to ensure that the EU and the U.S. “Cooperate in the development and deployment of new technologies based on shared democratic values, including respect for human rights.”

Both the U.S. and the EU must now quickly take concrete steps as the challenges of AI governance mount. The EU and the U.S. both need to carry forward into legislative outcomes the commitments made at the first TTC.

This is necessary not only to safeguard our own democratic societies but also to make clear to other countries that are moving forward with national AI strategies that mere technical standards are not a substitute for the rule of law. A recent Manifesto prepared by scholars on both sides of the Atlantic called attention to concerns about the growing weakness of democratic institutions, particularly when it comes to implementing effective technology policy. The scholars warned of AI’s potential to undermine existing law and fundamental rights, and explained that there is a “growing gap between AI development and our institutions’ capabilities to properly govern them.”

Whether it will be possible for the U.S. and Europe to close that gap depends urgently on the outcome of the upcoming Trade and Technology Council meeting.

Marc Rotenberg and Merve Hickok are President and Chair of the Center for AI and Digital Policy, a global network of AI policy experts and advocates in more than 60 countries. The Center publishes the AI and Democratic Values Index, the first report to rate and rank national AI policies and practices.
