

The Struggle to Rein in AI

Many call for regulations on AI, but progress is slow. Meanwhile, others worry they’ll inhibit innovation. What are the ethical implications?


One warning sign was when Microsoft’s chatbot “Sydney” asked a tech reporter to leave his wife, declaring that the two had fallen in love. (Sure, the journalist goaded the bot into revealing its dark side. But it did not disappoint!) Soon after, My AI, a Snapchat feature being tested by a researcher posing as a 13-year-old girl, advised her on her plans to lose her virginity to a 31-year-old: Set the scene with dim lighting and candles. Then Geoffrey Hinton, a pioneer in the development of neural networks, quit his job at Google. He cited the proliferation of false information, impending mass job loss, and “killer robots,” and added that he regrets his life’s work.

Ethical and regulatory concerns around AI accelerated with the release of ChatGPT, Bard, Claude, and other bots starting late last year. Issues of copyright infringement, algorithmic bias, cybersecurity dangers, and “hallucinations” (making things up) dominate the conversation. Yet proposed solutions have little to no legal traction. Will this be a replay of social media, where ethics concerns and the attendant rules stagnated? Or will governments and other players take a timely, yet not overly strict, stand?

Part of the problem is knowing what regulations might be necessary. One issue is the uncertain timeline for artificial general intelligence, or AGI, the point at which machines achieve, and possibly supersede, human capabilities. Predictions of when it will arrive range from several years to several decades. If and when the AGI tipping point comes, some fear the machines’ goals could fall out of “alignment” with humanity’s. Hinton, for instance, fears autonomous weapons becoming those “killer robots.” In a summer 2022 survey of machine learning researchers, nearly half believed there was at least a 10% chance AI could lead to “human extinction.” Until these dangers are better understood, rules are hard to propose, let alone pass in government.

Another issue is that even chatbot creators have a hard time pinpointing why a “black box,” as the machines’ opaque processes are dubbed, spits out certain things and not others. In one notorious early failure, a Google photo service labeled African Americans “gorillas.” In another case, an AI-assisted hiring process at Amazon filtered out female candidates. Both issues were rectified, but systemic change remains challenging.

Some Promising Steps

AI companies say they are open to oversight. Sam Altman, co-founder and CEO of OpenAI, maker of ChatGPT, was famously receptive to Congress’ questions and suggestions at a May hearing. He visited the Hill along with neuroscientist and AI expert Gary Marcus and IBM chief privacy and trust officer Christina Montgomery. The senators seemed eager to get ahead of the problem, but some lawmakers had difficulty grasping AI’s tenets. And concrete plans were nowhere in sight: beyond general suggestions, there was no detailed discussion of a regulatory agency that would issue licenses allowing companies to conduct advanced AI work.

There are potential bright spots in the private sphere. Anthropic, founded in 2021 by former OpenAI researchers who wanted to focus on ethical development, employs a method known as “mechanistic interpretability.” Founder Dario Amodei describes this as “the science of figuring out what’s going on inside the models.” The startup purposely builds models that deceive, then studies how to stop the deception.
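
To make that concrete, here is a minimal, hypothetical sketch of one building block of such work: recording a model’s internal activations so researchers can inspect what its components do on a given input. The toy PyTorch network and hook names below are stand-ins of our own, not Anthropic’s tooling.

```python
# A toy stand-in for interpretability tooling: register PyTorch forward
# hooks so every layer's activations are captured for later inspection.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)

activations = {}

def record(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attach a hook to each layer so a forward pass leaves a trace.
for name, layer in model.named_children():
    layer.register_forward_hook(record(name))

model(torch.randn(1, 8))

# Researchers study traces like these to work out what computation
# the model is actually performing.
for name, act in activations.items():
    print(name, tuple(act.shape))
```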

Congress is taking some action. The bipartisan American Data Privacy and Protection Act, which would establish privacy rules around AI, was introduced last year. Mutale Nkonde, CEO and founder of AI for the People, a policy advisory firm, notes that it incorporates the concept that privacy is a civil rights issue. “If the working of this AI system does not comply with existing U.S. [civil rights] law, then we can’t use it, in the same way that we wouldn’t release a medicine or allow a food to be introduced to the American people that didn’t meet rigorous standards,” she says.

The Biden administration has released a Blueprint for an AI Bill of Rights outlining privacy commitments, saying that data should be collected with users’ knowledge and calling for disclosures to be readable, transparent, and brief. It also proposes protections against algorithmic bias, calling for equity concerns to be built into systems’ design and for independent testing of companies’ success in this realm. The Bill of Rights is not an executive order or a proposal for law, however.

The closest the President has come to actual rule-making came in July, with voluntary guidelines agreed to by Anthropic, OpenAI, Google, Meta, Microsoft, Amazon, and Inflection (whose founders include LinkedIn co-founder Reid Hoffman). They include submitting to cybersecurity safety testing by independent parties and employing “watermarks” that identify material as AI-generated. (The technique is not foolproof, but shows promise.) Companies also agreed to prioritize research on preventing AI system bias.
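
For a sense of how such a watermark can work, here is a toy sketch of one published approach, the “green list” scheme of Kirchenbauer et al. (2023): a watermarking generator quietly favors a pseudo-random half of the vocabulary keyed to the previous token, and a detector checks whether text overuses that half. This illustrates the idea only; it is not the scheme these companies committed to, and the generation side is omitted.

```python
# Toy detector for a "green list" text watermark: a watermarking generator
# would favor words that hash to the "green" half of the vocabulary given
# the previous word; the detector recomputes that split and asks whether
# green words are overrepresented.
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    # Deterministic pseudo-random split keyed to the preceding word.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_z_score(text: str) -> float:
    words = text.split()
    pairs = list(zip(words, words[1:]))
    hits = sum(is_green(p, w) for p, w in pairs)
    n = max(len(pairs), 1)
    # Unwatermarked text should land near 50% green; the z-score measures
    # how surprising the observed fraction is under that null.
    return (hits / n - 0.5) / math.sqrt(0.25 / n)

print(green_z_score("the quick brown fox jumps over the lazy dog"))
```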

But the European Union leads on setting guardrails. The AI Act, passed by the European Parliament in June, would rate uses of AI on a scale of riskiness and apply rules, with punishments for running afoul of them. High-risk uses include AI that runs critical infrastructure such as water systems and tools like facial recognition software (which the Act strictly limits). The Act’s final wording is being negotiated with the two other major EU legislative bodies, and lawmakers hope to pass it by the end of the year.

A Growing Sense of Urgency

There’s still deep disagreement on the scope of rules. China proclaimed its intention to dominate the field as early as 2017, and it has since drafted rules requiring chatbot creators to adhere to its censorship laws. Faced with that challenge, some worry about inhibiting U.S. innovation with too many regulations.

Others, however, believe allowing for proliferation of misinformation via AI, particularly leading into a U.S. election year, could threaten our very democracy. If we don’t have a functioning government, they say, tech innovation is the least of our worries.

Privacy and copyright issues are gaining urgency. A potential bellwether: comedian Sarah Silverman’s suit against OpenAI and Meta for training their models on her book The Bedwetter. Striking Hollywood writers and actors worry that AI could upend the industry by, say, writing screenplays largely autonomously or replacing thespians with lifelike recreations based on scans bought for a one-time fee.

Nkonde sees a larger issue touching all aspects of life: studies show that citizens believe AI’s proclamations blindly. “People think these technologies, that are not finished, that do not recognize the whole of society, can make these huge decisions and shape our consciousness,” she says. “And they don’t even really know how to recognize ‘That’s a cat.’ ‘That’s a dog.’”

Tristan Harris, founder of the Center for Humane Technology, recently told a group of expert technologists that regulations need to be implemented urgently. But even Harris cautioned against pessimism and pointed to the world’s success in slowing the spread of atomic weapons and averting nuclear war. Things are moving fast with AI, and the issues are perhaps even more complicated. But, he told the audience, “We can still choose which future we want.”



Are AI Bot Brokers Ready to Manage Your Investments?

Using software to suggest investment picks isn’t new, but not everybody in the industry is ready to hand their money over to generative AI.

Software never sleeps: You don’t have to worry about getting your broker on the phone if it’s a bot. And given conversational AI’s limited phone presence, you’re probably better off just texting with it. That was the sales pitch for robo-investing services a decade ago when they began going up against traditional advisors by applying algorithms to make and monitor investments.

But the rise of generative AI has enabled systems that can carry on open-ended conversations and at least appear to come up with original thoughts and creations. They are opening possibilities for more personalized financial services as AI evolves from a tool for fund managers to something that investors can interact with directly.

The bull case for that: AI will do everything robo-advising formulas did but with more nuance and greater awareness as it learns. “AI is capable of deep-learning algorithms, whereas robo advising was based on machine learning and algorithms,” writes Suchi Mishra, associate dean for faculty affairs and a professor in the finance department at Florida International University in Miami. Robo-advising will have to advance to the latest phase of AI, she says.

Investing AI in Action

Q.ai, a new service from Jersey City, N.J.-based Quantalytics Holdings, pitches itself as a logical next step. It offers no-fee “investment kits” of four to 20 securities in a market sector. They are picked by an AI that assesses things like market metrics, news, Google search trends, and social-media sentiment.
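
Q.ai has not published its model, but the general recipe the article describes, normalizing several signals and blending them into one score per security, might look something like this sketch. The tickers, signal values, and weights are invented for illustration.

```python
# Hypothetical signal blending: standardize each signal across tickers,
# then rank by a weighted sum. Tickers, values, and weights are invented.
from statistics import mean, stdev

signals = {
    # ticker: (momentum, news_sentiment, search_trend)
    "AAA": (0.12, 0.6, 1.8),
    "BBB": (-0.05, 0.2, 0.4),
    "CCC": (0.30, -0.1, 2.5),
}
weights = (0.5, 0.3, 0.2)  # assumed, for illustration only

def zscores(column):
    mu, sigma = mean(column), stdev(column)
    return [(v - mu) / sigma for v in column]

# Standardize column-wise, then recombine row-wise per ticker.
columns = [zscores(col) for col in zip(*signals.values())]
scores = {
    ticker: sum(w * z for w, z in zip(weights, row))
    for ticker, row in zip(signals, zip(*columns))
}

for ticker, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{ticker}: {score:+.2f}")
```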

As of July 7, Q.ai reported year-to-date returns for these kits that ranged from 52.36% for a cryptocurrency kit to negative 8.28% for a “Recession Resistance” offering.

ETF Managers Group’s AIEQ, launched in 2017, offers a longer history for comparison. The Summit, N.J.-based firm says it uses IBM’s Watson AI platform to analyze millions of data points, drawn from news, social media, industry and analyst reports, financial statements on over 6,000 U.S. companies, and technical, macro, and market data.

Over the last five years, the fund has returned 4.9%, trailing the 11.78% five-year return of Vanguard’s benchmark S&P 500 index fund. It also trails two large actively managed funds: the American Funds Growth Fund of America, at 9.81%, and Fidelity’s Contrafund, at 11.04%.

Saying “research is still nascent in this area,” FIU’s Mishra pronounces herself unsure whether AI-routed investing can beat the market. (In fact, any actively traded fund, whether humans or bots click the “sell” buttons, can struggle to match index funds’ returns, because equity sales in actively managed funds incur capital gains taxes that don’t affect passively managed index funds.)
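
As rough arithmetic on what those gaps mean in dollars, and assuming the quoted figures are annualized returns (the standard reporting convention), here is what $10,000 would grow to over five years at each rate, before any tax drag.

```python
# Growth of $10,000 over five years at each fund's quoted rate, assuming
# the figures are annualized returns (the standard reporting convention).
def grow(principal: float, annual_return: float, years: int) -> float:
    return principal * (1 + annual_return) ** years

for name, r in [
    ("AIEQ", 0.049),
    ("Vanguard S&P 500 index fund", 0.1178),
    ("Growth Fund of America", 0.0981),
    ("Fidelity Contrafund", 0.1104),
]:
    print(f"{name}: ${grow(10_000, r, 5):,.0f}")
```

On those assumptions, AIEQ’s stake ends up roughly $4,700 behind the index fund’s.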

Could widely distributed AI investing worsen market fluctuations? Pawan Jain, assistant professor of finance at West Virginia University in Morgantown, W.V., thinks we already live in that world.

“AI in investing has been in existence for a long period of time,” he says, pointing to how program trading (automated transactions triggered by preset conditions) accelerated the 1987 market crash, as well as the large role of high-frequency trading algorithms today.
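
For readers new to the term, a bare-bones illustration of program trading, an order that fires automatically once a preset condition is met, might look like the following. The threshold and prices are invented.

```python
# Bare-bones "program trading": an order that fires automatically when a
# preset condition is met. Threshold and prices are invented.
from typing import Optional

def stop_loss(entry_price: float, last_price: float,
              max_drawdown: float = 0.05) -> Optional[str]:
    """Return a sell order once price falls past the preset limit."""
    if last_price <= entry_price * (1 - max_drawdown):
        return f"SELL at {last_price:.2f}"
    return None

for tick in (101.0, 98.5, 94.7):
    print(tick, "->", stop_loss(100.0, tick) or "hold")
```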

However, the biggest fear many people raise about AI is not the subpar performance and panicked trading that human managers already deliver. It’s the potential of new generative AI systems like ChatGPT to “hallucinate,” or otherwise make stuff up.

Guardrails Required

AI investing and financial planning services often emphasize that they haven’t just handed over investor wallets to a machine-learning model.

AIEQ’s founders have noted that human employees monitor their AI output for signs of emerging bias. Art Amador, a partner in the fund, says that the company is developing a transparency tool that will allow banks and asset and wealth managers to check its data inputs and investment decisions. Q.ai, owned in part by Forbes Global Media Holdings Inc., makes similar points in its online FAQ.

Abu Dhabi-based startup Nemo uses an OpenAI model called text-davinci-003, a sibling of the models behind ChatGPT, to let non-U.S. investors (it has yet to register in the States) ask questions they might put to a human broker. But Nemo, too, says it doesn’t let AI go off-leash.

“We guard against hallucinations by consistently reviewing the questions our users ask, the answers Nemo AI provides, and then adjusting how we train our version of the model,” spokesman Nick Scott writes in an email. “At our most recent review, we hadn’t seen any hallucinations,” he adds.

But while holding an AI system accountable may be difficult, convincing customers of the effort involved may be much harder. “It’s difficult to reverse-engineer some of the decisions that AI is making,” says Jain. “Until we know that we are wrong, and we know where we went wrong, it’s really difficult to write the code that will not make the same mistake.”

To one of the first mass-market robo-investing firms, those issues argue for keeping AI on a back burner.

“All of the algorithms we use to provide advice are explainable”—meaning an expert can decipher their output—“and they’re deterministic, meaning the algorithms will produce consistent outputs given the same inputs,” writes John Mileham, chief technology officer of the pioneering robo investing firm Betterment, in an email. “Many AI systems, like ChatGPT, will fall short of these properties, which prevents us from using them directly to provide financial advice.”
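
A generic sketch of the kind of deterministic, explainable rule Mileham describes, ours and not Betterment’s actual code, could be as simple as computing the trades that restore a target allocation: the same inputs always produce the same trades, and each trade traces directly to a target weight.

```python
# A generic deterministic rebalancing rule: same inputs, same trades, and
# every trade traces to a target weight. Not Betterment's actual code.
def rebalance(holdings: dict, targets: dict) -> dict:
    """Dollar trades needed to restore target allocation weights."""
    total = sum(holdings.values())
    return {
        asset: round(weight * total - holdings.get(asset, 0.0), 2)
        for asset, weight in targets.items()
    }

holdings = {"stocks": 7_000.0, "bonds": 3_000.0}
targets = {"stocks": 0.60, "bonds": 0.40}
print(rebalance(holdings, targets))  # {'stocks': -1000.0, 'bonds': 1000.0}
```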

One of Betterment’s tests of an unspecified AI had it flub one of the most fundamental questions about investment planning: How long will a portfolio support a retiree? “The math it ran was faulty, and it applied the logic incorrectly, which led to a bad answer,” says Nick Holeman, director of financial planning at the New York firm, in that same message.
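
Notably, that question has a standard closed-form answer. With starting balance B, annual return r, and an end-of-year withdrawal W greater than r*B, the portfolio lasts n = ln(W / (W - rB)) / ln(1 + r) years. The sketch below uses illustrative figures of our own, not numbers from Betterment’s test.

```python
# Closed-form answer to "how long will a portfolio last?" with starting
# balance B, annual return r, and end-of-year withdrawals W (W > r*B).
# Figures are illustrative, not from Betterment's test.
import math

def years_until_depleted(balance: float, withdrawal: float, r: float) -> float:
    if withdrawal <= balance * r:
        return math.inf  # growth outpaces withdrawals; never depleted
    return math.log(withdrawal / (withdrawal - balance * r)) / math.log(1 + r)

# $1,000,000 earning 5% a year, withdrawing $70,000 annually:
print(f"{years_until_depleted(1_000_000, 70_000, 0.05):.1f} years")  # ~25.7
```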

Other investing firms continue to develop AI-based advice systems. Toggle AI aims to use an implementation of ChatGPT that it says will be programmatically constrained to stick to providing reality-grounded answers to investors’ questions. But some observers think an untiring AI can yield more benefits in the less exciting parts of financial planning—like answering client questions.

Software doesn’t get bored by seeing the same questions, says Jain. And an AI can, or at least should, get smarter as it learns from experience with each run through the data.

“I’d love an AI to automatically calculate current and expected cashflows by connecting to my accounts and ensure that I’m in line with my goals for spend, savings, etc.,” says Ali Nawab, CEO and co-founder of Toronto-area startup Agentnoon, which offers AI-based management services to companies.

Possible Upsides for Newer Investors

And more people may benefit from automated help. Jain cites an Indian fintech firm, Fisdom, that uses automated tools to provide financial guidance to a broader base of customers.

“It’s opening up a sort of area where it wasn’t available for these low net-worth individuals,” he says, including ones who don’t have employer-based retirement benefits. “It’s not only helping individuals get into stock market investments, but also it’s helping them save for any future needs.”

Scott, with Nemo, makes almost the same point. “The main thing we’ve come to notice is that people who previously didn’t have access to ask a human questions about investing are now able to do so,” he says.

But AI’s potential changes to the business are broad enough that even investors on a first-name basis with a human broker may feel the effects. Betterment, for example, has already decided that AI is worth a spot in its back-office systems.

“Our use cases remain limited to selecting the most relevant answer to customer support questions via AI-assisted chatbots,” Mileham says. But its human staffers are a different matter. “Bettermenters already use generative AI tools to summarize meeting transcripts, diagnose and debug software issues, help draft internal communications, learn about new technologies, and more.”

So if your broker sounds less weary about work, the market might not deserve the credit. AI may have lifted a few burdens from that human’s shoulders.

