Many call for regulations on AI, but progress is slow. Meanwhile, others worry such rules will inhibit innovation. What are the ethical implications?
One warning sign was when Microsoft’s chatbot “Sydney” asked a tech reporter to leave his wife because it postulated they’d fallen in love. (Sure, the journalist goaded the bot into revealing its dark side. But it did not disappoint!) Soon after, My AI, a Snapchat feature being tested by a researcher posing as a 13-year-old girl, advised her on her plans to lose her virginity to a 31-year-old: set the scene with dim lighting and candles. Then Geoffrey Hinton, a pioneer in the development of neural networks, quit his job at Google. He cited the proliferation of false information, impending mass job loss, and “killer robots,” and added that he regrets his life’s work.
Ethical and regulatory concerns around AI accelerated with the release of ChatGPT, Bard, Claude and other bots starting late last year. Issues of copyright infringement, algorithmic bias, cybersecurity dangers, and “hallucinations” (making things up) dominate the conversation. Yet proposed solutions have little to no legal traction. Will this be a replay of social media—where ethics concerns and attendant rules stagnate? Or will governments and other players take a timely, yet not too strict, stand?
Part of the problem is knowing what regulations might be necessary. One issue is with expectations for the arrival of artificial general intelligence, or AGI—meaning the machines achieve, and possibly supersede, human capabilities. Predictions of when it will occur range from several years to several decades. If and when the AGI tipping point comes, some fear the machines’ goals could get out of “alignment” with humanity’s. Hinton, for instance, fears autonomous weapons becoming those “killer robots.” In a summer 2022 survey of machine learning researchers, nearly half believed there was a 10% chance AI could lead to “human extinction.” Until these dangers are better understood, rules are hard to propose, let alone pass in government.
Another issue is that even chatbot creators have a hard time pinpointing why a “black box,” as the machines’ opaque processes are dubbed, spits out certain things and not others. In one famous early breakdown, a Google photo service labeled African-Americans “gorillas.” In another case, an AI-assisted hiring process at Amazon filtered out female candidates. Both issues were rectified, but systemic change remains challenging.
AI companies say they are open to oversight. Sam Altman, co-founder and CEO of OpenAI, maker of ChatGPT, was famously receptive to Congress’ questions and suggestions at a May hearing. He visited the Hill along with neuroscientist and AI expert Gary Marcus and IBM chief privacy and trust officer Christina Montgomery. The senators seemed eager to get ahead of the problem, but some had difficulty grasping AI’s basics. And concrete plans were nowhere in sight. For instance, beyond general suggestions, there were no detailed discussions of a regulatory agency that would issue licenses allowing companies to conduct advanced AI work.
There are potential bright spots in the private sphere. Founded in 2021 by former OpenAI employees who wanted to focus on ethical development, Anthropic employs a method known as “mechanistic interpretability.” Founder Dario Amodei describes this as “the science of figuring out what’s going on inside the models.” The startup purposely builds models that deceive, then studies how to stop that deception.
Congress is taking some action. The bipartisan American Data Privacy Protection Act, which would establish privacy rules around AI, was introduced last year. Mutale Nkonde, CEO and founder of AI for the People, a policy advisory firm, notes that it incorporates the concept that privacy is a civil rights issue. “If the working of this AI system does not comply with existing U.S. [civil rights] law, then we can’t use it, in the same way that we wouldn’t release a medicine or allow a food to be introduced to the American people that didn’t meet rigorous standards,” she says.
The Biden administration has released an AI Bill of Rights outlining privacy commitments—saying that data should be collected with users’ knowledge, and calling for disclosures to be readable, transparent, and brief. It also proposes protections against algorithmic bias—calling for equity concerns to be built into systems’ design, and for independent testing of companies’ success in this realm. The Bill of Rights is not an executive order or a proposal for law, however.
The closest the President has come to actual rule-making happened in July, with voluntary guidelines agreed to by Anthropic, OpenAI, Google, Meta, Microsoft, Amazon, and Inflection (whose founders include LinkedIn co-founder Reid Hoffman). They include engaging in cybersecurity safety testing by independent parties and employing “watermarks” that specify material has been generated by AI. (The technique is not foolproof, but shows promise.) Companies also agreed to prioritize research on preventing AI system bias.
But the European Union leads on setting guardrails. The AI Act, passed by the European Parliament in June, would rate uses of AI on a scale of riskiness and apply rules—and punishments for running afoul of them. High-risk uses include running critical infrastructure like water systems; tools like facial recognition software are strictly limited under the Act. The Act’s final wording is being negotiated with the two other major EU legislative bodies, and lawmakers hope to pass it by the end of the year.
There’s still deep disagreement on the scope of rules. China proclaimed its intention to dominate the field as early as 2017, and has since drafted rules requiring chatbot creators to adhere to censorship laws. Faced with that challenge, some worry about inhibiting U.S. innovation with too many regulations.
Others, however, believe allowing for proliferation of misinformation via AI, particularly leading into a U.S. election year, could threaten our very democracy. If we don’t have a functioning government, they say, tech innovation is the least of our worries.
Privacy and copyright issues gain urgency. A potential bellwether: comedian Sarah Silverman’s suit against OpenAI and Meta for training their models using her book The Bedwetter. Striking Hollywood writers and actors worry that AI could upend the industry by, say, writing screenplays largely autonomously or replacing thespians with lifelike recreations based on scans bought for a one-time fee.
Nkonde sees a larger issue touching all aspects of life: studies show that citizens believe AI’s proclamations blindly. “People think these technologies, that are not finished, that do not recognize the whole of society, can make these huge decisions and shape our consciousness,” she says. “And they don’t even really know how to recognize ‘That’s a cat.’ ‘That’s a dog.’”
Tristan Harris, founder of the Center for Humane Technology, recently told a group of expert technologists that regulations need to be implemented urgently. But even Harris cautioned against pessimism and pointed to the world’s success in slowing the spread of atomic weapons and averting nuclear war. Things are moving fast with AI, and issues are maybe even more complicated. But, he told the audience, “We can still choose which future we want.”
Using software to suggest investment picks isn’t new, but not everybody in the industry is ready to hand their money over to generative AI.
Software never sleeps: You don’t have to worry about getting your broker on the phone if it’s a bot. And given conversational AI’s limited phone presence, you’re probably better off just texting with it. That was the sales pitch for robo-investing services a decade ago when they began going up against traditional advisors by applying algorithms to make and monitor investments.
But the rise of generative AI has enabled systems that can carry on open-ended conversations and at least appear to come up with original thoughts and creations. They are opening possibilities for more personalized financial services as AI evolves from a tool for fund managers to something that investors can interact with directly.
The bull case for that: AI will do everything robo-advising formulas did but with more nuance and greater awareness as it learns. “AI is capable of deep-learning algorithms, whereas robo advising was based on machine learning and algorithms,” writes Suchi Mishra, associate dean for faculty affairs and a professor in the finance department at Florida International University in Miami. Robo-advising will have to advance to the latest phase of AI, she says.
Q.ai, a new service from Jersey City, N.J.-based Quantalytics Holdings, pitches itself as a logical next step. It offers no-fee “investment kits” of four to 20 securities in a market sector. They are picked by an AI that assesses things like market metrics, news, Google search trends, and social-media sentiment.
As of July 7, Q.ai reported year-to-date returns for these kits that ranged from 52.36% for a cryptocurrency kit to negative 8.28% for a “Recession Resistance” offering.
ETF Managers Group’s AIEQ, launched in 2017, offers a longer history for comparison. The Summit, N.J.-based firm says it uses IBM’s Watson AI platform to analyze millions of data points—from news, social media, industry and analyst reports, and financial statements on over 6,000 U.S. companies, to technical, macro, and market data.
Over the last five years, the fund has returned 4.9%—trailing the 11.78% five-year return of Vanguard’s benchmark S&P 500 index fund. It also trails two large actively managed funds: the American Funds Growth Fund of America, at 9.81%, and Fidelity’s Contrafund, at 11.04%.
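Assuming the quoted figures are annualized returns (the article doesn’t specify), a small gap in yearly performance compounds into a large gap in total growth over five years, as this illustrative sketch shows:

```python
# Standard compound-growth calculation, using the annualized returns
# quoted above. The "annualized" reading is an assumption; if the
# figures were cumulative, no compounding would apply.

def cumulative_growth(annual_return_pct: float, years: int) -> float:
    """Multiplier on an initial investment after compounding yearly."""
    return (1 + annual_return_pct / 100) ** years

aieq = cumulative_growth(4.9, 5)     # roughly 1.27x: about 27% total growth
index = cumulative_growth(11.78, 5)  # roughly 1.75x: about 75% total growth
print(f"AIEQ: {aieq:.2f}x, index fund: {index:.2f}x")
```

In other words, under that assumption, a dollar in the index fund would have grown nearly three times as much as a dollar in AIEQ over the period.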
Saying “research is still nascent in this area,” FIU’s Mishra pronounces herself unsure about whether AI-routed investing can beat the market. (In fact, any actively traded fund, whether humans or bots click the “sell” buttons, can struggle to match index funds’ returns because equity sales in actively managed funds incur capital gains taxes that don’t affect passively managed index funds.)
Could widely distributed AI investing worsen market fluctuations? Pawan Jain, assistant professor of finance at West Virginia University in Morgantown, W.V., thinks we already live in that world.
“AI in investing has been in existence for a long period of time,” he says, pointing to how program trading (automated transactions triggered by preset conditions) accelerated the 1987 market crash, as well as the large role of high-frequency trading algorithms today.
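The preset-condition trading Jain describes can be sketched as a rule checked automatically against market prices. Everything below (the function, the 10% threshold, the prices) is illustrative, not any firm’s actual trading logic:

```python
# Hypothetical sketch of a "program trading" rule: sell automatically
# when a position falls a fixed percentage below its purchase price.
# All names and numbers are illustrative.

def check_trigger(purchase_price: float, current_price: float,
                  stop_loss_pct: float = 10.0) -> str:
    """Return the action a simple preset rule would take."""
    drop_pct = (purchase_price - current_price) / purchase_price * 100
    if drop_pct >= stop_loss_pct:
        return "SELL"  # preset condition met: the automated sale fires
    return "HOLD"

print(check_trigger(100.0, 88.0))  # a 12% drop trips the 10% stop-loss: SELL
print(check_trigger(100.0, 92.0))  # an 8% drop does not: HOLD
```

Rules this mechanical, executing simultaneously across many portfolios with no human pause, are how automated selling can cascade, as it did in 1987.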
However, the biggest fear many people raise about AI is not the subpar performance and panicked trading that human managers already deliver. It’s the potential of new generative AI systems like ChatGPT to “hallucinate” or otherwise make stuff up.
AI investing and financial planning services often emphasize that they haven’t just handed over investor wallets to a machine-learning model.
AIEQ’s founders have noted that human employees monitor their AI output for signs of emerging bias. Art Amador, a partner in the fund, says that the company is developing a transparency tool that will allow banks and asset and wealth managers to check its data inputs and investment decisions. Q.ai, owned in part by Forbes Global Media Holdings Inc., makes similar points in its online FAQ.