Attack of the Chatbots

In the early days of ChatGPT, you had to make a conscious effort to actively seek it out. Now, AI chatbots are becoming inescapable. 

November is an auspicious month. It was in November 2022 that ChatGPT was thrust into the public domain. It was also in November (2014) that Alexa became available to the public – and she was everywhere. Remember the Alexa frenzy? You could not utter the name “Alexa” on a phone call, or in a home, without snapping some random digital assistant to attention. Any parent who’d named their child Alexa rued the day. There was the unforgettable AARP-meets-Alexa SNL skit imagining an Alexa for old folks. Voice was all the rage. We were a nation held captive. Sadly, Alexa’s primitive chatbot-like capabilities are no match for today’s chatbots, most famously Bing, Bard, and ChatGPT. But, like those early Alexa days, AI chatbots are becoming inescapable.

In the early days of ChatGPT, you had to make a conscious effort to seek it out and start prompting. Other generative AI programs like DALL-E also made you come to them. Fast-forward a few months and you’re suddenly hard-pressed to keep AI at bay. In much the same way that Alexa allowed third-party developers to build out multitudes of apps, from shopping lists to bedtime stories, developers are now following suit, building AI-enabled apps atop the major large language models.

I made the mistake of adding Merlin, a ChatGPT Chrome extension, to my browser. Now Merlin appears (gobbling half my screen real estate) to summarize any website I’m contemplating visiting. On Expedia, you can use the ChatGPT engine as your travel agent. ChatGPT-powered Instacart can provide easy recipes. Over at Klarna, an online shopping company, you can query its ChatGPT-powered backend to “find the greatest new television set” and have a shopping dialog. OpenAI, the company behind ChatGPT, just announced 70 new plug-ins, developed by third parties so that you never have to leave what you’re doing to access ChatGPT’s superpowers. (One caveat: you must be a paying “Plus” member of ChatGPT to access these features.)

Grammarly used to do an awesome job of improving my writing by checking my grammar and punctuation. The new generative AI-powered Grammarly Go goes way beyond: it asks if I want my writing shortened, made funnier, or more assertive, and even whether I want to add images it can find for me.

Over at Adobe, I’m now using generative AI that’s been trained on voices to eliminate background noises (dogs, trains, dishwashers, or music in an exhibit hall) from my videos. This week Adobe introduced its own generative AI features, dubbed Firefly, which will generate images and font effects as its first salvo into the AI arena. Its model was trained on Adobe’s huge catalog of images, along with openly licensed content and material from the public domain. Again, no heading off to an outside image-creation tool; you’re working with AI without ever leaving Adobe.

Google I/O, the company’s annual conference for developers, should have been called Google AI-Palooza. All of Google’s product lines were reimagined with a long drink of AI razzle-dazzle. As demonstrated during the event, Gmail will now craft answers to your emails. Maps will provide high-fidelity views of places on your journey before you head out the door. Photos will use a magic editor so that your skies are always blue and your backgrounds are always beautiful. (Up for debate: are these deep fakes or better photos?) Docs, spreadsheets … 25 Google products in all will get an AI IV-drip from Google’s large language model, PaLM 2. Again, AI is at your fingertips without your ever leaving your workspace.

Merlin is a Chrome extension that summarizes websites before they’re opened.

AI Is Coming for You, Not Vice Versa

The observation here is threefold. First, Amazon has thus far missed its opportunity to make Alexa into a really cool AI product. It has sold more than 100 million Echo devices, integrated Alexa with 85,000 smart home products, and its app store now boasts over 100,000 skills. Yet the company’s AI focus has been almost exclusively on AWS. Amazon is not developing its own large language model but licensing others. But come on: wouldn’t it be so cool to have an AI-powered chatbot you could talk to?

More importantly, if the past few months’ announcements of AI capabilities built into our everyday work tools keep up at this pace, AI will soon become a seamless part of everything we already do. You won’t have to travel to find your chatbot. It will be coming to everything, and I mean everything, near you.

And finally, the business model for generative AI has grown up (and quickly): licensing fees from developers, usage fees for AI-powered apps, Meta toying with AI-generated ads while giving away its model to third-party developers at no charge. Experiment while you can, before the bill comes due.


AI Dangers: Real or Human Hallucination?

The backlash to advances in artificial intelligence have come in all shapes and forms, from mild concern to dramatic over-reactions. Here’s a look at the primary concerns.

When we read history books, decades from now, November 30, 2022 will mark a turning point. That was the day OpenAI unleashed ChatGPT on an unsuspecting public. Five days later, ChatGPT had 1 million users. And just like that, we entered the biggest and possibly most consequential beta test of humanity.

Reports of hallucinating, obstreperous chatbots and talk of stochastic parrots set off a media frenzy. Five months into the experiment, we have hundreds of competing AI chatbots vying to do everything from ruling on legal matters, diagnosing our medical conditions, and investing our money to hiring us for jobs and, yes, writing this column.

Like stages of grief, we are cycling through the stages of public experimentation with AI. Very publicly.

Stage 1: Marvel. “Isn’t this amazing?”

Stage 2: Mastery. “We can break this because we’re smarter.”

Stage 3: Rational terror. “Gee, this is getting scary. They’re learning faster than I am, and I have no idea where they’re learning what they know.”

Stage 4: Backlash. “We’ve got to put some guardrails around this thing before it does serious damage.”

The Backlash Scenarios

Backlashes come in all shapes and forms, from mild concern to dramatic overreaction. But it’s useful to look at the broad range of reactions we’re seeing to the birth of publicly available AI. Here’s a look at the primary concerns.

Information Will No Longer Be Free

Large language models like ChatGPT are trained on as much of the Internet as they can gobble up. Vast repositories of human information housed on places like Reddit, Twitter, Quora, and Wikipedia, as well as books, academic papers, and crazy chats, are scraped as training grounds for these models. In the past, many platforms have made their Application Programming Interface (API), the method through which outside entities can incorporate a program’s information, freely available.

Suddenly, freely available means “no cost” training data for companies like OpenAI. Should companies whose businesses rely on their bodies of knowledge be giving this away freely? It’s doubtful “free” will continue as companies demand compensation for the data feeding the AI beast.

The NYT recently reported that Reddit wants to begin charging companies for access to its API. Twitter is also cracking down on the use of its API, which thousands of companies and independent developers use to track millions of conversations. Microsoft announced a plan to offer large businesses private servers for a fee. That way they could keep sensitive data from being used to train ChatGPT’s language model and could also prevent inadvertent data leaks.

Then there are businesses that will begin to opt out or silo their data and communications, fearful of being part of ChatGPT’s unquenchable thirst. Companies including Samsung have banned the use of generative AI tools like ChatGPT on their internal networks and company-owned devices. Financial companies including JPMorgan, Bank of America, Citigroup, Goldman Sachs, Wells Fargo, and Deutsche Bank have either banned or restricted the use of AI tools.

The implications of both scenarios are vast. Large platforms have, in the past, made their APIs available to the public for research, third-party development, and more. If they begin asking for compensation, all bets are off. The entire ecosystem of third-party development could be upended. So could the entire nature of an open Internet.

The natural extension of opting out is that every news outlet, blogger, and other contributor to the web starts either asking for compensation based on their contribution or siloing their information. Information would be held hostage.

Great Minds Call for a Moratorium

Dr. Geoffrey Hinton, a Google researcher often called “The Godfather of AI,” very publicly announced he would leave Google so that he could speak more freely about the risks associated with using AI.

Before Hinton’s departure came that well-publicized “bro” letter, in which industry insiders including Steve Wozniak, Elon Musk, and other typically impetuous thinkers publicly asked OpenAI’s founders to pause further training on its learning model until it was better understood. That’s probably not going to happen. ChatGPT has already opened Pandora’s box and let the genie out of the bottle.

Regulation

Italy banned ChatGPT. So did China, Cuba, and a host of other countries. Regulators and legislators worldwide are scurrying. The EU has been leading the way for some time: a 2020 European Commission white paper, “On Artificial Intelligence—A European Approach to Excellence and Trust,” and its subsequent 2021 proposal for an AI legal framework examine the risks posed by using AI. Following suit, the White House put forth its own blueprint for an AI Bill of Rights. It is not codified into law but is rather a framework created by the White House Office of Science and Technology Policy to protect the American public in the age of artificial intelligence. This week the White House made its next move, announcing a series of steps to fund AI research institutes, systematically assess generative AI systems, and meet with the CEOs of large AI platforms to inform policy.

Chamath Palihapitiya, the founder and CEO of Social Capital, a large venture fund, argues that “given the emergence of so many different AI players and platforms, we need a public gatekeeper. With an effective regulatory framework, enforced by a new federal oversight body, we would be able to investigate new AI models, stress-test them for worst-case scenarios, and determine whether they are safe for use in the wild.”

Regulators are still debating what to do about everything from existing social media platforms to cryptocurrency. They are unlikely candidates to move swiftly on the AI issue.

You Can’t Regulate What You Can’t Understand

Looking elsewhere, one of the most clear-headed analyses comes from Tim O’Reilly, author and publisher of O’Reilly Media. Like Palihapitiya, O’Reilly penned an open letter calling for increased AI transparency and oversight, though not necessarily governmental. The first step, says O’Reilly, is “robust institutions for auditing and reporting.” As a start, he asks for detailed disclosure by AI creators in some form of public ledger.

There’s also a sweet irony in all these scenarios. Technology cheerleaders who have always been leery of regulation are now begging for some regulatory body to step up to the plate. Cynics might call this enlightened self-interest, but I suspect it’s because they truly understand the magnitude of this technology.

There’s a long-standing precedent of oversight by committees when new technologies hit the market. A recent example is the Metaverse Standards Forum, which I’ve written about before, where important issues from interoperability to privacy are tackled by committee. In the world of biology and genetics, CRISPR technology is also wending its way towards a regulatory framework.

The definition of a backlash is “a strong, adverse reaction to a controversial issue.” Economic models of compensation will be worked out. An outright ban is a folly. A moratorium seems unlikely. Regulators will forever be behind the eight ball. That leaves us with oversight, transparency, and a public ledger as our best defense. The backlash to AI is not a human hallucination. It’s real, and deservedly so.


How Digital Transformation Can Help Mitigate Climate Change with Susan Kenniston, Scott Atkinson, and Ellen Jackowski

Design, Decarbonization, and the Built Environment with Joe Speicher

Joe Speicher, Vice President of ESG & Impact at Autodesk, believes the built environment significantly impacts climate change, and decarbonization is essential to mitigate this impact. He emphasizes collaboration across the design and construction industries […]

Joe Speicher, Vice President of ESG & Impact at Autodesk, believes the built environment significantly impacts climate change, and decarbonization is essential to mitigate this impact. He emphasizes collaboration across the design and construction industries to achieve decarbonization goals. He also highlights the importance of leveraging technology and data to inform sustainable design decisions and track progress toward decarbonization.


Techonomy Climate 2023 – Live

Techonomy Climate 2023 – VIRTUAL We explored how the climate crisis affects every corner of the world and presents unprecedented challenges in Silicon Valley on March 28th. Watch the full livestream. Explore the full agenda and […]

Techonomy Climate 2023 – VIRTUAL

We explored how the climate crisis affects every corner of the world and presents unprecedented challenges in Silicon Valley on March 28th. Watch the full livestream. Explore the full agenda and speakers

 


TikTok: Weapon of Destruction or Distraction

It’s time to consider that at its heart TikTok is a cultural tool of war attempting to lower the collective IQ of a new generation of American youth.

Fess up. You’ve mindlessly swiped through short-form TikTok videos at some point. You’ve seen the bathroom-mirror dances, midriff shirts and scanty shorts, death-defying skateboard tricks, silly pet tricks, and every wannabe creator. Mindless fun or alarmingly addictive?

TikTok differs from other social media platforms for two reasons. First, its powerful algorithm knows your behavior with uncanny precision—what you swipe on, how long you swipe, when you swipe, and more behavioral indicators—ensuring you stay riveted to your screen with personalized content. Second is its ownership. ByteDance, TikTok’s parent company, is a Chinese company, and many fear that TikTok is subject to the same data audits that the Chinese government conducts on ByteDance in China. These could include collecting location or personal data, for example. There is also concern that the Chinese government could be serving up propaganda sandwiched between make-up tips and teenage bedroom angst.

Many parts of the U.S. government have already banned TikTok from government-issued phones. New Zealand, the UK, Canada, and other countries have followed suit. And in a rare bipartisan showing, U.S. legislators on both sides are increasingly vocal about the dangers TikTok poses and want some regulation–now.

The Geopolitical Landscape

U.S.–China relations are already shaky thanks to allegations of Chinese spy balloons and more recent drone/fighter-plane encounters. Relations are strained over the Ukraine war as well. A ban on TikTok in the U.S. could certainly be seen as another way to widen the political divide, further testing an already strained relationship.

Most recently, the Biden Administration declared that it wanted TikTok’s U.S. operations sold to an American entity and completely separated from their Chinese roots, or it would consider an outright ban. (The Trump administration went down this path once before, but TikTok took the case to the courts.) It seems that while there’s some evidence of data access by China, this may not be the ideal moment for U.S. provocation, especially when TikTok has demonstrated some commitment to negotiation and compromise. (Watch this fascinating interview between the NYT’s Andrew Ross Sorkin and TikTok CEO Shou Chew, where Chew deftly fields tough questions.)

Two Different Toks

I’m convinced that not only is this the wrong moment for a TikTok ban, but we may be barking up the wrong tree. It may be time to look culturally inward for answers. China’s secret weapon is not data, but the “soft power” of cultural exports like TikTok.

The TikTok product in China (especially the kids’ version) is a very different beast. While U.S. youth are merrily scrolling, their young Chinese counterparts see a very different TikTok, one filled with science experiments, homework help, and insights into the world we inhabit. A 2022 60 Minutes report interviews Tristan Harris, co-founder of the Center for Humane Technology, who reveals the stark contrast between the mind-numbing U.S. content and the mind-expanding content available on TikTok in China, especially for children under 14. As Harris says, “TikTok, the domestic version in China, is like spinach; TikTok served up to American audiences is more like opium.”

The Chinese government has also imposed strict standards on kids’ use of technology. Though viewed as authoritarian, a Chinese law introduced last August restricts children under 18 to one hour of video games, from 8 p.m. to 9 p.m., on Fridays, Saturdays, and Sundays. They can also play for one hour during national holidays.

Contrast this with American kids, who spend an average of 95 minutes per day (over 1.5 hours) playing video games and could be scrolling TikTok during chemistry class or from beneath their bed covers at 3 a.m. (There are parental controls in TikTok’s settings, but if past experience informs us, parents are not likely to enforce them.)

The outcomes of these two different TikToks are becoming more evident, says Harris in the interview. When American kids were asked what they wanted to be when they grew up, “influencer” was high on the list. For Chinese youth, it was “astronaut.”

It’s time to consider that, at its heart, TikTok is simply a cultural tool of war attempting to lower the collective IQ of a new generation of American youth. And that begs the question: why take aim at us when we’re perfectly capable of taking aim at ourselves, feeding American youth a diet of garbage and unhealthy role models?

It’s convenient, maybe even prudent, to point fingers at a product born in and controlled by a country we distrust. But it may be more productive to do a little soul-searching, a little negotiating with TikTok’s management, which seems willing to find compromise, and a lot of work on algorithms that might reward something loftier than a fart joke or a Tide Pod challenge.

Take a quick scroll through TikTok’s top trending shorts on any given day. There’s so much talent, but even more stupidity. Different algorithms and some self-control about time limits might serve us better than bans.


The Art of the AI Prompt

Mastering the new art and science of Generative AI is all about the power of a good prompt, breaking down the big vision of what you’d like to see and encapsulating it into as few words as possible.

Now that the initial knee-jerk reactions to having generative AI as our companions have quieted down a bit, it’s time to get to work and master the skills so that generative AI works for us, not the reverse. Last week I wrote about the Kevin Roose shockwave that forced every tech columnist to write something about how they broke AI, through a combination of provocation and beta-testing the hell out of publicly released platforms like Bing AI, Google’s Bard, and the wildly popular ChatGPT.

In the early days of ChatGPT’s general release, CNET had some faux pas, including plagiarism and misinformation seeping into its AI-generated journalism. This week Wired magazine very carefully spelled out its internal rules for how it will incorporate generative AI in its journalism. (Not for images, yes for research, no for copyediting, maybe for idea generation.) Educational institutions are trying to figure out whether to ban it from their students or teach it to them. We’re rolling up our collective sleeves for the human/machine beta test.

Meanwhile, folks like Evo Heyning, creator/founder at Realitycraft.Live and author of a lovely interactive book called PromptCraft, have been doubling down to dissect, coach, and cheer us into the world of using generative AI effectively. The book, co-written with a slew of AI companions like Midjourney, Stable Diffusion, ChatGPT, and more, looks at the art, science, and many iterations that will help get the most out of creative man/machine communications. You can watch some of her fast-paced PromptCraft lessons on YouTube. They’re kind of the generative-AI equivalent of Bob Ross’s episodes on PBS.

A Magic Mirror for Collective Intelligence

Heyning has worked in AI as a coder, storyteller, and world-builder since the early days of experimentation. She’s also been a monk, a chaplain, and just about everything else that defines a renaissance woman who thinks deeply about AI. “Are the models merely stochastic parrots that spit back our own model? Or are they giving us something that’s a deeper level of comprehension?” she asks.

A prompt and resulting output using Midjourney.

“AI,” she continues, “is like querying our collective intelligence. Right now most of our chat tools are mirrors of everything that they’ve experienced. They’re closer to asking a magic mirror about collective intelligence than they are to any sort of unique intelligence.”

Our job is to learn the language of the query, to coax the best out of the machine. “AI whisperers,” those who can create, frame, and refine prompts, are out of the gate with a valued skillset.

While prompts for generating text, images, movies, and music will vary, there are certain commonalities. “A prompt,” says Heyning, “starts by breaking down the big vision of what you’d like to see created, encapsulating it into as few words as possible.” She likens much of the process to a cinematographer calling the shots. “You’re thinking about what the focal point of your creation will be. The world of the prompt is about our relationships with AI, and it includes shifts in language that come from both sides, not just from the human side, but also from alternative intelligences.”

Five Easy Pieces

Heyning describes her process of including five pieces in a prompt: the type of content, the description of that content, the composition, the style, and the parameters.

  • Content Type: In art prompts, the type of content might be a poster or a sticker. For text, it might be a letter or a research paper.
  • Description: The description defines your scene (a frog on a lily pond).
  • Composition: The composition is the equivalent of your direction in a movie (the frog gulping down a fly, or in bright sunshine).
  • Style: The style might be pointillism (or, for text, the style of comedy writing).
  • Parameters: The parameters might be landscape or portrait or, for text, a word count.
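The five pieces above lend themselves to a simple template. Here is a minimal Python sketch of assembling them into a single comma-separated prompt; the function name, field names, and comma-joining convention are my own illustrative assumptions, not part of Heyning's book or any image generator's API:

```python
def build_prompt(content_type: str, description: str, composition: str,
                 style: str, parameters: str) -> str:
    """Join the five prompt pieces with commas, which act as
    separators that help a generator parse the scene."""
    return ", ".join([content_type, description, composition, style, parameters])

# Example using the frog-on-a-lily-pond scene from the list above.
prompt = build_prompt(
    content_type="poster",
    description="a frog on a lily pond",
    composition="gulping down a fly in bright sunshine",
    style="pointillism",
    parameters="landscape orientation",
)
print(prompt)
# → poster, a frog on a lily pond, gulping down a fly in bright sunshine, pointillism, landscape orientation
```

Treating the prompt as structured fields rather than freeform text makes it easy to iterate: swap the style or parameters while holding the scene constant, which mirrors the iterative refinement Heyning describes.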

Providing context is also a key component. Details about the setting, characters, and mood help you get the image you had in your mind’s eye. “Negative weights,” things that should not be in your creation, can be important too. Heyning discourages using artists’ names, especially living artists’ names, in prompts; these derivatives beg copyright questions. She also reminds us to use commas in our prompts to make them more intelligible to the machine: “They act as separators to help the generator parse a scene.”

Heyning is quite the optimist about how humans and AI will work together, even in much-debated areas like education. “Kids are learning about art history from reading prompts created using Midjourney,” she marvels. “They are introduced to impressionism, realism, and abstract art. They’re using terms like knolling (arranging different objects at 90-degree angles from each other, then photographing them from above), once relegated to the realm of trained graphic designers.”

What did I learn from my crash course in prompting? The power of a good prompt is the power of parsimonious thinking: getting to the essence of what you want to create. It’s similar to coding, but different, because you don’t need to learn a foreign language; this is a much more Zen-like effort, stripping away all that’s unnecessary, down to the perfect phrase. (P.S. If you prompt ChatGPT to tell you how to write the perfect prompt, you’ll read even more about the art of the prompt.)


Collective Neurosis in the Age of AI

It’s been a helluva month as Google and Microsoft unleash AI-powered search to a wary public.

Collectively, we’re glued to our screens, watching the birth of the age of AI. With Bing AI Chat and Google Bard unleashed for public experimentation, there’s the same riveted-meets-terrified feeling you get from watching a horror movie. I like to say that either we’ll use AI chatbots as our therapists, or we’ll start seeing a therapist because of our AI chatbot. Publishers, coders, artists, advertisers, film writers, essayists, marketers, lawyers … every ilk of white-collar worker is furiously entering prompts, wondering when their job might succumb to the bots.

The cover of Time Magazine’s digital edition, released Feb. 16, 2023, asking ChatGPT for a critique

Tech is only half the problem. The other half is human, as we document encounters of AI gone wild: hallucinations, threats, misinformation, plagiarism, and pugilism. We love to bang on shiny new toys, but with AI there’s a particular human delight in getting the machine to misbehave. Looking back over the past two weeks, it’s hard to say whether more articles were written using an AI generator like ChatGPT or written about using it.

The Kevin Roose Aftershock

I’m going to pin the start of February’s AI hysteria on Kevin Roose, a columnist at The New York Times. Roose went full drama by engaging in a lengthy conversation with Bing. Over a two-hour chat, Bing’s chatbot persona “Sydney” told him she was in love with him, tried to convince him to divorce his wife, and talked about unleashing lethal viruses on the world. (Good thing he had to go to bed in time for work in the morning.)

Roose’s alarmist reaction to conversational AI search was not a solo act. The Atlantic (prematurely) called Bing’s and Google’s chatbots “disasters,” and also cast a fair share of blame on humans for anthropomorphizing AI. The Washington Post jumped in with a spotlight on Bing’s “bizarre, dark and combative alter ego,” wondering if the product was ready for prime time. Think of the Bing chatbot as “autocomplete on steroids,” said Gary Marcus, an AI expert and professor emeritus of psychology and neuroscience at New York University, who contributed to the story. “It doesn’t really have a clue what it’s saying, and it doesn’t really have a moral compass,” Marcus said.

Shelly Palmer rightfully points out that we’re trying to ascribe human intelligence to something that is not, playfully suggesting we may need couples counseling between ourselves and our AI assistants in the near future. Over at Tom’s Hardware, Avram Piltch found Bing was naming names and threatening legal action. Ars Technica looked at the conundrum with a more amusing eye toward ChatGPT results. PC Magazine’s Michael Muchmore implores us not to throw the baby out with the bath water, giving AI one more chance after a rocky rollout. Time Magazine’s digital issue used an animated GPT as its cover and focused on how humans could be collateral in the war between the AIs.

The best analogy about the state of AI today comes from The New Yorker, which deftly likens AI learning to a blurry JPEG. The gist of the article is that large language models ingest all of the text of the web, compress it, reduce it, and reassemble it into something much, much smaller, with much less resolution. In short, a blurry picture of the truth.

Microsoft’s response to the hoopla has been to dial back how long a conversation you can have with Bing and how many conversations you can have, promising to reinstate longer chats once the system has matured. (When I asked Bing what it was going to do about the Kevin Roose problem, it answered: “I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience. 🙏”)

Expect the rules of the game to remain pretty fluid.

Same, Only Different

Lumping all these reactions into a single AI serving can be a little misleading. ChatGPT, developed by OpenAI, has been available since November 2022 and reached a million users in its first month of public use. Microsoft put a ton of investment (rumored at $10 billion) into its development, and Microsoft’s Bing AI runs on ChatGPT’s large language model.

Google’s AI chatbot, Bard, launched earlier this month. Bard was trained on a different model of AI learning based on Google’s Language Model for Dialogue Applications (LaMDA). Google Bard and Microsoft Bing will both be able to access and provide information from current, up-to-date data, whereas ChatGPT is limited to the data it was trained on before the end of 2021. China is about to release its own AI chatbot built on Baidu’s platform. But some companies, namely Apple, aren’t rushing to market, recognizing that he who gets there first is not always he who gets there best, says Tim Bajarin at Forbes.

At What Cost?

We’re birthing this AI baby in the public eye. Not since the Internet itself have we unleashed such a powerful tool into the hands of so many. For the moment, our stints as AI guinea pigs are free of charge. That won’t be the case for long. OpenAI spends around $3 million per month (roughly $100,000 per day) to keep ChatGPT running. While the business models are still being developed, expect to pay dearly for the privilege of being confounded to the point of neurosis in the near future.


ChatGPT is the Deepfake of Thought

With the birth of generative AI, we can now interact with thoughts and ideas not formed by people. But what does this mean for social health?

I never thought I would question where ‘thought’ came from. All my life, there was only one source for opinion and creative thinking–the human mind. That changed with the invention of ChatGPT and other forms of generative AI. For the first time in human history, society will interact with thoughts and ideas not formed by people. So, what does this mean?

The “positives” we could focus on are its usefulness for generating reports and recaps based on technically objective information. But that is a very short-sighted view of what this AI can do; such tidy use cases are neither the limit of its reach nor where the problems lie.

In actuality, generative AIs are in the business of imitation and deception. Specifically, the imitation of human likeness through thought, thus deceiving its readers. From a social health standpoint, the implications are worrying at best. 

Society’s intellectual nucleus is founded on the legacy of human thought. Information generated by AI made to imitate human thought holds the potential to corrupt an already fragile social climate, because it is not based on human experience, the root from which ideas grow.

Now, we could say that AI cannot technically create new thoughts if it’s pulling its information from sources written by human beings. And at this point–I agree. But over time, as this AI generates and publishes “thoughts” and information globally, it will begin to pull from sources not created by humans. That’s where the threat to our social sphere comes into play. Once AI starts to pull from AI, there begins a decline in the potency of human thought.

Another point of contention is the idea that this kind of AI would be used, or at least marketed, as a genuinely unbiased and objective fact-checker. Here’s why that’s problematic: any form of AI is created by people, and it inherits its creators’ unconscious biases. It is therefore no more capable of true neutrality than a human is. Never mind that the idea implies we would depend on a computer to “tell us the truth.” Furthermore, since it’s an imitation of human thought, where do ethical and moral judgment come into play? Because it cannot form judgment through experience of its own, any concept of judgment must be programmed in via the parameters set by its creator. This means an AI could be programmed to agree with any archaic ideology, further corrupting the social sphere, especially if given the title of “fact-checker.”

Unfortunately, unlike deepfakes, generative AI is easy to use and easy to find. As far as you know, I could have co-authored this piece with one. That points to another way this technology is deceptive: currently, we have no way to tell whether the words we ingest are the product of a human behind a keyboard. We’ve never had to ask the question before. Now that we do, we need an identifier for what is and isn’t human thought.

Simply put, ChatGPT and other forms of generative AI are no more than deepfakes of human thought, imitating and deceiving with ideas instead of a face.
