Generative AI - A Critique
John Oliver comes through again
Welcome to ai4biz, my exploration of generative AI. I’m trying to find my way through the explosion of hype and activity around generative AI by way of analysis and experimentation. I hope you can get something out of it too.
I made a bunch of Google Alerts for generative AI topics like ChatGPT, and the number of results is positively overwhelming. It’s as if everyone has already started using ChatGPT to 10x the word counts on their posts. How can I figure out where to start my investigations?
John Oliver. He has a decent budget for writers and researchers and, as always, provides some keen insight wrapped in some great jokes.
He covers a lot of ground, so here are my highlights:
Conversational experiments get weird really quickly:
Oliver plays a clip from Furhat (2019): a super awkward interaction in which a therapist practices on a mannequin head with a projected face. Furhat Robotics has a YouTube channel showing a bunch of their experiments. They seem to have really bought into the idea that a projected human expression reads as “genuine,” yet every single one comes across as creepy and weird. The only one that doesn’t look weird is the anime character with a wig. I could talk to that obviously fake character, but all the ones trying too hard to look “normal” are just unsettling. Even the most recent iteration doesn’t seem practical.
Oliver doesn’t actually need to interact with the “Hospitality Robot”: he can check in on the app and use his phone as a key, and Google and Uber already solve his restaurant problem. This is a solution in search of a problem.
Even just with text, conversations can get really unsettling:
Kevin Roose’s account of his in-depth conversation with the ChatGPT-powered version of Bing was really something else. You can hear it in his own words on his podcast, Hard Fork.
https://www.nytimes.com/2023/02/17/podcasts/hard-fork-bing-ai-elon.html
His reporting got Microsoft to make a lot of changes quickly, but it raises the question: what if a vulnerable person had had this conversation instead of a reporter writing a story? The potential dangers to someone’s mental health are difficult to process.
The Alignment Problem is built into the history of generative AI
Data sets can bake all kinds of bias into a generative AI system, sometimes producing results that are flatly racist or otherwise badly skewed; a toy sketch after the two examples below illustrates how a gap in the training data becomes a gap in the model.
Feds Say Self-Driving Uber SUV Did Not Recognize Jaywalking Pedestrian In Fatal Crash - the training data didn’t include pedestrians who crossed outside of crosswalks.
Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day - the chatbot learned from people typing exactly the things you see trolls saying on Twitter and started repeating those messages back.
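To make the first failure mode concrete, here is a minimal, hypothetical sketch (it has nothing to do with Uber’s actual perception system): a toy classifier whose training data only ever shows pedestrians near crosswalks. The feature names, numbers, and model choice are all invented for illustration.

```python
# Toy sketch of a dataset coverage gap -- NOT how any real self-driving
# perception system works. All features and numbers are invented.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
n = 200

# Each row: [metres_from_nearest_crosswalk, human_shape_score]
# In the training data, every labelled pedestrian happens to be near a crosswalk.
pedestrians = np.column_stack([
    rng.uniform(0, 5, n),      # never more than 5 m from a crosswalk
    rng.uniform(0.6, 1.0, n),  # strongly human-shaped
])
other_objects = np.column_stack([
    rng.uniform(0, 100, n),    # signs, bikes, debris appear anywhere
    rng.uniform(0.0, 0.7, n),  # less human-shaped
])

X = np.vstack([pedestrians, other_objects])
y = np.array([1] * n + [0] * n)  # 1 = pedestrian, 0 = something else

model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# A jaywalker: clearly human-shaped, but 45 m from the nearest crosswalk.
jaywalker = np.array([[45.0, 0.95]])
print("P(pedestrian):", model.predict_proba(jaywalker)[0, 1])
# Prints 0.0 -- every training example this far from a crosswalk was
# "something else", so the model has no basis to call this a pedestrian.
```

A nearest-neighbour model makes the gap easy to see: its prediction is essentially “what did the closest training examples look like?”, and nothing in this training set looks like a person standing 45 metres from a crosswalk.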
In conclusion - we need to understand what happens in the black box
We need to be able to understand how the models are trained and what their limits may be. So it looks like I am going to have to read The Alignment Problem by Brian Christian. How the systems built on these data sets align (or fail to align) with human values seems like a good place to start understanding generative AI.
Until next time.
Where to Next on ai4biz?
This Wednesday I’m planning to look at how ChatGPT can be used to research video material on YouTube. See you then!

