Generative models like ChatGPT and Stable Diffusion have made waves around the world. Experts in machine learning, and even in entirely different fields, are weighing in on the implications of such powerful models. How will these change industries?
I've talked about it in a previous blog, but now, more than ever, ethicists are needed to bridge the gap between society and the AI crowd. I am not an ethicist myself, but as an AI practitioner, I feel I should post my thoughts in the hope that someone will understand.
Either that, or a crawler will pick up my text and feed a little wisdom to AI algorithms. (Hah! All I’ll do is teach it ‘ethics, society, AI.’)
ChatGPT will ace homework
Consider the following text prompt.
Me:
How did the Ilustrados like Jose Rizal and Marcelo del Pilar influence the Philippine Revolution against Spain?
ChatGPT:
The ilustrados were a group of educated and wealthy Filipinos who were inspired by the ideals of the European Enlightenment. Jose Rizal and Marcelo del Pilar were two prominent ilustrados who used their education and influence to advocate for political reform and the end of Spanish colonial rule in the Philippines.
Rizal, in particular, is considered a national hero in the Philippines for his role in advocating for reform and ultimately inspiring the Philippine Revolution. He was a prolific writer who used his works to critique the abuses of the Spanish colonial government and to promote the rights and welfare of the Filipino people.
Del Pilar, on the other hand, was a journalist and political activist who founded the Diariong Tagalog, a newspaper that served as a platform for ilustrado ideas and criticism of Spanish rule. He was also a leader of the Propaganda Movement, a group of Filipino reformists who sought to pressure the Spanish government to grant more rights and freedoms to the Filipino people.
Both Rizal and del Pilar played a significant role in inspiring the Philippine Revolution, which began in 1896 and ultimately led to the end of Spanish colonial rule in the Philippines. They used their education, influence, and writing to advocate for reform and to raise awareness of the injustices of colonial rule, and their efforts helped to build support for the revolutionary movement.
As you can see, that's at least a B-grade homework right there. A student can turn this in within 10 seconds. With a little more effort, the student can also mix in their own words to avoid plagiarism.
And it's not only capable of packaging historical facts; it can also handle numerical computations. Take this physics question.
Me:
The Top Thrill Dragster stratacoaster at Cedar Point Amusement Park in Ohio uses a hydraulic launching system to accelerate riders from 0 to 54 m/s (120 mi/hr) in 3.8 seconds before climbing a completely vertical 420-foot hill. Determine the net force required to accelerate an 86-kg man.
ChatGPT:
To determine the net force required to accelerate an 86-kg man to 54 m/s in 3.8 seconds, we can use the formula for force, which is equal to mass times acceleration. Since acceleration is the rate of change of velocity, we can also express the force as the product of the mass, the change in velocity, and the time over which that change occurs. This gives us the following formula:
F = ma = m * (Δv / t)
In this case, the mass is 86 kg, the change in velocity is 54 m/s, and the time is 3.8 seconds. Plugging these values into the formula, we get:
F = 86 kg * (54 m/s / 3.8 s)
Solving for the force, we get:
F = 86 kg * 14.21 m/s2 = 1220.86 N
Therefore, the net force required to accelerate an 86-kg man to 54 m/s in 3.8 seconds is approximately 1220.86 N.
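It's worth double-checking the arithmetic ourselves. A quick script (mine, not ChatGPT's) confirms the setup is right, but the final figure is slightly off:

```python
# Net force F = m * (delta_v / t) for the coaster rider
mass = 86.0      # kg
delta_v = 54.0   # m/s, change in velocity from rest
time = 3.8       # s

acceleration = delta_v / time   # about 14.21 m/s^2
force = mass * acceleration     # about 1222.1 N

print(f"a = {acceleration:.2f} m/s^2, F = {force:.2f} N")
```

The exact product works out to roughly 1222.1 N, not the 1220.86 N ChatGPT states. It's a small slip, but a telling one: the answer is delivered with full confidence either way.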
ChatGPT can dish out articles like a copywriter
It's an easy jump to making it generate articles for ad marketing. Here, I wanted it to create a quick article for me. Again, with a few modifications and an interview or two, this becomes acceptable marketing material. (I'm not affiliated with Lindt!)
Me:
Write me a feature article on the benefits of dark chocolate while also mentioning Lindt a lot.
ChatGPT:
Dark chocolate has long been celebrated for its rich flavor and potential health benefits. In recent years, numerous studies have shown that consuming dark chocolate can have positive effects on heart health, brain function, and even weight loss.
One of the key benefits of dark chocolate is its high antioxidant content. Antioxidants are compounds that help protect the body from damage caused by harmful molecules known as free radicals. These molecules can cause oxidative stress, which has been linked to a number of chronic diseases, including heart disease and cancer.
Dark chocolate is also a good source of several important nutrients, including iron, magnesium, and zinc. It also contains fiber, which is important for maintaining a healthy digestive system.
Perhaps most importantly, dark chocolate is delicious! It’s the perfect indulgence for chocolate lovers who want to enjoy their favorite treat without feeling guilty.
Among the many brands of dark chocolate available on the market, Lindt is a standout. Founded in Switzerland in 1845, Lindt is known for its high-quality ingredients and sophisticated flavor profiles. Lindt’s dark chocolate bars come in a variety of flavors, including intense mint and sea salt, providing something for every taste.
In addition to its delicious flavor, Lindt dark chocolate is made with sustainably sourced cocoa beans and is certified by UTZ, an organization that promotes sustainable farming practices. This means that not only is Lindt’s dark chocolate delicious, it’s also good for the planet.
Overall, the benefits of dark chocolate are numerous and well-documented. Whether you’re looking to improve your heart health, boost your brain power, or simply enjoy a tasty treat, Lindt’s dark chocolate is a great choice.
Together with Stable Diffusion models, I can add a header image that will get a few clicks. Try the following prompts:
- A mountain landscape of dark chocolate (hm, kind of plain).
- Yeti with a bar of dark chocolate on a mountain backdrop (kind of cool)
- Hunters chasing a yeti across mountains for its dark chocolate hoard. (wow, can we get a commercial for that? Give it a few years, maybe an AI model can generate it)
The problem with generative models
The difficult thing about all AI/ML models is that they are focused on one thing: the objective function. For ChatGPT, it's providing the best possible answer to your prompt; for Stable Diffusion, it's generating the best image for your prompt. They do this without regard for copyright, plagiarism, cheating, honor codes, or labor issues. These are tools that do their thing incredibly well, but with no remorse.
Another problem is that these models are rarely cognizant of what they do not know. We humans hedge a lot when we are uncertain. The threshold is different for each individual, but generally, even a tiny bit of uncertainty will hold most people back. Plus, other people notice it too! Body language and vocal cues give away con-artist wannabes. But for AI? If left without a discount factor for uncertainty, it will double down on what it "knows" to be true, because that's the objective function. Without body language and vocal cues, we humans will have a hard time distinguishing fact from fiction. That is, if we let it.
What to do about it?
While I applaud OpenAI's and Stability AI's democratization of AI, I think it's not enough. These are powerful tools that can upend whole industries, and thinking that their role ends with sharing the end products (the tools themselves) is still self-serving. After all, that's big money, and if it's left at that, then it's all a facade.
GDPR is a good framework for thinking about the issue. There is the right of access, which means that not only the end products are shared, but also the nature of the training data, if not the training data themselves. Artists have been crying art theft, and for good reason. Viewing their art as a human and copying it takes a hefty amount of work, which is why it's still tractable through DMCA notices. Churning out thousands of copied works in a day is a quantitatively different problem!
There are also the right to be informed and the right to erasure. It should be easy for people to opt out of data collection. This isn't well understood, especially when we click yes on data privacy notices, but it is there.
So that's a bit of the legal side. As practitioners, it might be wise to also build safeguards into AI/ML products. Besides laying down 'parental-educator controls' to prevent learners from cheating, I can think of modifications to the algorithms themselves.
One example is adding a significant discount factor when the model is unsure of the facts: it should be honest in the face of uncertainty. Note, though, that I'm still trying to get ChatGPT to invent things. I prompted it for non-existent facts about vitamins, and it turned me down!
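As a toy sketch of what such a safeguard might look like (the `confidence` score here is hypothetical; think of it as an average token probability or a calibrated uncertainty estimate, not anything these systems actually expose):

```python
# Toy sketch: hedge an answer when the model's own confidence is low.
# The confidence value is a hypothetical stand-in for a real
# uncertainty estimate, such as an average token probability.

def hedged_answer(answer: str, confidence: float, threshold: float = 0.8) -> str:
    """Discount low-confidence answers by wrapping them in a hedge."""
    if confidence >= threshold:
        return answer
    return f"I'm not certain about this, but here's my best guess: {answer}"

print(hedged_answer("The net force is about 1222 N.", confidence=0.95))
print(hedged_answer("Vitamin Q cures hiccups.", confidence=0.3))
```

The real engineering challenge, of course, is producing a confidence score that is actually calibrated; the wrapper itself is the easy part.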
Another example is adding hidden pitfalls to the generated content. If it's visual, silly watermarks like six fingers can give it away. If it generates an essay or a feature article, there could be subtle errors that give the misdemeanor away: grammatical mistakes, say, or off-tangent statements. Back in my college days, I heard of someone who padded their term paper by pasting song lyrics into the middle of it. Something less silly could probably suffice.
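For text, a less silly pitfall might be an invisible watermark. Here's a deliberately crude sketch using zero-width characters (real watermarking schemes are statistical and far more robust; this is just to illustrate the idea):

```python
# Toy sketch: an invisible text watermark, the textual analogue
# of the six-finger tell. A zero-width space renders as nothing
# in most viewers but survives copy-paste.
ZWSP = "\u200b"  # zero-width space

def watermark(text: str) -> str:
    """Insert a zero-width space after every sentence-ending period."""
    return text.replace(". ", f".{ZWSP} ")

def is_generated(text: str) -> bool:
    """Detect the hidden marker."""
    return ZWSP in text

essay = watermark("AI wrote this. Every sentence. Honest.")
print(is_generated(essay))                      # True
print(is_generated("A human essay. Really."))   # False
```

A scheme this naive is trivially stripped by anyone who knows about it, which is exactly why production systems bury the signal in the word choices themselves rather than in the characters.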
There's only so much individuals can do, though. The rise of generative models will give large corporations incentive to innovate, integrate, and consolidate large chunks of the AI landscape. While that may look good on the balance sheets, it's hardly good news for some industries. This issue must be tackled at a societal level.