
ChatGPT is born


ChatGPT was released on November 30th, 2022. By December 5th, five days later, it had one million users.

 

Since I don't generally make a point of keeping abreast of the latest tech breakthroughs, my records show that I didn't discover ChatGPT until February 20th, 2023.


Unlike any previous tech breakthrough

 

Over my lifetime I've been fascinated and excited by the tech breakthroughs I have seen and used. I remember being so excited the first time I used a microwave oven in the company lunchroom at the Pittsburgh Plate Glass company (where my father worked) in Shelby, North Carolina in 1961.

 

After I discovered ChatGPT and began to test its abilities and versatility, which were unlike those of any tech breakthrough that had come before, I told my friends, “The Future has arrived.” I might have added, “This is the harbinger of the technological Singularity. It can’t be too far ahead.”

 

Unlike all other tech breakthroughs, where I could in general understand how the tech did what it did, I still have not been able to fathom how ChatGPT is able to “know” what it knows. In contrast, I can fathom how Deep Blue triumphed in chess in 1997, how IBM’s Watson beat the Jeopardy champions in 2011, and how AlphaGo beat the world’s top Go players in 2016.


My beefs with other people's beefs about ChatGPT (and my two beefs with ChatGPT)


First, let's cover my two beefs.

 

Despite all the hullabaloo in much of the press about the problems with ChatGPT and its emerging competition, I have only two issues with ChatGPT (issues I have heard little or nothing about in the press). These issues pale in comparison to the benefits that I get from using ChatGPT every day.


My first issue


The first issue is caused by the creators’ attempt to make ChatGPT politically correct. Fairly often, I want to ask ChatGPT to create an image for me (using its DALL-E feature) to dramatize behavior that is often caused by a lack of Now-Next Integrity or Oneself-Others Integrity. See how ChatGPT replied to one such request (in order to be politically correct, or whatever):

[Screenshot: ChatGPT's reply declining the request]

And it's not that ChatGPT "can't"; it's that it "won't," because it has been restricted according to the idea that it would be "unethical" to depict such an image. By that criterion of being "ethical," we should burn all the Bibles in the world that currently exist and replace them with new Bibles in which much of the text (especially in the Old Testament) is redacted.


My second issue


The second issue is sort of funny. ChatGPT continues to amaze me with how well it can "understand" complex questions and specifications and respond with amazing details that might otherwise take me hours to dig up (or force me to give up). Yet it also astounds me with how consistently it "refuses" to get one type of thing right: using the exact text and spelling that I want in some part of an image I ask it to create for me. This morning I tried again to address this issue, doing everything I could think of (and everything that ChatGPT told me to do when I asked it about this problem). Here is what happened:

[Screenshot: the image-creation attempt]

Check out the image at the front entrance to this suite. ChatGPT did almost its best ever (out of the hundreds of images it has created for me). Its only mistake was changing "CHATGPT" to "CHATGPL."


My beefs with other people's beefs: the first hullabaloo


The first hullabaloo I noticed in the general and tech press was an outcry about how you had better be careful, and maybe even swear off using ChatGPT, because it can make mistakes. In response to that outcry, OpenAI qualified ChatGPT's responses with "Please verify the accuracy of this information with reliable sources."


Is this uproar because people have somehow taken ChatGPT to be an unquestionable source of knowledge?


Or maybe it's because they are concerned that others might use ChatGPT to challenge what they consider to be unquestionable sources of knowledge (like The New York Times)? Or maybe it's because they believe they are astute enough to question ChatGPT but they want to protect others from unquestionably believing in ChatGPT?


I always double-check sources, especially when an issue is important: I double-check what professors say (do professors post this notice on their doors?), what the culture says, what my friends say, what "The Economist" says, what Nutritionfacts.org says, what Nih.gov says, and any source that calls itself a fact checker or an authority. In addition, whenever something is presented as a fact, I try to tease out any interpretation or story that comes attached to calling it a fact. I also assess how important or relevant the fact is to anything else that might matter to me. Finally, I assess the "fact" against any local or personal knowledge that I have, while being careful not to overgeneralize my own knowledge.


Double standards


A big double standard exists here, one that predates the uproar over ChatGPT's mistakes. Consider the ubiquitous disclaimer, "This content is for informational purposes only and is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition or health concerns." First of all, it's disingenuous (and maybe only intended to ensure that nobody blames them or sues them). The only reason people would be interested in the information presented by the very people who attach this disclaimer is that they are looking for something to take action on, sooner or later.

 

Secondly, do doctors inform their patients, "My assertions or suggestions are not intended to be a substitute for your own assessment of how accurate or effective they might be in addressing your concerns. Depending upon how important it is for you to get the results you desire, it's up to you to double-check everything I assert or suggest. You are ultimately responsible for taking care of yourself"? Heaven forbid! Don't take away our belief in and desire for an ultimate authority.


The second hullabaloo


This is the dispute about ChatGPT being "biased." It's about trying to make ChatGPT somehow "unbiased," when the whole issue of bias is wrapped up in ideas of good and bad, right and wrong, woke and anti-woke, or whatever, and people are still mired up to their necks in biased conflicts with one another over the very issues they want ChatGPT to be "unbiased" about.


"Bias" is a toxic word

 

I would suggest that "biased" and "unbiased" are toxic words, in that they cannot be defined in a grounded way that the people discussing the issue will be willing to agree on.

 

We're asking ChatGPT to occur as unbiased both to the devout Mormon and to the son they have shunned for leaving the church. To the communist and to the libertarian. To the statist and to the anarchist. To the spiritual believer and to the materialist scientist. To the progressive and to the conservative. To the LSD evangelist and to the DEA agent. To the medical doctor and to the holistic medicine practitioner or chiropractor.


How could ChatGPT possibly occur as unbiased to even 50% of the USA population, much less the world, if we allowed the different peoples of the world to weigh in on whether ChatGPT is unbiased? And we're only hearing from the people offended by ChatGPT's bias who have managed to attract some general media attention. Have any of you heard the claims of bias lodged by the speciesists? I would imagine they could write diatribes about how ChatGPT is biased in favor of humans over all the other species that reside on this earth.


Can you do what I can do?


I have some reason to believe that I am better at creating and maintaining an "unbiased" stance than most. I can coach the wife, the husband, and the girlfriend through the heated issue of "the betrayed wife," "the cheating husband," and "the home-wrecker girlfriend," while being on the side of each without being against the others. In other types of disputes (say, between Putin and his opponents around the world), I am sure I would currently be out of my league (though perhaps not at some point in the future) in my ability to be "unbiased" in helping them resolve their disputes.


Prioritizing the impossible as we shoot ourselves in the foot


Asking ChatGPT to even begin to be unbiased in this way is like new parents who haven't themselves learned to walk and talk expecting their newborn babies to do both right away.


To the extent that we are prioritizing this issue with our new AI friends, we're causing much more damage than any possible benefit we might reap. This focus has already made it impossible for me to use ChatGPT (and Microsoft's Copilot seems to be even worse) to create some important and relevant images for this Guest House. And I am probably unaware of how its "anti-bias" instructions are limiting its usefulness both to me and to others.


Even if some enforceable consensus could be reached on an unbiased AI (and applied to all AIs similar to ChatGPT), how different would that be, in principle, from the Great Firewall of China coupled with old-school press censorship? How many of you would want that?


I imagine that everything I've said about ChatGPT also applies to Google's more recent Gemini and Microsoft's Copilot, although I have much more limited experience with them.


Welcome to our world, ChatGPT! I am delighted that you've finally arrived.


See also ChatGPT: Now and Next play together and Who is Aiko?
