ChatGPT by OpenAI is as good as the hype suggests. And then some. The search landscape has just been radically altered by this disruptive entrant.
While Google has lately done a better job of surfacing related questions alongside a search query, presented as a string of clickable questions with answers scraped from websites, these answers often lack context and rarely amount to an adequate answer on their own. One often has to visit the site the snippet was clipped from to understand what is being presented as the answer. Time-consuming.
With OpenAI's product, you simply type the query into the text input area and receive a tightly constructed, economically worded, and deeply impressive answer that more adequately addresses the question at hand. You also don't have to wade through a bog of advertisements occupying the first four listings, some of which may well be phishing attacks. Those who rely on a search engine daily ran out of patience years ago, and technologists are looking for any more favorable alternative; this is a huge improvement.
When you search for a particular keyword and see the number one result, you aren't really seeing the best website for that keyword string; you're seeing the webpage that best massaged search engine optimization to win the keyword traffic. A high page ranking does sometimes denote the quality of an established brand over a fly-by-night blog or an unverifiable, untrustworthy source, but, as anyone who does an abundance of searches knows, this only broadly holds. And when you're searching, you're trying to zero in on the most pertinent results for your exact query. Even search parameters like negative keywords or quotation marks for exact phrasing aren't altogether helpful if you're trying to find what you're looking for swiftly.
OpenAI's product does a more impressive job of answer retrieval. Even with their most robust engine, suited to sophisticated questions, you can run several hours' worth of queries for less than a dollar; with their least sophisticated engine, roughly 2,500 queries cost a single dollar. That is tremendous value, delivered through a clean interface that doesn't read your emails for advertising suitability, clutter your results with distracting advertisements, or collect your search data for advertising analytics. The future of search will include some AI component that meets searchers where they are and creates a conversational dynamic, akin to talking with a subject matter expert across all of the world's disciplines at once.
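The per-query economics above reduce to simple arithmetic. A minimal sketch, assuming a hypothetical price of $0.0004 per query (an illustrative figure consistent with the 2,500-queries-per-dollar claim, not a number taken from OpenAI's published price list):

```python
def queries_per_dollar(price_per_query: float) -> int:
    """Return how many queries one dollar buys at a given per-query price."""
    return round(1 / price_per_query)

# Hypothetical illustrative price, not an official figure.
cheap_engine_price = 0.0004  # dollars per query

print(queries_per_dollar(cheap_engine_price))  # 2500
```

At any assumed per-query price, the same one-line division gives the budget picture at a glance.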
It's not only the retrieval of information but also the compositional competence that is collectively blowing people's minds. Article writing is time-consuming, laborious, tedious, and potentially mind-numbing. By defining a desired article output through specific parameters, writers can dig deeper into what they are actually trying to say, rather than the semantics of how they say it.
Couple that with the fact that you can describe, in English, what code you would like written, and then debug that code with a compiler, mimicking a junior coder on call, and this essentially becomes a dynamic no-code platform: it can robustly assemble applications based on your defined specifications. That's a historically expensive proposition made virtually free. As someone who bills for coding, I can tell you that it's not cheap, and with the right prompts you can get code that would take a junior coder at least an hour or two to write and debug. That's a tremendous cost savings and a powerful way to further your business's experimentation without absorbing costs or slowing down an active brainstorming session to build a functional wireframe or application prototype.
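As an illustration of the scale of task in question, a prompt like "write a Python function that validates an email address" tends to yield a small utility of this sort. This is a hypothetical example of the kind of code such a request produces, not output copied from ChatGPT:

```python
import re

# Hypothetical example of model-generated code, not actual ChatGPT output.
# A basic pattern-based check; real-world email validation is more involved.
EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(address: str) -> bool:
    """Return True if the address matches a basic email pattern."""
    return bool(EMAIL_PATTERN.match(address))

print(is_valid_email("user@example.com"))  # True
print(is_valid_email("not-an-email"))      # False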
Further, OpenAI's model can mimic an operating system like Ubuntu, allowing you to create folders and add features to a code library that it has already produced to your bespoke specifications.
The founder of OpenAI, Sam Altman, sat down for an interview with the founder of LinkedIn, Reid Hoffman.
Cleverly, one of the commenters put the entire video transcript into ChatGPT for a summary: "Large language models will create opportunities for new businesses to challenge Google in search products and provide human-level chatbot interfaces for medical and educational services. Multimodal models and agents using language interfaces to complete tasks will also be significant trends, and successful businesses will focus on creating something unique and differentiated by fine-tuning existing large language models for specific purposes. Access to data will be key to success in the field of AI, and businesses will need to consider data privacy and security as well as how language models can be used to improve existing products and services, create new products and services, automate tasks, and provide personalized experiences for customers."
The future has arrived.
In my large lecture classes, I often assign "power-up" assignments. These are graded pass/fail (since we might have 300 submissions) and usually involve watching a documentary with a friend or family member, discussing it briefly, and then writing a short reflection (1-2 pages) about the film and the issues discussed. Students have many options, so they can pick the subjects they like best. We use Turnitin to catch those simply uploading another's work or something they found on the web.
One of the options is to watch "Coded Bias" about the use of AI and facial recognition technology (we have a day or two in class where we talk about AI, its use in CRM systems, etc.).
After turning in grades yesterday, I asked ChatGPT to: 'Write a reflection paper about the film Coded Bias' and 'Explain the problems with training data and facial recognition.' Below is the result. It is not very creative but does summarize the information well.
This result shows no plagiarism in Turnitin (granted, it might eventually once more students use this tool). If a student used this tool and added a few words about watching the film with their mother, etc., the TAs and I would have given such writing a "pass" had we seen it this semester. I plan to have a power-up assignment early in the semester in which all students use the same prompt, then see if we can get Turnitin trained on this pattern of writing. That will likely not work, since Turnitin looks not at style but at patterns of words.
Scott
-----
Coded Bias is a documentary film that explores the ways in which artificial intelligence and algorithms can perpetuate and amplify biases that exist in society. The film delves into the history of AI and how it has been used in various fields, such as policing and hiring, and the consequences that have resulted from this use.
One of the most striking aspects of the film is the way it illustrates the widespread and often unconscious biases that are present in the data that is used to train AI systems. This includes biases against certain racial and ethnic groups, as well as biases based on gender and other characteristics. The film shows how these biases can have serious consequences, such as the misidentification of innocent individuals as criminals or the unequal distribution of resources and opportunities.
The film also highlights the importance of accountability and transparency in the development and use of AI. It emphasizes the need for diverse representation in the fields of technology and AI, as well as the need for clear policies and regulations to ensure that AI is used ethically and fairly.
Overall, Coded Bias is a thought-provoking and eye-opening film that raises important questions about the role of AI in society and the potential consequences of its use. It serves as a powerful reminder of the need to be aware of and address biases in all aspects of our lives, including in the technology we use.
There are several problems with using training data to develop facial recognition systems:
Limited diversity: Many facial recognition systems are trained on data sets that are not representative of the entire population. This can lead to poorer performance for individuals who are not well represented in the data, such as people of color or those with disabilities.
Bias in the data: The data used to train facial recognition systems can contain biases that are inherent in the data collection process. For example, if the data is collected disproportionately from one racial or ethnic group, the resulting facial recognition system may be more accurate for that group and less accurate for others.
Data privacy: The use of facial recognition systems raises concerns about data privacy, as they rely on the collection and analysis of personal information. There is a risk that this information could be used for nefarious purposes or could be accessed by unauthorized parties.
Ethical considerations: The use of facial recognition technology raises ethical concerns, such as the potential for abuse of power or the erosion of privacy. It is important for the development and use of these systems to be guided by ethical principles and considerations.