Researchers Raise Concerns Over OpenAI's Secrecy Around GPT-4
OpenAI has unveiled GPT-4, a new version of the large language model that powers ChatGPT. GPT-4 can handle images as well as text, and has shown the ability to produce human-like text and generate images and computer code from prompts. However, the technology is available only to paid ChatGPT subscribers, and there is frustration in the scientific community over OpenAI's secrecy about how and on what data the model was trained, and how it actually works. Researchers say this makes the model less useful for research and raises concerns about its safety. OpenAI has improved safety in GPT-4, but the model can still output false information, a behavior known as "hallucinating." Researchers argue that without access to the training data, it is impossible to improve the model or remedy any bias it may contain. Scientists say there is an urgent need to develop guidelines governing how AI tools such as GPT-4 are used and developed, and to ensure that any legislation around AI technologies keeps pace with the rate of development.
OpenAI, for its part, says it is working to make its model more transparent and is sharing more information than ever before about how it was developed. Sam Bowman, director of research at OpenAI, said the company is "striving towards transparency" and that he personally would like to see the model released in an open-source format at some point.
Bowman says that while OpenAI is committed to openness and transparency, the company also has to balance the need for safety and security against the scientific community's desire for access. He says the company's priority is to ensure that the technology is used responsibly.
“We want to make sure that the technology we’re building doesn’t have unintended consequences, that it’s not going to be used to cause harm,” Bowman said. “We want to make sure that people who are using the technology are going to use it in ways that are safe and beneficial.”
OpenAI has previously been criticized for not being transparent enough about its models. In 2019, the company said it would not release the full version of its GPT-2 model because of concerns that it could be used for malicious purposes. However, it later changed course and released the model in stages.
OpenAI's release of GPT-4 has once again raised concerns about the safety and ethics of AI technology, particularly large language models. Some experts argue that these models are too powerful and could be used for nefarious purposes, such as generating fake news or deepfakes. Others worry that the technology could be used to automate jobs and lead to widespread unemployment.
Despite these concerns, GPT-4 represents a major advance in AI technology. Its ability to handle images and generate code from prompts has the potential to transform a wide range of fields, from science and engineering to art and design. However, it remains to be seen how OpenAI will balance the need for safety and security against the scientific community's desire for access to this powerful technology.
As GPT-4 becomes more widely available, it will be important for researchers and policymakers to work together to ensure that the technology is used responsibly and for the public good. This will require a careful balance between openness and transparency on the one hand, and safety and security on the other.
In the meantime, scientists and researchers are continuing to explore the potential of GPT-4 and other large language models, and are pushing for greater transparency and access to the underlying code and training data. Only through collaboration and dialogue can we hope to ensure that these powerful technologies benefit society as a whole, rather than being put to harmful or unethical uses.