The future of generative AI and its ethical implications



Generative AI is revolutionizing our experience of the Internet and the world around us. Global investment in AI has grown from $12.75 million in 2015 to $93.5 billion in 2021, and the market is expected to reach $422.37 billion by 2028.

While this perspective may give the impression that generative AI is the “silver bullet” for advancing our global society, it comes with an important footnote: the ethical implications are not yet well defined. This is a serious problem that could inhibit continued growth and expansion.

What Does Generative AI Achieve?

Most generative AI use cases provide lower-cost, higher-value solutions. For example, generative adversarial networks (GANs) are particularly well suited to advancing medical research and accelerating the discovery of new drugs.

It is also becoming clear that generative AI is the future of text, image and code generation. Tools like GPT-3 and DALL-E 2 are already widely used for AI text and image generation. They have become so good at these tasks that it is almost impossible to distinguish human-created content from AI-generated content.


The million-dollar question: what are the ethical implications of this technology?

Generative AI technology is advancing so rapidly that it is already beyond our ability to imagine future risks. We must answer critical ethical questions globally if we hope to stay ahead of the game and see sustainable market growth over the long term.

First, it is important to briefly discuss how foundation models such as GPT-3, DALL-E 2 and related tools work. They are deep learning models; in the adversarial variant (GANs), one network essentially attempts to “outperform” another by creating ever more realistic images, text and speech. Labs like OpenAI and Midjourney then train their AI on massive web-scale datasets drawn from billions of sources to produce better and more sophisticated results.
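To make the adversarial idea above concrete, here is a minimal sketch of a GAN training loop in PyTorch. All names, network sizes and the toy 1-D “real” distribution are illustrative assumptions, not details of GPT-3 or DALL-E 2 (which are not GANs); the point is only the two-player dynamic: the discriminator learns to separate real from generated samples, and the generator learns to fool it.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to a 1-D "sample"; Discriminator: scores
# how likely a sample is to be real. Both are deliberately tiny.
generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=32):
    # "Real" data: samples from N(3, 0.5), standing in for a training corpus.
    return 3.0 + 0.5 * torch.randn(n, 1)

for step in range(200):
    # 1) Train the discriminator to label real samples 1 and fakes 0.
    real = real_batch()
    fake = generator(torch.randn(32, 4)).detach()  # no generator gradients here
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to make the discriminator output 1 on fakes.
    fake = generator(torch.randn(32, 4))
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# As the contest repeats, generated samples drift toward the real distribution.
sample_mean = generator(torch.randn(1000, 4)).mean().item()
```

Each round of this contest is what pushes the generator toward more realistic output; production systems apply the same dynamic at vastly larger scale.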

There are many interesting and positive applications for these tools. But we would be remiss as a society not to recognize the exploitability and legal gray areas that this technology exposes.

For example, two important issues are currently under debate:

Should a program be able to claim results for itself, even if its output is derived from many inputs?

Although there is no universal standard for this, the situation has arisen before in legal spheres. The US Patent and Trademark Office and the European Patent Office have rejected patent applications filed by the developers of the AI system “DABUS” (who are behind the Artificial Inventor Project) because the applications cited the AI as the inventor. Both patent offices ruled that non-human inventors are not eligible for legal recognition. However, South Africa and Australia have decided that AI can be recognized as an inventor on patent applications. Additionally, New York-based artist Kris Kashtanova recently received the first US copyright for a graphic novel created with AI-generated artwork.

One side of the debate says that generative AI is essentially an instrument wielded by a human creator (like using Photoshop to create or modify an image). The other side says the rights should belong to the AI and possibly its developers. It is understandable that the developers of the most successful AI models want rights to the content those models create, but it is very unlikely that such claims will succeed in the long term.

It is also important to note that these AI models are reactive. This means that models can only “react” to, or produce results from, what is given to them. Again, this puts control in the hands of humans. Even models that need further refinement are ultimately driven by the data that humans provide to them; therefore, the AI cannot truly be an original creator.

How do we deal with the ethics of deepfakes, intellectual property, and AI-generated works that imitate specific human creators?

People can easily find themselves the target of fake videos, explicit content, and AI-generated propaganda. This raises privacy and consent concerns. There is also an imminent possibility that people will be out of work once AI can create content in their style with or without their permission.

A final problem stems from the many instances where generative AI models consistently exhibit biases inherited from the datasets they are trained on. This further complicates the ethical picture: the training data may be someone else’s intellectual property, and that person may or may not consent to their data being used for this purpose.

Adequate laws have yet to be drafted to address these issues related to AI outcomes. Generally speaking, however, if it is established that AI is only a tool, it follows that systems cannot be responsible for the work they create. After all, if Photoshop is used to create a fake pornographic image of someone without their consent, we blame the creator, not the tool.

If we consider AI to be a tool, which seems the most logical, then we cannot attribute ethics directly to the model. Instead, we need to dig deeper into the claims about the tool and the people using it. This is where the real ethical debate lies.

For example, if AI can generate a credible thesis proposal for a student based on a few inputs, is it ethical for the student to pass it off as their own original work? If someone uses a person’s likeness in a database to create a video (malicious or benign), does the person whose likeness was used have a say in what is done with that creation?

These questions only scratch the surface of the possible ethical implications that we, as a society, must address to continue to advance and refine generative AI.

Despite moral debates, generative AI has a bright and limitless future

Currently, the reuse of existing IT infrastructure is a growing trend fueling the generative AI market. It lowers barriers to entry and encourages faster, more widespread adoption of the technology. Thanks to this trend, we can expect more independent developers to produce exciting new programs and platforms, especially as tools like GitHub Copilot and Builder.ai become more widely available.

The field of machine learning is no longer exclusive. This means that more industries than ever can gain a competitive advantage by using AI to create better and more optimized workflows, analytics processes, and customer or employee support programs.

In addition to these advances, Gartner predicts that by 2025, at least 30% of all new drugs and materials discovered will come from generative AI models.

Finally, there is no doubt that content like stock images, text, and code will largely become AI-generated. In the same vein, misleading content will become harder to distinguish, so we can expect to see the development of new AI models to combat the spread of unethical or misleading content.

Generative AI is still in its infancy. There will be growing pains as the global community decides how to handle the ethical implications of the technology’s capabilities. However, with such positive potential, there is no doubt that it will continue to revolutionize the way we use the internet.

Andrew Gershfeld is a partner at Flint Capital.

Grigory Sapunov is CTO of Inten.to.

DataDecisionMakers

Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including data technicians, can share data insights and innovations.

