- Publication date: March 2019
- Technology
- Article
Founder & principal analyst at Datamente.
Applied AI often involves data privacy. For example, personalized health treatment requires our medical records and drug purchases to be captured and analyzed; credit scoring needs access to our financial history; and so on. Is this acceptable?
Are we willing to have our data processed in exchange for the value returned? There are good arguments on both sides, and that is something we, as a society, need to debate. Questions of this kind, and many others, will arise as society adopts the technology and as its complexity increases.
Trust is essential to human life. As human beings, we are programmed to seek trust relationships, and this is how our society has been built. Trust is so intrinsic to human nature that we do not even need to think about it; we just know it works. In this context, it is easy to understand the importance of trust for the economy as well. Businesses simply cannot work without trust, and there can be no trust without ethics.
There are several ways to incentivize trustworthiness: morals, reputation, institutions and security. Combined adequately, they all enable trust. Our global society has become so sophisticated that trust needs to be rethought again. The old paradigms of trust are no longer applicable. Disruptive technologies, such as artificial intelligence, need to be trustworthy to be accepted by society. Who is going to accept opaque algorithms making life-impacting decisions without understanding how they work? How can we make AI algorithms fair and non-discriminatory?
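To make that last question a little more concrete, here is a minimal sketch of one common way to quantify discrimination in a model's decisions: the demographic parity gap, the difference in positive-decision rates between groups. The loan-approval scenario, the data and the function name are invented for illustration and are not from the article.

```python
# Hypothetical illustration: measuring the "demographic parity" gap, a simple
# group-fairness check. All data and names below are invented for this sketch.

def demographic_parity_gap(decisions, groups):
    """decisions: list of 0/1 model outcomes; groups: parallel list of group labels."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    # Gap between the most- and least-favored groups.
    return max(rates.values()) - min(rates.values()), rates

# Example: loan approvals (1 = approved) for applicants from groups "A" and "B".
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(decisions, groups)
print("approval rates per group:", rates)   # {'A': 0.6, 'B': 0.4}
print("demographic parity gap:", gap)       # 0.2 -> a large gap may signal unfair treatment
```

A metric like this does not settle what "fair" means, but it shows that fairness can be made measurable and auditable rather than left as an abstract aspiration.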
At this point, I hope you are beginning to glimpse the importance of ethics with respect to artificial intelligence. This is an emerging issue, and it will become one of the notable trends in the field. Ethics can play a key role in the development of artificial intelligence: an ethical framework can facilitate the healthy development of the technology by establishing limits between what is acceptable and what is not in order to guarantee human wellbeing.
But what kind of ethics? Individual ethics? Collective ethics? Utilitarian ethics?
This article is a reflection on the benefits and risks of the generalization of AI. It is not meant to draw conclusions but to raise questions and create awareness about the responsible use of the technology.
What is Artificial Intelligence?
John McCarthy was one of the pioneers of artificial intelligence and the one who coined the term. There are many definitions out there and also a lot of misconceptions about the buzzword, so I thought it was a good idea to paraphrase him to define AI.
AI is the branch of computer science that makes intelligent machines, especially intelligent software, and intelligence is the computational part of the ability to achieve goals in the world. Research in AI has focused mainly on specific aspects of intelligence such as learning, perception, and language. As a consequence, machine learning, deep learning, computer vision, speech recognition, natural language processing and so on have experienced huge developments lately.
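To make the narrow sense of "learning" concrete, here is a toy sketch of my own (not from McCarthy or the article): a minimal perceptron that learns to separate two clusters of points from labeled examples, which is learning in the limited sense of adjusting parameters to achieve one specific goal.

```python
# Toy illustration: "learning" as adjusting parameters to achieve a narrow goal,
# here separating two clusters of 2-D points with a minimal perceptron.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of (x1, x2) points; labels: +1 or -1."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            if y * (w1 * x1 + w2 * x2 + b) <= 0:   # misclassified -> nudge the boundary
                w1 += lr * y * x1
                w2 += lr * y * x2
                b += lr * y
    return w1, w2, b

# Two linearly separable clusters: the model learns the boundary from examples,
# but that is all it can do -- it cannot transfer this skill to any other task.
data   = [(1, 1), (2, 1.5), (1.5, 2), (-1, -1), (-2, -1.5), (-1.5, -2)]
labels = [1, 1, 1, -1, -1, -1]
w1, w2, b = train_perceptron(data, labels)
print("learned boundary:", w1, w2, b)
print("prediction for (2, 2):", 1 if w1 * 2 + w2 * 2 + b > 0 else -1)
```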
State-of-the-art applications are collectively known as weak AI. Do not be misled by the name: current AI applications are very powerful and can have serious implications. The reason they are called weak is that contemporary intelligent machines are purpose-built. Deep Blue could beat the best chess player in the world, but it could not play the simplest card game; its intelligence could not be applied to a different task or problem.
Conversely, strong AI or artificial general intelligence (AGI) is the capability of an intelligent system to find a solution for a task it has never seen before. This is a feature of human intelligence. Intelligent systems can perform better than us in many specific tasks, but they cannot generalize their intelligence to solve other problems.
Though AGI is still far from realization, and the general consensus is that new breakthroughs and fundamental ideas are required to achieve parity with this aspect of human intelligence, the prospects for the evolution of the technology are enormous. As one of the industry's gurus says, "AI is the new electricity".
AI is not only powerful; it is also going to be ubiquitous and will profoundly change our society. But first we need to get it right so that the transformation is for the better. As we will see next, not everything is so bright.
Malicious uses of AI
Technology is neutral; it is not good or evil in itself. What really matters is how we use it. AI is one of those technologies that is very controversial because of the huge impact it may have on any aspect of our lives. As an example, it can corrupt the very foundation of our information society. Freedom of information is a universal right recognized in international law, and fake news is a direct attack on it. This is a much bigger problem than censorship: if we can no longer distinguish truth from falsehood, we are heading towards a very dangerous situation.
But that is not the only serious concern. AI has already reached a point of sophistication that poses a serious threat to our society. Unfortunately, there are many notorious cases to back this assertion: deepfakes, the Brexit campaign, the US elections, lip-synced fake videos, and so on. Besides these infamous cases, even legitimate uses of the technology, such as self-driving vehicles, carry risks that open up many questions.
How can we overcome this threat?
Governments and the industry must act quickly to guarantee that AI is used for the common good. The threat is enormous if we do not get AI right. But regulation is dangerous too: too much of it may hold back progress. AI is a very valuable technology that can bring a lot of prosperity to our society. We need to make it trustworthy, and ethics may play a paramount role in this sense. Our society needs to rethink the ethics applicable to this field to ensure a responsible use of the technology.
There are some collaborative initiatives in this area. The EU is about to publish a code of conduct for responsible AI, but we need more. As AI plays an increasing social role with life-impacting outcomes, a global consensus will be necessary. The UN and other global institutions should take an active role too.
Ethics in AI
This is a constantly evolving field. As on previous occasions, fiction addressed this issue earlier than science. In 1950, Asimov published I, Robot, a book that included the Three Laws of Robotics. Asimov's Laws, as they are also known, may be regarded as the first approach to ethics in artificial intelligence.
Asimov spent time testing the limits of his laws. His own work suggests that the three laws are not sufficient to address unexpected scenarios. Current AI systems already pose some moral dilemmas that are not present in the design of other systems.
Furthermore, the evolution of the technology will bring additional challenges such as the unpredictability of usage contexts for future general intelligent systems. AGI systems will no longer run in predictable and well-known use cases. How can we ensure safety in this context?
Machine ethics is a branch of AI ethics concerned with the moral behavior of intelligent machines. It should not be confused with roboethics or computer ethics, which deal with the moral behavior of humans.
It is not the first time that human beings have dealt with dilemmas of this kind. I trust that our society realizes what is at stake and works actively and collaboratively to set a framework that enables ethical AI. All relevant institutions should be involved and aligned towards a common goal.
Again, the intention of this reflection is to create awareness of the need to address these issues from an ethics perspective before applications with such serious impacts can be trusted.
Further reading:
- Blockchain & trust
- The danger of fake news
- Moral dilemmas of the driverless car
- Top 9 ethical issues in AI
- The ethics of artificial intelligence
"