Chapter 12 | The Ethics of AI-Powered Applications



Why do we need to talk about ethics in the context of AI-powered applications? Isn't it just like any other piece of software or any other machine? Aren't the rules that are already in place enough to cover us? Is this about machines taking over the world!?



Let's start with the last question. No, this is not about machines taking over the world. There are, undoubtedly, hugely interesting ethical considerations that we will have to tackle if and when we get to the point of having machines that act autonomously with artificial general intelligence. However, as we already discussed in Chapter 2, there is no immediate worry of that happening, and even if it somehow did happen, this is not the book that is going to tackle the ethical concerns raised. The issues we are concerned with here are not around machines taking over the world. We are concerned with machines behaving in ways that are not safe, whose actions cannot be explained, and whose decisions lead to people being treated unfairly without any way to rectify that.


AI-powered applications merit specific consideration because software that performs automated decision-making is materially different from other types of software. As we discussed in Chapter 3, we are dealing with software that has a certain level of self-direction in how it achieves its goals and potentially autonomy in what goals it generates. Most other software is passive, waiting for us to manipulate it in order for things to happen.

Furthermore, the level of complexity of AI software means that we need to explicitly consider how we will provide explanations for decisions and build those processes into the software itself. At this level of complexity, the path that led to a specific decision can easily be lost. This is especially true of data-driven AI techniques, where we are dealing with hundreds of thousands or millions of intermediate decision points (e.g., neurons in an artificial neural network) all leading to a single outcome.

Therefore, precisely because AI-powered software is not like other software, we have to explicitly address ethical considerations, how they can weave themselves into software, and how we uncover and deal with them. With AI we are not programming specific rules and outcomes. Instead, we are developing software with the capability to infer, and based on that inference make choices. Put simplistically, it is software that writes software. We, as the creators, are a step away from the final outcome. Since we are not programming the final outcome, we need to build safeguards to ensure it will be a desirable one.


The Consequences of Automated Decision-Making
All of that introductory reasoning may have felt a bit abstract, so let's try to make it more real with a practical example.

A subject that, thankfully, is being discussed increasingly often within technology circles is how to address the huge inequalities that exist within the workplace. Gender, religion, ethnicity, and socio-economic status all impact what job you are able to get and how you are treated and compensated once you do get it. The ways this happens are varied, with some being very explicit and some more subtle.

Here is an example of a very explicit type of discrimination that was recounted to me by an engineer living in Paris. He explained how a friend had asked, as a favor, to use his address in job applications. When asked why, the friend explained that if he used his own address the application stood a higher chance of being rejected. The friend lived in a suburb that was considered poor and rife with crime. It turns out that recruiters used the postcode as a signal to determine the applicant's socio-economic status.

Now, assume that those same companies decide that they should build an automated AI-powered tool to help do an initial sift through job applications. As we discussed in previous chapters, the way to do it would be to collect examples of job applications from the past that met the "criteria" and examples of job applications that did not. The AI team will feed all the data into a machine learning algorithm, and that algorithm will adjust its weights so as to get the "right" answer. While individual members of the team preparing the tool are not necessarily biased, or looking to codify bias, they will end up introducing bias because the data itself is biased.

The algorithm will eventually latch on to the fact that postcodes carry some weight in decision-making. These algorithms are, after all, explicitly designed to look for features that will enable them to differentiate between different types of data. Somewhere in a neural network, values will be adjusted so that postcodes from economically disadvantaged areas negatively affect the outcome of the application. The bias and inequality have now been codified—not because someone explicitly said it should be so, but because the past behaviors of human beings were used to inform the discovery of a data-driven AI-based reasoning model.
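To make this mechanism concrete, here is a minimal sketch of how a model trained on biased historical decisions reproduces that bias. Everything in it is invented for illustration: the data is synthetic, and a simple logistic regression stands in for whatever model a real recruitment tool might use.

    # A minimal, hypothetical sketch of how historical bias leaks into a model.
    # We fabricate a tiny "past hiring decisions" dataset in which applications
    # from a disadvantaged postcode were rejected more often, train a simple
    # classifier, and then inspect what it learned.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 5000

    # Features: a genuine skill score, and a flag for a disadvantaged postcode.
    skill = rng.normal(0.0, 1.0, n)
    disadvantaged_postcode = rng.integers(0, 2, n)

    # Historical labels: past recruiters rewarded skill but also penalized the
    # postcode -- this is the human bias baked into the training data.
    logits = 1.5 * skill - 1.0 * disadvantaged_postcode
    hired = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

    X = np.column_stack([skill, disadvantaged_postcode])
    model = LogisticRegression().fit(X, hired)

    # The model faithfully reproduces the bias: the postcode coefficient comes
    # out clearly negative, even though postcode says nothing about ability.
    print("skill weight:    %+.2f" % model.coef_[0][0])
    print("postcode weight: %+.2f" % model.coef_[0][1])

The point is that nobody wrote a rule about postcodes; the negative weight emerges purely from fitting the historical decisions.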

This hypothetical scenario became a very real one for Amazon in 2018. The machine learning team at Amazon had been working on building tools to help with recruitment since 2014. The CV selection tool was using 10 years' worth of data, and the team realized that it favored men over women. The algorithm simply codified what it saw in the data. The overwhelming proportion of engineers was male. "Gender must play a role," the algorithm deduced. It penalized resumes that included the word "women's," such as "women's chess club captain." It also downgraded graduates of two all-women's colleges.1 Even if the program could be corrected to compensate for these particular instances, Amazon was concerned that it would not be able to identify all the ways in which the predictions might be influenced.

You can imagine the many different scenarios in which similar biases can be introduced: using past data to inform decisions about whether someone should get a mortgage, what type of health insurance coverage one should have, whether one gets approved for a business loan or a visa application, and so the list goes on. In the workplace, what are the consequences of automating end-of-year bonus calculations, or how remuneration is awarded in general?

Even seemingly innocuous things can end up codifying and amplifying pre-existing patterns of discrimination. In 2017, a video went viral that showed how an automated soap dispenser in a hotel bathroom only worked for lighter skin tones.2 The soap dispenser used near-infrared technology to detect hand motion. Since darker skin tones absorb more light, the dispenser didn't work for them: not enough light was reflected back to activate it. It was not the intention of the designer of the soap dispenser to racially discriminate. But the model of behavior they encoded for this relatively simple decision did not take darker skin tones into account. It was a faulty model, and at no point from inception to actual installation in bathrooms was any consideration given to whether it would work for all skin tones, even though it depended on the hand's ability to reflect light.3

Now, assume you've just made a significant investment in your own organization to improve the workplace, one that included an upgrade of all the physical facilities. To great fanfare the new working space is inaugurated; big words are uttered about inclusion, well-being, and so forth. Colleagues with darker skin tones then realize that the bathrooms will not work for them. Even if people decide to approach this lightly and not feel excluded, some sense of exclusion at some level is inevitable. It reminds them of the wider injustices in everyday life, and of the lack of diversity and of consideration for diversity.

Automated decision-making will encode the bias that is in your data and the diversity that is in your teams. If there is a lot of bias and very little diversity, that will eventually come through in one form or another. As such, you need to explicitly consider these issues. In addition, you need to consider them while appreciating that the solution is not just technological. The solution, as with so many other things, is about tools, processes, and people.

In the next section we will explore some guidelines we can refer to in order to avoid some of these issues.


Guidelines for Ethical AI Systems
In order to avoid scenarios such as the ones described previously, we need to ensure that the systems that we build meet certain basic requirements and follow specific guidelines.

The first step is the hardest but the simplest. We need to recognize that this is an issue. We need to accept that automated decision-making systems can encode biases and that it is our responsibility to attempt to counter that bias. In addition, we also have to accept that if we cannot eliminate bias, perhaps the only solution is to eliminate the system itself.


1 www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
2 www.mic.com/articles/124899/the-reason-this-racist-soap-dispenser-doesn-t-work-on-black-skin.
3 This, by the way, is also why diversity can help teams design better products. Clearly, at no point from inception to installation of this soap dispenser did a dark-skinned person interact with it. There was likely nobody on the team that designed it who would pick up on the potential issue.

That last statement is actually a particularly hard one for a technologist like me to make. I am an optimist and strongly believe that we need technology in order to overcome some of the huge challenges we are faced with. At the same time, I have to accept that we have reached a level of technological progress that is perhaps out of step with our ability to ensure that technology is safe and fair. In such cases, as much as it feels like a step backward, we have to consider delaying the introduction of certain technological solutions. Unless we have a high degree of confidence that some basic guidelines are met, that may be our only choice.

Between 2018 and 2019, the European Union tasked a high-level expert group on artificial intelligence with producing Ethics Guidelines for Trustworthy AI.4 The resulting framework is a viable starting point for anyone considering the introduction of automation in the workplace. We will provide a brief overview of the results here, but it is worth delving into the complete document as well.


Trustworthy AI
Trustworthiness is considered the overarching ambition of these guidelines. In order for AI technologies to really grow, they need to be considered trustworthy, and the systems that underpin the monitoring and regulation of AI technologies need to be trusted as well. We already have models of how this can work. It is enough to think of the aviation industry—there is a well-defined set of actors from the manufacturers to the aviation companies, airports, aviation authorities, and so on, backed up by a solid set of rules and regulations. The entire system is designed to ensure that people trust flying. We need to understand, as a society, how we want the analogous AI system to be.

For trustworthy systems to exist, the EU expert group identified three pillars: AI should be lawful, ethical, and robust. We look at each in turn.


Lawful AI

First, AI should be lawful. Whatever automated decision-making process is taking place, we should ensure that it complies with all relevant legal requirements. This should go without saying. Adherence to laws and regulations is the minimum entry requirement. What specifically needs to be considered is what processes are in place to achieve this. Companies in industries that are not heavily regulated may not be accustomed to questioning the legality of the technical processes they use.


4 https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

Ethical AI

Second, AI should be ethical. Ethics is, clearly, a very complex subject and one that cannot be entirely reduced to a set of specific concepts. The expert group grounded their approach to identifying a set of principles for ethical AI in the recognition of some fundamental rights, as set out in EU Treaties and international human rights law. These are:

Respect for human dignity: Every human being has an intrinsic worth that should be respected and should not be diminished, compromised, or repressed by others, including automated systems. Humans are not objects to be manipulated, exploited, sifted, and sorted. They have an identity and cultural and physical needs that should be acknowledged, respected, and served.

Freedom of the individual: Humans should be free to make decisions for themselves. AI systems should not be looking to manipulate, deceive, coerce, or unjustifiably surveil.

Respect for democracy, justice, and the rule of law: In the same way that AI systems should respect individual freedom, they need to respect the societal processes that govern how we manage ourselves. For example, AI systems should not act in a way that undermines our ability to have trust in democratic processes and voting systems.

Equality, nondiscrimination, and solidarity: This speaks directly to the need for AI systems to avoid bias. Society, in general, has been woefully inadequate in addressing these issues. It is enough to look at something like equal pay for men and women to admit that such issues cannot be resolved simply by saying people should act lawfully and with respect for each other. As such, it is important that we reiterate the point of equality and solidarity, in addition to the ones mentioned before.

Citizens' rights: In this book we focused on how AI systems can improve life in the workplace. Similarly, AI systems can improve life for all of us as citizens, as we go about interacting with government administration at various levels. Equally, however, AI systems can make those interactions opaque and difficult. Specific attention needs to be paid to ensure that this does not happen.

Building on these rights, the expert group went on to define four ethical principles, namely:

1. Respect for human autonomy: AI systems should not look to manipulate humans in any way that reduces their autonomy. Some aspects, such as an automated system forcing a human being to do something, are "easier" to identify. Things become more challenging when we are designing systems that more subtly influence behavior. Are the goals and purposes of our system transparent, or is it trying to manipulate behavior in order to achieve a goal that is not clearly stated upfront?

2. Prevention of harm: Harm in this context is not referring simply to physical harm, which might be easier to pinpoint and justify. It also refers to mental harm, and to both individual and collective societal harm. For example, some AI tools can be incredibly resource hungry. The amount of computation required means that significant amounts of energy are expended.5 If you were to develop an AI-powered tool that required an inordinate amount of energy, are you considering that less obvious cost as something that is causing harm? It is not that different from considering what your organization does with respect to energy efficiency in general, and whether that is not only a financially sound thing to do but also an ethical principle of not causing harm to the environment. Obviously, none of these questions have easy answers. The first step is to consider them and have honest discussions about what can be done.

3. Fairness: There are no simple answers or a single definition of fairness. Basing our thinking on the rights defined earlier, however, we can say that fairness should be a core principle at the heart of the design of any AI system, since lack of fairness would, at the very least, lead to discrimination. We could also take it a step further and say that AI systems should try to improve fairness, and actively work to avoid deceiving people or impairing their freedom of choice.






5 There is an increasing recognition of how energy hungry the entire IT industry is. According to research by the Swedish Royal Institute of Technology, the internet uses 10% of the world's electricity. AI techniques only exacerbate energy demands.

4. Explicability: If a decision of an automated system is challenged, can we explain why that decision was made? This goes right to the heart of the matter. Without explicability, decisions cannot be challenged and trust will very quickly erode. Is it enough to say that the reason someone was denied a vacation request or a pay rise is that a system trained on data from past years decided it was not an appropriate course of action, without being able to point to the specific elements of that person's situation that contributed to the decision? A sketch of what pointing to those elements might look like follows this list.
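For the simplest families of models, an explanation of this kind can be read off directly. The sketch below is purely illustrative: the feature names, weights, and the vacation-request scenario are all invented, and a linear scoring model stands in for whatever a real system might use.

    # A minimal, hypothetical sketch of one form of explicability: for a linear
    # model, each feature's contribution to a single decision is simply
    # weight * value, so a specific refusal can be traced back to the inputs
    # that drove it. All names and numbers here are invented for illustration.
    import numpy as np

    feature_names = ["tenure_years", "requests_this_year", "team_coverage"]
    weights = np.array([0.4, -0.9, 0.7])   # learned by some training process
    bias = -0.2

    applicant = np.array([2.0, 3.0, 0.5])  # one specific vacation request

    # Per-feature contributions to this one decision, and the overall score.
    contributions = weights * applicant
    score = contributions.sum() + bias

    print(f"decision score: {score:+.2f} ({'approve' if score > 0 else 'deny'})")
    for name, value in sorted(zip(feature_names, contributions), key=lambda p: p[1]):
        print(f"  {name:>20}: {value:+.2f}")

Deep neural networks offer no such direct readout, which is precisely why explicability has to be designed in from the start rather than bolted on afterward.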

It is understandable if the sum of all these issues seems like an insurmountable mountain to climb. Do we really need to go into the depths of ethical debates if all we want to build is a more intelligent meeting scheduler for our company? My personal opinion is that we do need to, at the very least, consider the issues. We need to stop treating ethical considerations as a burden or an overhead.

This is about building workplaces that are fairer and more inclusive. Such workplaces also tend to lead to better outputs from everyone, which means a better overall result for an organization. This is not about fairness and inclusivity being better for the bottom line of the company, though. It is about whether you consider it a better way to be and act in society.

The more aware we are of the issues and the more questions we pose, the less likely we are to build systems that deal with people unfairly. Even an innocuous meeting scheduler has the capacity to discriminate. It might not take into account the needs of parents or people with disabilities, consistently scheduling meetings at 9 a.m. or scheduling consecutive meetings in locations that are hard to get to. A sketch of one possible safeguard follows.
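As an illustration only: one concrete safeguard is to treat each attendee's declared constraints as hard filters that candidate slots must pass before any optimization for convenience happens. The attendees, rooms, and rules below are all hypothetical.

    # A minimal, hypothetical sketch of an inclusive-scheduling safeguard:
    # candidate (hour, room) slots are filtered against every attendee's
    # declared constraints *before* any further optimization, so majority
    # convenience can never override a hard constraint.
    from dataclasses import dataclass, field

    @dataclass
    class Attendee:
        name: str
        earliest_hour: int = 9    # e.g., 10 for someone doing a school run
        accessible_rooms: set = field(default_factory=set)  # empty = any room

    def viable_slots(slots, attendees):
        """Keep only the (hour, room) slots every attendee can actually attend."""
        viable = []
        for hour, room in slots:
            ok = all(
                hour >= a.earliest_hour
                and (not a.accessible_rooms or room in a.accessible_rooms)
                for a in attendees
            )
            if ok:
                viable.append((hour, room))
        return viable

    team = [
        Attendee("Ada", earliest_hour=10),                    # school drop-off
        Attendee("Ben", accessible_rooms={"GroundFloor-1"}),  # wheelchair user
        Attendee("Caro"),
    ]
    slots = [(9, "GroundFloor-1"), (10, "Attic-3"), (11, "GroundFloor-1")]
    print(viable_slots(slots, team))  # -> [(11, 'GroundFloor-1')]

The design choice is that constraints are honored silently and uniformly; no one has to ask for special treatment meeting after meeting.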

There are no easy answers to these questions, and there is constant tension between the different principles. The EU expert group on AI set out a number of high-level requirements to help navigate this space, all leading to more robust AI.


Robust AI

Robust AI refers to our ability to build systems that are safe, secure, and reliable. Let's quickly review some of the key requirements to give a sense of the types of things that we should or could be concerning ourselves with.

Human agency and oversight: We discussed autonomy in Chapter 3 as the ability of a (software) agent to generate its own goals. The limitation on software agency is that it should not hamper the goals of a human, within appropriate context, either directly or indirectly. Oversight, on the other hand, refers to the ability for humans to influence, intervene in, and monitor an automated system.

Technical robustness and safety: Planning for when things go wrong and being able to recover or fail gracefully is a key component of any sufficiently complex system, and AI-powered applications are no different. They should be secure and resilient to attacks, and fallback plans should be in place for when failures occur. In addition, they should be reliable and accurate, and their behavior should be reproducible. Just like any solid engineering system, you need to be able to rely on them to behave consistently.

Privacy and data governance: In this post-Cambridge Analytica6 world, we are all hopefully far more aware of how important robust privacy and data governance are. Because of the reliance of AI capabilities on access to data, it is also a hotly contested issue. With the release of the GDPR regulations in Europe, many said that this would sound the death knell for AI research on the continent. Such regulations hamper access to data, which in turn reduces the speed with which AI research can be done and the final performance of those systems. At the same time, it was heartening to see voices from within large tech companies (e.g., Microsoft's CEO Satya Nadella7) accept that GDPR is ultimately a positive thing and invite wider scrutiny. Most recently, Facebook has been proactively asking governments to introduce more regulations (although not everyone is convinced of the motivations behind that). Overall, I think more people are beginning to appreciate that governance is required at all levels, and that lack of it will lead to a potentially too strong backlash against technology—a backlash that may prove far more costly than having to adhere to regulations upfront.



6 www.theguardian.com/uk-news/2019/mar/17/cambridge-analytica-year-on-lesson-in-institutional-failure-christopher-wylie.
7 www.weforum.org/agenda/2019/01/privacy-is-a-human-right-we-need-a-gdpr-for-the-world-microsoft-ceo/.

Accountability for societal and environmental well-being: Society is coming to the realization that everything that we do has an impact that is not directly visible in our profit and loss statements, and that we carry an ethical responsibility to consider that. In particular, the societal and environmental impact of the systems that we build should no longer be dismissed, and the responsibility for it cannot be offloaded somewhere else. That is one aspect of being accountable, with the other being a much more formal way of tracing accountability and putting in place ways to audit AI-powered applications.

Ethical AI Applications
To build ethical AI applications, the rights, principles, and requirements previously described need to be supported with specific techniques. There is a burgeoning community of researchers and practitioners who are working specifically in this direction.

From a technical perspective, there is research toward explainable AI, and methods are being considered to help us marshal the behavior of the immense reasoning machines and neural networks that we are building. There is also much needed interdisciplinary work to get technologists talking more closely with other professions. It is only through a more well-rounded approach, one that considers all the different dimensions of human existence, that we will be able to move forward more confidently.

From a societal perspective, governments (and we as citizens) have to identify the structures that need to be put in place in order to support trustworthy AI. We will need appropriate governance frameworks, regulatory bodies, standardization, certification, codes of conduct, and educational programs.

As we think about how to introduce AI in our workplace, we also play a role and carry a responsibility in this context. The first step is to educate ourselves and become more aware of the issues. The second step is to build these considerations into our processes and allow discussions to take place.

It is not an easy process, and it does require specific effort. However, this is the time for us to start working toward a future where the impact of the technologies we develop is much more carefully considered. If we do not invest the necessary effort in building trustworthy AI, we risk having to deal with the far more serious aftermath of disillusioned societies and people. The workplace is a large component of our lives. We, hopefully, do not want to build workplaces where people feel surveilled and controlled.

Technology can be liberating as much as it can be disempowering. It can create a fairer and more equitable society, but it can also consolidate and amplify injustice. We are already seeing how people feel marginalized by the introduction of wide-scale automation in manufacturing. The broad application of artificial intelligence techniques in every aspect of our lives will be transformative. It is up to us to ensure that that transformation is positive.

An AI-powered workplace can be a happier, more positive, more inclusive, and more equitable place to work. We will not get there, however, without carefully considering the ethical implications of what we are doing. There is no free lunch, even in a fully automated office. We need to put in the extra time and resources required to ensure that we build a better workplace for today and contribute to a better society and a healthier environment for tomorrow.

