Chapter 2
What Is AI?
Artificial intelligence (AI) is notoriously hard to define, and this has been both a boon and a curse. The broadness of the term has allowed a very wide and disparate set of techniques to inhabit the same space, from data-intensive machine learning techniques such as neural networks to model-based deduction logics, and from the incorporation of techniques from statistics to the use of psychological models of the mind. At the same time, all these attempts to emulate existing forms of intelligence or create new ones have allowed debates to flourish about what actual intelligence is. While in some contexts these debates are useful, they can also be distracting and confusing. They can lead people to set expectations or express concerns about AI that are not founded in what the technology is currently capable of, or what we can confidently say it will be capable of, but rather in extrapolations based more on beliefs and hunches.
In this chapter we are going to focus on a more practical interpretation of AI. To set the scene, we start by briefly looking at one of the biggest traps of defining AI, what is often referred to as its magical “disappearing act.” We then distinguish between general AI and domain-specific AI, and move on to provide a way of thinking about domain-specific AI that focuses on the what without concerning itself with the how. By focusing on outcomes and observable behavior, we can largely sidestep the problem of having to use a term as fluid as intelligence, and we will certainly not be trying to pin it to a definition. Instead, we ground our thinking in how our organizational processes are affected by the introduction of software that has been delegated some aspect of decision-making and, by consequence, the automation of action. All of this provides the necessary groundwork to support the next chapter on building AI-powered applications.
AI’s Disappearing Act
Based on what we saw about the history of AI in the previous chapter, it is reasonable to say that “artificial” in artificial intelligence refers to the fact that the “intelligence” is the result of purposeful human effort, rather than natural evolution or some form of godly intervention. It is “artificial” only because we generally thought of intelligence as something that came from nature, whereas with AI we somehow trick nature and create it ourselves.
I like to think of it as very similar to synthetic elements in the periodic table. If you recall from your chemistry classes, synthetic elements are those that have been created artificially, that is, by humans. You generally will not find Einsteinium or Fermium lying around. What is important though, both for these synthetic elements and for AI, is that once created they are no less or more real than the elements (or the intelligence) that can be found in nature. In other words, artificial refers to the process of arriving at intelligence, not the final result.
What is intelligence then? This is about as open-ended a question as you can get, and we make things worse for ourselves by constantly moving the target. Every time we build something that can do what wasn’t possible before, from the simple calculator to beating chess grandmasters, or from winning at Jeopardy! to detecting cancer, that intractable problem we just made tractable gets declassified. The typical dialog with the person claiming that something is not real intelligence broadly follows these lines:
• It looked like it required intelligence to be solved, but I guess it doesn’t.
• Wait, what? So how is it solved now, if not by AI?
• Well, it uses lots of computing and lots of algorithms, doesn’t it? There is nothing really smart or magical about it.
• …
The minute the trick is revealed, the spell is broken and it is magic no more. It’s sleight of hand, an elaborate trick, a more sophisticated version of a singing bird automaton.
Perhaps this is a result of the fact that we don’t actually know where and how we get our own intelligence, combined with an innate sense of threat at anything claiming that it might be intelligent like us. After all, we are quite used
to ruling the roost on this planet. Our own search for intelligence is the only thing that feels like it might threaten it.1 When we manage to solve problems we “thought” required intelligence, we demystify them, and that makes them less interesting. Until such time as we have demystified the entire process, we can always retreat to higher ground and claim superiority.
For the day-to-day task of solving real problems, however, such discussions tend to distract and discourage. It ultimately doesn’t matter whether people call it intelligence or not. In the most banal of ways, it is the journey to get to the solution that matters. That journey takes us through the wide toolset that AI offers, and allows us to discover what techniques will work for our case. Whether we’ve solved a problem using “real” intelligence or not is a debate for existentialists. The fact on the ground is that we now have a tool that does something useful.
It is this practical view of AI that we will develop further in this chapter. Before we do so, however, we are going to distinguish between the search for general AI, which is definitely an existentialist pursuit, and domain-specific AI.
General AI vs. Domain-Specific AI
As we move toward a practical understanding of AI, so that we can use it to reason about how best to exploit it, it’s useful to start by distinguishing between artificial general intelligence (or strong AI) and domain-specific AI (or weak AI).
Strong AI refers to the effort to create machines that are able to tackle any problem by applying their skills. Just like humans, they can examine a situation and make best use of the resources at hand to achieve their objectives.
Take a moment to consider what that means. Say the objective is to prepare a cup of coffee. Think of doing that in a stranger’s house. You are let in, and then you scan the rooms trying to figure out where the kitchen might be. Years of prior experience tell you it is probably on the ground floor and toward the back of the house. At any rate, you can recognize it when you see it, right? You then look around for a coffee machine. Does it take ground coffee, beans, or soluble? Is it an Italian-style coffee maker, or a French coffee press? Where is the water? Where do they keep cups? Spoons? Sugar? Cream? Milk? We breeze through all these problems without a second thought.
1 Note that I said “feels like it might threaten it.” There is an intense debate about the risks AI poses to humanity from an existential point of view. That is one debate that this book will not try to tackle. However, I do urge everyone to consider the near-term risks stemming from the misuse of current-day technology (a risk I consider urgent and present), as well as to investigate carefully whether there are any real long-term risks from sentient software taking over (risks I consider more of a thought exercise than, in any form, real).
Now imagine having to build a machine that will do this. It would require navigation skills, machine vision skills, dexterity to handle different types of objects, and a library’s worth of rules on how coffee is made and how homes are structured. You are probably beginning to appreciate the challenges strong AI faces.
The coffee-making scenario is a challenge that Steve Wozniak (yes, Apple’s Steve Wozniak) devised as a test for a strong AI machine. Just like the Turing test we mentioned in the previous chapter, it is a way to verify whether anything close to human-level intelligence has been reached. The catch is that even if you do build a machine that is able to enter any house and prepare a cup of coffee (which, incidentally, we are nowhere close to achieving), it will fail woefully the minute you ask it to change a light bulb. In fact, many argue that even this test is not a sufficiently good test of strong AI. A crucial skill of general intelligence is the ability to transfer knowledge from one domain to another, something that humans seem uniquely capable of doing. This quote from “The Ethics of Artificial Intelligence” captures this nicely for me.
A bee exhibits competence at building hives; a beaver exhibits competence at building dams; but a bee doesn’t build [a] dam, and a beaver can’t learn to build a hive. A human, watching, can learn to do both; but that is a unique ability among biological lifeforms.2
Strong AI is trying to tackle questions that go straight to the core of who we are as humans. Needless to say, if we ever do build machines that are so capable, we will have a very interesting further set of questions to answer.
As such, the debate is fascinating from a philosophical, political, and social perspective. From a scientific perspective the search is certainly worthwhile. From a “how can AI help me get the work I have in front of me today done?” perspective, strong AI is not where we need to be focusing.
We will, instead, focus on weak or narrow AI. This is AI that tries to build machines that are able to solve problems in well-defined domains. Similar to the expert systems of the early 1980s with their “few” thousand rules, the goal of these AI machines is to solve delimited problems and demonstrate their value early and clearly. What differs from the 1980s is that we now have the computing power, data, and techniques to build systems that can solve problems without us having to explicitly articulate all the rules.
2 Nick Bostrom and Eliezer Yudkowsky, “The Ethics of Artificial Intelligence,” in The Cambridge Handbook of Artificial Intelligence (Cambridge, UK: Cambridge University Press, 2014), pp. 316–334.
In addition, instead of diving straight into technologies and providing taxonomies of the different types of machine learning or symbolic reasoning approaches, we are going to take a different path. Since intelligence is such a hard thing to pin down, we are going to look at the qualities or behaviors a system displays and use those to understand it. We are going to draw on ideas from agent-oriented computing, a field of AI that occupies itself with building software wherein the core unit of abstraction is an agent. We will explore what sorts of behaviors our agents (i.e., our software) can have, and how these behaviors combine to produce increasingly sophisticated software.
An Agent-Based View of AI
Perspective has a powerful capability to shape how you understand a problem. Approaching AI from an agent-oriented view liberates us from many of the definitional challenges that AI presents, while providing us with a solid conceptual framework to guide us throughout.
A simple way to describe agent-based software engineering is that it is a combination of software engineering and AI. It studies how AI practitioners have been building software and attempts to identify common traits and patterns to inform the practice of building AI applications.
It concerns itself with how intelligent programs (agents) can be structured and provides ways to model and reason about the behavior of single agents and the interactions between agents. Although we are not dealing specifically with software engineering in this book, the concepts and abstractions help us understand any AI technology and, crucially, the impact it can have on our processes.
A key reason for taking an agent-based view is that we can consider what the agent is doing without having to consider how it achieves it. In other words, we don’t need to wonder about what specific AI technique the agent is using to achieve its task. Too often, discussions get lost in the details of what neural network or what statistical technique or what symbolic logics or, worse still, what programming language is being used and whether that counts as AI or not. This allows camps to be formed about what is “true” AI and what is not. Very often in these cases, “true” AI is whatever technique the person claiming truth is most fond of or familiar with, and everything else is somehow lesser.
■ An agent-based view of AI applications allows us to consider what the application is doing, without having to concern ourselves with how it is achieving it.
While we will look at some of the basics of these approaches, in general we shouldn’t care how the problem is solved. Technologies evolve and ways to solve problems change. If you follow AI developments, even tangentially, you soon realize that every day, week, and month brings forth new announcements and new amazing architectures. Trying to keep track of it all, unless you are a practitioner in that specific subfield, is a losing game.
We should always distinguish the what from the how and focus on the important aspects for us. If you are researching neural net architectures, solving a problem with neural nets is the important aspect. If you simply want to know if there is a cat in a picture, the most important aspect is that you get reliable answers. How you get there is secondary.
The other reason an agent-based view helps is that we don’t need to actually define intelligence. As we’ve mentioned a few times, that is not an easy task anyway. If every time we talk about the behavior of a system we get distracted by discussions about whether it is really intelligent or not, we will not get anywhere anytime soon. We will, instead, focus on what agents are trying to achieve and their interaction with the rest of their environment (including other agents).
Agents
One of the classic textbooks on AI3 describes agents as “anything that can be viewed as perceiving its environment through sensors and acting on the environment through effectors.” It helps if you think of it as a physical robot (Figure 2-1). A robot will use sensors (cameras, GPS, etc.) to figure out where it is and what is around it, and based on that it will cause its motors (its effectors) to act in a way that will take it closer to where it needs to be. Crucially, how the robot decided to go left or right is not important at this point. The internal process that led from sensing to acting does not need to be known to describe what the robot is attempting to achieve.
3 Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (Pearson, 2010).
Figure 2-1. Agent as a robot receiving inputs and effecting change
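To make the sense/act loop concrete, here is a minimal sketch of the robot agent from Figure 2-1, written in Python. The Agent interface, the Robot class, and the obstacle-avoidance rule are purely illustrative assumptions, not a real robot API; the point is that an outside observer only sees percepts going in and actions coming out, never the decision process in between.

```python
# A minimal sketch of the sense/act agent abstraction (illustrative only).
from abc import ABC, abstractmethod


class Agent(ABC):
    """Anything that perceives its environment and acts on it."""

    @abstractmethod
    def sense(self, environment: dict) -> dict:
        """Read whatever the agent's sensors can observe."""

    @abstractmethod
    def act(self, percept: dict) -> str:
        """Choose an action; how it is chosen is deliberately hidden."""


class Robot(Agent):
    def sense(self, environment: dict) -> dict:
        # e.g., camera and GPS readings bundled into a percept
        return {"obstacle_ahead": environment.get("obstacle_ahead", False)}

    def act(self, percept: dict) -> str:
        # The internal decision process is irrelevant to the outside observer;
        # only the resulting behavior (turn left / move forward) matters.
        return "turn_left" if percept["obstacle_ahead"] else "move_forward"


robot = Robot()
print(robot.act(robot.sense({"obstacle_ahead": True})))  # -> turn_left
```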
Of course, the robot is moving in a certain way because it is trying to achieve something. There is some desirable state of affairs that it is trying to bring about, such as exit a room or transport an object. We call this desirable state of affairs its goal.
■ A goal is a desirable state of affairs in the environment—a state that we can describe using attributes of that environment.
Goals are crucial in defining agency. They are the why that drives what we are doing. Why did the robot turn left? Well, it is trying to get to an object that is to its left, and so on. As such, the definition of an agent for our purposes is that an agent is something that is attempting to achieve a goal through its capabilities to effect change in the environment it is in.
■ An agent is something that is attempting to achieve a goal through its capabilities to effect change in the environment it is in.
The originators of this agent-based perspective, Professors Michael Luck and Mark d’Inverno, give a conceptual and fun example to illustrate this viewpoint. In a paper titled “A Formal Framework for Agency and Autonomy,”4 they use
4 Michael Luck and Mark d’Inverno, “A Formal Framework for Agency and Autonomy” in Proceedings of the First International Conference on Multi-Agent Systems, eds. Victor Lesser and Les Gasser (Cambridge, MA: MIT Press, 1995) pp. 254–260.
the example of a simple coffee cup as something that can be an agent. A cup, they propose, can be considered an agent in certain circumstances where we can ascribe it a specific purpose (i.e., a goal). For example, when the cup is being used to hold liquid, it is achieving our goal of having somewhere to keep our coffee. Once it stops serving that purpose, it stops being an agent for us.
I was very attracted by this viewpoint of agency as I was starting out my own PhD research, precisely because it provided a solid foothold and avoided vague definitions. As a result, I further delved into their original framework in an attempt to construct a methodology that would help us not only describe agent systems but also build them.
In my work, I extended the overall framework to provide a few more categories that would allow us to more easily describe software as agents. The coffee cup is an example of what I called a passive agent. It is passive because it has no internal representation of the goal it is attempting to achieve. In fact, it is the owner of the coffee cup who recognizes that the coffee cup can play a useful role because of its capability to hold liquid in one place, and it is the owner of the coffee cup who knows how to manipulate the coffee cup (pour liquid in, hold it upright, and raise it to take a sip).
■ A passive agent has no internal representation of a goal. It is left entirely to the user to understand how to manipulate the passive agent’s capabilities in order to achieve a goal.
Passive agents are our baseline. They are the software that doesn’t take any initiative and doesn’t do anything unless we explicitly start manipulating it. While this may sound like a purely theoretical concept with little practical application, it actually describes the majority of software we use right now. Most applications are not actively aware of what we are trying to achieve. They provide us their myriad menus and submenus, buttons, and forms to fill out, and it is left to us to understand how we should manipulate them to achieve our goals. Setting this baseline and having it as a starting point that describes most software today gives us something to build on to describe software that is powered by AI techniques.
Active Agents
Now, let us take it a step further. If we have passive agents, it means that we must also have active agents. Active agents are those that do have an internal representation of a desirable state of affairs. They are actively trying to achieve something. The simplest such agent could be a thermostat. A thermostat can sense the temperature of a room (perceives the environment) and switches
off the heating when the desired temperature is reached (causes a change to the environment).
■ An active agent has an internal representation of a goal and uses its capabilities to achieve that goal.
Active agents put us on the path of the type of AI we are interested in, where we are delegating decision-making to machines. In this case, we are letting the thermostat decide when to switch the heating on or off in order to get to a desired temperature.
“Hold on there,” I hear you say. “A thermostat is AI? I thought AI is about hard problems!?” That is a very sensible reaction. Remember, however, that we are trying to build a framework that will help us deal with all sorts of situations without any vague assumptions of “complexity.” Do not get hung up on “hard problems” or “complicated situations.” Everything lies on a continuum, and we will get there in time. The important thing about the thermostat is that it perceives its environment and reacts to changes in the environment by switching the heating on and off. What helps it decide exactly when to react may be a dumb switch (“if room temperature is greater than 25 Celsius, switch the heating off”) or it could be the most finely tuned neural network model that has learned the percentage by which it should increase or decrease heat output so as to minimize energy usage while optimizing comfort. Remember, we don’t care about the how, just the what.
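As a rough illustration, and assuming nothing more than the “dumb switch” rule quoted above, the thermostat as an active agent might be sketched as follows. The class name and threshold are hypothetical; a learned model could replace the if-statement without changing what the agent is for.

```python
# A minimal sketch of the thermostat as an active agent: it holds an explicit
# goal (a target temperature) and decides on its own when to act.
class ThermostatAgent:
    def __init__(self, target_celsius: float = 25.0):
        self.goal = target_celsius  # explicit internal representation of the goal

    def step(self, room_temperature: float) -> str:
        # The "how" could equally be a learned model; the observable
        # behavior (heating on/off) is what matters to the process.
        if room_temperature > self.goal:
            return "heating_off"
        return "heating_on"


thermostat = ThermostatAgent(target_celsius=25.0)
print(thermostat.step(26.5))  # -> heating_off
print(thermostat.step(21.0))  # -> heating_on
```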
From a business process perspective, the most valuable thing is that the task of managing the temperature has been delegated to an automated process. It is no longer required that a human being keeps checking on the temperature and, based on that, deciding whether the heating should be switched on or off or the heat output increased or decreased.
■ From a business perspective, active agents or active software indicate tools that we can delegate decision-making tasks to.
This is perhaps the single most important lesson of this chapter. Deciding to use AI in a work environment is deciding to delegate a decision-making task to a software program. At its core it is no different from the decision most companies make without blinking about delegating key aspects of financial forecasting to spreadsheets or key aspects of resource planning to ERP (enterprise resource planning) software. In plain words, AI is a way to achieve automation. Where AI differs from a lot of existing software is that it broadens
the scope of the type of tasks we can automate. It does that because AI technologies allow us to build software programs that are increasingly better at identifying what the correct decisions are in scenarios where it was not previously possible.
■ The introduction of AI in a work environment is the process of identifying what decisions can be delegated to a software agent.
When we introduce AI in an environment, we introduce software agents that make decisions in an automated fashion. In this section we talked about the thermostat acting as an active agent and how it could either be a very simple device or a more complex system. The agent is moving toward the same goal (regulate temperature) but with differing capabilities. We call the ability of an agent to vary how to employ its capabilities to achieve a specific goal self-direction.
Self-Directed Agents
To better understand self-direction, let us consider the notification system on our phones. You can conceptualize it as a software agent whose task is to receive a notification (input from the environment) and pass that message on to you (effecting change in the environment that achieves its goal).
A simple version of this notifier agent is one where every time a message reaches your phone (for any application on your phone), it simply gets displayed on the screen. That is it. I am sure most would agree that as far as AI goes, this is definitely on the lower end of the spectrum.
Now consider an agent that, when the message arrives at your device, refers to your notification preferences and considers your current context (e.g., is it past a certain time, are you at a certain location, what type of message is it, who is sending the message?). Based on all of that, the agent decides the most appropriate course of action (e.g., show on screen, play a sound, show the message on a different device like a watch). Hopefully, you agree that there are plenty of opportunities for “intelligent” action in this scenario. There is an active consideration of the situation, which can lead to a number of different outcomes.
Both agents serve the same purpose: namely, to notify the user of messages. Some agents, however, may perform the same action irrespective of current or past behavior, user preferences, or context. Other agents make a number of decisions and have a number of choices as to how to achieve their goal. The software program stops being a dumb pipe and turns into an active participant in the process. That is why we call this ability to vary actions and outputs based on internal decision-making self-direction.
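A toy sketch of the two notifiers might look like this. The context fields (hour, location, VIP senders) are invented for illustration; the contrast to notice is that the self-directed version chooses between several ways of achieving the same notification goal.

```python
# Hedged sketch of the simple vs. self-directed notifier agents described above.
def simple_notifier(message: dict) -> str:
    # Dumb pipe: every message is displayed, no decision-making.
    return "show_on_screen"


def self_directed_notifier(message: dict, context: dict) -> str:
    # The agent varies how it achieves the same goal (notify the user)
    # based on preferences and context.
    if context["hour"] >= 22 and message["sender"] not in context["vip_senders"]:
        return "hold_until_morning"
    if context["user_location"] == "meeting_room":
        return "silent_badge_on_watch"
    return "show_on_screen_with_sound"


context = {"hour": 23, "user_location": "home", "vip_senders": {"boss"}}
print(simple_notifier({"sender": "newsletter"}))                 # -> show_on_screen
print(self_directed_notifier({"sender": "newsletter"}, context)) # -> hold_until_morning
```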
■ Self-direction is the ability of an agent to vary the ways in which it achieves its goals.
The more self-direction we require of an agent, the more AI techniques we will need to employ to achieve it. We need the ability to reason about the world, review potentially large amounts of data, and decide which action to perform based on all of that.
Even then, it is not a simple on/off switch. Now you have AI; now you don’t. Everything lies on a continuum, as Figure 2-2 illustrates. The more complexity we introduce into this decision-making process, the more contextual and historical information we need to take into account, and the more AI techniques we will have to use.
Figure 2-2. The self-direction continuum
What is essential at this point is to appreciate that from a certain high-level process perspective, it is not a question of differing levels of complexity. From a process perspective, the task has simply been delegated to a software agent. There is a piece of code that is responsible for notifying us when a message arrives. We have delegated that decision-making to software, either through explicit rules and our preferences or through more probabilistic reasoning based on context and past behavior. The question then becomes what level of self-direction is necessary from this software in order to perform its task efficiently and usefully for us.
■ AI is the process of identifying and coding decision-making skills into software programs so that they can effectively carry out the tasks that have been delegated to them.
Autonomous Agents
The notifier agent we have been describing so far, at any level of self-direction, can only operate within the bounds of a very specific goal: to decide when and how to display a message to the user.
There is another level of agency that is useful to consider: one where the agents do not just operate to achieve well-defined goals but can actually generate their own goals.
Imagine your workplace has just been outfitted with the latest “intelligent” office energy management system. This system has a target of ensuring you don’t spend more than 100 “units” of energy per week and that the occupants of the workplace get the maximum amount of comfort out of those energy units. To achieve this target, it references all the available data, preferences, and rules around what constitutes efficient energy use and comfort and begins taking action. It begins formulating specific goals (desirable environmental states) that it wishes to achieve.
For example, it may decide that it should switch off certain devices because they seem to have been forgotten—switched on but not actively being used. It may also decide to just ever so slightly drop the office temperature so as to conserve energy. These are different goals that stem from its attempt to meet its higher level target. This is software with its “own” agenda and goals that is using whatever capabilities it has in order to fulfil that agenda.
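As a sketch of what this goal generation could look like, assuming the hypothetical energy budget and device states described above, the higher-level motivation (stay within the weekly budget) is fixed, while the concrete goals the agent produces vary with the situation. The class, device names, and thresholds are all made up for illustration.

```python
# Minimal sketch of autonomy: a motivation generates concrete goals.
class EnergyManagerAgent:
    def __init__(self, weekly_budget_units: float = 100.0):
        self.motivation = weekly_budget_units  # higher-order target, not a single goal

    def generate_goals(self, units_used: float, devices: dict) -> list:
        goals = []
        if units_used > 0.8 * self.motivation:
            # The motivation drives the creation of new, concrete goals.
            goals.append("lower_office_temperature_by_1_degree")
            goals.extend(
                f"switch_off:{name}"
                for name, state in devices.items()
                if state["on"] and not state["in_use"]
            )
        return goals


agent = EnergyManagerAgent()
devices = {
    "projector": {"on": True, "in_use": False},
    "printer": {"on": True, "in_use": True},
}
print(agent.generate_goals(units_used=85, devices=devices))
# -> ['lower_office_temperature_by_1_degree', 'switch_off:projector']
```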
Let us look at another example. Suppose you have a “wellness” agent whose target is to ensure that all members of a team get a chance to participate in company activities. This wellness agent is given certain capabilities, such as access to and the ability to reason about people’s diaries, or the ability to map out relationships based on interactions through e-mails or chat software. Using that information, it can then decide to act based on its findings. It will have to make decisions such as:
“Do I move that project review meeting and affect the schedules of five people so that Alexis can join a yoga session, or do I have Alexis stay past a certain time in the office to do yoga?”
These are different goals driven by a higher level target. We call these higher level targets motivations, and the ability of agents to pick a goal in order to satisfy their motivations autonomy.
■ Autonomous agents generate or choose between different goals, using higher order motivations.
Autonomy describes an agent’s ability to vary its decisions about what goal to achieve. As with the other concepts discussed so far, autonomy lies on a continuum. Take, for example, the wellness agent from before. We said that it can monitor diaries to ensure that everyone is participating adequately in social activities. What should it do, however, if someone is not participating in social activities? This will depend on how autonomous the agent is. It could,
for example, simply notify the line manager of the person in question to highlight the issue and leave it at that. Alternatively, it could decide to change an employee’s work schedule so that the employee can then take the opportunity to book some time for a social activity. Or it could change the work schedule and book the social activity without asking anyone’s permission.
As we introduce AI in our work processes, we need to carefully consider exactly how much autonomy we are providing. More autonomy means we are delegating more decision-making power to computer software, and we will potentially reap more efficiency out of it. It also means that we may have to deal with unintended consequences.
■ An informative example of this is the now infamous Microsoft Tay chatbot released on Twitter. Tay was given considerable autonomy in terms of what messages it could produce, and that led to an embarrassment for Microsoft. As trolls “taught” Tay racist and extremist phrases, the chatbot used those phrases in interactions with other people. Microsoft had to recall the bot, blaming a “vulnerability”—the vulnerability was that Tay had no constraints on what it could say and no guidance as to the quality of what it learned.
Before moving on, I want to reiterate the different levels. We will use this terminology throughout the book, so it’s useful to make sure we have it all well laid out.
• Agents are software programs that have some capabilities and can effect change in their environment through those capabilities. The desired change is called a goal.
• Passive agents are ascribed goals by their user. It is the user that manipulates a passive agent’s capabilities. Most software we use behaves like passive agents.
• Active agents have an explicit representation of a goal to achieve and can manipulate their own capabilities to achieve a goal.
• Self-direction refers to the agent’s ability to vary the ways in which it achieves a goal.
• Autonomy refers to the agent’s ability to choose between different goals, in service of a higher order target or motivation.
Agents That Learn
So far, we have talked about agents that are passive, active, and even autonomous. Another key dimension is the agent’s ability to learn from past experiences.
Returning to the wellness agent, we can envision how, as it tries different strategies with users to get them more actively involved, it “learns” which strategies work best with which users. This adaptation of its behavior to different contexts based on previous actions and historical data is what we will consider as learning.
There can be any number of layers of complexity hidden behind this learning activity. The way the agent assigns scores to different reactions and outcomes from users can become increasingly sophisticated. It may be able to draw not just on the reaction of a single person but on those of thousands of users over a long period of time. If the data grows and the variables to consider multiply, it will need special tools and increasingly sophisticated AI techniques to make sense of them.
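One very simple way to picture this learning, assuming nothing more sophisticated than per-user success rates, is sketched below. Real systems would replace this scoring with far richer models, but the shape of the loop (try a strategy, record the outcome, prefer what worked) stays the same. The strategy names are hypothetical.

```python
# Minimal sketch of a wellness agent "learning" which strategies work per user.
from collections import defaultdict


class LearningWellnessAgent:
    def __init__(self):
        # (user, strategy) -> [successes, attempts]
        self.history = defaultdict(lambda: [0, 0])

    def record(self, user: str, strategy: str, worked: bool) -> None:
        stats = self.history[(user, strategy)]
        stats[0] += int(worked)
        stats[1] += 1

    def best_strategy(self, user: str, strategies: list) -> str:
        def success_rate(strategy: str) -> float:
            successes, attempts = self.history[(user, strategy)]
            return successes / attempts if attempts else 0.0

        return max(strategies, key=success_rate)


agent = LearningWellnessAgent()
agent.record("alexis", "calendar_nudge", worked=True)
agent.record("alexis", "email_reminder", worked=False)
print(agent.best_strategy("alexis", ["calendar_nudge", "email_reminder"]))
# -> calendar_nudge
```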
Agent Communities
For the sake of completeness, let us also briefly consider multiple interacting agents. This is an aspect of automation that is not often discussed but is crucial. Hardly any problem can be solved by a standalone piece of software. We always need to interact and integrate with other components in order to achieve our goals. This trend will only continue to accelerate.
In the near term, it is more likely that an autonomous piece of software, for example, your Siri-like phone assistant, will interact with passive services (e.g., using a timetable API to understand when the next train is coming). However, it is reasonable to assume that this will change. We will get to the point where multiple autonomous software programs regularly interact in our daily lives and make decisions for us. At that point, we not only have to deal with how single agents arrive at a decision but also with the emergent behaviors of multiple agents.
It might be a group of autonomous vehicles distributing themselves across a road network or two meeting-booking agents negotiating based on their owners’ agendas. Returning to our wellness example, assume that there is an agent community with the common goal of “get employees to interact more within the company.” Each individual agent, however, has differing capabilities and motivations. One agent is the Social agent with a particular focus on social activities, while a Health agent is more concerned with helping users maintain a healthy lifestyle in and out of work. The Social agent may suggest that a user should take one less trip to the gym and spend that time instead doing a more
social activity. The Health agent would then have to weigh the pros and cons of this. Perhaps they even enter a negotiation to decide how to settle the issue. They might settle on setting up a game of tennis to satisfy both social and health needs!
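A toy sketch of that negotiation, with entirely invented scores, might look like this: each agent rates the candidate activities against its own motivation, and the community settles on the option with the best combined score.

```python
# Toy sketch of two agents with different motivations settling on a shared goal.
def negotiate(proposals: list, scorers: dict) -> str:
    # Each agent scores each proposal against its own motivation;
    # the community picks the proposal with the best combined score.
    def combined_score(proposal: str) -> float:
        return sum(score(proposal) for score in scorers.values())

    return max(proposals, key=combined_score)


scorers = {
    "social_agent": lambda p: {"yoga_class": 2, "tennis_game": 3, "gym_alone": 0}[p],
    "health_agent": lambda p: {"yoga_class": 2, "tennis_game": 2, "gym_alone": 3}[p],
}
print(negotiate(["yoga_class", "tennis_game", "gym_alone"], scorers))  # -> tennis_game
```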
A lot of research in agent-based computing focuses on how to get agents to coordinate to collectively solve problems, and how we can reason about the behavior of a group of agents. The more heterogeneous the types of agents interacting, the more complex the problems can become.
Moving Beyond Intelligence
In this chapter we explored the very concept of AI, provided some examples of the challenges around trying to pin it to any single definition, and also distinguished between general AI and domain-specific AI.
We then referred to agent-based computing as a source of conceptual grounding for thinking about AI, with a focus on observable behavior rather than the specific techniques that allow us to create those behaviors. This grounding allows us to consider the task at hand, the delegation of decision-making to machines, without having to make explicit reference to notions of intelligence. Moving past vague notions of intelligence clarifies thinking.
At the same time, the agent-based perspective we introduced here needs some time to embed. Take an application you consider to be an example of AI and try to classify it from the viewpoint of agents. What goals is it trying to achieve? What information is it using? What decisions can it make? To what extent can it autonomously effect change? Does it interact with any other applications? In what ways does it interact?
I find these sorts of exercises extremely useful for starting to reason about problems and the extent to which we are really delegating decision-making power and automating. The more confident you become in analyzing problems through this lens, the more adept you will be in talking with AI practitioners about what you need to see happen.
Your business goal is unlikely to ever be to have an application built that uses the latest neural network architecture.5 It will hopefully be defined according to a specific problem you are facing in the workplace and associated with clear objectives about how to improve the way things get done. That is the what we care most about. Concepts such as active and passive agency, self-direction, and autonomy help us capture the what in clear terms across a range of different domains. The how comes next and will be the subject of the next couple of chapters.
5 Unless you are looking to raise VC money, that is!
Chapter 3
Building AI-Powered Applications
What does it mean to build an AI-powered application? In the previous chapter we started shaping our thinking around the types of behavior that AI software may exhibit, such as the proactive accomplishment of goals and autonomous goal setting. We did not, however, discuss how such behavior is achieved. We purposely did not refer to any specific technology. Technologies, of course, do matter. Technologies, though, are also constantly changing. That is why it was crucial to be able to think about AI applications without reference to specific technologies. At the same time, we need to be able to consider what technologies may be required in order to make informed choices. This chapter begins to lay the foundations in that direction. It digs deeper into the question of how AI-powered applications are constructed, and it attempts to do this in a way that hopefully anyone should be able to follow.
AI is an incredibly fast-moving space; the buzzwords and trends of today will not necessarily be the same ones of tomorrow. We could do a deep dive into the most fashionable machine learning techniques, talk about the finer details