AI For Good: Who’s Who

As a developer, I spend most of my time at a desk, reading source code, running debuggers, and searching Stack Overflow for solutions to hard-to-solve problems. The best part of the Salesforce.org fellowship I recently completed was the chance to step away from the code and discover who is leading the call for responsible use of AI. Organizations at every level, from local to global and from academic to industrial, governmental, and activist, have a stake in artificial intelligence.
I was also surprised to discover that many of these organizations promote codes of conduct for the builders of AI. I think we’re starting to see Asimov’s Laws of Robotics move out of science fiction and into ethical principles that guide AI.
Fast Forward
Fast Forward operates an investment fund to help nonprofit technology startups get off the ground. Founded by Shannon Farley and Kevin Barenblat in 2014, Fast Forward invests only in technologies with a positive social impact. In 2017, it began backing AI projects, including Raheem.ai, a chatbot that helps citizens give feedback about their interactions with police. Raheem’s creators believe that most people would rather talk to their chatbot than walk into a police station and file a written report, and a pilot program proved the point: in three months, it collected twice as much data in San Francisco and Berkeley as both cities had collected in a year.
Fast Forward represents an important component of the nonprofit landscape: financial backing for organizations that want to build new public-interest technologies, and (we hope) more technologies using AI. Later, we’ll discuss why nonprofit funding will have to cover AI research that private enterprise might skip over.
College Forward
Founded in 2004, College Forward operates mentoring programs that help disadvantaged students get admitted to college. In 2010, College Forward launched CoPilot to augment its mentoring with databases and analytics. In 2014, it opened CoPilot to other institutions on the Salesforce AppExchange, and the product now serves over 150,000 students. College Forward is presently investigating data science techniques to help identify the students who would benefit most from direct coaching.
College Forward represents a common pattern in data-driven human services: a case management database, plus a number of people ready to be the “boots on the ground.” Despite the richness of the database, it’s not always easy to figure out the specific needs of clients, or to decide which people to send to their aid first. This pattern appears in many of Fast Forward’s partners, and it is omnipresent in higher education, K-12, health care, and government social agencies. In a later article, we’ll use this pattern as a roadmap for nonprofits that are looking for a way to start doing data science, and to later augment it with AI.
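To make the pattern concrete, here is a minimal sketch of what the prioritization step might look like, written in Python with pandas and scikit-learn. The column names, the toy data, and the choice of a logistic regression model are illustrative assumptions on my part; they are not College Forward’s actual schema or method.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical export from a case management database: one row per past
# student, with a label recording whether that student ended up needing
# intensive coaching. Column names are illustrative, not CoPilot's schema.
history = pd.DataFrame({
    "gpa":             [2.1, 3.4, 2.8, 3.9, 2.5, 3.1, 2.0, 3.7],
    "missed_checkins": [4,   0,   2,   0,   3,   1,   5,   0],
    "fafsa_complete":  [0,   1,   1,   1,   0,   1,   0,   1],
    "needed_coaching": [1,   0,   1,   0,   1,   0,   1,   0],
})
features = ["gpa", "missed_checkins", "fafsa_complete"]

# Fit a simple model on last year's outcomes.
model = LogisticRegression().fit(history[features], history["needed_coaching"])

# Score this year's cohort and rank it, so mentors can decide where to
# send their limited "boots on the ground" first.
current = pd.DataFrame({
    "student":         ["Ana", "Ben", "Cam"],
    "gpa":             [2.3, 3.8, 2.9],
    "missed_checkins": [3,   0,   2],
    "fafsa_complete":  [0,   1,   1],
})
current["outreach_priority"] = model.predict_proba(current[features])[:, 1]
print(current.sort_values("outreach_priority", ascending=False))
```

The model is not the hard part; the quality of the case data and the decision about what to do with the ranking matter far more, which is why we treat this pattern as a roadmap rather than a finished solution.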
The Governance Lab
The Governance Lab is a research organization at New York University that investigates the intersection of technology and institutions, particularly governments. They divide their projects into two broad groups: “Smarter Governance through Data,” which includes lecture series on data science and studies of how data science is used at the global level; and “Smarter Governance through People-Led Innovation,” which emphasizes crowdsourced solutions and greater civic engagement, especially at the municipal level.
In addition to their own publications, GovLab produces an RSS feed of blog articles, and over the past two years, there’s been an increasing number of stories related to data science and AI. Anyone wanting to take the pulse of government and AI should be following this feed.
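For developers who want to automate that pulse-taking, here is a minimal sketch in Python that scans a feed for AI-related entries. It assumes the third-party feedparser library, and the feed URL below is a placeholder; GovLab publishes the real address on their site.

```python
import feedparser  # third-party library: pip install feedparser

# Placeholder URL; substitute GovLab's actual RSS feed address.
FEED_URL = "https://example.org/govlab/feed"
KEYWORDS = ("artificial intelligence", "machine learning", "data science")

# Fetch the feed and print any entries whose title or summary
# mentions one of the keywords.
feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    if any(keyword in text for keyword in KEYWORDS):
        print(entry.get("title"), entry.get("link"))
```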
AI Now
AI Now was founded by NYU researchers Kate Crawford (also of Microsoft Research) and Meredith Whittaker (also of Google’s Open Research group). The organization focuses on studying the impact that artificial intelligence is already having on society. Its stated focus areas are Rights and Liberties; Labor and Automation; Bias and Inclusion; and Safety and Critical Infrastructure.
AI Now’s 2016 Symposium report enumerates the problems it identified and recommends solutions, including #6, “Work with representatives and members of communities impacted by [AI] to codesign AI with accountability, in collaboration with those developing and deploying such systems,” and #8, “Work with professional organizations (…) to update (or create) professional codes of ethics that better reflect the complexity of deploying AI and automated systems within social and economic domains.” This is one of many calls for ethical codes to guide AI development, and a principle we’ve adopted in our own “Nine Positive Steps” article, as well as in Part I and Part II of Kathy Baxter’s series on how to build ethics into AI.
The American Civil Liberties Union
The ACLU was founded in 1920 with the stated mission “to defend and preserve the individual rights and liberties” guaranteed in the US Constitution. The ACLU may be best known for lobbying and advocacy, and especially for Supreme Court cases, including 10 in 2017 alone. The ACLU and AI Now have established a partnership, and ACLU attorneys in the Racial Justice Program, in particular staff attorney Rachel Goodman, are investigating the economic and legal impact of AI.
Among its concerns is the Computer Fraud and Abuse Act of 1986 (CFAA), which potentially criminalizes any use of an online service that its owners find objectionable. The ACLU argues that acceptable use policies must not prevent researchers from testing whether a service discriminates. Goodman represents the plaintiffs in Sandvig v. Lynch, a constitutional challenge to the CFAA.
AI4ALL
AI4ALL is a project at Stanford University to improve diversity in AI through a special course of study offered to students from groups traditionally under-represented in the tech field. AI4ALL was founded by Stanford professor Fei-Fei Li and Princeton professor Olga Russakovsky, with support from Melinda Gates of the Gates Foundation and Jensen Huang of NVIDIA.
AI4ALL directly addresses #7 of AI Now’s 2016 recommendations: “Increase efforts to improve diversity among AI developers and researchers, and broaden and incorporate the full range of perspectives, contexts, and disciplinary backgrounds into the development of AI systems.” Fei-Fei Li particularly wants to change the Silicon Valley perception that “Tech is a guy with a hoodie.”
AI4ALL has already run for three years at Stanford (as the SAILORS program) for young women in the ninth grade. In 2018, the program will expand to five other universities, offering sessions of one to three weeks for students in grades 9 through 11.
The Association for the Advancement of Artificial Intelligence
AAAI is an academic organization, founded in 1979, analogous to the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE). It was founded to promote research into, and responsible use of, AI. Its founding reflects how fanciful AI seemed at the time: the 1970s are often called an “AI Winter,” because decades of lofty expectations and mediocre results had led to a broad decline in funding and academic interest.
Today, AAAI is a leading academic organization for AI research. It publishes several academic journals covering AI and organizes several conferences, including the longstanding annual Conference on Artificial Intelligence, now in its 32nd year.
Partnership on Artificial Intelligence
The Partnership on Artificial Intelligence to Benefit People and Society was founded in 2016 by Eric Horvitz (MD and PhD), director of Microsoft Research Labs, and Mustafa Suleyman, co-founder of DeepMind. The initial partners were Amazon, Facebook, Google, DeepMind, IBM, and Microsoft. Subsequent waves of outreach added several of the organizations described above (AI Now, AI4ALL, AAAI, and the ACLU), along with many other leading NGOs and technology firms, bringing most of the major AI organizations under the Partnership’s umbrella. Salesforce joined the Partnership in May 2017.
The Partnership’s stated goal is to establish best practices for the pursuit of AI. Their list of founders shows a focus on the industrial side of AI. It may shape up into a trade association for AI, as JEDEC is for electronics manufacturers; or it may turn into a meta-technical organization, like the Internet Engineering Task Force, which publishes specifications that shape the entire Internet.
Government agencies (like the White House Office of Science and Technology Policy) and international regulatory bodies (like the International Telecommunication Union, which held its own “AI for Good” summit in July 2017) are notably absent from the Partnership. Given its partners and stated goals, however, we can expect the Partnership to be in regular communication with governments and regulators in the near future.
The most interesting feature of the Partnership is its Thematic Pillars, of which there are presently seven: Safety-Critical AI; Fair, Transparent, and Accountable AI; Collaborations Between People and AI Systems; AI, Labor, and the Economy; Social and Societal Influences of AI; AI and Social Good; and Special Initiatives. These seven pillars are a fair cross-section of the goals for socially beneficial AI, and of the areas where AI is most likely to have an impact.
The Ethical Codes
As we have seen, organizations pursuing beneficial AI often publish ethical codes and guides for its builders. Three milestones stand out among these codes: the ACM Code of Ethics, established in 1992; the Future of Life Institute’s Open Letter on AI, from 2015; and the Asilomar Principles, from 2017.
ACM Code of Ethics
In 1992, the Association for Computing Machinery (ACM) published a Code of Ethics for software engineers. The ACM formulated this code just before the “gold rush” era of the Internet, although it has never received much public attention. The author believes that many de facto practices of the gold rush era are (at best!) questionable in light of the Code, particularly with respect to being trustworthy and respecting privacy.
These principles are particularly relevant:
- “1.2 Avoid harm to others”, which explicitly states a responsibility to avoid harm to the general public, and to take action to mitigate unintended consequences that result in harm. The ACM’s definition also requires avoiding destruction of the environment as a consequence of one’s work.
- “1.4 Be fair and take action not to discriminate.” In the era of data science, we believe this must also include understanding the biases (both statistical and social) that are present in every data set and taking action to protect those who would be harmed by that bias.
- “1.7 Respect the privacy of others.” It’s been said that if you’re not paying for the product, then you ARE the product, and Internet users have grown accustomed to waiving their privacy rights to gain access to social media or ad-supported applications. That trade becomes dangerous when AI can infer personal information at a level that was previously impossible, and the risk only grows as the Internet of Things (IoT) puts Internet-enabled sensors, essentially a ubiquitous surveillance network, into everything from traffic lights to children’s toys.
- “2.5 Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks.” In particular, this means understanding the error rates (false positive and false negative) of any AI system, and the consequences, especially the potential harm, of either type of error. A minimal sketch of this kind of evaluation appears after this list.
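To ground those last two points, here is a minimal sketch, in Python with pandas, of the kind of evaluation principles 1.4 and 2.5 call for: computing false positive and false negative rates, both overall and broken out by demographic group. The data, the column names, and the simple rate calculation are illustrative assumptions, not a complete fairness audit.

```python
import pandas as pd

# Hypothetical evaluation log for a binary decision system: the true
# outcome, the system's prediction, and a demographic group label.
# All values are made up for illustration.
results = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "actual":    [1,   0,   0,   1,   1,   0,   0,   0],
    "predicted": [1,   1,   0,   1,   0,   1,   1,   0],
})

def error_rates(df):
    false_pos = ((df["predicted"] == 1) & (df["actual"] == 0)).sum()
    false_neg = ((df["predicted"] == 0) & (df["actual"] == 1)).sum()
    negatives = (df["actual"] == 0).sum()
    positives = (df["actual"] == 1).sum()
    return pd.Series({
        "false_positive_rate": false_pos / negatives if negatives else float("nan"),
        "false_negative_rate": false_neg / positives if positives else float("nan"),
    })

# Principle 2.5: report both kinds of error, not just overall accuracy.
print(error_rates(results))

# Principle 1.4: break the same rates out by group; a large gap between
# groups is a warning sign that the system may treat them unequally.
print(results.groupby("group").apply(error_rates))
```

A real evaluation would go further, weighing the consequences of each kind of error for the people affected, but even this much reveals gaps that a single accuracy number hides.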
We note that this code is one of the inspirations for our previous “Nine Positive Steps” article. While that article was shorter and more specific to AI than the ACM Code, we believe that good-faith adherence to either one implies good-faith adherence to most of the other as well.
The Future of Life Institute: “An Open Letter” and The Asilomar Principles
The Future of Life Institute (FLI) is a nonprofit organization founded in 2014. Its discussions are broad-reaching and cover a number of global threats arising from the misuse of technology, including biotechnology and climate change, but the focus is mostly on AI and robotics. FLI’s most notable outputs are several open letters on AI and autonomous weapons systems, which encourage researchers to work on safe and robust AI that is an overall benefit to mankind.
While the overall theme of FLI looks academic, the 8,000+ signatories to its “Open Letter on Artificial Intelligence” include a number of industry leaders, particularly Elon Musk, technology billionaire; Eric Horvitz, co-founder of the Partnership on AI; and science communicators Morgan Freeman and Alan Alda. Academic signers include Stuart Russell and Peter Norvig, who literally wrote the book on AI; Geoffrey Hinton, whose 2006 breakthroughs in neural networks led directly to the deep learning revolution; and cosmologist Stephen Hawking.
The Open Letter was followed by a 2017 conference at Asilomar, which produced a 23-point statement of Principles for developers of AI, expanding on the sentiment and scope of the Letter. The Principles include “Shared Benefit,” “Shared Prosperity,” and the avoidance of an AI arms race. They also specify that:
“The goal of AI research should be to create not undirected intelligence, but beneficial intelligence”
And that investments in AI should include funding for studying the ethical use of such inventions. Many of the signers of the original Open Letter have also signed the Asilomar Principles, and the Institute has welcomed many new signatories since, including Cheryl Porro, Salesforce.org SVP of Technology and Products.
I think that the call for funding of ethical studies comes from the likelihood that (in the absence of regulations) the industry will consider such research unprofitable and refuse to pay for it. The reader may even agree with that decision, and claim that we should let the market solve this problem. I personally take it as evidence for the following theory:
Nonprofit organizations are critical to establishing safe and beneficial AI.
We also ask the reader to recall the recent breaches of personal data by organizations that should know better, the old plague of email spam, and the present plague of spyware (including the legal variety), and to consider whether the industry has earned that trust.
Asimov’s Laws of Robotics and The Future of Computable Morality
When discussing codes of conduct for AI, we’d be remiss if we didn’t mention the fictional (for now!) Three Laws of Robotics.
Isaac Asimov was a science fiction writer who envisioned a future where robots would coexist with mankind. He was writing at a time when robot-rebellion stories were common, but his behavioral restrictions, or “Laws,” ensured that robots would work to help humanity, not destroy it. These Laws became the driving element of the short stories later collected in “I, Robot,” and they feature in most of his later works.
These are Asimov’s Three Laws:
1. A robot must never harm, or through omission of action, allow to be harmed, a human being.
2. A robot must obey the orders of a human being, unless those orders contradict the First Law.
3. A robot must protect its own existence, unless doing so contradicts the First or Second Laws.
We don’t use the Three Laws today because our machines simply don’t have the level of cognition necessary to obey (or even disobey) rules at that level of abstraction; it would be like asking a Roomba to compose poetry about carpets. So it falls to us, as programmers, to enforce the Three Laws as best we are able.
Ethical guides like the ACM Code and the Asilomar Principles are encouraging progress. Given the potential of AI, our future may depend on our ability to instill morality in machines. In one sense, these are expansions from Asimov’s Laws into a more specific form that may one day encode a mechanical conscience. In another sense, perhaps experience will one day show that these codes were all pursuing the Laws of Robotics under other names.
That brings us to Asimov’s later addition of the “Zeroth Law”: A robot must never harm, or through omission of action, allow to be harmed, humanity. “Humanity” is even harder to codify than “human being”, and as such, requires an even higher level of cognition. I think that most humans would themselves find such a level of judgement and foresight hard to achieve, but it’s worth aiming for. We can only hope that someday we will express the Zeroth Law as software, because that will be the most important computer program ever written.
The author thanks Hallie Parry of Salesforce.com for introducing Asimov’s Laws of Robotics into the discussion, and Katharine Bierce at Salesforce.org for her helpful edits.
About the Author
Phil Nadeau is a lead member of the technical staff at Salesforce. In 2017, he was a Salesforce.org Technology Fellow in Artificial Intelligence. He started using Linux 25 years ago and has been working in software development for almost as long. Phil has written tens of thousands of lines of code with the LAMP stack (Linux, Apache, MySQL, Perl), Java, C, and a variety of other languages. In 2012 he graduated from Western Washington University with a Master of Science. He enjoys helping make sense of the Internet, using Java, Scala, Spark, and Python primarily for his work in search engineering. One of the highlights of his career was working as a programmer on a machine vision experiment at Bell Laboratories that controlled video games using a motion capture system built from vintage Silicon Graphics workstations and old analog video cameras. For previous work, see Phil’s blog posts Why You Should Care about AI and Where Did AI Come From.