What is the future trajectory of artificial intelligence? Does AI need a moral and ethical framework? If so, should this be shaped by regulation? Expert Harry Glaser provides his thoughts.
Harry Glaser, CEO and co-founder of Periscope Data, is well placed to discuss the significance of artificial intelligence for society. Glaser recently presented at the Strata Data conference on this topic. His presentation, "An Ethical Foundation for the AI-Driven Future," focused on the moral role that data teams need to take on to ensure that artificial intelligence and machine learning do not deliver "immoral outcomes."

Glaser's view is that organizations with data professionals acting as a "Chief Conscience" for the company are much more likely to ensure that data and algorithms are used correctly. Glaser's company, Periscope Data, provides the mechanism to bring data teams together with their stakeholders on a single, unified platform.
To probe the major questions arising from the era of artificial intelligence, Digital Journal spoke with Harry Glaser.
Digital Journal: How widespread is artificial intelligence becoming?
Harry Glaser: From big public companies to startups, the past year has seen a huge increase in AI adoption across all industries. Academia, nonprofits and businesses are starting to understand the benefits of AI adoption. They see how it can affect their bottom line, be an engine for business efficiency and drive competitive advantage. Adoption hasn’t just been limited to technology companies. We’re seeing a trend that began in the tech sector and is now expanding to every other industry.
That said, we work with more than 1,000 teams of data professionals, and every organization is at a different stage of adoption right now. Some companies have deeply ingrained AI and ML technologies into their workflows and made serious business impacts, while many others are still in the experimental phase with investments in data scientists behind the scenes. And of course, there are definitely plenty of companies who think they are using AI in an impactful way, but haven’t invested in the right data experts to properly translate those results into action.
DJ: How ‘intelligent’ is most AI?
Glaser: Today’s AI is certainly very technically savvy, but AI is only as intelligent as the humans behind it. And there’s often a big gap between those who are capable of building and supporting AI models and the business decision makers who need to apply that information.
Data from AI and ML just isn’t something that executives can model on their own or analyze easily – it involves heavy use of disparate data sources, distinct analysis tools and technical languages. As a result, translating information from AI into something these leaders can use is one of the most important issues businesses should be addressing today.
Here’s what happens if they can’t do that: data teams end up building incredible prediction models, but instead of having the authority to translate those insights into action, they do little more than present them to other teams, remaining isolated from the decision-making process. These projects are too often siloed in the data science realm, excluding other teams.
DJ: Are consumers accepting of AI?
Glaser: There’s been a wave of fascination and fear of AI from consumers that you can see playing out with early adopters like Facebook and Google in congressional hearings, in Medium posts, and on editorial pages across the country. Many consumers don’t actually understand the technology and the impact it’s had on their daily lives because they can’t see “it,” but they certainly feel the effects. The implications of AI on daily life have never been more important or complex, but there is one thing we can be sure of: it’s here to stay.
DJ: Does AI present data privacy challenges?
Glaser: Like any new technology, once it’s released into the wild and gains traction, there will be a learning curve for companies that implement AI. The responsibility of being the arbiters of privacy protection for consumers rests on the shoulders of the data scientists, analysts and engineers building these algorithms. It’s easy to jump to conclusions and speculate on the negative effects of AI adoption – there have been many films and TV shows tackling that topic, adding to the hysteria playing out in the public consciousness.
DJ: Does AI need a moral and ethical framework?
Glaser: Absolutely. Think about all the examples we’ve seen of companies who have been blind to the implications of their AI processes until the damage was done. Their problems could have been mitigated by looking beyond short-term gains toward long-term outcomes. Too many times, there’s not a “human governor” of these AI systems who is thinking hard enough about the impact of unchecked, AI-driven outcomes.
One example that’s been widely discussed this year: at the Artificial Intelligence, Ethics and Society conference in New Orleans in February, one presenter discussed an algorithm that could identify whether a crime was gang-related just by knowing the weapon used, the number of suspects, the neighborhood, and the type of location of the incident (e.g. an alley or street corner).
This is a powerful use of AI, but the presenters showed no interest in participating in the conversations about its potential side effects or bias, choosing to answer a question about ethical concerns with, “I am just an engineer.” Unfortunately, there’s a perception among many engineers and data experts that someone else will handle the proper, moral implementation of their technology. Too often that’s not the case, and it's dangerous and naive to assume otherwise.
DJ: What measures need to be taken to ensure AI is ‘ethical’?
Glaser: Data teams need to be the moral and ethical compass of AI. Beyond just governing the AI and machine learning processes to ensure they are accurate, it’s the responsibility of those data professionals to understand the broader societal and cultural impact of their work, and the potential for harm that comes from it.
Companies need to empower their data scientists to knock down the barriers between business users and the data they need, to serve as an ethical barrier to ensure algorithms don't deliver immoral outcomes, and to be aware of the responsibilities that come with the power of their algorithms. The executive team cannot and will not fill this role, because they don’t have the skills and expertise to question the AI/ML systems.
Things that can be done by machines should be done by machines – efficiency demands that. But on the flip side, everything that needs to be done by humans should be done by humans. Ethics, judgment, creativity and compassion require a human touch.
DJ: What examples are there of AI being used for social good?
Glaser: My favorite example is our customer Crisis Text Line, a suicide hotline that saves lives by staffing teams of operators that respond to texts from people in crisis. Using natural language processing and predictive analytics, they analyze conversations based on common keywords and tags, which enables them to forecast crisis trends and train counselors to have impactful conversations with texters.
That’s an example of a company whose core mission and values are socially valuable, and who are using AI and advanced analytics to further that mission. But there are also many companies out there who are using AI responsibly and thoughtfully, in a way that doesn’t negatively impact their customers or the public, but still helps them make intelligent decisions about how to invest their resources and improve their revenue.
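To make the keyword-and-tag idea Glaser describes more concrete, here is a deliberately simplified sketch of keyword-based message triage. The keyword list, risk weights, and function names are all invented for illustration – Crisis Text Line's actual system uses far more sophisticated natural language processing and predictive models than a weighted word lookup.

```python
# Toy keyword-based triage: score messages by summing risk weights of
# known keywords, then surface the highest-scoring messages first.
# Keywords and weights below are illustrative placeholders only.

HIGH_RISK_KEYWORDS = {"overdose": 3, "pills": 2, "hopeless": 1, "alone": 1}

def triage_score(message: str) -> int:
    """Sum the risk weights of known keywords found in a message."""
    words = message.lower().split()
    return sum(weight for kw, weight in HIGH_RISK_KEYWORDS.items() if kw in words)

def prioritize(messages: list[str]) -> list[str]:
    """Order incoming messages so higher-scoring ones are handled first."""
    return sorted(messages, key=triage_score, reverse=True)
```

Even this crude version shows the shape of the approach: tagging conversations with signals lets a team route scarce counselor time toward the texters most likely to be in acute crisis, and aggregating the same tags over time supports the trend forecasting Glaser mentions.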
DJ: What services does Periscope Data provide?
Glaser: Periscope Data has built an analytics platform that is bringing more speed, power and flexibility to the world of BI. We’ve found that data teams work best when they can create a single source of truth for data across the organization, so we’ve created Periscope Data with their workflows in mind. Within Periscope Data, you can ingest, store, analyze, visualize and report on data all from our single connected platform.
We’ve made investments this year that are focused on helping teams of all sizes answer their most advanced data questions. That includes everything from visual data exploration and core analytics reporting features to machine learning and predictive modeling. We want to help businesses truly empower their data professionals and turn them into superheroes across the business.
DJ: What types of organizations do you work with?
Glaser: We have more than 1,000 data teams working with us, many of whom spend more than 20 hours per week in our platform to ensure data plays a part in every decision they make. That includes large Fortune 500 companies and smaller startups alike – customers have found that our platform supports them well as they scale so we’re a fit for companies of all sizes.
We also have a large ecosystem of ETL partners, integrators and native connectors who make it possible to bring all of a company’s data sources together in a single place. That includes strategic partners like AWS and Snowflake, alliances with FiveTran, Alooma and a bevy of supported data warehouses and sources.