
The Trouble with Algorithms: Algorithm Ethics

Algorithm ethics is becoming a critical issue for businesses of every shape, size, and type as artificial intelligence (AI) and machine learning (ML) systems are integrated into standard business workflows. These systems, originally thought capable of only menial tasks, have grown rapidly more sophisticated with technological advances, and adoption is accelerating. According to the technology research firm IDC, worldwide business spending on AI is expected to hit $50 billion this year and $110 billion annually by 2024, and these figures have changed little even after the global economic slump caused by the COVID-19 pandemic.

Though it is exciting that these systems are rapidly being adopted across industries, the approach to algorithm ethics and regulation must keep the same pace. Both research and practice have surfaced numerous ethical problems. Companies should focus on understanding what these problems are and why they exist, and then put safeguards in place to prevent trouble with their algorithms.

Navigating Algorithm Ethics Challenges

The algorithm ethics problems in AI and ML, and their impacts, differ by industry. Because these technologies are so versatile, it is hard to pinpoint exactly what any one company should expect. However, there are some overarching patterns of which companies should be aware.

Bias and discrimination: These algorithms can process far more information, at a far faster rate, than humans. That does not make their conclusions any more neutral or just. For example, AI used to predict future criminals has shown bias and elevated the potential for racial discrimination. This bias can also carry into the workplace, especially through new HR recruiting software. Though such tools may help eliminate human bias in the recruiting process, AI can still make unfair judgments. Remember, humans write the code and design the systems, which can embed inherent bias. If we want to make algorithms fairer, we must consciously check our own biases and make sure they do not infiltrate our algorithms.
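
To make that check concrete, below is a minimal sketch of a fairness spot-check in Python: it computes the gap in positive-prediction rates across demographic groups, one common signal of disparate impact. The column names and data are hypothetical placeholders, not a prescribed standard.

```python
# A minimal fairness spot-check: compare a model's positive-outcome rate
# across demographic groups. Column names ("group", "prediction") are
# hypothetical placeholders for your own data.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Return the gap between the highest and lowest positive-prediction
    rates across groups; a large gap is a signal to investigate for bias."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates.max() - rates.min()

# Example: predictions from a hypothetical hiring model (1 = advance candidate)
scored = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "prediction": [1,    1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(scored, "group", "prediction")
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 here; worth flagging for review
```

A gap alone does not prove discrimination, but a check like this gives a review board something measurable to question rather than relying on intuition.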

Privacy concerns: Privacy is already a major concern, as data about you can be collected from seemingly anywhere, and the emergence of new algorithms only amplifies those existing concerns. AI enables companies to track what people are doing and predict what they'll do next. For example, social media sites and streaming services can push content you are likely to engage with based on previous clicks and plays that you may have forgotten. Having your every action tracked, and profiles built about you, is understandably concerning, and the pervasiveness of AI-enabled knowledge will continue to grow. The question now is: how will companies use this data? While some regulators are trying to implement and enforce legislation to protect user privacy, this is a continuously evolving space with implications for most businesses.

Transparency in decision making: The goal of advanced algorithms is to have them make decisions based on information and calculations that humans cannot compute themselves. However, as these intelligent machines get smarter, humans may no longer understand how they reach their final conclusions, which makes it hard to identify what steps in the algorithm need to be fixed or amended to get better results. Ideally, AI and ML decisions will be traceable and explainable so that human oversight can be implemented, allowing human judgment to correct wrong assumptions or faulty conclusions. Different applications strike different tradeoffs between interpretability and accuracy: some complex models give very accurate results in testing but are "black box" models that cannot be explained at the most detailed level. In some applications that is acceptable; in others, the algorithm's inputs need to be explainable, particularly where the safety of children and teens is at stake, as in the case of Facebook.
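
As one illustration of making a "black box" more explainable, below is a minimal sketch using scikit-learn's permutation importance, which measures how much a model's accuracy drops when each input feature is shuffled. The dataset and model are stand-ins; this is one technique among many, not the only route to explainability.

```python
# A minimal sketch of adding explainability to a black-box model using
# permutation importance. The synthetic dataset and random forest are
# stand-ins for a real application's features and estimator.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# the features whose shuffling hurts most are the ones the model leans on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Output like this lets a human reviewer ask whether the features the model depends on are ones the business can defend.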

Setting up Safeguards to Prevent Potential Problems

The use of AI and ML is still relatively new in business, so it may be hard to predict best practices for using these technologies. But we can set up safeguards that track how we use the algorithms and establish a process for dealing with potential problems.

The main goal in safeguarding against advanced algorithms is to maintain human oversight at all stages of implementation. Until we are confident that machines can work with human sensitivity and empathy, it is important that companies allow people to make the final call.

AI Review Boards: Businesses looking to implement AI in their workflows should set up AI review boards. These board members will be responsible for making sure the integration of AI and ML into the company's decision-making processes is seamless. They will also be responsible for the procedures the company uses when developing, adopting, or deploying any AI-related products. An important aspect of these review boards is diversity: board members should be diverse not only in background, but also in their stakeholder positions within the company.

Companies can set up these review boards in a process similar to how DevOps teams review code. Just as software engineering best practice calls for code reviews and thorough testing before code reaches production, the same is true of AI and ML models: the work should be verified and thoroughly tested. MLOps, however, requires further validation and ongoing model maintenance. In DevOps, you build a solution and it works; in ModelOps, you build a solution and then must monitor it over time, because the data is always new and changing, and the model may not work as expected a year from now.
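
As a concrete illustration of that ongoing monitoring, below is a minimal sketch of a data-drift check: a two-sample Kolmogorov-Smirnov test comparing a feature's distribution at build time against fresh production data. The feature, data, and alert threshold are illustrative assumptions.

```python
# A minimal data-drift check for ongoing model maintenance: compare a
# feature's training-time distribution against recent production data.
# The synthetic distributions and 0.01 threshold are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=1000)    # data at build time
production_feature = rng.normal(loc=0.4, scale=1.0, size=1000)  # data a year later

stat, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # hypothetical alert threshold
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.4f}): review or retrain the model")
else:
    print("No significant drift detected")
```

Run on a schedule against each important input feature, a check like this turns "monitor the model over time" from a principle into a routine task.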

Code of Ethics: As with any ethical decision-making strategy, companies should construct a formal code of ethics to abide by. This code should thoroughly lay out the company's core principles and values, its processes, and how algorithm ethics issues in AI development will be addressed. Once developed, this code should be made accessible to all employees and other company stakeholders. External parties should be able to access the code as well, so they can see how the company is being proactive in its adoption of AI.

AI Audit Trails: Finally, to deal with future problems, companies should maintain AI audit trails. These audit trails are essential for boosting explainability and transparency in how algorithms are developed. Not only will they help outside parties understand how AI is being used within the firm; they will also help internal employees track progress and potential missteps within the process. Audit trails may be especially relevant if consumer harm or product liability cases arise from AI use.
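
As one possible shape for such a trail, below is a minimal sketch of an append-only audit log in Python, recording each model decision with its inputs, model version, timestamp, and human reviewer. The field names and file path are assumptions for illustration, not a mandated schema.

```python
# A minimal AI audit trail: an append-only JSON-lines log capturing each
# prediction with its inputs, model version, and timestamp so decisions
# can be reconstructed later. Field names and the log path are assumptions.
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_trail.jsonl"

def log_prediction(model_version, inputs, output, reviewer=None):
    """Append one audit record per model decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,  # supports the human-oversight safeguard
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a hypothetical credit-scoring decision
log_prediction("credit-model-1.2.0", {"income": 54000, "tenure_years": 3}, "approve")
```

Because each record is written rather than overwritten, the log preserves the sequence of decisions, which is exactly what outside reviewers and liability inquiries would need to reconstruct.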


Coauthored, with contributions, by Maggie Wong

Tags: Advanced Analytics, Analytics, Artificial Intelligence, Data & Analytics