The not-so-hidden FTC guide to organizational use of artificial intelligence (AI), from data collection to model audits
Our last AI article on this blog, “The New (if Decidedly Not ‘Final’) Frontier of Artificial Intelligence Regulation,” discussed both the Federal Trade Commission’s (FTC) April 19, 2021 AI guidance and the European Commission’s proposed AI regulation. The FTC’s 2021 guidance drew, in large part, on the FTC’s April 2020 post “Using Artificial Intelligence and Algorithms.” The recent FTC guidance also built on earlier FTC work on AI, including a January 2016 report, “Big Data: A Tool for Inclusion or Exclusion?,” which in turn followed a September 15, 2014 workshop on the same topic. The big data workshop addressed data modeling, data mining, and analytics, and offered a forward-looking preview of what would become the FTC’s approach to AI.
The FTC guidance starts with data, which the 2016 big data guidance, and further AI development, addresses most directly. The 2020 guidance then highlights key principles – transparency, explainability, fairness, accuracy, and accountability – that organizations must take into account. And the 2021 guidance explains how consent, or opt-in, mechanisms work when an organization collects data used for model development.
Taken together, the three sets of FTC guidance – the 2021, 2020, and 2016 guidance – provide an overview of the FTC’s approach to organizational use of AI, covering much of the data lifecycle, including model creation, refinement, use, and back-end auditing of AI. Overall, the various elements of the FTC guidance also outline a multi-step process for what the FTC appears to consider responsible use of AI. In this article, we summarize our takeaways from the FTC’s AI guidance across the data lifecycle to provide a practical approach to responsible AI deployment.
Data collection and compilation
– The evaluation of a dataset should assess the quality of the data (including accuracy, completeness, and representativeness), and if the dataset is missing data for certain demographic groups, the organization should take appropriate steps to address and remedy that gap (2016).
– An organization should honor promises made to consumers and provide consumers with substantive information about the organization’s data practices when collecting information for AI (2016). Any opt-in mechanism associated with such data collection should work as described to consumers (2021).
– An organization should recognize the data compilation step as a “descriptive activity,” which the FTC defines as a process of discovering and summarizing “patterns or characteristics that exist in data sets” – a reference to data mining scholarship (2016) (note that the referenced FTC documents originally on mmds.org are now redirected).
– Compilation efforts should be organized around a lifecycle model that provides for compilation and consolidation before moving on to data exploration, analysis and use (2016).
– An organization must recognize that there may be uncorrected biases in the underlying consumer data that will surface in a compilation; an organization should therefore examine datasets to ensure that hidden biases do not create unintended discriminatory impacts (2016).
– An organization must maintain reasonable security on consumer data (2016).
– If data is collected from individuals in a misleading or inappropriate manner, the organization may need to delete the data (2021).
Selection of AI models and applications
– An organization should recognize the AI model selection and application stage as a predictive activity, in which an organization uses “statistical models to generate new data” – a reference to predictive analytics scholarship (2016).
– An organization needs to determine whether a proposed data model or application correctly accounts for bias (2016). Where there are gaps in the data model, the use of the model should be limited accordingly (2021).
– Organizations that build AI models may “not sell their big data analytics products to customers if they know or have reason to know that those customers will use the products for fraudulent or discriminatory purposes.” An organization should therefore assess the potential limitations of providing or using AI applications to ensure that each use of the application has a “permissible purpose” (2016).
– Finally, as a general rule, the FTC notes that under the FTC Act, a practice is unfair if it causes more harm than good (2021).
Development of a model
– Organizations should design models to account for data gaps (2021).
– Organizations should consider whether their use of particular AI models raises ethical or fairness concerns (2016).
– Organizations must consider the end uses of models and cannot create, market or sell “insights” used for fraudulent or discriminatory purposes (2016).
Model testing and refinement
– Organizations should test an algorithm before use (2021). That testing should include an assessment of the AI’s outcomes (2020).
– Organizations need to consider the accuracy of predictions when using “big data” (2016).
– Model evaluation should focus on both inputs and outcomes, and AI models must not discriminate against a protected class (2020).
– Evaluation of inputs should include consideration of ethnically based factors or proxies for such factors.
– Evaluation of outcomes is essential for all models, including facially neutral models.
– Model evaluation should consider alternative models, as the FTC may challenge models if a less discriminatory alternative achieves the same results (2020).
– If data was collected from individuals in a deceptive, unfair, or illegal manner, deletion of any AI models or algorithms developed from that data may also be required (2021).
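The outcome testing described above can be made concrete. Below is a minimal sketch, not an FTC-prescribed method, of comparing a model’s favorable-outcome rates across demographic groups; the function names, group labels, and the idea of flagging low ratios are illustrative assumptions.

```python
# Hypothetical sketch of outcome evaluation across demographic groups.
# Group labels and the notion that prediction == 1 is the favorable
# outcome are assumptions for illustration only.

def selection_rates(predictions, groups):
    """Compute the rate of favorable outcomes (prediction == 1) per group."""
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def disparate_impact_ratio(predictions, groups):
    """Ratio of the lowest group selection rate to the highest.

    Values well below 1.0 flag outcomes that warrant closer review,
    even for a facially neutral model.
    """
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())
```

For example, if group “A” receives favorable outcomes 75% of the time and group “B” only 25% of the time, the ratio is 1/3, a result that would prompt further review under the evaluation principles above.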
Front-end consumer and user disclosures
– Organizations must be transparent and must not mislead consumers “about the nature of the interaction” – for example, by using fake “dating profiles” as part of their AI services (2020).
– Organizations cannot overstate the effectiveness of an AI model or misinform consumers about whether AI results are fair or unbiased. According to the FTC, misrepresentations about AI are actionable (2021).
– If algorithms are used to assign scores to consumers, an organization should disclose the key factors that affect the score, ranked in order of importance (2020).
– Organizations providing certain types of reports through AI services must also provide notices to users of these reports (2016).
– Organizations that build AI models based on consumer data must, at least in certain circumstances, provide consumers with access to information supporting AI models (2016).
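The score-disclosure expectation above implies being able to rank the factors behind a score. As a minimal sketch, assuming a simple linear scoring model (the weights, feature names, and helper function here are hypothetical, not drawn from FTC guidance):

```python
# Hypothetical sketch: ranking the key factors behind a consumer score
# so they can be disclosed in order of importance.
# The linear model and feature names are illustrative assumptions.

def ranked_key_factors(weights, consumer_features):
    """Rank features by the magnitude of their contribution to a linear score."""
    contributions = {
        name: weights[name] * value
        for name, value in consumer_features.items()
    }
    return sorted(contributions, key=lambda n: abs(contributions[n]), reverse=True)
```

An organization using a more complex model would need an importance measure suited to that model, but the disclosure principle is the same: the factors reported to the consumer should reflect what actually drove the score.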
Back-end consumer and user disclosures
– Automated decisions based on third-party data may require the organization using the third-party data to provide the consumer with an “adverse action” notice (for example, if, under the Fair Credit Reporting Act, 15 USC § 1681 (Rev. Sep 2018), such a decision denies an applicant an apartment or requires the applicant to pay higher rent) (2020).
– General statements such as “you do not meet our criteria” are not sufficient. The FTC expects end users to know what specific data is used in the AI model and how that data is used by the model to make a decision (2020).
– Organizations that change specific conditions of offers on the basis of automated systems must disclose the changes and reasoning to consumers (2020).
– Organizations should provide consumers with the ability to modify or supplement information used to make decisions about them (2020) and allow consumers to correct errors or inaccuracies in their personal information (2016).
Model deployment and audits
– When deploying models, organizations should confirm that AI models have been validated to ensure that they work as intended and do not unlawfully discriminate (2020).
– Organizations should carefully assess and select an appropriate AI accountability mechanism, transparency framework, and/or independent standard, and implement it where appropriate (2020).
– An organization should determine the fairness of an AI model by considering whether the particular model causes, or is likely to cause, substantial harm to consumers that is not reasonably avoidable and is not outweighed by countervailing benefits (2021).
– Organizations should periodically test AI models to revalidate that they work as intended (2020) and to ensure that there are no discriminatory effects (2021).
– Organizations should hold themselves accountable for compliance, ethics, fairness, and nondiscrimination when using AI models, taking into account four key questions (2016; 2020):
– How representative is the dataset?
– Does the AI model take bias into account?
– How accurate are the AI’s predictions?
– Does the use of the dataset raise ethical or fairness issues?
– Organizations should embrace transparency and independence, which can be achieved in part through the following (2021):
– Use audit processes and independent third-party auditors that are insulated from the interests behind the AI model.
– Ensure that AI datasets and source code are open for external inspection.
– Apply recognized AI transparency frameworks, accountability mechanisms and appropriate independent standards.
– Publish the results of third-party AI audits.
– Organizations remain accountable throughout the AI data lifecycle, in line with the FTC’s recommendations on AI transparency and independence (2021).
The content of this article is intended to provide a general guide on the subject. Specialist advice should be sought regarding your particular situation.