The mass adoption of AI in everyday life poses real threats to individuals, despite AI's ability to solve problems from large amounts of data in a fraction of the time, something that would not be possible given the limited abilities of humans.
The ever-growing use of and demand for AI in health, education, business, transportation, security, agriculture, and similar domains requires that these AI systems be regulated.
Responsible machine learning is an effort to police AI systems so that they meet human social norms (humans' ethical judgments). This ensures that we build trust and confidence in AI models centered on human ethics.
Areas to look at in ensuring responsible ML: explainability, accountability, transparency, privacy & robustness, safety & reliability, and fairness.
Accountability: This ensures that every person involved in the pipeline-creation process of the AI can at any time be held accountable for the decisions made by the AI and their impact on society. There should be some form of traceability so that issues can be traced to their source throughout the AI life cycle.
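One lightweight way to get that traceability is an audit trail: every prediction is recorded together with the model version and inputs that produced it. The sketch below is a hypothetical, minimal illustration (the model, version string, and log format are all made up for the example):

```python
import datetime

# Hypothetical audit trail: every prediction is recorded with enough
# context to trace a decision back to the model version and its input.
audit_log = []

def audited_predict(model_fn, model_version, features):
    """Run a prediction and append a traceable record to the audit log."""
    prediction = model_fn(features)
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    })
    return prediction

# Toy model: approve a loan if income exceeds a threshold.
def toy_model(f):
    return "approve" if f["income"] > 50_000 else "deny"

decision = audited_predict(toy_model, "v1.2", {"income": 60_000})
print(decision)  # approve
```

In a real deployment the log would go to durable, append-only storage rather than an in-memory list, so that a decision can be audited long after it was made.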
Transparency: This reinforces trust through disclosure. Examples are laws that give users the right to access their data and to know the basis on which an AI system made a particular prediction about them. Users should also know whether they are actually chatting with a bot or with a human being.
IBM OpenScale monitors models so that developers can see and understand why a model made a particular prediction.
Privacy & Robustness: This looks at safeguarding consumers' privacy and data rights. Homomorphic encryption is a promising technology for keeping users' data from being leaked: it allows computations to be performed on encrypted data without decrypting it, so third parties can obtain only the results they need and nothing more. This makes it suitable for use in the health and finance sectors.
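To make the idea concrete, here is a toy illustration using textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields an encryption of the product of the plaintexts. The parameters are tiny and deliberately insecure; real systems use dedicated schemes such as Paillier or BFV through specialized libraries.

```python
# Textbook RSA with tiny, insecure parameters -- illustration only.
p, q = 61, 53                        # small primes
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 6
# A third party can multiply the ciphertexts without ever seeing a or b:
# Enc(a) * Enc(b) mod n is a valid encryption of a * b.
product_cipher = (encrypt(a) * encrypt(b)) % n

print(decrypt(product_cipher))  # 42 == 7 * 6
```

The key point is that the party doing the multiplication never learns `a`, `b`, or the result; only the holder of the private key can decrypt.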
Explainability: The AI system should allow its inferences or decisions to be checked. For example, why does it choose to employ people named Winifred, or grant loans to a certain class of men over women, even though both parties have similar qualifications?
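A simple way to probe such decisions is perturbation: change one feature at a time and see whether the decision flips. The sketch below uses a hypothetical toy loan model (with a deliberately biased rule on `sex`, to show how perturbation exposes it); it mirrors, in a much simpler form, the idea behind tools like LIME.

```python
def loan_model(applicant):
    # Toy decision rule; the dependence on 'sex' is deliberate,
    # to demonstrate how the check below surfaces it.
    return "approve" if applicant["sex"] == "M" and applicant["score"] > 600 else "deny"

def explain(model, applicant, alternatives):
    """Report which single-feature changes flip the model's decision."""
    base = model(applicant)
    flips = []
    for feature, values in alternatives.items():
        for v in values:
            if v == applicant[feature]:
                continue
            perturbed = {**applicant, feature: v}
            if model(perturbed) != base:
                flips.append((feature, v))
    return base, flips

decision, reasons = explain(
    loan_model,
    {"sex": "M", "score": 700},
    {"sex": ["M", "F"], "score": [500, 700]},
)
print(decision, reasons)  # approve [('sex', 'F'), ('score', 500)]
```

Here the check reveals that changing only the applicant's sex from "M" to "F" flips the decision from approve to deny, which is exactly the kind of inference an explainable system must let us inspect.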
Safety & Reliability: How safe are the users of your AI system? Self-driving cars are an example: can we rely on them to pilot us safely to our destinations?
Fairness: This ensures that models are fair across the whole population, encouraging an inclusive society while eliminating inequalities in employment, lending, disability, and the like. A system to moderate or check for biases would be a great help.
Bias
Bias in models occurs when models make inferences that discriminate against certain groups, typically underrepresented ones. This can be seen across age, nationality, sex, gender, and race.
Intentional Bias: This form of bias is associated with the influence of the creators/engineers on the AI throughout its pipeline. If the creators are racist or stereotypical, the model can be made to behave likewise in the inferences it makes.
Unintentional Bias: This form of bias is usually associated with the type and availability of data. Garbage in, garbage out.
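A cheap guard against this kind of bias is to audit group representation in the training data before fitting anything. The counts and the 20% threshold below are hypothetical, chosen only to illustrate the check:

```python
from collections import Counter

# Hypothetical, heavily skewed training sample.
training_labels = ["M"] * 900 + ["F"] * 100

counts = Counter(training_labels)
total = sum(counts.values())

for group, count in counts.items():
    share = count / total
    flag = "  <- underrepresented" if share < 0.2 else ""
    print(f"{group}: {share:.0%}{flag}")
```

A model trained on this sample would see nine examples of one group for every example of the other, and its errors would be distributed just as unevenly; rebalancing or collecting more data for the minority group addresses the problem at the source.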
Recommended actions to take:
Teams should ensure that they know and understand their institution's guidelines and stay abreast of national and international regulations.
The AI system should be human-centric (aligned with the norms and values of all user groups).
Document and keep detailed records of the institution's design and decision-making processes.
Clearly define the role of each team member in the AI life cycle.