Reading for gender bias
We are updating our language because we are now working to include bias related to race and ethnicity. Some of the code/commands, as well as the project title, still reflect the original project, which focused on gender bias. The goal is to continue updating this tool as new ways of identifying ALL forms of bias are recognized.
Promote equity by identifying potential bias in letters of recommendation and evaluations
Autocorrect for bias
Implicit bias in evaluations negatively affects individuals at every stage of their career. The goal of this project is to create a web-based text analysis tool that scans and reveals language bias associated with evaluations and letters of recommendation. The tool will provide a summary of potential changes to the writer to help them remove bias. The hope is that by bringing awareness to the existence of implicit bias, we can change how evaluations and letters are drafted and judged, thereby providing a concrete way to tackle disparities related to gender, race, and ethnicity.
Thank you for visiting the Reading for Bias project!
This document (the README file) introduces you to the project. Feel free to explore by section or just scroll through.
Even someone who wants to write a really strong letter will probably include language that reflects implicit bias, which weakens the letter.
Reading for Bias is a web-based text analysis tool that scans letters of recommendation and evaluations, flags language associated with bias, and suggests changes to the writer.
This document is currently a work-in-progress; please feel free to ask for clarification in the Issues tab of this repository, or on our Slack workspace (details below).
Currently, the most reliable way to download and start using this code is to clone it from this repository and install it using pip:
git clone https://github.com/gender-bias/gender-bias
cd gender-bias
pip3 install -e .
NOTE: The last line in the above snippet installs this library in “editable” mode, which is probably fine while the library is in a state of flux.
This installation process will add a new command-line tool to your PATH, called genderbias.
To install the dependencies, run:
pip3 install -r requirements.txt
genderbias -h
usage: genderbias [-h] [--file FILE] [--json] [--list-detectors]
                  [--detectors DETECTORS]

CLI for gender-bias detection

optional arguments:
  -h, --help            show this help message and exit
  --file FILE, -f FILE  The file to check
  --json, -j            Enable JSON output, instead of text
  --list-detectors      List the available detectors
  --detectors DETECTORS
                        Use specific detectors, not all available
You can probably ignore most of these options when getting started.
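If you do want to experiment with them, here is a minimal sketch that uses only the flags documented above: first list the available detectors, then run a check with JSON output instead of text.

genderbias --list-detectors
genderbias -f ./example_letters/letterofRecW --json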
There are two ways to check a document:
Option 1: Pipe the document over stdin
This option streams a file from stdin and writes its suggestions to stdout. You can use it like this:
cat my-file.txt | genderbias
If you don’t have a text file handy, you can try it out on one of ours:
cat ./example_letters/letterofRecW | genderbias
The tool will print its suggestions out to stdout:
Effort vs Accomplishment
[516-527]: Effort vs Accomplishment: The word 'willingness' tends to speak about effort more than accomplishment. (Try replacing with phrasing that emphasizes accomplishment.)
[2915-2926]: Effort vs Accomplishment: The word 'willingness' tends to speak about effort more than accomplishment. (Try replacing with phrasing that emphasizes accomplishment.)
[3338-3347]: Effort vs Accomplishment: The word 'dedicated' tends to speak about effort more than accomplishment. (Try replacing with phrasing that emphasizes accomplishment.)
[3492-3502]: Effort vs Accomplishment: The word 'commitment' tends to speak about effort more than accomplishment. (Try replacing with phrasing that emphasizes accomplishment.)
[3524-3533]: Effort vs Accomplishment: The word 'tenacious' tends to speak about effort more than accomplishment. (Try replacing with phrasing that emphasizes accomplishment.)
[3706-3716]: Effort vs Accomplishment: The word 'commitment' tends to speak about effort more than accomplishment. (Try replacing with phrasing that emphasizes accomplishment.)
SUMMARY: This document has a high ratio (6:1) of words suggesting effort to words suggesting concrete accomplishment.
If you’d rather that the tool print its suggestions to another file, you can use the following:
cat ./example_letters/letterofRecW | genderbias > edits-to-make.txt
This functionality is EXACTLY the same; just a matter of how you prefer to run the tool!
Option 2: Pass the file with a flag
genderbias -f ./example_letters/letterofRecW
The -f or --file flag can be used to specify a file.
The output of this tool is a character-index span that you can think of as “highlighting” the problematic (or potentially-problematic) text. Our intention is to add a more human-readable form as well; if you’re interested in helping develop that capability, please get in touch!
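For instance, here is a minimal sketch of acting on one of those spans in Python. It assumes the spans are zero-based character offsets into the raw file, with an exclusive end index; that matches the sample output above but isn't formally specified.

# Read the letter that the tool was run against
with open("./example_letters/letterofRecW") as f:
    text = f.read()

# [516-527] was flagged above as 'willingness'; slice that span out
start, end = 516, 527
print(text[start:end])  # should print the flagged word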
The tool can also be run as a REST server in order to operate on text sent from a front-end — for example, our client-side website. To run the server, run the following:
genderbias-server
This will start a Flask server listening on port 5000.
To use this server, send a POST request to the /check endpoint, with a JSON body of the following form:
{
    "text": "My text goes here"
}
For example, in Python, using requests:
import requests

response = requests.post(
    "http://localhost:5000/check",
    headers={"Content-Type": "application/json"},
    json={"text": "this is my text"},
)
print(response.json())
The response is JSON of the form:
{
    "issues": List[genderbias.Issue],
    "text": <the same text you sent, for reference>
}
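Putting the pieces together, here is a rough sketch that sends one of the example letters to a locally running server and prints each reported issue. The precise fields of a serialized genderbias.Issue aren't documented here, so the sketch prints each issue as returned.

import requests

# Read one of the example letters shipped with the repository
with open("./example_letters/letterofRecW") as f:
    letter = f.read()

# POST it to the /check endpoint of the local server
response = requests.post(
    "http://localhost:5000/check",
    headers={"Content-Type": "application/json"},
    json={"text": letter},
)

# Print each issue as returned by the server
for issue in response.json()["issues"]:
    print(issue)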
Mollie is a medical student and neuroscientist who would like to make the world a better place.
Development of this project started in 2018 and is mentored by Jason as part of Mozilla Open Leaders.
So glad you asked! WooHoo!
Help in any way you can!
We need expertise in coding, web design, program development, documentation, and technical writing. We’re using Python for the text analysis. I’ve created issues around different rules/signals to search for in letters. Example letters can be found in the example_letters directory of this repository.
If you think you can help in any of these areas or in an area I haven’t thought of yet, please check out our contributors’ guidelines and our roadmap.
The goal of this project is to promote equity, so we want to maintain a positive and supportive environment for everyone who wants to participate. Please follow the Mozilla Community Participation Guidelines in all interactions on and offline. Thanks!
If you want to report a problem or suggest an improvement, please open an issue at this GitHub repository. You can also reach Mollie by email (marmo@ohsu.edu) or on Twitter.
Studies on bias related to gender, race, and ethnicity show that letters and evaluations written for women and persons excluded because of their ethnicity or race (PEERs) differ systematically from those written for others; for example, they are more likely to emphasize effort over concrete accomplishment. For more background, see:
“Tackling gender bias in text”: https://drive.google.com/file/d/1--Gu_mcHssy7KLPePSvNExiOQt8Emmur/view
Biasly AI publications: https://sites.google.com/view/biaslyai/about/publications?authuser=0
“A tool to calculate gender-bias in recommendation letters based on an implementation by Thomas Forth”: https://github.com/slowe/genderbias (demo: https://slowe.github.io/genderbias/)
Resources on gender bias collected by Rebecca Kreitzer: http://www.rebeccakreitzer.com/bias/
“Avoiding gender bias in letter of reference writing”: https://csw.arizona.edu/sites/default/files/avoiding_gender_bias_in_letter_of_reference_writing.pdf
H/t https://twitter.com/pollyp1/status/1040646167305113600 for more discussion.
“Avoiding racial bias in reference writing”: https://aaberhe.files.wordpress.com/2019/03/avoiding-racial-bias-in-reference-writing.pdf