Project author: msetzu

Project description:
A scoring system for explainability
Primary language: HTML
Project address: git://github.com/msetzu/rule-relevance-score.git
Created: 2019-05-31T21:05:56Z
Project community: https://github.com/msetzu/rule-relevance-score

License: MIT License



RRS ~ Rule Relevance Score

Explanations come in two forms: local, explaining a single model prediction, and global, explaining all model predictions. The Local to Global (L2G) problem consists in bridging these two families of explanations. Simply put, we generate global explanations by merging local ones.

You can find a more detailed explanation in the conference poster.

The algorithm

Local and global explanations are provided in the form of decision rules:

```
age < 40, income > 50000, status = married, job = CEO → grant loan application
```

This rule describes the rationale followed by an otherwise unexplainable model when granting the loan application of an individual younger than 40 years old, with an income above $50,000, married, and currently working as a CEO.
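As a rough illustration only (not the library's own representation), such a rule can be read as a conjunction of premises over feature values; a record is covered by the rule only if every premise holds:

```python
# Illustrative sketch: a decision rule as a conjunction of premises.
# The actual rule representation lives in models.py; the function below is hypothetical.
def covers(record: dict) -> bool:
    """Return True if the record satisfies every premise of the example rule."""
    return (
        record["age"] < 40
        and record["income"] > 50000
        and record["status"] == "married"
        and record["job"] == "CEO"
    )

# A covered record receives the rule's outcome: grant the loan application.
print(covers({"age": 35, "income": 60000, "status": "married", "job": "CEO"}))  # True
```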


Setup

```shell
git clone https://github.com/msetzu/rule-relevance-score/
cd rule-relevance-score
```

Dependencies are listed in requirements.txt; a virtual environment is advised:

```shell
mkvirtualenv rule-relevance-score  # optional but recommended
pip3 install -r requirements.txt
```

Running the code

Python interface

```python
from tensorflow.keras.models import load_model
from numpy import genfromtxt, float as np_float
import logzero

from scoring import Scorer
from models import Rule

# Set log profile: INFO for normal logging, DEBUG for verbosity
logzero.loglevel(logzero.logging.INFO)

# Load black box: optional! Use black_box = None to use the dataset labels
black_box = load_model('data/dummy/dummy_model.h5')

# Load data and header
data = genfromtxt('data/dummy/dummy_dataset.csv', delimiter=',', names=True)
features_names = data.dtype.names
tr_set = data.view(np_float).reshape(data.shape + (-1,))

# Load local explanations
local_explanations = Rule.from_json('data/dummy/dummy_rules.json', names=features_names)

# Create a RRS instance for `black_box`
scorer = Scorer('rrs', oracle=black_box)

# Fit the model
coverage_weight = 1.
sparsity_weight = 1.
scores = scorer.score(local_explanations, tr_set,
                      coverage_weight=coverage_weight,
                      sparsity_weight=sparsity_weight)
```

You can filter rules with the static method scoring.Scorer.filter():

```python
# Filter by score percentile
filtered_rules = Scorer.filter(rules, scores, beta=beta)
# Filter by score
filtered_rules = Scorer.filter(rules, scores, alpha=alpha)
# Filter by maximum length
filtered_rules = Scorer.filter(rules, scores, max_len=max_len)
# Filter by maximum number
filtered_rules = Scorer.filter(rules, scores, gamma=gamma)
# Filter by any combination
filtered_rules = Scorer.filter(rules, scores, beta=beta, gamma=gamma)
filtered_rules = Scorer.filter(rules, scores, alpha=alpha, max_len=max_len)
```
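As a concrete usage sketch (the values below are arbitrary; gamma and max_len behave as described in the command-line options further down):

```python
# Illustrative values: keep at most 10 of the scored rules and prune longer ones.
# `local_explanations` and `scores` are the objects computed in the snippet above.
filtered_rules = Scorer.filter(local_explanations, scores, gamma=10, max_len=4)
```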

You can directly validate the model with the built-in function validate:

```python
from math import inf

from evaluators import validate

validation_dic = validate(rules, scores, oracle=oracle,
                          vl=tr_set,
                          scoring='rrs',
                          alpha=0, beta=beta,
                          gamma=len(rules) if gamma < 0 else int(gamma),
                          max_len=inf if max_len <= 0 else max_len)
```
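The command-line interface (below) stores the same kind of validation dictionary as JSON; programmatically, a minimal sketch with the standard library, assuming validation_dic only holds JSON-serializable values:

```python
import json

# Persist the validation results; numpy scalars may need casting to plain
# Python types before dumping, depending on the dictionary's contents.
with open('validation.json', 'w') as f:
    json.dump(validation_dic, f, indent=2)
```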

Command line interface

You can run RRS from the command line with the following syntax:

```shell
python3 api.py $rules $TR $TS --oracle $ORACLE --name $NAME
```

where $rules is the rules file; $TR and $TS are the training and (optional) validation sets; $ORACLE is the path to your black box (in hdf5 format); and $NAME is the name of the output file containing the JSON validation dictionary.
If you want to check the expected formats, the folder data/dummy/ contains dummy input examples, used in the invocation sketched below.
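For instance, a run on the bundled dummy data might look like this (the output name dummy_run is arbitrary, and the optional validation set is omitted):

```shell
python3 api.py data/dummy/dummy_rules.json data/dummy/dummy_dataset.csv \
    --oracle data/dummy/dummy_model.h5 --name dummy_run
```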
You can customize the run with the following options:

  • -o/--oracle path to the black box to use, if any.
  • -m/--name base name for the log files.
  • -s/--score type of scoring function to use. rrs, coverage, and fidelity are valid functions.
  • -p/--wp coverage weight. Defaults to 1.
  • -p/--ws sparsity weight. Defaults to 1.
  • -a/--alpha hard pruning threshold. Defaults to 0.
  • -b/--beta percentile pruning threshold. Defaults to 0.
  • -g/--gamma pruning factor: keep at most gamma rules.
  • -l/--max_len pruning factor: keep rules shorter than max_len.
  • --debug=$d to set the logging level.

This documentation is also available by running the script with --help:
```shell script
$ python3 api.py --help
Usage: api.py [OPTIONS] RULES TR

Options:
  -vl TEXT             Validation set, if any.
  -o, --oracle TEXT    Black box to predict the dataset labels, if any.
                       Otherwise use the dataset labels.
  -m, --name TEXT      Name of the log files.
  -s, --score TEXT     Scoring function to use. Available functions are 'rrs'
                       (which includes fidelity scoring) and 'coverage'.
                       Defaults to rrs.
  -p, --wp FLOAT       Coverage weight. Defaults to 1.
  -p, --ws FLOAT       Sparsity weight. Defaults to 1.
  -a, --alpha FLOAT    Score pruning in [0, 1]. Defaults to 0 (no pruning).
  -b, --beta FLOAT     Prune the rules under the beta-th percentile. Defaults
                       to 0 (no pruning).
  -g, --gamma FLOAT    Maximum number of rules (>0) to use. Defaults to -1
                       (use all).
  -l, --max_len FLOAT  Length pruning in [0, inf]. Defaults to -1 (no
                       pruning).
  -d, --debug INTEGER  Debug level.
  --help               Show this message and exit.
```

Run on your own dataset

RRS has a strict format on input data: it accepts tabular datasets and binary classification tasks. You can find a dummy example for each of these formats in /data/dummy/.

Rules [/data/dummy/dummy_rules.json]

Local rules are to be stored in JSON format:

```json
[
  {"22": [30.0, 91.9], "23": [-Infinity, 553.3], "label": 0},
  ...
]
```
Each rule in the list is a dictionary with an arbitrary number (greater than 2) of premises. The rule prediction ({0, 1}) is stored under the key label. Premises on features are stored according to the feature's position and bounds: in the example above, "22": [30.0, 91.9] indicates the premise "feature number 22 has a value between 30.0 and 91.9".
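To make the encoding concrete, here is a minimal, illustrative sketch (not part of the library; exact boundary handling may differ from the library's own) of how a rule in this format could be checked against a data row:

```python
import json
import numpy as np

def rule_covers(rule: dict, row: np.ndarray) -> bool:
    """Illustrative check: every premise `feature_index: [lower, upper]` must hold."""
    return all(lower <= row[int(feature)] <= upper
               for feature, (lower, upper) in rule.items()
               if feature != "label")

# Python's json module parses the -Infinity bounds used in the rules file.
with open('data/dummy/dummy_rules.json') as f:
    rules = json.load(f)

# Rules covering the first training record (tr_set loaded as in the Python interface above).
covering = [rule for rule in rules if rule_covers(rule, tr_set[0])]
```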

Black boxes [/data/dummy/dummy_model.h5]

Black boxes (if used) are to be stored in hdf5 format when given through the command line. If given programmatically, it suffices that they implement the Predictor interface:

```python
from abc import abstractmethod

class Predictor:
    @abstractmethod
    def predict(self, x):
        pass
```

When called to predict a numpy.ndarray x, the predictor shall return its predictions as a numpy.ndarray of integers.
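As an illustrative sketch (not part of the library), a Keras model such as the dummy black box could be adapted to this interface with a thin wrapper; the wrapper below assumes the model outputs one probability per instance, which may not match your own black box:

```python
import numpy as np
from tensorflow.keras.models import load_model

class KerasPredictor(Predictor):
    """Hypothetical adapter: exposes a Keras model through the Predictor interface."""
    def __init__(self, model):
        self.model = model

    def predict(self, x):
        # Assumes one probability per instance; threshold at 0.5 to return
        # integer labels, as required by the interface above.
        probabilities = self.model.predict(x)
        return (probabilities.ravel() > 0.5).astype(int)

oracle = KerasPredictor(load_model('data/dummy/dummy_model.h5'))
```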

Training data [/data/dummy/dummy_dataset.csv]

Training data is to be stored in comma-separated CSV format with feature names as header. The classification labels should be stored under the feature name y.
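For illustration only (made-up values and column names, echoing the loan example above), a file in the expected layout might look like:

```csv
age,income,y
35,60000,1
58,42000,0
```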


Docs

You can find the software documentation in the /html/ folder and a conference poster on RRS can be found here.

The work has been published as a conference poster at the XKDD 2019 workshop, held at the joint ECML/PKDD 2019 conference.