Paul Kiage committed on
Commit 79e1434
Parent: d501dd7

Update README.md

Files changed (1): README.md (+27 −1)
README.md CHANGED
@@ -4,10 +4,36 @@
 
 An interactive tool demonstrating credit risk modelling.
 
+Emphasis on:
+* Building models
+* Comparing techniques
+* Interpreting results
+
 ## Built With
 
 - [Streamlit](https://streamlit.io/)
 
+# Roadmap
+Models:
+- [ ] Add LightGBM
+- [ ] Add AdaBoost
+- [ ] Add Random Forest
+
+Visualization:
+- [ ] Add decision surface plot(s)
+
+Documentation:
+- [ ] Add getting started and usage documentation
+- [ ] Add documentation evaluating models
+- [ ] Add design rationale(s)
+
+Other:
+- [ ] Deploy app
+- [ ] Add CSV file data input
+- [ ] Add tests
+- [ ] Add test/code coverage badge
+- [ ] Add continuous integration badge
+
 # References
 
 ## Inspiration:
@@ -28,7 +54,7 @@ An interactive tool demonstrating credit risk modelling.
 > "(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons."
 
 [Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence](https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682)
-> High-risk AI systems will be subject to strict obligations before they can be put on the market:
+> "High-risk AI systems will be subject to strict obligations before they can be put on the market:
 >* Adequate risk assessment and mitigation systems;
 >* High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
 >* Logging of activity to ensure traceability of results;