2013, https://inria.hal.science/hal-01492820/file/978-3-642-40779-6_16_Chapter.pdf
An Equivalent Access Based Approach for Building Collaboration Model between Distinct Access Control Models
Xiaofeng Xia (email: xiaofeng.xia@h-its.org)
Keywords: collaboration model, distinct access control models, equivalent access, mapping set, linking set

1 Introduction
When several organizations want to collaborate, they can share resources with each other so that common tasks can be completed. The collaboration pattern discussed in this paper is one in which, for the resources shared from the participating organization domains, the collaboration domain can have its own access control model. Practical security policy configurations tell us that security models and policies are not all-purpose. To protect their resources from unauthorized access, organization domains adopt different access control models, e.g. RBAC [START_REF] Sandhu | Role-based access control models[END_REF], mandatory access control (MAC) [START_REF] Bell | Secure computer systems: Mathematical foundations and model[END_REF], and discretionary access control (DAC) [START_REF] Osborn | Configuring Role-Based Access Control to Enforce Mandatory and Discretionary Access Control Policies[END_REF]. These models have different model entities related to permissions, which we call core model semantics. For example, the RBAC model constructs roles, while the MAC model has security labels. Current approaches, e.g. in [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF] and [START_REF] Joshi | Secure Interoperation in a Multidomain Environment Employing RBAC Policies[END_REF], focus on RBAC: they assume that all organizations adopt the RBAC model and then build a global access control policy on role mappings. A global policy can be generated because all domains share the same core model semantics; however, if the domains use distinct access control models, role mapping and a global policy cannot be built on these models. Organizational collaboration also introduces the IDRM problem of [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF], which is to find the minimal role set covering the permissions requested by the collaboration domain. This problem can be generalized to distinct models and defined as finding an "appropriate" set of core model semantics covering a requested permission set. The third problem for organizational collaboration is constraint transformation. As the model entities are mapped between domains, from the perspective of the participators there are constraints that must also be held in the collaboration domain, e.g. the static separation of duty constraint (SSD) of the RBAC model [START_REF]Incits: ANSI INCITS 359-2004 for information technology role based access control[END_REF]. The contributions of this paper are therefore: (1) building a collaboration model between distinct access control models; (2) the necessary algorithms for figuring out an appropriate set of core model semantics for a requested permission set; (3) constraint transformation between distinct models.
The rest of this paper is organized as follows: Section 2 describes related work, Section 3 presents a new collaboration model based on equivalent access, and Section 4 illustrates the supporting algorithms and methods for building the collaboration model. Our testing and comparison results for the algorithms are presented in Section 5. Finally, Section 6 concludes the paper.

2 Related Work
The RBAC model [START_REF] Sandhu | Role-based access control models[END_REF] [START_REF] Sandhu | The ARBAC97 Model for Role-Based Administration of Roles[END_REF] provides role-permission management, role hierarchy, and separation of duty constraints. For Lattice Based Access Control (LBAC), or the MAC model [START_REF] Sandhu | Lattice based access control[END_REF], the information flow is restricted by constraints on security labels and clearances. The DAC model [START_REF] Osborn | Configuring Role-Based Access Control to Enforce Mandatory and Discretionary Access Control Policies[END_REF] emphasizes owning relationships on resources and permission delegation as the way of authorization. In the past years RBAC has been the most studied model because it conforms to organization structure. A context-dependent RBAC model [START_REF] Wolf | Context-Dependent Access Control for Web-Based Collaboration Environments with Role-Based Approach[END_REF] has been proposed to enforce access control in web-based collaboration environments. Organization based access control (OrBAC) [START_REF] Kalam | Organization based access control[END_REF] is constructed from an RBAC model as the concrete level, and OrBAC then refers to common organizational contextual entities as the abstract level. Based on OrBAC, PolyOrBAC [START_REF] Kalam | Access control for collaborative system: a web services based approach[END_REF] has been proposed to implement collaboration between organizations having an OrBAC model in their domains. It takes advantage of abstract organizational entities and Web Services mechanisms, e.g. UDDI, XML, and SOAP, to enforce a global framework of collaboration for the engaging organization domains. Role mapping [START_REF] Joshi | Secure Interoperation in a Multidomain Environment Employing RBAC Policies[END_REF] helps one domain obtain access to resources from other domains by role inheritance across domains. A global access control policy is specified to merge the engaging organizations' local policies. This approach also assumes that all domains adopt the RBAC model. Building on these contributions on RBAC and collaboration, we focus on organization domains with distinct access control models. The other work on collaboration, or inter-domain operation, concerns the IDRM problem. The greedy-search based algorithm proposed in [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF] is an approximate solution to the IDRM problem; however, simple greedy search has the local-maxima problem, and therefore a probability based greedy-search algorithm is used to avoid local maxima and obtain a better approximate solution. In Section 5 we discuss the problems of these approaches in comparison with ours. To improve the algorithms, [START_REF] Chen | Inter-domain role mapping and least privillege[END_REF] presents another idea on greedy search, noting that the assumptions of the IDRM problem should be more complex and practical.
The IDRM problem can also be reduced to a weighted set cover problem instead of the minimal set cover problem of [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF]. However, the algorithm of [START_REF] Chen | Inter-domain role mapping and least privillege[END_REF] cannot avoid local maxima either.

3 Equivalent Access and Collaboration Model

3.1 Preliminary definitions
An organization domain or collaboration domain D should contain part of the following entity sets and relations:
- User, Resource, and Action: the sets of system users, resources, and operations on resources;
- T: the set of Tag objects, e.g. roles or security labels. As we construct the DAC model in a role based way [START_REF] Osborn | Configuring Role-Based Access Control to Enforce Mandatory and Discretionary Access Control Policies[END_REF], we view the core model semantics as Tag objects, i.e. both role and label objects can be instantiated from a Tag class which has at least two attributes: <type, name>;
- Permission ⊆ Resource × Action: a set of permissions;
and some predicates and functions:
- Reslabel(Resource, T): the assignment relation between a resource and a security label;
- mayAccess(User, Resource, Action): a common predicate indicating an access request by some user to perform an operation on some resource;
- Usertag: User × T: the relation indicating that certain Tag objects (roles or labels) are assigned to some user;
- PS: T → Permission: the permissions assigned to or held by a Tag object;
- PR: Permission → T: the tags holding the current permission.

3.2 Collaboration model based on equivalent access
Any access request of a user to some resource can be enforced by different access control models. We introduce "equivalent access", which relates two domains' access control policies. Since in organizational collaborations the preliminary goal is to find appropriate resources, equivalent access means that a user's access to some resource under the collaboration domain policy has the same evaluation result as the corresponding access under the participating domain policy. Equivalent access should be the preliminary goal of organizational collaboration, i.e. the process of constructing a collaboration is the process of finding equivalent accesses for the required resources in the participating domains. The collaboration scenario discussed here consists of a collaboration domain, denoted D_c, and a series of original domains, i.e. (D_c; D_1, ..., D_n), n ≥ 2. Each domain applies its own access control model and policy. For a collaboration group (D_c; D_1, ..., D_n), there exist two sorts of entity relations between the collaboration domain (D_c) and the other participating domains (D_i, i ∈ [1, n]): one is the entity mapping set, the other is the entity linking set. We denote the former by Q; it simply maps the entities of D_i onto those of D_c. The mapping means that any resource e_0 ∈ D_i has a corresponding virtual resource e_0' ∈ D_c. The mappings are classified into "user", "resource" and "action" mappings, i.e. {ζ_u, ζ_e, ζ_a}. The other relation is the entity linking set, denoted L, which needs to be computed and is introduced below.
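To make the preliminary definitions concrete, the following is a minimal sketch of how these entity sets and predicates could be represented; the class and attribute names are illustrative choices, not taken from the paper, and policy evaluation is reduced to a simple permission lookup.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tag:
    """Core model semantics as a Tag object with <type, name>, e.g. a role or a label."""
    type: str   # "role" or "label"
    name: str

# A permission is a (resource, action) pair, i.e. an element of Resource x Action.

class Domain:
    """One organization (or collaboration) domain with its policy relations."""

    def __init__(self):
        self.usertag = {}    # Usertag: user -> set of Tag objects assigned to the user
        self.ps = {}         # PS: Tag -> set of permissions held by the tag
        self.reslabel = {}   # Reslabel: resource -> security label Tag (for MAC domains)

    def pr(self, permission):
        """PR: the set of tags holding the given permission."""
        return {t for t, perms in self.ps.items() if permission in perms}

    def may_access(self, user, resource, action):
        """mayAccess(User, Resource, Action): granted if any tag of the user holds it."""
        return any((resource, action) in self.ps.get(t, set())
                   for t in self.usertag.get(user, set()))
```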
Definition 1. For a collaboration group (D_c; D_1, ..., D_n), consider any participating domain D_i, i ∈ [1, n], and its mapping set Q. Let u ∈ User_c and e ∈ Resource_c, as well as u' ∈ User_i and e' ∈ Resource_i, be such that <u, u'> ∈ ζ_u^<Di,Dc> and <e, e'> ∈ ζ_e^<Di,Dc>. We say that the access by u to e is equivalent to the access by u' to e' under the two policies P_c and P_i if, for the substitutions θ_Dc = {U_x/u, E_x/e, A_x/read} and θ_Di = {U_x/u', E_x/e', A_x/read'}:

P_c ⊨ mayAccess(U_x, E_x, A_x)θ_Dc  ∧  P_i ⊨ mayAccess(U_x, E_x, A_x)θ_Di    (1)

The equivalent access is then denoted as:

mayAccess(U_x, E_x, A_x)(θ_Dc, θ_Di) |_<Pc,Pi>    (2)

Definition 2. The elements of the entity linking set are pairs of related "Tag" objects, respectively from the collaboration domain (D_c) and the original domain (D_i). When two substitutions towards their own policies P_c and P_i yield an equivalent access, let S_Dc denote the set of "Tag" objects that satisfy the request under θ_Dc and S_Di the set of those that satisfy it under θ_Di; the entity linking set L_<Di,Dc> is then defined by the following rule:

L_<Di,Dc> = {<r, l> | <r, l> ∈ S_Dc × S_Di}    (3)

Definition 3. For a collaboration group (D_c; D_1, ..., D_n), where every domain's model has the form {D_R, D_M, D_S}, consider any original domain D_i, i ∈ [1, n], and its mapping set Q_<Dc,Di> with D_c. The collaboration model Γ of the group is then defined, from the above definitions of an organization domain, as a union of pairs:

Γ = ⋃_{i=1}^{n} <Q_<Dc,Di>, L_<Dc,Di>>    (4)
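The definitions above can be illustrated with a small sketch that builds on the hypothetical Domain class introduced earlier; the function names and the hard-coded "read" action are illustrative, and a real policy evaluation would of course depend on the concrete access control model of each domain.

```python
def equivalent_access(dc, di, zeta_u, zeta_e, u, e, action="read"):
    """Definition 1 (sketch): the request and its mapped counterpart must be
    granted by the collaboration policy and the participating policy alike."""
    u2, e2 = zeta_u[u], zeta_e[e]          # entity mappings ζ_u and ζ_e
    return dc.may_access(u, e, action) and di.may_access(u2, e2, action)

def linking_set(dc, di, zeta_u, zeta_e, u, e, action="read"):
    """Definition 2 (sketch): pair the tags that satisfy the request in D_c
    with the tags that satisfy the mapped request in D_i (S_Dc x S_Di)."""
    if not equivalent_access(dc, di, zeta_u, zeta_e, u, e, action):
        return set()
    s_dc = {t for t in dc.usertag.get(u, set())
            if (e, action) in dc.ps.get(t, set())}
    s_di = {t for t in di.usertag.get(zeta_u[u], set())
            if (zeta_e[e], action) in di.ps.get(t, set())}
    return {(r, l) for r in s_dc for l in s_di}
```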
4 Building Collaboration Model Between Distinct Access Control Models
In this section we analyze the problems of building a collaboration model, then introduce the algorithms we use to build the model, as well as the methods for transferring constraints into the collaboration domain. According to the definition of our new collaboration model, there are basically three steps to enforce: (1) finding equivalent accesses; (2) minimizing the scale of disclosure of the organization's policy information involved in the collaboration; (3) transferring domain constraints into the collaboration domain by configuring them on the policy entities of the collaboration domain.

RBAC as participator's model
Minimal role set covering requested permissions
A greedy-search based algorithm (GSA) is proposed in [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF] to obtain a solution to the IDRM problem, which is NP-complete. Basically, the algorithm takes each candidate role whose permissions cover as many of the target permissions as possible and puts this role into the solution set. [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF] also provides a probabilistic greedy-search algorithm (IGSA-PROB) which executes the candidate-role handling with a probability p (near 1). A greedy-search based algorithm, however, does not guarantee to find the optimal solution R'; it is an H_n-approximation algorithm for the IDRM problem. The IDRM approaches proposed in [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF] hence have the following problems: (1) the GSA algorithm is non-terminating and may not find any solution; (2) the GSA algorithm has the local-maxima problem; (3) the IGSA-PROB algorithm searches with probability p, but the local-maxima problem still cannot be effectively avoided; (4) the inheritance hierarchy of roles could be exploited for the IDRM problem. The GSA and IGSA-PROB algorithms select as candidates only those roles whose permission sets are subsets of the required permission set, which is what makes the algorithms non-terminating. We build the collaboration model from entity mapping and linking sets. The entity mapping set ensures that only requests involving mapped entities will be allowed, which means that even if a role r is linked in, only its mapped permissions will be allowed. This enables our algorithms to terminate. Towards solving the IDRM problem we propose three algorithms. Their input includes RQ, the requested permission set; R, the set of all roles; P, the set of all permissions; and R_S, the set of initially selected roles; their output is TS, the set of candidate roles. They are specified formally in the appendix; a simplified sketch of the greedy selection is given after the step-by-step descriptions below.

I. Improved GSA algorithm (IGSA)
(1) Find all roles in R whose permission sets intersect the requested set RQ and put them into R_S.
(2) For a role r in R_S, if r's permission set covers a larger part of RQ than that of any other role in R_S, put r into the candidate set TS, remove r from R_S, and remove the permissions covered by r from RQ.
(3) If RQ is not empty, go to step (2).

II. Improved algorithm for local maxima (IGSAL)
(1) For each permission, find those permissions that are assigned to a single role r.
(2) For the other roles in R, remove the permissions that are assigned to them but are also assigned to the role r.
(3) Compare each role r' with all of the other roles; if one of the permissions of r' belongs to another role r* and r* has more permissions than r', remove all of the overlapping permissions from r'.
(4) If all permissions of r' have been removed, also remove r' from R.
(5) Perform the steps of Algorithm I to compute the candidate set TS.

III. Algorithm for hierarchical roles (HCHY)
(1) Initially, put the roles that have no parent roles into a set S1, remove them from their child roles' parent lists, then create a new set S2.
(2) For each role r in R that has no parent roles and does not belong to S1 or S2: if the set of convergent classes Converg_Classes is empty, create a new convergent class and add r to it; if Converg_Classes is not empty, check every convergent class C in it, and if r belongs to the child-role set of any role in C, add r to C.
(3) Remove r from the parent-role set of each child role of r, and add r to S2.
(4) Create a new set S3; for S3 and each permission p of P, create another new set S4; for each role r' that holds p, if there is a convergent class C containing r', add r' to S4.
(5) After checking all roles holding p, add S4 to S3; create new sets S5 and S6.
(6) By a recursive process "recurse", compute the combinations of the sets in S3 and return the minimal combination result.
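The following is a minimal sketch of the greedy selection underlying Algorithm I (IGSA); the data structures and names are illustrative rather than taken from the paper, and the IGSAL preprocessing and the role hierarchies of Algorithms II and III are omitted.

```python
def igsa_greedy_cover(requested, role_perms):
    """Simplified greedy cover in the spirit of IGSA.

    requested:  set of requested permissions RQ, each a (resource, action) pair
    role_perms: dict mapping each role to its permission set PS(r)
    Returns TS, a set of candidate roles covering the coverable part of RQ.
    """
    # Step (1): keep only roles whose permissions intersect RQ,
    # restricted to the mapped (requested) permissions.
    rs = {r: perms & requested for r, perms in role_perms.items() if perms & requested}
    rq = set(requested)
    ts = set()
    # Steps (2) and (3): repeatedly pick the role covering most of the remaining RQ.
    while rq and rs:
        best = max(rs, key=lambda r: len(rs[r] & rq))
        if not rs[best] & rq:
            break   # the remaining permissions are not held by any role
        ts.add(best)
        rq -= rs[best]
        del rs[best]
    return ts

# Illustrative example (hypothetical roles and permissions):
roles = {"clerk": {("doc", "read")},
         "auditor": {("doc", "read"), ("log", "read")}}
print(igsa_greedy_cover({("doc", "read"), ("log", "read")}, roles))  # {'auditor'}
```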
Constraints of participating domain
Figuring out the minimal role set covering the requested permissions is the first step of the collaboration process; in addition, some RBAC constraints must also be held in the collaboration domain. Here we focus on the static separation of duty constraint (SSD), which is defined by the following statements, where assigned_user(r) denotes the set of users holding the role r and assigned_tag(u) denotes the set of roles assigned to the user u [START_REF]Incits: ANSI INCITS 359-2004 for information technology role based access control[END_REF]:

- SSD ⊆ (2^R × N), where R is the set of roles and N is the set of natural numbers;
- ∀<rs, n> ∈ SSD ∀t ⊆ rs. |t| ≥ n → ⋂_{r∈t} assigned_user(r) = φ

Since rs is the role set of an SSD element, the number of possible "n-tuple" subsets of rs is C(|rs|, n), i.e. the number of ways of picking n elements out of |rs| elements. For each such possibility we define a set s_k of all involved permissions, so that C(|rs|, n) sets are defined by the following statement, where P_SSD denotes the permission sets derived from the SSD constraint elements of the participating domain:

∀r_1, r_2, ..., r_n ∈ rs. s_k = ⋃_{i=1}^{n} PS(r_i),  with P_SSD = {s_1, s_2, ..., s_k}, k = C(|rs|, n)

Each member of P_SSD is mapped to a corresponding permission set s_i, i ∈ [1, k], in the collaboration domain, and these permission sets accordingly form P_SSD in the collaboration domain. When the participating domain adopts the RBAC model and the collaboration domain has an RBAC or DAC model (our DAC model is built in a "role" based way), three new constraints have to be set for the collaboration domain's policy. In the collaboration domain: (1) none of the "Tag" objects may hold the whole permission set related to any of the SSD elements; (2) no user's permissions may cover the whole permission set related to an SSD element; (3) if the collaboration domain has an RBAC model, new SSD constraints are configured from the role sets that hold the requested permissions. The three constraints are formally defined as follows:

<1> ∀s_i ∈ P_SSD ∀t ∈ T_Dc. s_i ⊄ PS(t)
<2> ∀s_i ∈ P_SSD ∀u ∈ U_Dc. s_i ⊄ ⋃_{l'∈assigned_tag(u)} PS(l'), where g = |s_i| ≥ 1 and s_i = {p_j | j ∈ [1, g]}
<3> ∀<rs_c, m> ∈ SSD_si ∀t' ⊆ rs_c. |t'| ≥ m → ⋂_{l∈t'} assigned_user(l) = φ, where rs_c ⊆ T_Dc ∧ o_d ⊆ rs_c ∧ m = |o_d| ∧ SSD_si = {o_d | o_d = {r_s^1, r_s^2, ..., r_s^g}}

When the collaboration domain has a MAC model, only constraint <1> has to be held, since in the MAC model each user holds exactly one security label.

MAC as participator's model
If the participating domain adopts a mandatory access control model, then each resource has exactly one label. Once the requested resources and operations are confirmed, these resources can simply be mapped onto the different security labels to which they are assigned in the participating domain. In this section we discuss the Bell-LaPadula (BL) model [START_REF] Bell | Secure computer systems: Mathematical foundations and model[END_REF][6] in the collaboration; the other model, Biba, concerns integrity and is dual to the BL model. The MAC model assigns to each object exactly one security label and to each user or subject exactly one security clearance. Compared with the scenario where RBAC is the participator's model, we only need to find the labels of the resources occurring in the requested permissions; these labels then provide the equivalent accesses. To prevent disallowed information flow in the collaboration domain, additional constraints must be added to the collaboration domain's policies. Since finding the labels of resources is trivial, we only give the definition of the newly created constraint in the collaboration domain. Assume a collaboration model Γ, one of the participating domains D_i, and the collaboration domain D_c, defined as in Section 3.

Single label constraint:
<1> P_r = {<e, a> | ∀e ∈ Resource_Dc ∃e' ∈ Resource_Di. r ∈ T_Dc ∧ <e', a> ∈ PS(r) ∧ <e', e> ∈ ζ_e}, with P_r ⊆ RQ ∧ |{l | ∀<e, a> ∈ P_r ∧ Reslabel(e, l)}| = 1
<2> ∀u ∈ U_Dc ∀l, r ∈ T_Dc. Usertag(u, l) ∧ Usertag(u, r) → (l = r)
<3> T' = {l | ∀u ∈ U_Dc. Usertag(u, l)}, and ∀l ∈ T' ∃t ∈ T_Di. P_l ⊆ RQ ∧ {t' | ∀<e, a> ∈ P_l ∧ Reslabel(e, t')} = {t}

In the collaboration domain, the information flow policy of the participating domain should be held. The single label constraint restricts the labels of the resources that are shared in the collaboration domain. Each "Tag" object may only be assigned permissions whose mapped entities in the participating domain carry the same security label. Each user or subject in the collaboration domain may hold either only one "Tag" object or multiple "Tag" objects that are assigned permissions related to the same security label. Therefore the above constraint is expressed by the formula <1> ∧ (<2> ∨ <3>).

DAC as participator's model
In a collaboration process, if the required permissions are provided by a participating domain with a DAC model, the delegation of these permissions is not considered in the collaboration domain, since only the access permissions are necessary and not the delegation permissions. In our DAC model definitions, a resource and its different operations form permissions, for which different roles are created. Each resource has an owner, who is assigned the "owner role" of the resource. The "owner role" inherits all of the permissions of the other relevant roles. The participating domain only needs to provide the basic roles that are related to the requested permissions.
Although our DAC model is built in a "role" based way, in a DAC model there are no high-level roles holding large numbers of permissions related to different resources. Thus the previous algorithms for finding a minimized role set for the requested permissions are not applied to the DAC model. For a participating domain with a DAC model, there are also no special constraints to be ensured in the collaboration domain.

5 Analysis on Algorithm Properties and Testing Results
We presented the algorithms IGSA, IGSAL, and HCHY for handling the minimal role set problem in Section 4. Our collaboration model Γ verifies the entity mapping and linking sets, which makes it acceptable that a selected role introduces non-required permissions: only the collaboration-relevant permissions, that is, the resources and operations kept as entity mappings in the collaboration model Γ, can be allowed for access. As discussed in [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF], the GSA has the local-maxima problem, which can be addressed by GSA-PROB (the probability based greedy-search algorithm). By analyzing the problem we found that the permission assignment relationship, i.e. one permission being assigned to multiple roles, causes the local-maxima problem. Our IGSAL algorithm removes this "multi-inheritance" from the role-permission relation, so that the greedy search can then be applied to the resulting roles and permissions. To describe the complexity characteristics of the three algorithms, we assume that the size of the requested permission set is N. Compared with the IGSA and GSA-PROB algorithms, IGSAL spends computation on preprocessing the role-permission relations and then starts a greedy search to obtain a solution. Regarding efficiency, IGSAL has a nested loop for checking all of the requested permissions, which gives a complexity of O(N^2). Since the complexity of the greedy search used by IGSA and GSA-PROB is O(ln N) [START_REF] Du | Supporting authorization query and inter-domain role mapping in presence of hybrid role hierarchy[END_REF] and the second step of IGSAL is also a greedy search, the final complexity of IGSAL is still O(N^2). By randomly generating permissions and assignment relationships, a test was run with 100 roles, between 43,000 and 50,000 permissions, and requested permission sets ranging in size from 1,000 to 15,000. Table 1 shows that IGSAL is less efficient than IGSA, but more precise. As mentioned above, the role hierarchy can be used to provide a minimal role set for the requested permissions. The collaboration model ensures that only permissions related to mapped and linked entities can be accessed, even if a high-level role is involved that has more permissions than requested. Therefore, from one or multiple role hierarchies in an organization domain, one can find the powerful roles that cover as many of the requested permissions as possible. The hierarchies discussed in Section 4 are called convergent classes. The HCHY algorithm first computes the convergent classes of the roles contained in an access control model, which takes time O(C_1). C_1 denotes a constant cost for computing the convergent classes, since the roles and role hierarchies of a domain are determined in advance; it is only necessary to compute them once. The second step of the HCHY algorithm is to process the requested permissions, which takes time O(N).
Finally, we need to figure out, by a recursive process, the minimal set of roles covering the requested permissions; this is only related to the number of roles, hence the complexity of this process varies with the number of involved role hierarchies, which we denote C_2. The total time complexity of HCHY on the requested permissions is O(N) + C_1 + C_2. From Table 2 we can see that HCHY is faster than IGSA. An organization domain with an RBAC model adopts either a flat role structure or a hierarchical role structure; our algorithms IGSA, IGSAL, and HCHY can handle and make use of both of these role structures.

6 Conclusion
In this paper we handle three problems of organizational collaboration: (1) a secure collaboration is built between domains with distinct access control models; (2) finding an "appropriate" set of core model semantics covering a requested permission set; (3) transforming constraints between organization and collaboration domains. For the first problem we present an equivalent access based approach and introduce a mediator-involved collaboration pattern. New algorithms are in turn proposed for the IDRM problem based on flat and hierarchical role structures. Then some new constraints are presented for the third problem. Finally, we analyze our algorithms and present testing results and a comparison with existing approaches. The collaboration pattern with a "mediator" works both when there is and when there is not a domain access control model in the collaboration. The access control policies of the participating domains are respected. In future work, we will implement the mediator role, the collaboration model, and the transformed constraints in XACML.
Table 1. Comparison of IGSA and IGSAL on efficiency

| Role size | Perm size | Requested perms | Time consuming (IGSA / IGSAL) | Solution size (IGSA / IGSAL) |
|-----------|-----------|-----------------|-------------------------------|------------------------------|
| 100 | 41613 | 10^3 | 71 / 5334 | 80 / 78 |
| 100 | 45807 | 2 × 10^3 | 79 / 14549 | 90 / 87 |
| 100 | 46055 | 3 × 10^3 | 90 / 23011 | 91 / 91 |
| 100 | 43696 | 4 × 10^3 | 104 / 31864 | 93 / 89 |
| 100 | 45252 | 5 × 10^3 | 113 / 43066 | 96 / 95 |
| 100 | 44701 | 6 × 10^3 | 121 / 54115 | 98 / 96 |
| 100 | 48191 | 7 × 10^3 | 193 / 81417 | 99 / 97 |
| 100 | 44323 | 8 × 10^3 | 143 / 84534 | 99 / 99 |
| 100 | 45879 | 9 × 10^3 | 221 / 109845 | 98 / 97 |
| 100 | 43841 | 10^4 | 164 / 110684 | 97 / 95 |
| 100 | 47209 | 11 × 10^3 | 243 / 161712 | 98 / 98 |
| 100 | 45088 | 12 × 10^3 | 266 / 161768 | 99 / 98 |
| 100 | 46269 | 13 × 10^3 | 269 / 188546 | 100 / 98 |
| 100 | 44134 | 14 × 10^3 | 300 / 197264 | 98 / 97 |
| 100 | 44036 | 15 × 10^3 | 299 / 217346 | 99 / 97 |
2013, https://inria.hal.science/hal-01492828/file/978-3-642-40779-6_23_Chapter.pdf
Sachar Paulus email: paulus@fh-brandenburg.de Nazila Gol Mohammadi Thorsten Weyer Trustworthy Software Development Keywords: Software development, Trustworthiness, Trust, Trustworthy software, Trustworthy development practices This paper presents an overview on how existing development methodologies and practices support the creation of trustworthy software. Trustworthy software is key for a successful and trusted usage of software, specifically in the Cloud. To better understand what trustworthy software applications actually mean, the concepts of trustworthiness and trust are defined and put in contrast to each other. Furthermore, we identify attributes of software applications that support trustworthiness. Based on this groundwork, some wellknown software development methodologies and best practices are analyzed with respect on how they support the systematic engineering of trustworthy software. Finally, the state of the art is discussed in a qualitative way, and an outlook on necessary research efforts and technological innovations is given. Introduction In the last years, many attempts have been made to overcome the issue of insecure and untrusted software. A number of terms have been used to catch the expectation on how "solid" a piece of software should be: secure, safe, dependable and trusted. Only in recent years literature related to (secure) software developments has seen the introduction of socio-technical systems (STS) (for more details, see [START_REF] Gol Mohammadi | An Analysis of Software Quality Attributes and Their Contribution to Trustworthiness[END_REF]). This concept allows to distinguish between the actual trust that users of software put into the functioning / delivery of the software in questions on the one side, and the trustworthiness of the software, i.e. properties (we call them attributes) that justify the trust that users put "into" the software. Whereas trust should primarily be the subject of the "maintenance" of the relationship between the user and the software in use ("in operations"), trustworthiness is primarily acquired during the development process of the software and can mostly only be "lost" later on. The software creation process, neither, has been addressed adequately both in theory and practice until recently regarding topics like trust, trustworthiness or similar, except either purely theoretical approaches (such as formal proofs or other forms of verification (e.g. [START_REF] Leveson | Safety analysis using Petri nets[END_REF]) or on a functional level only (using e.g. security patterns [START_REF] Schumacher | Security Patterns: Integrating Security and Systems Engineering[END_REF]). As such, an analysis of existing software development practices / methodologies with a specific view on trustworthiness is new to the field. This research has been carried out as part of the OPTET project, and the results will be presented in this paper in adequate detail. As an overview publication, it summarizes results of other very recent publications [START_REF] Gol Mohammadi | An Analysis of Software Quality Attributes and Their Contribution to Trustworthiness[END_REF]. This paper is structured as follows: in a first section, we define the notions of trust and trustworthiness and introduce the concept of trustworthiness attributes. 
The next section presents the analysis of the different development methodologies and practices in light of trustworthiness, followed by an analysis section on the state-of-the-art to summarize what is available today, and where there is more research needed to achieve the goal of trustworthy software. A last section summarize the research carried out and shortly indicates the future work planned in the OPTET project. Fundamentals In this section we introduce the two basic concepts "trust" and "trustworthiness" in order to be able to analyze how trustworthiness is addressed by different software development disciplines. Both concepts focus on the outcome of the STS but are different in the view of the trustor and trustee(s) perspective. In general, trust is the trustor's prior estimation that an STS will provide an appropriate outcome, while trustworthiness is the probability that the same STS will successfully meet all of the trustors' requirements. The balance between trust and trustworthiness is a core issue for software development because any imbalance (over-cautiousness or misplaced trust) could lead to serious negative impact, e.g. concerning the acceptance of the software by its (potential) users. The notion "Trust" We define trust in a system as a property of each individual trustor, expressed in terms of probabilities and reflecting the strength of their belief that engaging in the system for some purpose will produce an acceptable outcome. Thus, trust characterizes a state where the outcome is still unknown, based on each trustor's subjective perceptions and requirements. A stakeholder would decide to place trust on an STS if his trust criterion was successfully met; in other words, their perceptions exceed or meet its requirements. A trustor having engaged in a system for multiple transactions can (or will) update the current trust level of that STS by observing past outcomes. A presence of subjective factors in trust decisions means that two different trustors may have different levels of trust for the same STS to provide the same outcome in the future, even if they both have observed exactly the same system outcomes in the past. More specifically, subjective perceptions can depend on trustor attributes, which capture social factors such as age, gender, cultural background, level of experience with Internet-based applications, and view on laws. Subjective requirements, on the other hand, are represented by so-called trust attributes that quantify the anticipated utility gains or losses with respect to each anticipated outcome. Thus, relatively high levels of trust alone may not be adequate to determine a positive decision (e.g., if the minimum thresholds from requirements are even higher). Similarly, it is possible to engage in a system even if one's trust for an acceptable outcome is low (e.g., if the utility gains from this outcome are sufficiently high). 2.2 The notion "Trustworthiness" We regard trustworthiness as an objective property of the STS, based on the existence (or nonexistence) of appropriate properties and countermeasures that reduce the likelihood of unacceptable outcomes. A stakeholder (e.g., the system designer, a party performing certification) shall decide to what extent a system is trustworthy based on trustworthiness criteria. These criteria are logical expressions in terms of systems attributes, referred to as quality attributes. 
For example, trustworthiness may be evaluated with respect to the confidentiality of sensitive information, the integrity of valuable information, the availability of critical data, the response time or accuracy of outputs. Such quality attributes shall be quantified by measuring systems' (or individual components') properties and/or behavior. Objectivity in assessing trustworthiness for a particular attribute is based on meeting certain predefined metrics for this attribute or based on compliance of the design process for this attribute to our predefined system specifications. Thus, the trustworthiness of an STS may be evaluated compared to a target performance level, or the target may be its ability to prevent a threat from becoming active. Such issues are defined by the trustworthiness attributes that have a dual interpretation. Until recently, trustworthiness was primarily investigated from a security or loyalty perspective while assuming that single properties (certification, certain technologies or methodologies) of services lead to trustworthiness and even to trust in it by users. Compared to this approach, we reasonably assume that such a onedimensional approach is insufficient to capture all the factors that contribute to an STS's trustworthiness and instead we consider a multitude of attributes. In this paper, our definition for trustworthiness attributes reflects the design-time aspects. A trustworthiness attribute in this sense is a property of the system that indicates its capability to prevent potential threats to cause an unexpected and undesired outcome, e.g., a resilience assurance that it will not produce an unacceptable outcome. Trustworthiness of a software application In order to prove to be trustworthy, software applications could promise to cover a set of various quality attributes [START_REF] Gol Mohammadi | An Analysis of Software Quality Attributes and Their Contribution to Trustworthiness[END_REF], [START_REF] Mei | Internetware: A software paradigm for internet computing[END_REF] depending on their domain and target users. Trustworthiness should promise a wide spectrum including reliability, security, performance, and user experience. But trustworthiness is domain-and applicationdependent, and a relative attribute that means that if a system is trustworthy with respect to some Quality of Service (QoS) like performance, it would not necessarily be successful in being secure. Consequently, trustworthiness and trust should not be regarded as a single construct with a single effect, they are rather strongly context dependent in such a way that the criteria and measures for objectively assessing the trustworthiness of a software application are based on specific context properties, like the application domain and the user groups of the software. A broad range of literature has argued and emphasized the relation between QoS and trustworthiness (e.g. [START_REF] Neto | Untrustworthiness: A Trust-Based Security Metric[END_REF], [START_REF] San-Martín | A Cross-National Study on Online Consumer Perceptions, Trust, and Loyalty[END_REF], [START_REF]Quality Reference Model for SBA. S-Cube -European Network of Excellence[END_REF], [START_REF]Software Engineering -Product quality -Part: Quality Model[END_REF], [START_REF] Gomez | An Anticipatory Trust Model for Open Distributed Systems: From Brains to Individual and Social Behavior[END_REF]). Therefore, trustworthiness is influenced by a number of quality attributes other than just security-related. 
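The idea that trustworthiness criteria are logical expressions over measured quality attributes can be illustrated with a small sketch; the attributes, metric values, and thresholds below are invented for illustration and are not taken from the paper or the OPTET project.

```python
# Illustrative only: a trustworthiness criterion evaluated as a logical
# expression over measured quality attributes (values and thresholds invented).
measured = {
    "confidentiality": 0.97,    # e.g. fraction of sensitive flows protected
    "integrity": 0.98,
    "availability": 0.9990,     # e.g. measured uptime
    "response_time_ms": 180.0,  # e.g. 95th percentile response time
}

thresholds = {
    "confidentiality": 0.95,
    "integrity": 0.95,
    "availability": 0.9995,
    "response_time_ms": 250.0,  # upper bound: lower is better
}

def attribute_met(name):
    value, target = measured[name], thresholds[name]
    return value <= target if name == "response_time_ms" else value >= target

# The criterion itself: a conjunction over the selected quality attributes.
trustworthy = all(attribute_met(a) for a in thresholds)
print({a: attribute_met(a) for a in thresholds}, "->", trustworthy)
```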
In the context of this work we strictly adhere to the perspective of a to-be-constructed system, and therefore ignore potential trustworthiness attributes that become available at runtime at the earliest, like reputation or similar concepts representing other users' feedback. Additionally, some literature proposes quality attributes (e.g. authentication, authorization, data encryption or access control) that refer to means for achieving certain properties of a system. These are not attributes but means for establishing the corresponding attributes within the system. Such "attributes" were not within the scope of our analysis. In prior work, we have investigated the properties and attributes of a software system that determine the trustworthiness of the system. To this end, based on the S-Cube quality reference model [START_REF] Chen | A Novel Server-based Application Execution Architecture[END_REF], we built a taxonomy of attributes (shown in Fig. 1) that is a foundation for defining objective criteria and measures to assess the trustworthiness of a system. Some quality attributes referenced in the literature (e.g. [START_REF] Harris | The four levels of loyalty and the pivotal role of trust: a study of online service dynamics[END_REF], [START_REF] Yolum | Engineering self-organizing referral networks for trustworthy service selection[END_REF], [START_REF] Yan | An adaptive trust control model for a trustworthy component software platform, Autonomic and Trusted Computing[END_REF], [START_REF] Boehm | Quantitative Evaluation of Software Quality[END_REF]) refer to means for achieving a certain kind of property of a system. Therefore, we do not consider them as trustworthiness attributes, but as means to manifest the corresponding properties in the system. Only the attributes identified in the literature review as contributing to trustworthiness are included in the model. Some quality attributes, e.g. integrity, can be achieved, among other ways, through encryption. In this case, the high-level attribute (integrity) is included as a contributor to trustworthiness, but encryption is not, because it is encompassed by the higher-level attribute. We have included attributes that have been studied in the literature in terms of trustworthiness. Fig. 1 outlines the major result of that work. More details can be found in [START_REF] Leveson | Safety analysis using Petri nets[END_REF]. We have also identified some additional candidate attributes that may influence the trustworthiness of a system (e.g. provability or predictability). These potential trustworthiness attributes need further investigation regarding their impact on trustworthiness. Based on these trustworthiness attributes, we have studied several software design methodologies with respect to the extent to which they address the systematic realization of trustworthiness in a system under development. In the next section, the results and evaluation of these studies are presented.

3 Review of Development Models and Practices
Recently, a number of development practices have been proposed, both from a theoretical and from a practical point of view, to address the security of the software to be developed. As described above, security is an important component of trustworthy software, but it is neither the only one, nor is it sufficient to look solely at preserving / creating a good level of security to attain trustworthiness.
For example, transparency plays an important role in the creation of trust, and therefore in the trustworthiness of software. In this section, we look into the major software engineering processes or process enhancements that target security to build a "secure" software system and identify the corresponding innovation potential, specifically towards extending security to trustworthiness. A more exhaustive overview of development methodologies can for instance be found in Jayaswal and Patton's "Design for Trustworthy Software" [START_REF] Jayaswal | Design for Trustworthy Software: Tools, Techniques and Methodology for Developing Robust Software[END_REF], though it does not specify how these methodologies contribute to the trustworthiness of the product. This reference documents their generic characteristics and gives an overview of the historical evolution of different development strategies and lifecycle models. We briefly describe which elements of the development approaches actually increase or inhibit trust, and how the approaches could be used for modeling trustworthiness.

Plan-driven
In a plan-driven process [START_REF] Royce | Managing the Development of Large Software Systems: Concepts and Techniques[END_REF] one typically plans and schedules all of the process activities before the work can start. The Waterfall model is a well-known example of plan-driven development that typically includes the following phases: • Requirements analysis • System design • Implementation • Testing (unit, integration and system testing) • Deployment • Operation and maintenance. Many of the simpler software manufacturing projects follow a plan-driven model. This approach has been followed by industrial software development for a long time. It is relatively easy to assure non-functional requirements throughout the rest of the process, but the key issue is that they need to be identified completely in the first phase. Plan-driven processes such as the Waterfall model originate from aerospace and other manufacturing industries, where robustness and correctness are usually an important concern, but they are often considered too rigorous, inflexible and a bit old-fashioned for many software development projects. There are examples of Waterfall trustworthy software development processes in the literature, e.g. COCOMO. Therefore, there should be means to assure trustworthiness and enhance the process. There are also more formal variants of this process, for instance the B method [START_REF] Wordworth | Software Engineering with B[END_REF], where a mathematical model of the specification is created and then automatically transferred into code. For the general plan-driven process we consider the following trustworthiness characteristics to be valid.
Trustworthiness gains: • Formal system variants are well suited to the development of systems that have stringent safety, reliability or security (and thus potentially also trustworthiness) requirements.
Trustworthiness losses: • Vulnerable to vague, missing or incorrect security and trustworthiness requirements in the first place. • Does not offer significant cost-benefits over other approaches, which on a tight budget can lead to less focus on trustworthiness. • Little flexibility if new attacks or types of vulnerabilities are discovered late in the development process.
Usability for modeling trustworthiness: In a plan-driven process one can apply structured testing on units as well as on the system as a whole.
In addition, it is relatively easy to keep track of the implementation of safety, reliability or security and potentially also trustworthiness requirements. As such, the plan-driven approach supports modeling in general, but not specifically for trustworthiness. Incremental Incremental development (cf. [START_REF] Sommerville | Software Engineering. 9 th Edition[END_REF]) represents a broad range of related methodologies where initial implementations are presented to the user at regular intervals until the software satisfies the user expectations (or the money runs out). A fundamental principle is that not all requirements can be known completely prior to development. Thus, they are evolving as the software is being developed. Incremental development covers most of the agile approaches and prototype development, although it could be enhanced by other approaches to become more formal in terms of trustworthiness. Trustworthiness gains: • New and evolving requirements for trust may be incorporated as part of an iterative process. • The customer will have a good sense of ownership and understanding of the product after participating in the development process. Trustworthiness losses: • Mismatch between organizational procedures/policies and a more informal or agile process. • Little documentation, increasing complexity and long-lifetime systems may result in security flaws. Especially, documentation on non-functional aspects that are crosscutting among different software features implementation could not be well documented. • Security and trustworthiness can be difficult to test and evaluate, specifically by the user, and may therefore lose focus on the development. Incremental development allows new and evolving requirements for trustworthiness to be incorporated as part of an iterative process. Iterative processes allow for modeling of properties, but changes to the model that reflect changed or more detailed customer expectations, will in turn require changing the design and code, eventually in another iteration. Additionally, there are no specific trustworthiness modeling capabilities. Reuse-oriented Very few systems today are created completely from scratch; in most cases there is some sort of reuse of design or code from other sources within or outside the organization (cf. [START_REF] Sommerville | Software Engineering. 9 th Edition[END_REF]). Existing code can typically be used as-is, modified as needed or wrapped with an interface. Reuse is of particular relevance for service-oriented systems where services are mixed and matched in order to create larger systems. Reuseoriented methodologies can be very ad-hoc, and often there are no other means to assure trustworthiness. Trustworthiness gains: • The system can be based on existing parts that are known to be trustworthy. This does not, however, mean that the composition is just as trustworthy as the sum of its parts. • An existing, trustworthy part may increase trust (e.g. a known, trusted authentication). Trustworthiness losses: • Use of components that are "not-invented-here" leads to uncertainty. • Increased complexity due to heterogeneous component assembly. • The use of existing components in a different context than originally targeted may under certain circumstances (.e.g. unmonitored re-use of in-house developed components) jeopardize an existing security / trustworthiness property. This approach has both pros and cons regarding trustworthiness modeling. 
On the positive side, already existing, trustworthy and trusted components may lead to easier, trustworthiness modeling for the overall solution; adequate software assurance, e.g. a security certification, or source code availability may help in improving trustworthiness of re-used "foreign" components. The drawback is that there is a risk that the trustworthiness of the combined system may decrease due to the combination with less trustworthy components. Model-driven Model-driven engineering (MDE) [START_REF] Schmidt | Model-Driven Engineering[END_REF] (encompassing the OMG term Model-driven Architecture (MDA) and others) refers to the process of creating domain models to represent application structure, behavior and requirements within particular domains, and the use of transformations that can analyze certain aspects of these models and then create artifacts such as code and simulators. A lot of the development effort is put into the application design, and the reuse of patterns and best practices is central during the modeling. Trustworthiness gains: • Coding practices that are deemed insecure or unreliable can be eliminated through the use of formal reasoning. • Coding policies related to trustworthiness, reliability and security could be systematically added to the generated code. • Problems that lead to trustworthiness concerns can, at least theoretically, be detected early during model analysis and simulation. • Separation of concerns allows trust issues to be independent of platform, and also less complicated models and a better combination of different expertise. Trustworthiness losses: • Systems developed with such methods tends to be expensive to maintain, and may therefore suffer from lack of updates. • Requires significant training and tool support, which might become outdated. • A structured, model-driven approach does not prevent the forgetting of security and trustworthiness requirements. • Later changes during development need to review and potentially change the model. • The (time and space) complexity of the formal verification of especially nonfunctional properties may lead to omitting certain necessary computations when the project is under time and resource pressure. With a model-driven approach it is possible to eliminate deemed insecure or unreliable design and coding practices. An early model analysis and simulation with regards to trustworthiness concerns is possible and of high value. In addition, model-driven security tests could improve the trustworthiness. However, in general, there are no specific trustworthiness related modeling properties, it is just model-driven. The ma-jor drawback (and risk) is that the computational complexity for verifying nonfunctional properties is very high. Test-driven Test-driven development is considered to be part of agile development practices. In test-driven development, developers first implement test code that is able to test corresponding requirements, and only after that the actual code of a module, a function, a class etc. The main purpose for test-driven development is to increase the test coverage, thereby allowing for a higher quality assurance and thus requirements coverage, specifically related to non-functional aspects. The drawback of test-driven approaches consists in the fact that due to the necessary micro-iterations the design of the software is subject to on-going changes. This makes e.g. the combination of model-driven and test-driven approaches rather impossible. 
Trustworthiness gains: • The high degree of test coverage (which could be up to 100%) assures the implementation of trustworthiness related requirements.
Trustworthiness losses: • The programming technique cannot be combined with (formal) assurance methodologies, e.g. using model-driven approaches, Common Criteria, or formal verification.
Test-driven development is well suited for assuring the presence of well-described trustworthiness requirements. Moreover, this approach can be successfully used to address changes in the threat landscape. A major drawback, though, is that it cannot easily be combined with modeling techniques that are used for formal assurance methodologies.

Common Criteria ISO 15408
The Common Criteria (CC) is a standardized approach [START_REF]:Information technology -Security techniques -Evaluation criteria for IT security -Part 1: Introduction and general model[END_REF] to evaluate security properties of (information) systems. A "Target of Evaluation" is tested against so-called "Security Targets" that are composed of given Functional Security Requirements and Security Assurance Requirements (both addressing development and operations) and are selected based on a protection requirement evaluation. Furthermore, the evaluation can be performed at different strengths called "Evaluation Assurance Level". On the downside, there are some disadvantages: the development model is quite stiff and does not easily allow for an adjustment to specific environments. Furthermore, Common Criteria is an "all-or-nothing" approach: one can limit the Target of Evaluation or the Evaluation Assurance Level, but it is then rather difficult to express the overall security / trustworthiness of a system with metrics related to CC.
Trustworthiness gains: • Evaluations related to security and assurance indicate to what level the target application can be trusted. • CC evaluations are performed by (trusted) third parties. • There are security profiles for various types of application domains.
Trustworthiness losses: • Protection profiles are not tailored for Cloud services. • A CC certification can be misunderstood to prove the security / trustworthiness of a system, whereas it actually only provides evidence for a very specific property of a small portion of the system.
The Common Criteria approach is unrelated to modeling in general, although the higher evaluation assurance levels would benefit from modeling. The functional security requirements may well serve as input for a (security-related) trustworthiness modeling, whereas the security assurance requirements, as properties of the development process itself, shall be used for modeling the developing organization. Note that these constitute two different modeling approaches.

ISO 21827 Systems Security Engineering - Capability Maturity Model
Systems Security Engineering - Capability Maturity Model (SSE-CMM) is a specific application of the more generic Capability Maturity Model of the Software Engineering Institute at Carnegie Mellon University. Originally, in 1996, SSE-CMM was an initiative of the NSA, but it was later given over to the International Systems Security Engineering Association, which published it as ISO 21827 in 2003. In contrast to the previous examples, SSE-CMM targets the developing organization and not the product / service to be developed. There are a number of so-called "base practices" (11 security base practices and 11 project and organizational base practices) that can be fulfilled to different levels of maturity.
The maturity levels are identical to CMM. Trustworthiness gains: • The developing organization gains more and more experience in developing secure and more generically good quality software. • The use of a quality-related maturity model infers that user-centric non-functional requirements, such as security and trustworthiness, will be taken into account. Trustworthiness losses: • This is an organizational approach rather than a system-centric approach; hence there is not really any guarantee about the trustworthiness of the developed application (which could e.g. be put to use in another way than it was intended for). This approach focuses on the development of trustworthiness for the developing organization, instead on the to-be developed software, service or system. The security base practices may serve as input for modeling trustworthiness requirements when modeling the development process. Building Security In Maturity Model / OpenSAMM The Building Security In Maturity Model (BSIMM) [START_REF] Mcgraw | A Software Security Framework: Working Towards a Realistic Maturity Model[END_REF] initiative has recognized the caveat of ISO 21827 being oriented towards the developing organization, and has proposed a maturity model that is centralized around the software to be developed. It defines activities in four groups (Governance, Intelligence, SSDL Touch points, Deployment) that are rated in their maturity according to three levels. OpenSAMM is a very similar approach that has the same origin, but developed slightly differently and is now an OWASP project. This standard presents an ideal starting point for developing trustworthiness activities within an organization, since it allows tracking the maturity of the development process in terms of addressing security requirements -this could also be used for trustworthiness. Trustworthiness gains: • The maturity-oriented approach requires the identification of security (and potentially) trustworthiness properties and assures their existence according to different levels of assurance. • The probability of producing a secure (and trustworthy) system is high. Trustworthiness losses: • There is no evidence that the system actually is trustworthy or secure. This approach means to develop trustworthiness for the developing organization, instead of the to-be developed software, service, or system. The security base practices may serve as input for modeling trustworthiness requirements when modeling the development process. Microsoft SDL In 2001, Microsoft has started the security-oriented software engineering process that has probably had the largest impact across the whole software industry. Yet, the "process" was more a collection of individual activities along the software development lifecycle than a real structured approach. The focus point of the Microsoft SDLthat has been adopted by a large number of organizations in different variants -is that every single measure was optimized over time to either have a positive ROI or it was dropped again. This results in a number of industry-proven best practices for enhancing the security of software. Since there is no standardized list of activities, there is no benchmark to map activities against. Trustworthiness gains: • The world's largest software manufacturer does use this approach. • The identified measures have proven to be usable and effective over the course of more than a decade. Trustworthiness losses: • There is no evidence that the system actually is trustworthy or even secure. 
Microsoft SDL is a development-related threat modeling and was Microsoft`s major investment to increase the trustworthiness of its products ("Trustworthy Computing Initiative"). The comparability is only given if more detailed parameters are specified. For the modeling of trustworthiness, this method is only of limited help. Methodologies not covered in this paper During the analysis process, a significant number of other methodologies and approaches have been investigated, among others, ISO 27002, OWASP Clasp or TOGAF. We dropped these here since they either replicate some of the capabilities already mentioned above or because their contribution to trustworthiness showed to be rather small. Conclusions from the State of the Art Analysis After having analyzed the different methodologies and best practices, we can make two major observations. The first observation is related to the nature of the methodologies and best practices. There are two major types of approaches: • Evidence-based approaches that concentrate on evidences, i.e. some sort of qualitative "proof" that a certain level of security, safety etc. is actually met, and • Improvement-based approaches that concentrate on improving the overall situation within the software developing organization with regards to more or less specific requirements. Evidence-based approaches are typically relatively rigid and therefore often not used in practice, except there is an explicit need, e.g. for a certification in a specific market context. The origin of evidence-based approaches is either research or a strongly regulated market, such as e.g. the defense sector. In contrast to those, improvement-based approaches allow for customization and are therefore much better suited for the application in various industries, but lack in general the possibility to create any kind of evidence that the software developed actually fulfills some even fundamental trustworthiness expectations. Assuming that evidence-based and improvement-based approaches are -graphically speaking -at the opposite ends of a continuous one-dimensional space, a way to improve trustworthiness of software applications might be to identify approaches that are "sitting in between" these two types (for example, by picking and choosing elements of different approaches, augmented with some additional capabilities). One option might be to release the burden of qualitative evidence creation by switching to / encompassing evidences based on quantitative aspects. We propose to investigate how metrics for the trustworthiness attributes presented in Section 2 can be used to create evidences by applying selected elements of the improvement-based approaches. A second major observation relates to the scope of the activities described in the methodologies and best practices. There are three types of "scope": • Product-centric approaches emphasize the creation and/or verification of attributes of the to-be-developed software, • Process-centric approaches concentrate on process steps that need to be adhered to enable the fulfillment of the expected goal and • Organization-centric approaches focus on the capabilities of the developing organization, looking at a longer-term enablement to sustainably develop trustworthy software. Some approaches combine the scope, e.g. 
Common Criteria mandates verifying both product-related and process-related requirements, whereas others, such as SSE-CMM [START_REF]Information technology -Systems Security Engineering -Capability Maturity Model[END_REF], concentrate on only one scope. Current scientific discussions targeting trustworthiness-related attributes focus mainly on product-centric approaches, which is understandable given that this is the only approach that produces evidences about the software itself, whereas practices used in industry often tend towards a more process- or even organization-centric approach (SSE-CMM, CMM, ISO 9001). We therefore propose to investigate how to evolve the above-mentioned evidence-based activities around metrics towards covering process- and organization-centric approaches. Conclusion and Future Work In this paper we presented an overview of how existing development methods and practices support the development of trustworthy software. To this aim, we first elaborated on the notion of trust and trustworthiness and presented a general taxonomy for trustworthiness attributes of software. We then analyzed some well-known general software development methodologies and practices with respect to how they support the development of trustworthy software. As shown in the paper, existing software design methodologies have some capacity to ensure security, but the treatment of other trustworthiness attributes and requirements in software development is not yet well studied. Trustworthiness attributes that have a major impact on the acceptance of STS must be taken into account, analyzed, and documented as thoroughly as possible. In this way, the transparency of the decisions taken during development will potentially remove the uncertainty of stakeholders of the respective software. The main ideas and findings of our work will be investigated further. It is important to understand how the trustworthiness attributes and the corresponding system properties can be addressed in the system-to-be in a systematic way. As a next step, we will investigate trustworthiness evaluation techniques for enabling and providing effective measurements and metrics to assess the trustworthiness of systems under development. Furthermore, we will develop an Eclipse Process Framework (EPF) based plug-in that will support the process of establishing trustworthiness attributes in a system and guide the developer through the development activities. Using this plug-in during the development process, the project team will be supported by guidelines, architectural patterns, and process chunks for developing trustworthy software, and will later on be able to analyze the results and evaluate the trustworthiness of the developed software.
Fig. 1. Attributes that determine trustworthiness of a software application during development. The figure depicts a taxonomy with the top-level groups Security, Compatibility, Configuration-related quality, Compliance, Cost, Data-related quality, Dependability, Performance and Usability, refined into sub-attributes such as Confidentiality, Integrity, Non-Repudiation, Accountability, Auditability/Traceability, Safety, Data Integrity, Data Reliability, Data Validity, Data Timeliness, Correctness, Complexity, Stability, Openness, Reusability, Completeness, Reliability, Flexibility/Robustness, Failure Tolerance, Availability, Accuracy, Throughput, Response Time, Efficiency of Use, Effectiveness, Learnability, Satisfaction, Composability, Scalability and Maintainability. Acknowledgements This research was carried out with the help of the European Commission's 7th framework program, notably the project "OPTET". We specifically would like to thank all participants of Work Package 3 for contributing to the analysis of the methodologies and best practices.
38,555
[ "1004389", "998681", "998693" ]
[ "479602", "300612", "300612" ]
01492834
en
[ "info" ]
2024/03/04 23:41:50
2013
https://inria.hal.science/hal-01492834/file/978-3-642-40779-6_5_Chapter.pdf
Y Sreenivasa Rao email: ysrao@maths.iitkgp.ernet.in Ratna Dutta Decentralized Ciphertext-Policy Attribute-Based Encryption Scheme with Fast Decryption Keywords: attribute-based encryption, decentralized, multi-authority, monotone access structure In this paper, we propose an efficient multi-authority decentralized ciphertext-policy attribute-based encryption scheme dCP-ABE-MAS for monotone access structures (MAS). Our setup is without any central authority (CA) where all authorities function entirely independently and need not even be aware of each other. The scheme makes use of the minimal authorized sets representation of MAS to encrypt messages, and hence the size of ciphertext is linear in the number of minimal authorized sets in MAS and the number of bilinear pairings is constant during decryption. We describe several networks that can use dCP-ABE-MAS to control data access from unauthorized nodes. The proposed scheme resists collusion attacks and is secure against chosen plaintext attacks in the generic bilinear group model over prime order bilinear groups. Introduction In Attribute-Based Encryption (ABE), each user is ascribed a set of descriptive attributes (or credentials), and secret key and ciphertext are associated with an access policy or a set of attributes. Decryption is then successful only when the attributes of ciphertext or secret key satisfy the access policy. ABE is classified as Key-Policy ABE (KP-ABE) [START_REF] Goyal | Attribute Based Encryption for Fine-Grained Access Control of Encrypted Data[END_REF] or Ciphertext-Policy ABE (CP-ABE) [START_REF] Bethencourt | Ciphertext-Policy Attribute-Based Encryption[END_REF] according to whether the secret key or ciphertext is associated with an access policy, respectively. Since the invention of ABE [START_REF] Sahai | Fuzzy Identity-Based Encryption[END_REF], several improved ABE schemes [START_REF] Goyal | Attribute Based Encryption for Fine-Grained Access Control of Encrypted Data[END_REF][START_REF] Bethencourt | Ciphertext-Policy Attribute-Based Encryption[END_REF][START_REF] Waters | Ciphertext-Policy Attribute-Based Encryption: An Expressive, Efficient, and Provably Secure Realization[END_REF][START_REF] Ibraimi | Efficient and Provable Secure Ciphertext-Policy Attribute-Based Encryption Schemes[END_REF] have been proposed. All the foregoing ABE schemes make use of a single trusted central authority (CA) to control the universe of attributes and issue secret keys to users that should not be compromised at all. Consequently, the CA can decrypt every ciphertext in the system encrypted under any access policy by calculating the required secret keys at any time, this is the key escrow problem of ABE. A solution to help mitigate the key escrow problem is distributing the functionality of the CA over many potentially untrusted authorities in such a way that as long as some of them are honest, the system would still be secure. An ABE with this mechanism is the so-called multi-authority ABE. In this scenario, each authority controls a different domain of attributes and issues attribute-related secret keys to users. Chase [START_REF] Chase | Multi-authority Attribute Based Encryption[END_REF] devised the first multi-authority ABE as an affirmative solution to the open problem posed by Sahai and Waters [START_REF] Sahai | Fuzzy Identity-Based Encryption[END_REF] that consists of one fully trusted centralized authority (CA) and multiple (attribute) authorities. 
Every user is assigned a unique global identifier and the keys from different authorities are bound together by this identifier to counteract the collusion attack-multiple users can pool their secret keys obtained from different authorities to decrypt a ciphertext that they are not individually entitled to. As CA holds the system's master secret, it can decrypt all the ciphertexts in the system, thereby cannot the key escrow resists. The first CA-free multi-authority ABE is proposed by Lin et al. [START_REF] Lin | Secure Threshold Multi Authority Attribute Based Encryption without a Central Authority[END_REF] wherein Distributed Key Generation (DKG) protocol and Joint Zero Secret Sharing (JZSS) protocol are deployed to remove CA. All authorities must interact to execute DKG and JZSS protocols during system setup phase. However, the scheme is collusion-resistant up to collusion of m users, where m is a system wide parameter that should be fixed during setup, and the number of JZSS protocol executions, the computation and communication costs are all linear in m. Chase and Chow [START_REF] Chase | Improving Privacy and Security in Multi-Authority Attribute-Based Encryption[END_REF] proposed CA-free multi-authority ABE with user privacy that resolves the key escrow problem using distributed Pseudo Random Functions (PRF). In this setting, each pair of authorities will communicate with each other via a 2-party key exchange protocol to generate users' secret keys during setup phase that incurs O(N 2 ) communication overhead on the system, where N is the fixed number of authorities. The foregoing constructions [START_REF] Chase | Multi-authority Attribute Based Encryption[END_REF][START_REF] Lin | Secure Threshold Multi Authority Attribute Based Encryption without a Central Authority[END_REF][START_REF] Chase | Improving Privacy and Security in Multi-Authority Attribute-Based Encryption[END_REF] can only handle a set of fixed number of authorities at system initialization which exploit AND-gate access policies in key-policy setting to prevent unauthorized data access. Müller et al. [START_REF] Müller | On Multi-Authority Ciphertext-Policy Attribute-Based Encryption[END_REF] gave two multi-authority CP-ABE schemes which employ one CA and several authorities where the authorities work independently from each other. However, the CA can still decrypt all ciphertexts in the system. The first construction uses Disjunctive Normal Form (DNF) access policies to annotate ciphertexts, thereby achieves constant computation cost during decryption. The second scheme realizes any Linear Secret Sharing Scheme (LSSS) access policy and hence the computation cost for successful decryption is linear in minimum number of attributes required to compute the target vector, i.e., a vector that contains the secret as one of its components. Lewko and Waters [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF] proposed a novel multi-authority CP-ABE scheme without CA that is decentralized, where all authorities function entirely independently and need not even be aware of each other. The concept of global identifier introduced by Chase [10] is used to "link" attribute-related secret keys together that are issued to the same user by different authorities, this in turn achieves collusion-resistant among any number of users. The same scheme works on both composite order and prime order bilinear groups. 
The security of the former is given in the random oracle model and the security of the latter is analyzed in the generic group model. In both cases, the monotone access structures are realized by LSSS, the ciphertext size is linear in the size of the LSSS, and the number of pairings is linear in the minimum number of attributes that satisfy the LSSS. Liu et al. [START_REF] Liu | Fully Secure Multi-authority Ciphertext-Policy Attribute-Based Encryption without Random Oracles[END_REF] devised an LSSS-realizable multi-authority CP-ABE system which has multiple CAs and authorities. The scheme is adaptively secure without random oracles, unlike [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF]. In all the multi-authority KP/CP-ABE schemes discussed so far, except the CA-based one in [START_REF] Müller | On Multi-Authority Ciphertext-Policy Attribute-Based Encryption[END_REF], the size of the ciphertext is linear in the size of the monotone span program or in the number of attributes associated with the ciphertext, and the number of bilinear pairing computations is linear in the minimum number of attributes required for successful decryption. Access control schemes with constant computation and low communication cost are more practical where the computing resources have limited computing power and bandwidth is the primary concern. For these reasons, we provide a solution to help mitigate the problem of large ciphertext size and of a linear number of bilinear pairings in designing multi-authority ABE schemes. Our Contribution. We propose dCP-ABE-MAS, which is a multi-authority CP-ABE in a decentralized setting for any monotone access structure (MAS). Every MAS, A, can uniquely be represented by the set A 0 of minimal authorized sets in A (see Section 2.1). This scheme has the same functionality as the most robust and scalable multi-authority CP-ABE [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF] to date. Even though the schemes [START_REF] Chase | Improving Privacy and Security in Multi-Authority Attribute-Based Encryption[END_REF][START_REF] Lin | Secure Threshold Multi Authority Attribute Based Encryption without a Central Authority[END_REF] exclude the requirement of the CA, they are not fully decentralized, as the number of authorities is fixed ahead of time and all authorities communicate with each other during system setup, unlike [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF]. That is why we compare (in Table 1) our dCP-ABE-MAS only with the decentralized scheme of [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF], in the prime order bilinear group setting (i.e., with the variant whose security is analyzed in the generic group model).
Table 1. Comparison of [8] with our (dCP-ABE-MAS) scheme
  Scheme | KeyGen E_G | User Secret Key Size | Enc E_G | Enc E_G_T | Ciphertext Size          | Dec E_G_T | Dec Pe | Access Policy
  [8]    | 2γ         | γB_G                 | 3α      | 2α + 1    | 2αB_G + (α + 1)B_G_T + τ | O(β)      | O(β)   | LSSS
  Ours   | 2γ         | γB_G                 | 2k      | k         | 2kB_G + kB_G_T + τ       | -         | 2      | any MAS
Here E_G (or E_G_T) = number of exponentiations in the group G (or G_T, resp.), Pe = number of pairing computations, B_G (or B_G_T) = bit size of an element of G (or G_T, resp.), α = size of the LSSS access structure, β = minimum number of attributes required for decryption, γ = number of attributes annotated to a user secret key, k = number of minimal sets in the MAS, and τ = size of an access structure.
The ciphertext size in [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF] is linear in the size, α, of the LSSS, while the size of the ciphertext in our construction grows linearly with k, the number of minimal authorized sets in the MAS. For a (t, n)-threshold policy, where 1 < t < n, the value of k is n!/((n - t)! t!), which will be larger than n, whereas there exists an LSSS of size α = n realizing the (t, n)-threshold policy. However, there are several classes of MAS for which the value of k is constant but the size of the monotone span program (or LSSS) computing the MAS is at least polynomial in the number of attributes in the access structure.
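To make this trade-off concrete, the short calculation below (a sketch with arbitrarily chosen values of t and n) compares the basis size k = C(n, t) of a (t, n)-threshold policy, where every t-subset of the n attributes is a minimal authorized set, with the size α = n of an LSSS realizing the same policy; the non-threshold examples discussed next go the other way.

```python
# Basis size k of a (t, n)-threshold policy versus the size alpha = n of an
# LSSS realizing the same policy (illustrative values of t and n only).
from math import comb

for n, t in [(4, 2), (10, 5), (20, 10)]:
    k, alpha = comb(n, t), n   # k = n! / ((n - t)! t!)
    print(f"(t, n) = ({t}, {n}):  k = {k:>6}   LSSS size alpha = {alpha}")

# Output: k = 6, 252 and 184756 respectively, so for threshold policies k
# quickly dominates alpha = n and the basis representation is unfavourable.
```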
As a trivial case, if one uses a single AND-gate with n attributes, the value of k will be 1, while the size of the LSSS is equal to n, i.e., α = n. We now consider some non-trivial cases from [START_REF] Pandit | Efficient Fully Secure Attribute-Based Encryption Schemes for General Access Structures[END_REF]. Let A 0 = {B 1 = {a 1 , . . . , a n/2 }, B 2 = {a n/2+1 , . . . , a n }} be the set of minimal sets for a MAS, A, over n attributes a 1 , . . . , a n . Then k = 2, while the size, α, of an LSSS computing A is at least O(n). Similarly, if A 0 = {B 1 = {a 1 , . . . , a n/3 }, B 2 = {a n/3+1 , . . . , a 2n/3 }, B 3 = {a 2n/3+1 , . . . , a n }} is the set of minimal sets for a MAS, A, then k = 3 but the size, α, of an LSSS computing A is again at least O(n) (for more details see Section 2.1 in [START_REF] Pandit | Efficient Fully Secure Attribute-Based Encryption Schemes for General Access Structures[END_REF]). Thus, in such cases, our dCP-ABE-MAS scheme exhibits shorter ciphertexts. Moreover, our approach requires only 2 pairing computations to decrypt any ciphertext. The user secret key size is linear in the number of attributes associated with the user. An inherent drawback of [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF] is that every authority can independently decrypt every ciphertext in the system if the set of attributes controlled by that authority satisfies the LSSS access structure associated with the ciphertext. However, this can be avoided if each authorized set contains attributes from at least two different authorities. The same problem can be eliminated in our dCP-ABE-MAS if each minimal authorized set contains attributes from at least two different authorities. This fact follows from the satisfiability condition given in Definition 2. We discuss how our dCP-ABE-MAS can provide attractive solutions to fine-grained access control in various network scenarios and compare our work with the existing works in the area. Additionally, our multi-authority scheme provides a mechanism for packing multiple messages in a single ciphertext. This in turn reduces network traffic significantly. The proposed scheme is proven to be collusion-resistant and is secure against chosen plaintext attacks in the generic bilinear group model. To the best of our knowledge, our proposed multi-authority CP-ABE scheme is the only scheme in a decentralized framework where the decryption time is constant for general MAS. Preliminaries Definition 1. Let G and G T be multiplicative cyclic groups of prime order p. Let g be a generator of G. A mapping e : G × G → G T is said to be bilinear if e(u a , v b ) = e(u, v) ab , for all u, v ∈ G and a, b ∈ Z p , and non-degenerate if e(g, g) ≠ 1 T (where 1 T is the unit element in G T ). We say that G is a bilinear group if the group operation in G can be computed efficiently and there exists G T for which the bilinear map e : G × G → G T is efficiently computable. Access Structure In this section, we briefly review the concept of general access structures [START_REF] Stinson | Cryptography: Theory and Practice[END_REF]. Let U be the universe of attributes and |U | = n. Let P(U ) be the collection of all subsets of U . Every subset of P(U ) \ {∅} is called an access structure. An access structure A is said to be a monotone access structure (MAS) if {C ∈ P(U )|C ⊇ B, for some B ∈ A} ⊆ A. The sets in A are called the authorized sets and the sets not in A are called the unauthorized sets with respect to the monotone access structure A.
Then every superset of an authorized set is again authorized set in MAS. A set B in a monotone access structure A is a minimal authorized set in A if there exists a set D( = B) such that D ⊆ B, then D / ∈ A. The set of all minimal authorized sets of A, denoted by A 0 , is called the basis of A. Then we can generate A from its basis A 0 as follows: A = {C ∈ P(U )|C ⊇ B, for some B ∈ A 0 }. (1) Lemma 1. The monotone access structure A given in Eq. ( 1) is generated uniquely from its basis A 0 . Proof. Suppose A is a monotone access structure generated from A 0 . Then A = {C ∈ P(U )|C ⊇ B , for some B ∈ A 0 }. We shall prove that A = A . Let C ∈ A. Then by Eq. ( 1), we have U ⊇ C ⊇ B, for some B ∈ A 0 and hence C ∈ A . Therefore, A ⊆ A . Similarly, we can have A ⊆ A. Thus, A = A . In sum, every monotone access structure can be represented by its basis. Definition 2. Let A be a monotone access structure and A 0 be its basis. A set, L, of attributes satisfies A, denoted as L |= A if and only if L ⊇ B, for some B ∈ A 0 , and otherwise L does not satisfy A, denoted as L |= A. Decentralized CP-ABE System A decentralized CP-ABE system is composed mainly of a set A of authorities, a trusted initializer and users. The only responsibility of trusted initializer is generation of system global public parameters, which are system wide public parameters available to every entity in the system, once during system initialization. Each authority A j ∈ A controls a different set U j of attributes and issues corresponding secret attribute keys to users. We note here that all authorities will work independently. As such, every authority is completely unaware of the existence of the other authorities in the system. Each user in the system is identified with a unique global identity ID ∈ {0, 1} * and is allowed to request secret attribute keys from the different authorities. At any point of time in the system, each user with identity ID possesses a set of secret attribute keys that reflects a set L ID of attributes, which we call an attribute set of the user with identity ID. Let U = Aj ∈A U j , where U j1 ∩ U j2 = ∅, for all j 1 = j 2 , be the attribute universe of the system. Due to lack of global coordination between authorities, different authorities may hold the same attribute string. To overcome such scenario, we can treat each attribute as a tuple consisting of the attribute string and the controlling authority identifier, for example ("supervisor", j), where the attribute "supervisor" is held by the authority A j . Consequently, the attributes ("supervisor", j 1 ) and ("supervisor", j 2 ) will be considered as distinct as long as j 1 = j 2 . The decentralized CP-ABE system consists of the following five algorithms. System Initialization(κ). At the initial system setup phase, a trusted initializer chooses global public parameters GP according to the security parameter κ. Any authority or any user in the system can make use of these parameters GP in order to perform their executions. Authority Setup(GP, U j ). This algorithm is run by every authority A j ∈ A once during initialization. It accepts as input the global public parameters GP and a set of attributes U j for the authority A j and outputs public key PubA j and master secret key MkA j of the authority A j . Authority KeyGen(GP, ID, a, MkA j ). Every authority executes this algorithm upon receiving a secret attribute key request from the user. 
It will take as input global public parameters GP, a global identity ID of a user, an attribute a hold by some authority and the master secret key of the corresponding authority. It returns a secret attribute key SK a,ID for the identity ID. Encrypt(GP, M, A, {PubA j }). This algorithm is run by an encryptor and it takes as input the global public parameters GP, a message M to be encrypted, an access structure A, and public keys of relevant authorities corresponding to all attributes appeared in A. It then encrypts M under A and returns the ciphertext CT, where A is embedded into CT. Decrypt(GP, CT, {SK a,ID |a ∈ L ID }). On receiving a ciphertext CT, a decryptor with identity ID runs this algorithm with the input the global public parameters GP, a ciphertext CT which is an encryption of M under A, and {SK a,ID |a ∈ L ID } is a set of secret attribute keys obtained for the same identity ID. Then it outputs the message M if the user attribute set L ID satisfies the access structure A; otherwise, decryption fails. Security Model Following [START_REF] Lewko | Decentralizing Attribute-Based Encryption[END_REF], we define a security model in terms of a game which is carried out between a challenger and an adversary, where the challenger plays the role of all authorities. The adversary can corrupt authorities statically, i.e., the adversary has to announce the list of corrupted authorities before obtaining the public keys of honest authorities, whereas key queries can be made adaptively. Setup. First, the challenger obtains global public parameters GP. The adversary announces a set A ⊂ A of corrupt-authorities. Now, the challenger runs Authority Setup algorithm for each honest authority and gives all public keys to the adversary. Key Query Phase 1. The adversary is allowed to make secret key queries for the attributes coupled with user global identities (a, ID), where the attributes a are held by honest authorities. The challenger runs Authority KeyGen algorithm and returns the corresponding secret keys SK a,ID to the adversary. Challenge. The adversary submits two equal length messages M 0 , M 1 and an access structure A. The access structure A must obey the following constraint. Let F be a set of attributes belonging to the corrupt-authorities that are in A. For each identity ID, let F ID be the set of attributes in A for which the adversary has queried (a, ID). For each identity ID, the attribute set F ∪ F ID must not satisfy the access structure A, i.e., (F ∪ F ID ) |= A. The adversary needs to give the challenger the public keys of corrupt-authorities whose attributes are in A. Now, The challenger flips a random coin µ ∈ {0, 1} and runs Encrypt algorithm in order to encrypt M µ under A. The resulting challenge ciphertext CT * is given to the adversary. Key Query Phase 2. The adversary can make additional secret key queries for (a, ID) with the same restriction on the challenge access structure stated in Challenge phase. Guess. The adversary outputs a guess bit µ ∈ {0, 1} for the challenger's secret coin µ and wins if µ = µ. The advantage of an adversary in this game is defined to be |Pr [µ = µ] -1 2 | , where the probability is taken over all random coin tosses of both adversary and challenger. Definition 3. The decentralized CP-ABE system is said to be IND-CPA (ciphertext indistinguishability under chosen plaintext attacks) secure against static corruption of authorities if all polynomial time adversaries have at most a negligible advantage in the above security game. 
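Before presenting the construction, the following sketch (attribute strings and the example basis are made up for illustration) shows how the basis representation of Section 2.1 can be handled in practice: it expands a basis A 0 into the monotone access structure A of Eq. (1) by exhaustive enumeration, which is feasible only for small attribute universes, and it implements the satisfiability test of Definition 2 that the decryption algorithm relies on.

```python
# Minimal-authorized-sets (basis) representation of a monotone access structure.
from itertools import combinations

def generate_mas(universe, basis):
    """Expand a basis A_0 into the full monotone access structure A of Eq. (1)."""
    return {frozenset(c)
            for r in range(1, len(universe) + 1)
            for c in combinations(universe, r)
            if any(frozenset(b) <= frozenset(c) for b in basis)}

def satisfies(attribute_set, basis):
    """Definition 2: L |= A iff L contains some minimal authorized set B in A_0."""
    return any(frozenset(b) <= frozenset(attribute_set) for b in basis)

universe = {"a1", "a2", "a3", "a4"}
basis = [{"a1", "a2"}, {"a3"}]                   # A_0
assert satisfies({"a1", "a2", "a4"}, basis)      # superset of {a1, a2}
assert not satisfies({"a1", "a4"}, basis)        # contains no minimal authorized set
print(len(generate_mas(universe, basis)))        # 10 of the 15 non-empty subsets are authorized
```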
dCP-ABE-MAS In this section, we present a decentralized CP-ABE scheme for monotone access structures, dCP-ABE-MAS. Note that every monotone access structure A is represented by its basis A 0 , which is the set of minimal authorized sets in A. System Initialization(κ). During the system initialization phase, a six-tuple GP = (p, G, g, G T , e, H) is chosen as global public parameters, where p is a prime number greater than 2 κ , G, G T are two multiplicative cyclic groups of the same prime order p, g is a generator of G, e : G × G → G T is a bilinear map and H : {0, 1} * → G is a collision resistant hash function which will be modeled as a random oracle in our security proof. Authority Setup(GP, U j ). Each authority A j ∈ A possesses a set of attributes U j . For each attribute a ∈ U j , A j selects two random exponents t a , t' a ∈ Z p , and computes P a = g ta , P' a = e(g, g) t'a . The public key of A j is published as PubA j = {(P a , P' a )|a ∈ U j }. The master secret key of the authority A j is MkA j = {(t a , t' a )|a ∈ U j }. Authority KeyGen(GP, ID, a, MkA j ). When a user with unique global identity ID ∈ {0, 1} * requests a secret key associated with an attribute a which is held by A j , the authority A j returns SK a,ID = g t'a H(ID) ta to the user. Encrypt(GP, M, A 0 , {PubA j }). Here A 0 is the basis of a monotone access structure A. Let A 0 = {B 1 , B 2 , . . . , B k }, where each B i ⊂ U is a minimal authorized set in A. The set {PubA j } is the set of public keys of all authorities which manage the attributes in A 0 . In order to encrypt a message M ∈ G T , the encryptor chooses a random exponent s i ∈ Z p , for each i, 1 ≤ i ≤ k, and computes C i,1 = M • (∏ a∈Bi P' a ) si , C i,2 = g si and C i,3 = (∏ a∈Bi P a ) si . (2) The encryptor outputs the ciphertext CT = ⟨A 0 , {C i,1 , C i,2 , C i,3 |1 ≤ i ≤ k}⟩. Decrypt(GP, CT, {SK a,ID |a ∈ L ID }). When a user with global identity ID ∈ {0, 1} * receives a ciphertext CT, it first computes H(ID). Suppose the attribute set L ID of this user satisfies the monotone access structure A generated by A 0 = {B 1 , B 2 , . . . , B k }. Then L ID ⊇ B i , for some B i ∈ A 0 . The receiver now aggregates the secret attribute keys associated with the attributes appearing in the minimal authorized set B i and computes K i = ∏ a∈Bi SK a,ID . The message can then be obtained by computing C i,1 • e(H(ID), C i,3 ) / e(K i , C i,2 ) = M • e(g, g) si b'i • e(H(ID), g si bi ) / e(g b'i H(ID) bi , g si ) = M, where b i = Σ a∈Bi t a and b' i = Σ a∈Bi t' a . We will use the notations b i and b' i in our security proof. Remark 1. An encryptor can pack different messages, say M 1 , M 2 , . . . , M k , where k is equal to or smaller than the size of a basis of a monotone access structure, in a single ciphertext by using the following encryption algorithm. multi.Encrypt(GP, {M 1 , M 2 , . . . , M k }, A 0 , {PubA j }). Let A be a monotone access structure generated by its basis A 0 = {B 1 , B 2 , . . . , B k }. For each i, 1 ≤ i ≤ k, the encryptor chooses a random exponent s i ∈ Z p and computes the ciphertext CT = ⟨A 0 , {C i,1 , C i,2 , C i,3 |1 ≤ i ≤ k}⟩, where C i,1 = M i • (∏ a∈Bi P' a ) si , C i,2 = g si and C i,3 = (∏ a∈Bi P a ) si . On receiving the ciphertext CT = ⟨A 0 , {C i,1 , C i,2 , C i,3 |1 ≤ i ≤ k}⟩, the recipient can recover the respective message M i by executing the decryption algorithm Decrypt(GP, CT, {SK a,ID |a ∈ L ID }) of dCP-ABE-MAS. The deployment of this mechanism will be discussed in Section 5.
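The correctness of the decryption equation can be checked mechanically. The sketch below is a deliberately insecure sanity check of the algebra only: every group element is represented by its discrete logarithm modulo a toy prime, so g raised to x becomes x, multiplication in G or G T becomes addition, exponentiation becomes multiplication, and the symmetric pairing e(g x , g y ) = e(g, g) xy becomes the product xy; the prime, the attribute strings and the identity are arbitrary choices for the example.

```python
# Toy sanity check of the dCP-ABE-MAS algebra (NOT a secure instantiation):
# group elements are replaced by their discrete logs modulo a small prime p.
import random

p = 2**31 - 1                          # toy prime order (any prime works for the check)
rand = lambda: random.randrange(1, p)
H = lambda ident: hash(ident) % p      # hash-to-group modelled as hash-to-exponent

attrs = ["ambulance", "road1", "policecar", "lane2"]
t  = {a: rand() for a in attrs}        # master secrets t_a
tp = {a: rand() for a in attrs}        # master secrets t'_a

def keygen(a, ident):                  # SK_{a,ID} = g^{t'_a} * H(ID)^{t_a}
    return (tp[a] + H(ident) * t[a]) % p

def encrypt(m, basis):                 # m is the discrete log of M in G_T
    ct = []
    for B in basis:
        s = rand()
        b, bp = sum(t[a] for a in B) % p, sum(tp[a] for a in B) % p
        ct.append((B, (m + s * bp) % p,    # C_{i,1} = M * e(g,g)^{s_i b'_i}
                      s,                   # C_{i,2} = g^{s_i}
                      (s * b) % p))        # C_{i,3} = g^{s_i b_i}
    return ct

def decrypt(ct, ident, keys):
    for B, c1, c2, c3 in ct:
        if set(B) <= set(keys):                       # L_ID satisfies A
            K = sum(keys[a] for a in B) % p           # K_i = product of SK_{a,ID}
            return (c1 + H(ident) * c3 - K * c2) % p  # C1 * e(H(ID),C3) / e(K_i,C2)
    raise ValueError("attribute set does not satisfy the access structure")

basis = [["ambulance", "road1"], ["policecar", "lane2"]]
m, uid = rand(), "vehicle-42"
sk = {a: keygen(a, uid) for a in ["ambulance", "road1"]}
assert decrypt(encrypt(m, basis), uid, sk) == m       # blinding term cancels as claimed
```

Running the script merely confirms that the blinding factor e(g, g) si b'i cancels exactly as in the equation above; it says nothing about security.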
Security Analysis In this section, we first argue our dCP-ABE-MAS is secure against collusion attacks. We then prove dCP-ABE-MAS is IND-CPA secure in the generic bilinear group model (we refer the reader to [START_REF] Bethencourt | Ciphertext-Policy Attribute-Based Encryption[END_REF] for definition). Security against collusion attacks. A scheme is said to be collusion-resistant if no two or more recipients can combine their secret keys in order to decrypt a message that they are not entitled to decrypt alone. We will show that if two users with identities ID, ID try to collude and combine their secret keys, they will fail in decryption process even though their attributes associated with secret keys satisfy the monotone access structure A. Note that A 0 = {B 1 , B 2 , . . . , B k } is a basis for A. The encryption algorithm blinds the message M with e(g, g) sib i . Consequently, the decryptor needs to recover the blinding term e(g, g) sib i by coupling their secret keys for attribute and identity pairs (a, ID) with the respective ciphertext components. If the decryptor has a satisfying set of keys with the same identity ID, i.e., {SK a,ID |a ∈ B i }, for some i, then the decryptor can recover the blinding term from the following computation. e(K i , C i,2 ) e(H(ID), C i,3 ) = e(g, g) sib i • a∈Bi e(H(ID), g) sita a∈Bi e(H(ID), g) sita = e(g, g) sib i . Suppose two users with different identities ID and ID try to collude and combine their secret attribute keys such that L ID ⊃ B i and L ID ⊃ B i , for any 1 ≤ i ≤ k but L ID ∪ L ID ⊇ B i , for some B i . Then K i = a∈B i,ID SK a,ID • a∈B i,ID SK a,ID , where B i,ID = L ID ∩ B i and B i,ID = L ID ∩ B i . Consequently, there will be some terms of the form e(H(ID), g) sita in denominator and some terms of the form e(H(ID ), g) sita in numerator which will not cancel with each other as H is collision resistant, i.e., H(ID) = H(ID ), thereby preventing the recovery of the blinding term e(g, g) sib i , so is the message M. This demonstrates that dCP-ABE-MAS scheme is collusion-resistant. Guess: ADV 1 outputs his guess ν ∈ {0, 1} on ν. If ν = ν, ADV 2 outputs as its guess µ = 1; otherwise he outputs µ = 0. -In the case where µ = 1, CT is a correct ciphertext of M ν . Consequently, ADV 1 can output ν = ν with the advantage , i.e., Pr[ν = ν|µ = 1] = 1 2 + . Since ADV 2 guesses µ = 1 when ν = ν, we get Pr[µ = µ|µ = 1] = 1 2 + . -In the next case where µ = 0, the challenge ciphertext CT * is independent of the messages M 0 and M 1 , so ADV 1 cannot obtain any information about ν. Therefore, ADV 1 can output ν = ν with no advantage, i.e., Pr [ν = ν|µ = 0] = 1 2 . Since ADV 2 guesses µ = 0 when ν = ν, we get Pr[µ = µ|µ = 0] = 1 2 . Thus, advantage of ADV 2 = Pr[µ = µ] -1 2 ≥ 1 2 • ( 1 2 + ) + 1 2 • 1 2 -1 2 = 2 . This proves the claim 1. This claim demonstrates that any adversary that has a non-negligible advantage in GAME 1 can have a non-negligible advantage in GAME 2 . We shall prove that no adversary can have non-negligible advantage in GAME 2 . From now on, we will discuss the advantage of the adversary in GAME 2 , wherein the adversary must distinguish between e(g, g) sib i and e(g, g) δi . Simulation in GAME 2 : To simulate the modified security game GAME 2 , we use the generic bilinear group model given in [START_REF] Bethencourt | Ciphertext-Policy Attribute-Based Encryption[END_REF]. Consider two injective random maps ψ, ψ T : Z p → {0, 1} 3 log(p) . 
In this model every element of G and G T is encoded as an arbitrary random string from the adversary's point of view, i.e., G = {ψ(x)|x ∈ Z p } and G T = {ψ T (x)|x ∈ Z p }. The adversary is given three oracles to compute group operations of G, G T and to compute the bilinear pairing e. The input of all oracles are string representations of group elements. The adversary is allowed to perform group operations and pairing computations by interacting with the corresponding oracles only. It is assumed that the adversary can make queries to the group oracles on input strings that were previously been obtained from the simulator or were given from the oracles in response to the previous queries. This event occurs with high probability. Since |ψ(Z p )| > p 3 and |ψ T (Z p )| > p 3 , the probability of the adversary being able to guess an element (which it has not previously obtained) in the ranges of ψ, ψ T is negligible. The notations g x := ψ(x) and e(g, g) x := ψ T (x) are used in the rest of the proof. With this notation, g and e(g, g) can be represented as ψ(1) and ψ T (1), respectively. Setup: Note that A is the set of all authorities in the system and U is the attribute universe. The simulator obtains the global public parameters GP from the trusted system initializer and gives ψ(1) to the adversary. The adversary sends a corrupted authority list A ⊂ A to the simulator. For each attribute a ∈ U controlled by honest authorities, the simulator chooses two new random values t a , t a ∈ Z p , computes g ta , e(g, g) t a using respective group oracles and gives P a = ψ(t a ), P a = ψ T (t a ) to the adversary. Query Phase 1: The adversary issues hash and secret key queries, and consequently the simulator responds as follows. Hash queries: When the adversary requests H(ID) for some user identity ID for the first time, the simulator chooses a new, unique random value u ID ∈ Z p , computes g u ID = ψ(u ID ) using group oracle and gives ψ(u ID ) to the adversary as H(ID). The association between values u ID and the user identities ID is stored in Hlist so that it can reply consistently for subsequent queries in the future. Secret key queries: If the adversary requests for a secret key of an attribute a with identity ID, the simulator computes g t a H(ID) ta using the group oracle and returns SK a,ID = ψ(t a + u ID t a ) to the adversary. If H(ID) has not been stored in Hlist, it is determined as above. Challenge: In order to obtain a challenge ciphertext CT * , the adversary specifies the basis A 0 = {B 1 , B 2 , . . . , B k } of a monotone access structure A along with the public keys g ta , e(g, g) t a of attributes a ∈ U which are controlled by corrupted authorities and appeared in A 0 as members in several B i . The simulator then checks the validity of these public keys by querying the group oracles. Now, the simulator chooses a random s i for the i-th minimal set of A 0 , for each i, 1 ≤ i ≤ k and computes b i = a∈Bi t a . The simulator then flips a random coin µ ∈ {0, 1} and if µ = 1, he sets δ i = s i b i , where b i = a∈Bi t a , otherwise δ i is set to be a random value from Z p . The simulator finally computes the components of challenge ciphertext CT * by using group oracles as follows. C i,1 = ψ T (δ i ), C i,2 = ψ(s i ), C i,3 = ψ(s i b i ) for all i, 1 ≤ i ≤ k. The ciphertext CT * = A 0 , {C i,1 , C i,2 , C i,3 |1 ≤ i ≤ k} is sent to the adversary. Query Phase 2: The adversary issues more hash and secret key queries. The simulator responds as in Query Phase 1. 
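A minimal sketch of how such random encodings ψ and ψ T can be maintained is given below (the handles are random hexadecimal strings standing in for the 3 log(p)-bit encodings, and the class and method names are illustrative choices, not part of the scheme): the simulator keeps the exponent behind every handle it has issued, while the adversary only ever manipulates the opaque handles through the oracles.

```python
# Sketch of the generic-group encodings psi / psi_T and the group-operation
# and pairing oracles used by the simulator.
import secrets

class GenericGroup:
    def __init__(self, p):
        self.p, self._enc, self._dec = p, {}, {}

    def encode(self, x):                    # psi(x): fresh random handle per exponent
        x %= self.p
        if x not in self._enc:
            h = secrets.token_hex(12)
            self._enc[x], self._dec[h] = h, x
        return self._enc[x]

    def mul(self, h1, h2):                  # group-operation oracle: g^x * g^y = g^{x+y}
        return self.encode(self._dec[h1] + self._dec[h2])

p = 2**31 - 1
G, GT = GenericGroup(p), GenericGroup(p)    # encodings psi and psi_T

def pair(h1, h2):                           # pairing oracle: e(g^x, g^y) = e(g,g)^{xy}
    return GT.encode(G._dec[h1] * G._dec[h2])

g = G.encode(1)                             # psi(1) handed to the adversary at Setup
a, b = G.encode(5), G.encode(7)
assert pair(G.mul(a, b), g) == GT.encode(12)   # e(g^5 * g^7, g) = e(g, g)^12
```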
We note that if the adversary requests for secret keys of a set of attributes that allow decryption in combination with secret keys obtained from corrupted authorities, then the simulator is aborted. The adversary now can have in his hand, all values that consists of encodings of random values δ i , 1, u ID , t a , t a , s i and combination of these values given by the simulator (e.g., ψ(t a + u ID t a )) or results of queries on combination of these values to the oracles. In turn, we can think of each query of the adversary is a multivariate polynomial in the variables δ i , 1, u ID , t a , t a , s i , where a ranges over the attributes controlled by honest authorities, i ranges over the minimal sets in the basis of monotonic access structure and ID ranges over the allowed user identities. We assume that any pair of the adversary's queries on two different polynomials result in two different answers. This assumption is false only when our choice of the random encodings of the variables ensures that the difference of two polynomial queries evaluates to zero. Following the security proof in [START_REF] Bethencourt | Ciphertext-Policy Attribute-Based Encryption[END_REF], it can be claimed that the probability of any such collision is at most O(q 2 /p), q being an upper bound on the number of oracle queries made by the adversary during the entire simulation. Therefore, the advantage of the adversary is at most O(q 2 /p). We assume that no such random collisions occur while retain 1 -O(q 2 /p) probability mass. Under this condition, we show that the view of the adversary in GAME 2 is identically distributed when δ i = s i b i if µ = 1 and δ i is random if µ = 0, and hence the adversary cannot distinguish them in the generic bilinear group model. To prove this by contradiction, let us assume that the views are not identically distributed. The adversary's views can only differ when there exists two queries Table 2. Possible adversary's query terms in GT (here, the variables a, a are possible attributes, ID, ID are authorized user identities and i, i are indices of the minimal sets in the monotone access structure). ta tat a u ID u ID bi(t a + u ID ta) sis i u ID tau ID u ID (t a + u ID ta) si(t a + u ID ta) sis i b i t a + u ID ta t a (t a + u ID ta) u ID bi sibi(t a + u ID ta) sis i bib i bi tabi u ID si bib i t a si tasi u ID sibi sib i b i sibi tasibi (t a + u ID ta)(t a + u ID t a ) sibib i q 1 and q 2 in G T such that q 1 = q 2 with q 1 | (δi=sib i ) = q 2 | (δi=sib i ) , for at least one i. Fix one such i. Since δ i only appears as ψ T (δ i ) and elements of ψ T cannot be used as input of this oracle takes elements of ψ as input, the adversary can only make queries of the following form involving δ i : q 1 = c 1 δ i + q 1 and q 2 = c 2 δ i + q 2 , for some q 1 and q 2 that do not contain δ i , and for some constants c 1 and c 2 . Since q 1 | (δi=sib i ) = q 2 | (δi=sib i ) , we have c 1 s i b i + q 1 = c 2 s i b i + q 2 and it gives q 2 -q 1 = (c 1 -c 2 )s i b i = cs i b i , for some constant c = 0. Therefore, the adversary can construct the query ψ T (cs i b i ), for some constant c = 0, yielding a contradiction to our claim 2 proved below. Hence the adversary's views in GAME 2 are identically distributed, i.e., the adversary has no non-negligible advantage in GAME 2 , so in the original game GAME 1 by claim 1. Claim 2 : The adversary cannot make a query of the form ψ T (cs i b i ) for any non-zero constant c and any i. 
Proof of Claim 2: To establish this claim, we examine the information given to the adversary during the entire simulation and perform case analysis based on that information. In Table 2, we list all the possible adversary's query terms in G T by means of the bilinear map and group elements given to the adversary during the simulation. It can be seen that the adversary can query for an arbitrary linear combination of 1 (which is ψ T (1)), δ i and the terms given in Table 2. We will now show that no such linear combination can produce a term of the form cs i b i for any non-zero constant c and any i. Note that the adversary knows the values of t a , t a for attributes a that are controlled by the corrupted authorities, so these can appear in a foregoing linear combinations as the coefficients of the terms given in Table 2. We note that s i b i = a∈Bi s i t a . From Table 2 we see that the only way for an adversary to create a term containing s i t a is by pairing s i with t a + u ID t a . Consequently, the adversary can create a query polynomial of the form a∈B (c (i,a) s i t a + c (i,a,ID) u ID s i t a ), (3) for some set of attributes B and non-zero constants c (i,a) , c (i,a,ID) . In order to get a query polynomial of the form cs i b i the adversary must add other terms to cancel the extra terms a∈B c (i,a,ID) u ID s i t a . For any terms c (i,a,ID) u ID s i t a where a is an attribute held by a corrupted authority, the value of t a is revealed to the adversary, thereby the adversary can form the term -c (i,a,ID) u ID s i t a in order to cancel this from the polynomial given in Eq. ( 3). For terms c (i,a,ID) u ID s i t a where a is an attribute controlled by an uncorrupted authority, the adversary cannot construct terms to cancel these from the polynomial given in Eq. ( 3) since there is no term in Table 2 that enables the adversary to construct a term of the form -c (i,a,ID) u ID s i t a . Consequently, the adversary's query polynomial cannot be of the form cs i b i . Suppose for some identity ID, a set B of attributes in B belong to the corrupted authorities or the adversary has obtained secret keys {SK a,ID |a ∈ B } such that B ⊇ B i , for some i, 1 ≤ i ≤ k. Then the adversary can construct a query polynomial of the form a∈Bi (cs i t a + c ID u ID s i t a ), (4) for some non-zero constant c and c ID . The query polynomial given in Eq. ( 4) is same as cs i a∈Bi t a + c ID u ID s i a∈Bi t a = cs i b i + c ID u ID s i b i . The extra term c ID u ID s i b i here will be canceled by using the term u ID s i b i appeared in Table 2. In this case, even though the adversary becomes successful, the constraint mentioned in the Challenge phase of the security game is violated and simulator is aborted. We have shown that the adversary cannot make a query polynomial of the form cs i b i , for any constant c = 0 and any i, without violating the assumptions stated in the security game. This proves the claim 2 and hence the theorem. Applications In this section, we propose an access control scheme in various network scenarios that make use of our dCP-ABE-MAS and then compare our scheme with the existing schemes in the respective areas. Vehicular Ad Hoc Network: Typically, a vehicular ad hoc network (VANET) mainly consists of three kinds of entities-trusted initializer (TI), road side units (RSUs) and vehicles which are equipped with wireless communication devices, called on-board units (OBUs). 
During registration phase, each vehicle is assigned by the TI a set of persistent attributes (e.g., year, model), which remains constant throughout the lifetime of a vehicle, and a set of different pseudonyms, which preserves location privacy of the vehicle. We assume that each vehicle is capable of changing pseudonyms from time to time. In addition, TI gives each vehicle a set of secret keys associated with the persistent attributes for each pseudonym of that vehicle. These attributes and keys are preloaded into vehicle's OBU. There are several RSUs which are distributed across the network in a uniform fashion and each RSU provides infrastructure support for a specified region which we call communication range of that RSU. Each RSU controls a set of dynamic attributes (e.g., road name, vehicle speed). When a vehicle enters within communication range of an RSU, the RSU gives it certain dynamic attributes along with corresponding secret attribute keys after receiving a certificate relating the current pseudonym of the vehicle. We assume that there are secure communication channels between vehicles and TI as well as vehicles and RSUs. Note that the authorities in our dCP-ABE-MAS play the role of RSUs and the attribute universe is combination of all persistent and dynamic attributes involved in the network. Every persistent attribute is different from every dynamic attribute and the attributes controlled by two different RSUs are all different from each other. The pseudonym can be treated as vehicle's identity. The setup and key generation algorithms of TI are same as authorities' setup and key generation algorithms, respectively. Vehicles can encrypt and decrypt messages. RSUs can also encrypt messages for a set of selected vehicles. When a vehicle wants to send a message M to other vehicles in the network regarding the road situation (e.g., a car accident is ahead), it decides firstly the intended vehicles (e.g., ambulance, police car, breakdown truck) and then formulates an associated MAS in terms of minimal authorized sets over some attributes (both persistent and dynamic), for example, A 0 = {B 1 , B 2 , B 3 }, where B 1 = {ambulance, road1}, B 2 = {policecar, lane2} and B 3 = {breakdowntruck, road2}. The encryptor vehicle then uses the public keys of the attributes occurring in the access structure to encrypt a message and transmits the ciphertext. A recipient vehicle whose attribute set satisfies the access structure will only be able to decrypt the message. Refer to the above example, consider a scenario where the encryptor vehicle needs to send a different message to each category of vehicles-ambulance, police car, breakdown truck. Consequently, it has to encrypt each message separately under respective access structure for each category. In turn, the number of encryptions will grow linearly with the number of categories. In such cases, the proposed multi.Encrypt algorithm (described in Remark 1) can pack multiple messages in a single ciphertext, thereby reduces network traffic significantly, in such a way that the respective message will only be decrypted by the intended category of vehicles. This helps in the widespread dissemination of messages and early decision making in such a highly dynamic network environments. The comparison of proposed scheme, say Scheme 1 in the VANET scenario, with the existing scheme [START_REF] Ruj | Improved Access Control Mechanism in Vehicular Ad Hoc Networks[END_REF] is presented in Table 3, 4. 
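Continuing the toy discrete-log representation used in the earlier sketch (still insecure, with made-up attribute strings, identities and prime), the snippet below packs one message per minimal authorized set of the VANET policy above with multi.Encrypt and checks that a police car recovers only its own slot.

```python
# Toy multi.Encrypt / Decrypt for the VANET policy A_0 = {B_1, B_2, B_3}
# (same insecure representation as before: g^x -> x mod p, e(g^x, g^y) -> x*y).
import random

p = 2**31 - 1
rand = lambda: random.randrange(1, p)
H = lambda ident: hash(ident) % p

basis = [frozenset({"ambulance", "road1"}),
         frozenset({"policecar", "lane2"}),
         frozenset({"breakdowntruck", "road2"})]
attrs = set().union(*basis)
t, tp = {a: rand() for a in attrs}, {a: rand() for a in attrs}

def keygen(a, ident):                       # SK_{a,ID} = g^{t'_a} * H(ID)^{t_a}
    return (tp[a] + H(ident) * t[a]) % p

def multi_encrypt(messages):                # one message (its discrete log) per B_i
    ct = []
    for B, m in zip(basis, messages):
        s = rand()
        b, bp = sum(t[a] for a in B) % p, sum(tp[a] for a in B) % p
        ct.append((B, (m + s * bp) % p, s, (s * b) % p))
    return ct

def decrypt(ct, ident, keys):               # returns the slot whose B_i the user satisfies
    for B, c1, c2, c3 in ct:
        if B <= set(keys):
            K = sum(keys[a] for a in B) % p
            return (c1 + H(ident) * c3 - K * c2) % p
    return None

msgs = [rand(), rand(), rand()]             # one report per vehicle category
ct = multi_encrypt(msgs)
police = {a: keygen(a, "veh-police-7") for a in ("policecar", "lane2")}
assert decrypt(ct, "veh-police-7", police) == msgs[1]   # only the police slot is recovered
```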
Distributed Cloud Network: The cloud storage system is composed of five entities: trusted initializer (TI), key generation authorities (KGAs), cloud, data owner (data provider) and users (data consumers). The only responsibility of TI is generation of global public parameters GP of the system and assignment of a unique global identity ID to each user in the system. Each key generation authority controls a different set of attributes and generates public and secret keys for all attributes that it holds. The KGAs are also responsible to distribute secret keys for users' attribute sets on request according to their role or identity. The KGAs could be scattered geographically far apart and execute assigned tasks independently. The authorities in our dCP-ABE-MAS act as KGAs. The cloud is an external storage server that allows the data owners to store their data in the cloud in order to share their data securely to intended users. The data owners enforce an access control policy in the form of a MAS into ciphertext in such a way that only intended users can recover the data and sign the message by employing an efficient attribute-based signature scheme. Finally, the ciphertext along with signature is sent to the cloud. The cloud first verifies the signature and stores the ciphertext if the signature is valid. Each user can obtain ciphertexts from the cloud on demand. However, the users can decrypt the ciphertext only if the set of attributes associated with their secret keys satisfy the access control policy embedded in the ciphertext. Consider a health-care scenario where the patients can be data providers, and doctors, medical researchers and health insurance companies can be data consumers. For example, a patient wishes to store his medical history in the cloud for specific users as follows: brain scan records, M 1 , for any neurologist from hospital X, ECG (Electrocardiography) reports, M 2 , for any cardiologist and Ultrasound reports, M 3 , for any radiology researcher from any medical research center. In such setting, the multi.Encrypt algorithm (described in Remark 1) is well suited to pack all the three messages in a single ciphertext. To this end, the patient first formulates a MAS whose basis is A 0 = {B 1 , B 2 , B 3 }, where B 1 = {neurologist, hospitalX}, B 2 = {cardiologist} and B 3 = {radiologist, researcher}. Once the policy is specified, multi.Encrypt algorithm is executed with the input the set of messages {M 1 , M 2 , M 3 }, A 0 and the respective public keys. Finally, the resulting ciphertext will be stored in the cloud. Refer to the decryption algorithm of dCP-ABE-MAS, only the intended users can decrypt the respective messages. We compare our proposed construction, say Scheme 2 in the context of cloud storage, with the existing schemes [START_REF] Ruj | Privacy Preserving Access Control with Authentication for Securing Data in Clouds[END_REF][START_REF] Yang | DAC-MACS: Effective Data Access Control for Multi-Authority Cloud Storage Systems[END_REF][START_REF] Ruj | DACC: Distributed access control in clouds[END_REF] in Table 3,[START_REF] Bethencourt | Ciphertext-Policy Attribute-Based Encryption[END_REF], where the ciphertext size is considered without signature to make consistent with other schemes. 
Table 3. Comparison of Computation Costs
  Scheme       | KeyGen E_G | Enc E_G | Enc E_G_T | Enc Pe | Dec E_G | Dec E_G_T | Dec Pe
  [14]         | 2γ + 2     | 4α + 1  | 1         | -      | -       | O(β)      | O(β)
  [12, 13, 16] | 2γ         | 3α      | 2α + 1    | 1      | -       | O(β)      | O(β)
  Scheme 1,2   | 2γ         | 2k      | k         | -      | -       | -         | 2
Table 4. Comparison of Communication Overheads
  Scheme       | User Secret Key Size | Ciphertext Size            | Access Policy | Requirement of CA
  [14]         | (γ + 2)B_G           | (3α + 1)B_G + B_G_T + τ    | LSSS          | Yes
  [12, 13, 16] | γB_G                 | (2α)B_G + (α + 1)B_G_T + τ | LSSS          | No
  Scheme 1,2   | γB_G                 | 2kB_G + kB_G_T + τ         | any MAS       | No
The description of all the symbols in Tables 1, 3 and 4 is given at the bottom of Table 1. Theorem 1. The dCP-ABE-MAS scheme is IND-CPA secure against static corruption of authorities in the generic group model. Proof. Let ADV 1 be an adversary who plays the original security game, say GAME 1 , described in Section 3.1. According to GAME 1 , the challenge ciphertext has a component C i,1 which is either M 0 • e(g, g) si b'i or M 1 • e(g, g) si b'i , and the adversary ADV 1 has to distinguish them. Consequently, we define a modified game, say GAME 2 , as follows. Setup, Key Query Phase 1 and Key Query Phase 2 are similar to GAME 1 , but the challenge ciphertext component C i,1 in the Challenge phase is computed as C i,1 = e(g, g) si b'i if µ = 1 and C i,1 = e(g, g) δi if µ = 0, where δ i is selected uniformly at random from Z p , and the other ciphertext components are computed in the same way as in the Encrypt algorithm. Then we have the following claim. Claim 1: If ADV 1 has advantage ε to win GAME 1 , then there is an adversary who wins GAME 2 with advantage at least ε/2. Proof of Claim 1: According to ADV 1 , we can construct an adversary ADV 2 as follows. In Setup, Key Query Phase 1 and Key Query Phase 2, ADV 2 forwards all messages it receives from ADV 1 to the challenger and all messages from the challenger to ADV 1 . In the Challenge phase, ADV 2 receives two messages M 0 and M 1 from ADV 1 and the challenge ciphertext CT * from the challenger. Note that CT * contains C i,1 , which is either e(g, g) si b'i or e(g, g) δi . Now, ADV 2 flips a random coin ν ∈ {0, 1} and replaces C i,1 by C' i,1 = M ν • C i,1 in CT * to compute a modified ciphertext CT', and finally sends the resulting CT' to the adversary ADV 1 . Acknowledgement. The authors would like to thank the anonymous reviewers of this paper for their valuable comments and suggestions.
48,280
[ "1004394", "1003117" ]
[ "301693", "301693" ]
01492919
en
[ "spi" ]
2024/03/04 23:41:50
2014
https://theses.hal.science/tel-01492919/file/These_MOHAMMADI_Ali_UTBM.pdf
Ali Mohammadi Defenced M Ben Ammar Faouzi Rapporteur M Daniel Hissel Examinateur M Seddik Bacha Defenced Joseph Fourier Saint Martin D' Hères M Faouzi Ben Ammar Reviewer M David Bouquain M Abdesslem Djerdir M Davood A Khaburi M Rachid Analysis and Diagnosis of Faults in the PEMFC for Fuel cell Electrical Vehicles ix (1) : IRTES-SET : Institut de Recherche sur les Trasnports, l'Energie et la Société -laboratoire « Systèmes Et Trasports ». (2) : FR FCLAB : Fédération de Recherche Fuel Cell LABoratory. General Introduction In recent years, the Proton Exchange Membrane Fuel cells (PEMFC) have been attracted for transport application. For several years the orientation of the laboratory IRTES-SET (1) program was focused on transportation problems notably in collaboration with FR FCLAB (2) teams within the topic of electrical and hybrid vehicles (EV and Fuel cell electric vehicle (FCEV)). Over the last years in the two laboratories, thesis have treated the problems of electrical vehicle simulation, the drivetrains design, integration and control and devolved fuel cell system, design and control of FCEV. The latter aims of these efforts to obtain at zero emission that is one of the challenges of the scientific researchers in this field. The work of the present thesis is focused on the problem of availability of FCEV drivetrains feed by a Polymer electrolyte membrane fuel cell (PEMFC). The latter is a type of fuel cells being developed for transport applications. Because of their features are includes of low temperature/pressure ranges (50 to 100 °C) and a special polymer electrolyte membrane. However the major problems of using the Fuel cells are currently very expensive to produce. Thus, enhancing the reliability and durability of the PEMFC is the main objective of many researchers. In additional, enhancing the reliability and durability of PEMFC requires a good understanding of the important issues related to operating fuel cells, such as the actual local current density and temperature distributions within a PEMFC. Hence, the present thesis aims to propose a simulation tool able to reach these goals. To carry out these objectives, single cell and stack of fuel cells have to be investigated experimentally in order to establish actual maps of different parameters such as voltage, current density, and temperature. Newton Raphson was used in this work for calibrations and avoid of using expensive current sensors. At the end the ANN has significant was applied to fault isolation and classification. Through this thesis report, the developed work during the last three year, the followed methodologies as well as the obtained results are explained. This report is organized over five chapters as follows: The first chapter presents an overview of the state of the art of electrical and fuel cell electrical vehicles (EVs and FCEVs) over the world and in Belfort. It demonstrates why FCEVs have a long way to browse before fully entering the automotive market. The main locks concern the vehicle availability, safety, cost and societal acceptance. Among the components of the drive train of FCEV explained (PEMFC, Batteries, DC/DC and DC/AC converters and electrical motors), the fuel cell is the most fragile. This is why this research work is focused on enhancing the reliability and durability of PEMFC for automotive applications. x The second chapter is dedicated to the PEMFC modeling and diagnosis. 
Indeed, a good diagnosis strategy helps to extend the lifetime of the FC and thus to improve the availability of the system built around it, for example the drivetrain of an FCEV. It has been established that the FC is subject to many faults during operation. These faults are due to multi-physical phenomena, namely the temperature, the pressure and the humidity of the gases involved within the FC stack and cells. Several models have been developed to understand these phenomena and to evaluate FC performance under different conditions of use, but also to detect, isolate and classify faults when they occur. On the basis of the literature, ANNs appear to be among the most interesting approaches for PEMFC fault diagnosis and modeling. In fact, ANNs have the capability to learn and build non-linear mappings of complex systems such as the PEMFC.

The third chapter introduces a new 3D modeling approach for high-accuracy fault diagnosis in PEMFCs. This 3D model is first applied to a single cell to present the principle of the methodology, including its formulation and calibration based on experimental data. The proposed model is then extended to a stack. Finally, this 3D circuit model is used to train an ANN model intended for on-line diagnosis of the PEMFC and for the management of its degraded modes.

In the fourth chapter, the experimental work is presented. Two set-ups were developed to validate the proposed 3D model. Because of the difficulty of introducing faults into the FCs without destroying them, only the healthy mode is considered in this study. The first set-up concerns a single FC cell based on MES-DEA technology; the second is an FC system based on Ballard technology (the Nexa FC stack). After presenting the two set-ups with their corresponding hardware and software environments, the obtained results are given and discussed with regard to the validity of the proposed model.

The fifth and last chapter shows how the ANN method has been used to develop a diagnosis based on 3D sensitive models for fault isolation in a single PEM cell. The input data of the ANN were analyzed by the FFT method. The advantages of ANNs lie in their ability to analyze large quantities of data and to classify faults according to their types. This study has been developed in the context of a global strategy of supervision and diagnosis of the drivetrain of an FCEV.

Nomenclature – Roman Symbols
Cell activation area (cm²).
Specific reaction surface (cm²).
Catalyst surface area per unit mass.
Average water activity.
Catalyst specific area (theoretical limit for a Pt catalyst is 2400 cm² mg⁻¹, but state-of-the-art catalysts reach about 600–1000 cm² mg⁻¹, which is further reduced by up to 30% when the catalyst is incorporated into the electrode structure).
Water activity.
Specific heat capacity of the stack (J mol⁻¹ K⁻¹).
Surface concentration of the reacting species.
Water diffusivity (cm² s⁻¹).
Activation energy (66 kJ mol⁻¹ for oxygen reduction on Pt).
Standard reference potential at standard state (V).
Voltage drops resulting from losses in the fuel cell.
Reversible voltage including the effect of gas pressures and temperature (V).
Faraday constant (96485 C mol⁻¹).
Flux of reactant per unit area (mol s⁻¹ cm⁻²).
Current density (A cm⁻²).
Limiting current density (A cm⁻²).
Reference exchange current density (at reference temperature and pressure, typically 25 °C and 101.25 kPa) per unit catalyst surface area (A cm⁻² Pt).
Thickness of the membrane (µm).
Catalyst loading (state-of-the-art electrodes have 0.3–0.5 mg Pt cm⁻²; lower loadings are possible but would result in lower cell voltages).
Molar mass of the membrane (1 kg mol⁻¹).
Total mass of the FC stack (kg).
Mass loading per unit area of the cathode.
Number of exchanged electrons per mole of reactant (2 for the PEM fuel cell).
Number of water molecules accompanying the movement of each proton (2.5).
Number of molecules per mole (Avogadro's number, 6.022×10²³).
Number of cells in the stack.
Hydrogen flow rate.
Gas pressure (Pa).
Vapor partial pressure (Pa).
Partial pressure of hydrogen (Pa).
Partial pressure of oxygen (Pa).
Partial pressure of air (Pa).
Partial pressure of vapor (Pa).
Reactant partial pressure (kPa).
Reference pressure (kPa).
Vapor saturation pressure (Pa).
Electric power produced (W).
Total charge transferred.
Charge of the electron (1.602×10⁻¹⁹ C).
Available power produced by the chemical reaction (J).
Electrical energy produced by the FC (J).
Heat loss, mainly transferred by air convection (J).
Net heat energy generated by the chemical reaction (J).
Sensible and latent heat absorbed during the process (J).
Relative humidity of hydrogen and air.
Gas constant (8.314 J mol⁻¹ K⁻¹).
Equivalent membrane resistance (Ω cm²).
Activation loss resistance.
Concentration loss resistance.
Temperature (K).
Tref: reference temperature (298.15 K).
Fuel cell stack voltage (V).
Volumetric flow rate of hydrogen consumption (in standard liters per minute, slpm).
Molar volume of hydrogen at standard conditions (P = 1 atm, T = 15 °C) (L mol⁻¹).
Molar volume (m³ mol⁻¹).
Voltage of the double-layer effect.
Electric work.

Greek Symbols
Electron transfer coefficient (0.5 for the hydrogen fuel cell anode, with two electrons involved, and 0.1 to 0.5 for the cathode).
Pressure coefficient (0.5 to 1.0).
Gibbs free energy (J mol⁻¹).
Hydrogen higher heating value (286 kJ mol⁻¹).
Enthalpy (kJ mol⁻¹).
Activation polarization (V).
Concentration polarization (V).
Ohmic polarization (V).
Entropy (kJ mol⁻¹ K⁻¹).
Fuel cell efficiency (%).
Membrane water content.
Specific resistivity of the membrane for the electron flow (Ω cm).
Dry density (0.00197 kg cm⁻³).
Conductivity (S cm⁻¹).

Chapter I – State of the Art of Fuel Cell Electrical Vehicles (FCEV)
Introduction. In recent decades, the increasing production of internal combustion (IC) engine vehicles has caused severe problems for the environment and human life. Global warming, air pollution and the rapid decrease of fossil fuel resources are now the principal problems in this regard. As a result, electric vehicles are considered a cleaner and safer transportation system. Plug-in electric vehicles, hybrid electric vehicles (HEVs) and fuel cell vehicles (FCEVs) have typically been suggested to replace combustion vehicles in the future [1.1]. The IC engine has never been ideal because of its fuel consumption and the pollution it produces, such as carbon monoxide, nitrogen oxides and other toxic substances. Furthermore, global warming results from the "greenhouse effect" caused by the presence of carbon dioxide and other gases. These gases act as barriers to the Sun's infrared radiation reflected back towards the sky, and the temperature therefore increases. The distribution of electrical power in different categories of human activity is shown in Figure 1.1 [1.2].
Revolutionized the world of electronics and electricity was occurred by the Thyristor. The most important electrical vehicles which used in Apollo by astronauts were called Lunar Roving Vehicle. History of electrical vehicles The modern electric vehicle peaked during the 1980s and early 1990s. One of the weak points in the development of electric vehicles to the market was the energy storage capacity of the battery. Consequently, in recent years, hybrid electric vehicles have been replaced by the electrical vehicles [1.3]. Pieper institutions of Liège, Belgium and by the Vendovelli and Priestly Electric Carriage Company of France built the first hybrid electric vehicle. The Pieper vehicle was a parallel hybrid composed by gasoline engine, the lead-acid batteries and electric motor. The basic series hybrid vehicle was derived from a pure electric vehicle. It was constructed by the French company Vendovelli and Priestly. The Lohner-Porsche vehicle of 1903 that used the magnetic clutches and magnetic couplings (regeneration braking). In 1997, Toyota released the Prius sedan that it was more important and commercialized of hybrid electrical vehicles built by Japanese manufacturers [1.3]. Brief history of Fuel Cells electric vehicles in world:  In 1958 General Electric (GE) chemist Leonard Niedrach devised a way of depositing platinum onto the ion-exchange membrane created by fellow GE scientist Willard Thomas Grubb three years earlier. This marked the beginning of PEMFC used in vehicles today. The technology was initially developed by GE and NASA for the Gemini space program; it took several decades to become viable for demonstration in cars, primarily due to cost [1.4].  In 1959 the Allis-Chalmers tractor was a farm tractor powered by an alkaline fuel cell with a 15 KW output, capable of pulling weights up to 1360 kg.  In 1966 General Motors designed the fuel cell Electro van, to demonstrate the viability of electric mobility. The Electro van was a converted Handivan with a 32 KW fuel cell system giving a top speed of 115 kmph and a range of around 240 kilometers [1.4].  In 1970 based on the Austin A 40, the K.Kordesch utilized 6 KW Alkaline fuel cell and was comparable in power to conventional cars on the road the time [1.4].  In 1993 the Energy Partners Consulier was a proof of concept vehicle that sported a lightweight plastic body and three 15 KW fuel cells in an open configuration; it had a top speed of 95 kmph and a range 95 kilometers [1.4].  In 1994 the NECAR (New Electric Car) was Dimler's first demonstration of fuel cell mobility. A converted MB-180 van, it utilized a 50 kW PEMFC that, alongside compressed hydrogen storage, took up the majority of space in the van [1.4].  In 1997 within a year Daimler, Toyota, Renault and Mazda all demonstrated viable fuel cell passenger vehicle concept. Fuel cells range from 20 KW (Mazda) to 50 KW (Daimler); both the NECAR 3 and FCHV-2 used methanol as fuel instead of hydrogen. The next year GM demonstrated a methanol-fuelled 50 KW fuel cell Opel Zafira-the first publicly drivable concept [1.4].  Between 1998-2000 during this period momentum was growing for the commercial viability of fuel cell vehicles and most of the world's major automakers (including Daimler, Honda, Nissan, Ford, Volkswagen, BMW, Peugot and Hyundai) demonstrated FCEV with varying fuel sources (methanol, liquid and compressed gaseous hydrogen) and storage methods [1.4].  The public attention on FCEV peaked in 2000. 
At this point a realization came that despite the promise of the technology; it was not ready for market introduction. Attention switched to hybrid electric power trains and Battery electric vehicles BEV as technologies that might deliver smaller, nearer-term benefits. The public focus for fuel cell transport shifted from cars to buses [1.4].  2005-2006 saw the unveiling of two are that continue to have an impact on the FCEV market today: the first generation edition of the Daimler F-CELL B class in 2005 and the next generation Honda FCX concept in 2006 [1.4].  In 2008 a fleet of twenty Volkswagen Passat Lingyu FCEV was used for transporting dignitaries at the 2008 Beijing Olympics [1.4].  On 8th September 2009 seven of the world's largest automakers -Daimler, Ford, General Motors, Honda, Hyundai-Kia, Renault-Nissan and Toyota -gathered to sign a joint letter of understanding. Addressed to the oil and energy industries and government organizations, it signaled their intent to commercialize a significant number of fuel cell vehicles from 2015 [1.4]. Daimler: Daimler has a long history of fuel cell activity, spearheading the development of PEMFC for automotive use with its 1994 NECAR. The company remained active in the years after, producing four further variants of the NECAR before revealing its first-generation fuel cell passenger vehicle, the A-Class F-CELL, in 2002. Its second-generation vehicle, the B-Class F-CELL (see Figure .1.3) entered limited series production in late 2010 offering improvements in range, mileage, durability, power and top speed. A fleet total of 200 vehicles are now in operation across the world, including more than 35 in a Californian lease scheme [1.4]. Brief history of fuel cell electric vehicles in France: In July 2013 the Mobilité Hydrogène France consortium officially launched with twenty members including gas production and storage companies, energy utilities and government departments. The group is co-funded by the consortium members and the HIT project. It aims to formulate an economically competitive deployment plan for a private and public hydrogen refueling infrastructure in France between 2015 and 2030, including an analysis of cost-effectiveness. Initial deployment scenarios for vehicles and stations will be published in late 2013 [1.4]. ECCE The F-CITY H 2 The F-City H 2 -a battery-electric vehicle with a fuel cell range extender-has become the first urban electric vehicle with such an energy pack to be homologated in France [1.6] MobyPost vehicle MobyPost is a European project aimed at developing a sustainable mobility concept by delivering a solar-to-wheel solution. The first core element of this environmentally friendly and novel project is the development of ten electric vehicles, which will be powered by hydrogen cells, conceived and designed (for post-delivery use). Besides, the development of two hydrogen production and refueling stations is a second core component of MobyPost. These will be built in the French region Franche-Comté, where photovoltaic (PV) generators will be installed on the roofs of two buildings owned by the project partner La Poste and dedicated to postal services. The PV generators allow for the production of hydrogen through electrolysis. Hydrogen is stored on site in low pressure tanks where it is available for refueling the tanks of the electric vehicles, the latter being powered by an embedded fuel cell producing electricity that directly feeds the electric motors. Figure.1.12. 
Mobypost vehicle The MobyPost vehicle is designed to be ergonomic for postal activities and small enough for very narrow streets. It carries about 100 kg of mail, more than twice as much as the postal motor scooters it's intended to replace. With four wheels, it's more stable than a scooter, especially in snow. Its windshield and roof provide some shelter in bad weather, but it has no doors to get in the way driver's way as he goes in and out making deliveries. With 300g of embedded hydrogen the postmen can do their daily tours (around 40km) at a maximum speed of 45km/H of the Mobypost vehicle [1.7]. Configuration of FCEV From a structural viewpoint, an FCV can be considered a type of series hybrid vehicle in which the fuel cell acts as an electrical generator that uses hydrogen. The on-board fuel cell produces electricity, which either is used to provide power to the machine or is stored in the battery or the super capacitor bank for future use. Various topologies can be introduced by combining energy sources with different characteristics [1.8]. Passive cascade battery/UC system The battery pack is directly paralleled with the Ultra capacitor (UC) bank. A bidirectional converter interfaces the UC and the dc link, controlling power flow in/out of the UC, as shown in Figure .1.13. Despite wide voltage variation across UC terminals, the dc-link voltage can remain constant due to regulation of the dc converter. However, in this topology, the battery voltage is always the same with the UC voltage due to the lack of interfacing control between the battery and the UC. The battery current must charge the UC and provide power to the load side [1.9]. Active cascaded battery/UC system. The passive cascaded topology can be improved by adding a dc/dc converter between the battery pack and the UC, as shown in Figure .1.14 this configuration is called an active cascaded system. The battery voltage is boosted to a higher level; thus, a smaller sized battery can be selected to reduce cost. In addition, the battery current can more efficiently be controlled compared with the passive connection [1.9]. The voltages of the battery and the UC will be leveled up when the drive train demands power and stepped down for recharging conditions. Power flow directions in/out of the battery and the UC can separately be controlled, allowing flexibility for power management. However, if two dc/dc converters can be integrated, the cost, size, and complexity of control can be reduced [1.9]. Multiple-input battery/UC system Both the battery and the UC are connected to one common inductor by parallel switches in the multiple-input bidirectional converter shown in Figure .1.16 Each switch is aired with a diode, which is designed to avoid short circuit between the battery and the UC. Power flow between inputs and loads is managed by bidirectional dc/dc converters. Both input voltages are lower than the dc-link voltage; thus, the converter works in boost mode when the input sources supply energy to drive loads and in the buck mode for recovering braking energy to recharge the battery and the UC. Only one inductor is needed, even if more inputs are added into the system. However, the controlling strategy and power-flow management of the system are more complicated [1.9]. 
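To make the power-flow management mentioned above more concrete, the following minimal Python sketch splits a traction power request between the fuel cell, the battery and the ultracapacitor bank with simple rules: the fuel cell tracks the slowly varying base load with a limited ramp rate, the ultracapacitor absorbs fast transients and braking power, and the battery covers the remainder. All component limits, the ramp rate and the function name are illustrative assumptions, not values taken from this thesis.

# Minimal rule-based power split for an FC / battery / ultracapacitor drivetrain.
# All limits and thresholds below are illustrative assumptions, not thesis data.

P_FC_MAX = 50e3      # fuel cell rated power (W), assumed
P_BATT_MAX = 30e3    # battery power limit (W), assumed
P_UC_MAX = 40e3      # ultracapacitor power limit (W), assumed

def split_power(p_demand, p_fc_prev, fc_ramp=2e3):
    """Return (p_fc, p_batt, p_uc) for one control step.

    The fuel cell follows the demand slowly (ramp-limited), the
    ultracapacitor absorbs fast transients and braking power, and
    the battery covers whatever remains.
    """
    # Fuel cell: track the positive part of the demand with a ramp limit.
    p_fc_target = min(max(p_demand, 0.0), P_FC_MAX)
    p_fc = p_fc_prev + max(min(p_fc_target - p_fc_prev, fc_ramp), -fc_ramp)

    # Remaining power (positive = still needed, negative = braking energy).
    p_rest = p_demand - p_fc

    # Ultracapacitor handles the fast share, battery the rest.
    p_uc = max(min(p_rest, P_UC_MAX), -P_UC_MAX)
    p_batt = max(min(p_rest - p_uc, P_BATT_MAX), -P_BATT_MAX)
    return p_fc, p_batt, p_uc

# Example: a 60 kW acceleration request with the fuel cell previously at 20 kW.
print(split_power(60e3, 20e3))

In a real drivetrain this logic would be replaced by the vehicle's supervisory energy-management strategy, but it shows the kind of arbitration that the converters described above have to implement.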
hybrid energy-storage system Hybrid topology, where a higher voltage UC is directly connected to the dc link to supply the peak power demand, is demonstrated in Figure .1.17, a lower voltage battery is interfaced by a power diode or a controlled switch with the dc link. This topology can be operated in four modes of low power, high power, braking, and acceleration. For light duty, the UC mainly supplies the load, and the battery will switch on when the power demand goes higher. Regenerative energy can directly be injected into the UC for fast charging or into both the battery and the UC for a deep charge [1.9]. Fuel cell History of fuel cell The father of the fuel cell is Sir William Grove in 1839, discovered the possibility of generating electricity by reversing the electrolysis of water. Francis Bacon developed the first successful fuel cell device in 1932, with a hydrogen-oxygen cell using alkaline electrolytes and nickel electrodes. After the NASA space mission in 1950 Fuel cell now have a main role in space programs [1.3]. Also awarded the contract was for the Gemini space mission in 1962. The 1kw Gemini FC system had a platinum loading of 35 mg pt/cm 2 and performance of 37 mA/cm 2 at .78 V. In the 1960 improvements were made by incorporating Teflon in the catalyst layer directly adjacent to the electrolyte, as was done with a GE fuel cell at the time Different type of fuel cell The main different types of fuel cell based on the electrolytes and/or fuel with practical fuel cell type as follows: 1. Polymer electrolyte membrane fuel cells (PEMFCs) PEMFC advantages and drawbacks The PEM Fuel cell has the ability to develop high power density. The application in vehicles and stationary is considerable. One of the important advantages operates at low temperature between 60-80 °C. Moreover, have faster startup and immediate response to instantaneous loads [1.12]. Advantages The advantages of PEM fuel cells are that they: 1) Are tolerant of carbon dioxide. As a result, PEM fuel cells can use unscrubbed air as an oxidant, and reformate as fuel; 2) Operate at low temperatures. This simplifies material issues, provides for quick startup and increases safety; 3) Use a solid, dry electrolyte. This eliminates liquid handling, electrolyte migration and electrolyte replenishment problems; 4) Use a non-corrosive electrolyte. Pure water operation minimizes corrosion problems and improves safety; 5) Have high voltage, current and power density ; 6) Operate at low pressure which increases safety; 7) Have good tolerance to differential reactant gas pressures; 8) Are compact and rugged; 9) Have a relatively simple mechanical design; 10) Use stable materials of construction. The disadvantages The disadvantages of PEM fuel cells are that they: 1) Can tolerate only about 50 ppm carbon monoxide; 2) Can tolerate only a few ppm of total sulfur compounds; 3) Need reactant gas humidification; Humidification is energy intensive and increases the complexity of the system. The use of water to humidify the gases limits the operating temperature of the fuel cell to less than water's boiling point and therefore decreases the potential for co-generation applications. 4) Use an expensive platinum catalyst; 5) Use an expensive membrane that is difficult to work with. Functional components of the cell: Cell components can be separated into four sections: 1) Ion exchange membrane; 2) Electrically conductive porous backing layer; 3) Catalyst layer that (the electrodes); 4) Cell plate. 
As shown in Figure .1.21 the structure of PEM FC in different parts for one cell [1.11]. Membrane The material used is a family perfluorosulfonic acid that the commoner is used by nation (see Figure.1.22). The bulk of the Polymer is fluorinated giving a hydrophobic character. However, in the most part of the membrane, there are sulfonic acid sites which determine the ionic conductivity and have property of hydrophilic [1.11]. Electro-catalyst Layer This layer sits between membrane and a backing layer. The catalyst consists of two electrode anode and cathode that made of platinum. To increase hydrogen oxidant in anode side and oxygen reduction in cathode side are used pure platinum metal catalyst or support platinum catalyst [1.11]. Porous Backing Layer The membrane is surrounded between two porous layers. The quality of the backing layer is typically carbon based. Hydrophobic material will be used in this layer due to, prevent water to gases freely contact the catalyst layer (see Figure.1.23). Performance of backing layer as follows: 1) Act as a gas diffuser; 2) Mechanical support; 3) Electrical conductivity of electron. . Bipolar Plate for Fuel Cell The main task of BPs has to be collected and conduct current from the anode and cathode to another cell or external circuit. In addition, it applied to carry through a cooling system (seeFigure. 1.24). The material of BP will be used must accompany these conditions: 1) The PB must be thin due to, minimum stack volume; 2) It must be light because of stack weight; 3) It must be corrosion resistance in the face of acid electrolyte, oxygen, hydrogen; heat and humidity; 4) It must be reasonably stiff; flexural strength [1.13]. A fuel cell system generally includes a stack and needs a lot of auxiliary equipment to provide the supply of hydrogen and oxygen, the compression and humidification of the gas (e.g. Air compressor), cooling of the packaging through the electric power converters and power control system. A general diagram of the fuel cell system is given in Figure. Hydrogen supply A great majority of fuel cell uses hydrogen as fuel. Hydrogen can be provided either by a hydrogen tank or from external reformer hydrogen. The purpose of a reformer increases the complexity of the fuel cell system, due to processing or recovery of the heat from the reforming process and removal of the product gas must be properly handled. At the output of the reformer, hydrogen is not the only gas to be produced. Indeed, other gases such as carbon dioxide (CO 2 ), carbon monoxide (CO), and sulfur (S), can be produced simultaneously in the reformer. For some technologies, fuel cell (PEMFC, AFC), carbon monoxide and sulfur are considered poison gas. In conclusion, a filtering process gas must be added between the reformer and the fuel cell. The hydrogen in the tank can either be stored under high pressure between 350 and 700 bars (the hydrogen volume decrease by increasing the gas pressure according to the physical law of Boyle-Mariotte), liquid form or in metal cylinders. Before the gas goes into the fuel cell, a pressure regulator should control the hydrogen pressure. Subsystem supply oxygen (air) The air supply in the cathode of the fuel cell is usually compressed using an air compressor. In some applications (e.g. PEM fuel cells), air is humidified before entering into the fuel cell. Depending on the fuel cell technology (pressure, temperature), the air can be compressed by a motor-compressor or a turbine. 
A heat exchanger can also be added in the air supply system in order to preheat the air. In some applications, the fuel cell can be powered by storing compressed form pure oxygen. The use of pure oxygen can significantly increase the performance of the fuel cell and get rid of the air compressor, which has a unit of energy reduced the energy efficiency of the fuel cell system. Cooling system As mentioned before, the electrochemical reaction occurring within the fuel cell generates heat. It must be removed to maintain a constant operating temperature of the fuel cell. For fuel cells, low power, natural convection at the surface of the cell or the cooling fan (forced convection) is sufficient to get rid of the heat. In the case of fuel cells of high power, the cooling air is not sufficient to transfer heat. Hence, complex cooling systems, such as cooling water must be used. In fuel cells at high temperatures, the heat removed from the fuel cell applications to be utilized again for purposes co-generation, thus forming a system commonly known as Combined Heat and Power (CHP). Power converters The output voltage of the fuel cell varies depending on the supplied electric current (polarization curve). To maintain a constant output voltage, power converters are used as an interface between the fuel cell and the load. Sub-control system As discussed above, the fuel cell needs a large number of auxiliary equipment. In order to insure proper functioning of the system in terms of performance and safety, it is necessary to have a control system to oversee the various subsystems. A subsystem control well designed enables the fuel cell will operate in the best conditions. Diagnosis of PEMFC Fault diagnosis consists of three levels (fault detection, Isolation and analysis), accumulation dates from system, fault diagnosis and fault classification that is explained as following: 1) Accumulation dates: for this purpose, different way proposed specific electrochemical impedance spectroscopy (EIS), linear sweep Voltammetry (LSV), cyclic Voltammetry (CV), etc. which to carry out of the variation of output based on different operating conditions of input; 2) Extract fault from healthy mode: Depends on the fault extract from the original data of system miscellaneous way such as, FFT, WT and STFT; 3) Faults classification: At this stage, the following methods such as NN, FL, Neural-fuzzy and BN applied more than another. Various diagnostic tools employed in the characterization and determination of fuel cell performances are summarized into two general categories: 1) Electrochemical techniques. 2) Physical/chemical methods [1.15]. Batteries Among the available choices of portable energy sources, batteries have been the most popular choice of energy source for EVs since the beginning of research and development programs in these vehicles. The EVs and HEVs commercially available today that use batteries as the electrical energy source. The various batteries are usually compared in terms of descriptors, such as specific energy, specific power, operating life, etc. Similar to specific energy, specific power is the power available per unit mass from the source. The operating life of a battery is the number of deep discharge cycles obtainable in its lifetime or the number of service years expected in a certain application. 
The desirable features of batteries for EV and HEV applications are high specific power, high specific energy, the high charge acceptance rate for recharging and regenerative braking, and long calendar and cycle life. Additional technical issues include methods and designs to balance the battery segments or packs electrically and thermally, accurate techniques to determine a battery's state of charge, and recycling facilities of battery components. Above all, the cost of batteries must be reasonable for EVs and HEVs be commercially viable [1.16]. The major types of rechargeable batteries considered for EV and HEV applications are: 1) Lead-acid (Pb-acid);  DC/DC Converters: Power electronic converters which change the level of DC source to a different level of DC, keeping regulation in consideration, are known as DC/DC [1.16].  DC/AC Inverters: Generally, the single-phase, full bridge DC/AC inverters are popularly known as "H-Bridge" inverters. These DC/AC inverters are basically either voltages source/fed inverters (VSI) or current source/fed inverters (CSI). In the case of a VSI, the input voltage is considered to remain constant, whereas in a CSI, the input current is assumed to be constant [1.16]. Topologies of voltage source inverter (VSI), current source inverter (CSI), Z-source inverter (ZSI), and soft switching inverter can be used in traction drives. Electric motors Electric machines can be utilized in either the motoring mode or the generating mode of operation. In the motoring mode, these machines use electricity to drive mechanical loads, while, in the generating mode, they are used to generate electricity from mechanical prime movers. The motor is the main component of the drive train of an EV. In addition, nowadays the electric motor is also widely used in ancillary devices of car such as power steering, air conditioning, windows up ... etc. Challenges of FCEV Several challenges must be overcome before fuel cell vehicles (FCVs) will be a successful competitive alternative for consumers [1.17]. These challenges concern the hydrogen and fuel cell technology, the cost but also the societal acceptance. Onboard Hydrogen Storage Some FCEVs store enough hydrogen to travel as far as gasoline vehicles between fill-ups-about 300 miles-but the storage systems are still too large, heavy, and expensive. FCVs are more energy efficient than conventional cars, and hydrogen contains three times more energy per weight than gasoline does. However, hydrogen gas contains only a third of the energy per volume gasoline has, making it difficult to store enough hydrogen to go as far as a gasoline vehicle on a full tank-at least within the same size, weight, and cost constraints [1.17]. Vehicle Cost FCEVs are currently too expensive to compete with hybrids and conventional gasoline and diesel vehicles. But costs have decreased significantly and are approaching DOE's goal for 2017 (see graph 1.26). Manufacturers must bring down production costs, especially the costs of the fuel cell stack and hydrogen storage [1.17]. Fuel Cell Durability and Reliability Fuel cell systems are not yet as durable as internal combustion engines and do not perform as well in extreme environments, such as in sub-freezing temperatures. Fuel cell stack durability in real-world environments is currently about half of what is needed for commercialization. 
Durability has increased substantially over the past few years from 29,000 miles to 75,000 miles, but experts believe a 150,000-mile expected lifetime is necessary for FCEVs to compete with gasoline vehicles [1.18]. Getting Hydrogen to Consumers The extensive system used to deliver gasoline from refineries to local filling stations cannot be used for hydrogen. New facilities and systems must be constructed for producing, transporting, and dispensing hydrogen to consumers [1.17]. Competition with Other Technologies Manufacturers are still improving the efficiency of gasoline-and diesel-powered engines, hybrids are gaining popularity, and advances in battery technology are making plug-in hybrids and electric vehicles more attractive. FCVs will have to offer consumers a viable alternative, especially in terms of performance, durability, and cost, to survive in this ultra-competitive market [1.17]. Safety Hydrogen, like any fuel, has safety risks and must be handled with caution. We are familiar with gasoline, but handling compressed hydrogen will be new to most of us. Therefore, developers must optimize new fuel storage and delivery systems for safe everyday use, and consumers must become familiar with hydrogen's properties and risks [1.17]. Page | 25 (1) : IRTES-SET : Institut de Recherche sur les Trasnports, l'Energie et la Société -laboratoire « Systèmes Et Trasports ». (2) : FR FCLAB : Fédération de Recherche Fuel Cell LABoratory. Public Acceptance Fuel cell and hydrogen technology must be embraced by consumers before its benefits can be realized. Consumers may have concerns about the dependability and safety of these vehicles, just as they did with hybrids [1.17]. Thesis objective In recent years, the Proton Exchange Membrane Fuel cells (PEMFC) have been attracted for transport application. For several years the orientation of the laboratory IRTES-SET (1) program was focused on transportation problems notably in collaboration with FR FCLAB (2) teams within the topic of electrical and hybrid vehicles (EV and Fuel cell electric vehicle (FCEV)). Over the last years in the two laboratories, thesis have treated the problems of electrical vehicle simulation, the drivetrains design, integration and control and devolved fuel cell system, design and control of FCEV. The latter aims of these efforts are to obtain at zero emission that is one of the challenges of the scientific researchers in this field. The PEMFC is a very complex device in terms of phenomenon involved in its operation which is multi-physical. Electricity, chemistry, fluidics, thermodynamics and mechanics are domains of physics which are involved to conduct this kind of study. To overcome these difficulties an electric network approach will be used notably for easily taking into account the three space dimensions of the PEMFC stack. The resulted 3D model has to be able to simulate faults to develop an efficient algorithm for faults isolation and classification on the PEMFC. According this model faults can be localized in each point of a single cell. This is helpful for optimization and control of operating conditions. However, the local current density and temperature distributions within a single PEMFC as well as between the fuel cells in a fuel cell stack still require more attention in both the experimental and numerical investigations for better understanding. 
Therefore, the overall goal of the present thesis is to conduct an experimental analysis, with emphasis on both voltage and temperature distributions inside a PEMFC with different operating conditions for stack and single cell. To carry out these objectives, single cell and stack of fuel cells have to be investigated experimentally in order to establish actual maps of different parameters such as voltage, current density, and temperature. Conclusion This chapter has presented an overview of the state of the art of FCEVs over the world and in Belfort. It has been established that the FCEVs have a long way to browse before fully entering the automotive market. The main locks concern the vehicle availability, safety, cost and societal acceptance. Among the components of the drive train of FCEV explained (PEMFC, Batteries, DC/DC and DC/AC converters and electrical motors), the fuel cell is the most fragile. This is why this research work is focused on enhancing the reliability and durability of PEMFC for automotive applications. The chosen way is the good understanding of the important issues related to fuel cell operation. This problematic is detailed in the next chapter, through modeling, simulation and knowledge coming from literature but also developed, locally. Chapter II PEM Fuel Cell Modeling and Diagnosis PEMFC modeling Over the past decade, many proton exchange membrane fuel cell models have been reported [2.1]- [2.5]. Models play an important role in FC development since they facilitate a better understanding of parameters affecting the performance of FC. The models normally focus on one aspect or region of the fuel cell. PEMFC stack is one of the most studied parts in the fuel cell. Generally the stack modeling is divided in three main groups: 1) Empirical/semi empirical model; 2) Mechanistic model; 3) Analytical model. Empirical model The semi empirical model is combined theoretical and algebraic equations with empirically formulas. Empirical models are used when the physical phenomena are difficult to model or the theory rule of the phenomena is not well understood. • Springer et al [2.6] developed a semi empirical model for a FC with partially hydrated membrane. • Amphlette et al [2.7] use semi-empirical relationships to estimate the potential losses and to fit coefficients in a formula. The goal is to predict the cell voltage with the operating current density. This model accounted for activation and ohmic over potential. The partial pressure and dissolved concentration of hydrogen and oxygen were determined empirically as a function of temperature, current density and gas channel mole frictions. • Pisani et al [2.8] also use a semi empirical approach to study the activation and ohmic losses as well as transport limitations at the cathode reactive region. • Maggio et al. [2.9] used a semi empirical model for water transport in a FC. The model concentrations over potential have been affected by allowing the cathode gas porosity to be an empirical function of current density. The effective gas porosity was assumed to decrease linearly with increasing current density. This is due to the increasing percentage of gas. Therefore, the result indicated of dehydration of the membrane is likely to occur on the cathode side than in anode side. • Chan et al.[2.10] studied the effect of CO kinetics in the hydrogen feed on the anode reactive region. 
When hydrogen is obtained from fuel, there are trace amounts of CO present which act as poison to the platinum catalyst and its cause is decreased in the catalyst surface area. The fraction of catalyst an empirical factor was determined by the fraction of catalyst occupied by CO at anode sites. • Maxoulis at al. [2.11] used empirical model in stack FC. They combined the model of Amphelette et al [2.7] with a commercial software ADVISOR, which was used for driving cycle. They studied the effects of the number of cells per stack, electrode kinetics and water concentration in the membrane on the fuel consumption. They obtained that a large number of cells per stack make greater stack efficiency resulting in better economy. The drawback of semi empirical cannot accurately predict performance outside of the range. However, it is very useful for quick prediction for design. They cannot predict the performance or the response of the fuel cell. Generally empirical and semi empirical model are divided as follow: 1.1.1. Design of experiment (DoE) modeling DoE approach aims to design or to characterize FC stack. FCs experimental is generally long and expensive. In addition, there are complex interrelations between physical parameters to make test and implemented. Many aspects and tools of DoE methodology can be of great benefit for various scientific and technological purposes such as: Development of FC materials, components and ancillaries, analysis and improvement of single cell and FC stack performance, evaluation and development of a complete FC system [2.12]. Artificial neural network (ANN) These models are based on a set of easily measurable inputs like temperature, pressure, and current. They are able to predict the output voltage of FC stacks. In order to give more relevance to the time dependence of an output, the feedback loops were designed to provide different time states of the output. Nevertheless, the main drawback of this model is the huge number of experimental tests [2.12]. Modeling approach based on electrical analogies Unlike in DoE and ANN modeling approach, certain knowledge of the behavior of the stack is needed but a few internal parameters of the stack are applied to tune the parameters. Then electric modeling is proposed. In this method the basic idea is to find a common way to represent the different aspect in the stack FC such as different physical laws, thermal and fluidic mode [2.12]. Equivalent electric circuit model In recent years, many researchers have widely investigated the dynamic modeling of the FC with emphasis on electrical terminal characteristics [2.13]- [2.17]. A detail explanation electrochemical property of the FC and simple equivalent circuit including the dynamic effect are reported in [2.18]. The electric model is a simple method to implement the system. The simple electric model represents in Figure .2.1 [2.19]. More complex models are sophisticated were illustrated in Figure. Modeling approach based on energy analogies This kind of approach is involved in a great number of fields of physics trough an energy approach that was carried by Band Graph modeling. The Bond graph is an explicit graphical unified formalism where the energy exchanges within an energy system are described by bonds which represent the power exchanges. A limitation of this model is necessary to describe the majority of energy systems [2.12]. 
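As a concrete illustration of the equivalent-electric-circuit approach described above, the short sketch below simulates a first-order circuit model of a cell: an open-circuit voltage source minus an ohmic drop, with the charge-transfer (activation) resistance in parallel with the double-layer capacitance. The parameter values are illustrative assumptions chosen only to make the example runnable; they are not fitted to any stack discussed in this work.

import numpy as np

# First-order equivalent-circuit PEMFC model (illustrative parameters, not thesis data).
E_OC  = 1.0     # open-circuit voltage per cell (V), assumed
R_OHM = 0.20    # ohmic resistance (ohm*cm^2), assumed
R_ACT = 0.25    # charge-transfer (activation) resistance (ohm*cm^2), assumed
C_DL  = 2.0     # double-layer capacitance (F/cm^2), assumed

def simulate_step(i_dens, t_end=2.0, dt=1e-3):
    """Cell voltage response to a current-density step i_dens (A/cm^2)."""
    n = int(t_end / dt)
    v_act = 0.0                      # voltage across the R_ACT // C_DL branch
    v_cell = np.empty(n)
    for k in range(n):
        # Double-layer dynamics: C_DL * dv/dt = i - v / R_ACT
        dv = (i_dens - v_act / R_ACT) / C_DL
        v_act += dv * dt
        v_cell[k] = E_OC - R_OHM * i_dens - v_act
    return v_cell

v = simulate_step(0.5)
print(f"instantaneous drop: {E_OC - v[0]:.3f} V, steady state: {v[-1]:.3f} V")

The instantaneous drop reflects only the ohmic resistance, while the slower settling towards the steady-state value reproduces the delayed response of the activation loss caused by the double-layer capacitance, which is exactly the behaviour such circuit models are used to capture.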
Within the same scope Energetic Macroscopic Representation (EMR) is identified as the best energy modeling methodology applied to chemical reactions and mass transfer. Mechanistic model In this model, the phenomena internal of the cell are introduced by differential and algebraic equations based on physics and electro chemistry law. These equations are solved using adequate computational methods. The equations describe the electrochemical reaction, mass and charge transfer. Indeed, the require of accuracy water management, the dehydration of membrane, the complex electrode kinetics, the mass transport and the slow rate of oxygen reduction are limiting factors on the FC modeling. Assuming parameters caused different level complexity of models from one dimension to three dimensions. However, resolutions of complex model lead to complex calculations. This mode can be subcategorized as multi domain models or signal domain models. Multi domain approach involved the derivation of different sets of equations for each region of the FC (e.g. anode, cathode gas diffusion regions and catalyst layers). This model depends on three basic phenomenology equations such as Butler-Volmer equation for FC voltage, the Stefan-Maxwell equation for transport phenomena and Nernst-Plank equation for species transport [2.12]. However, Gurau et al. [2.20] are approved that since governing differential equations in the gas flow channels and gas diffusion electrodes are similar, the equations can be combined for both regions. Mechanistic model (signal and multi domain) has been utilized to study a wide range of phenomena including polarization effect, water management, thermal management, CO kinetic, catalyst and flow filed geometry. Mechanistic approaches can be the simulation of transient and steady state response. Moreover, it used to elaborate equivalent circuit models. The disadvantages of these models are how to understand their physical behaviors and improve their performances of the multi physic and multi scale of the FC stack. In addition, various skills and knowledge are needed such as chemistry, electrochemistry, fluid mechanics, thermal, electrical and mechanical engineering. Analytical model In analytical model, many assumptions were made concerning variable profiles within cells in order to approximate analytical voltage versus current density relationship but do not give an accurate transport processes occurring within cells. They are limited to predicting voltage losses and water management. It is useful for quick calculations in simple models [2.12]. Consideration of different modeling Over the past decades a wide range research of steady state models of varying complexity and dimensionality, has been developed to simulate PEMFC performance. These models included 1D models (where the spatial dimension is parallel to flow of current), 2D models (where the planes considered are perpendicular to the cell plates), and more 3D that is complex models (explained more in chapter III). Springer et al [2.6] presented a 1D model for a well humidified PEMFC, which considered activation and consideration losses in the active layer and gas transport in the cathode GDL. They detected that overlook losses in well humidity H 2 /O 2 cells that could be well described by the sum of the high frequency (membrane and contact resistance) and activation losses on cathode side. Bernardi and Verbrugge developed a similar model using the Nernst Planck equation, the Butler-Volmer equation and the Stefan-Maxwell equation. 
The results created a typical model for calculation of contributions to the FC losses (no mass transport limitation). One of the important things to balance between accuracy and calculation time is usually maintained by a number of assumptions. Considerations of these assumptions lead to increasing model complexity and require a more detailed in the physical model, particularly in the porous active layer. Fuel cell basic characteristics Fuel cells are an electrochemical device that converts chemical energy to hydrogen and oxygen in anode and cathode sides into electricity, heat and water. The basic PEM fuel cell reactions are: Anode: Cathode: Overall Cells: The stoichiometry of each reactant gas is an important experimental parameter in FC. A 1:1 stoichiometry refers to the flow rate required to maintain a constant reactant concentration at the electrode at a fixed current density. Usually a higher stoichiometry is required at the cathode side (typically 3-4) than in anode side (typically 1-2) due to sluggish mass transport rate of oxygen [2.21]. The heat (or enthalpy) of a chemical reaction is calculated by the difference between the heat of production and reaction in the cell: Eq2.1 Heat of liquid water is -286 kJ.mol -1 (at 25°C) and heat of H 2 and O 2 are zero we can obtain that enthalpy at 25 °C is equal to -286 kJ.mol -1 Note that negative sign for enthalpy of chemical reaction means heat is being released in the reaction. 286kJ.mol -1 namely the hydrogen's heating value. However, because in every chemical reaction some entropy is produced not all this value can be converted into useful work. The portion of the enthalpy can convert to electricity in fuel cells could be obtained by Gibbs-Free energy law as follow equation: Eq2.2 Indeed, there are some losses in energy conversion due to entropy . As well, as for the reaction obtained from the difference between the heat of formation of products and reactants. Eq2.3 Heat of liquid water is -241.98 kjmol -1 (at 25°C) and heat of H 2 and O 2 are zero we can obtain that enthalpy at 25 °C is equal to -241.98 kJmol -1 . The total charge transferred in a fuel cell reaction can be obtained by: The maximum amount of electrical energy generated in a fuel cell is: Eq2.6 The theoretical potential of fuel cell is: Eq2.7 That is to say at 25°C the theoretical hydrogen/oxygen fuel cell potential is 1.23 Volts. Effect of temperature The theoretical cell potential, this motion changes with temperature: ( ) Eq2.8 Hence, the temperature increasing in a cell leads to a lower theoretical cell potential. Besides, both and are functions of temperature: ∫ Eq2.9 ∫ Eq2.10 Specific heat energy, C p for any gas is also a function of temperature. An empirical relationship may be used as the following equation: Eq2.11 Where a, b and c are the empirical coefficients, different for each gas [2.21]. In fact, the voltage losses in operating condition where the FC temperature decreases allow remedying of the loss of theoretical voltage cell. Figure .2.3 displays the voltage Nernst of cell diminution by increasing the temperature. . Effect of Pressure Gas partial pressure has an important effect on membrane chemical degradation. In an operating fuel cell relating to the change, Gibbs free energy can be obtained by: Eq2.12 After integration and consideration of hydrogen /oxygen fuel cell reaction, the Nernst equation becomes as follows: ( ) Eq2. 13 Then: ( ) Eq2.14 Therefore, by Eq.2.14 cell potential is a function of temperature. 
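The sketch below evaluates the standard thermodynamic relations summarised above: the theoretical potential obtained from the Gibbs free energy (about 1.23 V at 25 °C), its decrease with temperature through the reaction entropy, and the Nernst correction for the reactant partial pressures. The constants used are standard textbook values for the hydrogen–oxygen reaction and are given here for illustration only; they may differ slightly from the values adopted in this thesis.

import numpy as np

F = 96485.0          # Faraday constant (C/mol)
R = 8.314            # gas constant (J/(mol*K))
N_E = 2              # electrons exchanged per mole of H2
DG0 = -237.2e3       # Gibbs free energy of H2 + 1/2 O2 -> H2O(l) at 25 C (J/mol)
DS0 = -163.2         # reaction entropy at 25 C (J/(mol*K)), approximate

def theoretical_potential(T=298.15, p_h2=1.0, p_o2=0.21):
    """Reversible cell potential (V) with temperature and pressure corrections.

    E = -dG0/(nF) + dS/(nF)*(T - Tref) + RT/(nF)*ln(p_H2 * sqrt(p_O2)),
    with partial pressures in atm relative to the 1 atm reference state.
    """
    e0 = -DG0 / (N_E * F)                              # about 1.23 V at 25 C
    e_temp = e0 + (DS0 / (N_E * F)) * (T - 298.15)     # falls as T rises
    return e_temp + (R * T / (N_E * F)) * np.log(p_h2 * np.sqrt(p_o2))

print(f"{theoretical_potential():.3f} V at 25 C, air at 1 atm")
print(f"{theoretical_potential(T=353.15, p_h2=1.5, p_o2=0.21*1.5):.3f} V at 80 C, 1.5 atm")

The second line of output illustrates the two competing effects described in the text: raising the temperature lowers the reversible potential, while raising the reactant pressures partially compensates for it.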
In addition, as following equation cell potential (depended to pressure) can be obtained by: ( ) Eq2.15 Neglecting the changes of ΔH and ΔS with temperature (have a very small error in temperature below 100 °C) the Eq.2.15 become [2.7]: Eq2.16 The partial pressure of the P O2 and P H2 at the cathode and anode side are calculated by equations [2.22]: Eq2.17 Theoretical FC Efficiency: In the case of FC, the useful energy output is the electrical energy produced, and energy input is the enthalpy of hydrogen. Figure .2.5 illustrates the energy inputs and output for FC. Figure.2.5. Energy inputs and output for FC as an energy conversion device. The FC efficiency expresses as: The ideal efficiency decreases with temperature. Specifically, at 60°C the efficiency of FC becomes reduced: Eq2.20 Fuel Cell voltage losses Electrochemical reactions consist of a transfer of electrical charge and change in Gibbs energy [2.18]. Current density is the current (electron or ions) per unit area of the surface. By Faraday's, law the current density as follows: Eq2.21 nF: the charge transferred (Coulombs Mol -1 ) j: the flux of reactant per unit area (Mols -1 cm -2 ) In general, an electrochemical reaction consists, either oxidation or reduction as follows: Eq2.22 Eq2.23 In forward and backward reaction, the flux is specifies by: Eq2.24 Eq2.25 k f = rate coefficient. k b = backward reaction. C ox , C Rd = surface concentration of the reacting species. The net current density is generated between the released and consumed electrons and is defined by: Eq2.26 By using the transition state theory, to calculate of the net current density equation can be rewritten by: [ ] [ ] Eq2.27 At equilibrium, the potential is E r , and the net current is equal to zero, although the reaction moves in both directions simultaneously. The rate at which these reactions proceed at equilibrium is called the exchange current density [ ] [ ] Eq2.28 By comparing two equations, a relation between current density and potential is calculated as follows: [ ] [ ] Eq2.29 Exchange Current Density Exchange current density is not constant in chemical reactions. Because, based on equation 2.27 it is a function of temperature. In addition, also it is a function of electrode catalyst loading and catalyst specific surface area. The relative exchange current density at any temperature and pressure is specified by the equation below: ( ) [ ( )] Eq2.30 Where [2.24] ( ) Eq2.31 : is the reversible or equilibrium potential = reference exchange current density (at reference temperature and pressure, typically 25°C and 101.25 kPa) per unit catalyst surface area, , = catalyst specific area (theoretical limit for Pt catalyst is 2400 But state-of-the-art catalyst has about 600-1000 , which is further diluted by the incorporation of catalyst in the electrode structures by up to 30%). = catalyst loading (state-of-the-art electrodes have 0.3-0.5 mgPt ; lower loadings are possible but would result in lower cell voltages). = reactant partial pressure, kPa = reference pressure, kPa = pressure coefficient (0.5 to 1.0) = activation energy, 66kJmol for oxygen reduction on Pt [8] R = gas constant, 8.314 T = temperature, K = reference temperature, 298.15 K = 0.5 for the hydrogen fuel cell anode (with two electrons involved) and 0.1 to 0.5 for the cathode [2.18]. If the exchange current density is high, the surface of electrodes will be more active. 
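The exchange-current-density relation described above (Eq. 2.30) combines the reference exchange current density, the catalyst specific area and loading, a pressure coefficient and the activation energy listed in the parameter definitions. The sketch below evaluates a commonly used form of that relation together with a Tafel-type activation loss; all numerical parameter values are illustrative assumptions within the ranges quoted above, not calibrated values from this work.

import numpy as np

F, R = 96485.0, 8.314

def exchange_current_density(T, p_r,
                             i0_ref=3e-9,   # A/cm^2 Pt at 25 C, 101.25 kPa (assumed)
                             a_c=600.0,     # catalyst specific area, cm^2 Pt / mg Pt
                             L_c=0.4,       # catalyst loading, mg Pt / cm^2
                             gamma=1.0,     # pressure coefficient (0.5 to 1.0)
                             E_c=66e3,      # activation energy for O2 reduction on Pt, J/mol
                             T_ref=298.15, p_ref=101.25):
    """Exchange current density (A per cm^2 of electrode) from the parameters listed above."""
    return (i0_ref * a_c * L_c * (p_r / p_ref) ** gamma
            * np.exp(-E_c / (R * T) * (1.0 - T / T_ref)))

def activation_overpotential(i, i0, T, alpha=0.5):
    """Tafel-type activation loss: dV_act = RT/(alpha*F) * ln(i/i0)."""
    return R * T / (alpha * F) * np.log(i / i0)

i0 = exchange_current_density(T=353.15, p_r=101.25)
print(f"i0 = {i0:.2e} A/cm^2, V_act at 1 A/cm^2 = {activation_overpotential(1.0, i0, 353.15):.3f} V")

Running the example shows the trend stated in the text: a higher temperature (or pressure) increases the exchange current density and therefore reduces the activation overpotential at a given current density.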
The over potential on the cathode is much bigger than in anode due to, the exchange of current density in anode larger than in the cathode. (10 -4 vs. 10 -9 Acm -2 Pt at 25 °C) [2.21]. Static characteristic (polarization curve) The static characteristic of the fuel cell is represented by the polarization curve. First, activation over voltage occurs at FC electrodes, anode and cathode. However, the reaction of hydrogen oxidation at the anode is very rapid while the reaction of oxygen reduction at the cathode is much slower than hydrogen oxidation. Thus, the voltage drop resulting from activation losses is dominated by the cathode reaction conditions. The relation between the activation over voltage and the current density in anode and cathode can be obtained by as follows equations [2.18], [2.23]: [ ] Eq2.32 The specific reaction surface area is given by Eq2.33 Where the catalyst is mass loading per unit area of the cathode, is the catalyst surface area per unit mass of the catalyst and is the catalyst layer thickness [2.24]. Decrease the activation losses The exchange current density and activation losses has strongly related together. Then for reducing the activation losses it is necessary that the i o reduced. For this reason, especially at cathode side the values of i o is the most important factor to improve on FC performance. This is achieved by following ways: 1) Increasing temperature and pressure of the cell; 2) Apply more effective catalysts; 3) Increasing the roughness of the electrodes; 4) Increasing reactant concentration (Such as, apply pure O 2 instead of air). Note that activation losses have significant effect in low and medium temperature FC. However, at high temperature and pressure they are less important. Empirical equation for activation can obtain by: ( ) Eq2.34 Where A, b depend on the electrode and cell condition and V act is only valid for i>b (b=0.04 mAcm -2 ) . Internal and ionic resistance The overall ohmic voltage drop is calculated in the membrane layer. The polymer membrane used is a Nafion made by Dupont, which is widely used in PEMFCs. Nafion conductivity is highly dependent on membrane water content and temperature. Generally, the protonic conductivity of Nafion increases linearly with increasing water content and exponentially with increasing temperature. Hence, the resistivity of the membrane can be expressed by equations: Eq2.35 Where: Eq2.36 There are several mechanisms of water transport across a polymer membrane, namely the water diffusivity. The water diffusivity in Nafion can be expressed by the following expressions depending on water content λ: ( ) ( ) Eq2.37 ( ) ( ) Eq2.38 The membrane water content, λ varies generally between 0 and 14, which is equivalent to the relative humidity of 0% and 100% (ideal conditions) respectively. However, the parameter λ has values as high as 22 and 23 under supersaturated conditions. First, the membrane water content λ can be calculated using the activities of the gas in the anode and the cathode [2.1]: Eq2.39 The vapor saturation pressure is a function of temperature, which is given by [2.25]: Eq2.40 In the case of gas, the activities of the gas are equivalent to relative humidity. The index i is either anode (a) or cathode (c). The membrane water content is calculated by [2.25]: { Eq2.41 The average water activity a m is given by: Eq2.42 Thus, the membrane water content λ m is calculated by equation 2.41 using the average water activity a m , between the anode and cathode water activities. 
Since the proton conductivity of a polymer membrane is strongly dependent on membrane water content λ, the internal electrical resistance is a function of the resistivity of the membrane σ m and the thickness of the membrane tm [2.2]: Eq2.43 Finally, the ohmic overvoltage due to the membrane resistance R m in PEMFCs is given by the following expression: Eq2.44 Concentration losses Then, the voltage drop resulting from concentration losses can be approximated by the following equation [2.23]: Eq2.45 The parameter C, d, i max are constants that depend on the temperature, partial pressure of oxygen in the cathode and finally vapor partial pressure. These parameters can be determined empirically. Furthermore, the parameter i max is the current density that makes a precipitous voltage drop. In addition, empirical equation can be stated as follows [2.18]: Eq2.46 The value of m will typically be about 3 ×10 -5 V, and n about 8 ×10 -3 cm 2 mA -1 . Effective factor in concentration losses 1) Hydrogen is supplied from some kind of reformer 2) The air supply is not well circulated (problem in high current) 3) Removal of water 4) Internal current and Crossover of reactants The structure of polymers and electrolyte is not electrically conductive. However, some electron passes from membrane during hydrogen diffusion from anode to cathode. This fuel crossover and the namely internal current are basically the same phenomenon. The total electrical current can be calculated as follows: Eq2.47 Current density can be obtained from current divided by the electrode active area: Eq2.48 The Charge Double Layer If two different martial are in contract, charge transfer from one to another. In FC, charge double layer occurred between the electron in the electrodes and the ions in the electrolyte. As the example in Figure .2.7 cathode side electrons will gather on the surface of the electrode and H + ions will be attracted to the surface of the electrolyte as results, will be generated an electrical voltage. These accusations have been accumulated near the electrode-electrolytes that the similar behavior such as a capacitor in an electric circuit. Indeed, if the current suddenly changes, voltage takes some time to follow the changes of load. The capacitance of a capacitor is determined by the pattern: Eq49 Where : is the electrical permittivity, A: is the real surface area of the electrode d: is the separation of the plates Where defined as ( ) Eq2.51 The equation of the FC voltage as defined: Eq2.52 Where is a combination of two resistance activation and consideration. Eq2.53 These parameters are frequently changed with electrochemical characteristics, humidity, temperature, and pressure and aging effects. Polarization Curve The electrical domain allows describing the polarization curve and the associated losses (e.g. Activation, ohmic and concentration). Taking into account the latter, the FC stack voltage, E cell produced by FC can be expressed by the following equation: Eq2.54 Where , the theoretical potential can be expressed as the difference between the reversible potential at the anode and cathode [2.18]: Eq2.55 By comparison, E loss is the voltage drop resulting from losses (activation, ohmic and concentration) and can be expressed as follows: Eq2.56 The most important curve for characteristic of FC is a polarization curve that is shown in Figure .2.8. 
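A minimal way to reproduce the polarization curve just described is to combine the reversible potential with the three loss terms: a Tafel-type activation term, the ohmic drop through the membrane resistance, and the empirical concentration term m·exp(n·i) with the typical values of m and n quoted above. The sketch below does exactly that; the reversible potential, the exchange current density and the membrane resistance are illustrative assumptions, while m and n are the quoted typical values.

import numpy as np

F, R = 96485.0, 8.314

def cell_voltage(i, T=353.15, E_rev=1.18, i0=1e-3, alpha=0.5,
                 r_ohm=0.15, m=3e-5, n=8e-3):
    """Single-cell voltage (V) at current density i (A/cm^2).

    V = E_rev - activation (Tafel) - ohmic (i*R_m) - concentration (m*exp(n*i)),
    with m in V and n in cm^2/mA as quoted in the text (hence the factor 1000).
    """
    v_act = R * T / (alpha * F) * np.log(np.maximum(i, 1e-6) / i0)
    v_ohm = i * r_ohm
    v_conc = m * np.exp(n * (i * 1000.0))   # current density converted to mA/cm^2
    return E_rev - np.maximum(v_act, 0.0) - v_ohm - v_conc

for ii in np.linspace(0.05, 1.1, 8):
    print(f"i = {ii:5.2f} A/cm^2   V = {cell_voltage(ii):5.3f} V")

The printed points reproduce the familiar shape of Figure 2.8: a steep initial drop dominated by activation losses, a quasi-linear ohmic region, and a rapid fall at high current density where the concentration term takes over.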
Thermal Domain The thermal domain describes the heat generation, heat exchanges by convection in the channels, heat diffusion by conduction or by mass transport, radiation and natural convection. Besides, in order to improve the accuracy of the thermal domain, water phase change has been taken into consideration. During stack operation, gaseous water condenses in liquid when the water vapor pressure reaches to the saturation pressure. Inversely, if the vapor pressure decreases, the liquid vapor can evaporate. During a phase change, temperature remains to be constant, but a heat exchange called "latent heat" takes place. The effect of the water phase change on the temperature distribution in PEMFC has been studied in the literature [2.25], [2.26], [2.27]. These authors have demonstrated that the water phase change has a large influence on the final temperature predicted by the thermal domain of PEMFC stack. The net heat generated by the chemical reaction inside the FC, which causes the rising or falling of temperature, can be written as: ̇ Eq2.57 ̇ ̇ ̇ ̇ ̇ Eq2.58 All the mathematical expressions of the Eq.2.57 are given in details in [2.18]. At steady state, ̇ and consequently the FC operates at some constant temperature. During transitions (e.g. Load change, operating conditions change, faults), the temperature of the FC stack will rise or drop according to Eq.2.57. In addition, efficiency and hydrogen consumption have been implemented in the FC model (Figure .2.9). The volumetric flow rate of hydrogen consumption in slpm (standard liters per minute) is given by the following equation [2.18]: In comparison, the FC efficiency is defined as a ratio between the electricity produced and hydrogen consumed [2.23]: ̇ Eq2.59 Eq2.60 Electric power produced is a product between FC stack voltage and current: Eq2.61 According to Faraday's law of electrolysis, hydrogen consumed is proportional to FC stack current: ̇ Eq2.62 Hence, the energy value of hydrogen consumed in Watts is given by: ̇ Eq2.63 Effect of the operating condition on performance of the fuel cell 3.1. Temperature Effect of temperature on the activation over voltage To obtain better performance in activation losses it is necessary that the temperature increases because the presence of the temperature effect, in the Tafel constant. While, the impact of increasing of the exchange current density is more important than any increase of Tafel constant by the effect of the temperature, this voltage drop is much nonlinear. According to Figure .2.10, the curve of activation voltages is reduced by increasing the temperature. Effect of temperature on the ohmic over voltage In the majority of FCs, the resistance is mainly caused by the electrolyte and the cell interconnects and bipolar plates. The three ways to reduce the internal impedance of the FC are as follows [2.18]: 1) Use the electrodes with the best feasibility conductivity. 2) Good design and use of suitable materials for the bipolar plates or cell interconnects. 3) To choose the electrolyte thin as much as possible. The effect of temperature rising on ohmic losses can be demonstrated by the Figure .2.11. FC function generally improves with rising in temperature. Nevertheless, the increase of temperature has a negative effect on voltage loss (see Eq.2.2). In addition, increasing the temperature has resulted from reduction of activation and concentration losses. Pressure Hydrogen and oxygen must be pressured at the fuel cell inlet. 
The performance of FC was changed by variation of pressure as follows: Effect of pressure on the activation losses Activation losses are related to sluggish electrode kinetics. The rising of the exchange current density is reducing the impact of the activation over voltage [2.18]. Figure. Effect of pressure on the concentration losses The reactant concentration at the catalyst surface depends on current density. Thus, increasing pressure causes to improve current density then reduces the concentration voltage losses (see Figure .2.15). Cell Voltage losses depends on Pressure The aim of raising pressure is because of the increase in FC voltage. FC operates at ambient pressure (1 bar) or it may be pressurized. An FC potential voltage is improved when the pressure is increased as it is illustrated in Figure .2.16, [2.2] and [2.18]. Humidity Affect Humidity on Resistive (ohmic) Lack of water in the PEMFC will run for membrane become dry. The parameters affected by this phenomenon are included drawback of proton transfer, reducing conductivity and increasing ohmic resistance. Thus, lead to decrease power generation efficiency. The different curves of ohmic resistance are illustrated in Figure. Increasing humidity improves the cause of conductivity in the membrane. Therefore, FC voltage will be modified. This variation is illustrated in Figure .2.18. The Table .2.1 summarized the effects of the operating parameters (e.g. Temperature, mass flow, humidity, current…) on different losses such as activation, concentration and ohmic losses in the FC. The used Symbols and abbreviations in this B, i L : The empirical constants will be affected by the operating condition in a fashion that is unknown. N/A: The parameter does not apply in any circumstances. Yes: Indicates that the operating parameter can be been incorporated into the model. (1), ( 2), ( 8), ( 9), ( 11), ( 12): It is assumed that the stack is operated so these parameters do not affect the model. The stack should be operated so that the membrane is well hydrated without stack flooding. This assumption may not hold in ( 1) and ( 2), in this case the opposition would be empirically modeled in regard to significant operating parameters. (3): The activation loss is defensible in the internal current. In the case wherein is too difficult to model it should be omitted from the model. If this were done in the model would it only be valid for currents above ~0.3A. (4), ( 5), (6), and (7): These parameters will only affect the performance of V ohm loss when the stack is run in extreme cases. At this stage, the limits of these extreme cases are unknown. If one of these parameters greatly affects the stack resistance then this parameter should not be operated to that level when the resistance starts to change. (10), ( 13 PEMFC diagnosis: Introduction of Fault Diagnosis The fault diagnosis includes the following three aspects: • Fault detection: distinguished fault detection means to discover the occurring fault with intrusive and/or non-intrusive methods. • Fault isolation: which is, finding the location of the faults. • Fault analysis and identification (classification): in order to arbitrate the type and magnitude of the faults and to estimate and prevent the future faults based on background studies [2.29]. The main use aimed in the present research work is to estimate the state of health of the FC so that to adapt the power control of the FC notably through the management of the degraded modes. 
In brief, Neural Network has been used in both methods because of the best choice of approximation in nonlinear. However, the processes of training need a large number of the data under different operating conditions that a gathering might costly and time consuming. Nowadays, modern control systems with different issues such as availability, cost efficiency, reliability, operating safety and environmental protection are taken into consideration. This involves a fault diagnosis system that is capable of detecting plant, actuator and sensor faults when they occur and of identifying and isolating the faulty component. The fault acting upon a system can be divided into three types of faults, see Figure .2.19 [2.33]. 1) Sensor (Instrument) faults. Fault acting on the sensors. 2) Actuator faults. Fault acting on the actuators. 3) Component (system) faults fault acting upon the system or the process we wish to diagnose. PEMFC Fault Conditions All possible faults that effect on the performance of PEMFCs was presented and the available fault detection technique compared and summarized in Faults Tolerance strategies Figure .2.20 gives the diagram of faulty strategies that include some kinds of planned and unplanned maintenance and planned and unplanned repair. Planned maintenance is based on fixed times and/or fixed run hours. An improvement leaves the fixed schedule and applying maintenance on demand, which is based on the observed real status. Planned repair is usually within set down periods while unplanned repair is forced by faulty components. A reconfiguration is possible if redundant components can be used, which requires a redesigned fault-tolerant system. Hence, maintenance procedures are performed to prevent failures, repair procedures (to remove failures and faults) and reconfiguration (to prevent failures through redundant components usually with some degradation of functions) [2.35]. Diagnosis levels In order to increase of reliability durability of FC, one of the most important things is fault diagnosis (FD). Nowadays, different diagnoses have been developed but FD is means that to detect, isolate and analysis faults that happened in FC according to different operating condition. The fault detection is to track faults happen throughout the operating condition. Fault isolation defines the place of fault in the system. Typically based on analytical model, two basic models can be deliberate, model base and non-model base [2.31], [2.32]. Model base/ Non-model base Diagnosis model-based methods are comparing the available measurements of the real system (experimental) with simulation system model (see Figure .2.21). These methods are categorized in three groups [2.31]: • Physical model (algebraic and differential equations), • Experimental model (non-liner and complex model), • Combination of the physical and experimental. Residual generation and evaluation Diagnosis purpose is to generate a fault-indicating signal-residual, using available input and output information from the monitored system. This auxiliary signal is created to reflect the beginning of possible errors in the analyzing system. The residual should be normally zero or close to zero when no fault is present, but should be distinguishable from zero when a fault occurs. In an Ideal condition, the residual is characteristically independent of the system input and output. The algorithm used to generate residuals is called a residual generator. 
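The residual idea introduced above can be summarized in a few lines: the residual compares a measured signal with the output of a model of the healthy system, and a fault is declared when the residual leaves a small band around zero. The model, signal names and threshold below are placeholders used only for illustration.

```python
def generate_residual(y_measured, y_model):
    """Residual r = measured output - model output (ideally ~0 in the fault-free case)."""
    return [ym - yh for ym, yh in zip(y_measured, y_model)]

def evaluate_residual(residual, threshold):
    """Simple threshold-based evaluation: True where a fault is indicated."""
    return [abs(r) > threshold for r in residual]

# Hypothetical usage: measured cell voltage vs. prediction of a healthy-mode model
v_meas  = [0.68, 0.67, 0.66, 0.52, 0.51]   # V, illustrative samples
v_model = [0.68, 0.68, 0.67, 0.67, 0.66]   # V, healthy-model prediction
fault_flags = evaluate_residual(generate_residual(v_meas, v_model), threshold=0.05)
# -> [False, False, False, True, True]: a fault signature appears in the last samples
```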
Residual generation is a procedure for extracting fault signals from the system, with the fault signal represented by the residual signal (r). The residual should ideally carry only fault information and to ensure reliable fault detection, the loss of fault information in residual generation should be as little as possible [2.36]. In each fault detection algorithm, there should be a component of the evaluation based on the residual to be used by the analytical consideration. It could be applied in different methods such as fuzzy logic or neural network. In this stage, the decision about the existence of a fault is made together with a possible indication of this event generating the corresponding fault signal. This signal should carry information about the effect of the fault on the residual set so that the fault isolation module can isolate this fault [2.37], [2.38]. Different kind of Fault Diagnosis Various diagnostic tools employed in the characterization and determination of fuel cell performances are summarized into two general categories: 1) Electrochemical techniques. 2) Physical/chemical methods. Fault diagnoses consist of three levels (fault detection, Isolation and analysis), accumulation dates from system, fault diagnosis and fault classifications that are explained as follow: 1) Accumulation dates: for this purpose, different ways are proposed specifically, electrochemical impedance spectroscopy (EIS), Linear Sweep Voltammetry (LSV), Cyclic Voltammetry (CV), etc. The goal is to carry out the variation of output based on different operating conditions of input. 2) Extract fault from healthy mode: Depends on the fault extracting from the original data of system miscellaneous way such as, FFT, WT and STFT. 3) Faults classification: at this stage, the following methods such as NN, FL, Neuralfuzzy and BN applied more than other technics. Accumulation dates methods Polarization curve Different parameters can be characterized by polarization curve. Such as, cell polarization resistance, OCV, exchange current density, Tafel slope, etc. Hysteresis By registration of the plot due to increase (until limiting current) and decrease in current density, the hysteresis will be created by the two conclusion curves. This hysteresis can be useful to recognize the flooding and drying. Indeed, if in high current density, downward I (V) curve lower than the upward I(V) curve it means that the indicated flooding (in high current density more water will be produced by chemical reaction). Indeed, if in high current density, downward I (V) curve bigger than the upward I(V) curve it means that the indicated during. Individually this method is not sufficient for fault diagnosis in FC. These data are not enough to characterize the fuel cell performances (such as electrode diffusion, membrane resistance, etc.). It is used in steady state but it isn't a suitable model to evaluate losses [2.39]. Electrochemical Impedance Spectroscopy The Electrochemical impedance spectroscopy (EIS) uses small AC perturbation signal at various frequencies from (10kz to 1 Hz) in the dynamic state cell). The impedance of the cell can be obtained by taking the ratio of AC voltage/AC current (Figure.2.22). This technique can be applied to the electrochemical system (half-cell, signal cell, stack, etc.) This method is significantly used to characterize the water management flooding and drying). Many parameters can be obtained from this method such as, activation and concentration resistance and electrolyte resistance. 
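As an illustration of how an impedance spectrum is obtained from the ratio of the AC voltage to the AC current, the sketch below computes the impedance of a simple Randles-type equivalent circuit (electrolyte resistance in series with a charge-transfer resistance in parallel with the double-layer capacitance) over the 1 Hz–10 kHz range mentioned above. The circuit topology and its parameter values are illustrative assumptions, not the measured characteristics of the studied cell.

```python
import numpy as np

# Illustrative Randles-type parameters (assumptions)
R_el = 5e-3    # electrolyte (membrane) resistance, ohm
R_ct = 20e-3   # charge-transfer (activation) resistance, ohm
C_dl = 0.5     # double-layer capacitance, F

f = np.logspace(0, 4, 100)                     # 1 Hz ... 10 kHz
w = 2 * np.pi * f
Z = R_el + R_ct / (1 + 1j * w * R_ct * C_dl)   # Z(w) = V_ac / I_ac for this circuit

# The Nyquist plot (-Z.imag vs. Z.real) is a semicircle of diameter R_ct shifted by R_el:
# the high-frequency intercept gives the ohmic/membrane resistance and the semicircle
# width the charge-transfer resistance, which is how EIS separates the loss contributions.
```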
The EIS is difficult to utilize in high power FC [2.30]. It is for fault diagnosis of PEMFC in situ and nonintrusive method. Membrane resistance measurement methods Current resistance measurement The three references of ohmic voltage loss are: a) Resistance in ion movement inside the electrolyte b) Resistance to electron carried inside the cell components c) Contact resistance Current interrupt method This method works in time-domain. The cell current is quickly interrupted and cell voltage is measured before and during the interrupt. This method applied widely in electrochemical devices (fuel cell, Battery, etc.) to obtain an ohmic resistance evaluation. The benefit of this model, there is no needing for any extra equipment because the interrupt can be directly come from a load. The downside of this model is data extracting will be degraded by using long cable connection and will put a critical perturbation on the cell [2.40]. High frequency resistance To achieve internal resistance in the FC, a small AC signal is used to apply the electronic load to adjust the DC load current. This model is suitable for congenital and periodical application along the normal condition cell (see Figure. High frequency milliohm meter method or AC resistance method In this method, external AC milliohm meter has been used to implement a signal and load performances and it is paralleled to circuit (see Figure.2.26). Based on related AC signal to DC current, least variation in FC will be measured. Hence, this approach is interesting in the investigation of the functioning of FC. However, the accuracy of this method according to determine the high resistance will become low [2.40]. Pressure drop method Due to friction in the electrodes and channel of gas flow, there is approximately 30% different pressure drop created between input and output [2.41]. According to the Darcy law for gas flow rate, pressure gas will be increased by water existence in fuel cells. In another world, flooding level is a direct impact on the decline pressure drop. Moreover, augmentation of water presence be related the decrease of temperature and increase in current. Water accumulates in cathode side more than in anode side because of air flow rate is slower than the hydrogen flow rate (dynamic viscosity in hydrogen is sluggish compared to oxygen). Indeed, flooding usually will be happened in cathode side [2.42]. Consideration on faults problem in PEMFC PEM FC is an electrochemical system that is based on electro-catalytic reaction, hydrogen oxidation in anode side and oxygen reduction in cathode side. In a FC, failures can be caused by: 1) Long time operation (natural ageing); 2) Operational incidents, such as MEA contamination or reactant starvation (see Figure .2.28). A common consequence of these failures is the voltage. In fact, if a fault occurs in FC, the voltage can be either increase or decrease according to the fault. In summary, FC stacks voltage is a first indication of a degraded working mode. In addition, water management and temperature are effects crucially important for healthy operation of a PEMFC. Water management in PEMFC Electro-Osmotic Back-diffusion and produced water produced by reaction have essential roles in water management. Drying at anode side with high current density because of the electro-osmotic that overcomes to back-diffusion phenomena. 
Fault degradation according to accumulation of water in FC [2.45]: 1) Drying; 2) Flooding 3) Benefit to increase proton conductivity ; 4) Blocks the gas diffusion layer and can lead to starving ; 5) Large quantities of water because of mechanical degradation for instance; corrosion and contamination. Drying Water is essential for proton conductivity in the numeric form of the membrane and active layers by dissociating the sulfuric acid bond. Leakage of water in fuel cell causes to impede of proton to the catalyst surface area thus activation loss will be increased. Isolation drying faults is occurred by comparing between osmotic drag at the anode side and back diffusion at cathode side (especially in high current in anode side electro osmosis is bigger than cathode side). In addition, at cathode side water created more than anode side. Eventually, decreasing of the lifetime of the fuel cell because of drying that creates holes in the membrane [2.40]. The probability of drying generally is happening on anode side. During long term, operating of fuel cell drying provokes to irreversible damage of the membrane and cause to break the membrane. The main factor that is created drying follows: 1) Feeding inlet gases without of sufficient humidification; 2) Increase of cell temperature results to enhance evaporation ; 3) Electro-osmosis particular at high current [2.45]. Flooding Flooding happened at cathode side and anode side. Flooding is occurring in accumulation water in flow filed channel or/and electrode cell. Then block the gas channels and after a several minutes, droplets drive to voltage drop quickly. Flooding occurred in all operating conditions especially in high current density. Short time flooding can be irreversible; however, oxygen feeding is blocked and conduces to mechanical degradation of the MEA material by long time operating fuel cell [2.40]. Flooding causes to increase in mass transport losses (in high current density). Thus, performance of FC is reduced. However, voltage can be recovered by fast purging at anode and cathode. Flooding can be affected on a lifetime and durability of FC in long term operation. The presence of water (in the long term operating) corrosion will be happened in electrodes, the gas diffusion media and membrane. Therefore, ohmic losses of FC are increased by this phenomenon and cause the performance of FC decrease [2.45]. Cathode flooding Water transportation at cathode side was occurred by following factors: 1) Water production in oxygen reduction reaction; 2) Electro osmosis is phenomenal to pull the water molecules from anode to cathode; 3) Saturated water by more humidified inter air gases. In addition, to eliminate water in cathode side influence factors as follows: 1) Back-diffusion will happen when water quantity at anode side more than cathode side. In additional, the influence of back-diffusion is in low current more than elector-osmotic. 2) Water evaporation is a way to speed up to removal of water in cell [2.45]. Anode flooding At cathode because of the creation water flooding will be occurring more than anode sides. 1) Flooding in anode mostly happens in low current density. Moreover, low temperature and high condensation in anode channel lead to anode flooding. 2) Back-diffusion can be factored of flooding at anode side. 3) Injection water for cooling and humidification are caused by flooding [2.45]. In brief, we need to avoid membrane drying at anode side and flooding at cathode side. 
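The symptoms described above (voltage decay together with an increased cathode pressure drop for flooding, voltage decay together with a rising membrane resistance for drying) can be turned into a very simple rule-based indicator, as sketched below. The thresholds and variable names are purely illustrative assumptions; a real diagnosis would rely on the model-based and neural-network tools discussed later.

```python
def water_fault_hint(dV_dt, dP_cathode, R_membrane, R_membrane_ref):
    """Rough rule-of-thumb hint for water-management faults.

    dV_dt          : trend of the cell voltage (V/s), negative when the voltage decays
    dP_cathode     : relative increase of the cathode pressure drop vs. a healthy reference
    R_membrane     : current estimate of the membrane resistance (ohm)
    R_membrane_ref : membrane resistance in the well-hydrated reference state (ohm)
    """
    voltage_decaying = dV_dt < -1e-4                 # illustrative threshold
    if voltage_decaying and dP_cathode > 0.3:        # pressure drop grows with liquid water
        return "suspected flooding (cathode side)"
    if voltage_decaying and R_membrane > 1.5 * R_membrane_ref:
        return "suspected membrane drying"
    return "no clear water-management fault"

print(water_fault_hint(dV_dt=-5e-4, dP_cathode=0.5,
                       R_membrane=1.0e-3, R_membrane_ref=1.0e-3))
# -> 'suspected flooding (cathode side)'
```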
Effect of operation condition in water management (flooding and drying) To avoid of faille in FC according to the improper water management solution such as variety of operating conditions that are suggested by many authors (pressure drop, temperature gradient, control mass flow by compressor, etc.). Humidity To obtain high performance in FC typically gas inlets are humidified. Buechi and Srinvasan investigated that operation at humidified inlet gases are 40% greater when FC figure out without humidity. Besides, Natarajan and Nguyen declare that with the increasing of humidity at anode side to reduce water transfer due to back-diffusion, going to increase current distribution, etc. [2.46]. Flow rate Nadia 2008 investigated for air flow rate. Stated that in low flow rate it is beneficial in keeping water in dry cells but will cause of flooding. Hakenjos at al, mentioned FC performance increase (out -put current density higher) with gain flow rate due to higher stoichiometry and the water removed from the flooded cell [2.40]. Temperature Pressure Electro-osmotic flow rate in normal operating condition is greater than back diffusion flow rate with homogeneous pressure. Wilkinson et al. observed water produced at the cathode side absorbed by the concentration gradient to anode and cause to prevent of flooding. Elevated temperature (evaporation) and gas flow contribute (contribute dissolve water) cause to reduce flooding. However, it cannot be guaranteed drying never happen in the membrane. Thermal management on PEM FC Thermal management is an important role to increase/ decrease of performance of the FC. Influence of freezing Freezing can effect on the durability of the FC via thermal and mechanical stress. Decreasing temperature causes to the reduction in proton conductivity of Nafion membrane. Most components that are influenced by freezing temperature such as backing layers, gas diffusion layer and membrane (rarely will be happened because water in membrane strong bond with captions). Start up from freezing When water at cathode side is not removed during start up with temperature below zero, ice will be covered in surface of GDL and cause to blocking at the catalyst layer. Finally, FC voltage drops and even shuts down FC. Influence high temperature Performance FC in high temperature has a few benefits such as increase electrochemical kinetics and the result enhance efficiency, advance endurance for contaminants and augments the water management and cooling system. However, the disadvantage is degradation of the cell and decrease in durability and lifetime of FC. 1) In high temperature, sintering and agglomeration of particles will be increased. 2) Operation at high temperature breaks oxygen molecules into oxygen atoms and reaction to carbon and water increase which results in increased contamination. 3) Performance FC at high temperature, conductivity of proton conductivity may diminish when at low relative humilities. Degradation of electrode/electro catalyst One of the most interesting objects to FC commercialization is corrosion of the electro catalyst layer. Conventionally, catalyst layer is made of platinum (Pt) or platinum alloy. For electrode at anode and cathode, usually the same material is used by carbon mechanical support. In addition, platinum catalyst covered by thin carbon layer. Degradation in catalyst and electrode mean loss and reform in the structure of the platinum. A corrosion carbon is manifest of the loss carbon along the surface of Pt. 
Two factors, humidity level and temperature are mainly serious aspect contributing to corrosion [2.45]. Cathode corrosion Electrochemical active surface area (EASA) decreases with relate to the time of the FC. EASA losses are due to Pt-particles distribution and long run operating condition of the FC. 1) Generally, cell potential cycling is the most serious influence in contributing to Pt agglomeration and oxidation thus to reduce of the EASA. Pt particle sizes extend due to the cell potential and will be accelerated when it compared to constant potential. 2) Variable temperature during operation: Principally voltage of FC increases due to the augment temperature while the negative effect is Pt-particles grow fast. 3) Low humidity in inlet gases effect to increase in the lifetime of the catalyst. Because, humidification level of gases is results to rise of catalyst particles [2.45]. Anode corrosion Wofgang et al found that long-term operation demonstrated that the anode no impact by Pt agglomeration/sintering, dissolution and oxidant [2.45]. Corrosion of gas diffusion layer (GDL) Carbon corrosion has negative impact on the catalyst properties and has a consequent negative effect on the performance of FC. Carbon corrosions are occurring as well as the following factors: 1) Potential cycling: Especially in high level and constant voltage, carbon corrosion will be increased. 2) Humidity has influences to carbon losses, GDL can be handled water management by using special fabrics such as hydrophobic material's ability to remove water and improve gas diffusion. However, higher hydrophilicity that means water is remaining in GDL and make to obstacles the pore and results in reduction of performance in FC. 3) Effect of temperature on GDL corrosion it is complicated because, some researcher are believing that it does not affect others, Wolfgang et al achieved and monitored the carbon weight loss [2.45]. Chemical and mechanical degradation of the membrane Despite of the membranes of Nafion have long lifetimes. However, in FC application is degraded very quickly (especially in electrical application during potential cycling). Many factors have been influenced to degradation of the membrane but two of them were important: 1) Production of hydroxh1 (OH) and peroxy1 (ooh) radicals due hydrogen peroxide (H 2 O 2 ). They chemically attack the polymer. 2) The chemical attack according to transient operating conditions (potential, humidification cycling and temperature) that causes of degradation in the membrane [2.40]. Corrosion and mechanics degradation of the bipolar plates and gaskets Three main factors for degradation mechanisms are as follows: 1) Material of bipolar dissolves in water and move into the membrane 2) Increase ohms resistance by forming resistive surface layer on the plate 3) Pressure that used for sealing causes the deformation of the plates [2.44]. Contamination of the cell Contaminations are produced inside the cell or they are carried into a cell with inlet gases. They lead to effect of the performance and life of the FC. Contamination of the electrodes/electro catalyst Carbon monoxide is harmful for electro catalyst. Co-concentration only happens in anode side. Pt catalyst layer observed Co-molecules results to block the hydrogen from reaching the Pt particles. This process in a long time will be happening. The voltage drop can recover by the air injected into the fuel stream because CO can be burnt by air. 
Contamination of the membrane and starvation Because of the conductivity and low level of the water at the cathode, contamination in the membrane causes to diminish the maximum current density [2.45]. Starvation degrades the FC performance and the cell voltage drop. One of the factors that cause to staring is generating hydrogen in the cathode and oxygen in anode. Faults synthesis A summary of the major failure modes is represented in Table .2.3. In the most cases, a combination of the inherent reactivity of component materials, harsh operating condition, contamination and poor design is responsible for the degradation. Neural network Deep review of the system is significant for determination of fault diagnosis method (FDM). In PEMFC unknown physical parameters the Artificial Neural Network (ANN) is one of the most interesting in FDM by comparing to other methods (for instance, fuzzy logic, support vector machines and Bayesian network). NN is a combination of numerous neurons that connect together via weighted. Artificial Neural Network (ANN) is a power full system for fault diagnosis in non-liner system modeling. ANN has the capability to learn and build non-linear mapping of the system. In addition, it is a good solution for modeling of complex systems. Fundamental section of an ANN is named neuron. Based on the structure of the neuron, there are three important topologies, single layer feed forward network, and multilayer feed forward network and recurrent network. In feed, forward all input signals flow one direction to output. However, for recurrent ANN, in the output some neurons are feedback either to the same neurons or to neurons in former layers. MIP type is the most common ANN that applied for PEMFC modeling. In Figure .2.29 illustrated the example of MLPNN with two hidden layers. In this figure Layer 1, 2 are hidden layer, , And Is weight between input/hidden layer, two hidden layers and hidden layer/output respectively. To explain of neuron is given by the following function [2.47]: ∑ Eq64 F: transfer function NN W i , j: weight of the connections between neuron j, and i B j : the bias X i : value input to the neuron S j :neuron's output The input of the hidden layer is calculated as: (∑ ) Eq65 Feed forward NN In feed forward network from input to output all signals flows in one direction. The most popular for minimizing training algorithm of weights is back propagation. Feed forward NN is suitable for static mapping between input and output and improper for dynamic evaluation. To solve this problem Recurrent NN can be replaced to feed forward NN. In this structure, neurons are feedback either to the same neurons or to pervious neurons. This means signals can move in two directions (forward and backward). Therefore, outputs have a quick response from the impact of inputs compare to feed forward [2.48]. Function in this NN for the first layer is "tansig" then "pure line" use in the second layer. Training NN The choice input data are important in training NN results. The most efficient variables for faults must be selected for training in NN. Otherwise, the amount numbers input impels a complex and slow to run the model. Notice that there is not any precise method to choose an optimal number of the hidden layer. This mean value of hidden layer depends on increasing of the output accuracy will be augmented [2.43]. For training an ENN weight matrix and the bias are adapting by using back propagation method. 
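A minimal sketch of the neuron and forward-pass relations of Eqs. 64–65 is given below, with a tanh ("tansig") hidden layer and a linear ("purelin") output layer as described above, followed by one plain gradient-descent weight update of the kind performed by back-propagation. The layer sizes and learning rate are arbitrary illustrative choices; the thesis itself uses MATLAB's neural-network tooling, so this Python sketch is only a conceptual illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 1          # e.g. 4 operating-condition inputs, 1 output

W1, b1 = rng.normal(size=(n_hidden, n_in)),  np.zeros(n_hidden)   # input -> hidden
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)      # hidden -> output

def forward(x):
    """Eqs. 64-65: s_j = f(sum_i w_ij x_i + b_j), tanh hidden layer, linear output."""
    h = np.tanh(W1 @ x + b1)             # 'tansig' hidden layer
    y = W2 @ h + b2                      # 'purelin' output layer
    return h, y

def train_step(x, target, lr=0.01):
    """One back-propagation step minimising the squared error between output and target."""
    global W1, b1, W2, b2
    h, y = forward(x)
    e = y - target                       # output error
    dW2, db2 = np.outer(e, h), e         # gradients of 0.5*e^2 (chain rule)
    dh = (W2.T @ e) * (1.0 - h**2)       # back-propagated error, tanh derivative
    dW1, db1 = np.outer(dh, x), dh
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return float(0.5 * e @ e)
```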
In other words, in the period of the training at every repetition, the matrices will be updated by minimizing errors between input and target (output). Training explained by: { } Eq66 P is the number of points used in the training and is input system and is output system. This set data must contain of the information on all different conditions such as safety and degradation mode. For this reasons chosen four inputs involve of 4000 points. Weight coefficients of the matrices are fixed by using a standard Back propagation algorithm. Data collection in NN Data is organized randomly into training, validation and test. Firstly, NN trained with first data then validations data for keeping off over-learning at the end, the test will be chosen by the data that NN have never trained. To facilitate training of the NN: Input data normalized between 0 and 1 ensure homogenized ranges and permit comparing the different weight related to different factors. The NN outputs recode to back right values [2.48]. The carry out of the NN was considered by executing a linear regression between both experimental and calculated values and evaluated (the corresponding Pearson Correlation Coefficient R). The statistical analysis describes the two data. If, R=0 that mean the correlation will be unpredictable else if R closer to 1 will be a better correlation between NN training and experimental data. Conclusion The PEMFC modeling and diagnosis are the most important issues treated in literature. A good diagnosis strategy contributes to improve the lifetime of the FC and then to improve the availability of the system built around it, as for example the drivetrain of FCEVs. It has been established that the FC is subject to a lot of fault during its operating. The latter are due to multi-physical phenomenon namely the temperature, the pressure and the humidity of the gas involved within the FC stack and cells. Several models have been developed to understand this phenomenon and to evaluate the FC performances according to different conditions of use but also to detect, isolate and classify the faults when they occur. On the basis of literature, it has been noticed that the ANNs are one of the most interesting in PEMFC fault diagnosis and modeling. ANNs have the capability to learn and build non-linear mapping of complex systems such as PEMFC. In this research work, the used diagnosis approach consists in combining the model and non-model based methods to train an ANN model. The next chapter will focus on the proposed equivalent circuit model taking into account the 3D geometry of the stack. Details on the modeling process as well as simulation and experimental results will be given to check the validity of the proposed model. Chapter III 3D Fault Sensitive Modeling of PEMFC Introduction As highlighted above, modeling and simulation are very useful tools in the study of complex systems. They allow ascertaining the impact of a great variety of conditions and variables for studying system global or local operating points. To reach this goal a fuel cell model has to take into account the space coordinates, time and multi-physical phenomenon. According to the dimensions taken into account the FC model can be 1D when only one dimension is involved in the model, 2D for 2 dimensions or 3D when the three space dimensions are taken into account. Within any one of these kinds of models, one has to include one or several different domains of physics: electric, fluidic and thermal. 
The principal physical phenomena found in PEMFC are listed by domain of physics in Table.3.1 [3.1]. Finally, the time parameter is introduced in the models to evaluate the dynamics of one of several of these phenomena. A PEMFC model is always a combination of the elements above. For example, a system can be 1D, dynamic and analytical, involving all three domains of physics with the different phenomena modeled at the individuality layer level. Many mathematical models can locally describe these phenomena by means of partial differential equations involving space and time [3.2], [3.3], [3.4], [3.5]. Many searchers have realized the importance of adapting models to system applications [3.6]. Mention may be made on the research works as in [3.6] and [3.7] that established suitable mathematical models for control purposes, for instance for automotive applications. Nevertheless, no model has included the three dimensions of the fuel stack in the formulation. Only the axial dimension has been generally considered by assuming all the FC cells are invariant across the transverse dimensions. The present research work proposes a 3D model of PEMFC for diagnosis purpose. Knowing that in the PEMFC around 50% of the chemical energy available in the fuel is converted to electrical power, and the rest is waste heat, a particular attention is given for temperature in the developed 3D model. This chapter is dedicated to explain the methodology used to build this model and to show how it is used for characterizing the FC behaviors under faulty conditions. A FC cell is considered alone to establish the modeling principle before generalizing the 3D model to a complete FC stack The proposed 3D model for one FC cell The chosen model is a semi empirical one using fundamental equations for known phenomena. For unknown phenomena, they can be modeled by realizing experiments on the fuel cell. Indeed, all the equations that belong to electrochemical model are used analytically. In addition, experimental tests have been used to determine the thermal mode and finding impedances in all branches. This model is built up of multiple points (nodes) at different zone of cells. It is preferable to choose the nodes that are located at critical zones of fuel cell (e.g. center of cell, inlet and outlet of gases and boundary zones). The considered cell consists of 9 nodes. All the physical phenomena that take place in these nodes will change depending on the position of the nodes and the variations of the corresponding operating conditions in terms of pressure, humidity and temperature. Moreover, several thermocouples and voltage measurements were used so that to know accurately the voltages and temperature in each node (More details on this issue will be explained in chapter IV). Description of the modelled FC Cell The studied fuel cell (MES-DEA single cell) has different layers included on both sides: the anode and the cathode (see in Figure .3.1). These layers are named: 1) Bipolar plate (plastic material) 2) Connector between (bipolar plate and cooling plate) 3) Cooling plate 4) Gas diffusion layer 5) Catalyst layer 6) Membrane layer Description of the 3D model applied on one cell This 3D model combines the electric and thermal domains in the dynamic state. The geometric description for the cell modelled is illustrated in Figure .3.2. The idea is to divide the cell into 9 elementary cells (zones) so that to take into account the differences of temperature, humidity and gas pressure in each zone. 
The 9 nodes represent the center of the 9 elementary cells which are respectively modelled by 9 elementary circuits. To ensure a difference between the potentials of the 9 nodes, 20 resistors are used. Thus the total cell behaviors in terms of current and voltage is obtained by the contributions of the 9 circuits. Because of the local current density and temperature distributions are closely related to various phenomena that occur in the cell, the sophisticated multidimensional is capable of predicting many phenomena occurring inside an operational fuel cell, but only to a certain limit due to the complexity and high computational cost. Therefore, the overall goal of the present 3D model is to conduct an experimental analysis with emphasis on temperature and voltage distribution inside a single cell (MES) and stack PEMFC (Nexa). The advantages of this model are: 1) All electric circuits are considered in all the available layers. 2) The simulation time, with electrical model, is about few seconds. 3) For each node, the current density is calculated. Modeling hypotheses These hypotheses are a compromise between electrical model and a mathematical model. They are summarized as follows: 1) The contributions of the anode, the cathode, and the membrane are not distinguished. 2) Pressure drop in the catalytic sites is negligible (both in the cathode and the anode sides). The voltage drop associated with the activation loss is negligible at the anode when compared with that of the cathode. Generally the voltage in all nodes is considered to be the same by the authors in literature. However, reality based on varying operating conditions (temperature, gas pressure, humidity) and material, voltages in each node have different values. In order to allow these differences the 9 nods (N 1 , N 2 , …N n ) of the cell model are separated by resistors as shown in red in Figure .3.5. The resistances between nodes are named according to their location within the anode side. Setting the transverse coordinates for each node according to the node's number that is to say (X 1 , Y 1 ) are the coordinates of the node N 1 , (X 2 , Y 2 ) the coordinates of the node N 2 and so on, the resistor R 12 is set between the nods N 1 and N 2 , the resistor R 23 is set between N 2 and N 3 ,..etc. That means the index of each resistor contains the two numbers of the departure node and the arrival node respectively: index 12 means departure node is N 1 arrival node is N 2 . These resistances have different values that can be the site of different distribution of current density. The operating conditions and material of usage in each node of the FC cell will manifest through the difference in current density distributions. To illustrate the operating conditions and ohms resistance in z axis 9 other resistors have been added between two cells, see Figure .3.6. This means 9 resistors are set between the 9 nodes of two neighbor's cells in Z. These resistances marked such as R 12 N1N1 contains two indexes: the top index indicates the numbers of the two neighbor cells (here the cells 1 and 2), the bottom index indicates the neighbor nods connected through this resistor (here the nodes N 1 of cell 1 and N 1 of cell 2). Notice that all the nodes connected together have the same nodes numbers. Electric Formulation This section describes the electrochemical formulation to compute the 3D steady state distribution of temperature and potential inside a stack. 
This modeling approach allows the electrical behavior of large stacks to be studied with an efficient computation time. The proposed 3D electric model (Figure 3.6) also makes it easy to design the electric circuit connections to other electric components of the power train, such as the DC/DC power converter. In this model, the electrical phenomena at the stack level are highlighted, rather than the electrochemical and mass-transport processes taking place at the microscopic scale, as is usually done; the knowledge of the latter is used to calibrate correctly the physical parameters of the circuit. A dynamic model has therefore been developed with the MATLAB software (see Appendix 3A). It is based on the electrochemical and thermodynamic characteristics of the PEMFC. The inputs of this model include the influence of temperature, the gas pressures (hydrogen and oxygen), the Nernst voltage and the losses (activation, concentration and ohmic). Each cell is composed of the Nernst voltage and the activation, concentration and ohmic losses, which are computed as follows [3.1]: the Nernst voltage is given by Eq. 3.1 and the activation overvoltage by Eq. 3.2; the ohmic overvoltage due to the membrane resistance R_m in the PEMFC is given by Eq. 3.3 [3.1]; and the voltage loss due to concentration polarization is obtained from Eq. 3.4. By adding a capacitor, the dynamics are included in the model, and the voltage of each cell is then computed with Eqs. 3.5 and 3.6.
Thermal domain
In the thermal domain, the stack temperature is obtained using an empirical method (as shown in the corresponding figure).
Dynamic effect of the double layer
The dynamic phenomenon of the double-layer capacitor influences the transient value of the stack activation and concentration overvoltages. This influence can be modeled by a first-order system (see Figure 3.8).
Computing the parameters of the 3D model
In the first stage of the modeling process, the parameters of the proposed 3D model are computed theoretically as follows:
- The no-load voltage E is calculated with the Nernst equation, which relates the ideal standard potential E_0 = 1.22 V of the fuel cell reaction to the ideal equilibrium potential at other temperatures and pressures of reactants and products (see Eq. 3.1).
- The parameters (R_con, R_act, R_ohm) are obtained as follows:
  - R_act: the first of the three major polarizations is the activation loss, which is pronounced in the low-current region, where electronic barriers must be overcome before current and ionic flow can take place (see Eq. 3.2). In this formulation the current density is the variable parameter.
  - R_ohm: the ohmic loss varies proportionally to the current and is present over the entire current range, due to the nearly constant nature of the fuel cell resistance (see Eq. 3.3). Here again the current density is the variable parameter.
  - R_con: the concentration losses occur over the entire current density range but become prominent at high (limiting) currents, where it becomes difficult for the reactant gas flow to reach the fuel cell reaction sites (see Eq. 3.4).
- The double-layer charge at the anode and the cathode is represented by equivalent capacitors equal to 1.8 F.
- The temperature used in all the formulations from Eq. 3.1 to Eq. 3.4 is the temperature measured during the experimental tests.
Remark: all the parameters above change with temperature, pressure and humidity, except the double-layer capacitor.
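Since the bodies of Eqs. 3.1–3.6 are not reproduced here, the sketch below assembles one commonly used set of expressions (a Nernst term, an Amphlett-type activation term, an ohmic term and a −B·ln(1 − J/J_max) concentration term, with a first-order double-layer state) to show how the voltage of one elementary circuit can be evaluated per node. The equation forms and all coefficient values are assumptions standing in for the exact Eqs. 3.1–3.6 of this work.

```python
import math

def node_voltage(i_dens, T, p_H2, p_O2, state_Vd, dt,
                 area=61.0, R_ohm=3.0e-3, B=0.016, J_max=1.5, C_dl=1.8):
    """Voltage of one elementary circuit (node) of the 3D model.

    i_dens : node current density (A/cm^2), T : temperature (K),
    p_H2, p_O2 : partial pressures (atm), state_Vd : double-layer voltage state (V),
    dt : time step (s). Equation forms and coefficients are illustrative assumptions.
    """
    I = i_dens * area
    # Nernst voltage (assumed form of Eq. 3.1), with E0 = 1.22 V as stated above
    E = 1.22 - 0.85e-3 * (T - 298.15) + 4.31e-5 * T * math.log(p_H2 * math.sqrt(p_O2))
    # Activation overvoltage: Amphlett-type empirical fit (assumed form of Eq. 3.2)
    c_O2 = p_O2 / (5.08e6 * math.exp(-498.0 / T))            # dissolved O2 concentration
    v_act = -(-0.948 + 3.12e-3 * T + 7.6e-5 * T * math.log(c_O2)
              - 1.93e-4 * T * math.log(I))
    # Ohmic loss (Eq. 3.3) and concentration loss (assumed form of Eq. 3.4)
    v_ohm = I * R_ohm
    v_conc = -B * math.log(1.0 - i_dens / J_max)
    # First-order double-layer dynamics (Eqs. 3.5-3.6): V_d relaxes towards v_act + v_conc
    R_a = (v_act + v_conc) / max(I, 1e-6)
    state_Vd += dt * ((v_act + v_conc) - state_Vd) / (R_a * C_dl)
    return E - state_Vd - v_ohm, state_Vd
```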
Newton-Raphson method
The Newton-Raphson method (or Newton's method) is a powerful technique for solving equations numerically; here it is used to calculate the cell voltage and the current of each element. It can also solve square systems of non-linear equations in matrix form. The Newton-Raphson method solves equations of the form f(x) = 0 for the solution nearest to a starting point x = x_0. It builds a list of values x_n, where each x_n (the n-th element of the list) is the x-intercept of the tangent line to y = f(x) at the previous value x = x_{n-1}. For a system of equations, a first-order Taylor series expansion written for each component gives (Eqs. 3.8-3.9)
f(x + Δx) ≈ f(x) + J(x)·Δx,
which can be written more compactly in matrix form as f(x) + J(x)·Δx = 0 (Eq. 3.10), where J is the Jacobian matrix. The derivation of the N-R method is then similar to the scalar case (Eq. 3.11): to find the solution of f(x) = 0 (Eq. 3.12), the correction is computed as Δx = -J(x)^{-1} f(x) and the estimate is updated as x ← x + Δx (Eqs. 3.13-3.14); the iteration continues until ‖Δx‖ falls below a prescribed tolerance.
The Newton-Raphson algorithm thus consists in linearizing the equations around the current point and repeating until convergence is reached. Here, a Newton-Raphson algorithm is used to couple the analytical equations 3.1-3.7 with the experimental measurements of voltage and temperature. The temperature and voltage measurements have to be taken within the PEMFC stack so as to provide data as close as possible to the nodes of the 3D model presented above; the Newton-Raphson method then matches the temperatures, voltages and current densities at those nodes. The Newton-Raphson problem is set up through the two operations below:
- Define the function f = E - V, where E is the voltage calculated for the cell at each node, starting from the physical parameters of each elementary circuit evaluated analytically with the measured temperature distribution (see the beginning of section §2.7), and V is the voltage measured at each node.
- Find the current density distribution that satisfies f(x) = 0, where the vector x is the sought current density distribution.
Given a function f defined over the reals and its derivative f', we begin with a first guess x_0 for a root of f. Provided the function satisfies the assumptions made in the derivation of the formula, a better approximation x_1 is (Eq. 3.15)
x_1 = x_0 - f(x_0)/f'(x_0),
and the process is repeated as (Eq. 3.16)
x_{n+1} = x_n - f(x_n)/f'(x_n).
The algorithm of the Newton-Raphson method then contains the seven main steps given below:
Step 1) Measure the temperature and voltage at each node (more details are given in Chapter IV).
Step 2) Calculate the voltage of each node with Eq. 3.6.
Step 3) Evaluate f and its derivative numerically.
Step 4) Use the current guess of the current density to estimate its new value, as in Eq. 3.16.
Step 5) Compute the absolute relative approximate error (Eq. 3.17): |ε_a| = |(x_{n+1} - x_n)/x_{n+1}|.
Step 6) Compare the absolute relative approximate error with the pre-specified relative error tolerance: if |ε_a| is larger than the tolerance, update the guess of the current density and go back to Step 4; otherwise go to Step 7.
Step 7) Stop the algorithm.
Calibration of the 3D model in healthy mode
An important aspect of this fuel cell model is that the fuel cell can be simulated in both modes, healthy and faulty. A large number of parameters have to be implemented to create a complete fuel cell model under the Matlab/Simulink software, in order to model and simulate the fuel cell in healthy and faulty modes.
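A minimal numerical sketch of the Newton-Raphson procedure described above is given below: for each node, f(i) = E_model(i, T_measured) − V_measured is driven to zero with the update of Eq. 3.16, a numerical derivative, and the relative-error stopping criterion of Steps 4–6. The model function and the starting values are placeholders; in the thesis the computation is carried out in MATLAB on the full 3D circuit.

```python
def newton_raphson(f, x0, tol=1e-4, max_iter=50, h=1e-6):
    """Scalar Newton-Raphson: solve f(x) = 0 starting from x0 (Eqs. 3.15-3.17)."""
    x = x0
    for _ in range(max_iter):
        dfdx = (f(x + h) - f(x - h)) / (2.0 * h)     # numerical derivative (Step 3)
        x_new = x - f(x) / dfdx                      # update, Eq. 3.16 (Step 4)
        rel_err = abs((x_new - x) / x_new)           # Eq. 3.17 (Step 5)
        x = x_new
        if rel_err < tol:                            # Step 6
            break
    return x

# Hypothetical use for one node: E_model is the analytic node voltage evaluated with the
# measured local temperature, V_meas the measured node voltage (Steps 1-2).
def f_node(i_dens, E_model=lambda i: 1.0 - 0.35 * i, V_meas=0.687):
    return E_model(i_dens) - V_meas                  # f = E - V

i_node = newton_raphson(f_node, x0=0.1)              # node current density (A/cm^2)
# -> about 0.894 A/cm^2 for this toy linear E_model
```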
The temperature and voltage distributions that should be used to calculate the current distributions with the Newton-Raphson method are obtained experimentally.  Air stoichiometry of 3 and hydrogen stoichiometry of 2,  Current load of 10 A,  Humidified cathode and dry gas hydrogen are used. The proposed fault sensitive 3D model of PEMFC developed under Simulink software is shown in Figure .3.9. Hydrogen, oxygen gas pressure, current density and temperature are the inputs of the model while the voltages and currents are the out puts. Each subsystem circled in red contains open circuit; activations, concentration and ohmic voltages (see more details in Appendix IIIA). The cell potential at each point of the cell is calculated separately based on different current densities that are obtained by the Newton-Raphson method. The significant point in this model is the computation of the connection resistors at each point. These resistors can be useful to simulate the faults. Furthermore, calibration of these resistances is the one of the most important point of this model. These parameters change with variations of temperature, humidity, pressure and aging effects. Simulation measurement of current density and experimental measurement of the temperatures and voltage are shown in Table .3.3. The latter indicates that, due to the increase in the local current density, the temperature increases too. According to the data sheet of the MES Company (as show in Figure .3.10. the voltage measurement, with a current load of 10 A should be equal to 0.8V. However, according to Table .3.2 the voltage measurements are recorded between 0.65 and 0.69. In other words, the voltage drops between 110-150 mV at each node. This can be attributed to the change in the operating conditions such as pressure, humidity, temperature. Resistances are increased because of nut and screws are used to fix the cell. In addition, the existence of multiple voltage sensors and thermocouples increase these resistances. These losses are caused by irregular pressures of the bipolar plate and the connecting points. If the pressure in some points is above than the normal average, it can block the channel of hydrogen or oxygen gases. Nevertheless, cell voltage decreases with the pressure drops. Based on adding thermocouples and voltage sensors physical failing have been occurred and it rise to voltage drop in the cell. This phenomenon can be present by adding impedance in series of each node in the 3D model. That mean impedances (R 1 -R 9 ) are added in x axes in order to indication of internal voltage losses in the fuel cell model. It is obvious that the value of these resistances that it can be easily obtained by knows current density and voltage in each node. For example in first node the current density calculated by Newton Raphson. Also voltage measured in experimental test is 0.687 V then resistant can be obtained around the 0.1324 ohm. In this way all impedances can be calculated in the Table .3.4. It has shown the internal impedances for each node. Based on current distributions obtained by the Newton Raphson, Table .3.5 illustrates the percentage of current density of the nine nods compared to 10 A. It is obvious that in each part the activation area will be to 6.7 cm 2 (61/9) and mean values of the current density calculated from simulation results are shown in Table .3.6. 
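The per-node series resistances discussed above can be estimated directly from the measured node voltages and the Newton-Raphson current estimates; the sketch below reproduces that calculation with illustrative numbers. The 0.8 V reference comes from the datasheet figure quoted above, while the node voltages and currents are placeholders rather than the values of Tables 3.3–3.4.

```python
V_ref = 0.8                        # expected cell voltage at a 10 A load (datasheet, Figure 3.10)

# Measured node voltages (V) and Newton-Raphson node currents (A) -- placeholder values
v_meas  = [0.687, 0.68, 0.675, 0.67, 0.66, 0.655, 0.66, 0.665, 0.65]
i_nodes = [0.85,  0.95, 1.05,  1.10, 1.20, 1.25,  1.15, 1.10,  1.35]   # sums to 10 A

# Series resistance R_1..R_9 added at each node to represent the internal voltage losses
R_series = [(V_ref - v) / i for v, i in zip(v_meas, i_nodes)]
# e.g. first node: (0.8 - 0.687) / 0.85 ≈ 0.133 ohm, of the order of the 0.1324 ohm quoted above
```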
Network circuit analysis
To construct an equivalent circuit of a complicated process (e.g., electrochemical quantities such as the voltage losses and the reversible voltage, combined with impedances in series and in parallel) and to calculate its impedance, some background on network circuit analysis is indispensable. The major factors are the parameters that change with the operating conditions. A commonly used network analysis method is loop (mesh) analysis, which is based on Kirchhoff's voltage law (KVL). A set of equations of the form [Z]·[I] = [V] is established by equating the sum of the externally applied voltage sources acting in each loop to the sum of the voltage drops across the branches forming that loop. The number of equations equals the number of independent loops in the network. The general equation of loop or mesh analysis is (Eq. 3.22)
[Z]·[I] = [V],
where the impedance matrix [Z] is an N × N matrix as described in Eq. 3.26. The following rules describe how to determine the voltages, currents and impedances in Eq. 3.22 [3.9]:
1. The voltages in Eq. 3.22 are equal to the voltage sources in each branch. If the direction of the current caused by the voltage source is the same as that of the assigned mesh current, the voltage is positive; otherwise it is negative.
2. The self-mesh impedances Z_11, Z_22, Z_33, …, Z_NN are given by the sum of all impedances in the loop in which the corresponding circulating current flows.
3. Each mutual mesh impedance Z_ik (i ≠ k) is given by the sum of the impedances through which both mesh currents I_i and I_k flow; in other words, the mutual impedances are the sums of the impedances shared by meshes i and k. If the direction of the current I_i in loop i is opposite to that of the current I_k in the adjacent loop k, the mutual impedance equals the negative of this sum, whereas if the two currents have the same direction it equals the positive sum. In a linear network, Z_ik = Z_ki.
A linear matrix equation can be solved by applying Cramer's rule. Assuming that the determinant Δ of [Z] is non-zero, the currents can be expressed as (Eq. 3.23)
[I] = [Z]^{-1}·[V],
where [Z]^{-1} is the inverse of [Z], whose elements can be written in terms of the cofactors Δ_ik, with (Δ_ik)^T = Δ_ki denoting the matrix transpose (Eq. 3.24). Δ and Δ_ki are given by Eqs. 3.25 and 3.26, where |[Z]| is the determinant of [Z]. For easier calculation and lower memory requirements, nodal (mesh-grid) analysis generally uses the admittance matrix rather than the impedance matrix; the bus impedance matrix, however, is used for short-circuit studies. The corresponding set of equations has the form [Y]·[V] = [I] (Eq. 3.27). The admittance matrix [Y] is built as in Eq. 3.28: each diagonal (self) term Y_ii is the sum of the admittances connected to node i, and each off-diagonal (mutual) term Y_ik is the negative sum of the admittances directly connecting nodes i and k.
The equivalent circuit of the present model is made of such series and parallel impedances, which account for the voltage and temperature distributions. Without the temperature effect, the fuel cell electrical model only contains the activation loss, the concentration loss, the ohmic loss, the double-layer capacitor and the Nernst voltage. By adding series and parallel impedances Z_1, Z_2, …, the effect of temperature can be highlighted in the electric model of the PEMFC along the different space directions (X, Y and Z).
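In practice, the mesh equations [Z]·[I] = [V] of Eq. 3.22 are not solved with Cramer's rule but with a standard linear solver. The small sketch below builds an illustrative 3 × 3 system following rules 1–3 above (self-impedances on the diagonal, negative shared impedances off-diagonal) and solves it numerically; the numerical values are arbitrary.

```python
import numpy as np

# Illustrative 3-mesh network following the rules above (arbitrary values, ohm and volt)
Z = np.array([[ 0.30, -0.10,  0.00],    # Z11 = sum of loop-1 impedances, Z12 = -(shared impedance)
              [-0.10,  0.45, -0.15],
              [ 0.00, -0.15,  0.35]])
V = np.array([0.70, 0.00, 0.00])        # voltage sources acting in each loop

I = np.linalg.solve(Z, V)               # mesh currents, i.e. [I] = [Z]^-1 [V] (Eq. 3.23)

# The admittance (nodal) form [Y][V] = [I] of Eq. 3.27 is handled the same way:
# build Y with Y_ii = sum of admittances at node i, Y_ik = -(admittance between i and k),
# then V_nodes = np.linalg.solve(Y, I_injected).
```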
These impedances are physically related to the local temperatures, and taking them into account to simulate the PEMFC improves the accuracy of the model. Any temperature variation along the different directions of the fuel cell stack therefore changes the value of these impedances, and any fault related to temperature variations, such as flooding, drying or degradation, can be considered in this model. First, a model of a stack including two cells was developed and the simulation results were compared with experimental results (see the corresponding figure).
The 3D model applied to one stack
Considerations on the 3D model calibration
Calibration is an important step in model validation. The calibration task involves the systematic adjustment of the model parameters and allows an estimation of the model outputs. The calibration of the stack model can be summarized as follows. A simple way to calculate the impedances of the 3D model, and to reduce its size, is to assume that the non-diagonal terms of the impedance matrix describing the interaction between cells are equal to zero. This model is a semi-empirical electric model: dynamic electric equations including the temperature influence are used for the voltage losses and the reversible voltage of the fuel cell, while an empirical thermal equation of the MES fuel cell, identified from experimental tests, is considered. To build the 3D model, one should first study the influence of the operating conditions on the fuel cell performances, and then simulate all their effects on the polarization curve and the voltage losses. The electric model was chosen to take advantage of its short computation time. However, a purely electric model is not sufficient to represent all the variables in the different directions. As illustrated in Figure 3.14, the fuel cell is divided into different branches along the x, y and z axes; in each branch, an impedance and an electrical model are connected together. In other words, the single cell is divided into several electric models connected to each other through several impedances. In the literature, the studied models are all based on a non-uniform distribution of the current density. Specifying the nature of these impedances is not an easy matter, so to reduce the computation of the impedance matrix only their magnitudes are considered (i.e., they are assumed to behave as resistances). Taking the impedance phases into account would be too laborious, especially regarding the experimental measurements required (a spectroscopy measurement would be necessary at each node); however, including the phase aspect in the computation is a possible future work that may improve the model accuracy in dynamic states. To find the complete resistance of each branch, several dedicated tests at different currents should be carried out (the design and details of the test bench are discussed in the next chapter), and all the temperature and voltage measurements at each node (using voltage sensors and thermocouples) should be recorded during these tests. Then, to obtain the current at each node, the measured voltage should first be compared with the simulated one, the temperature measurements should then be applied in the theoretical equations, and finally the NR method for non-linear equations should be used.
Calibration of the 3D model of the FC stack (two cells)
The concept of calibration is an important step in model validation.
The calibration task involves a systematic adjustment of the model parameters. The calibration of the 3D model of the PEMFC stack can be summarized as follows:
1) The voltages and temperatures are recorded as shown in Table 3.6 and Table 3.7 (12 sensors for each cell).
2) The Newton-Raphson method is applied to calculate the current density, as illustrated on the right-hand side of Table 3.6 and Table 3.7 (a minimal numerical sketch of this step is given at the end of this section).
3) The network impedances that match the model are calculated.
In Table 3.7 and Table 3.8, the voltage and temperature are measured in an experimental test performed on two cells (Figure 3.15) with three different values of the current load: 5 A, 10 A and 15 A. For all these measurements, the H2 stoichiometry is set to 2 while the O2 one is set to 3. In the X direction, the voltage and temperature variations over the dimension of the MES cell can be neglected. Moreover, in order to simplify the calculations, the sensors of the 3D model correspond to 9 nodes in each cell; however, the voltage and temperature sensors are limited to three nodes (see Figure 3.15), and the mean values of the 4 closest sensors are used to perform the calculations in each zone. As depicted in Figure 3.15, each cell is divided into three zones: the inlet, the middle and the outlet. According to the results in cell one, the voltage and the temperature change with the operating conditions from the inlet to the outlet. As shown in Figure 3.15, nodes 1, 2 and 3 in cell one and nodes 4, 5 and 6 in cell two are connected through impedances along the y axis, and every two facing nodes of the two cells are joined by an impedance along the z axis. The variations of the operating conditions from cell one to cell two are also represented. In addition, mechanical parts such as connectors, and the several sensors installed to measure the voltages and temperatures, can increase the voltage losses in each cell. Table 3.7 and Table 3.8 (middle) present the voltage measurements. The values lie between 0.577 V at the outlet and 0.732 V at the inlet for cell one, and between 0.588 V at the outlet and 0.709 V at the inlet for cell two. This means that there are voltage drops of about 0.155 V and 0.121 V between the inlet and the outlet of cells one and two respectively. This can be attributed to the changes in the operating conditions, such as pressure, humidity, temperature, etc.; it may also be related to the physical components of the 3D circuit model, such as the increase of the ohmic resistance caused by the numerous voltage sensors and thermocouples. It can also be noticed that the current densities are not homogeneous, which may be related to the variations of the operating conditions, since the temperature distribution is higher in cell two than in cell one. The current density distributions of the cell along the x, y and z axes have different values; this phenomenon creates a different voltage at each point of the cell. The effect of the impedances in each cell modifies the current density distribution in the fuel cell stack, and this effect can be exploited for fault isolation in the PEMFC stack. Hence, a change of the impedance in a given direction represents a change of the related current density. Moreover, the deviation of the current density is related to the operating conditions, such as the temperature. Thus, the fuel cell moves from the normal mode to a faulty mode depending on the variation of the operating conditions. This is the aim of the next section (§4).
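Before moving on, the Newton-Raphson step mentioned in point 2 above can be illustrated with a minimal sketch: given a measured node voltage and temperature, the local current density is the root of the polarization relation. The polarization expression and all numerical coefficients below are illustrative placeholders, not the calibrated parameters of the thesis model.

```python
import numpy as np

def cell_voltage(i, T):
    """Illustrative polarization relation V(i, T) for one node (placeholder coefficients)."""
    E_nernst = 1.23 - 0.85e-3 * (T - 298.15)               # simplified reversible voltage
    v_act = 0.05 * np.log(i / 1e-4 + 1.0)                  # activation loss
    v_ohm = 0.2 * i                                        # ohmic loss
    v_conc = -0.016 * np.log(max(1.0 - i / 1.5, 1e-9))     # concentration loss
    return E_nernst - v_act - v_ohm - v_conc

def local_current_density(v_measured, T, i0=0.5, tol=1e-8, max_iter=50):
    """Newton-Raphson iteration: find i such that cell_voltage(i, T) = v_measured."""
    i = i0
    for _ in range(max_iter):
        f = cell_voltage(i, T) - v_measured
        h = 1e-6                                           # numerical derivative df/di
        df = (cell_voltage(i + h, T) - cell_voltage(i - h, T)) / (2 * h)
        step = f / df
        i -= step
        if abs(step) < tol:
            break
    return i

# Example: node measured at 0.65 V and 55 degC (328.15 K).
print(local_current_density(0.65, 328.15))
```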
Simulation of the PEMFC in faulty operating modes
Generally, a fuel cell operates in two modes: the healthy mode, in which the fuel cell operates under normal conditions, and the degraded mode, in which the FC operates under faulty conditions. These faulty conditions can be caused by: 1) long operation times (natural ageing); 2) operational incidents, such as Membrane Electrode Assembly (MEA) contamination or reactant starvation. The degraded mode indicates that there is an abnormality in the FC operating conditions, such as a temperature variation, causing a fault and/or a performance loss in the fuel cell. The common faults occurring in a fuel cell can be divided into two categories: flooding (on the cathode or the anode side) and drying faults [3.9].
Flooding at the cathode side
Flooding at the cathode side is a common problem for the cells. It is caused by an excess of water sometimes produced on the cathode side when the stack is operating. In case of cell flooding, the water film formed on the cathode side of the cell blocks the oxygen diffusion towards the positive electrode (the oxygen reduction reaction site), thus decreasing the cell voltage. The magnitude of this phenomenon strongly depends on the stack current, the stack temperature and the reaction airflow rate.
Flooding at the anode side
This phenomenon is as common as the previous one. It generally occurs during the "reconditioning" procedure, because the anode compartment is completely filled with deionized water. In this case the water film blocks the hydrogen diffusion towards the negative electrode (the hydrogen oxidation reaction site), thus decreasing the cell voltage. In practice, the voltage increases after every purge event, but because of the flooding on the anode side it quickly drops again below the single-cell nominal voltage (about 600 mV). In the worst cases, the single-cell voltage remains constant around zero volts (in the range ±50 mV).
Drying of the membrane
Drying of the membrane is another common fault occurring in fuel cells. It can damage the membrane by creating holes in its polymeric structure. This phenomenon occurs accidentally when the temperature is close to 70 °C. The direct consequence of such an event is again a very low voltage (near zero volts or, in the worst case, down to −1.4 V). If a cell has holes in its membrane, its voltage decreases very quickly with respect to the normal single-cell behaviour. Figure 3.16, for example, emphasizes the link between the relative humidity and the state of the membrane, which can be either wet or dry. It can be readily seen that, for most operating conditions, the membrane of the FC is either too wet or too dry. The humidity should be above 60% to prevent excessive drying, but must remain below 100% to prevent flooding. Higher temperatures give better performance, mainly because the cathode overvoltage is reduced; however, above 60 °C the humidification problems increase [3.10]. For instance, if the humidity of the inlet gases increases, more water accumulates in the cell; flooding can then occur and block the gas inlet. Conversely, if the humidity is too low, less water accumulates in the cell, which leads to drying [3.11]. For this reason, the humidity range has been selected between 50% and 120% in order to take into account flooding, drying and the normal mode.
Furthermore, if the humidity is between 80% and 100%, the FC works in healthy mode, whereas a humidity above 100% corresponds to the flooding case and a humidity below 80% to the drying case. In the same way, an increase of the inlet gas pressure leads to flooding, while a low pressure leads to drying. According to the characteristics of the FC, the pressure range has been selected between 0 and 2.2 bar: the FC operates in healthy mode in a specific range, namely between 0.7 and 1 bar, a pressure between 1 and 2.2 bar indicates the presence of flooding in the FC, and a pressure lower than 0.7 bar leads to drying. Finally, the temperature range is between 0 and 70 °C (this range strongly depends on the technical characteristics of the FC). Starting from these different operating conditions, a 3D fault diagram has been sketched in Figure 3.18. It summarizes the studied faults in the FC, namely drying and flooding, in terms of temperature, pressure and humidity.
Simulation of faulty mode examples
To simulate the effect of faults introduced in different zones of the circuit model, the FC is supplied with a DC load current containing some typical harmonics, identical to those found in the DC/DC boost converter generally associated with the PEMFC. To simulate the faults, the 3D model calibrated in healthy mode is used first. As mentioned before, the impedances in the different zones depend on the temperature and on the other operating conditions of the fuel cell. In addition, the impedances attached to the z axis represent the connection losses between the different FC cells in the Z direction. Changing one of these impedances changes the current distribution in the cell; this behavior can be used to simulate faults at each point of the cell. The measurements of the mean value and of the first seven harmonics of the output voltage, in steady-state operation, allow computing the corresponding Harmonic Distortion Rate (HDR) and the mean value variation of the voltage with respect to the healthy value (MVV). These two parameters are used to characterize the different faults, taking into account the 2D space coordinates of the fuel cell. Figure 3.18 gives some examples of this characterization process: the variations of the impedances of the branches in the middle of the two cells are represented, and the z axis has been considered to realize these simulations. The significant point in these figures is that changing an impedance value affects the output voltage of the cell. By increasing or decreasing an impedance of this model, the current density at each point changes, and it can be assumed that drying or flooding faults occur in the FC; more details are given in Chapter V.
Conclusion
The proposed 3D model is implemented in the Matlab/Simulink software and has been validated experimentally, in healthy mode, on one air-cooled PEMFC. The circuit approach has been used to divide each cell of the modelled FC into several elementary cells. The case of a circuit of 9 nodes has been studied and explained. This allows creating the most common faults anywhere within the three space directions of the FC stack. The idea is to use this model to train an ANN model that will be used for the on-line diagnosis of the PEMFC, but also for the management of its degraded modes. However, to achieve this goal, the 3D model first has to be calibrated in healthy mode.
Such operation requires a lot of experimental data and a huge work to build the adequate test benches. In the next chapter, a special focus will be done on the developed experimental work for calibrating and validating the proposed model. Table of contents of Introduction Two set-ups have been developed to validate the proposed 3D model. Because of the difficulty to introduce faults in the FCs without destroying them, only the healthy mode has been focused in this study. The first one concerns one FC cell from MES-DEA technology. The second one is a FC system from Ballard technology (called Nexa stack FC). Both technologies use the air-cooling system for FCs cooling. In this chapter, the two set-ups are presented with the corresponding environmental hardware and software materials. The obtained results are exposed and commented regarding to the validity of models. Single cells set-up In order to validate the 3D fault sensitive model of PEMFC cell a test bench for a single FC cell has been carried. Different parameters have been controlled to test various operating conditions such as, gas flow or pressure, temperature, air humidity rate, airs and hydrogen stoichiometry's. In addition, an electronic load is considered to simulate the load dynamic variations, and to take into account constraints related to transportation applications. Gas supply description The suitable pressure in the range 1-3 bars, according to the manual indicator, feeds the oxygen for the stack operation. For this reason, special devices have been designed to connect the air supply to the bipolar plate in the cathode side (see Figure.4.1). As shown in this Figure, the two parallel inlet channels of the stack are embedded on the top of the cell. Thus, the air is able to overcome the pressure drops of the cathode side and feed each compartment of the entire cell. Then, the exhausted air enters in the parallel outlet reactant air channels that finally drive it outside the stack. In order to supply the stack with hydrogen, the supplied H 2 inlet tube connector has to connected the to the hydrogen source via a proper tube. A flexible or rigid tube (e.g. silicon or Teflon respectively) should make the exhausted hydrogen circuit. The nominal flow of this exhausted hydrogen is 0.28 Nlt /s. The supply pressure of the hydrogen should be adjusted to a set value of 0.5 bar overpressure. The physical references of the MEAs The complete equipment consists of one cell with a connector and an isolator. These connectors have realized the air-cooling and the electric connection (current collector) between the bipolar plate, at the anode and the cathode sides. The isolators were used to prevent the hydrogen and oxygen leakage from the inlet and the outlet. As illustrated in Figure .4.3 each cell includes: a membrane with an active area of 62 cm 2 ; a gas diffusion layer with a thickness roughly equal to 0.42 mm and a graphic block in the anode and the cathode side. More details about the component characteristics can be found in Table.4.4 [4.1]. Cathode side of single cell with inlet and outlet of hydrogen and oxygen. Single cell Isolator Description of the test bench The structure of the test bench can be divided in four parts according to Figure.4.4 [4.2]. The supervisor block includes the user interface. It collects measured data, transmits the operational orders and manages safety processes. The Ancillaries that consist of different actuators and sensors apply the control and transmit back the measurements. 
The tested stack is equipped with specific sensors: voltage measurements across each elementary cell, thermocouples, current sensor, etc. The electronic load can be programmed to impose a given time evolution of the stack current [4.2].
Supervisor and control
The control has been implemented on a National Instruments PXI platform (see the corresponding figure). The software interface allows the user to choose the fuel cell running mode and the parameters to be controlled. The system can run automatically, following either a computed load cycle or a manual operation, according to the user's need. An interface panel and the "Settings" part of the supervisor are used for this purpose (see the corresponding figures).
Ancillaries
The structure of the test bench is illustrated in Figure 4.8. The actuators and sensors implemented in the fluid circuits, such as the hydrogen and air distribution, the humidification rate control, the cooling loop and the regulation of the water temperature, are presented in [4.2].
Electronic load
The electronic load allows performing tests to characterize the static and dynamic behaviors of the fuel cell and to simulate high-frequency disturbances (chopping frequency). It can be directly controlled by the supervisor. Its nominal values are 800 W, 120 A and a 20 kHz bandwidth. Figure 4.9 shows the electronic load of the test bench [4.2].
Thermocouple
The most common, accurate and practical method to measure the temperature distribution within a PEMFC cell is the thermocouple. Thermocouples have attractive features, such as a simple configuration, a high accuracy (0.1 °C), a fast response and a large measurement range. They are widely used as point temperature measurement devices and consist of two wires of different materials joined at their ends. When the two junctions are subjected to different temperatures, a small electrical current is generated, which leads to a small voltage drop. The available types of thermocouples are classified according to the American National Standards Institute (ANSI) standard as K, J, N, R, S, B, T or E. Twelve type-K thermocouples with isolated parts have been selected to monitor the temperature distribution of the cell during operation (see the corresponding figure).
Calibration of the thermocouples
The chosen thermocouples should be calibrated before being used. The type-K thermocouples can be configured through the DPI 620 series (see Figure 4.12). The GE Druck DPI 620 Advanced Modular Calibration and HART Communication System can measure and generate mA, mV, V, Ohms, frequency and a variety of RTDs and T/Cs. For the calibration, all the thermocouples and the reference thermocouple (the Canne Pyrométrique of type 14, see Figure 4.14) are placed in a BINDER incubator of the BD series (see Figure 4.13), and a correlation between the thermocouple temperatures and the reference temperature is established. Since the temperature inside the Binder becomes homogeneous only once the reference thermocouple indicates the same temperature as the Binder set point, the reference thermocouple and the other thermocouples must be kept in the Binder at the specified temperature for one hour. In the next step, the DPI 620 measurement device reads all the thermocouples. The results are shown in Table 4.6, where the reference temperatures are compared with the 12 thermocouples used. In this table, the maximum errors are recorded for thermocouples 3 and 4, with around 2.6 %.
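The error figures quoted above can be obtained with a simple comparison of each thermocouple against the reference probe. The readings below are made-up values for illustration, not those reported in Table 4.6.

```python
import numpy as np

# Relative error of each thermocouple against the reference probe (illustrative data).
reference_C = np.array([20.0, 40.0, 60.0])            # incubator set points
readings_C = np.array([[20.1, 40.2, 60.3],            # thermocouple 1
                       [19.9, 39.8, 59.7],            # thermocouple 2
                       [20.5, 41.0, 61.5]])           # thermocouple 3

errors_pct = 100.0 * np.abs(readings_C - reference_C) / reference_C
mean_error_pct = errors_pct.mean(axis=1)              # mean error per thermocouple
print(mean_error_pct)                                 # e.g. [0.5 0.5 2.5] %
```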
This error may be due to the use of long connecting wires, or may be related to the thermocouple junction. The mean error values of each thermocouple (see Table 4.7) are calculated at different temperatures and all the thermocouples are compared with each other. The maximum error (~2.6 %) is located between the third and the fourth thermocouple. Therefore, as a preliminary result, the temperature differences between these thermocouples can be neglected in the temperature distribution measurements.
Voltage
In order to investigate the voltage distribution in the different parts of a single cell, 12 voltage sensors were installed at the same locations as the thermocouples. They were glued directly onto the bipolar plate of the single cell with a temperature-tolerant adhesive and isolated from the connector. In this way, the existing relationship between temperature and voltage can be captured (see the corresponding figure).
Choice of the voltage sensors for the measurements
Generally, in this type of fuel cell (MES fuel cell), an air fan is used as the cooling system. In particular, for the test of the influence of the temperature (i.e. of the temperature distribution) on the performance of the air-cooled fuel cell, 12 thermocouples were placed along the cell to monitor the temperature of the single cell (as shown in Figure 4.19). These thermocouples are isolated from the electrical parts to prevent a short circuit between the thermocouple body and the bipolar plate. The voltage sensors are directly connected to the graphite parts on the anode and cathode sides. The sampling frequency was set to 3 Hz in the control test panel. The current load profiles are kept constant, between 5 A and 15 A, as long as the temperatures measured by the inlet and outlet thermocouples are stabilized. To select acceptable voltage measurements, the fuel cell is first run without load, only with the input gases (hydrogen and oxygen), and the measurements are recorded (see Figure 4.20); in other words, only the Nernst voltage affects these measurements. The acceptable voltage measurements can be selected in the range 0.8–0.95 V; this range is confirmed by the data extracted from the MES fuel cell datasheet, based on the polarization curve. This process is repeated in each cycle.
Validation tests of one single cell
The dynamic characteristics of the voltage and of the temperatures in the different zones of the single cell are investigated for various operating conditions: load current, air and hydrogen stoichiometry ratios and different boiler temperatures. The test bench described above is used to study the dynamic characteristics of the voltage and temperature distributions in single-cell and stack PEMFCs. Each operating condition is applied during a complete cycle, until the local temperatures at each point of the cell are stabilized. For example, the air stoichiometry ratio is increased from 3 to 5 and then 7 and decreased again, and the temperature measurements are recorded until the temperature values become constant. Each local temperature and voltage is recorded by the data acquisition system at 3 Hz over a period starting from the change of the experimental condition, as illustrated in Figure 4.22. This procedure is repeated for the other operating conditions to study their impact on the dynamics of the voltage and temperature distributions. All the experimental investigations related to the temperature distribution measurements are carried out on the single-cell PEM fuel cell (MES fuel cell) shown in Figure 4.19.
Twelve type-K micro-thermocouples are used to measure the temperatures with a PC-based data acquisition system. It must be noted that voltage sensors are placed at the same locations as the thermocouples, in order to explore the relationship between these two parameters (temperature and voltage). The thermocouples and voltage sensors are placed at different locations along the fuel cell (x, y and z axes). The temperature measurements in the FC are considered stabilized only after waiting at least 360 seconds. Several effects must be taken into account during the temperature measurement: the different heat transfer processes, such as conduction, convection and radiation, can have a considerable influence. However, as pointed out in reference [4.1], the effect of heat conduction can be minimized by using long thermocouple wires; in the present experiment, a connector longer than 2 m is used between the acquisition system and the junction. Heat radiation is not expected to have an effect at low temperatures (maximum 60 °C). The thermocouple connections are insulated and rated between −10 °C and 105 °C. Preliminary tests are performed prior to the actual testing. Table 4.8 summarizes the main operating conditions adopted in the present study. Each temperature measurement is collected by the data acquisition system with a sampling rate of one reading per second. These measurements are analyzed over the intervals where the temperature is constant after a change of the experimental conditions, and this procedure is repeated for the different current loads and the different air and hydrogen stoichiometry ratios.
Voltage sensors
An advantage of the present measurement set-up is that the thermocouple probes are not in direct contact with the reaction sites, and the voltage sensors are located before the current collector. Otherwise, chemical effects produced by the reaction, such as combustion processes or catalytic reactions, could lead to unexpected and significant errors in the temperature measurements, and could cause identical voltages at the current collector plates. The local temperature distributions are measured on the anode and cathode sides by inserting 12 type-K thermocouples in the GDL. The thermocouples have a diameter of 1 mm and their specifications are:
 Mineral insulated type 'K' thermocouple;
 310 stainless steel sheath;
 Highly flexible sheath, which can be bent/formed to suit many applications and processes;
 Insulated hot junction;
 Probe temperature range −40 °C up to +1100 °C;
 Miniature plug termination (200 °C);
 Conforms to the IEC 584 specification.
Temperature measurements across the PEMFC
It is desirable that the PEMFC operates with a uniform temperature distribution. A non-uniform temperature distribution can result in a poor reactant and catalyst utilization, in a degradation of the overall cell performance, and can also cause faults in the fuel cell. In addition, the polymer membrane is very sensitive to temperature variations, and the hydration of the membrane depends strongly on the cell temperature, because the water vapor saturation pressure is an exponential function of the temperature. In order to obtain temperature profiles across the PEMFC, 12 thermocouples are placed at different locations within the experimental set-up.
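The procedure described earlier in this section, namely analyzing the measurements over the intervals where the temperature is constant after a change of the operating conditions, can be made concrete with a small steady-state detector. This is an illustrative sketch, not the acquisition code of the test bench; the window length and tolerance are arbitrary choices.

```python
import numpy as np

def stable_interval(temps_C, window_s=360, fs_hz=1.0, tol_C=0.2):
    """Return the first sample index after which the temperature stays within
    +/- tol_C over a window of window_s seconds, or None if never stable."""
    n_win = int(window_s * fs_hz)
    for start in range(0, len(temps_C) - n_win + 1):
        window = temps_C[start:start + n_win]
        if window.max() - window.min() <= 2 * tol_C:
            return start
    return None

# Synthetic example: exponential rise towards 55 degC, sampled at 1 Hz for 30 min.
t = np.arange(0, 1800)
temps = 55.0 - 25.0 * np.exp(-t / 300.0)
idx = stable_interval(temps)
print("temperature considered stable from t =", idx, "s")
```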
Effect of Air stoichiometry on temperature distribution The effect of the air stoichiometry ratio on the temperature distribution along the channel for three different current values (current 5A, 10A and 15 A) are shown in Figure .4.24 (more results are given in Appenix VIA). The air stoichimetry ratio effect on the temperature is highlghted in these Figures at operating conditions: the reactants are humidified on the cathode and dry on the anode. It is obvious that the local temperature increases when the stoichimetry ratio decreases. they are declined by growing the stoichimetry in cathode side. This can explain that the stoichiometry can be as useful as the air cooling system. Also, it can said that the increasing the air stoichiometry ratio has a positive impact on the overall cell potential. In the following analysis of results all temperature measurments hold on at cathode sides. Indeed, the temeprature at the cathodes sides is more than in anode side because of the activation losses are directly proportional to the rate of electrochemical reaction and the activation at anode side are negligible at cathode sides cathode. A. The anode and oxygen stoichiometry ratios are fixed at 3 to 6 while the hydrogen one is fixed at 2. The oxygen sides are humidified with a boiler. Here also, it is clearly seen that the temperature of the anode side is lower than the cathode side by more than 1°C. These Figures shows also that the local temperature between the anode and cathode are increased follow to increasing of the stoichiometry oxygen. This can be caused with increasing stoichiomtery, the temperature in cathode side will be decreased. It is obvious and it can be using these Figures that current load increase implies the temperature increase. . For comparision of temperature in different direction of single cell Figure .4.28 shows the temperature distributions across the cell for the three current loads of 5 A, 10 A and 15 A. The anode stoichiometry ratio is fixed at 2 but the O2 stoichiometries are selected to be between 3 and 5. In this test the channels are divided into three different regions in depending on the temperature values. These three sections are : inlet on the top, middle and bottom of the cell. Furthermore, in each region, the temperatures are recorded with four thermocouples, placed in series and located in the inlet, middle and outlet zones. These Figures show the the temperature measurements for the inlet, middle and outlet of the cell. The highest temperature in the profile is recorded at the outlet of the channel on the cathode side. This can be resulted by the heat generation and the transfer to the outlet. The temperature is greater in the middle points than in the inlet. The most important conclusion is that the temperature in three regions has very similar behaviours. That means, the temperature increases in the outlet, middle and inlet of the cell have same correlations, in function of time. Also Figure .4.28 indicates that, as the time passes, by increasing the current up to 15A, the middle and inlet temperature become higher than the outlet temperature. This may be caused by drying faults happens inside the cell. It can be noticed that the temperatures increase gradually from the left side to the middle and then decline at the right side, in reference to x axis. The temperature distributions are represented in the y-axis and they are compared within different regions. As shown in these figures, the temperature from inlet to outlet increases progressively. 
Hence, at high current loads with a hydrogen stoichiometry of 1.5, the temperature at the outlet decreases along the y axis. However, when the hydrogen stoichiometry is equal to 2, this problem is removed by the reduction of the temperature (see the corresponding figure). These results show that clear differences are observed from one zone to another in temperature and in voltage, for all the tested conditions of stoichiometry and current load. A temperature difference of about 2 °C has been noticed between the inlet, the middle and the outlet of the single cell, and a voltage difference of about 9 mV has been measured. This confirms the hypothesis made in Chapter III, where each FC cell is assumed to be the combination of 9 elementary cells connected to each other at 9 nodes having different voltages and different temperatures. Thus, to build the 3D model of the single cell tested above, the calibration process (cf. Chapter III §2.3) is used.
Measurements of the voltage along the x and y axes together
Calibration and validation of one single cell
In order to calculate the impedances in the different directions of one cell, the local current density has to be determined at each node using the Newton-Raphson method (see Table 4.9). In addition, the local resistances based on the current density are summarized in Table 4.10, and comparisons of the current distributions for the different current loads are given in these tables. In order to check the validity of the obtained model, a simulation of the polarization curve has been performed. Figure 4.38 shows a comparison between the simulation results and the experimental measurements: three polarization curves are obtained, according to the three test conditions performed, and the model adjusts itself by switching from one of these three polarization curves to another. This is very interesting for both control and diagnosis purposes, and it indicates that the healthy-mode model of the FC stack (of two cells) is valid with respect to each measurement, so it is now ready to be used, namely to simulate faults.
Case of two cells
In this section, two single cells such as those used above are assembled together to build a small stack. The goal is to show how to generalize the modeling process from one single cell to a stack, which requires introducing the Z direction in the model. The set-up is the same as for the single cell (Figure 4.19), but two cells of the MES fuel cell are involved instead of one.
Temperature distribution along the z axis
24 type-K thermocouples have been chosen (12 thermocouples per cell, installed on the cathode side); all of them have been calibrated before use, as explained above. Furthermore, 24 voltage sensors are selected for this test. Each voltage and temperature reading is recorded by the data acquisition system at 3 Hz over a period of 5 minutes, because the temperature stabilization requires the temperature to remain constant for at least 5 minutes. For accuracy, minimal error and precision of the results, every test is repeated twice for each operating condition. Further, the local current density profiles are obtained for various operating conditions, such as different current loads (5 A, 10 A and 15 A) and different air and fuel stoichiometry ratios.
Voltage distribution along the z axis
Figure 4.40 illustrates the voltage curves inside the two cells, from the inlet to the outlet, for different oxygen stoichiometries (3, 4 and 5) and a hydrogen stoichiometry fixed at 2. The voltage sensors are placed on the cathode and right side of each cell.
It can be seen that the voltage at the inlet is lower than at the outlet and follows the same trend as the temperature curves: the highest temperatures and voltages of the profiles are recorded at the outlet of each cell (as expected from the single-cell results along the y axis). This increase of the voltage is due to the increase of the temperature (see Figure 4.39; in the figures, cell one is plotted in red and cell two in blue).
Calibration and validation for two cells
In order to calculate the impedances in the different directions, the local current density has to be determined at each node using the Newton-Raphson method (see Table 4.11). In addition, the local current density calculations by Newton-Raphson can be summarized in Table 3.7, and comparisons of the current distributions for the different current loads are given in these tables. In order to check the validity of the obtained model, a simulation of the polarization curve of the two cells has been performed.
Validation with one complete PEMFC
In this section, a MES PEMFC stack is used to validate the proposed model in healthy mode. The same validation process as above is applied to this stack.
Set-up description
The PEMFC used is shown in Figure 4.42, where the stack and the Electronic Control Unit (ECU) of the system are highlighted. The latter allows supervising and controlling the FC system through a PC, using the software provided by the FC manufacturer. The FC system is loaded with an electronically controlled load able to reproduce the current profile [4.3].
Temperature measurements
The fuel cell was tested in the climatic room at different ambient temperatures between 10 °C and 30 °C (see Figure 4.43). The purpose of this test is to calculate the constant parameters of the thermal equation (Eq. 3.7).
Voltage measurement and validation
The temperature measurements above are used to calibrate the 3D electrical model under the Matlab/Simulink software. A second comparison has been performed between the proposed 3D simulation model and experimental tests on the FC stack under test; the obtained results are illustrated in Figure 4.48. The small differences observed between the simulation and the experimental results can be explained by two phenomena that happen during the experimental test. First, a hydrogen purge function is used to eliminate the water and impurities on the hydrogen side; during this operation, the hydrogen valve is opened periodically, with a purge duration in the range 0.15–1 second. Second, a short-circuit function is used to increase the performance of the system; a short circuit happens every 20 seconds for a duration of 50 milliseconds. According to the results above, one can conclude that the proposed 3D model is valid for the simulation of the PEMFC in healthy mode. This result opens interesting prospects for using the model to simulate faults within FC stacks, notably for diagnosis purposes.
Experimental validation of the model
Experimental tests have been carried out in order to compare them with the multi-physical model. The latter consists of the equivalent circuit given in Figure 2.1, in which the physical parameters were computed according to the rated characteristics of the studied fuel cell. The test bench is built around this FC and uses a climatic chamber for the tests at different environmental temperatures (see Figure 4.49).
Load profile
The technical characteristics of the PEMFC used for the modeling are given in Table 4.12. The dynamic test has been carried out through a load profile calculated from a real driving cycle.
The latter lasts 12549 s and is given in Figure 4.50. The load profile is controlled by a programmable electronic load connected to the FC stack (Figure 4.51.a). The same profile is then applied to the model for the simulation. In addition, the same experimental physical conditions are used in the simulation (e.g. ambient temperature, stack current).
Stack PEMFC temperature and voltage measurement results
As explained above, the fuel cell voltage depends on many parameters, such as the temperature, the current, etc. In this work, the effects of the temperature variation along the three directions of the stack fuel cell are considered; indeed, the temperature changes with time during the operation of the fuel cell (see the corresponding figures).
3D effect on the stack voltage
As illustrated in Figure 4.56, three sets of temperature measurements were taken: the first one along the y axis (top and bottom of the side edge of the cell), the second one along the x axis (left and right on the top edge of each face), and the third one along the z axis (side faces of each cell). It must be noted that the voltages were also measured at the same places as the temperature sensors, in order to explore the relation between these parameters. Based on equations Eq. 2.1 to Eq. 2.7 in Chapter II, all the parameters are directly related to the temperature. In other words, the temperature distribution in the different parts of the fuel cell has a direct influence on the increase and decrease of the fuel cell output voltage. In these figures, owing to the location of the air intake, to the distribution of the current density and to other parameters, the temperature in the middle of any direction (x, y and z) is higher than in the other parts.
Calibration of the 3D model for two cells
In order to calculate the impedances in the different directions of the stack, the Newton-Raphson method has been used. For this purpose, the voltages and temperatures were measured at specific points of the stack FC (it is not possible to install thermocouples and voltage sensors everywhere in the FC). The voltages and temperatures measured in the experimental test are shown in Table 4.14. The impedance parameters calculated by the NR method are given in Table 4.15; they reflect the variation of the impedances in response to the air cooling system (variable temperature) and to the current density inside the cell (chemical processes, in particular the activation losses). Only the magnitudes of the impedances are considered in this work. The calculated values are:
Cell one: $R^{1}_{12}$ = 0.0577 Ω, $R^{1}_{35}$ = 0.1007 Ω, $R^{1}_{25}$ = 0.8286 Ω.
Cell two: $R^{2}_{12}$ = 0.2300 Ω, $R^{2}_{35}$ = 0.2107 Ω, $R^{2}_{25}$ = 0.1691 Ω.
Interface between cell one and cell two: $R^{12}_{11}$ = 0.1237 Ω, $R^{12}_{55}$ = 0.4236 Ω.
Synthesis on the validation with the Nexa stack PEMFC
All the tests, such as the temperature and voltage measurements, have also been performed on the Nexa stack, and the calibration of two cells of this FC has been carried out from these measurements. However, at this stage of the study, the 3D model of the Nexa stack still needs to be finalized by updating all the model parameters starting from the FC characteristics. In fact, a large number of temperature and voltage results are available for several 3D positions within the FC stack, and it is expected to use all these data in the near future for the 3D fault-sensitive modeling of the Nexa FC.
Conclusion
In this chapter, two PEM fuel cells (MES FC and Nexa FC) have been considered to analyze the behaviors of the cell voltage and temperature distributions under various operating conditions.
The measurements obtained allow validating the proposed 3D model, first on one single cell, second on two cells and finally on a complete stack of the MES FC: the single and double cells allowed validating the 9-node model, while the complete stack validated the one-stack model. The validation still has to be finalized on the Nexa FC. It can be concluded, first, that the main hypothesis of the proposed multi-node circuit approach, which assumes different potentials and current densities at each point of the FC stack, is valid; second, that the developed 3D model is valid for simulating the PEMFC operation in healthy mode. Thus, it can be used to introduce different faults and study the behavior of the voltage and current distributions in the X, Y and Z directions of the stack. The goal is to characterize the faults for diagnosis purposes. The next chapter explains how the proposed model can be used for the fault diagnosis of the PEMFC in automotive applications.
General diagnosis strategy of FCEV drive trains
With the increasing demands for efficiency in vehicle applications and safety-critical processes, the field of fault detection and fault diagnosis plays an important role. During the last few decades, theoretical and experimental research has shown new ways to detect and diagnose faults. One distinguishes fault detection, which recognizes that a fault has happened, from fault diagnosis, which finds the cause and the location of the fault. Advanced fault detection methods are based on mathematical signal and process models, and on methods of system theory and process modeling, to generate fault symptoms. Compared to other electrochemical power devices such as the battery, the PEMFC is much more complicated. Its complexity derives from the following aspects [5.1]:
1. The three-dimensional architecture is vitally important to performance and durability, due to the large size of PEM fuel cell stacks.
2. The local performance can seriously affect the system's performance and durability.
3. The operating conditions, such as load, temperature, pressure, gas flow and humidification, are complicated.
A further important field is fault management, or asset management, which aims at avoiding shutdowns through early fault detection and actions such as condition-based maintenance or repair. If sudden faults, failures or malfunctions cannot be avoided, fault-tolerant systems are required: through fault detection and the reconfiguration of redundant components, breakdowns, and in the case of safety-critical processes accidents, may be avoided [5.2]. The diagnosis process developed within this thesis concerns the drivetrain of the FCEV (see Chapter 1, Figure 1.18). Among the components of the FCEV drivetrain described previously (PEMFC, batteries, DC/DC and DC/AC converters and electrical motors), the fuel cell is the most fragile; this is why the part of the drivetrain on which this study focuses is the PEMFC. With respect to the whole vehicle, the drivetrain is just one subsystem among many others. Thus, the faults can be divided into different levels according to the depth of their location within the vehicle (see Table 5.1). In this section, we explain this scheme around our thesis work, whose topics are pointed out by a green frame.
Level one
This diagnosis level takes care of the state of the main systems inside the vehicle.
These devices, generally called subsystems or main components, include the powertrain, the embedded grids (or electrical harness), the ICE, the steering column, the wheels, etc. At this level 1, a fault is detected through a basic supervision algorithm in which sensors send to the main Electronic Control Unit (ECU) the State of Operating of each Subsystem (SOS). A Boolean supervision algorithm (true/false) then sends a fault signal and indicates the subsystem in which the false input is detected (see Figure 5.2) [5.3].
Level two
The aim of this level is to indicate accurately which component of the drivetrain is in fault, i.e. the PEMFC, the DC/DC power converter, the battery system, the DC/AC power converter or the motorization device. This is achieved thanks to a classification of the faults, done starting from the electrical, thermal and mechanical measurements of the drivetrain. At this level 2, the subsystem in faulty mode is identified by the ECU. The next step is to go deeper into the analysis of the fault signals, to know more about the fault that occurred and to evaluate the SOH of the faulty subsystem.
Level three
When an unusual behavior is identified in one subsystem, the fault diagnosis strategy is to evaluate the fault severity and its impact on the subsystem performance. The loss of performance can be expressed through a function between zero and one representing the actual State Of Health (SOH) of the subsystem. This parameter can be used in a control algorithm in the degraded mode of the drivetrain. To evaluate the SOH parameter efficiently, a deep knowledge of the corresponding system is necessary. In the framework of this thesis, the study has been developed on the fault diagnosis of the source of the drivetrain (the fuel cell). This is achieved through the 3D fault-sensitive modeling of the PEMFC and the training of an ANN-based model for fault diagnosis and SOH computation (see the corresponding figure). A common consequence of PEMFC failures is a change of the voltage: if a fault occurs in the FC, the voltage can either increase or decrease according to the fault [5.4]. In summary, the FC stack voltage is a first indicator of a degraded working mode. Different categories of faults are likely to occur in PEMFCs depending on the operating conditions [5.5]; water management and temperature are crucially important effects for the healthy operation of a PEMFC.
Artificial Neural Network for the diagnosis of the PEMFC
In this work, a two-layer feed-forward ANN has been used for the classification. There are no general rules to determine the number of hidden layers and hidden nodes; this depends on the complexity of the mapping to be achieved. The number of inputs (input nodes) and outputs (output nodes) is of course determined by the specific problem, and the number of neurons and connections limits the number of patterns a neural network can store reliably [5.6]. A comprehensive investigation of the ANN structure and application is presented in Appendix 5A.
Fast Fourier Transform (FFT)
The DC load current contains some typical harmonics, identical to those found in the DC/DC boost converter generally associated with the PEMFC. The FFT algorithm is applied to compute the first 7 harmonics of the output voltage. The DFT is extremely important in the area of frequency (spectrum) analysis because it takes a discrete signal in the time domain and transforms it into its discrete frequency-domain representation. An FFT computes the DFT and produces exactly the same result as evaluating the DFT definition directly; the most important difference is that the FFT is much faster.
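As a concrete illustration of how the mean value, the first seven harmonics, the HDR and the MVV mentioned in this work can be extracted from a sampled output voltage, a small numpy sketch is given below. The fundamental frequency, the sampling parameters and the healthy reference value are placeholders, and the HDR and MVV formulas are written in their usual THD-like and relative-variation forms; they are not taken verbatim from the thesis.

```python
import numpy as np

def voltage_indicators(v, fs_hz, f0_hz, v_healthy_mean):
    """Mean value, first 7 harmonic magnitudes, HDR and MVV of a sampled output voltage."""
    n = len(v)
    spectrum = np.fft.rfft(v) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs_hz)
    mean_value = spectrum[0].real

    harmonics = []
    for k in range(1, 8):                              # first seven harmonics
        idx = np.argmin(np.abs(freqs - k * f0_hz))     # closest FFT bin
        harmonics.append(2.0 * np.abs(spectrum[idx]))

    hdr = np.sqrt(sum(h ** 2 for h in harmonics[1:])) / harmonics[0]   # distortion ratio
    mvv = (mean_value - v_healthy_mean) / v_healthy_mean               # mean value variation
    return mean_value, harmonics, hdr, mvv

# Synthetic test signal: DC level plus a fundamental and one harmonic.
fs, f0 = 20000.0, 1000.0
t = np.arange(0, 0.1, 1.0 / fs)
v = 0.65 + 0.02 * np.sin(2 * np.pi * f0 * t) + 0.005 * np.sin(2 * np.pi * 2 * f0 * t)
print(voltage_indicators(v, fs, f0, v_healthy_mean=0.68))
```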
In the presence of round-off error, many FFT algorithms are also more accurate than a direct evaluation of the DFT definition. It is the speed and the discrete nature of the FFT that allow analyzing a signal's spectrum with Matlab, whose FFT function is an effective tool for computing the discrete Fourier transform of a signal.
Modelling method for on-line FC diagnosis
ANN-based 3D fault classification in the PEMFC single cell
The PEMFC dynamic 3D model is built from experimental results and simulations using the MATLAB software. With this model and the experiments, the mechanisms of the different faults in PEMFC systems are analyzed. An ANN is applied for the fault diagnosis and classification; it is trained with data coming from the FFT algorithm, according to an analysis of the variation of the operating conditions in the fuel cell. The fault detection at each step, based on the operating conditions, has been simulated. The classification of flooding and drying is illustrated in Table 5.3. For the too dry faults, all the humidity values between 30% and 50%, with a constant pressure (on the cathode and anode sides) and a constant temperature of 65 °C, have been simulated. The FFT algorithm has been used to express the output voltage at each node and the output voltage of the cell. The measurements of the mean value and of the first seven harmonics of the output voltage, in steady-state operation, allow computing the corresponding Harmonic Distortion Rate (HDR) and the mean value variation of the voltage with respect to the healthy value (MVV). Hence, the FFT data obtained for each operating condition, based on the values given, serve to train the ANN for fault diagnosis and classification. In order to calculate the impedances in the different directions of one cell, the local current density has to be determined at each node using the Newton-Raphson method (see Table 5.6). In addition, the local current density calculations by Newton-Raphson can be summarized in Table 5.5, and comparisons of the current distributions between the different current load profiles are shown in this table. To train the neural network, all the impedances of the 3D sensitive model are used. The drying and flooding faults are divided into 4 types, as shown in Table 5.2. In order to isolate the faults, each node can be simulated for different operating conditions. In this work, a temperature of 62 °C and a humidity of 30% correspond to the Too Dry case, while a temperature of 55 °C and a humidity of 120% are assumed for the Too Flooded case. Besides, temperatures of 45 °C and 55 °C with humidities of 110% and 50% are presumed for the flooding and drying faults respectively. To obtain these conditions, the hydrogen and oxygen inlet pressures are adjusted to 1.5 bar and 2.2 bar respectively. Tables 5.8 and 5.9 show the current distributions for the different current load profiles and the different faults at all the nodes of the cell. Table 5.8 shows that the local current density at 10 A in faulty mode (too flooded) increases from 0.02 to 1.07 A/cm² from the outlet to the inlet. For the flooding fault, these variations range approximately from 0.02 to 0.11 A/cm². However, for the too dry and dry faults, the local current densities decrease from 0.5 to 0.29 and from 0.4 to 0.17 A/cm² respectively, from the outlet to the inlet. This could be caused by the temperature variations in the cell: as explained previously, the temperature distribution is inhomogeneous and, because of the heat transfer, the pressures at the inlet and the outlet change.
In additional, these variations in current load profile of 5 (A) is not noticeable, because of temperature distribution in this level of current is not effected a lot. ANN Faults classification in stack of 2 cells In this work isolation and classification of faults in the PEMFC are divided in two steps: Step 1: Isolate fault and classified the fault in stack that means detected the faults in each cell. Step 2: Faults isolation and localized the faults in each cell. The dry faults of 567 numbers of samples contain 7 numbers of harmonics in 9 nodes for 9 classes. Each class is chosen by variations operating condition based on Table .5.2. Faults Classification in stack These figures show that 7 harmonic attributes will be used as inputs to the neural network and the respective target for each will be 9 classes. Data for classification are set up for the neural network by organizing the data into 9 matrixes the input matrix X and the target matrix T. Each column of the input matrix will have 9 elements representing 9 nodes. That means, each corresponding column of the target matrix will have 9 elements consisting of 9 classes. In these figures the confusion matrix shows the percentage of correct and incorrect classifications. Correct classifications are green squares on the matrix diagonal. Incorrect classification is the red squares. Furthermore, the blue cell at the bottom right shows the total percent of correctly classified cases (in green) and the total percent of incorrect class cases (in red). The results show very good recognition and in this case, the network response is satisfactory. . The neural network architecture appropriate to solve classification problems is the feed forward one characterized by:  An input layer having as many units as attributes are in the data;  One or several hidden layers (as the number of hidden units is larger the model extracted by the neural network is more complex -but this is not necessarily beneficial for the problem to be solved);  An output layer having as many units as classes. Conclusion In this chapter, the study focused on a power source consisting of a PEM Fuel cell in the power train. A new model was proposed to improve the lifetime and reliability of the power train and to detect online faults. Fault classification and isolation are implemented in the stack fuel cell and one cell. Besides, Current distributions in different points of the cell based on varying operating conditions are calculated by the Newton Raphson method. These variations cause Drying, Flooding, Too dry and Too flood in the cell. Current density distributions localized in each step of current and faults. The ANN method has been used to develop diagnosis based on 3D sensitive models for fault isolation in one cell PEM. The input data of the ANN were analyzed by the FFT method. The ANN advantages consist in their ability to analysis a large quantity of the data and to classify the faults in terms of their types. The AAN are used for classification and isolation the faults. Data for classification are set up for the neural network by organizing the data into 9 matrixes the input matrix X and the target matrix for 9 classes. The results show very good recognition and in this case, the network response is satisfactory. General Conclusion FCEV is considered by public and private research organizations, as one of the most suitable solution for clean transportation. 
Indeed, the use of hydrogen produced by water electrolysis using renewable energy sources, combined with a FC proton exchange membrane, allows completely green energy cycle. Hydrogen production and distribution technologies as well as FC ones as enough mature to be economically viable. Many automakers (Daimler, Honda, Chevrolet, Hyundai, Ford) already propose vehicles whose performance are comparable to internal combustion vehicles (500 km range, 130 CV, 160km/h-max). One of the main still existing locks on the way the marketing of these vehicles is the reliability of their drivetrains that has to be increased so that to be competitive regarding the conventional vehicles. The FCEV drive trains contain the PEMFC, Batteries, DC/DC converters, DC/AC inverters and electrical motors. Among the drivetrain components, the PEMFC is the more fragile. Indeed, its performances are affected by different operating conditions such as temperature, pressure, humidity and current density. The latter influence cost, output power, energy efficiency, reliability and lifetime of the PEMFC. Thus, Understanding the operating modes will be very useful for enhancing the lifetime of the system. To meet this objective a 3D model has been developed for modeling and simulation of a PEMFC. A circuit approach has been used to allow to easily take into account the three dimensional aspect of the PEMFC stack. Also, this kind of model offers the possibility to include through a parameterization process all the environmental conditions namely, the temperature, gas pressure and stoichiometry and humidity. It has been shown that the propose model is able to simulate the PEMFC single, double and multi cells in normal operation conditions (healthy mode) but also in faulty operating conditions (faulty mode). Thus the model has been used to train an ANN based model for on line diagnosis purpose. The model principle as well as the used process for its establishment has been explained in details. It has been shown that the experimental tests are combined to the theoretical formula to calibrate and validate this model. In the calibration process, the Newton Raphson method has been used to find the physical parameters of the model. In such calibration the temperature and voltage distribution in the FC stack were considered for different operating conditions in terms of current load and stoichiometry of oxygen and hydrogen. For experimental study, two PEMFCs have been considered to analyze the behaviors of the cell voltage and temperature distributions under various operating conditions. The measurements obtained allow validating the proposed 3D model first on one single cell, second on two cells and finally on a complete stack of the PEMFC: The single and double cells allowed validating a 9 nodes model while the FC system validated the one stack model. Thus, the 3D validated model can be used for introducing different faults to study the behaviors of the distributions of voltages and currents in three space directions of the stack in the purpose of diagnosis of the FC. In this framework, an ANN based model has been also developed to classify the different faults. The input data of the ANN were analyzed by the FFT method. Data for classification are set up for the neural network by organizing the data into 9 matrixes the input matrix X and the target matrix for 9 classes in a single cell. The results show very good recognition and in this case, the network response is satisfactory. 
Block Diagram A dynamic model for the PEM fuel cell has been developed in MATLAB/SIMULINK, based on the electrochemical and thermodynamic characteristics of the fuel cell discussed in chapter III. The fuel-cell output voltage, which is a function of temperature and load current, can be obtained from the model. Effect of Air stoichiometry on temperature distribution along channel The effect of the air stoichiometry ratio of the temperature distribution along the channel for three different current values 10 (A) and ( 15 .7. and Figure.4.A.8. This phenomena can be attributed to the higher electrochemical activity taking place over the MEA surface as a result of decreasing the cell potential. This is the most important point to be noted about the effect of the temperature on the cell voltage. This relationship can be useful to study the fault diagnosis for drying and foolding. Static and Dynamic Artificial Neuron Models: Adaptive Function Estimators General: a very detailed description of the artificial neuron is given below, since this is absolutely necessary for a good mathematical and physical understanding and for all those who also wish to develop other types of ANM (e.g. a fuzzy-neural model, minimum architecture neuron model, etc.). ANNs are based on crude models of the human brain and contain many artificial neurons (computational units) linked via adaptive interconnections (weights) arranged in a massively parallel structure. They are artificial 'entities' that can actually learn from given data sets (they estimate functions from datasets). In other words, they are adaptive function estimators which are coarse simulators of a biological neural network in a human brain. It is a very important feature that a suitable ANN is capable of learning the desired mapping between the input signals and output signals of the system under consideration, without knowing the exact mathematical model of the system in this sense, the ANN is a numerical, trainable, modelfree adaptive estimator (similar to a fuzzy system). Since the ANN does not use a mathematical model of the system under consideration, the same ANN configuration and dynamics can be applied to many problems. A human brain can perform an extremely large number of different operations. There are a number of different ANNs that try to mimic many of these features. Similarly to the human brain, the basic element of an ANN is a single computational neuron, which is basically a multi-input usually nonlinear processing element with the weighted interconnection of the neuro-biological process of a human brain neuron; it is possible to obtain a relatively simple artificial neuron model which gives a good representation. A simple model of the so-called 'static' artificial neuron has four main parts:  Input(s);  A weighted summer;  A non-dynamic function ( so-called 'activation function', which is also sometimes referred to as a transfer function ) and in most of the applications is non-linear (there are also ANN models with use linear functions)  Output(s) It must be noted that this neuron model is also referred to in the literature as the perceptron neuron, but strictly speaking, by considering its original definition. It should only be called the perceptron if a spatial form of activation function is used (e.g. where the activation function is the hard-limit function). It can seem that the static artificial neuron model does not contain dynamics. 
However, in a so-called 'dynamic' artificial neuron model, in addition to the four main parts described above, the activation function block is followed by a dynamic block. This dynamic block can be represented by a simple delay element (a first-order low-pass dynamic block). Figure.5A.1 shows the basic model of a single static artificial neuron (AN), which is the ith neuron in an artificial neural network containing many neurons. Although in the simplest neuron model there is only one input, in general there are n inputs to a general ith neuron as shown in Figure.5A.1; these are x_1(t), x_2(t), x_3(t), ..., x_n(t). These can be considered the elements of the n-dimensional input vector x(t) = [x_1(t), x_2(t), x_3(t), ..., x_n(t)]^T. The neuron output is the scalar quantity y_i(t). The neuron contains an aggregation operator, which is e.g. the weighted sum
S_i = Σ_{j=1..n} w_ij x_j(t) + b_i,
where w_ij are the connection weights (interconnection strengths) between the ith neuron and the jth inputs and b_i is a constant (which is often called a bias or threshold of the activation function). It follows that the inputs are transmitted through the connection weights, whereby the inputs are multiplied by the weights and the weighted products are summed, and the net value (S_i) is obtained by adding the bias (b_i) to this sum. Finally, the output of the neuron (y_i) is obtained by using the neuron activation function (f_i), thus y_i = f_i(S_i), as shown in Figure.5A.1.
Input Bias
The input to a neuron has two sources: external inputs and internal inputs; the latter are inputs from other neurons. It should be noted that, by also considering the bias input in Figure.5A.1, there are in total n+1 inputs and the threshold has been incorporated by employing the input x_0 = 1 and using a corresponding weight of b_i. Thus the bias is simply added into the sum
S_i = Σ_{j=0..n} w_ij x_j(t)
(note that now the starting value for j is 0 and not 1 as before), where x_0 = 1 and w_i0 = b_i. In this case the bias is like a weight, but it has a constant input of "1". This is the main reason why one of these two very similar static neuron models can be found in various publications. A neuron fires if the weighted sum of its inputs exceeds the (threshold) bias value. In an ANN, b_i can be set to be a constant or a variable (which can change like the weights), since in the latter case there is added flexibility for the network. An ANN with biases can represent an input-to-output mapping more easily than one without biases; e.g. if all the inputs to a neuron are zero (x_j = 0, j = 1, 2, ..., n), a neuron without a bias will have a net input
S_i = Σ_{j=1..n} w_ij x_j(t) = 0.
Thus the activation function becomes a single value f(S_i) = f(0), which depends only on the activation function employed. However, if the same neuron has a bias, the net input is S_i = b_i and thus the activation function becomes f(S_i) = f(b_i), which (for a specific activation function) can have any value, depending on the bias. This results in greater flexibility. In Figure.5A.1 the neuron also contains a non-dynamic activation function, f(S_i). The reason for the use of a non-linear activation function is to deliberately introduce non-linearity into the neuron model, since this makes the network capable of storing strongly non-linear mappings. If a non-linear activation function were not incorporated into this model, then the artificial neuron would represent a linear system, which could not be used for the mapping of a non-linear system and could not suppress noise, so the linear network would not be robust.
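As an illustration of the equations above, here is a minimal sketch, not the author's code, of a static neuron computing y_i = f_i(S_i) and of a 'dynamic' variant in which the activation output passes through a first-order low-pass element. The weights, bias, inputs, activation choice and time constant are illustrative assumptions.

```python
# Hypothetical illustration of the ith artificial neuron described above:
# weighted sum of the inputs plus bias, a non-linear activation function,
# and an optional first-order low-pass block for the 'dynamic' neuron.
import numpy as np

def static_neuron(x, w, b, activation=np.tanh):
    """y_i = f_i(S_i) with S_i = sum_j w_ij * x_j + b_i."""
    s = np.dot(w, x) + b
    return activation(s)

def dynamic_neuron(y_prev, x, w, b, tau=5.0, dt=1.0, activation=np.tanh):
    """Static neuron followed by a first-order low-pass element with time constant tau."""
    y_static = static_neuron(x, w, b, activation)
    return y_prev + (dt / tau) * (y_static - y_prev)

x = np.array([0.2, -0.5, 1.0])      # example inputs x_1 ... x_n
w = np.array([0.4, 0.1, -0.3])      # example weights w_i1 ... w_in
b = 0.05                            # bias b_i
print(static_neuron(x, w, b))       # instantaneous output of the static neuron
y = 0.0
for _ in range(10):                 # the dynamic neuron settles towards the static output
    y = dynamic_neuron(y, x, w, b)
print(y)
```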
However, it should be noted that there exist neuron models with a linear activation function, but these can only be used for the modeling of linear systems. The neuron output can therefore be written as
y_i = a_i = f_i(S_i) = f_i( Σ_{j=1..n} w_ij x_j(t) + b_i ).
There are various types of activation function f_i (a mathematical function) which can be used in ANNs. However, non-linearity and simplicity are the two key factors for the selection of a specific activation function. Furthermore, since some training techniques (e.g. the back-propagation technique) require the first derivative of the activation function (f′), when they are used in an ANN employing such a learning technique the activation function must be differentiable.
ANNs, single layer and multilayer feedforward ANNs
Neural network systems consist of parallel distributed information processing units with different connecting structures and processing mechanisms. They have a large variety of applications in engineering, such as function approximation and pattern recognition. The architecture of a neural network specifies the arrangement of the neural connections as well as the type of units characterized by an activation function. The processing algorithm specifies how the neurons calculate the output vector for any input vector and for a given set of weights. The training algorithm specifies how the NN adapts its weights w for all given input vectors, called training vectors. Thus, the neural network can acquire knowledge through the training algorithm and store the knowledge in the synaptic weights. The most commonly used NNs are the multilayer feedforward networks, e.g. a three-layer network (input, one hidden and output layer), as shown in Figure.5A.3.
Single Layer ANN
The neurons are the building blocks of an artificial neural network. In a so-called single-layer feedforward ANN there is at least a single artificial neuron of the type discussed in the previous section. As shown in Figure.5A.3, in general there can be n inputs X = [x_1, x_2, ..., x_n]^T and k neurons in the single layer of the ANN, where in general k ≠ n, and each input is connected to each neuron through the appropriate weights. Each neuron performs the weighted sum of its inputs plus the bias and applies this to its activation function. It follows that there are k outputs y_1 = [y_11, y_12, ..., y_1k]^T of the ANN described by a single layer (where the index 1 in y_1 indicates the first layer, whose outputs are y_11, y_12, ..., y_1k) and
y_1 = F_1( W_1 X + B_1 ).
In this expression F_1 is the activation matrix of this single layer, which is a diagonal matrix with k elements and which depends on the net inputs to this layer, S_1 = W_1 X + B_1, where B_1 = [b_11, b_12, ..., b_1k]^T contains the biases of nodes 1, 2, ..., k of the output layer respectively. An ANN with a single layer can be used for only a very limited number of systems and it cannot represent all non-linear functions. When the activation functions in a single-layer ANN are hard-limit functions, the so-called single-layer perceptron model arises. Although this can be used for certain types of classification problems, since the hard-limit function divides the input space (the space defined by the input vector) into two regions, the output will be 1 or 0 depending on the input vector. However, the fact that there can be only two different output values is a great limitation. Furthermore, the single-layer perceptron cannot learn the mapping of systems whose input space is defined by linearly non-separable vectors.
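The single-layer mapping y_1 = F_1(W_1 X + B_1) and the effect of the activation choice can be illustrated with the following minimal sketch, not the author's code; the layer sizes, random weights and the sigmoid/hard-limit pair are illustrative assumptions.

```python
# Hypothetical sketch of a single-layer feedforward ANN with k neurons and n inputs,
# with a smooth (sigmoid) activation and the hard-limit (perceptron) activation.
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def sigmoid_deriv(s):
    # derivative needed by gradient-based training such as back-propagation
    return sigmoid(s) * (1.0 - sigmoid(s))

def hard_limit(s):
    return np.where(s >= 0.0, 1.0, 0.0)   # perceptron case: only 0/1 outputs

def single_layer(x, W1, B1, activation=sigmoid):
    s1 = W1 @ x + B1          # net inputs S_1 = W_1 X + B_1 of the k neurons
    return activation(s1)     # y_1 = F_1(S_1)

n, k = 3, 4                                # n inputs, k neurons
rng = np.random.default_rng(1)
W1 = rng.normal(size=(k, n))               # weight matrix W_1
B1 = rng.normal(size=k)                    # bias vector B_1
x = np.array([0.5, -1.0, 2.0])             # input vector X
print(single_layer(x, W1, B1))             # smooth outputs in (0, 1)
print(single_layer(x, W1, B1, hard_limit)) # perceptron-style 0/1 outputs
```

With the hard-limit activation the outputs collapse to 0/1 values, which is exactly the two-region limitation of the single-layer perceptron discussed above.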
It is sometimes convenient to have a geometrical interpretation of this: if the input space contains linearly non-separable vectors, then a straight line or plane which separates the input vectors cannot be drawn in the input space between the input vectors. When a single-layer network uses linear activation functions, the perceptron rule cannot train the network; the network then has linear neurons, called Widrow-Hoff neurons or ADALINE neurons (Adaptive Linear Neurons), and the resulting network using adaptive learning is called the ADALINE network, or the MADALINE network when many ADALINEs are combined.
Multilayer ANN
The neurons are the building blocks of an artificial neural network containing many layers. In a multilayer feedforward ANN, the neurons are arranged into several parallel layers. The connection of several layers results in a network which has the possibility of more complex non-linear mapping between the inputs and outputs; this can be used to implement classifiers and associators and to represent complex non-linear relations among variables. In a multilayer artificial neural network the neurons of layer 0 (the input layer) do not perform computation (processing), but only feed the inputs to the neurons of layer 1, which is called the first hidden layer. There are no interconnections between the nodes of the same layer. Layer 1 can be followed by a second hidden layer (layer 2). In theory there could be any number of hidden layers, but this would significantly increase the complexity of the training of the network; moreover, networks with one or two hidden layers appear to provide adequate accuracy, robustness and generalization in many cases. If there is only a single hidden layer, satisfactory performance can be obtained by using the non-linear activation function only in the hidden layer and linear activation functions in the output layer. When contrasted with the network with a single hidden layer, the network with two hidden layers may provide higher accuracy at a lower cost (fewer processing units). In an ANN with two hidden layers, the last layer (layer 3) is the output layer. In general, the layers between the input and output layers are the hidden layers. Each neuron is connected to all neurons of the adjacent layers and to no other neurons. Connections within a layer are not permitted. Generally there are different numbers of neurons and different weights for different hidden layers. There are no general rules to determine the number of hidden layers and hidden nodes; this also depends on the complexity of the mapping to be achieved. The number of inputs (input nodes) and outputs (output nodes) is of course determined by the specific problem. The number of neurons and connections limits the number of patterns a neural network can store reliably. In a multilayer ANN the activation functions in the output layer can be linear functions, since the network is able to represent a non-linear system by using non-linear activation functions in the hidden layer(s). For illustration purposes, Figure.5A.4 shows the schematic of a three-layer feedforward ANN. The term 'feed-forward' refers to the fact that the arcs joining the nodes are unidirectional. Such a network is also referred to as a multilayer perceptron, although strictly speaking this terminology should only be used for the same network when the activation functions are hard-limit functions (see the definition of the perceptron above).
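Before turning to the layer-counting conventions discussed next, the forward pass of such a multilayer feedforward network can be sketched as follows; this is a minimal illustration, not the author's implementation, and the layer sizes and random parameters are assumptions.

```python
# Hypothetical forward pass of a multilayer feedforward ANN: two hidden layers
# with a non-linear activation function and a linear output layer.
import numpy as np

def mlp_forward(x, weights, biases, hidden_act=np.tanh):
    """Propagate x through the hidden layers (non-linear) and a linear output layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = hidden_act(W @ a + b)          # hidden layers HL1, HL2, ...
    W_out, b_out = weights[-1], biases[-1]
    return W_out @ a + b_out               # linear output layer OL

rng = np.random.default_rng(2)
sizes = [4, 8, 6, 3]                        # n inputs, HL1, HL2, M outputs
weights = [rng.normal(size=(sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
biases  = [rng.normal(size=sizes[i + 1]) for i in range(len(sizes) - 1)]
x = rng.normal(size=sizes[0])
print(mlp_forward(x, weights, biases))      # M = 3 output values
```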
It should be noted that in the literature such a network is sometimes called a four-layer network, corresponding to the fact that there are four layers of nodes (for the input, hidden 1, hidden 2 and output layers). However, the network has only three layers of processing neurons and therefore such a network is also sometimes referred to in the literature as a three-layer network. If the latter definition is applied, the term 'layer' refers to the actual number of existing processing layers. This convention is more logical, since the input nodes (in the input layer 0) do not perform computation. The ambiguity could be totally removed by adopting a definition in which there is no input-layer terminology and the layer to which the inputs are directly connected is the first layer (the first hidden layer). It is then very clear that the number of layers in such a network is equal to the number of hidden layers plus 1. It should also be noted that, when this definition is used, an N-layer network has N-1 layers of hidden neurons, whose outputs are not directly accessible. As a consequence, the errors (the differences between the desired values and the actual values) at these outputs are not known directly. They can be obtained by first determining the errors at the output layer and then by back-propagating these errors. In general, multilayer artificial neural networks can be considered as versatile non-linear maps with the elements of the weight matrices (weights) and bias vectors as parameters. In the ANN shown in Figure.5A.4 there are n inputs, there is one output layer (OL) with M output nodes and there are two hidden layers (HL1, HL2). In general, in each of the layers there can be a different number of nodes and all nodes in a given layer are connected to all nodes in the next layer, but there are no interconnections between the nodes of the same layer. The number of inputs corresponds to the number of physical characteristics that are considered to be important for the neural network, and the number of output nodes is equal to the number of output quantities to be determined. As discussed above, in general there can be several hidden layers but, often due to the computational burden, this is limited to one or two hidden layers. According to the universal approximation theorem, one hidden layer is sufficient to perform any non-linear input-to-output mapping, but the theorem does not give the number of hidden neurons and does not say whether a single hidden layer would be optimal in the sense of ease of learning. This can make the training of the network sometimes difficult and, in supervised ANNs, may necessitate trial-and-error-based computations aimed at obtaining an ANN with an optimum number of hidden layers and hidden nodes.
Abstract:
In recent years, in response to the growing challenges of pollution and fuel saving, the use of FCEVs is increasing. The fuel cell power train can be divided into the PEMFC, batteries, DC/DC converters, DC/AC inverters and electrical motors. Proton Exchange Membrane Fuel Cells (PEMFC) have consistently been considered for transportation applications. Characteristic features of the PEMFC include a low operating temperature (50 to 100 °C) and a solid polymer electrolyte membrane. In this work, experiments have shown that the temperature distribution can significantly influence the performance of the PEMFC.
Analytical studies have also indicated that the ionic resistivity of the electrolyte membrane, the kinetics of the electrochemical reaction and the gas diffusion electrodes are directly related to temperature. This work evaluated the effect of temperature on a single cell and on a complete fuel cell stack. In addition, a 3D model has been developed that accounts for the effect of temperature on the performance of the fuel cell. In this thesis, two PEM fuel cells have been considered to find out this relationship and to analyze the behaviors of the cell voltage and temperature distributions under various operating conditions. An experimental study of voltage and temperature has been carried out: using one cell, 12 thermocouples and 12 voltage sensors have been installed at different points of the cell. In this work a new model is proposed to improve the lifetime and reliability of the power train and to detect faults online. In addition, the current distributions at different points of the cell under varying operating conditions are calculated by the Newton-Raphson method. On the basis of the fault-sensitive models developed above, an ANN-based fault detection and diagnosis strategy and the related algorithm have been developed. The patterns identified by the ANN have been used in the supervision and diagnosis of the PEMFC drivetrain. The ability of the ANN to take a large amount of data into account made it possible to classify the faults in terms of their type.
Résumé : In recent years, the proton exchange membrane fuel cell (PEMFC) has attracted particular interest for transport applications, since it operates at a relatively low temperature (50-100 °C) combined with a solid polymer membrane preventing any risk of leakage. In this work, experiments have been carried out to demonstrate that the temperature distribution has a significant influence on the performance of the PEMFC. Furthermore, this work includes an analysis aimed at showing an improvement of the ionic resistivity of the membrane, of the reaction rate and of the gas diffusion as a function of temperature. Experiments on a single cell and then on a complete stack allowed the impact of temperature to be evaluated using a 3D model developed to simulate the performance of the fuel cell in relation to the temperature distribution. In this thesis, two fuel cells were used to validate the behavior and to deduce a relationship between the output voltage and the temperature distribution under different operating conditions. An experimental study taking into account voltage and temperature was performed on one cell by measuring the temperature and the voltage at twelve points using thermocouples and voltage probes. The proposed 3D model thus makes it possible to improve the lifetime of a fuel cell as well as its reliability; it also allows diagnosis and online fault detection. This is done by computing the local current density under different operating conditions using the Newton-Raphson method. Based on this fault-sensitive model, a fault detection algorithm and a diagnosis strategy have been developed using artificial neural networks (ANN). The latter have been used for supervised fault classification, thus enabling the diagnosis.
List of Figures
Figure.1.1.
Energy power source from 1949-2011 ........................................................................................ Figure.1.2. Total consumption by sector, 2011. ............................................................................................ Figure.1.3. Daimler fuel cell electrical vehicle. ............................................................................................... Figure.1.4. Ford fuel cell electrical vehicle. .................................................................................................... Figure.1.5. GM fuel cell electrical vehicle. ...................................................................................................... Figure.1.6. Honda fuel cell electrical vehicle. ................................................................................................. Figure.1.7. Honda fuel cell electrical vehicle. ................................................................................................. Figure.1.8. Honda fuel cell electrical vehicle. ................................................................................................. Figure.1.9. Honda fuel cell electrical vehicle. ................................................................................................. Figure.1.10. ECCE test bed .............................................................................................................................. Figure.1.11. F-City H 2 test bed....................................................................................................................... Figure1.12. Mobypost vehicle ........................................................................................................................ Figure1.13. Passive cascaded battery/UC system. ...................................................................................... Figure1.14. Active cascaded battery/UC system. ........................................................................................ Figure1.15. Parallel active battery/UC system. ............................................................................................ Figure.1.16. Multiple-input battery/UC system. ......................................................................................... Figure.1.17. Multiple-input battery/UC systems. ........................................................................................ Figure.1.18.Fuel Cell Electrical vehicle. ........................................................................................................ Figure.1.19. (a) The electrolysis of water. The water is separated into hydrogen and oxygen by the passage of an electric current. (b) A small current flows. .......................................................................... Figure.1.20. Market for Fuel Cell Technologies. ......................................................................................... Figure.1.21. Single cell structure of PEMFC. ............................................................................................. Figure.1.22. Membrane electrode assembly. ................................................................................................ Figure.1.23. Gas diffusion layers.................................................................................................................... Figure.1.24. Bipolar Plate. ............................................................................................................................... Figure.1.25. 
System of the fuel cell ............................................................................................................... Figure.1.26. DOE has reduced the cost of automotive fuel cells from $106/kW in 2006 to $55/kW in 2013 and is targeting a cost of $30/kW ...................................................................................................
Figure.3.1. Single cell MES PEMFC with different layers.
Figure.3.2. Single cell PEMFC based on elementary cell in three dimensions (3D).
Figure.3.3. Algorithm of calibration and using of the 3D model.
Figure.3.4. Top view of 3D electric model of one cell of a PEMFC.
Figure.3.5. Transverse view (x, y axis) of the anode side with 9 nodes and 20 different resistances.
Figure.3.6. View of the interface resistors (z axis) between two FC cells.
Figure.3.7. Stack temperature according to load current in MES fuel cell.
Figure.3.8. Simulation result of the polarization curve in different double layer effect for MES PEMFC.
Figure.3.9. Simulation models of sensitive mode.
Figure.3.10. Polarization curve of MES fuel cell.
Figure.3.11. The electric circuit model for 3D of PEMFC.
Figure.3.12. Front view of the 3D proposed model for PEMFC stack.
Figure.3.13. Perspective view of the 3D proposed model for PEMFC stack.
Figure.3.14. Top view of the 3D proposed model for PEMFC stack.
Figure.3.15. Simplified model in three dimensions (3D) for two cells.
Figure.3.16. Relative humidity according to the stack temperature of the exit air of the FC with air stoichiometry of 2.
Figure.3.17. Fault diagram according to operating conditions of FC.
Figure.3.18. MVV and HDR variations according to resistance changes in Z direction.
Figure.3.19. MVV and HDR variations according to resistance changes in X direction.
Figure.3.20. MVV and HDR variations according to resistance changes in XY direction.
Figure.3.21. MVV and HDR variations according to resistance changes in Y direction.
Figure.4.1. Reactant air management with two parallel inlet pieces for oxygen feeding part.
Figure.4.2.
Oxygen plate feeding. ................................................................................................. Figure.4.3. Components of single PEM cell. ................................................................................. Figure.4.4. Structure of the test bench. ......................................................................................... Figure.4.5. Hardware of control.................................................................................................... Figure.4.6. Interface panel. ........................................................................................................... Figure.4.7. Panel for Settings" the type fuel cell for test. ............................................................... Figure.4.8. Test bench structure. .................................................................................................. Figure.4.9. Electronic load for simulate of driving cycle. ....................................................................... Figure.4.10. Single PEM fuel cell and accessory. ........................................................................... Figure.4.11. Thermocouple type K. .............................................................................................. Figure.4.12. The relationship between the seek voltage and temperature. ..................................... Figure.4.13. DPI 620 advance measurements devise. .................................................................... Figure.4.14. Process of the calibration of thermocouples with reference thermocouple. ............... Figure.4.15. The Cannes Pyrometrques type 14-reference thermocouple. ..................................... Figure.4.16. Comparing the reference and 12 thermocouples in different temperatures. ............... Figure.4.17. Voltage sensors directly connect to the graphic block in cathode and anode side. ..... Figure.4.18. Test bench structure with thermocouple and voltage sensors. ................................... Figure.4.19. Set up measuring the temperature distribution in PEMFC (MES). ............................ Figure.4.20. Boundary limitation for choice of acceptable sensors. ............................................... Figure.4.21. Schematic of one cell with thermocouple and voltage sensors. .................................. Figure.4.22. Test conditions on load current and oxygen stoichiometry. ....................................... Figure.4.23. Set up measuring the Voltage in PEMFC (MES). ...................................................... Figure.4.24. Temperature distribution for the cathode side "O 2 stoichiometry ratios of 3, 5 and 7; Current load 5 A, H2 stoichiometry of 1.5". ................................................................................. Figure.4.25. Temperature measurement for various load current, O 2 stoichiometry ratios of 3, 4, 5 and 6; H 2 stoichiometry of 2". ...................................................................................................... Figure.4.26. Schematic of positions of thermocouple in cell. ........................................................ Figure.4.27. Temperatures are measured for various loads current 5A, 10A and 15A, stoichiometry of 3 for O 2 2 for H 2 . ..................................................................................................................... Figure.4.28. Local temperature distributions along the cell at the cathode side, Opeation condition : stoichiometry of 3,4 and 5 for O 2 2 for H 2 . 
..................................................................................Figure.4.29. Voltage distributions along the cell with operation condition : O 2 stoichiometry of 3, 4 and 5 for H 2 of 1.5. ......................................................................................................................Figure.4.30. Local temperature distributions along the two axes (x and y) at cathode side for various loads current: O 2 stoichiometry of 3, 4 and 5 for H 2 of 2. ............................................................. Figure.4.31. Local temperature distributions along the two axes (x and y) in PEMFC at cathode side for various loads current: O 2 stoichiometry of 3, 4 and 5 for H 2 of 1.5. ........................................ 4 . 32 . 432 Figure.4.32. Temperature distribution over the cell in the x axes for various loads current: O 2 stoichiometry of 3, 4 and 5 for H 2 of 1.5. ..................................................................................... Figure.4.33. Schematic of the PEM fuel cell in x and y axes in different region "inlet, middle and outlet ". ........................................................................................................................................ Figure.4.34. Temperature distribution over the cell with different current load 5 (A), 10 (A) and 15 (A). O 2 stoichiometry of 3, 4 and 5 for H 2 of 1.5........................................................................... Figure.4.35. Temperature distribution over the cell with different current load 5 (A), 10 (A) and 15 (A). O 2 stoichiometry of 3, 4 and 5 for H 2 of 2. ............................................................................ Figure.4.36. Temperature distributions y axes. .............................................................................. Figure.4.37. Voltage distribution over the cell for different current load 5 (A), 10 (A) and 15 (A). O 2 stoichiometry of 3, 4 and 5 for H 2 of 1.5. ..................................................................................... Figure.4.38. Validation simulation and experimental test for one cell. ........................................... Figure.4.39. Temperature measurements in cathode side of PEMFC in different current density cell temperature with O 2 stoichiometry of 3, 4 and 5 for H 2 of 1.5. ..................................................... Figure.4.40. Voltage measurements in cathode side of PEMFC in with O 2 stoichiometry of 3, 4 and 5 for H 2 of 2. ................................................................................................................................ Figure.4.41. Validation simulation and experimental test for two cell. ........................................... Figure.4.42. The MES FC system used for validating tests. ........................................................... Figure.4.43. Test bench of fuel cell in the chamber room. ............................................................ Figure.4.44. Position of the thermal sensors. ................................................................................ Figure.4.45. Current dynamic profile of FC under test. ................................................................. Figure.4.46. Comparison thermal equation between analytical equation and three points in experimental test. ......................................................................................................................... Figure.4.47. The cell voltage versus current for MES PEMFC. 
..................................................... Figure.4.48. The comparison voltage between experimental test and simulation according to load dynamic profile. ........................................................................................................................... Figure.4.49. Test bench of experimental tests on the FC: (a) laboratory, (b) climatic chamber. ...... Figure.4.50. Load profile. ............................................................................................................................. Figure.4.51. (a) Validation of electrical model, (b) validation of thermal model. ................................ Figure.4.52. Comparison of polarization curves for different temperatures. ....................................... Figure.4.53. Test bench instrument devices. ................................................................................. Figure.4.54. Thermal impact on PEMFC stack. ............................................................................ Figure.4.55. Variable temperature and voltage based on different current during stack fuel cell. ... Figure.4.56. Schematic of position of the sensor of voltage and temperature in PEMFC stack. ....Figure.4.57. Experimental results of disparate of temperature in different position of the PEMFC. ..................................................................................................................................................... Chapter V: Figure. 5 . 1 . 51 Figure.5.1. Illustration of the "Level 0" of the vehicle diagnosis. ..................................................Figure.5.2. Illustration of the "Level 1" of the vehicle diagnosis. ..................................................Figure.5.3. Illustration of the "Level 3" of the vehicle diagnosis. .................................................. Figure.5.4. Implementation algorithm fault diagnosis of PEMFC in power train. ......................... Figure.5.5. Fault isolation on drying, flooding, too dry and too flood in two cell .......................... Figure.5.6. Regression plot for different faults in two cells ........................................................... Figure.5.7. Fault isolation for Drying faults according to nine nodes during in one cell. ............... Figure.5.8. Fault isolation for flooding faults according to nine nodes during in one cell. ............. Figure.5.9. Fault isolation for too flood faults according to nine nodes during in one cell. ............ Figure.5.10. Fault isolation for too dry faults according to nine nodes during in one cell. ............. Figure.5.11. Regression plot for flooding faults according to nine nodes during in one cell. ......... Figure.5.12. Regression plot for too flooding faults according to nine nodes during in one cell. ... Figure.5.13. Regression plot for too drying faults according to nine nodes during in one cell. ...... Figure.5.14. Regression plot for drying faults according to nine nodes during in one cell.............. Figure. 1 . 1 .Figure. 1 . 2 . 1112 Figure.1.1. Energy power source from 1949-2011 Figure Figure.1.3. Daimler fuel cell electrical vehicle. Figure Figure.1.4. Ford fuel cell electrical vehicle. 2.1.3. General Motors: General Motors has the longest fuel cell history of any automaker, with the Electro Van was demonstrating the potential for fuel cell technology nearly 50 years ago. The company has had a succession of fuel cell test and demonstration vehicles, including the world's first publicly drivable FCEV in 1998. 
2007 saw the launch of the HydroGen4 (marketed in the USA as the Chevrolet Equinox, see Figure.1.5), representing the fourth generation of GM's stack technology. More than 120 test vehicles have been deployed since 2007 under Project Driveway, which put the vehicles into the hands of customers and has been the world's largest FCEV end user acceptance demonstration: the vehicles have accumulated more than two million miles on the road [1.4]. Figure. 1 . 5 . 15 Figure.1.5. GM fuel cell electrical vehicle. Figure Figure.1.6.Honda fuel cell electrical vehicle. Figure. 1 1 Figure.1.7.Honda fuel cell electrical vehicle. Figure Figure.1.8. Honda fuel cell electrical vehicle. Figure. 1 . 9 . 19 Figure.1.9. Honda fuel cell electrical vehicle. ECCE project is developed in cooperation with the FEMTO-ST laboratory of the University of Franche-Comté and two industrial partners, HELION and PANHARD General Defense. The Electrical Chain Components Evaluation vehicle (ECCE) (See Figure.1.10) is a research project supported by the French Army General Direction (DGA), (Direction Générale de l'Armement). The ECCE vehicle, which was driven for the first time in 2003, is presented in Figure.1.10. Figure. 1 . 1 Figure.1.10. ECCE test bed Figure. 1 . 1 Figure.1.11. F-City H2 test bed. The project develops and tests under real conditions two fleets of five vehicles for postal mail delivery. Consortium partner La Poste will run the field tests in close coordination with other project partners involved. Partners Institute: University of Technology Belfort-Montbéliard, EIFER and companies: LA POSTE, MA HY TEC, MES, H 2 NITIDOR DUCATI energia and Steinbeis-Europa-Zentrum (SEZ). The hydrogen part of the drive train of Mobypost vehicle has been mounted at UTBM (Belfort) while the vehicle in its full electrical version has been built by Ducati Energia (see Figure.1.12)[1.7]. Figure. 1 . 13 . 113 Figure.1.13. Passive cascaded battery/UC system. Figure. 1 . 14 . 3 . 3 . 11433 Figure.1.14. Active cascaded battery/UC system.3.3. Parallel active battery/UC systemA parallel active battery/UC system, are shown in Figure.1.15 have been analyzed by researchers in the Energy Harvesting and Renewable Energy Laboratory (EHREL), Illinois Institute of Technology (IIT), and Solero at the University of Rome. The battery pack and the UC bank are connected to the dc link in parallel and interfaced by bidirectional converters. In this topology, both the battery and the UC present a lower voltage level than the dc-link voltage. The voltages of the battery and the UC will be leveled up when the drive train demands power and stepped down for recharging conditions. Power flow directions in/out of the battery and the UC can separately be controlled, allowing flexibility for power management. However, if two dc/dc converters can be integrated, the cost, size, and complexity of control can be reduced[1.9]. Figure. 1 . 1 Figure.1.15. Parallel active battery/UC system. Figure. 1 . 16 . 116 Figure.1.16. Multiple-input battery/UC system. Figure. 1 . 17 . 4 . 1174 Figure.1.17. Multiple-input battery/UC systems. 4. Components of the drive trains of FCEV: Fuel cell electrical vehicle structure is such as series-type hybrid vehicles. The fuel cell is the main energy sources which produce electricity, fuel cells in vehicles create electricity to power by supplying the engine, Battery, DC/DC converter and DC/AC inverter. (See in Figure.1.18) [1.10]. Figure. 1 . 18 . 118 Figure.1.18. Fuel Cell Electrical vehicle. Figure.1.19. (a) The electrolysis of water. 
The water is separated into hydrogen and oxygen by the passage of an electric current. (b) A small current flows. Figure.1.20 summarized different fuel cell and compared all of them with characteristics such as operating temperature, electrolyte charge carrier, and electrochemical reactions. In this figure illustrates the relative placement of the different type of fuel cell technologies with regard to electric demand (kW). The residential market considered is from 1 kW to 10 kW and related to PEM and SOFC. Commercial market range is 25 kW to 500 kW for examples of the commercial market segment include hotels, schools, small to medium sized hospitals, office buildings, and shopping centers. MCFCs and SOFCs are the only types of fuel cell that applied in both distributed power (3 MW to 100 MW) and industrial applications (1 MW to 25 MW) [1.10]. Figure. 1 . 1 Figure.1.20. Market for Fuel Cell Technologies. Figure. 1 . 1 Figure.1.21. Single cell structure of PEMFC. Figure. 1 . 1 Figure.1.22. Membrane electrode assembly. Figure. 1 . 1 Figure.1.23. Gas diffusion layers. Figure. 1 . 24 . 124 Figure.1.24. Bipolar Plate. 1.25 the different sub-systems presented in in this Figure are defined below: Figure. 1 . 25 . 125 Figure.1.25. System of the fuel cell [1.14]. Figure1. 26 . 26 Figure1.26. DOE has reduced the cost of automotive fuel cells from $106/kW in 2006 to $55/kW in 2013 and is targeting a cost of $30/kW [1.17] 1 . 1 . 11 Accumulation dates methods .................................................................................................... 5.2. Membrane resistance measurement methods ......................................................................... 5.3. Pressure drop method ................................................................................................................ Water management in PEMFC ................................................................................................. 6.2. Effect of operation condition in water management (flooding and drying) ...................... 6.2.1. Humidity ................................................................................................................................... 6.3. Thermal management on PEM FC .......................................................................................... 6.4. Degradation of electrode/electro catalyst ............................................................................... 2.2. (a). Simplified schematic of chemical reaction of the FC were shown inFigure.2.2.(b). In this anode and cathode parts which R ionic represents the ionic resistance of the membrane, R CT,A and R CT,C represent the charge transfer loss across the electrode-electrolyte interface at the cathode and anode side respectively. The capacitor C DL,A and C Dl,C represent the double layer effect at the cathode and anode side respectively. The Randles's FC models were depicted in Figure.2.2.(c) , R w and C w model the diffusion/mass transport losses are shown in this Figure. The modification in cathode side to taking into account diffusion impedance illustrated in Figure.2.2. (d) and in Figure.2.2.(e). In general electrochemical cell have been represented by classic transmission line model of porous electrodes that is shown in Figure.2.2. (f). Figure. 2 . 1 . 21 Figure.2.1. Dynamic electrical circuit model of the PEMFC. Figure. 2 . 2 . 22 Figure.2.2. Equivalent electrical circuit model of PEM FC. Eq2. 4 n 4 = number of electrons per molecule of H 2 = 2 electrons per molecule. 
N Avg = number of molecules per mole (Avogadro's number) = 6.022 10 23 molecules/Mol. q ei = charge of 1 electron = 1.602 10 -19 Coulombs/electron. The product of Avogadro's number and charge of 1 electron is known as Faraday's constant: F = 96,485 Coulombs/electron-Mol. Electrical work is therefore: Eq2.5 Figure. 2 . 3 . 23 Figure.2.3. Cell potential loss at different temperature. Based on equation above by increasing of pressure, cell potential raised too(Figure.2.4). Figure. 2 . 4 . 24 Figure.2.4. Cell potential losses at different pressure. Figure.2.6 shows the experimental voltage plotted as a function of current for a cell fuel cell PEM at a temperature of 23 ° C. The polarization curve shows the cell voltage of a fuel cell according drop of the output current. Even though, a fuel cell does not have load (open circuit) the theoretical potential voltage is less than one volt. Because there are, some unavoidable loss is generated in the fuel cell: 1) Activation losses. 2) Internal and ionic resistance. 3) Concentration losses. 4) Internal current. 5) Crossover of reactants. 6) Activation losses. Figure. 2 . 6 . 26 Figure.2.6. Polarization curve for a cell of a PEM fuel cell. Figure. 2 2 Figure.2.7. The charge double layer at the surface of a fuel cell. The Fuel cell equivalent circuit model consists of: open circuit voltage (OCV, E nernst ), ohmic losses (R ohm ), activation losses (R act ), consideration and double layer effect capacitance (C dl ). The delay of different current in FC based on the effect of double layer capacitance in both cathode and anode side. However, in cathode side it is more important than the anode side. The ohmic losses are not affected by this capacitance. In the Figure.2.7, the capacitance is placed in parallel with activation and consideration resistance, and the cause of voltage drop has a dynamic effect in the FC. The dynamic equation of FC voltage is [2.23]: Eq2.50 Figure. 2 2 Figure.2.8. Polarization Curve with different losses. Figure. 2 2 Figure.2.9. Block diagram of the multi-physical modeling of FC. Figure. 2 . 2 Figure.2.10. Activation losses at different temperature. Figure. 2 . 2 Figure.2.11. Resistive loss in FC at different temperature. Figure. 2 . 12 . 212 Figure.2.12. Concentration losses in FC at different temperature. Figure.2.14 shows the effects of increasing temperature between 25°C and 55°C with different polarization curves. It is obviously that the voltage increased by increasing the temperatures. It should be noted that rising of the internal temperature of the FC reduces performances and has irreversible damage on the FC[2.18]. Figure. 2 . 2 Figure.2.13. Cell Voltage losses at different temperature. 2.14. allows pointing out the effect of different pressure on the activation voltage losses. Figure. 2 . 14 . 214 Figure.2.14. Activation losses in FC at different pressure. Figure. 2 . 2 Figure.2.15. Concentration losses in FC according to changing pressures. Figure. 2 . 16 . 216 Figure.2.16. Voltage losses in FC at different pressure. 2.17 according to different relative humidity. In this Figure shows how with increase in the relative humidity decrease in ohmic resistance is caused. Because of the conductivity of the membrane is closely linked to the RH. Figure. 2 . 17 . 217 Figure.2.17. Resistive losses at different Humidity. Figure. 2 . 2 Figure.2.18. Cell Voltage losses at different Humidity. 
): does not effect to change these losses I = Fuel cell current in amperes, T = Temperature of the fuel cell stack in Kelvin, = The partial pressure of H 2 in atm, = The partial pressure of O 2 in atm, = Concentration of hydrogen, = Concentration of oxygen. = Relative humidity of hydrogen or air. [2.28]. Figure 2 . 19 . 219 Figure 2.19. Fault action depend on the system. Figure. 2 . 2 Figure.2.20. Scheme for fault-Tolerance strategies. Figure. 2 . 2 Figure.2.21. Model base fault diagnosis diagram. In contrast, non-model based diagnosis is the fault detection and isolation according to human knowledge or qualitative reasoning techniques based on input and output data. Three categories of non-model based diagnosis methods: • The artificial intelligence (Neural Network, Fuzzy Logic and Neural-Fuzzy method), • Statistical (Principle Component Analysis, Fisher Discriminant Analysis, Kernel PCA and Kernel FDA) method, • Signal processing method (Fast Fourier Transform, Short Time Fourier Transform and Wavelet Transformer). Figure. 2 . 2 Figure.2.22. Schematic representation of EIS applied to fuel cell characterization. Figure. 2 . 2 Figure.2.23. Circuits model according to EIS [2.43]. Figure. 2 . 24 . 224 Figure.2.24. Bode plot of the impedance spectra simulated in the frequency range from 10 MHz to 10 kilohertz. 2.25) [2.40]. Figure. 2 . 25 . 225 Figure.2.25. Original arrange for HFR and EIS measurement techniques. Figure. 2 . 26 . 226 Figure.2.26. Ac resistance measurement diagram with combination load parallel with mille-ohm meter. Many Different phenomena involved to operate of fuel cell. Some of these phenomena's are the common source fault in FC: specifically, improper water management (flooding, Drying)[2.42], catalyst degradation and fuels starving, membrane electrode assembly (MEA) contamination[2.43]. These faults cause for voltage drop and reduce the lifetime of a fuel cell. The typical fault classification method can be described in Figure.2.27. This Figure shows a simplified scheme for process fault classification with several levels of information processing. The lower level contains the processing data indeed, data of systematically collected by the sensors. Faults extracting from healthy mode and faulty mode can be attached to a medium level. Fault classification is located in the high level in order to distinction different faults in the system[2.36]. Figure. 2 . 2 Figure.2.27. Faults classification process in PEM FC. Figure. 2 . 2 Figure.2.28. Overview of the wide range of dynamic processes in FC [2.44]. Nadia 2008 , 2008 increase temperature leads to increase in saturation pressure and causes evaporation. As a Matter of fact, reduce in flooding will be happened when liquid water diminished. He et al. Investigated of while another operating condition in which (air flow, cell voltage) are constant with increasing temperature from 40 °C to 50 °C causing improvement flooding in the cell[2.40]. Figure. 2 . 29 . 229 Figure.2.29. Multilayer feed forward neural network. 1 . 2 . 12 Description of the modelled FC Cell ......................................................................... 2.2. Description of the 3D model applied on one cell ..Calibration of 3D model in healthy mode .................................................................. 2.10. Network circuit analysis .......................................................................................... 3. 
The 3D model applied to one stack ................................................................................... 3.1. Considerations on the 3D model calibration ............................................................ 3.2. Calibration of the 3D model of FC Stack (Two cells) Flooding at anode side ............................................................................................ 4.3. Drying in membrane ............................................................................................... 4.4. Simulation of faulty modes examples ....................................................................... 5. Conclusion ....................................................................................................................... 5 ) 5 Biphasic effect of liquid and vapor of water 6) Water condensation/evaporation 7) Gas diffusion in the diffusion layer 8) Diffusion layer flooding 9) Microscopic gas diffusion in catalyst layer 10) Non uniform water distribution in the membrane 11) Water transport in the membrane 12) Dynamic water content variation in the membrane Thermal domain 1) Non-isothermal temperature distribution 2) Dynamic temperature variation 3) Conduction between solid materials 4) Forced convection in the channel 5) Heat flux due to convective mass transport 6) Natural convection on external surface 7) Latent heat due to water phase change Figure. 3 . 1 . 31 Figure.3.1. Single cell MES PEMFC with different layers. Figure. 3 . 2 . 32 Figure.3.2. Single cell PEMFC based on elementary cell in three dimensions (3D). 4 ) 4 Thermal mode and distributions temperature are included in it. 5) Voltage distributions are recorded based on experimental test. 6) This model is able to induce inhomogeneous distribution of physical parameters. 7) Possibility of fault characterization. Before being used for characterizing the FC cell faults, the 3D needs to be calibrated. The so called calibration consists in computing all the physical components of the circuit shown in Figure 3.3. It is performed starting from voltage and temperature measurements by the mean of Newton-Raphson method. The operations of calibration as well as the fault characterization are summarized through the algorithm shown in Figure.3.3. Figure. 3 . 3 . 33 Figure.3.3. Algorithm of calibration and using of the 3D model. 3 ) 3 Voltage sensors and thermocouples installed in 9 nodes of each cell (N 1 -N 9 ). 4) Only magnitudes of impedances are considered. Thus the FC fault can be characterized only through the magnetite of voltage and current density.Voltage cell has not the same values at different points of cell, because of the following reasons:1) Non-uniform fuel/air flow distribution to individual the cells,2) Non-uniform temperature, 3) Current distribution, 4) Uniformities of the material (compositions and microstructure)[3.8]. Figure. 3 . 3 Figure.3.4 shows the top view of the basic electrical model of one FC cell including the cathode, the anode and the membrane sides. R ohm represents the resistance of the membrane, R Act (a) and R Act (C) represent the activation losses on anode and cathode sides respectively. R Con (a) and R con (C) represent the concentration losses on anode and cathode sides respectively. The capacitors C (a) and C(C) represent the double layer capacitor present at the anode and cathode. However, as mentioned before in modeling hypotheses activation in anode side compare to cathode sides negligible. 
R con is assumed as the sum of the concentration on the anode and cathode sides (R con =R con (a) +R con (C)). The Capacitor C is equivalent of the two capacitors on the anode and cathode sides. Figure. 3 . 4 . 34 Figure.3.4. Top view of 3D electric model of one cell of a PEMFC. Figure. 3 . 6 . 36 Figure.3.6. View of the interface resistors (z axis) between two FC cells. Figure. 3 . 7 . 37 Figure.3.7. Stack temperature according to load current in MES fuel cell. Figure. 3 . 8 . 38 Figure.3.8. Simulation result of the polarization curve in different double layer effect for MES PEMFC. Figure. 3 . 9 . 39 Figure.3.9. Simulation models of sensitive mode. As this model is to be used for accurate diagnosis fault in PEMFC we suggest dividing each cell of the FC stack in different elementary cells. The temperature can be taken into account by adopting a different equivalent circuit for each elementary cell. The magnitude of the decrease in voltage, called the voltage variance, is associated with changes in fuel cell model parameters that include opencircuit voltage, types of losses in anode side (R a ) losses in cathode (R c ), double layer capacitance (C dl ) in anode and cathode and membrane losses (R o ). Figure. 3 . 3 Figure.3.10. Polarization curve of MES fuel cell. Figure.3.11. The electric circuit model for 3D of PEMFC. proposed 3D fault sensitive model considers distributions of temperature and voltage in X, Y and Z directions. Three illustrative views of this model are shownFigure.3.12, Figure.3.13 and Figure.3.14. Figure. 3 . 12 . 312 Figure.3.12. Front view of the 3D proposed model for PEMFC stack. 3.12) is to determine all impedances in each individual cell. Then the connection resistors between cells are calculated by known current density distributions. A set of equations can be established with the form of [Y]. [V] = [I] for the two cells. Figure. 3 . 3 Figure.3.15. Simplified model in three dimensions (3D) for two cells. Figure. 3 . 16 . 316 Figure.3.16. Relative humidity according to the stack temperature of the exit air of the FC with air stoichiometry of 2 Figure. 3 . 17 . 317 Figure.3.17. Fault diagram according to operating conditions of FC. Figure. 3 . 3 Figure.3.18. MVV and HDR variations according to resistance changes in Z direction. Figure. 3 . 3 be where gives some examples of this characterization process that variations in impedances of different branches of the 9 zones of the circuit model are assumed. The X, Y and cross sections direction have been considered in these simulations. The significant point shown in these Figures is that the voltage characteristic at the output of the cell is affected by changing the impedance value. In this example resistance in the X, Y and cross section direction increased to simulate of drying fault. In other world, with increasing the impedance in one of the direction caused to distributions current density changed in all of the FC and it caused to drying or flooding faults occurring in the FC. Figure. 3 . 3 Figure.3.19. MVV and HDR variations according to resistance changes in X direction. Figure. 3 . 3 Figure.3.20. MVV and HDR variations according to resistance changes in XY direction. Figure. 3 . 3 Figure.3.21. MVV and HDR variations according to resistance changes in Y direction. Chapter IV 1 . 2 . 12 Introduction ........................................................................................................................ 
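The two numerical steps described in this chapter, solving the nodal equations [Y].[V] = [I] of the equivalent circuit and calibrating the circuit parameters with the Newton-Raphson method from measured voltages, can be sketched as follows. This is a hypothetical minimal illustration, not the author's implementation: the 2-node toy circuit, the parameter names and the synthetic 'measurement' are assumptions.

```python
# Hypothetical sketch: (1) solve the nodal equations [Y][V] = [I] of an
# equivalent-circuit model, (2) identify a circuit parameter by Newton-Raphson
# so that a simulated node voltage matches a measured one.
import numpy as np

def solve_nodal(Y, I):
    """Node voltages of the equivalent circuit from [Y][V] = [I]."""
    return np.linalg.solve(Y, I)

def node_voltage(r_mem, i_load=10.0, r_act=0.02):
    """Voltage of node 1 of a toy 2-node circuit parameterized by a membrane resistance."""
    g_mem, g_act = 1.0 / r_mem, 1.0 / r_act
    Y = np.array([[g_mem + g_act, -g_act],
                  [-g_act,         g_act]])
    I = np.array([i_load, 0.0])
    return solve_nodal(Y, I)[0]

def calibrate(v_measured, r0=0.05, tol=1e-9, max_iter=50):
    """Newton-Raphson on f(r) = node_voltage(r) - v_measured (finite-difference derivative)."""
    r = r0
    for _ in range(max_iter):
        f = node_voltage(r) - v_measured
        if abs(f) < tol:
            break
        df = (node_voltage(r + 1e-6) - node_voltage(r)) / 1e-6
        r -= f / df
    return r

v_meas = node_voltage(0.1)     # synthetic 'measurement' generated with r_mem = 0.1 ohm
print(calibrate(v_meas))       # the calibration recovers approximately 0.1
```

In the thesis the same idea is applied to the full 9-node-per-cell circuit, where the impedances are identified from the measured voltage and temperature distributions.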
Chapter IV
1. Introduction
2. Single cells set-up
2.1. Gas supply description
2.2. The physical references of the MEAs
2.3. Description of the test bench
2.4. Single PEMFC cell ready for tests
2.5. Voltage sensor selection for the measurements

Figure.4.1. Reactant air management with two parallel inlet pieces for the oxygen feeding part.

Figure.4.2 shows the present design of the stainless-steel plate. The dimensions of the plate are 80 x 57 x 9 mm. Two holes are identical to those used for the oxygen flow feed. The desired physical properties of the material used for the oxygen feeding side are a high mechanical strength and an excellent resistance to water corrosion.

Figure.4.2. Oxygen feeding plate.
Figure.4.3. Components of the single PEM cell.
Figure.4.4. Structure of the test bench.

The control system (Figure 4.5) was developed in the Labview® environment.

Figure.4.5. Hardware of the control.

The interface panel (Figure 4.6) allows checking the various available measurements and sensor states. For instance, the displayed measurements are the single cell voltages, the gas flows and pressures, and the current. Indicators show in which mode the system operates [4.2].

Figure.4.6. Interface panel.

In the settings panel (Figure 4.7), some fuel cell parameters are displayed, such as the number of cells, the anode and cathode stoichiometries, the active surface, the fault and safety thresholds on the cell voltages, the maximum temperature and the maximum pressure difference. These parameters can be changed while the program is running when it is necessary to modify some factors, for example the stoichiometry factors, which are set according to the manufacturer's data sheet. Some parameters should be kept within a given range to avoid irreversible damage: the voltage across each cell of the stack should remain greater than a given threshold, and a sufficient amount of gas should be provided to the bipolar plates according to the load current [4.2].

Figure.4.7. Panel for setting the type of fuel cell under test.
Figure.4.8. Test bench structure.
Figure.4.9. Electronic load used to simulate a driving cycle.

Thermocouples (Figure 4.11) are used in the present experimental work because of their wide tolerable temperature range and their low price compared with other sensor types, as indicated in Figure.4.12.

Figure.4.11. Thermocouple type K.
Figure.4.12. The relationship between the Seebeck voltage and the temperature.

In Table 4.7 the errors are calculated by supposing that the reference thermocouple provides the true temperature values (a minimal sketch of this computation is given below). The possible sources of error include compensation, linearization, thermocouple wire and experimental errors.

Figure.4.13. DPI 620 advanced measurement device.
Figure.4.14. Process of the calibration of the thermocouples against the reference thermocouple.
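As a side note, the error computation described above (with the reference thermocouple taken as the true value) can be sketched as follows. The readings used are those of thermocouple 1 and of the reference probe reported later in Table 4.6; the sketch is illustrative and is not the processing chain used in the thesis.

```python
import numpy as np

# Readings of one thermocouple and of the reference probe at the five
# incubator set-points of Table 4.6 (values copied from that table).
setpoints = np.array([38, 45, 50, 55, 62])            # °C
tc1       = np.array([37.32, 44.16, 49.28, 54.05, 61.20])
reference = np.array([37.65, 44.80, 50.46, 55.04, 62.54])

# Relative error of the thermocouple, taking the reference probe as true value.
error_pct = 100 * (reference - tc1) / reference
for T, e in zip(setpoints, error_pct):
    print(f"set-point {T} °C: error = {e:.2f} %")
```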
Figure.4.15. The "Cannes Pyrométriques" type 14 reference thermocouple.

Figure.4.16 compares the temperature measurements of the thermocouples with the reference thermocouple at different temperatures. As illustrated in this figure, the errors between the different thermocouples are linearly related. Moreover, the error between the thermocouples and the reference thermocouple increases with the temperature of the BINDER incubator. This error level is acceptable for the test analysis and for the comparison of the temperature distribution measurements.

Figure.4.16. Comparison of the reference and the 12 thermocouples at different temperatures.

During the tests, a National Instruments voltage acquisition device measures two sets of quantities: first, the individual cell voltages (see Figure 4.17); second, the individual cell temperatures captured continuously by the thermocouples (see Figure 4.18).

Figure.4.17. Voltage sensors directly connected to the graphite block on the cathode and anode sides.
Figure.4.19. Set-up for measuring the temperature distribution in the PEMFC (MES).
Figure.4.20. Boundary limitation for the choice of acceptable sensors.
Figure.4.21. Schematic of one cell with thermocouples and voltage sensors.
Figure.4.22. Test conditions on load current and oxygen stoichiometry.

Figure.4.23 shows the schematic diagram for measuring the voltage inside the cell. The oxidant gas is heated and humidified by passing through the boiler designed into the test bench. In the present study, each temperature measurement is collected by the data acquisition system with a sampling rate of one reading per second. These measurements are analysed over the intervals where the temperature is constant after changing the experimental conditions. This procedure is repeated for different current loads and different air and hydrogen stoichiometry ratios.

Figure.4.23. Set-up for measuring the voltage in the PEMFC (MES).
Figure.4.24. Temperature distribution for the cathode side: O2 stoichiometry ratios of 3, 5 and 7; current load 5 A; H2 stoichiometry of 1.5.

Figure.4.25 shows this temperature difference between the anode and cathode sides. In this figure the temperature measurements are made for three different current loads of 5 A, 10 A and 15 A. The oxygen stoichiometry ratio is varied from 3 to 6 while the hydrogen one is fixed at 2, and the oxygen side is humidified with a boiler. Here also, it is clearly seen that the temperature of the anode side is lower than that of the cathode side by more than 1 °C. These figures also show that the local temperature difference between the anode and the cathode increases with the oxygen stoichiometry, since with increasing stoichiometry the temperature on the cathode side decreases.

Figure.4.25. Temperature measurements for various load currents; O2 stoichiometry ratios of 3, 4, 5 and 6; H2 stoichiometry of 2.
Figure.4.26. Schematic of the positions of the thermocouples in the cell.
Figure.4.27. Temperatures measured for various load currents of 5 A, 10 A and 15 A; stoichiometry of 3 for O2 and 2 for H2.
Figure.4.28. Local temperature distributions along the cell at the cathode side; operating conditions: stoichiometry of 3, 4 and 5 for O2 and 2 for H2.
Figure.4.28 and Figure.4.29 show the sensitivity of the temperature variation along the y axis and its effect on the cell voltage profile. Based on these figures, the temperatures at the middle and at the inlet are lower than the temperature at the outlet. This can be explained by the increase of the membrane hydration and of the oxygen consumption rate toward the outlet, which raises the temperature there.

Figure.4.29 shows an example that verifies the sensitivity of the temperature distribution along the y axis. When the current load increases, the global voltage decreases, and the voltages at the inlet, middle and outlet decrease as well. Generally, the voltage decrease (or increase) is directly related to the temperature and the current density in each region of the cell.

Figure.4.29. Voltage distributions along the cell; operating conditions: O2 stoichiometry of 3, 4 and 5, H2 stoichiometry of 1.5.

Figure 4.31 shows that the temperature reaches its highest values at the middle point of the cell, with an oxygen stoichiometry of 3, a hydrogen stoichiometry of 1.5 and a current of 15 A. This figure indicates that the temperatures are distributed irregularly along the y axis, especially at high current load. Also, the temperature values at a hydrogen stoichiometry of 1.5 are lower than those at a hydrogen stoichiometry of 2 (compare with Figure.4.30).

Figure.4.30. Local temperature distributions along the two axes (x and y) at the cathode side for various load currents; O2 stoichiometry of 3, 4 and 5, H2 stoichiometry of 2.
Figure.4.31. Local temperature distributions along the two axes (x and y) in the PEMFC at the cathode side for various load currents; O2 stoichiometry of 3, 4 and 5, H2 stoichiometry of 1.5.
Figure.4.32. Temperature distribution over the cell along the x axis for various load currents; O2 stoichiometry of 3, 4 and 5, H2 stoichiometry of 1.5.

Figure.4.33 shows a simple scheme of the single cell divided into different regions along the x and y axes. To analyse the temperature distribution along both axes, the cell is divided into three parts along the y axis (middle, left and right sides) and three parts along the x axis (inlet, middle and outlet). The temperature measurements are obtained by means of 12 thermocouples distributed along the y axis of the cell, in the middle, left and right sides of the cell.

Figure.4.33. Schematic of the PEM fuel cell along the x and y axes in the different regions (inlet, middle and outlet).

Figures 4.34 and 4.35 show the effect of temperature along the x and y axes of the cell for O2 stoichiometry ratios of 3, 4 and 5 and current loads of 5 A, 10 A and 15 A. The mean values of the temperature measurements are computed from the four nearest thermocouples at the inlet, middle and outlet of each region (left, middle and right side of the cell); from these four-point means, three representative points are obtained for each region (a minimal sketch of this aggregation is given below). The x and y axes of the cell thus represent the effect of temperature for the different current loads of 5 A, 10 A and 15 A (see Figure.4.34), with oxygen stoichiometry values between 3 and 5 and a fixed hydrogen stoichiometry of 1.5.
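The aggregation of the thermocouple readings into one representative point per region can be sketched as follows. The grouping of the 12 sensors and the readings themselves are assumptions made only for illustration (the exact sensor-to-region mapping is not listed in this extract), and only the along-the-channel direction is shown.

```python
import numpy as np

# 12 hypothetical thermocouple readings (°C); the grouping of sensors into
# inlet / middle / outlet regions is assumed purely for illustration.
readings = np.array([
    36.9, 37.1, 37.4, 37.2,   # assumed sensors nearest the inlet
    37.8, 38.0, 38.3, 38.1,   # assumed sensors nearest the middle
    38.6, 38.9, 39.2, 39.0,   # assumed sensors nearest the outlet
])

regions = {
    "inlet":  readings[0:4],
    "middle": readings[4:8],
    "outlet": readings[8:12],
}

# One representative point per region = mean of the four nearest sensors.
for name, values in regions.items():
    print(f"{name}: {values.mean():.2f} °C")
```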
Figure.4.34. Temperature distribution over the cell for current loads of 5 A, 10 A and 15 A; O2 stoichiometry of 3, 4 and 5, H2 stoichiometry of 1.5.

Similar measurements for a hydrogen stoichiometry of 2 are shown in Figure 4.35.

Figure.4.35. Temperature distribution over the cell for current loads of 5 A, 10 A and 15 A; O2 stoichiometry of 3, 4 and 5, H2 stoichiometry of 2.

Figure.4.36 shows a summary of the temperature distribution results. It indicates that the temperature in the middle of the cell has the highest values along the x axis, whereas the highest temperature values along the y axis are located at the outlet. This result can be explained by the convective heat transfer of the air passing through the oxygen channel, as explained before.

Figure.4.36. Temperature distributions along the y axis.

Voltage and temperature distribution measurements are carried out in order to determine the relationship between the voltage and the temperature distribution inside the cell. Concerning this relation in the single cell, Figure.4.37 shows that, on the cathode side, the voltage has its highest values near the left of the outlet; the values then decrease gradually and reach their lowest values at the right corner and in the middle of the cell. This is in total agreement with what was explained before concerning the temperature distribution (Figure.4.35). Consequently, the cell voltage at the outlet is greater than at the inlet. In fact, Figure.4.37 shows that, when the stoichiometry gets higher, the outlet has the highest voltage values compared with the other regions. The voltage varies between 5 and 10 mV once the temperature has stabilized at the inlet, centre and outlet of the middle of the cell. Finally, at the left and right edges of the cell, the voltage reaches a value of 0.2 V.

Figure.4.37. Voltage distribution over the cell for current loads of 5 A, 10 A and 15 A; O2 stoichiometry of 3, 4 and 5, H2 stoichiometry of 1.5.
Figure.4.38. Validation of simulation and experimental tests for one cell.

Figure.4.39 shows the experimental set-up for measuring the distribution of the local temperatures in two cells. The temperature along the y axis of each cell increases from 27 °C to 45 °C as the cell current changes from 5 A to 15 A. Examining the temperature distributions along the z axis, it can be seen that the temperature of cell two is somewhat higher than that of cell one; a difference of 1 °C to 2 °C is observed.

Figure.4.40. Voltage measurements on the cathode side of the PEMFC with O2 stoichiometry of 3, 4 and 5 and H2 stoichiometry of 2.
Figure.4.41. Validation of simulation and experimental tests for two cells.
Figure.4.42. The MES FC system used for the validation tests.
Figure.4.43. Test bench of the fuel cell in the chamber room.
In order to improve the accuracy of the stack temperature measurements of the fuel cell, three thermocouples have been used in this test. Their positions have been selected at critical points of the fuel cell, as depicted in Figure.4.44 (the hydrogen inlet, the oxygen outlet and the middle of the stack near the reaction air inlet channel).

Figure.4.44. Position of the thermal sensors.
Figure.4.45. Current dynamic profile of the FC under test.
Figure.4.46. Comparison between the analytical thermal equation and three measurement points in the experimental test.
Figure.4.47. The cell voltage versus current for the MES PEMFC.
Figure.4.48. Comparison of the voltage between the experimental test and the simulation for the dynamic load profile.

Figure.4.51 compares the experimental and simulated results for both the electrical and the thermal domain. As can be observed, the multi-physical model gives results in good agreement with the experimental ones, despite some errors (voltage peaks) due to the periodic purges of the FC, which are not taken into account in the model. Indeed, in order to eliminate the water and impurities on the hydrogen side (anode) during operation, the H2 purging valve is opened periodically. The H2 purge function is visible in Figure.4.51 through the drops of the FC stack voltage. These results allow validating the proposed model in the normal (healthy) operating mode. As illustrated in Figure.4.52, the stack voltage of the fuel cell improves with increasing outside temperature. This effect has been observed by placing the fuel cell in a climatic chamber, as shown in Figure.4.49, and measuring the polarization curves.

Figure.4.49. Test bench of the experimental tests on the FC: (a) laboratory, (b) climatic chamber.
Figure.4.50. Load profile.
Figure.4.52. Comparison of polarization curves for different temperatures.
Figure.4.54. Thermal impact on the PEMFC stack.
Figure.4.55. Temperature and voltage variations for different currents in the fuel cell stack.
Figure.4.56. Schematic of the positions of the voltage and temperature sensors in the PEMFC stack.
Figure.4.57. Experimental results of the temperature disparity at different positions of the PEMFC.

2.1. Fast Fourier Transform (FFT)
2.2. Modelling method for on-line FC diagnosis
2.3. ANN based 3D fault classification in the PEMFC single cell
3. ANN fault classification in a stack of 2 cells
3.1. Fault classification in the stack
3.2. ANN based fault classification of drying and flooding in one cell

Level 0: this level of diagnosis represents the vehicle. It includes all the external systems, without diagnosis, that can immobilize the vehicle. However, the simplicity of these systems from a technical point of view makes their faults easily detectable by the user of the vehicle. Furthermore, automatic supervision is not necessary at this level of diagnosis; it can be done by a visual inspection only (see Figure.5.1) [5.3].

Figure.5.1. Illustration of the "Level 0" of the vehicle diagnosis.
Figure.5.2. Illustration of the "Level 1" of the vehicle diagnosis.

The level 2 of the diagnosis deals with the faulty main components of the vehicle indicated by the level 1. The subsystem considered in the present work is the PEMFC, and the aim of the level 2 is to locate the fault within this subsystem (see Figure.5.3).

Figure.5.3. Illustration of the "Level 3" of the vehicle diagnosis.

Figure.5.4 shows the synopsis of the method followed for the on-line diagnosis modelling of the PEMFC. Since the aim of this work is fault detection, FFT analysis has been used for the fault characterization, producing patterns that are then used for training the ANN model for on-line diagnosis. In the next section, a comprehensive explanation of the structure of the ANN is presented, as well as a detailed explanation of how each of the previously mentioned faults is diagnosed.

Figure.5.4. Implementation algorithm of the fault diagnosis of the PEMFC in the power train.

The current density distributions computed with the Newton-Raphson method for the different faults are shown in Tables 5.6 to 5.8. In these tables, the effects of the faults at each node are highlighted with red rectangular boxes. It can be observed that the local current density has its highest values for the too-flooded fault and decreases towards the too-dry fault. Flooding faults cause the current density to increase because the ohmic resistance decreases suddenly, whereas drying makes the current density decrease because the ohmic resistance increases. It is seen that the current density distributions at low current (5 A) form homogeneous patterns. These patterns change as the load current increases and the current density becomes unevenly distributed. The significant point in these tables is that the local current density decreases very slowly from the inlet to the outlet. However, these variations have different characteristics according to each fault. For instance, as explained before, the current density distribution at each node changes according to the variation of the operating conditions; the local current density values increase noticeably with the flooding faults at each node. Hence, the internal impedances of the cell in the different directions are modified by the new current density distribution. Consequently, fault isolation based on the operating conditions and the current density distributions is feasible with the 3D sensitive model.

Figures 5.5 and 5.6 show the fault detection in the cells based on Table 5.2. In these figures, the confusion matrix shows the percentage of correct and incorrect classifications; correct classifications are the green squares on the matrix diagonal. The 7 harmonic attributes are used as inputs to the neural network and the respective targets are the 8 classes. The results indicate the success of the ANN classification of all 8 classes in the two cells: flooding, too flooded, drying and too dry faults (an illustrative training sketch is given at the end of this passage).

Figure.5.5. Fault isolation of drying, flooding, too dry and too flooded faults in the two cells.

For this work, the training data indicate a good fit; the validation and test results also show R values greater than 0.99.

Figure.5.6. Regression plot for the different faults in the two cells.

Figures 5.7 to 5.10 show the ANN training results for the 9 nodes in the different faulty modes. The results indicate the success of the ANN classification for all 9 nodes for the flooding and drying faults.
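A hedged sketch of this classification step is given below. It is not the implementation used in the thesis: the data are synthetic stand-ins for the 7 harmonic attributes and 8 fault classes, and a generic scikit-learn multilayer perceptron is used simply to show how a confusion matrix such as the one in Figure 5.5 is obtained.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Synthetic stand-in data: 7 harmonic amplitudes per sample, 8 fault classes.
# (The real patterns come from the FFT of the stack voltage; these are random.)
n_per_class, n_classes, n_features = 50, 8, 7
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)

# Rows = true class, columns = predicted class; the diagonal holds the
# correctly classified samples (the "green squares" of Figure 5.5).
print(confusion_matrix(y_te, clf.predict(X_te)))
```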
It is noted that each time a neural network is trained, the training can result in a different solution because of different initial weight and bias values and different divisions of the data into training, validation and test sets. As a result, different neural networks trained on the same problem can give different outputs for the same input. To ensure that a neural network of good accuracy has been found, the network is retrained several times. After retraining more than 10 times, the best results are recorded; they are shown in Figures 5.7 to 5.10.

Figure.5.7. Fault isolation of the drying faults according to the nine nodes of one cell.
Figure.5.9. Fault isolation of the too-flooded faults according to the nine nodes of one cell.

One way of validating the network is to create a regression plot, which shows the relationship between the outputs of the network and the targets. If the training were perfect, the network outputs and the targets would be exactly equal, but the relationship is rarely perfect in practice. In this case the trained network response is computed on the 70 % of the inputs of the data set used for training, and then on the 15 % used for validation and the 15 % used for testing. The result is shown in the following figures. The three axes represent the training, validation and testing data. The dashed line in each axis represents the perfect result, outputs = targets (classification into 9 classes). The solid line represents the best-fit linear regression line between outputs and targets. The R value is an indication of the relationship between the outputs and the targets: if R = 1, there is an exact linear relationship between outputs and targets; if R is close to zero, there is no linear relationship. For this work, the training data indicate a good fit, and the validation and test results also show R values greater than 0.9. The response is therefore acceptable for applying the trained ANN to the experimental test results.

Figure.5.11. Regression plot for the flooding faults according to the nine nodes of one cell.
Figure.5.13. Regression plot for the too-dry faults according to the nine nodes of one cell.
Figure.5.14. Regression plot for the drying faults according to the nine nodes of one cell.

Figure.3A.1. Diagram of building a 3D model of the PEMFC in SIMULINK.
Figure.3A.2. Diagram of building a dynamic model of the PEMFC.

In this figure, each elementary cell depends on the temperature, the pressure and the humidity. The activation, concentration and ohmic losses, together with the reversible (Nernst) voltage, must be calculated from these parameters and from the current densities in order to obtain the voltages. The Matlab Simulink model of one elementary cell is given in Figure.3A.3; in this figure the output voltage is calculated by: 1) calculation of the pressure drop in the channel (subsystem 1 of Figure.3A.3); 2) the thermal domain, which, as explained in chapter III, depends only on the FC current (subsystem 2); 3) the dynamic model, which represents the double-layer effect (subsystem 3); 4) the electrochemical reactions, which depend on the reversible voltage and on the activation, concentration and ohmic losses (subsystem 4). A static sketch of this voltage calculation is given below.
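The static part of the electrochemical calculation can be illustrated as follows. The sketch uses the usual textbook expressions for the activation, ohmic and concentration losses; every parameter value is an assumption chosen for illustration and none of them comes from the calibrated 3D model.

```python
import numpy as np

# Illustrative static cell-voltage calculation: reversible (Nernst) voltage
# minus activation, ohmic and concentration losses.  All values are assumed.
E_nernst = 1.20      # V, reversible voltage at the assumed T, P
A_tafel  = 0.06      # V, Tafel slope
i0       = 1e-4      # A/cm^2, exchange current density
r_ohm    = 0.15      # ohm*cm^2, area-specific ohmic resistance
i_lim    = 1.6       # A/cm^2, limiting current density
B_conc   = 0.05      # V, concentration-loss coefficient

def cell_voltage(i):
    """Cell voltage (V) for a current density i (A/cm^2), with 0 < i < i_lim."""
    v_act  = A_tafel * np.log(i / i0)
    v_ohm  = r_ohm * i
    v_conc = -B_conc * np.log(1.0 - i / i_lim)
    return E_nernst - v_act - v_ohm - v_conc

for i in (0.1, 0.5, 1.0, 1.4):
    print(f"i = {i:.1f} A/cm^2 -> V = {cell_voltage(i):.3f} V")
```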
The corresponding temperature distributions for a current load of 10 A are shown in Figure.4.A.1 and Figure.4.A.2, respectively.

Figure.4.A.1. Temperature distribution for the cathode side: O2 stoichiometry ratios of 3, 5 and 7; current load 10 A; H2 stoichiometry of 1.5.
Figure.4.A.3. Temperature measured for various load currents on the anode side: O2 stoichiometry ratios of 3, 5 and 7; current load 15 A; H2 stoichiometry of 1.5 and 2.
Figure.4.A.4. Temperature measured for various load currents on the cathode side: O2 stoichiometry ratios of 3, 5 and 7; current load 15 A; H2 stoichiometry of 1.5 and 2.

Figure.4.A.5 and Figure.4.A.6 show the temperature distributions at different operating conditions and different current loads, with stoichiometries of 3, 4 and 5 for oxygen and 2 for hydrogen.

Figure.4.A.5. Temperatures measured for various load currents of 5 A, 10 A and 15 A: O2 stoichiometry ratio of 4; H2 stoichiometry of 2.
Figure.4.A.6. Temperatures measured for various load currents of 5 A, 10 A and 15 A: O2 stoichiometry ratio of 5; H2 stoichiometry of 2.

Figure.4.A.7 shows the sensitivity of the temperature variation along the y axis and its effect on the cell voltage profile. This figure shows the temperature profile along the channel, on the cathode side of the PEMFC, at different current loads of 5 A, 10 A and 15 A. Based on these figures, the temperatures in the middle and at the inlet are lower than the temperature at the outlet. This can be explained by the increase of the membrane hydration and of the oxygen consumption rate toward the outlet, which raises the temperature there. It can also be noted that a sudden variation of the temperature (rapid decrease and increase) occurs at the 5th stoichiometry for the currents of 5 A and 10 A; the possible reason for this rapid variation is the injection of water outside the cell. Another observation is that at a current of 15 A the temperature at the inlet increases sharply while the temperature in the middle increases gradually; a possible reason is drying at the inlet of the single cell. Consequently, with increasing temperature the cell voltage drops accordingly (as shown in Figure.4.A.8).

Figure.4.A.7. Local temperature distributions along the cell on the cathode side; operating conditions: O2 stoichiometry ratios of 3, 4 and 5; current loads of 5, 10 and 15 A; H2 stoichiometry of 1.5.
Figure.4.A.8. Local temperature distributions along the cell on the cathode side; operating conditions: O2 stoichiometry ratios of 3, 4 and 5; current loads of 5, 10 and 15 A; H2 stoichiometry of 2.

The node can be modelled as a weighted summer, denoted by the summation block in Figure.5A.1, at whose output the net value of equation (5A.1) is obtained.

Figure.5A.1. Basic static artificial neuron (i-th neuron).

The activation function $f_i$ is shifted by the bias $b_i$ when plotted versus the so-called "net" input, which is mathematically the argument of the activation function and is the sum of the weighted inputs and the bias. However, it is also possible to use a model with $n$ inputs together with the bias. In this case the layer output can be written in matrix form as $y^1 = F^1(S^1)$ with $S^1 = W^1 x + B^1$.

Figure.5A.3. Multilayer feed-forward neural network.
Here the elements of $F^1$ are the activation functions of each of the $k$ nodes, which are assumed to be equal, $f_{11} = f_{12} = \dots = f_{1k} = f_1$. $S^1$ is the net vector, $S^1 = [S_1, S_2, \dots, S_k]^T$, which contains the net inputs $S_1, S_2, \dots, S_k$ to neurons $1, 2, \dots, k$. Furthermore, $W^1$ is the weight matrix of the output layer, which for the specified architecture must contain $k$ rows and $n$ columns; $w_{ij}$ is the weight between the destination (recipient) node $i$ and the source node $j$, where $i = 1, 2, \dots, k$ and $j = 1, 2, \dots, n$. Finally, $B^1$ is the bias vector of the single layer, $B^1 = [b_{11}, b_{12}, \dots, b_{1k}]^T$.

Figure.5A.4. Schematic of a three-layer feed-forward ANN.
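The single-layer relation written above, $S^1 = W^1 x + B^1$ followed by the element-wise activation, can be sketched numerically as follows; the tanh activation and the random numerical values are assumptions used only to make the matrix form concrete.

```python
import numpy as np

# Forward pass of the single-layer network described above: k neurons, n inputs,
# net vector S1 = W1 @ x + B1 and output y = f1(S1) applied elementwise.
k, n = 3, 4                               # neurons, inputs (assumed sizes)
rng = np.random.default_rng(1)
W1 = rng.normal(size=(k, n))              # w_ij: weight from input j to neuron i
B1 = rng.normal(size=k)                   # bias vector [b_11, ..., b_1k]
x  = rng.normal(size=n)                   # input vector

S1 = W1 @ x + B1                          # net inputs to the k neurons
y  = np.tanh(S1)                          # layer output (tanh chosen for illustration)
print(S1, y)
```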
Frenchman Gustave Trouvé built the first electric vehicle in 1881. It was a tricycle powered by a 0.01 HP DC motor fed by a lead-acid battery. A similar vehicle was built in 1883 by two British professors. Because of their low power and speed, these vehicles never became commercial. The first commercial electric vehicle was Morris and Salom's Electroboat; it could be used for three shifts of 4 h with 90-minute recharging periods, and it had a maximum speed of about 32 km/h and a 40-km range with its 1.5 HP motors. A remarkable technology of that decade was regenerative braking, invented by Frenchman M.A. Darracq on his 1897 coupé. Furthermore, the first vehicle to reach 100 km/h was electric, built by Frenchman Camille Jenatzy. With the advent of gasoline automobiles, which provided more power and more flexibility, electric vehicles started to drop out of sight, and the last commercial electric vehicles were issued around 1905. For nearly 60 years, the only electric vehicles sold were golf carts and delivery vehicles.

The F-City H2 (see Figure 1.11) is the result of a partnership between the Michelin Research and Innovation Center, the French automotive producer FAM Automobiles, EVE Systems, FC LAB and the Institute Pierre Vernier. The fuel cell range extender has an energy capacity of 15 kWh and works in a power pack alongside a 2.4 kWh lithium-ion battery. The Energy Pack is initially installed on the F-City H2 car designed by FAM Automobiles. The F-City is an innovative solution for urban transport. The power module consists of a 4 kW fuel cell, 1 kg of hydrogen stored at 350 bar and a lithium-ion battery (2.4 kWh); the fuel cell energy pack weighs 120 kg. Michelin's Energy Pack, containing the battery and the fuel cell range extender, offers a significantly improved performance over the original NiMH battery, with an overall energy density almost quadrupled. The range of the F-City H2 is 150 km (93 miles). Partners: the French companies FAM Automobiles and EVE System, the Swiss research unit of Michelin, the French research units Institute Pierre Vernier and FC-Lab/UTBM, and a Swiss high school; the project is funded by Europe (FEDER) together with French authorities and the Swiss regional state. The F-City H2 vehicle is presented in Figure 1.11 [1.6].

Table.1.1. A brief overview of contemporary fuel cell characteristics [1.11].
Electrolyte - PEFC: ion exchange membranes; AFC: mobilized or immobilized potassium hydroxide; PAFC: immobilized liquid phosphoric acid; MCFC: immobilized liquid molten carbonate; SOFC: ceramic.
Operating temperature - PEFC: 80 °C; AFC: 65-220 °C; PAFC: 205 °C; MCFC: 650 °C; SOFC: 600-1000 °C.
Charge carrier - PEFC: H+; AFC: OH-; PAFC: H+; MCFC: CO3=; SOFC: O=.
External reformer for CH4 - PEFC: yes; AFC: yes; PAFC: yes; MCFC: no; SOFC: no.
Prime cell components - PEFC: carbon-based; AFC: carbon-based; PAFC: graphite-based; MCFC: stainless-based; SOFC: ceramic.
Catalyst - PEFC: platinum; AFC: platinum; PAFC: platinum; MCFC: nickel; SOFC: perovskites.
Product water management - PEFC: evaporative; AFC: evaporative; PAFC: evaporative; MCFC: gaseous product; SOFC: gaseous product.
Product heat management - PEFC: process gas + independent cooling medium; AFC: process gas + electrolyte circulation; PAFC: process gas + independent cooling medium; MCFC: internal reforming + process gas; SOFC: internal reforming + process gas.

Table of contents of Chapter II
1. PEMFC modeling
1.1. Empirical model
1.2. Mechanistic model
1.3. Analytical model
1.4. Consideration of different modeling
2. Fuel cell basic characteristics
2.1. Effect of temperature
2.2. Effect of pressure
2.3. Theoretical FC efficiency
2.4. Fuel cell voltage losses
2.5. Exchange current density
2.6. Static characteristic (polarization curve)
2.7. Effective factors in concentration losses
2.8. Polarization curve
2.9. Thermal domain
3. Effect of the operating conditions on the performance of the fuel cell
3.1. Temperature
3.2. Pressure
3.3. Humidity
4. PEMFC diagnosis
4.1. Introduction of fault diagnosis
4.2. PEMFC fault conditions
4.3. Fault tolerance strategies
4.4. Diagnosis levels

Table.2.1. Effects of the operating conditions on the PEMFC.
Table.2.3. Summary of failure modes in the PEMFC.
Water management in the PEMFC:
Cathode flooding - water production; electro-osmosis; saturated water injection.
Cathode drying - water evaporation; back-diffusion (low current density).
Anode flooding - low temperature and high condensation (low current); back-diffusion (low current density).
Anode drying - water evaporation; electro-osmosis.
In brief, membrane drying at the anode side and flooding at the cathode side need to be avoided.
Effects of the operating conditions:
Humidity - humidify the inlet gas to more than 40 %.
Flow rate - a higher flow rate, due to a higher stoichiometry, removes the flooding; a low flow rate implies a low risk of drying.
Temperature - increasing the temperature resolves the flooding problem in the cell.
Pressure - the water produced at the cathode can be removed at high pressure.
Current - decreasing the current reduces the flooding.
Degradation of the FC in long-term operating conditions:
Corrosion - cathode corrosion (high temperature); anode corrosion; corrosion of the gas diffusion layer; corrosion of the bipolar plate.
Contamination - anode contamination; contamination of the membrane.
Gas starving - hydrogen starving; oxygen starving.
Freezing - start-up from freezing.

Table of contents of Chapter III
1. Introduction

Table.3.1. Principal physical phenomena found in the PEMFC.

Table.3.2 and Table.3.3 give the obtained temperature, voltage and current distributions for the considered measurement conditions.
Table.3.2. Temperature and voltage distributions.
Table.3.3. Current distributions calculated by the Newton-Raphson method.

Table.3.4. Internal impedance calibration according to physical failing: R1 = 0.132 Ω, R2 = 0.133 Ω, R3 = 0.134 Ω, R4 = 0.138 Ω, R5 = 0.135 Ω, R6 = 0.136 Ω, R7 = 0.14 Ω, R8 = 0.136 Ω, R9 = 0.135 Ω.

Table.3.7. Current densities in cell one calculated from the experimentally recorded voltages and temperatures.
Table.3.8. Current densities in cell two calculated from the experimentally recorded voltages and temperatures.

The technical data of the MES PEMFC are given in Table 4.1.
Table.4.1. Description of the MES single cell.

Table.4.2. Specification of the cathode side: ambient air; close to ambient pressure (20-30 mbar over-pressure); stoichiometry 3-4.

Table.4.3. Specification of the anode side: dry hydrogen 4.5; standard dead-end mode (a 0.5 s purge every 20 s); 0.55 bar of over-pressure.

Table.4.4. Type of MEA used: 3-layer MEA; membrane thickness 18 µm; anode electrode Pt loading 0.1 mg/cm²; anode electrode thickness around 8 µm; cathode electrode Pt loading 0.4 mg/cm²; cathode electrode thickness around 15 µm.

Table.4.5. Type of GDL used: thickness 0.42 mm; density 125 g/m²; air permeability 3 cm³/(cm²·s); resistivity < 15 mΩ·cm²; PTFE loading 5 %.

Table.4.6. Comparison of the 12 thermocouple measurements with the reference thermocouple.
Initial temperature   38 °C   45 °C   50 °C   55 °C   62 °C
1                     37.32   44.16   49.28   54.05   61.2
2                     37.62   44.54   49.28   54.17   61.51
3                     37.6    44.61   49.37   54.34   61.51
4                     37.63   44.54   49.5    54.37   61.6
5                     37.18   44.47   49.17   54.24   61.51
6                     37.14   44.18   49.35   54.09   61.26
7                     37.3    44.47   49.26   54.2    61.22
8                     37.15   44.36   49.34   54.13   61.28
9                     37.54   44.42   49.43   54.31   61.41
10                    36.97   44.36   49.39   54.14   61.26
11                    37.48   44.44   49.15   54.14   61.12
12                    37.38   44.65   49.41   54.48   61.64
Reference             37.65   44.8    50.46   55.04   62.54

Table.4.7. Mean errors (in %) at different temperatures between the 12 thermocouples, compared one by one.
Number  1       2       3       4       5       6       7       8       9       10      11      12
1       0       0.47    0.62    0.58    0.14    0.08    0.3     0.2     0.52    0.81    0.32    0.59
2       0.47    0       0.15    0.12    0.33    0.55    0.3     0.42    0.1     0.43    0.3     1.58
3       0.62    0.15    0       2.62    0.48    0.7     0.32    0.42    0.06    0.43    0.26    0
4       0.58    0.12    2.62    0       0.44    0.66    0.28    0.38    0.06    0.05    0.26    0.45
5       0.14    0.33    0.48    0.44    0       0.22    0.16    0.06    0.38    0.05    0.19    0.45
6       0.08    0.55    0.7     0.66    0.22    0       0.38    0.28    0.6     0.11    0.4     0.67
7       0.3     0.17    0.32    0.28    0.16    0.38    0       0.1     0.22    0.11    0.02    0.29
8       0.2     0.27    0.42    0.38    0.06    0.28    0.1     0.1     0.32    0.01    0.19    0.07
9       0.52    0.05    0.1     0.06    0.38    0.6     0.22    0.32    0       0.33    0.19    0.07
10      0.81    0.28    0.43    0.39    0.05    0.27    0.11    0.01    0.33    0       0.13    0.4
11      0.32    0.14    0.3     0.26    0.19    0.4     0.02    0.12    0.19    0.13    0       0.26
12      0.59    0.12    1.58    0       0.45    0.67    0.29    0.39    0.07    0.4     0.26    0
(all values in %)

Table.4.8. Basic operating conditions for the voltage measurements.
Stoichiometry air: 1.5; stoichiometry hydrogen: 3; number of cells: 1-2; surface area: 62 cm²; T max: 65 °C.

The current density distributions for the different current profiles are given in Table 4.9.
Table.4.9. Current density distributions for the different current profiles at the inlet, middle and outlet of the cell (Newton-Raphson calculation).
Load current   Position   Left     Middle   Right
5 A            inlet      0.485    0.448    0.492
               middle     0.612    0.497    0.756
               outlet     0.61     0.617    0.482
10 A           inlet      1.077    1.042    1.033
               middle     1.106    1.071    1.163
               outlet     1.177    1.275    1.056
15 A           inlet      1.732    1.642    1.646
               middle     1.664    1.645    1.825
               outlet     1.807    1.324    1.715

Table.4.10. Calculation of the impedances along the x, y and xy axes.
5 A:
  x axis: R11 = 0.00133 Ω, R23 = 0.00151 Ω, R45 = 0.00360 Ω, R56 = 0.0025 Ω, R78 = 4.21E-05 Ω, R89 = 0.00394 Ω
  y axis: R14 = 0.00384 Ω, R25 = 0.00154 Ω, R36 = 0.00262 Ω, R47 = 8.46E-05 Ω, R58 = 0.00353 Ω, R69 = 0.00298 Ω
  xy axis: R15 = 0.000223 Ω, R24 = 0.00514 Ω, R26 = 0.004125 Ω, R35 = 3.92E-05 Ω, R48 = 4.21E-05 Ω, R57 = 0.00351 Ω, R59 = 0.00039 Ω, R68 = 0.00097 Ω
10 A:
  x axis: R11 = 0.00097 Ω, R23 = 0.000189 Ω, R45 = 0.00093 Ω, R56 = 0.00208 Ω, R78 = 0.001656 Ω, R89 = 0.00383 Ω
  y axis: R14 = 0.000485 Ω, R25 = 0.00051 Ω, R36 = 0.00243 Ω, R47 = 0.00127 Ω, R58 = 0.00383 Ω, R69 = 0.00206 Ω
  xy axis: R15 = 0.00045 Ω, R24 = 0.001452 Ω, R26 = 0.00259 Ω, R35 = 0.00032 Ω, R48 = 0.0029 Ω, R57 = 0.00219 Ω, R59 = 2.77E-05 Ω, R68 = 0.00178 Ω
15 A:
  x axis: R11 = 0.00161 Ω, R23 = 0.00050 Ω, R45 = 0.00051 Ω, R56 = 0.00300 Ω, R78 = 0.0020 Ω, R89 = 0.00316 Ω
  y axis: R14 = 0.00106 Ω, R25 = 3.22E-05 Ω, R36 = 0.00255 Ω, R47 = 0.00195 Ω, R58 = 0.00448 Ω, R69 = 0.00165 Ω
  xy axis: R15 = 0.00158 Ω, R24 = 0.00055 Ω, R26 = 0.00303 Ω, R35 = 0.00047 Ω, R48 = 0.00398 Ω, R57 = 0.00246 Ω, R59 = 0.00135 Ω, R68 = 0.00151 Ω

Table.4.11. Resistance calculations of the two cells.
5 A: cell one: R14 = 0.0015 Ω, R47 = 0.0017 Ω; cell two: R14 = 5.0816e-4 Ω, R47 = 2.029e-4 Ω; between cells one and two: R1211 = 6.05e-4 Ω, R1244 = 2.4e-4 Ω, R1277 = 0.0018 Ω.
10 A: cell one: R14 = 0.0013 Ω, R47 = 0.0015 Ω; cell two: R14 = 8.07e-4 Ω, R47 = 3.686e-4 Ω; between cells one and two: R1211 = 4.207e-4 Ω, R1244 = 9.559e-5 Ω, R1277 = 0.002 Ω.
15 A: cell one: R14 = 0.0014 Ω, R47 = 0.0014 Ω; cell two: R14 = 0.0017 Ω, R47 = 7.4351e-4 Ω; between cells one and two: R1211 = 6.04e-4 Ω, R1244 = 2.64e-4 Ω.

Table.4.12. Technical characteristics of the PEMFC.
Number of cells, Ncell: 40; stack weight, mstack: 2.2 kg; stack area: 21317480 (mm); anode volume: 4500 mm²; cathode volume: 6800 mm²; membrane thickness, tm: 18 µm; active area, A: 61.48 cm².

Table.4.14. Temperature and voltage obtained by the experimental test on the Ballard Nexa stack.
Table.4.15. Resistance calculations of the stack FC.

Table of contents of Chapter V
1. General diagnosis strategy of FCEV drive trains

Table.5.1. Global strategy of supervision and diagnosis of the power train in the FCEV.

The harmonics of the output signal of the PEMFC are extracted with the Fast Fourier Transform. This algorithm can be used for on-line failure detection because its computation time is about a hundred times shorter than that of other algorithms. A Fast Fourier Transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) and its inverse. Fourier analysis converts time (or space) to frequency and vice versa; an FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors. In other words, the FFT is a faster version of the Discrete Fourier Transform (DFT).

Table.5.2. Classification of fault and normal modes in the fuel cell.
Table.5.3. Classification of flooding and drying.
Table.5.6. Local current density distributions at the different nodes for the various faults, 5 A current profile.
Table.5.7. Local current density distributions at the different nodes for the various faults, 10 A current profile.
Table.5.8. Local current density distributions at the different nodes for the various faults, 15 A current profile.

Chapter V: Diagnosis of the PEMFC within the FCEV powertrain

Figure.3.5. Transverse view (x, y axis) of the anode side with 9 nodes and 20 different resistances.
Figure.3.13. Perspective view of the 3D proposed model for the PEMFC stack.
Figure.3.14. Top view of the 3D proposed model for the PEMFC stack.

Acknowledgments
I am very grateful to my PhD committee for the interest they showed in my work: Prof. BEN AMMAR Faouzi and Dr. Mélika HINAJE for their review of the manuscript and the suggestions they made to improve it, and Prof. Bacha SEDDIK, Prof. Daniel HISSEL and Dr. Rachid OUTBIB, examiners of the PhD, for their participation and their interesting questions during my thesis defense.

The next step of this work is to develop a diagnosis algorithm based on the developed ANN model within the general context of FCEV drivetrain supervision and diagnosis, but also of the management of the degraded modes. The modelling process has to be continued to enhance the knowledge of the different PEMFC technologies and the accuracy of the 3D models. This can be achieved by investigating further the modelling of single cells and stacks, notably by performing calibrations in faulty operating conditions. The aim is to propose diagnosis and control strategies in both healthy and degraded modes in order to improve the lifetime of the FC system and the reliability of FCEV drivetrains.

However, not all of the current is collected by the current collector layers: some current passes in the X and Y directions of the cell (and in the Z direction in the stack), as shown in Table 3.6. These losses are simulated by connection resistors in the different directions. Their values can be computed as the difference of the voltages of two adjacent nodes divided by the average value of the current densities of these nodes. For example, the connection resistors in the cross section (X, Y) can be calculated from these quantities, as illustrated by the sketch below, where V1, V2, V4 and V5 are the node voltages recorded experimentally and x1, x2, x4 and x5 are the node current densities calculated by the Newton-Raphson method.
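Since the exact expression is not reproduced in this extract, the following sketch only illustrates the stated idea — the voltage difference between adjacent nodes divided by a mean current obtained from their current densities. The node voltages are hypothetical, the current densities are taken from Table 4.9 (15 A, inlet) for illustration, the active area comes from Table 4.12, and the per-node area split is an assumption.

```python
# Illustrative connection-resistor estimate between two adjacent nodes.
node_area = 61.48 / 9          # cm^2 per node, assuming the active area is split over 9 nodes
V1, V2 = 0.655, 0.648          # V, hypothetical voltages of two adjacent nodes
x1, x2 = 1.732, 1.642          # A/cm^2, current densities from the Newton-Raphson step

I_mean = 0.5 * (x1 + x2) * node_area   # A, mean current flowing between the nodes
R_conn = (V1 - V2) / I_mean            # ohm, connection resistor between the nodes
print(f"R_conn = {R_conn:.5f} ohm")
```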
Chapter IV. Experimental Validation of the 3D Model in Healthy Mode

Nexa stack set-up
At this stage of the research work, a second validation set-up has been carried out. The FC used is the commercial Ballard Nexa stack fuel cell rated at 1.2 kW with 47 cells. The Nexa stack is fed with pure hydrogen and with low-pressure compressed air supplied by a compressor (blower). The anode channels operate in "dead-end" mode and, as for the Nexa stacks, the hydrogen at the anode inlet is not humidified. The entire stack is cooled by a forced air flux in the cooling channels. Table 4.13 summarizes the main configurations and operating conditions of the Nexa stack fuel cell [4.3].

Measuring equipment
During the experimental tests, the Nexa stack's integrated control board takes most of the data measurements. These measurements include the air temperature at the inlet, the stack current, the stack output voltage and so on. However, the Nexa stack's control board does not measure the temperature and voltage of the individual cells; in order to get this information, some complementary instruments were added.
299,303
[ "1183932" ]
[ "227671" ]
01492936
en
[ "spi" ]
2024/03/04 23:41:50
2015
https://hal.science/hal-01492936/file/Submission.pdf
Vikram Bhattacharjee email: vikramju65@gmail.com Debanjan Chatterjee Permual Raman A shield based thermoelectric converter system with a thermosyphonic heat sink for utilization in wood-stoves Keywords: Thermoelectric Power Generator, thermosyphonic heat sink, shield, wood-stoves, conversion efficiency The Thermoelectric Power Generators (TEG) are solid state devices which utilize temperature gradients to produce electrical energy. In domestic wood-stoves, these devices have carved out a niche for themselves and can be used for generation of electricity in rural areas. This paper presents the design of a shield based thermoelectric power generation system consisting of a thermosyphonic heat sink, for utilization in wood stoves. The average current density of the TEG module improved by 28.3% and 22.3% when compared to the conventional plate-fin heat sink based converter system and a simple single loop thermosyphonic heat sink based converter system respectively. The converter system achieved a maximum power output of 3.2 Watts along with a maximum conversion efficiency of 5.05 % which was higher than the conventional heat sink based module systems in wood burning stoves. An optimal shield thickness of 6 cm reduced the steady state hot side temperature below the permissible limit and an optimal coolant velocity of 8 m/sec ensured efficient removal of heat from the cold side of the generator. Introduction According to WHO around 3 billion people are utilizing simple biomass as a source of fuel for domestic cooking at present [START_REF]WHO Report on biomass consumption[END_REF]. Rural areas, where wood is the main source, domestic wood-fired stoves are being heavily used. In addition to the climatic conditions the rural homes also suffer from uneven distribution of reliable electrical power supply from the grids. As a solution to these problems researchers have investigated the concept of modelling and reconstructing these systems with integration of converter systems utilizing thermoelectric generators for power generation purposes [START_REF] Nuwayhid | Low cost stove-top thermoelectric generator for regions with unreliable electricity supply[END_REF][START_REF] O'shaughnessy | Small scale electricity generation from a portable biomass cookstove: Prototype design and preliminary results[END_REF][START_REF] Lertsatitthanakorn | Electrical performance analysis and economic evaluation of combined biomass cook stove thermoelectric (BITE) generator[END_REF][START_REF] Jiang | Experimental study of a plat-flame micro combustor burning DME for thermoelectric power generation[END_REF][START_REF] Ma | Waste heat recovery using a thermoelectric power generation system in a biomass gasifier[END_REF][START_REF] Lertsatitthanakorn | Study of Combined Rice Husk Gasifier Thermoelectric Generator[END_REF][START_REF] Nuwayhid | Design and testing of a locally made loop-type thermosyphonic heat sink for stove-top thermoelectric generators[END_REF][START_REF] Nuwayhid | Development and Testing of a Domestic Woodstove Thermoelectric Generator with Natural Convection cooling[END_REF][START_REF] Raman | Development, design and performance analysis of a forced draft clean combustion cookstove powered by a thermo electric generator with multi-utility options[END_REF][START_REF] Killander | A stove-top generator for cold areas[END_REF]. Nuwayhid et al. 
[START_REF] Nuwayhid | Low cost stove-top thermoelectric generator for regions with unreliable electricity supply[END_REF] studied the performance characteristics of a low cost stove top thermoelectric power generator where the evaluation led to the design of Peltier modules to produce maximum power for different utilities. In [START_REF] O'shaughnessy | Small scale electricity generation from a portable biomass cookstove: Prototype design and preliminary results[END_REF] a small scale electricity generation system was achieved using biomass cook stoves. The prototype produced a total power of 5.9 W and the electricity was utilized to power a 3.3 V Lithium Ion battery. Lertsatitthanakorn [START_REF] Lertsatitthanakorn | Electrical performance analysis and economic evaluation of combined biomass cook stove thermoelectric (BITE) generator[END_REF] designed a biomass cook-stove combined with a TEG which gave a net power output of 2.4 Watts. A conversion efficiency of 3.2% enabled the system to light up a low power incandescent bulb . Jiang et al. utilized a TEG system in a plat-flame micro combustor burning dimethyl ether and giving an output power of 2 Watts with a conversion efficiency of 1.25 % .The system sustained a stable premixed flame and achieved a low wall temperature thereby reducing heat loss from the combustion system [START_REF] Jiang | Experimental study of a plat-flame micro combustor burning DME for thermoelectric power generation[END_REF]. In [START_REF] Ma | Waste heat recovery using a thermoelectric power generation system in a biomass gasifier[END_REF] a Bi2Te3 based TEG system consisting of 8 modules was used in a biomass gasifier for improved waste heat recovery, giving a maximum power output of 6.1 Watts. A rice husk gasifier coupled with a TEG system on the gasifier wall was tested in [START_REF] Lertsatitthanakorn | Study of Combined Rice Husk Gasifier Thermoelectric Generator[END_REF] where at a temperature difference of 60 °C the output power of the system was 3.9 W along with a conversion efficiency of 2.01%. In [START_REF] Nuwayhid | Design and testing of a locally made loop-type thermosyphonic heat sink for stove-top thermoelectric generators[END_REF] a TEG powered wood-stove was designed where the cold side was coupled to a loop-type thermosyphonic heat sink using water as a coolant. The system generated a total output power of 3 W making the system commercially viable for low power applications. A domestic wood stove fitted to a TEG unit working under natural convection produced a power output of 4.2 W. It was deduced that the use of multiple modules with a single heat sink reduces the power output when compared to that of a single module due to reduced temperature difference between the hot and the cold sides of the unit [START_REF] Nuwayhid | Development and Testing of a Domestic Woodstove Thermoelectric Generator with Natural Convection cooling[END_REF]. In [START_REF] Raman | Development, design and performance analysis of a forced draft clean combustion cookstove powered by a thermo electric generator with multi-utility options[END_REF] a performance evaluation was carried out to study a forced draft clean combustion cook-stove where the power output of the TEG was 4.5 Watts with a temperature difference of 240 ℃. Killander et al. [START_REF] Killander | A stove-top generator for cold areas[END_REF] designed a cook stove consisting of two Hi-ZHZ modules whose cold side was maintained by a cooling fan. 
A DC-DC converter was used to step up the output voltage of the TEG and the stove produced a net power output of 10 Watts. Based on the literature review it can be deduced that the performance of the thermoelectric generators in wood stoves is mainly dependent on the following factors like the temperature difference between the hot and the cold sides of the TEG and the design of the heat sink.However, the conventional heat sink based TEG-wood stoves suffer from reduced conversion efficiencies due to reduced temperature differences between the hot and cold sides as a result of increased hot side temperatures above the recommended limit for a generator and inefficient heat dissipation through the fins from its cold side. Hence the objective of this study is to present the design of a new shield based thermoelectric converter system coupled with a single loop thermosyphonic heat sink design for utilization in wood stoves where the additional conductive resistance of the shield would prevent the overheating and damage of the module by maintaining the hot side temperature within the permissible limit and the high specific heat intake of the water in the thermosyphonic heat sink would ensure efficient heat removal from its cold side. The research methodology and the design optimization strategy have been presented in this paper. Nomenclature TEG Thermoelectric Power Generator T cold Cold side temperature (K) T hot Hot side temperature (K) Thermoelectricity Background Thermoelectric Effect was first discovered by Seebeck [START_REF] Riffat | Thermoelectrics:A review of present and potential applications[END_REF] in the year 1822. The "Seebeck Effect" principle states that when a temperature difference is maintained across the junctions of two dissimilar metals, a voltage is generated. Thermoelectric Modules, also called the Thermoelectric Power Generators are a combination of a pair of n and p type semiconductors which are combined electrically in series and thermally in parallel and are alternately arranged to ensure unidirectional career transport .The negative loading of the n type elements and the positive loading of the p type elements finally constitute the electrical power output from the system. The whole assembly is supported by two ceramic plates for mechanical support. Having a high thermal conductivity, ceramic allows efficient heat transfer from the hot to the cold side thereby ensuring a high conversion efficiency of the module. Module Parameters The principle parameters that determine the performance of a thermoelectric power generator are the net output power, the maximum conversion efficiency and the hot and cold side temperatures of the TEG unit. The maximum conversion efficiency, theoretical maximum power output, the voltage and the output current can be determined on basis of the contact resistances and are given by ( 1) and ( 2) respectively [START_REF] Lertsatitthanakorn | Electrical performance analysis and economic evaluation of combined biomass cook stove thermoelectric (BITE) generator[END_REF]. 
\eta_{max} = \frac{(T_{hot}-T_{cold})/T_{hot}}{\left(1+\frac{2rL_c}{L}\right)^2\left[2 - 0.5\,\frac{T_{hot}-T_{cold}}{T_{hot}} + \frac{4}{Z\,T_{hot}}\cdot\frac{L+n}{L+2rL_c}\right]} \quad (1)

P = \frac{\alpha^2 N A (T_{hot}-T_{cold})^2}{2\rho\,(L+n)\left(1+\frac{2rL_c}{L}\right)^2} \quad (2)

Typically, the values of L_c, n and r are constants for a module; they depend on the Bi-Te material and the temperature difference used, and were taken from [START_REF] Lertsatitthanakorn | Electrical performance analysis and economic evaluation of combined biomass cook stove thermoelectric (BITE) generator[END_REF]. Here L_c = 0.8 mm, α = 2.1226 * 10^-4 V K^-1, n = 0.1 mm, r = 0.2, L = 1.2 mm, ρ = 2.07 * 10^-3 Ω cm and Z = 2.75 * 10^-3 K^-1.
3. Experimental Setup
Converter System Design
The thermoelectric converter system, consisting of a single module, was designed such that the hot side of the TEG unit is not directly exposed to the incoming heat energy from the source. Instead, it was attached to a 15 cm cylindrical copper rod which is in direct contact with the heat source. A shield is placed between the rod and the hot side of the TEG. The shield adds an additional conductive resistance to the system and hence lowers the hot side temperature below the permissible value, preventing damage to the module due to sudden bursts of heat energy from the source. The cold side was attached to a single-loop thermosyphonic system with water as the coolant. Cold water at 13 °C flowed from the reservoir, whose volume was kept constant at 2 litres from an external water supply. The cooling system was a stainless steel box of dimensions 10 cm x 10 cm x 5 cm. The TEG was supported in a small socket on the surface of the coolant chamber, and the rod and shield assembly was held by magnetic sockets, as shown in Figure 1. The chamber had two openings on one pair of its opposite faces. Both openings were provided with valves and pipes for the passage and control of the coolant flow velocity. In this study a Bi2Te3 TEG module of dimensions 30 mm x 30 mm x 3.3 mm was selected. The maximum hot side and cold side temperatures of the module were 300 °C and 30 °C respectively. The values of the thermal conductivity, the Seebeck coefficient and the electrical conductivity of the material were taken from [START_REF]Thermoelectric Engineering Handbook[END_REF].
Stove Geometry
The chamber had a square opening (4 cm x 4 cm) at the bottom for the entry of air into the system. Wood pieces (of dimensions 1.5 inch x 1.75 inch) were used for ignition inside the combustion chamber. Initially, a total of 250 g of wood chips occupied one third of the chamber volume. The chamber was operated in batch mode, and wood was added whenever the temperature dropped. The cylindrical rod of the converter system was inserted into the chamber through a hole (1 cm I.D.). The length of the rod inside the chamber was 10 cm. The temperature was measured with three standard temperature sensors attached to the display. Air was forced into the chamber from beneath, through a narrow opening, via a 5 V blower to ensure efficient combustion. A similar 5 V fan was attached as a load to the TEG, and its RPM was measured and controlled throughout the experiment. The experimental setup is shown in Figure 2.
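For a rough numerical illustration of Eqs. (1)-(2), the following Python sketch evaluates the module power output and conversion efficiency with the constants listed above. The equation forms follow the Rowe-Min-style contact-corrected expressions as reconstructed above, and the thermoelement cross-section A and the number of thermocouples N are assumed values chosen for illustration, not figures taken from the paper.

```python
# Sketch: module power and efficiency from Eqs. (1)-(2) as reconstructed above.
# Geometric constants in metres; resistivity converted from ohm.cm to ohm.m.
L_c, n, r, L = 0.8e-3, 0.1e-3, 0.2, 1.2e-3
alpha, rho, Z = 2.1226e-4, 2.07e-3 * 1e-2, 2.75e-3
A = (1.4e-3) ** 2      # assumed thermoelement cross-section (m^2), illustrative
N = 127                # assumed number of thermocouples, illustrative

def teg_performance(T_hot, T_cold):
    dT = T_hot - T_cold
    contact = 1.0 + 2.0 * r * L_c / L
    # Eq. (2): matched-load power output of the module
    P = alpha**2 * N * A * dT**2 / (2.0 * rho * (L + n) * contact**2)
    # Eq. (1): conversion efficiency with electrical/thermal contact corrections
    eta = (dT / T_hot) / (contact**2 * (2.0 - 0.5 * dT / T_hot
           + (4.0 / (Z * T_hot)) * (L + n) / (L + 2.0 * r * L_c)))
    return P, eta

P, eta = teg_performance(T_hot=542.0, T_cold=292.0)  # ~250 K difference reported
print(f"P = {P:.2f} W, efficiency = {100 * eta:.2f} %")
```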
Conventional Heat Sink Designs for performance assessment
The performance of the converter was compared with that of two conventional heat sink designs. In the first design, the coolant chamber was replaced with an aluminium rectangular plate-fin heat sink having a fixed number of fins. The second design was a simple single-loop thermosyphonic system with no shield between the hot side and the stove wall. The dimensions of the coolant chamber in the simple single-loop thermosyphonic system were the same as in the proposed design. The material properties for the stove geometry and the converter system designs were taken from [START_REF]Thermal conductivity of metals[END_REF], and the instruments used during the experiment are tabulated below along with their respective specifications, in Table 1 and Table 2 respectively.
Guiding Equations
In order to estimate its analytical performance, a mathematical analysis of the converter system was carried out by defining the flow and energy equations with appropriate boundary conditions for its different components. Inside the stove, the flow of the inlet air was modelled using the Reynolds-Averaged Navier-Stokes equations [START_REF]RANS model[END_REF] together with the k-ε turbulence model [START_REF]The k-∈ turbulent model Available from[END_REF]. The conjugate heat transfer equations (3)-(9) include viscous effects and the effect on the temperature profile of the flue gas of the heat generation from the heat source inside the chamber:

\rho\frac{\partial \mathbf{u}}{\partial t} + \rho(\mathbf{u}\cdot\nabla)\mathbf{u} = \nabla\cdot\Big[-p\mathbf{I} + (\mu+\mu_t)\big(\nabla\mathbf{u} + (\nabla\mathbf{u})^T\big) - \tfrac{2}{3}(\mu+\mu_t)(\nabla\cdot\mathbf{u})\mathbf{I} - \tfrac{2}{3}\rho k\,\mathbf{I}\Big] + \mathbf{F} \quad (3)
\rho\frac{\partial k}{\partial t} + \rho(\mathbf{u}\cdot\nabla)k = \nabla\cdot\Big[\big(\mu + \tfrac{\mu_t}{\sigma_k}\big)\nabla k\Big] + P_k - \rho\epsilon \quad (4)
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0 \quad (5)
\rho\frac{\partial \epsilon}{\partial t} + \rho(\mathbf{u}\cdot\nabla)\epsilon = \nabla\cdot\Big[\big(\mu + \tfrac{\mu_t}{\sigma_\epsilon}\big)\nabla\epsilon\Big] + \frac{\epsilon}{k}\big(1.44\,P_k - 1.92\,\rho\epsilon\big) \quad (6)
\mu_t = 0.09\,\rho\frac{k^2}{\epsilon} \quad (7)
P_k = \mu_t\Big[\nabla\mathbf{u}:\big(\nabla\mathbf{u} + (\nabla\mathbf{u})^T\big) - \tfrac{2}{3}(\nabla\cdot\mathbf{u})^2\Big] - \tfrac{2}{3}\rho k\,\nabla\cdot\mathbf{u} \quad (8)
\rho_{gas} C_p\Big(\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T\Big) = \nabla\cdot(k_{gas}\nabla T) + q_{gen} \quad (9)

where μ_t represents the undamped kinematic viscosity, k represents the turbulent kinetic energy and ε is the turbulent dissipation rate. The terms k_gas and ρ_gas represent the thermal conductivity and the density of the fluid respectively. q_gen is the heat generation term, which has been modelled as a non-exhaustive heat source dependent on the source temperature and the production coefficient. Radiative heat transfer between the ambient and a flame was previously modelled in [START_REF] Keramida | Radiative heat transfer in natural gas-fired furnaces[END_REF], which considered the radiative transfer equation (Eq. (10)) for a gray medium incorporating the effects of scattering, absorption and emission. Following [START_REF] Killander | A stove-top generator for cold areas[END_REF], the heat generation q_gen inside the volume is a function of the average intensity H(r, s) of the scattered radiation, where

\mathbf{s}\cdot\nabla H(r,s) = -\beta_{extinction}\,H(r,s) + q_{gen} \quad (10)
q_{gen} = k_{absorption}\,\sigma T_{source}^4 + \frac{k_{scattering}}{4\pi}\int_{4\pi} H(r,s)\,d\Omega \quad (11)

with k_absorption and k_scattering describing absorption and scattering respectively; β_extinction represents the overall extinction coefficient and is expressed as the sum of the scattering and the absorption coefficients.
The governing equations which determine the performance parameters of the TEG depend on its current density and on the heat transfer through the material. The thermal conductivity, the specific heat capacity and the density of the material of construction of the TEG determine its performance and were therefore taken into account in the analysis.
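As a small illustration of the turbulence closure in Eqs. (6)-(8), the sketch below evaluates the turbulent viscosity and the production term P_k from a given velocity-gradient tensor. The field values are invented for the example, and treating the flow as nearly incompressible (∇·u ≈ 0) is an assumption of the sketch, not a statement from the paper.

```python
import numpy as np

C_MU = 0.09  # standard k-epsilon model constant, as in Eq. (7)

def turbulent_viscosity(rho, k, eps):
    """mu_t = 0.09 * rho * k^2 / eps (Eq. 7)."""
    return C_MU * rho * k**2 / eps

def production_term(mu_t, grad_u, rho, k):
    """P_k from Eq. (8); the divergence term is kept for completeness."""
    strain = grad_u + grad_u.T
    div_u = np.trace(grad_u)
    return (mu_t * (np.tensordot(grad_u, strain) - (2.0 / 3.0) * div_u**2)
            - (2.0 / 3.0) * rho * k * div_u)

# Illustrative local values for hot flue gas (not measured data).
rho, k, eps = 0.45, 1.8, 40.0                    # kg/m^3, m^2/s^2, m^2/s^3
grad_u = np.array([[0.5, 2.0, 0.0],              # velocity-gradient tensor (1/s)
                   [0.1, -0.5, 0.3],
                   [0.0, 0.2, 0.0]])
mu_t = turbulent_viscosity(rho, k, eps)
print(f"mu_t = {mu_t:.4f} Pa.s, "
      f"P_k = {production_term(mu_t, grad_u, rho, k):.3f} W/m^3")
```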
The governing equations of the TEG at unsteady state take a three-dimensional form which can be written from energy balance and current conservation [START_REF] Jang | Optimal design for micro-thermoelectric generators using finite element analysis[END_REF]. They are elucidated below:

\rho_{TEG} C_{p,TEG}\frac{\partial T_{TEG}}{\partial t} = -\nabla\cdot\vec{q} + \dot{q} \quad (17)
\nabla\cdot\vec{J} = 0 \quad (18)

where \vec{q}, \dot{q} and \vec{J} represent the heat flux, the heat generation and the current density respectively. The heat flux is related to the current density and to the electric field intensity vector by equations (19) and (20) respectively:

\vec{q} = \alpha_{TEG} T_{TEG}\,\vec{J} - k_{TEG}\nabla T_{TEG} \quad (19)
\vec{J} = \frac{1}{\rho_{TEG}}\big(\vec{E} - \alpha_{TEG}\nabla T_{TEG}\big) \quad (20)

where \vec{E} = -\nabla\Omega, with Ω the scalar electric potential, and with ρ_TEG and k_TEG being the electrical resistivity and the thermal conductivity of the material of construction of the TEG respectively. Substituting (19) into (17) gives the governing equation used for the determination of the temperature profiles and the scalar potential in each of the three phases of the experiment:

\rho_{TEG} C_{p,TEG}\frac{\partial T_{TEG}}{\partial t} = \nabla\cdot(k_{TEG}\nabla T_{TEG}) - \nabla\cdot(\alpha_{TEG} T_{TEG}\,\vec{J}) + \dot{q} \quad (21)

The heat generation corresponds to the power loss due to Joule heating, and therefore the final equation giving the temperature profile of the TEG is (22):

\rho_{TEG} C_{p,TEG}\frac{\partial T_{TEG}}{\partial t} = \nabla\cdot(k_{TEG}\nabla T_{TEG}) - \nabla\cdot(\alpha_{TEG} T_{TEG}\,\vec{J}) + \rho_{TEG}\,\vec{J}\cdot\vec{J} \quad (22)

where the specific heat capacity of the thermoelectric material varies with temperature according to Equation (23) [START_REF] Landolt | Landolt-Börnstein numerical data and functional relationships in science and technology[END_REF]. The proposed system reduced the maximum hot side temperature of the TEG to 542 K, below the permissible limit of 573 K, whereas the conventional plate-fin and the simple thermosyphonic heat sink system with no shield recorded maximum hot side temperatures of 584 K and 575 K respectively. The proposed system recorded a maximum temperature difference of 250 K, which is higher than that of the conventional plate-fin heat sink and the simple thermosyphonic systems, which recorded maximum temperature differences of 195 K and 228 K respectively. From the figure it can be inferred that the energy dissipation rate of the fluid reaches a maximum value of 844 m^2 s^-3 in regions near the wall to which the rod is attached. Therefore, the magnitude of the heat flux travelling through the rod and ultimately falling on the hot side of the TEG through the shield varies directly with the length of the part of the rod inserted inside the geometry. However, increasing the length of the inserted portion increases the proximity of the TEG hot side to the wall of the stove and leads to overheating of the device. Thus, to avoid overheating and to allow optimum module performance, the length of the inserted portion was chosen based on optimized rates of turbulent energy dissipation and the maximum hot side temperature of the module. Figure 4.
shows the variation of the turbulent dissipation rate and maximum hot side temperature with increasing length of the converter.It is evident from the figure that at a length of 8 cm the average turbulent dissipation energy is high and the maximum hot side temperature is below the allowable limit of 573 K.Hence the said geometric length was chosen and kept constant during the experiment. Radiative Heat Flux shield and the flow of the heat flux is mainly along the directions which offer lower resistance due to conduction.The conductive heat flux flowing normal to the faces excluding those parallel to the walls of the TEG is thus manifested as radiation loss into the ambient. In the figure the hot side temperature of the TEG corresponding to the conductive heat flux falling on the hot side is above the maximum allowable limit up to a shield thickness of 3 cm but gradually decreases as the shield thickness increases and the radiation loss increases. However since the chief mode of heat transfer from the shield to the TEG is in the form of conduction, increasing the additional conductive resistance drastically will reduce the power output of the generator. Hence the shield thickness should be based on the optimized rates of conductive and radiative heat transfer to simultaneously prevent module overheating and ensure efficient module performance. shows the variation of the maximum output power of the TEG with shield thickness at various coolant flow velocities. In the figure the power output initially increases with increasing thickness and after reaching a maximum the power output starts decreasing with further increase in the shield thickness.The power output is minimum in the absence of shield due to overheating of the TEG hot side.As the shield thickness increases the efficiency of the module increases due to increased temperature difference between its two sides and finally starts decreasing as the conductive flux entering the module decreases with an increase in the conductive resistance of the shield.When the thickness is constant the maximum output power also increases with an increase in the coolant flow velocity till it reaches a value of 9 m/sec. Figure 6. shows that the convective heat flux removed from the cold side of the TEG becomes constant beyond a magnitude of 9 m/sec and hence the maximum power output of the TEG remains constant at 3.2 Watts as the coolant flow rate is increased further. A shield thickness of 6 cm was chosen taking into consideration the material cost and the optimized heat transfer rates in order to achieve successful prevention of overheating of the side exposed to incoming heat flux .and a coolant flow velocity of 8 m/sec was considered in the design for ensuring optimal performance of the TEG unit. 1), Figure 8. describes the variation of the conversion efficiencies for the three systems which shows that due to higher temperature differences, the module in the proposed system, having a maximum efficiency of 5.1% at a temperature difference of 250 K, reached a higher maximum conversion efficiency of 5.05 % when compared to the other two systems which recorded efficiencies upto a maximum of 0.75% and 3.5% respectively. The process parameters have been tabulated below in Table 3. A𝑚 -2 and 2.55*10 4 A𝑚 -2 and a minimum of 175 A𝑚 -2 and 96.8 A𝑚 -2 in the second and the third case respectively. 
Due to higher hot side temperatures, the maximum current intensity in the conventional heat sink designs is greater than that of the proposed system, but the reduced temperature differences and low conversion efficiencies gradually minimize the current intensity over larger sections of the module. Due to the increased temperature difference between the two sides of the TEG, the average intensity increased by 28.3% when the shield-based thermosyphonic converter system was used in place of the conventional plate-fin heat sink system, and by 22.3% when the shield was added to a single open-loop thermosyphonic system.
Selection and assessment of variable parameters for optimum module performance
6.2.1 Effect of variation of the inserted length on turbulent energy dissipation
Effect of variation of the shield thickness on optimum module performance
Effect of variation of the flow rate on the boundary convective flux
Conclusion
A shield-based thermosyphonic converter system was designed for power generation in wood stoves. The additional conductive resistance of the shield reduced the hot side temperature below the maximum allowable hot-side temperature and prevented module overheating, while the thermosyphonic system helped in the efficient removal of heat energy from the cold side. The performance of the converter was studied for utilization in a wood stove consisting of a heat source and was compared to that of a conventional rectangular plate-fin heat sink based thermoelectric converter system and a simple single-loop thermosyphonic heat sink based system. It was observed that the proposed system showed an appreciable increase in the maximum conversion efficiency and an increase in the average current density of 28.3% and 22.3% respectively. The maximum power output of the system was 3.2 Watts with a maximum conversion efficiency of 5.05%, making the design viable for low power applications.
Fig. 1. Schematic of the converter system.
Fig. 2. Stove geometry.
6.1 Steady State Temperature Differences
Fig. 3. Distribution of the turbulence energy dissipation rate inside the flow field.
Fig. 5. Variation of heat flux through the shield at different shield thicknesses.
Fig. 6. Variation of convective heat flux from the TEG cold side at different coolant flow velocities.
Fig. 8. Comparison of the variation of conversion efficiency with temperature difference in two converter systems.
Fig. 9. Surface plot of the steady-state distribution of current intensity in the TEG using the three different systems.
Figure 6 shows the variation of the boundary convective flux from the cold side of the TEG at different flow velocities through the chamber. The figure shows that the convective heat flux from the cold side increases gradually when the coolant velocity is increased from 5 to 10 m/sec, but the magnitude of the boundary convective flux from the TEG cold side becomes more or less constant at a coolant flow velocity of 9-10 m/sec. A higher coolant velocity requires larger reservoir heights and incurs material costs for piping. Therefore, based on the availability of water storage space and the optimization of material costs, the height of the reservoir should be judiciously chosen to achieve effective heat removal at optimum flow rates.
Table 1. Instrument specifications
Instruments used      Measurement   Maker      Resolution   Unit   Accuracy
Digital thermometer   Temperature   CIE305     0.10         °C     0.10
Multimeter            V/A           MecoV      0.01         V      ±0.05
Multimeter            V/A           MecoV      0.01         A      ±1.10
Tachometer            Speed         Techmark   1            RPM    ±0.05
Digital balance       Weight        Sunshine   1.00         g      auto calibration (of fuel)
Table 2. Parameters of the stove [23]
Components        Material of construction   Thermal conductivity (W m-1 K-1)   Notation
Stove             Stainless steel            16.300                             k_stove
Coolant box       Aluminium                  204.300                            --
Cylindrical rod   Copper                     385.000                            k_rod
Coolant           Water                      0.563                              k_coolant
Shield            Iron                       71.800                             k_shield
4. Mathematical Modelling
Table 3. Tabulation of the input parameters of the model
Component                                   Symbol         Value            Reference
Forced convection medium: air
Ambient temperature (K)                     T0             300
Velocity (m s-1)                            --             5
Surface-to-surface radiation emissivity     e_amb          0.80             [21]
Stefan-Boltzmann constant (W m-2 K-4)       σ              5.670373*10-8    [22]
Thermal conductivity of TEG material        k              1.20             [23]
Absorption coefficient (m-1)                k_absorption   0.50             [24]
Scattering coefficient (m-1)                k_scattering   0.01             [24]
Acknowledgements
The authors would like to acknowledge the technical staff of The Energy and Resources Institute, New Delhi, India, for conducting the study.
28,872
[ "1004402", "1004403" ]
[ "367774", "489692" ]
01492955
en
[ "info" ]
2024/03/04 23:41:50
2015
https://hal.science/hal-01492955/file/Meyer_HSCC15.pdf
Pierre-Jean Meyer email: pierre-jean.meyer@imag.fr Antoine Girard email: antoine.girard@imag.fr Emmanuel Witrant email: emmanuel.witrant@ujf-grenoble.fr Poster: Symbolic Control of Monotone Systems Application to Ventilation Regulation in Buildings * Keywords: I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search-Control theory, J.7 [Computer Applications]: Computers in other systems-Command and control Symbolic control, Monotone system, Application We describe an application of symbolic control to ventilation regulation in buildings. The monotonicity property of a nonlinear control system subject to disturbances, modeling the process, is exploited to obtain symbolic abstractions, in the sense of alternating simulation. The resulting abstractions consist of non-deterministic finite transition systems, for which we can synthesize supervisory safety controllers to keep the room temperatures within prescribed bounds. To choose among possible control inputs preserving safety, we consider the problem of minimizing a given cost function and apply a receding horizon control scheme. The approach has been applied to temperature regulation on a small-scale building equipped with underfloor air distribution (UFAD). To the best of our knowledge, this is the first report of experimental implementation of symbolic controllers. SYMBOLIC ABSTRACTION We consider a nonlinear control system of the form ẋ = f (x, u, w) with x ∈ R n , u ∈ R p and w ∈ R q (1) where x denotes the state, u the control input and w the disturbance input. We assume that the control and disturbance inputs are bounded in multidimensional intervals: u ∈ [u, u] * This work was partly supported by a PhD scholarship and the research project COHYBA funded by Région Rhône-Alpes. and w ∈ [w, w]. The trajectories of the system are denoted Φ(•, x0, u, w) where Φ(t, x0, u, w) is the state reached at time t ∈ R + 0 from initial state x0 ∈ R n , under piecewise continuous control and disturbance inputs u : R + 0 → R p and w : R + 0 → R q . We also assume that the system is cooperative, which is a subclass of monotone systems [START_REF] Angeli | Monotone control systems[END_REF]. Definition 1 (Cooperative system). System (1) is cooperative if for all x ≥ x , u ≥ u , w ≥ w , it holds for all t ≥ 0, Φ(t, x, u, w) ≥ Φ(t, x , u , w ), where ≥ denotes the componentwise inequality. We describe the dynamics of the sampled version of system (1) with time period τ as a non-deterministic transition system S as presented in [START_REF] Tabuada | Verification and control of hybrid systems: a symbolic approach[END_REF]. The control objective is to keep the state in an interval [x, x]. We define a symbolic abstraction of S as a finite transition system whose states are the elements of a partition of R n , P * = P ∪{Out} where P is a partition of [x, x] into intervals. The abstraction is Sa = (Xa, Xa0, Ua, -→) where the set of states Xa = P * , the set of initial states Xa0 = P, the set of inputs Ua is a discretization of [u, u], and the transition relation is given for all s = [s, s] ∈ P, s ∈ P * , u ∈ Ua by: s u -→s ⇐⇒ s ∩ [Φ(τ, s, u, w), Φ(τ, s, u, w)] = ∅. As we deal with transition systems with control inputs and non-determinism, we are interested in alternating simulation relations as behavioral relationships between S and Sa [START_REF] Tabuada | Verification and control of hybrid systems: a symbolic approach[END_REF]. The cooperativeness assumption allows us to prove the following result. Proposition 1. 
The symbolic abstraction Sa is alternatingly simulated by the original transition system S. As a consequence, if we design a safety controller for Sa keeping its state in P, the alternating simulation relation provides an equivalent safety controller for S keeping its state in [x, x]. SYMBOLIC CONTROL Using a classical fixed-point algorithm [START_REF] Wonham | On the supremal controllable sublanguage of a given language[END_REF], we can synthesize a supervisory safety controller C : P → 2^{Ua} for Sa keeping its state in P. To choose among possible control inputs preserving safety, we consider the cost function J0 defined iteratively by:
J_N(s) = \hat{g}(s), \qquad J_k(s) = \min_{u \in C(s)} \Big( g(s,u) + \lambda \max_{s \xrightarrow{u} s'} J_{k+1}(s') \Big),
where N ∈ N is the time horizon, λ ∈ (0, 1) is a discount factor, and ĝ : P → R^+ and g : P × Ua → R^+ are cost functions. Then, we apply a receding horizon control scheme given by the controller for Sa:
C_a^*(s) = \arg\min_{u \in C(s)} \Big( g(s,u) + \lambda \max_{s \xrightarrow{u} s'} J_1(s') \Big).
For the original transition system S, we define the associated controller C^* given for all s ∈ P, x ∈ s, by C^*(x) = C_a^*(s). Note that all the above computations required to obtain C^* (abstraction and controller synthesis) can be done offline. We can also prove the following result showing that C^* ensures safety of S with performance guarantees.
Proposition 2. Let (x0, u0, x1, u1, . . . ) be a trajectory of S controlled with C^*; then for all k ∈ N, x_k ∈ [x, x]. Moreover, let s0, s1, · · · ∈ P be such that for all k ∈ N, x_k ∈ s_k. Then, it holds for all k ∈ N,
\sum_{i=0}^{+\infty} \lambda^i g(s_{k+i}, u_{k+i}) \le J_0(s_k) + \frac{\lambda^{N+1}}{1-\lambda} M,
where M is an upper bound of the functions g and ĝ.
UNDERFLOOR AIR DISTRIBUTION The UnderFloor Air Distribution (UFAD) is an alternative solution to traditional ceiling based ventilation in buildings, where the air is cooled down in an underfloor plenum and then sent into each room when needed. The system considered is based on a 4-room small-scale experimental building equipped with UFAD sketched in Figure 2. A model of the temperature variations in each room is derived from the energy and mass conservation equations in the room [START_REF] Meyer | Controllability and invariance of monotone systems for robust ventilation automation in buildings[END_REF]. The obtained model is an ordinary differential equation involving the temperature of each room (the state), the ventilation from the underfloor (control input in each room) and continuous and discrete disturbances (outside temperature, door opening, . . . ). This model is proven to be cooperative [START_REF] Meyer | Controllability and invariance of monotone systems for robust ventilation automation in buildings[END_REF] and validated by an identification procedure on the building [START_REF] Meyer | Experimental implementation of UFAD regulation based on robust controlled invariance[END_REF]. The symbolic control method is applied to this model and the resulting control strategy is implemented in the 4-room experimental building. In Figure 1 are displayed the measured temperatures (dashed blue, on the left axis) and the controlled ventilation (plain green, on the right axis), discretized into 256 values. The prescribed bounds on the temperature are represented by dash-dotted horizontal lines on the figure. The symbolic abstraction was computed on a partition consisting of 10^4 intervals. The performance criterion specifies the desired tradeoff between the magnitude of the control inputs, their variations and the distance of the state to the center of the interval given by the temperature bounds, with a time horizon N = 5 and discount factor λ = 0.5.
We can see that the safety specification is met: the temperatures are maintained within the prescribed bounds despite the effect of external disturbances.
Figure 1: UFAD experiment controlled with a symbolic method.
Figure 2: 4-room flat equipped with UFAD.
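To make the receding-horizon selection over the abstraction concrete, here is a small Python sketch of the backward recursion J_N, ..., J_1 and of the controller C*_a on a finite non-deterministic transition system. The tiny two-state abstraction, its transitions, its safe-input sets and its costs are invented purely for illustration and are not data from the building model.

```python
# Sketch of the receding-horizon input selection on a finite abstraction.
# States, safe inputs C(s), non-deterministic successors and costs are toy data.
LAMBDA, N = 0.5, 5

succ = {('s0', 'u0'): ['s0'], ('s0', 'u1'): ['s0', 's1'],
        ('s1', 'u0'): ['s0', 's1'], ('s1', 'u1'): ['s1']}
C = {'s0': ['u0', 'u1'], 's1': ['u0', 'u1']}          # supervisory safety controller
g = {('s0', 'u0'): 1.0, ('s0', 'u1'): 0.2,            # stage cost g(s, u)
     ('s1', 'u0'): 0.5, ('s1', 'u1'): 1.5}
g_hat = {'s0': 0.0, 's1': 1.0}                        # terminal cost g_hat(s)

def backward_recursion():
    J = dict(g_hat)                                   # J_N = g_hat
    for _ in range(N - 1):                            # compute J_{N-1}, ..., J_1
        J = {s: min(g[s, u] + LAMBDA * max(J[sp] for sp in succ[s, u])
                    for u in C[s]) for s in C}
    return J

def receding_horizon_input(s):
    J1 = backward_recursion()
    return min(C[s], key=lambda u: g[s, u] + LAMBDA * max(J1[sp] for sp in succ[s, u]))

print({s: receding_horizon_input(s) for s in C})      # safe input chosen in each cell
```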
7,496
[ "1231646", "7451", "3856" ]
[ "398719", "1289", "388748" ]
01199160
en
[ "info" ]
2024/03/04 23:41:50
2017
https://inria.hal.science/hal-01199160/file/epm_pami.pdf
Member, IEEE, Fr éd éric Gaurav Sharma Cordelia Jurie Fellow, IEEE Schmid Frédéric Jurie Cordelia Schmid Expanded Parts Model for Semantic Description of Humans in Still Images Keywords: human analysis, attributes, actions, part-based model, mining, semantic description, image classification. ! We introduce an Expanded Parts Model (EPM) for recognizing human attributes (e.g. young, short hair, wearing suits) and actions (e.g. running, jumping) in still images. An EPM is a collection of part templates which are learnt discriminatively to explain specific scale-space regions in the images (in human centric coordinates). This is in contrast to current models which consist of a relatively few (i.e. a mixture of) 'average' templates. EPM uses only a subset of the parts to score an image and scores the image sparsely in space, i.e. it ignores redundant and random background in an image. To learn our model, we propose an algorithm which automatically mines parts and learns corresponding discriminative templates together with their respective locations from a large number of candidate parts. We validate our method on three recent challenging datasets of human attributes and actions. We obtain convincing qualitative and state-of-the-art quantitative results on the three datasets. INTRODUCTION T HE focus of this paper is on semantically describing humans in still images using attributes and actions. It is natural to describe a person with attributes, e.g. age, gender, clothes, as well as with the action the person is performing, e.g. standing, running, playing a sport. We are thus interested in predicting such attributes and actions for human centric still images. While actions are usually dynamic, many of them are recognizable from a single static image, mostly due to the presence of (i) typical poses, like in the case of running and jumping, or (ii) a combination of pose, clothes and objects, like in the case of playing tennis or swimming. With the incredibly fast growth of human centric data, e.g. on photo sharing and social networking websites or from surveillance cameras, analysis of humans in images is more important than ever. The capability to recognize human attributes and actions in still images could be used for numerous related applications, e.g. indexing and retrieving humans w.r.t. queries based on higher level semantic descriptions. Human attributes and action recognition have been addressed mainly by (i) estimation of human pose [START_REF] Yang | Recognizing human actions from still images with latent poses[END_REF], [START_REF] Yao | Modeling mutual context of object and human pose in human-object interaction activities[END_REF] or (ii) with general non-human-specific image classification methods [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF], [START_REF] Sharma | Learning discriminative representation for image classification[END_REF], [START_REF] Sharma | Discriminative spatial saliency for image classification[END_REF], [START_REF] Yao | Combining randomization and discrimination for fine-grained image categorization[END_REF]. 
State-of-the-art action recognition performance has been achieved without solving the problem of pose estimation [START_REF] Yang | Recognizing human actions from still images with latent poses[END_REF], [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF], [START_REF] Sharma | Discriminative spatial saliency for image classification[END_REF], [START_REF] Everingham | The PASCAL Visual Object Classes Challenge 2011[END_REF], which is a challenging problem in itself. Concurrently, methods have been proposed to model interactions between humans and the object(s) associated with the actions [START_REF] Yao | Modeling mutual context of object and human pose in human-object interaction activities[END_REF], [START_REF] Delaitre | Learning person-object interactions for action recognition in still images[END_REF], [START_REF] Desai | Discriminative models for static human-object interactions[END_REF], [START_REF] Gupta | Observing humanobject interactions: Using spatial and functional compatibility for recognition[END_REF], [11], [START_REF] Yao | Grouplet: A structured image representation for recognizing human and object interactions[END_REF]. In relevant cases, modelling interactions between humans and contextual objects is an interesting problem, but here we explore the broader and complementary approach of modeling appearance of humans and their immediate context for attribute and action recognition. When compared to methods exploiting human pose and humanobject interactions, modelling appearance remains useful and complementary, while it becomes indispensable in the numerous other cases where there are no associated objects (e.g. actions like running, walking) and/or the pose is not immediately relevant (e.g. attributes like long hair, wearing a tee-shirt). In this paper, we introduce a novel model for the task of semantic description of humans, the Expanded Parts Model (EPM). The input to an EPM is a human-centered image, i.e. it is assumed that the human positions in form of bounding boxes are available (e.g. from a human detection algorithm). An EPM is a collection of part templates, each of which can explain specific scale-space regions of an image. Fig. 1 illustrates learning and testing with EPM. In part based models the choice of parts is critical; it is not immediately obvious what the parts might be and, in particular, should they be the same as, or inspired by, the biologic/anatomic parts. Thus, the proposed method does not make any assumptions on what the parts might be, but instead mines the parts most relevant to the task, and jointly learns their discriminative templates, from among a large set of randomly sampled (in scale and space) candidate parts. Given a test image, EPM recognizes a certain action or attribute by scoring it with the corresponding learnt part templates. As human attributes and actions are often localized in space, e.g. shoulder regions for 'wearing a tank top', our model explains the images only partially with the most discriminative regions, as illustrated in Fig. 1 (right). During training we select sufficiently discriminative spatial evidence and do not include regions with low discriminative value or regions containing non-discriminative background. The parts in an EPM compete to explain an image, and different parts might be used for different images. This is in contrast with traditional part based discriminative models where all parts are used for every image. 
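To make the scoring behaviour just described concrete (only a subset of the model parts is used for a given image, and only discriminative regions are scored while the rest of the image is ignored), here is a small Python sketch. The box-based overlap rule, the feature layout and all names are illustrative assumptions rather than the paper's exact formulation, which is introduced formally in Sec. 3.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter + 1e-9)

def epm_score(region_features, parts, k=10, max_overlap=0.3):
    """Score an image with at most k parts, each explaining only its own region.
    region_features: dict box -> feature vector (human-centric coordinates).
    parts: list of (template w, box) pairs."""
    responses = sorted(((w @ region_features[box], box) for w, box in parts
                        if box in region_features), reverse=True)
    total, used = 0.0, []
    for r, box in responses:
        if len(used) == k:
            break
        if all(iou(box, b) <= max_overlap for b in used):   # spatial sparsity
            total += r
            used.append(box)
    return total / max(len(used), 1), used

# Tiny illustrative example with made-up boxes and 8-dimensional features.
rng = np.random.default_rng(1)
boxes = [(0, 0, 4, 4), (2, 0, 6, 4), (0, 4, 4, 8)]
feats = {b: rng.normal(size=8) for b in boxes}
model = [(rng.normal(size=8), b) for b in boxes for _ in range(3)]
print(epm_score(feats, model, k=2))
```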
EPM is inspired by models exploiting sparsity. In their seminal paper, Olshausen and Field [START_REF] Olshausen | Sparse coding with an overcomplete basis set: A strategy employed by v1?[END_REF] argued for a sparse coding with an over-complete basis set, as a possible computation model in the human visual system. Since then sparse coding has been applied to many computer vision tasks, e.g. image encoding for classification [START_REF] Yang | Linear spatial pyramid matching using sparse coding for image classification[END_REF], [START_REF] Yang | Efficient highly over-complete sparse coding using a mixture model[END_REF], image denoising [START_REF] Mairal | Online learning for matrix factorization and sparse coding[END_REF], image super-resolution [START_REF] Yang | Image superresolution via sparse representation[END_REF], face recognition [START_REF] Wright | Robust face recognition via sparse representation[END_REF] and optical flow [START_REF] Jia | Optical flow estimation using learned sparse model[END_REF]. EPM employs sparsity in two related ways; first the image scoring uses only a small subset of the model parts and second scoring happens with only partially explaining the images spatially. The former model-sparsity is inspired by the coding of information sparsely with an over-complete model, similar to Olshausen and Field's idea [START_REF] Olshausen | Sparse coding with an overcomplete basis set: A strategy employed by v1?[END_REF]. Owing to such sparsity, while the individual model part interactions are linear, the overall model becomes nonlinear [START_REF] Olshausen | Sparse coding with an overcomplete basis set: A strategy employed by v1?[END_REF]. The second spatial sparsity is a result of the simple observation that many of the attributes and actions are spatially localized, e.g. for predicting if a person is wearing a tank top, only the region around the neck and shoulders needs to be inspected, hence the model shouldn't waste capacity for explaining anything else (in the image space). To learn an EPM, we propose to use a learning algorithm based on regularized loss minimization and margin maximization (Sec. 3). The learning algorithm mines important parts for the task, and learns their discriminative templates from a large pool of candidate parts. Specifically, EPM candidate parts are initialized with O(10 5 ) randomly sampled regions from training images. The learning then proceeds in a stochastic gradient descent framework (Sec. 3.3); randomly sampled training image is scored using up to k model parts, and the model is updated accordingly (Sec. 3.2). After some passes over the data, the model is pruned by removing the parts which were never used to score any training image sampled so far. The process is repeated for a fixed number of iterations to obtain the final trained EPM. The proposed method is validated on three publicly available datasets of human attributes and actions, obtaining interesting qualitative (Sec. 4.2) and greater than or comparable to state-of-the-art quantitative results (Sec. 4.1). A preliminary version of this work was reported in Sharma et al. [START_REF] Sharma | Expanded parts model for human attribute and action recognition in still images[END_REF]. RELATED WORK We now discuss the related work on modeling, in particular models without parts, part-based structured models and part-based loosely structured models. 
Models without parts
Image classification algorithms have been shown to be successful for the task of human action recognition; see Everingham et al. [START_REF] Everingham | The PASCAL Visual Object Classes Challenge 2011[END_REF] for an overview of many such methods. Such methods generally learn a discriminative model for each class. For example, in the Spatial Pyramid method (SPM), Lazebnik et al. [START_REF] Lazebnik | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[END_REF] represent images as a concatenation of bag-of-features (BoF) histograms [START_REF] Csurka | Visual categorization with bags of keypoints[END_REF], [START_REF] Sivic | Video Google: A text retrieval approach to object matching in videos[END_REF], with pooling at multiple spatial scales over a learnt codebook of local features, like the Scale Invariant Feature Transform (SIFT) of Lowe [START_REF] Lowe | Distinctive image features form scale-invariant keypoints[END_REF]. Lazebnik et al. [START_REF] Lazebnik | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[END_REF] then learn a discriminative class model w using a margin maximizing classifier, and score an image as w^T x, with x being the image vector. The use of histograms destroys 'template'-like properties due to the loss of spatial information. Although SPM has never been viewed as a template learning method, methods using gradient-based features [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], [START_REF] Benenson | Pedestrian detection at 100 frames per second[END_REF], [START_REF] Dollár | Fast feature pyramids for object detection[END_REF], [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF] have been presented as such, e.g. the recent literature is full of visualizations of templates (class models) learnt with HOG-like [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF] features, e.g. [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF], [START_REF] Pandey | Scene recognition and weakly supervised object localization with deformable part-based models[END_REF]. Both SPM- and HOG-based methods have been applied to the task of human analysis [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF], [START_REF] Khan | Coloring action recognition in still images[END_REF], where they were found to be successful. We also formulate our model in a discriminative template learning framework. However, we differ in that we learn a collection of templates instead of a single template.
Fig. 3. Illustrations of scoring for different images, for different attributes and actions (riding horse, arms bent, female, bermuda shorts, riding bike, using computer, formal suit). Note how the model scores only the discriminative regions in the image while ignoring the non-discriminative or background regions (in black). Such spatial sparsity is particularly interesting when the discriminative information is expected to be localized in space, as in the case of many human attributes and actions.
In the recently proposed Exemplar SVM (ESVM) work, Malisiewicz et al.
[START_REF] Malisiewicz | Ensemble of Exemplar-SVMs for object detection and beyond[END_REF] propose to learn discriminative templates for each object instance of the training set independently and then combine their calibrated outputs on test images as a post-processing step. In contrast, we work at a part level and use all templates together during both training and testing. More recently, Yan et al. [START_REF] Yan | Beyond spatial pyramids: A new feature extraction framework with dense spatial sampling for image classification[END_REF] proposed a 2-level approach for image representation. Similar to our approach it involves sampling image regions, but while they vector quantize the region descriptors, we propose a mechanism to select discriminative regions and build discriminative part based models from them. Works have also been reported using features which exploit motion for recognizing and localizing human actions in videos [START_REF] Jain | Better exploiting motion for better action recognition[END_REF], [START_REF] Jain | Action localization with tubelets from motion[END_REF], [START_REF] Oneata | Efficient action localization with approximately normalized fisher vectors[END_REF], [START_REF] Wang | Action recognition with improved trajectories[END_REF], [START_REF] Laptev | Learning realistic human actions from movies[END_REF], [START_REF] Simonyan | Two-stream convolutional networks for action recognition in videos[END_REF]. Wang and Schmid [START_REF] Wang | Action recognition with improved trajectories[END_REF] use trajectories, Jain et al. use tubelets [START_REF] Jain | Action localization with tubelets from motion[END_REF] while Simonyan et al. [START_REF] Simonyan | Two-stream convolutional networks for action recognition in videos[END_REF] propose a two-stream convolutional network. Here, we are interested in human action and attribute recognition, but only from still images and hence do not have motion information. Part-based structured models Generative or discriminative part-based models (e.g. the Constellation model by Fergus et al. [START_REF] Fergus | Weakly supervised scale-invariant learning of models for visual recognition[END_REF] and the Discriminative Part-based Model (DPM) by Felzenszwalb et al. [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF]), have led to state-of-the-art results for objects that are rigid or, at least, have a simple and stable structure. In contrast humans involved in actions can have huge appearance variations due to appearance changes (e.g. clothes, hair style, accessories) as well as articulations and poses. Furthermore, their interaction with the context can be very complex. Probably because of the high complexity of tasks involving humans, DPM does not perform better than SPM for human action recognition as was shown by Delaitre et al. [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF]. Increasing the model complexity, e.g. by using a mixture of components [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF], has been shown to be beneficial for object detection 1 . Such increase in model complexity is even more apparent in similar models for finer human analysis, e.g. 
pose estimation [START_REF] Desai | Detecting actions, poses, and objects with relational phraselets[END_REF], [START_REF] Yang | Articulated pose estimation with flexible mixtures-of-parts[END_REF], [START_REF] Zhu | Face detection, pose estimation, and landmark localization in the wild[END_REF], where a relatively large number of components and parts are used. Note that components account for coarse global changes in aspect/viewpoint, e.g. full body frontal image, full-body profile image, upper body frontal image and so on, whereas parts account for the local variations of the articulations, e.g. hands up or down. Supported by a systematic empirical study, Zhu et al. [START_REF] Zhu | Do we need more training data or better models for object detection?[END_REF] recently recommended the design of carefully regularized richer (with a larger number of parts and components) models. Here, we propose a richer and higher capacity model, but less structured, the Expanded Parts Model. In mixture of components models, the training images are usually assigned to a single component (see Fig. 2 for an illustration) and thus contribute to training one of the templates only. Such clustering like property limits their capability to generate novel articulations, as sub-articulation in different components cannot be combined. Such clustering and averaging are a form of regularization and involve manually setting the number of parts and components. In comparison, the proposed EPM does not enforce similar averaging, nor does it forbid it by definition. It can have a large number of parts (up to the order of the number of training images) if found necessary despite sufficient regularization. Part-based deformable models initialize the parts either with heuristics (e.g. regions with high average energy [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF]) or use annotations [START_REF] Desai | Detecting actions, poses, and objects with relational phraselets[END_REF], while EPM systematically explores parts at a large number of locations, scales and atomicities and selects the ones best suited for the task. 1. See the results of different versions of the DPM software http://people.cs.uchicago.edu/∼rgb/latent/ which, along with other improvements, steadily increase the number of components and parts. Part-based loosely structured models EPM bears some similarity with Poselets by Bourdev et al. [START_REF] Bourdev | Describing people: Poseletbased attribute classification[END_REF], [START_REF]Describing people: A poselet-based approach to attribute classification[END_REF], [START_REF] Bourdev | Poselets: Body part detectors trained using 3D human pose annotations[END_REF], [START_REF] Maji | Action recognition from a distributed representation of pose and appearance[END_REF], which are compound parts consisting of multiple anatomical parts, highly clustered in 3D configuration space, e.g. head and shoulders together. Poselets vote independently for a hypothesis, and are shown to improve performance. However, they are trained separately from images annotated specifically in 3D. In contrast, EPM tries to mine out such parts, at the required atomicity, from given training images for a particular task. Fig. 6 (top right) shows some of the parts for the 'female' class which show some resemblance with poselets, though are not as clean. 
Methods such as Poselets and the proposed method are also conceptually comparable to the mid-level features based algorithms [START_REF] Boureau | Learning midlevel features for recognition[END_REF], [START_REF] Fathi | Action recognition by learning mid-level motion features[END_REF], [START_REF] Joo | Human attribute recognition by rich appearance dictionary[END_REF], [START_REF] Juneja | Blocks that shout: Distinctive parts for scene classification[END_REF], [START_REF] Lim | Sketch tokens: A learned mid-level representation for contour and object detection[END_REF], [START_REF] Oquab | Learning and transferring mid-level image representations using convolutional neural networks[END_REF], [START_REF] Sabzmeydani | Detecting pedestrians by learning shapelet features[END_REF], [START_REF] Singh | Unsupervised discovery of mid-level discriminative patches[END_REF], [START_REF] Sun | Learning discriminative part detectors for image classification and cosegmentation[END_REF]. While Singh et al. [START_REF] Singh | Unsupervised discovery of mid-level discriminative patches[END_REF] proposed to discover and exploit mid-level features in a supervised or semi-supervised way, with alternating between clustering and training discriminative classifiers for the clusters, Juneja et al. [START_REF] Juneja | Blocks that shout: Distinctive parts for scene classification[END_REF] proposed to learn distinctive and recurring image patches which are discriminative for classifying scene images using a seeding, expansion and selection based strategy. Lim et al. [START_REF] Lim | Sketch tokens: A learned mid-level representation for contour and object detection[END_REF] proposed to learn small sketch elements for contour and object analysis. Oquab et al. [START_REF] Oquab | Learning and transferring mid-level image representations using convolutional neural networks[END_REF] used the mid-level features learnt using CNNs to transfer information to new datasets. Boureau et al. [START_REF] Boureau | Learning midlevel features for recognition[END_REF] viewed combinations of popular coding and pooling methods as extracting mid-level features and analysed them. Sabzmeydani et al. [START_REF] Sabzmeydani | Detecting pedestrians by learning shapelet features[END_REF] proposed to learn mid level shapelets features for pedestrian detection. Yao et al. [START_REF] Yao | Action recognition by learning bases of action attributes and parts[END_REF] proposed to recognize human actions using bases of human attributes and parts, which can be seen as a kind of mid-level features. The proposed EPM explores the space of such mid-level features systematically under a discriminative framework and more distinctively uses only a subset of model parts for scoring cf. all model parts by the traditional methods. In a recent approach, Parizi et al. [START_REF] Parizi | Automatic discovery and optimization of parts for image classification[END_REF] propose to mine out parts using a 1 / 2 regularization with weights on parts. They alternate between learning the discriminative classifier on the pooled part response vector, and the weight vector on the parts. However, they differ from EPM as they used pooled response of all parts for an image while EPM considers absolute responses of the best subset of parts from among the collection of an over complete set of model parts. Many methods have also been proposed to reconstruct images using patches, e.g. 
Similarity by Composition by Boiman and Irani [START_REF] Boiman | Similarity by composition[END_REF], Implicit Shape Models by Leibe et al. [START_REF] Leibe | Robust object detection with interleaved categorization and segmentation[END_REF], Naive Bayes Nearest Neighbors (NBNN) by Boiman et al. [START_REF] Boiman | In defense of nearestneighbor based image classification[END_REF], and Collaborative Representation by Zhu et al. [START_REF] Zhu | Multi-scale patch based collaborative representation for face recognition with margin distribution optimization[END_REF]. Similarly, sparse representation has also been used for action recognition in videos [START_REF] Guha | Learning sparse representations for human action recognition[END_REF]. However, while such approaches are generative and are generally based on minimizing the reconstruction error, EPM aims to mine out good patches and learn the corresponding discriminative templates with the direct aim of achieving good classification.

Description of humans other than actions and attributes
Other forms of descriptions of humans have also been reported in the literature. E.g. pose estimation [START_REF] Andriluka | 2D human pose estimation: New benchmark and state of the art analysis[END_REF], [START_REF] Charles | Automatic and efficient human pose estimation for sign language videos[END_REF], [START_REF] Dantone | Body parts dependent joint regressors for human pose estimation in still images[END_REF], [START_REF] Fan | Combining local appearance and holistic view: Dual-source deep neural networks for human pose estimation[END_REF], [START_REF] Tompson | Joint training of a convolutional network and a graphical model for human pose estimation[END_REF], [START_REF] Toshev | DeepPose: Human pose estimation via deep neural networks[END_REF], as well as the use of pose-related methods for action [START_REF] Vemulapalli | Human action recognition by representing 3D skeletons as points in a lie group[END_REF], [START_REF] Thurau | Pose primitive based human action recognition in videos or still images[END_REF], [START_REF] Chen | Describing clothing by semantic attributes[END_REF], [START_REF] Yao | Action recognition with exemplar based 2.5D graph matching[END_REF], [START_REF] Zhang | Panda: Pose aligned networks for deep attribute modeling[END_REF] and attribute [START_REF] Chen | Describing clothing by semantic attributes[END_REF] recognition, have been studied in computer vision.
Recognizing attributes from the faces of humans [START_REF] Bourdev | Describing people: Poseletbased attribute classification[END_REF], [START_REF] Ma | Unsupervised learning of discriminative relative visual attributes[END_REF], [START_REF] Kumar | Describable visual attributes for face verification and image search[END_REF], recognizing facial expressions [START_REF] Wang | Action recognition with improved trajectories[END_REF], [START_REF] Rudovic | Coupled gaussian processes for pose-invariant facial expression recognition[END_REF], [START_REF] Sharma | Local higher-order statistics (LHS) for texture categorization and facial analysis[END_REF], [START_REF] Wan | Spontaneous facial expression recognition: A robust metric learning approach[END_REF] and estimating age from face images [START_REF] Li | Learning ordinal discriminative features for age estimation[END_REF], [START_REF] Chang | A learning framework for age rank estimation based on face images with scattering transform[END_REF], [START_REF] Geng | Automatic age estimation based on facial aging patterns[END_REF], [START_REF] Guo | A study on automatic age estimation using a large database[END_REF], [START_REF] Guo | A study on human age estimation under facial expression changes[END_REF] have also attracted fair attention. Shao et al. [START_REF] Shao | What do you do? occupation recognition in a photo via social context[END_REF] aimed to predict the occupation of humans from images, which can be seen as a high-level attribute. In the present work, we work with full human bodies, where the faces may or may not be visible and the range of poses may be unconstrained. Although some of the attributes and actions we consider here are correlated with pose, we do not attempt to solve the challenging problem of pose first and then infer the said attributes and actions. We directly model such actions and attributes from the full appearance of the human, expecting the model to make such a latent factorization, implicitly within itself, if required. In addition to the works mentioned above, we also refer the reader to Guo and Lai [START_REF] Guo | A survey on still image based human action recognition[END_REF] for a survey of the general literature on the task of human action recognition from still images.

EXPANDED PARTS MODEL APPROACH
We address the problem in a supervised classification setting. We assume that a training set of images and their corresponding binary class labels, i.e.

    T = {(x_i, y_i) | x_i ∈ I, y_i ∈ {−1, +1}, i = 1, . . . , m},   (1)

is available, where I is the space of images. We intend to learn a scoring function parametrized by the model parameters Θ,

    s_Θ : I → R,  Θ ∈ M,   (2)

where M is a class of models (details below), which takes an image and assigns to it a real-valued score reflecting the membership of the image to the class. In the following we abuse notation and use Θ to denote either the parameters of the model or the learnt model itself. We define an Expanded Parts Model (EPM) to be a collection of discriminative templates, each with an associated scale-space location. Image scoring with EPM is defined as aggregating the scores of the most discriminative image regions corresponding to a subset of the model parts. The scoring thus (i) uses a specific subset (different for different images) of the model parts and (ii) only scores the discriminative regions, instead of the whole image. We make these notions formal in the next section (Sec. 3.1).
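Before formalizing the model, a minimal sketch (hypothetical Python, not from the original work) of the supervised setting of Eqs. 1-2 may help fix the interface that the following subsections instantiate; the names used here are illustrative placeholders.

from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class LabeledImage:
    """One training example (x_i, y_i): an image and its binary class label."""
    image: np.ndarray  # H x W x 3 pixel array standing in for x_i in I
    label: int         # y_i in {-1, +1}

# The scoring function s_Theta : I -> R of Eq. 2: any callable mapping an image
# to a real-valued class-membership score; EPM is one particular instantiation.
ScoringFn = Callable[[np.ndarray], float]

def predict(score_fn: ScoringFn, images: List[np.ndarray], threshold: float = 0.0) -> List[int]:
    """Turn real-valued scores into binary class decisions by thresholding."""
    return [1 if score_fn(img) > threshold else -1 for img in images]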
Formulation as regularized loss minimization
Our model is defined as a collection of discriminative templates with associated locations, i.e.

    Θ ∈ M = {(w, ℓ) | w ∈ R^(Nd), ℓ ∈ [0, 1]^(4N)},   (3)

where N ∈ N is the number of parts, d ∈ N is the dimension of the appearance descriptor,

    w = [w_1, . . . , w_N],  w_p ∈ R^d,  p = 1, . . . , N   (4)

is the concatenation of the p = 1, . . . , N part templates and

    ℓ = [ℓ_1, . . . , ℓ_N] ∈ [0, 1]^(4N)   (5)

is the concatenation of their scale-space positions, with each ℓ_p specifying a bounding box, i.e.

    ℓ_p = [x̃_1, ỹ_1, x̃_2, ỹ_2] ∈ [0, 1]^4,  p = 1, . . . , N,   (6)

where x̃ and ỹ are fractional multiples of the width and height, respectively. We propose to learn our model with regularized loss minimization over the training set T, with the objective

    L(Θ; T) = (λ/2) ||w||_2^2 + (1/m) Σ_{i=1}^m max(0, 1 − y_i s_Θ(x_i)),   (7)

with s_Θ(·) being the scoring function (Sec. 3.2). Our objective is the same as that of linear support vector machines (SVMs) with hinge loss. The only difference is that we have replaced the linear score function, i.e.

    s_w(x) = w^T x,   (8)

with our scoring function. The free parameter λ ∈ R sets the trade-off between model regularization and loss minimization, as in the traditional SVM algorithm.

Scoring function
We define the scoring function as

    s_Θ(x) = max_α (1/||α||_0) Σ_{p=1}^N α_p w_p^T f(x, ℓ_p)   (9a)
    s.t.  ||α||_0 = k,   (9b)
          O_v(α, ℓ) ≤ β,   (9c)

where w_p ∈ R^d is the template of part p, f(x, ℓ_p) is the feature extraction function which calculates the appearance descriptor of the image x for the patch specified by ℓ_p,

    α = [α_1, . . . , α_N] ∈ {0, 1}^N   (10)

are the binary coefficients which specify whether a model part is used to score the image or not, and O_v(α, ℓ) measures the extent of overlap between the parts selected to score the image. The ℓ0 norm constraint on α enforces the use of k parts for scoring, while the second constraint encourages coverage in reconstruction by limiting high overlaps. k ∈ N and β ∈ R are free parameters of the model. Intuitively, the score function uses each model part w_p to score the corresponding region ℓ_p in the image x and then selects k parts to maximize the average score, while constraining the overlap measure between the parts to be less than a fixed threshold β.
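As a concrete companion to Eqs. 3-10, the following sketch (hypothetical, not the authors' code) holds the parameters Θ = (w, ℓ) and evaluates the regularized hinge-loss objective of Eq. 7 with the scoring function passed in as a callable, which makes the SVM-like structure of the objective explicit.

import numpy as np
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class EPM:
    """Expanded Parts Model parameters Theta = (w, l)."""
    w: np.ndarray      # (N, d) part templates, one row per part (Eq. 4)
    boxes: np.ndarray  # (N, 4) scale-space boxes [x1, y1, x2, y2] in [0, 1] (Eqs. 5-6)

def objective(model: EPM,
              train_set: List[Tuple[np.ndarray, int]],
              score_fn: Callable[[EPM, np.ndarray], float],
              lam: float = 1e-5) -> float:
    """Regularized hinge loss of Eq. 7: (lam/2)||w||^2 + mean hinge loss over T."""
    reg = 0.5 * lam * np.sum(model.w ** 2)
    losses = [max(0.0, 1.0 - y * score_fn(model, x)) for x, y in train_set]
    return reg + float(np.mean(losses))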
Our scoring function is inspired by (i) methods of image scoring with learnt discriminative templates, e.g. [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF], [START_REF] Hussain | Feature sets and dimensionality reduction for visual object detection[END_REF], and (ii) those of learnt patch-dictionary-based image reconstruction [START_REF] Mairal | Online learning for matrix factorization and sparse coding[END_REF]. We are motivated by these two principles in the following way. First, by incorporating latent variables, which effectively amount to a choice of the template(s) being used for the current image, the full scoring function can be made nonlinear (piecewise linear, to be more precise) while keeping the interaction with each template linear. This allows learning of more complex and nonlinear models, especially in an Expectation Maximization (EM) type algorithm, where algorithms to learn linear templates can be used once the latent variables are fixed, e.g. [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF], [START_REF] Hussain | Feature sets and dimensionality reduction for visual object detection[END_REF]. Second, similar to learnt patch-dictionary-based reconstruction, we want a spatially distributed representation of the image content, albeit in a discriminative sense, where image regions are treated independently instead of working with a monolithic global model. With a discriminative perspective, we would only like to score the promising regions in the images, using only a subset of the model parts, and ignore the background or non-discriminative parts. Exploiting this could be quite beneficial, especially as the discriminative information for human actions and attributes is often localized in space, i.e. for 'riding horse' only the rider and the horse are discriminative and not the background, and for 'wearing shorts' only the lower part of the (person-centric) image is important. In addition, the model could be over-complete and store information about the same part at different resolutions, which could also lead to possible over-counting, i.e. scoring the same image region multiple times with different but related model parts; not forcing the use of all model parts can help avoid this over-counting. Hence, we design the scoring function to score the images with the model parts which are most capable of explaining the possible presence of the class in the image, while (i) using only a subset of relevant parts from the set of all model parts and (ii) penalizing high overlap of the parts used, to exploit localization and avoid over-counting as discussed above. We aim, thus, to score the image content only partially (in space) and with the most important parts only. We confirm such behavior of the model with qualitative results in Sec. 4.2.

Solving the optimization problem
We propose to solve the model optimization problem using stochastic gradient descent. We use the stochastic approximation to the sub-gradient w.r.t. w given by

    ∇_w L = λ w − δ_i y_i (1/||α||_0) [α_1 f(x_i, ℓ_1); . . . ; α_N f(x_i, ℓ_N)],   (11)

where the α_p are obtained by solving Eq. 9 and

    δ_i = 1 if y_i s_Θ(x_i) < 1, and 0 otherwise.   (12)

Alg. 1 gives the pseudo-code for our learning algorithm. The algorithm proceeds by scoring (and thus calculating the α for) the current example with w fixed, and then updating w with α fixed, as in a traditional EM-like method. The scoring function is a constrained binary linear program, which is NP-hard. Continuous relaxation is a popular way of handling such optimizations, i.e. relax the α_i to be real-valued in the interval [0, 1], replace ||α||_0 with ||α||_1, solve the resulting continuous constrained linear program, and obtain the binary values by thresholding/rounding the continuous optimum. However, managing the overlap constraint with continuously selected parts would require additional thought. We instead decide to take a simpler and more direct route via an approximate greedy approach. Starting with an empty set of selected parts, we greedily add to it the best scoring part which does not overlap appreciably with any of the currently selected parts, for the current image. The overlap is measured using intersection over union [START_REF] Everingham | The PASCAL Visual Object Classes Challenge 2011[END_REF], and two parts are considered to overlap significantly with each other if their intersection over union is more than 1/3. During training we have an additional constraint on scoring, i.e. α^T J ≤ 1 (element-wise), where J ∈ {0, 1}^(N×m) with J(p, q) = 1 if the p-th part was sampled from the q-th training image and 0 otherwise, so that at most one selected part may originate from any given training image.
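The greedy selection just described can be sketched as follows (hypothetical code; it assumes the per-part responses w_p·f(x, ℓ_p) and the boxes ℓ_p have already been computed, and uses the 1/3 intersection-over-union threshold mentioned above).

import numpy as np

def iou(a, b):
    """Intersection over union of two boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def greedy_score(part_scores, boxes, k, max_overlap=1.0 / 3.0):
    """Greedy approximation of Eq. 9: average of the k best non-overlapping part scores.

    part_scores[p] = w_p . f(x, l_p) for every model part p; boxes[p] = l_p.
    Returns the image score s_Theta(x) and the indices of the selected parts (alpha).
    """
    order = np.argsort(part_scores)[::-1]   # parts by decreasing response
    selected = []
    for p in order:
        if len(selected) == k:
            break
        if all(iou(boxes[p], boxes[q]) <= max_overlap for q in selected):
            selected.append(p)
    score = float(np.mean([part_scores[p] for p in selected])) if selected else 0.0
    return score, selected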
The constraint is enforced by ignoring all the parts that were initialized from the training images of the currently selected parts. This increases the diversity of the learned parts, by discouraging similar or correlated parts (which initially emerge from the same training image) from scoring the current image. While training, we score each training image with parts from the rest of the train set, i.e. we do not use the model parts which were generated from the same training image, to avoid obvious trivial part selection. Usually, large databases are highly unbalanced, i.e. they have many more negative examples than positive examples (of the order of 50:1). To handle this we use asymmetric learning rates proportional to the number of examples of the other class (Step 4, Alg. 1).

2. [START_REF] Perronnin | Towards good practice in large-scale learning for image classification[END_REF] achieve the same effect by biased sampling from the two classes.

Mining discriminative parts
One of our main intentions is to address an important limitation of current methods: automatically selecting the task-specific discriminative parts at the appropriate scale-space locations. The search space for finding such parts is very large, as all possible regions in the training images are potential candidates to be discriminative model parts. We address part mining in two major steps. First, we resort to randomization for generating the initial pool of candidate model parts: we randomly sample part candidates from all the training images to initialize a highly redundant model. Second, we mine out the discriminative parts from this set by successive pruning. With our learning set in a stochastic paradigm, we proceed as follows. We first perform a certain number of passes over randomly shuffled training images and keep track of the parts used while updating them to learn the model (recall that not all parts are used to score an image and, hence, potentially not all parts in the model, especially when it is highly redundant initially, will be used to score all the training images). We then note that the parts which are not used by any image will only be updated due to the regularization term and will finally get very small weights. We accelerate this shrinking process, and hence the learning process, by pruning them. Such parts are expected to be either redundant or just non-discriminative background; empirically we found that to be the case. Fig. 4 shows some examples of the kind of discriminative parts, at multiple atomicities, that were retained by the model (for the 'riding a bike' class), as well as some redundant parts and background parts which were discarded by the algorithm.

Relation with latent SVM
Our Expanded Parts Model learning formulation is similar to a latent support vector machine (LSVM) formulation, which optimizes (assuming a hinge loss function)

    L(w; T) = (λ/2) ||w||_2^2 + (1/m) Σ_{i=1}^m max(0, 1 − y_i s_L(x_i)),   (13)

where the scoring function is given as

    s_L(x) = max_z w^T g(x, z),   (14)

with z being the latent variable (e.g. the part deformations in the Deformable Parts-based Model (DPM) [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF]) and g(·) the feature extraction function. The α in our score function, Eq. 9, can be seen as the latent variable (one for each image). Consequently, EPM can be seen as a latent SVM similar to the recently proposed model for object detection by Felzenszwalb et al. [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF]. In such latent SVM models the objective function is semi-convex [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF], i.e. it is convex for the negative examples. Such semi-convexity follows from the convexity of the scoring function, with similar arguments as in Felzenszwalb et al. (Sec. 4 in [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF]).
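To show how scoring, the sub-gradient step of Eq. 11, the asymmetric learning rates and the pruning interact, here is a simplified training-loop sketch; it is hypothetical code written under the assumptions stated in the comments (it omits, e.g., the same-image constraint and the exact bookkeeping of Alg. 1), not the authors' implementation.

import numpy as np

def train_epm(model, train_set, score_fn, region_feature_fn,
              lam=1e-5, eta0=1e-3, n_iters=10, n_passes=5):
    """Stochastic sub-gradient training in the spirit of Alg. 1 (simplified sketch).

    model.w: (N, d) part templates, model.boxes: (N, 4) part locations;
    train_set: list of (image, label) with label in {-1, +1};
    score_fn(model, x) -> (score, list of selected part indices), cf. Eq. 9;
    region_feature_fn(x, box) -> appearance descriptor f(x, l_p) of region box in x.
    """
    m = len(train_set)
    m_pos = sum(1 for _, y in train_set if y > 0)
    m_neg = m - m_pos
    # asymmetric rates: the rarer class gets the larger step (cf. Alg. 1)
    eta = {+1: eta0 * m_neg / m, -1: eta0 * m_pos / m}
    for it in range(n_iters):
        used = np.zeros(model.w.shape[0], dtype=bool)
        for _ in range(n_passes):
            for i in np.random.permutation(m):
                x, y = train_set[i]
                score, selected = score_fn(model, x)
                delta = 1.0 if y * score < 1.0 else 0.0   # Eq. 12
                model.w *= (1.0 - eta[y] * lam)           # shrinkage from the lambda*w term of Eq. 11
                for p in selected:
                    used[p] = True
                    model.w[p] += (eta[y] * delta * y *
                                   region_feature_fn(x, model.boxes[p]) / max(len(selected), 1))
        # prune parts that were never selected: they only shrink under the regularizer
        model.w, model.boxes = model.w[used], model.boxes[used]
        if it == n_iters // 2 - 1:                        # anneal the rate once, halfway through
            eta = {c: v / 5.0 for c, v in eta.items()}
    return model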
The scoring function is a max over functions which are all linear in w, and hence is convex in w, which in turn makes the objective function semi-convex. Optimizing while exploiting semi-convexity guarantees that the value of the objective function will either decrease or stay the same with each update. In the present case, we do not follow Felzenszwalb et al. [START_REF] Felzenszwalb | Object detection with discriminatively trained part based models[END_REF] in training, i.e. we do not exploit semi-convexity, as in practice we did not observe a significant benefit in doing so. Despite there being no theoretical guarantee of convergence, we observed that, if the learning rate is not aggressive, training as proposed leads to good convergence and performance. Fig. 5 shows a typical case demonstrating the convergence of our algorithm: it gives the value of the objective function, the evolution of the model in terms of the number of parts, and the performance of the system vs. iterations (Step 4, Alg. 1), for the 'interacting with a computer' class of the Willow Actions dataset.

Appearance features and visualization of scoring
As discussed previously, HOG features are not well adapted to human action recognition. We therefore resort, in our approach, to using appearance features, i.e. bag-of-features (BoF), for EPM. When we use such an appearance representation, the so-obtained discriminative models (similar to [START_REF] Lazebnik | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[END_REF]) cannot be called templates (cf. HOG-based templates [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF]). Thus, in the following, we use the word template to loosely denote the similar concept in the appearance descriptor space. Note, however, that the proposed method is feature-agnostic and can potentially be used with any arbitrary appearance descriptor, e.g. BoF [START_REF] Csurka | Visual categorization with bags of keypoints[END_REF], [START_REF] Sivic | Video Google: A text retrieval approach to object matching in videos[END_REF], HOG [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], GIST [START_REF] Oliva | Modeling the shape of the scene: A holistic representation of the spatial envelope[END_REF], CNN [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF] etc. Since we initialize our parts with the appearance descriptors (like BoF) of patches from training images (see Sec. 4 for details), we can use the initial patches to visualize the scoring, instead of the final learnt templates as in the HOG case. This is clearly a loose association, as the initial patches evolve over the training iterations to give the part templates w_p. However, we hope that the appearance of the initial patch suffices as a proxy for visualizing the part. We found such an approximate strategy to give reasonable visualizations, e.g. Fig. 3 shows some visualizations of scoring for different classes. While the averaging is not very good, the visualizations do give an approximate indication of which kinds of image regions are scored and by which kinds of parts. We discuss these further in the qualitative results, Sec. 4.2.
Efficient computation using integral histograms
Since we work with a large number of initial model parts, e.g. O(10^5), the implementation of how such parts are used to score the images becomes an important algorithmic design aspect. In the naive approach, scoring would require computing features for the N local regions corresponding to each of the model parts. Since N can be very large for the initial over-complete models, this is intractable. To circumvent this we use integral histograms [START_REF] Porikli | Integral histogram: A fast way to extract histograms in cartesian spaces[END_REF], i.e. a 3D data structure where we keep integral images corresponding to each dimension of the appearance feature. The concept was initially introduced by Crow [START_REF] Crow | Summed-area tables for texture mapping[END_REF] as summed area tables for texture mapping. It has had many successful applications in computer vision as well [START_REF] Viola | Robust real-time object detection[END_REF], [START_REF] Bay | SURF: Speeded up robust features[END_REF], [START_REF] Veksler | Fast variable window for stereo correspondence using integral images[END_REF], [START_REF] Adam | Robust fragments-based tracking using the integral histogram[END_REF]. We divide the images with an axis-aligned regular grid containing rectangular non-overlapping cells. Denote the locations of the lattice points of the grid by X_g = {x_1^g, . . . , x_s^g}, Y_g = {y_1^g, . . . , y_t^g}, where x^g, y^g ∈ [0, 1] are fractional multiples of the width and height, respectively. We compute the BoF histograms for the image regions from (0, 0) to each of the lattice points (x_i, y_j), i.e. we compute a feature tensor F_x ∈ R^(s×t×d) for each image x, where the d-dimensional vector F_x(i, j, :) is the corresponding un-normalized BoF vector. When we randomly sample candidate parts to initialize the model (details in Sec. 4), we align the parts to the grid, i.e. ℓ_p = [x̃_1, ỹ_1, x̃_2, ỹ_2] s.t. x̃_1 = x_i^g, ỹ_1 = y_j^g, x̃_2 = x_k^g, ỹ_2 = y_l^g, for some i, k ∈ {1, . . . , s} and j, l ∈ {1, . . . , t}. Hence, to score an image with a part we can efficiently compute the feature for the corresponding location as

    f(x, ℓ_p) = F_x(x_k^g, y_l^g, :) + F_x(x_i^g, y_j^g, :) − F_x(x_i^g, y_l^g, :) − F_x(x_k^g, y_j^g, :).

f(x, ℓ_p) is then normalized appropriately before computing the score by a dot product with w_p. In this way we do not need to compute the features from scratch, for all regions corresponding to the model parts, every time an image needs to be scored. Also, we only need to cache a fixed amount of data, i.e. the tensor F_x for every image x.
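A compact sketch of this bookkeeping (hypothetical helper names, assuming grid-aligned part boxes as described above; the square-root feature map used later is omitted and replaced by a plain l2 normalization) is given below: the cumulative tensor is built once per image, after which any grid-aligned region descriptor costs four lookups.

import numpy as np

def build_integral_histogram(cell_histograms):
    """cell_histograms: (s, t, d) per-cell BoF counts on the regular grid.
    Returns F of shape (s + 1, t + 1, d) with F[i, j, :] = sum over cells [0, i) x [0, j)."""
    s, t, d = cell_histograms.shape
    F = np.zeros((s + 1, t + 1, d))
    F[1:, 1:, :] = cell_histograms.cumsum(axis=0).cumsum(axis=1)
    return F

def region_feature(F, i1, j1, i2, j2):
    """Un-normalized BoF descriptor of the grid-aligned box [i1, i2) x [j1, j2),
    recovered with the four-lookup identity used in the text, then l2-normalized."""
    f = F[i2, j2, :] + F[i1, j1, :] - F[i1, j2, :] - F[i2, j1, :]
    return f / max(np.linalg.norm(f), 1e-12)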
EXPERIMENTAL RESULTS
We now present the empirical results of the different experiments we carried out to validate and analyze the proposed method. We first give the statistics of the datasets, then give the implementation details of our approach as well as our baseline, and finally proceed to present and discuss our results on the three datasets.
The datasets. We validate and empirically analyze our method on three challenging publicly available datasets: 1) Willow 7 Human Actions, a dataset of 7 human action classes, with train and validation sets; the performance is reported on the test set. 2) Human Attributes (HAT), a dataset of 27 human attribute classes. 3) Stanford 40 Human Actions [START_REF] Yao | Action recognition by learning bases of action attributes and parts[END_REF] (http://vision.stanford.edu/Datasets/40actions.html), a dataset of 40 diverse daily human actions, e.g. brushing teeth, cleaning the floor, reading books, throwing a frisbee. It has 180 to 300 images per class, with a total of 9352 images. We used the suggested train and test split provided by the authors on the website, with 100 images per class for training and the rest for testing. All images are human-centered, i.e. the human is assumed to be correctly detected by a previous stage of the pipeline. On all three datasets, the performance is evaluated with the average precision (AP) for each class and the mean average precision (mAP) over all classes.
BoF features and baseline. Like previous work [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF], [START_REF] Sharma | Discriminative spatial saliency for image classification[END_REF], [START_REF] Yao | Combining randomization and discrimination for fine-grained image categorization[END_REF], we densely sample grayscale SIFT features at multiple scales. We use a fixed step size of 4 pixels and square patch sizes ranging from 8 to 40 pixels. We learn a vocabulary of size 1000 using k-means and assign the SIFT features to the nearest codebook vector (hard assignment). We use the VLFeat library [START_REF] Vedaldi | VLFeat: An open and portable library of computer vision algorithms[END_REF] for SIFT and k-means computation. As a baseline we use a four-level spatial pyramid with C = {c × c | c = 1, 2, 3, 4} cells [START_REF] Lazebnik | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[END_REF]. To add non-linearity we use an explicit feature map [START_REF] Vedaldi | Efficient additive kernels using explicit feature maps[END_REF] with the BoF features. We use the map corresponding to the Bhattacharyya kernel, i.e. we take dimension-wise square roots of our ℓ1-normalized BoF histograms, obtaining ℓ2-normalized vectors, which we use with the baseline as well as with our algorithm. The baseline results are obtained with the liblinear [START_REF] Fan | LIBLINEAR: A library for large linear classification[END_REF] library.
Context. The immediate context around the person, which might partially contain an associated object (e.g. the horse in riding horse) and/or correlated background (e.g. grass in running), has been shown to be beneficial for the task [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF], [START_REF] Sharma | Discriminative spatial saliency for image classification[END_REF]. To include immediate context we expand the human bounding boxes by 50% in both width and height. The context from the full image has also been shown to be important [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF]. To use it with our method, we add the scores of a classifier trained on full images to the scores of our method. The full-image classifier uses a 4-level SPM with an exponential χ2 kernel.
Initialization and regularization constant. In the initialization we intend to generate a large number of part candidates, which are subsequently refined by pruning. To achieve this, we randomly sample the positive training images for patch positions, i.e. {ℓ_p}, and initialize our model parts as

    w_p = [2 f(x, ℓ_p); −1],  p = 1, . . . , N,   (16)

where x denotes a BoF histogram. Throughout our method, we append a 1 at the end of all our BoF features to account for the bias term (cf. SVM, e.g. [START_REF] Perronnin | Towards good practice in large-scale learning for image classification[END_REF]). This leads to a score of 1 when a perfect match occurs,

    w_p^T [f(x, ℓ_p); 1] = [2 f(x, ℓ_p); −1]^T [f(x, ℓ_p); 1] = 1,   (17)

and a score of −1 in the opposite case, as the appearance features are ℓ2-normalized.
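A small self-contained check of this initialization (hypothetical code, assuming the [2 f(x, ℓ_p); −1] template layout and the square-root feature map of the Bhattacharyya kernel described above) is:

import numpy as np

def bhattacharyya_map(hist):
    """Explicit feature map for the Bhattacharyya kernel: dimension-wise square root
    of an l1-normalized BoF histogram, which yields an l2-normalized vector."""
    h = hist / max(hist.sum(), 1e-12)
    return np.sqrt(h)

def init_part_template(f):
    """Initialize a part template from the descriptor f of a sampled patch (Eq. 16)."""
    return np.concatenate([2.0 * f, [-1.0]])

def part_score(w_p, f):
    """Score of a region descriptor f against template w_p, with the appended bias 1 (Eq. 17)."""
    return float(w_p @ np.concatenate([f, [1.0]]))

# sanity check of the +1 / -1 property:
f = bhattacharyya_map(np.random.rand(1000))
w = init_part_template(f)
assert abs(part_score(w, f) - 1.0) < 1e-9   # a perfect match scores exactly 1
g = np.zeros_like(f); g[0] = 1.0            # a nearly orthogonal descriptor
print(part_score(w, g))                      # scores close to -1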
For the learning rate, we follow recent work [START_REF] Perronnin | Towards good practice in large-scale learning for image classification[END_REF] and fix a learning rate which we reduce once, for annealing, by a factor of 5 halfway through the iterations (Step 15, Algorithm 1). We also follow [START_REF] Perronnin | Towards good practice in large-scale learning for image classification[END_REF] and fix the regularization constant to λ = 10^−5.
Deep CNN features. Recently, deep Convolutional Neural Networks (CNNs) have been very successful, e.g. for image classification [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF], [START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF] and object detection [START_REF] Szegedy | Going deeper with convolutions[END_REF], [START_REF] Sermanet | Pedestrian detection with unsupervised multi-stage feature learning[END_REF], [START_REF] Girshick | Rich feature hierarchies for accurate object detection and semantic segmentation[END_REF], and they have been applied to human action recognition in videos [START_REF] Ji | 3D convolutional neural networks for human action recognition[END_REF]. Following such works, we also evaluated the performance of recent, highly successful deep Convolutional Neural Network architectures for image classification [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF], [START_REF] Simonyan | Very deep convolutional networks for large-scale image recognition[END_REF]. Such networks are trained on large external image classification datasets such as the Imagenet dataset [START_REF] Deng | Imagenet: A large-scale hierarchical image database[END_REF] and have been shown to be successful on a large variety of computer vision tasks [START_REF] Razavian | Cnn features off-the-shelf: an astounding baseline for recognition[END_REF]. We used the publicly available matconvnet library [START_REF] Vedaldi | Matconvnet -convolutional neural networks for matlab[END_REF] and the models, pre-trained on the Imagenet dataset, corresponding to the network architectures proposed by Krizhevsky et al. [START_REF] Krizhevsky | Imagenet classification with deep convolutional neural networks[END_REF] (denoted AlexNet) and by Simonyan and Zisserman [100] (16-layer network; denoted VGG-16).

Quantitative results
Tab. 1 shows the results of the proposed Expanded Parts Model (EPM), with and without context, along with our implementation of the baseline Spatial Pyramid [START_REF] Lazebnik | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[END_REF] (SPM) and some competing methods using similar features, on the Willow 7 Actions dataset. We achieve a mAP of 66%, which goes up to 67.6% when adding the full-image context. We perform better than the current state-of-the-art method [5] (with similar features) on this dataset on five out of seven classes and on average. As demonstrated by [START_REF] Delaitre | Recognizing human actions in still images: A study of bag-of-features and part-based representations[END_REF], full-image context plays an important role on this dataset. It is interesting to note that even without context, we achieve a 3.5% absolute improvement compared to a method which models person-object interactions [START_REF] Delaitre | Learning person-object interactions for action recognition in still images[END_REF] and uses extra data to train detectors.
The second-to-last column of Tab. 2 (upper part) shows our results, with bag-of-features based representations, along with the results of the baseline SPM and other methods, on Stanford 40 Actions. EPM performs better than the baseline by 5.8% (absolute), at 40.7% mAP. It also performs better than Object Bank [START_REF] Li | Object bank: A high-level image representation for scene classification and semantic feature sparsification[END_REF] and Locality-constrained Linear Coding [START_REF] Wang | Localityconstrained linear coding for image classification[END_REF] (as reported in [START_REF] Yao | Action recognition by learning bases of action attributes and parts[END_REF]) by 8.2% and 5.5%, respectively. With context, EPM achieves 42.2% mAP, which is the state-of-the-art result using no external training data and grayscale features only. Yao et al. [START_REF] Yao | Action recognition by learning bases of action attributes and parts[END_REF] reported a higher performance on this dataset (45.7%) by performing action recognition using bases of attributes, objects and poses. To derive their bases they use pre-trained systems for 81 objects, 45 attributes and 150 poselets, using a large amount (comparable to the size of the dataset) of external data. Since they also use human-based attributes, arguably EPM can be used to improve their generic classifiers and improve performance further, i.e. EPM is complementary to theirs. Khan et al. [START_REF] Khan | Coloring action recognition in still images[END_REF] also report a higher (51.9%) performance on the dataset by fusing multiple features, particularly those based on color, while here we have used only grayscale information. The last column of Tab. 2 (upper part) shows our results, as well as others, with bag-of-features based representations, on the dataset of Human Attributes. Our baseline SPM is already higher than the results reported by the dataset creators [START_REF] Sharma | Learning discriminative representation for image classification[END_REF], because we use denser SIFT and more scales. EPM improves over the baseline by 3.2% (absolute) and improves further by 1% when adding the full-image context. EPM (alone, without context) outperforms the baseline on 24 out of the 27 attributes. Among the different human attributes, those based on pose (e.g. standing, arms bent, running/walking) are found to be easier than those based on the appearance of clothes (e.g. short skirt, bermuda shorts). The range of performance obtained with EPM is quite wide, from 24% for crouching to 98% for standing. Tab. 2 (bottom part) shows the results of the CNN features, on the person bounding box and on the whole image, as well as their combinations with EPM (averaging of the scores of the combined methods), on the two larger datasets, i.e. Stanford 40 Actions and Human Attributes. Several interesting observations can be made from Tab. 2.
As deep features are not additive, unlike bag-of-features histograms (the feature of two image regions together is not the sum of the features of each region separately), we cannot use the integral-histogram-based efficient implementation with the deep features, and computing and caching features for all candidate parts is prohibitive. Hence, we cannot use the deep features out-of-the-box with our method. Tailoring EPM for use with deep architectures is an interesting extension, but is out of the scope of the present work.

Qualitative results
We present qualitative results to illustrate the scoring. Fig. 3 shows some examples, i.e. composite images created by averaging the part patches with non-zero alphas. We can observe that the method focuses on the relevant parts, such as the torso and arms for 'bent arms', shorts and tee-shirts for 'wearing bermuda shorts', and even the computer (bottom left) for 'using computer'. Interestingly, we observe that for both the 'riding horse' and 'riding bike' classes, the person gets ignored while the hair and helmet are partially used for scoring. We explain this with the discriminative nature of the learnt models: as people in similar poses might confuse the two classes, the models ignore them and focus on other, more discriminative aspects.

The parts mined by the model
Fig. 6 shows the distribution of the ℓ2 norm of the learnt part templates, along with the top scoring patches for selected parts with norms across the spectrum, for three classes. The first image in each row is the patch with which the part was initialized and the remaining ones are its top scoring patches. The top scoring patches give an idea of what kind of appearances the learnt templates w_p capture. We observe that, across datasets, while most of the parts seem interpretable, e.g. face, head, arms, horse saddle, legs, there are a few parts that seem to correspond to random background (e.g. row 1 for 'climbing'). This is in line with a recent study [START_REF] Zhu | Do we need more training data or better models for object detection?[END_REF]: in 'mixture of templates'-like formulations, there are clean interpretable templates along with noisy templates which correspond to background. We also observe that the distribution of the ℓ2 norm of the parts is heavy tailed. Some parts are very frequent, and the system tries to tune them to give high scores for positive vectors and low scores for negative vectors, hence giving them a high overall energy. There are also parts which have smaller norms, either because they are consistent in appearance (like the head and partial shoulders on clean backgrounds in row 4 of 'female' in Fig. 6, or the leg/arm in the last row of 'climbing') or because they occur in few images. They are discriminative nonetheless. Fig. 8 (left and middle) shows the relation between the performances and the number of model parts for the different classes of the larger Stanford Actions and Human Attributes datasets, and Fig. 8 (right) gives the number of training images vs. the number of model parts for the different classes of the Human Attributes dataset (such a curve is not plotted for the Stanford Actions dataset as it has the same number of training images for each class). We observe that the model sizes and the performances of the classes are correlated. On the Stanford Actions dataset, which has the same number of training images for every class, class models with a higher number of parts obtain, on average, higher performance (correlation coefficient between the number of parts and the performance of 0.47).
This is somewhat counter-intuitive, as we would expect a model with a larger number of parts, and hence a larger number of parameters and higher capacity, to over-fit compared to one with a smaller number of parts, given the same amount of training data in both cases. However, this can be explained as follows. For classes with large amounts of variation that are well captured by the train set, the model admits a larger number of parts to explain the variations and then successfully generalizes to the test set. For classes where the train set captures only a limited amount of variation, the model fits the train set with a smaller number of parts but is then unable to generalize well to a test set with different variations. An intuitive feeling for such variations can be had by noting the classes which are relatively well predicted, e.g. 'climbing', 'riding a horse', 'holding an umbrella', vs. those that are not so well predicted, e.g. 'texting message', 'waving hands', 'drinking': while the former classes are expected to have more visual coherence, the latter are expected to be relatively more visually varied. A similar correlation of the number of model parts with performance (Fig. 8 middle) is also observed for the Human Attributes dataset (albeit weaker, with a correlation coefficient of 0.23). Since the Human Attributes dataset has a different number of images for different classes, it also allows us to make the following interesting observation. The performances on the Human Attributes dataset are highly correlated with the number of training images (correlation coefficient 0.79), which is explained simply by the fact that the classes with a higher number of images have a higher chance performance, and the classifiers are accordingly better in absolute performance. However, the relationship between the number of training images and the number of model parts is close to exponential (correlation coefficient between the log of the number of training images and the number of model parts: 0.65). This is interesting as it is in line with the heavy-tailed nature of visual information: as the number of images increases, the model expands quickly at first to capture the visual variability, but as the training data increases further, the model only expands when it encounters rarer visual information, and hence the growth slows down. The three clear outliers, where an increase in training images does not lead to an increase in model size (after a limit), are 'upperbody', 'standing' and 'arms bent'; these classes are also the best-performing classes. They have a relatively high number of training images but still do not need many model parts, as they are limited in their (discriminative) visual variations.

Effect of parameters
There are two important parameters in the proposed algorithm: first, the number of parts used to score the images, k, and second, the number of candidate parts sampled to initialize the model, n (per training image). To investigate the behavior of the method w.r.t. these two parameters, we ran experiments on the validation set of the Willow Actions dataset. Fig. 7 shows the performances and the model sizes (number of parts in the final models) when varying these two parameters in the range {20, 50, 100, 150, 200}. We observe that the average number of model parts increases rapidly as k is increased (Fig. 7 middle-top).
This is expected to a certain extent, as the pruning of the model parts depends on k: if k is large, then a larger number of parts are used per image while training, hence more parts are used on average and consequently survive pruning. However, the increase in model size is not accompanied by a similarly aggressive increase in validation performance (Fig. 7 left-top). The average number of model parts for k = 100 and n = 200 is 549. The corresponding increase in model size with n varies for different values of k; for a lower value, say k = 20, the increase in model size with n is subtle compared to that for a higher value, say k = 200. Again, such an increase in model size does not bring an increase in validation performance either. It is also interesting to note the behavior of the models of the different classes when varying k and n. The bar graphs on the right of Fig. 7 show the number of model parts when n is fixed to 200 and k is varied (top), and when k is fixed to 100 and n is varied (bottom). In general, as k was increased with n fixed, the models of almost all classes grew in number of parts, while when k was fixed and more candidate parts were made available, the models first grew and then saturated. The only exception was the 'playing music' class, where the models practically saturated in both cases, perhaps because of limited appearance variations. The growth of the models with increasing k was accompanied by a slight drop in performance, probably due to over-fitting. Following these experiments, and also to keep a reasonable computational complexity, k was fixed to k = 100 for the reported experiments. This is also comparable to the 85 cells of the four-level spatial pyramid representation used as a baseline. Similarly, n was fixed to n = 200 for the Willow Actions dataset and to n = 20 for the about 10× larger Stanford Actions and Human Attributes datasets (recall that n is the number of initial candidate parts sampled per training image).

Training/testing times
Training is significantly slower than for a standard SPM/SVM baseline, i.e. by around two orders of magnitude. This is due to the fact that there is an SVM-equivalent cost (with a larger number of vectors) at each iteration. Testing is also somewhat slower than for an SPM, as it is based on a dot product between longer vectors. For example, on the Stanford dataset, testing is 5 times slower than SPM, at about 35 milliseconds per image (excluding feature extraction).

CONCLUSION
We have presented a new Expanded Parts Model (EPM) for human analysis. The model learns a collection of discriminative templates which can appear at specific scale-space positions. It scores a new image by sparsely explaining only the discriminative regions in the image, using only a subset of the model parts. We proposed a stochastic sub-gradient based learning method which is efficient and scalable: in the largest of our experiments we mine models of O(10^3) parts from initial candidate sets of O(10^5) parts. We validated our method on three challenging publicly available datasets for human attributes and actions. We also showed the complementary nature of the proposed method to the current state-of-the-art features based on deep Convolutional Neural Networks.
Apart from obtaining good quantitative results, we analysed the nature of the parts obtained, and also analysed the growth of the model size with the complexity of the visual task as well as with the amount of training data available.

Fig. 1. Illustration of the proposed method. During training (left) discriminative templates are learnt from a large pool of randomly sampled part candidates. During testing (right), the most relevant parts are used to score the test image.

Fig. 2. Illustration of a two-component model vs. the proposed Expanded Parts Model. In a component-based model (left) each training image contributes to the training of a single model and, thus, its parts only score similar images. In contrast, the proposed EPM automatically mines discriminative parts from all images and uses all parts during testing. Also, while component-based models can only score images with typical training variations reliably, in the proposed EPM sub-articulations can be combined to score untypical variations not seen during training.

Algorithm 1 SGD for learning Expanded Parts Model (EPM)
1: Input: Training set T = {(x_i, y_i)}_{i=1}^m; denote by m+ (m−) the number of positive (negative) examples
2: Returns: Learned Expanded Parts Model, Θ = (w, ℓ)
3: Initialize: Θ = (w, ℓ), rate (η0), number of parts for scoring (k) and regularization constant (λ)
4: for iter = 1, . . . , 10 do
5:   η+1 ← η0 × m−/m and η−1 ← η0 × m+/m
6:   for npass = 1, . . . , 5 do
7:     S ← rand_shuffle(T)
8:     for all (x_i, y_i) ∈ S do
9:       Solve Eq. 9 to get s_Θ(x_i) and α
10:      δ_i ← binarize(y_i s_Θ(x_i) < 1)
11:      w ← w(1 − η_{y_i} λ) + δ_i y_i η_{y_i} (1/||α||_0) [α_1 f(x_i, ℓ_1); . . . ; α_N f(x_i, ℓ_N)]
12:    end for
13:    parts_image_map ← note_image_parts(Θ, S)
14:  end for
15:  Θ ← prune_parts(Θ, parts_image_map)
16:  if iter = 5 then η ← η/5 end if
17: end for

Fig. 4. Example patches illustrating pruning for the 'riding a bike' class. While discriminative patches (top) at multiple atomicities are retained by the system, redundant or non-discriminative patches (middle) and random background patches (bottom) are discarded. The patches have been resized and contrast adjusted for better visualization.

Fig. 5. The evolution of (left) the objective value and (middle) the number of model parts, along with (right) the average precision, vs. the number of iterations, for the validation set of the 'interacting with a computer' class of the Willow Actions dataset, demonstrating the convergence of our algorithm.

Fig. 6. Distribution of the norm of the part templates (top left) and some example 'parts' (remaining three panels). Each row illustrates one part: the first image is the patch used to initialize the part and the remaining images are its top scoring patches. We show, for each class, parts with different norms (color coded) of the corresponding w_p vectors, higher (lower) norm parts at the top (bottom). (See Sec. 4.3 for a discussion; best viewed in color.)
Fig. 7. Experiments evaluating the impact of the number of parts and the number of initial candidate parts on the performance of the proposed model, on the validation set of the Willow Actions dataset (see Tab. 1 for the full class names). The first row shows the performances and the number of model parts for different values of k, i.e. the maximum number of model parts used to score a test image, while the second row shows those for varying n, i.e. the number of initial part candidates sampled per training image.

Fig. 8. The average precision obtained by the models for (left) Stanford Actions and (middle) the HAT dataset, and (right) the number of training images (for HAT; the number of training images for the Stanford Actions dataset is the same for all classes), vs. the number of parts in the final trained models of the different classes (see Sec. 4.3 for discussion).

TABLE 1
Performances (mAP) on the Willow Actions dataset
Class            [28]   [8]    [5]    [21]   EPM    EPM+C
intr. w/ comp.   30.2   56.6   59.7   59.7   60.8   64.5
photographing    28.1   37.5   42.6   42.7   40.5   40.9
playing music    56.3   72.0   74.6   69.8   71.6   75.0
riding bike      68.7   90.4   87.8   89.8   90.7   91.0
riding horse     60.1   75.0   84.2   83.3   87.8   87.6
running          52.0   59.7   56.1   47.0   54.2   55.0
walking          56.0   57.6   56.5   53.3   56.2   59.2
mean             50.2   64.1   65.9   63.7   66.0   67.6

TABLE 2
Performances (mAP) of EPM and deep Convolutional Neural Networks on the Stanford 40 Actions and the Human Attributes datasets
Method                     Image region     Stan40   HAT
Discr. Spatial Repr. [4]                    -        53.8
Appearance dict. [50]      bounding box     -        59.3
SPM (baseline) [START_REF] Lazebnik | Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories[END_REF]                     34.9     55.5
Object bank [START_REF] Li | Object bank: A high-level image representation for scene classification and semantic feature sparsification[END_REF]   full image       32.5     -
LLC coding [START_REF] Wang | Localityconstrained linear coding for image classification[END_REF]    bb + full img

ACKNOWLEDGEMENTS
This work was partly realized as part of the Quaero Programme, funded by OSEO, French State agency for innovation, by the ANR (grant reference ANR-2010-CORD-103-06) and by the ERC advanced grant ALLEGRO.